
Chapter 2

Fundamentals of compressed sensing (CS)

Compressed sensing or compressive sampling (CS) is a simple and efficient signal acquisition technique that collects a few measurements about the signal of interest and later uses optimization techniques to reconstruct the original signal from what appears to be an incomplete set of measurements [56]. Accordingly, CS can be seen as a technique for sensing and compressing data simultaneously (hence the name). The CS technique relies on two fundamental principles: 1) sparse representation of the signal of interest in some basis, called the representation basis; and 2) incoherence between the sensing matrix and the representation basis. The terms sensing, sparsity, and incoherence will be defined in the next section.

The objective of this chapter is to answer the following questions:

1. Is it possible to design a sensing matrix with far fewer rows than columns to capture the main information in the measured signal?

2. What is the sufficient number of measurements (rows of the sensing matrix) such that the original signal can be reconstructed with high probability?

3. What are the available techniques that can solve the inverse problem, i.e., reconstructing the original signal from the few measurements?


List of symbols

Φ      Sensing matrix.
Ψ      Dictionary matrix.
N      Null space.
δk     The isometry constant.
x      The vector of measurements, of length m.
s      Sparse solution vector of length n.
s∗     The true sparse vector.
ŝ      An estimate of s∗.
s∗l    The vector s∗ with all but the l largest entries set to zero.
sk     An estimate of s∗ at the kth iteration.
Wk     A diagonal weighting matrix at the kth iteration.
m      The number of measurements (length of x).
n      Length of s.
k      The diversity of s∗.
| · |  Cardinality of a set.


    2.1 Problem formulation

In this section we present the mathematical formulation of the compressed sensing problem. The following subsections present formal definitions of the terminology that will be used in this chapter.


    2.1.1 Compressed sensing

In this subsection we present a formal description of "compressed sensing". Sensing of the time-domain signal y(t) is defined as the process of collecting some measurements about y(t) by correlating y(t) with some sensing waveforms {φj(t)}, i.e.,

xj = ⟨y, φj⟩, j = 1, 2, . . . , m. (2.1)

Depending on the sensing waveforms, the entries of the vector x have different interpretations. For example, if the sensing waveforms are sinusoids, then x is a vector of Fourier coefficients, and if the sensing waveforms are Dirac delta functions, then x is a vector of sampled values of y(t).

To simplify the presentation of the CS technique we will restrict our attention to discrete signals y ∈ R^n. Accordingly, equation (2.1) can be rewritten in matrix form as

x = Φy, (2.2)

where the jth row of the sensing matrix Φ ∈ R^(m×n) is the discrete representation of the jth sensing function φj(t), and y ∈ R^n is the discrete representation of y(t). Based on this model, compressed sensing is defined as the sensing process for which the number m of available measurements is much smaller than the dimension n of the signal y. The difficulty associated with compressed sensing is that we have to solve an under-determined system of equations to recover the original signal y from the measurement vector x. Since the number of equations is less than the number of unknowns, this system has infinitely many solutions, and thus it is necessary to impose constraints on the candidate solutions to identify which of them is the desired one. A powerful constraint that can be used in this regard is the "sparsity" of the solution vector, which is defined in the next subsection.
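The measurement model (2.2) is easy to reproduce numerically. The sketch below is a minimal illustration assuming an i.i.d. Gaussian Φ and arbitrary dimensions (neither choice is prescribed by the text above); it shows that with m < n the system x = Φy is under-determined:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                 # signal length and number of measurements, m << n

# Illustrative sensing matrix: each row plays the role of a discretized
# sensing waveform phi_j in (2.2).
Phi = rng.standard_normal((m, n))

y = rng.standard_normal(n)     # a discrete signal y in R^n
x = Phi @ y                    # m measurements: x = Phi y

# Phi has rank at most m < n, so it has a nontrivial null space: any
# y + z with Phi z = 0 produces exactly the same measurement vector x.
print(x.shape, np.linalg.matrix_rank(Phi))   # (64,) 64
```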


    2.1.2 Sparsity

Before explaining the importance of the sparsity constraint in solving under-determined systems of equations, we present the following definitions [41]; a short numerical illustration follows the list:

• Sparsity = |{s[i] = 0}| = the number of zero entries in s, where | · | denotes the cardinality of a set.

• Diversity = |{s[i] ≠ 0}| = the number of nonzero entries in s.

• k–sparse vector: a k–sparse vector is a vector that has at most k nonzero entries.

• Compressible vector: the vector s is called compressible if its entries obey a power law |s|_(j) ≤ C_r j^(−r), where |s|_(j) is the jth largest value of s (i.e., |s|_(1) ≥ |s|_(2) ≥ . . . ≥ |s|_(n)), r > 1, and C_r is a constant which depends only on r [57]. This means that most entries of a compressible vector are small while only a few entries are large. Such a model is appropriate for the wavelet coefficients of a piecewise smooth signal, for example.
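The sketch below illustrates these definitions on a toy vector; the data and the power-law parameters r and C_r are assumptions made for the example, not values from the thesis:

```python
import numpy as np

s = np.array([0.0, 3.1, 0.0, 0.0, -1.2, 0.0, 0.5, 0.0])

diversity = np.count_nonzero(s)   # number of nonzero entries
sparsity = s.size - diversity     # number of zero entries
print(diversity, sparsity)        # 3 5  -> s is k-sparse for any k >= 3

# Compressibility check: sorted magnitudes |s|_(1) >= |s|_(2) >= ... are
# compared against the power law C_r * j^(-r) for illustrative r and C_r.
mags = np.sort(np.abs(s))[::-1]
j = np.arange(1, s.size + 1)
r, C_r = 1.5, 4.0
print(bool(np.all(mags <= C_r * j**(-r))))   # True for this toy vector
```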

As stated in the previous subsection, an under-determined system of linear equations has infinitely many candidate solutions of the form ŷ ∈ y0 + N, where y0 is any vector that satisfies x = Φy0, and N := N(Φ) is the null space of Φ. As will be shown later, if the candidate solution vector is known to be k–sparse, and under some conditions on the sensing matrix Φ, the solution vector can be uniquely determined using an optimization technique. Fortunately, this also applies to nonsparse vectors that can be sparsely represented in a suitably selected basis Ψ, i.e.,

y = Ψs, (2.3)


where the coefficient vector s is sparse.¹ Clearly y and s are equivalent representations of the signal, with y in the time or space domain and s in the Ψ domain. In some applications it may be natural to choose Ψ as an orthonormal basis, while in others the signal y may only be sparsely represented when Ψ is a redundant dictionary, i.e., it has more columns than rows. A good example is provided by an audio signal, which is often sparsely represented in an overcomplete dictionary with atoms (columns) that have the form of modulated Gaussian pulses, e.g., σ^(−1/2) e^(−(t−t0)²/(2σ²)) e^(iωt), where t0, ω, and σ are the discrete shift, modulation, and scale parameters, respectively [18].
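The sketch below generates one such modulated Gaussian (Gabor) atom on a discrete grid; the grid and the parameter values (t0, ω, σ) are arbitrary choices made for the example:

```python
import numpy as np

def gabor_atom(t, t0, omega, sigma):
    """Modulated Gaussian pulse sigma^(-1/2) exp(-(t - t0)^2 / (2 sigma^2)) exp(i omega t)."""
    envelope = sigma**-0.5 * np.exp(-(t - t0)**2 / (2 * sigma**2))
    return envelope * np.exp(1j * omega * t)

t = np.linspace(0.0, 1.0, 512)
atom = gabor_atom(t, t0=0.5, omega=2 * np.pi * 40, sigma=0.05)

# A redundant Gabor dictionary stacks such atoms as columns over many
# (t0, omega, sigma) triples, giving far more columns than rows.
print(atom.shape)   # (512,)
```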

Combining (2.2) and (2.3), and taking into consideration the case of noisy measurements, the sensing process can be written as

x = ΦΨs + v = As + v, (2.4)

where A = ΦΨ ∈ R^(m×n) and v ∈ R^m is a noise vector. Assuming that the coefficient vector s is k–sparse, then s, and hence y = Ψs, can only be estimated from x if the matrices Φ, Ψ, and A satisfy the properties described in the next subsection.
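A minimal sketch of the full model (2.4), assuming a Gaussian Φ, an orthonormal DCT basis for Ψ, and i.i.d. Gaussian noise (all three are illustrative assumptions, not choices made in the text):

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5

# Orthonormal DCT synthesis basis Psi: y = Psi s maps sparse DCT
# coefficients s back to the time domain.
Psi = idct(np.eye(n), axis=0, norm="ortho")

s = np.zeros(n)                                   # k-sparse coefficient vector
s[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n))                 # sensing matrix
A = Phi @ Psi                                     # A = Phi Psi in (2.4)
v = 0.01 * rng.standard_normal(m)                 # noise vector v in R^m
x = A @ s + v                                     # noisy measurements

print(A.shape, x.shape)   # (64, 256) (64,)
```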

    2.1.3 Incoherence and restricted isometry properties

The sparsity of the solution vector, or of its representation in some basis, is a necessary but not sufficient condition for finding a unique solution to an under-determined system of linear equations. In addition to the sparsity principle, CS relies on a second principle, namely the "incoherence" between the sensing matrix Φ and the sparsity basis Ψ. The incoherence principle is also related to an equivalent property associated with A, called the restricted isometry property (RIP).

¹In some papers a vector is called k–sparse if it is a linear combination of only k basis vectors. However, in this thesis we use the definition presented in the text.


    2.1.3.1 Incoherence

To simplify the treatment, we assume that the sparsity basis matrix Ψ is orthonormal, and that the sensing matrix Φ consists of m rows drawn randomly from an orthogonal basis Φ̂ ∈ R^(n×n), which is normalized such that Φ̂ᵀΦ̂ = nI, where I is the identity matrix. The operation of extracting the m rows of Φ from Φ̂ is denoted by Φ := Φ̂_Ω, where Ω ⊂ {1, . . . , n} is a subset of indices of size |Ω| = m. Based on this notation, A can also be written as A := Â_Ω, where  = Φ̂Ψ is an orthogonal matrix with ÂᵀÂ = nI.

Let µ(Â) denote the largest magnitude among all entries of Â, i.e.,

µ(Â) = max_{k,j} |Â_{k,j}|. (2.5)
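As a concrete check (an illustrative computation; the DFT example and the sizes are assumptions, and for a complex matrix the normalization reads Â^HÂ = nI), the n×n DFT matrix satisfies this normalization with every entry of unit magnitude, so µ(Â) attains its smallest possible value of 1; subsampling its rows then gives A := Â_Ω:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 16

A_hat = np.fft.fft(np.eye(n))            # DFT matrix: A_hat^H A_hat = n I
print(np.allclose(A_hat.conj().T @ A_hat, n * np.eye(n)))   # True

# mu(A_hat): the largest magnitude among all entries, as in (2.5).
mu = np.abs(A_hat).max()
print(round(mu, 6))                      # 1.0 -> maximally incoherent

# Extract A := A_hat restricted to a random index set Omega, |Omega| = m.
Omega = rng.choice(n, size=m, replace=False)
A = A_hat[Omega]
print(A.shape)                           # (16, 64)
```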

Assume that the measurements are noise-free and that the k–sparse solution vector s is reconstructed using basis pursuit, i.e.,

ŝ = argmin_z ‖z‖_ℓ1 subject to Â_Ω z = Â_Ω s. (2.6)

It was proved in [58] that ŝ = s with overwhelming probability for all subsets Ω of size

|Ω| ≥ C · µ²(Â) · k · log n (2.7)

for some positive constant C. Equation (2.7) indicates that, in addition to the size and the sparsity of the solution vector, the number of measurements depends on the largest magnitude among all entries of Â. Since each row (or column) of  necessarily has an ℓ2-norm equal to √n, µ(Â) takes values between 1 and √n.
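To illustrate the recovery step, the following sketch solves the noise-free basis pursuit problem min ‖z‖_ℓ1 subject to Az = x by recasting it as a linear program (writing z = u − w with u, w ≥ 0, so that ‖z‖_ℓ1 = Σ(u + w)). The Gaussian A and the problem sizes are assumptions made for the example, not the subsampled-orthogonal construction used above:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, k = 128, 40, 4

# k-sparse ground truth and a random Gaussian measurement matrix.
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
x = A @ s_true

# Basis pursuit as an LP: z = u - w with u, w >= 0, so minimizing
# sum(u + w) subject to A(u - w) = x minimizes the l1 norm of z.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * n))
s_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(s_hat - s_true))   # ~0: exact recovery w.h.p.
```

With m = 40 measurements for a 4-sparse vector of length 128, the ℓ1 program typically recovers s exactly, consistent with the k log n scaling in (2.7).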