  • Introduction to Compressed Sensing

    Gitta Kutyniok

    (Institut für Mathematik, Technische Universität Berlin)

    Winter School on “Compressed Sensing”, TU Berlin December 3–5, 2015

  • Outline

    1 Modern Data Processing: Data Deluge; Information Content of Data; Why do we need Compressed Sensing?

    2 Main Ideas of Compressed Sensing: Sparsity; Measurement Matrices; Recovery Algorithms

    3 Applications

    4 This Winter School

  • The Age of Data

    Problem of the 21st Century:

    We live in a digitized world.

    Slogan: “Big Data”.

    New technologies produce/sense enormous amounts of data.

    Problems: Storage, Transmission, and Analysis.

    “Big Data Research and Development Initiative”

    Barack Obama (March 2012)

  • Olympic Games 2012

  • Better, Stronger, Faster!

  • Accelerating Data Deluge

    Situation 2010:

    1250 Billion Gigabytes generated in 2010:

    # digital bits > # stars in the universe

    Growing by a factor of 10 every 5 years.

    [Figure: growth of data generated vs. available transmission bandwidth.]

    Observations:

    Total data generated > total storage

    Increases in generation rate >> increases in communication rate

  • What can we do...?

  • Quote by Einstein

    “Not everything that can be counted counts,

    and not everything that counts can be counted.”

    Albert Einstein

  • An Applied Harmonic Analysis Viewpoint

    Exploit a carefully designed representation system (ψλ)λ∈Λ ⊆ H:

    H ∋ f ⟶ (⟨f, ψλ⟩)λ∈Λ ⟶ ∑λ∈Λ ⟨f, ψλ⟩ ψλ = f.

    Desiderata:

    Special features are encoded in the “large” coefficients |⟨f, ψλ⟩|.

    Efficient representations:

    f ≈ ∑λ∈ΛN ⟨f, ψλ⟩ ψλ, with #(ΛN) small.

    Goals:

    Derive high compression by considering only the “large” coefficients.

    Modification of the coefficients according to the task.
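
    To make this concrete, here is a minimal numpy sketch (an added illustration, not from the slides) of the analyze/threshold/synthesize pipeline, using the columns of a random orthogonal matrix as a stand-in for the system (ψλ)λ∈Λ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy orthonormal representation system (psi_lambda): the columns of Q.
dim = 256
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))

# A signal with few large coefficients in this system, plus small noise.
c_true = np.zeros(dim)
c_true[rng.choice(dim, 10, replace=False)] = 5.0 * rng.standard_normal(10)
f = Q @ c_true + 0.01 * rng.standard_normal(dim)

# Analysis: compute all coefficients <f, psi_lambda>.
c = Q.T @ f

# Keep only the largest coefficients -- the index set Lambda_N above.
N_keep = 10
keep = np.argsort(np.abs(c))[::-1][:N_keep]
c_kept = np.zeros_like(c)
c_kept[keep] = c[keep]

# Synthesis: f is approximated by the sum over Lambda_N of <f, psi_lambda> psi_lambda.
f_approx = Q @ c_kept
print("relative error:", np.linalg.norm(f - f_approx) / np.linalg.norm(f))
```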

  • Review of Wavelets for L^2(R^2)

    Definition (1D): Let φ ∈ L^2(R) be a scaling function and ψ ∈ L^2(R) be a wavelet. Then the associated wavelet system is defined by

    {φ(x − m) : m ∈ Z} ∪ {2^(j/2) ψ(2^j x − m) : j ≥ 0, m ∈ Z}.

    Definition (2D): A wavelet system is defined by

    {φ^(1)(x − m) : m ∈ Z^2} ∪ {2^j ψ^(i)(2^j x − m) : j ≥ 0, m ∈ Z^2, i = 1, 2, 3},

    where φ^(1)(x) = φ(x1)φ(x2), ψ^(1)(x) = φ(x1)ψ(x2), ψ^(2)(x) = ψ(x1)φ(x2), ψ^(3)(x) = ψ(x1)ψ(x2).
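
    The tensor-product structure can be checked numerically; a small sketch (an added illustration) samples the 1D Haar φ and ψ and builds the three 2D wavelets as outer products:

```python
import numpy as np

# Sampled 1D Haar scaling function and wavelet on [0, 1).
t = np.linspace(0, 1, 8, endpoint=False)
phi = np.ones_like(t)                    # phi = 1 on [0, 1)
psi = np.where(t < 0.5, 1.0, -1.0)       # psi = 1 on [0, 1/2), -1 on [1/2, 1)

# The three 2D wavelets as tensor products; W[i, j] ~ value at (x1_i, x2_j).
phi_1 = np.outer(phi, phi)   # phi^(1)(x) = phi(x1) phi(x2)
psi_1 = np.outer(phi, psi)   # psi^(1)(x) = phi(x1) psi(x2)
psi_2 = np.outer(psi, phi)   # psi^(2)(x) = psi(x1) phi(x2)
psi_3 = np.outer(psi, psi)   # psi^(3)(x) = psi(x1) psi(x2)

# Orthogonality of the 1D pieces carries over to the 2D tensor products.
print(np.sum(psi_1 * psi_2), np.sum(psi_1 * psi_3), np.sum(psi_2 * psi_3))  # 0 0 0
```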

  • The World is Compressible!

    [Figure: an image with N pixels, well represented by only k ≪ N large coefficients.]

  • JPEG2000

    [Figures: compression to 1/20 and to 1/200 of the original size.]
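
    A rough imitation of this experiment with PyWavelets (an added sketch; real JPEG2000 additionally quantizes and entropy-codes the coefficients): keep only a fraction of the 2D wavelet coefficients and reconstruct. The image here is a random stand-in.

```python
import numpy as np
import pywt  # PyWavelets

def compress(img, keep_fraction):
    """Keep only the largest keep_fraction of the 2D wavelet coefficients."""
    coeffs = pywt.wavedec2(img, "bior4.4", level=4)       # CDF 9/7-type filters
    arr, slices = pywt.coeffs_to_array(coeffs)
    k = int(arr.size * keep_fraction)
    thresh = np.sort(np.abs(arr), axis=None)[-k]          # k-th largest magnitude
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    rec = pywt.waverec2(coeffs, "bior4.4")
    return rec[: img.shape[0], : img.shape[1]]            # crop possible padding

img = np.random.rand(256, 256)     # stand-in for a real image
for frac in (1 / 20, 1 / 200):     # the two ratios shown on the slide
    rec = compress(img, frac)
    err = np.linalg.norm(img - rec) / np.linalg.norm(img)
    print(f"keep {frac:.2%} of coefficients -> relative error {err:.3f}")
```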

  • The New Paradigm for Data Processing: Sparsity!

    Sparse Signals: A signal x ∈ R^N is k-sparse if

    ‖x‖0 = #(non-zero coefficients) ≤ k.

    Model Σk: a union of k-dimensional subspaces.

    Compressible Signals: A signal x ∈ R^N is compressible if its sorted coefficients have rapid (power-law) decay.

    Model: ℓp ball with p ≤ 1.

    [Figure: sorted coefficient magnitudes |xi| versus index; only the first k of N entries are significant.]
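
    A quick numerical check (an added illustration) of this model: for power-law coefficients, the best k-term approximation error falls off quickly in k.

```python
import numpy as np

N = 10_000
rng = np.random.default_rng(1)

# Compressible model: sorted magnitudes follow a power law i^(-q).
q = 1.5
x = np.arange(1, N + 1) ** (-q) * rng.choice([-1.0, 1.0], N)
rng.shuffle(x)

def best_k_term_error(x, k):
    """l2 error of the best k-sparse approximation: drop all but the k largest."""
    mags = np.sort(np.abs(x))[::-1]
    return np.sqrt(np.sum(mags[k:] ** 2))

for k in (10, 100, 1000):
    print(f"k = {k:4d}: best k-term error {best_k_term_error(x, k):.2e}")
```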

  • “Not everything that can be counted counts...” (Einstein)

    Classical Approach:

    x → Sensing/Sampling (N samples) → Compression (k values) → Reconstruction (N samples)

    Sensing/Sampling: ◮ Linear processing.

    Compression: ◮ Non-linear processing.

    Why acquire N samples only to discard all but k pieces of data?

    Fundamental Idea:

    Directly acquire “compressed data”, i.e., the information content.

    Take more universal measurements:

    x → Compressed Sensing (n measurements, k ≤ n ≪ N) → Reconstruction

  • Compressed Sensing enters the Stage

    ‘Initial’ Papers:

    E. Candès, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math. 59 (2006), 1207–1223.

    D. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (2006), 1289–1306.

    Avalanche of Results (dsp.rice.edu/cs):

    Approx. 2000 papers and 150 conferences so far.

    Relation to the following areas:

    Applied harmonic analysis.

    Applied linear algebra.

    Convex optimization.

    Geometric functional analysis.

    Random matrix theory.

    Application areas: Radar, Astronomy, Biology, Seismology, Signal Processing, and more.

  • What is Compressed Sensing...?

  • Compressed Sensing Problem, I

    General Procedure:

    Signal x ∈ R^N, assumed k-sparse.

    Take n ≪ N non-adaptive, linear measurements: y = Ax, where A is an n × N sensing matrix.
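
    In code, the acquisition step is a single matrix-vector product. A minimal sketch (an added illustration; the Gaussian matrix is one standard choice of sensing matrix, discussed later):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, k = 1000, 100, 10           # ambient dimension, measurements, sparsity

# k-sparse signal: k nonzero entries in random positions.
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

# Non-adaptive linear measurements y = A x, with A a random n x N matrix.
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x
print(y.shape)                    # (100,): far fewer stored values than N = 1000
```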

  • Compressed Sensing Problem, II

    [Figure: the underdetermined system y = Ax, with y ∈ R^n, A an n × N matrix (n ≪ N), and x ∈ R^N sparse.]

    Fundamental Questions:

    What are suitable signal models?

    When and with which accuracy can the signal be recovered?

    What are suitable sensing matrices?

    How can the signal be algorithmically recovered?

  • Fundamental Theorem of Sparse Solutions

    Definition: Let A be an n × N matrix. Then spark(A) denotes the minimal number of linearly dependent columns; spark(A) ∈ [2, n + 1].

    Lemma: Let A be an n × N matrix, and let k ∈ N. Then the following conditions are equivalent:

    (i) For every y ∈ R^n, there exists at most one x ∈ R^N with ‖x‖0 ≤ k such that y = Ax.

    (ii) k < spark(A)/2.

    Sketch of Proof:

    Assume y = Ax0 = Ax1 with ‖x0‖0, ‖x1‖0 ≤ k.

    Then x0 − x1 ∈ N(A) with ‖x0 − x1‖0 ≤ 2k < spark(A), so x0 = x1.
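
    For small matrices the spark can be computed by brute force, which lets one test the lemma directly; a sketch (an added illustration with exponential cost, toy sizes only):

```python
import numpy as np
from itertools import combinations

def spark(A):
    """Smallest number of linearly dependent columns of A (brute force)."""
    n, N = A.shape
    for s in range(1, min(N, n + 1) + 1):
        for cols in combinations(range(N), s):
            if np.linalg.matrix_rank(A[:, list(cols)]) < s:
                return s
    return np.inf  # full column rank: no dependent subset exists

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 8))   # generic 4 x 8 matrix: expect spark = n + 1 = 5
s = spark(A)
print(s, "-> unique k-sparse solutions for k <", s / 2)   # i.e. k <= 2
```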

  • Sparsity and ℓ1

    Assumption: Let A be an n × N matrix with n < N.
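
    This slide heads toward recovery by ℓ1 minimization (basis pursuit): min ‖x‖1 subject to Ax = y. A minimal sketch (an added illustration, using the standard reformulation x = u − v with u, v ≥ 0 to obtain a linear program):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, n, k = 200, 60, 5

# Ground truth: k-sparse x, measured by a random Gaussian matrix.
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x_true

# min ||x||_1 s.t. Ax = y, as an LP: x = u - v with u, v >= 0,
# minimize 1^T u + 1^T v subject to A u - A v = y.
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```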
