Compressed Sensing (2014-10-21), Sohail Bahmani, Georgia Tech


  • Compressive Sensing and Beyond

    Sohail Bahmani

    Georgia Tech.

  • Compressed Sensing

    Signal Processing

  • Signal Models

  • The Sampling Theorem

    Any signal with bandwidth $B$ can be recovered from its samples collected uniformly at a rate no less than $f_s = 2B$.

    Classics: bandlimited

    Claude Shannon, Harry Nyquist

    π‘₯(𝑑) = π‘₯𝑛sinc 𝑓s𝑑 βˆ’ 𝑛

    π‘›βˆˆβ„€

    π‘₯𝑛 = π‘₯ 𝑑 , sinc 𝑓s𝑑 βˆ’ 𝑛

    [Figure: sinc interpolation of uniformly spaced samples]
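    As a quick numerical illustration of the interpolation formula, here is a minimal sketch, assuming a toy bandlimited test signal and a finite truncation of the sinc series:

```python
import numpy as np

fs = 2.0                               # sampling rate, twice the bandwidth B = 1
n = np.arange(-50, 51)                 # finite truncation of the sinc series
t = np.linspace(-3, 3, 1000)

# Toy bandlimited signal (all frequencies below 1 Hz) and its uniform samples x_n
x = lambda t: np.cos(2 * np.pi * 0.7 * t) + 0.5 * np.sin(2 * np.pi * 0.3 * t)
samples = x(n / fs)

# Shannon reconstruction: x(t) = sum_n x_n sinc(fs*t - n)
x_rec = np.sum(samples[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

# Interpolation error is small, limited only by the truncation of the series
print(np.max(np.abs(x_rec - x(t))))
```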

  • Classics: time-frequency

    Short-time Fourier transform:

    $X(\tau, \omega) = \langle x(t),\, w(t - \tau)\, e^{\mathrm{j}\omega t} \rangle$

    $x(t) = \frac{1}{2\pi} \iint_{\mathbb{R}\times\mathbb{R}} X(\tau, \omega)\, e^{\mathrm{j}\omega t}\, \mathrm{d}\omega\, \mathrm{d}\tau$

    Wavelet expansion:

    $\psi_{m,n}(t) = a^{-m/2}\, \psi(a^{-m} t - n b)$

    $x(t) = \sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \langle x(t), \psi_{m,n}(t) \rangle\, \psi_{m,n}(t)$

    A. Haar, I. Daubechies, R. Coifman, D. Gabor
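    A short discrete analogue of the analysis/synthesis pair above, using SciPy's STFT; this is only a sketch, and the test signal, window, and segment length are illustrative choices:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t) + np.cos(2 * np.pi * 200 * t**2)  # tone plus chirp

# Discrete analogue of X(tau, omega): windowed Fourier coefficients
f, tau, X = stft(x, fs=fs, nperseg=128)

# Inverse transform reassembles x(t) from its time-frequency coefficients
_, x_rec = istft(X, fs=fs, nperseg=128)
print(np.max(np.abs(x - x_rec[: len(x)])))   # near-perfect round trip
```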

  • Sparsity

    The transform $\boldsymbol{F}\boldsymbol{x}$ of the signal $\boldsymbol{x}$ is sparse, with support $\Omega$; the signal is determined by the few coefficients $\boldsymbol{F}_\Omega\, \boldsymbol{x} = (\boldsymbol{F}\boldsymbol{x})\big|_\Omega$.

    T. Tao, E. Candès, J. Romberg, D. Donoho

  • Structured sparsity

    Discrete Wavelet Transform

    Tree-structured models; graph-structured models

    Protein-protein interactions; gene regulatory network

    Image credit: Couch Lab, George Mason University, http://osf1.gmu.edu/~rcouch/protprot.jpg

  • Other applications

    Low rank:

    - minimum-order linear system realization

    - low-rank matrix completion

    - low-dimensional Euclidean embedding

    B. Recht, M. Fazel, P. Parrilo

    Model for images

  • Compressive Sensing

  • Linear inverse problems

    $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{y}$, with $\boldsymbol{A} \in \mathbb{R}^{m \times n}$, measurements $\boldsymbol{y} \in \mathbb{R}^{m}$, and unknown $\boldsymbol{x}^\star \in \mathbb{R}^{n}$

    Pseudoinverse (least-norm) solution: $\widehat{\boldsymbol{x}} = \boldsymbol{A}^{\dagger}\boldsymbol{y}$

    $m < n$: no unique inverse, but …

  • Sparse solutions

    $\operatorname*{argmin}_{\boldsymbol{x}}\ \|\boldsymbol{x}\|_0 \quad \text{subject to } \boldsymbol{A}\boldsymbol{x} = \boldsymbol{y}$

    $\|\boldsymbol{x}\|_0 = |\operatorname{supp}(\boldsymbol{x})|, \qquad \operatorname{supp}(\boldsymbol{x}) = \{\, i : x_i \neq 0 \,\}$

    - Generally NP-hard!

    - Tractable for many interesting $\boldsymbol{A}$

    [Figure: sparse vectors $\boldsymbol{x}_1, \boldsymbol{x}_2$ and their measurements $\boldsymbol{A}\boldsymbol{x}_1, \boldsymbol{A}\boldsymbol{x}_2$]

  • Many sufficient conditions proposed

    Nullspace property, incoherence, restricted eigenvalue, …

    Restricted Isometry Property (RIP):

    $(1 - \delta_k)\, \|\boldsymbol{x}\|_2^2 \;\le\; \|\boldsymbol{A}\boldsymbol{x}\|_2^2 \;\le\; (1 + \delta_k)\, \|\boldsymbol{x}\|_2^2, \quad \forall\, \boldsymbol{x}: \|\boldsymbol{x}\|_0 \le k$

    Specialize $\boldsymbol{A}$

    [Figure: pairwise distances between sparse vectors $\boldsymbol{x}_1, \boldsymbol{x}_2$ are preserved up to a factor $1 \pm \delta$]

  • Randomness comes to the rescue: random matrices can exhibit RIP

    Random matrices with iid entries

    Gaussian, Rademacher (symmetric Bernoulli), Uniform

    Structured random matrices

    Random partial Fourier matrices

    Random circulant matrices

    RIP holds with high probability if $m \gtrsim k \log^{\gamma} n$

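    A small Monte Carlo sketch of this concentration effect for an iid Gaussian matrix; the dimensions and the $1/\sqrt{m}$ column scaling are illustrative choices, and checking random sparse vectors is not a proof of RIP:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 10
m = 5 * k * int(np.log(n))                       # m ~ k log n measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)     # variance 1/m so that E||Ax||^2 = ||x||^2

ratios = []
for _ in range(200):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

# Empirically, ||Ax||^2 / ||x||^2 concentrates around 1 for random k-sparse x
print(min(ratios), max(ratios))
```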

  • Basis Pursuit ($\ell_1$-minimization)

    $\operatorname*{argmin}_{\boldsymbol{x}}\ \|\boldsymbol{x}\|_1 \quad \text{subject to } \boldsymbol{A}\boldsymbol{x} = \boldsymbol{y}$

    $\|\boldsymbol{x}\|_1 = \sum_i |x_i|$

    Theorem [Candès] If $\boldsymbol{A}$ satisfies the $\delta_{2k}$-RIP with $\delta_{2k} < \sqrt{2} - 1$, then BP recovers any $k$-sparse target exactly.
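    Basis pursuit can be posed as a linear program by splitting $\boldsymbol{x}$ into its positive and negative parts. A minimal sketch with a Gaussian sensing matrix and SciPy's LP solver (the slide's own figure uses CVX instead; all problem sizes here are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, m = 200, 5, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)

# k-sparse ground truth and its measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit as an LP: x = u - v with u, v >= 0,
# minimize 1'u + 1'v  subject to  A(u - v) = y
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x_true))   # essentially exact recovery
```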

  • Basis Pursuit ($\ell_1$-minimization)

    [Figure: a length-1000 sparse signal and its exact $\ell_1$ reconstruction computed with CVX; value vs. index]

  • Greedy algorithms

    OMP, StOMP, IHT, CoSaMP, SP, …

    Iteratively estimate the support and the values on the support

    Many of them have RIP-based convergence guarantees

    CoSaMP (Needell and Tropp):

    1. Proxy vector $\boldsymbol{z} = \boldsymbol{A}^{\mathsf T}(\boldsymbol{A}\boldsymbol{x} - \boldsymbol{y})$

    2. $\mathcal{Z} = \operatorname{supp}(\boldsymbol{z}_{2k})$

    3. $\mathcal{T} = \mathcal{Z} \cup \operatorname{supp}(\boldsymbol{x})$

    4. $\boldsymbol{b} = \operatorname*{argmin}_{\boldsymbol{x}}\ \|\boldsymbol{A}\boldsymbol{x} - \boldsymbol{y}\|_2^2 \ \text{s.t.}\ \boldsymbol{x}\big|_{\mathcal{T}^{\mathrm c}} = \boldsymbol{0}$

    5. Update $\boldsymbol{x} \leftarrow \boldsymbol{b}_k$

    Converges with $\delta_{4k} \le 0.1$
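    A compact NumPy sketch of these five steps; this is an illustrative implementation, not the authors' reference code:

```python
import numpy as np

def cosamp(A, y, k, iters=30):
    """Sketch of the CoSaMP iteration for y = A x with a k-sparse x."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        z = A.T @ (A @ x - y)                               # 1. proxy vector
        Z = np.argsort(np.abs(z))[-2 * k:]                  # 2. 2k largest proxy entries
        T = np.union1d(Z, np.nonzero(x)[0]).astype(int)     # 3. merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]   # 4. least squares restricted to T
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-k:]                   # 5. prune to the k largest entries
        x[keep] = b[keep]
    return x
```

    With the Gaussian $\boldsymbol{A}$ and measurements $\boldsymbol{y}$ from the basis-pursuit sketch above, `cosamp(A, y, 5)` typically returns the same sparse vector.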

  • Rice single-pixel camera

    original object; dim. reduction @ 20%; dim. reduction @ 40%

    Wakin et al.

  • Compressive MRI (Lustig et al.)

    [Figure: acquisition speed comparison, CS MRI vs. traditional MRI]

  • Image super-resolution (Yang et al.)

    [Figure: original vs. CS-based reconstruction]

    Sparse representation $\boldsymbol{\alpha}$ shared by the high- and low-resolution dictionaries: $\boldsymbol{D}_{\mathrm h}\boldsymbol{\alpha}$, $\boldsymbol{D}_{\mathrm l}\boldsymbol{\alpha}$

  • Low-Rank Matrix Recovery

  • Linear systems w/ low-rank solutions

    $y_i = \langle \boldsymbol{A}_i, \boldsymbol{X}^\star \rangle$, where $\boldsymbol{X}^\star = \boldsymbol{U}\boldsymbol{V}^{*} \in \mathbb{R}^{n_1 \times n_2}$ with $\boldsymbol{U} \in \mathbb{R}^{n_1 \times r}$, $\boldsymbol{V} \in \mathbb{R}^{n_2 \times r}$

    Simple least squares requires $n_1 n_2$ measurements

    Degrees of freedom is $r(n_1 + n_2 - r)$

    Can we close the gap?

  • Rank minimization

    $\operatorname*{argmin}_{\boldsymbol{X}}\ \operatorname{rank}(\boldsymbol{X}) \quad \text{subject to } \mathcal{A}(\boldsymbol{X}) = \boldsymbol{y}$

    - If identifiable, it recovers the low-rank solution exactly

    - Just like $\ell_0$-minimization, it is generally NP-hard

    - Special measurement operators $\mathcal{A}$ admit efficient solvers

  • Random measurements

    $\boldsymbol{A}_i$ with iid entries exhibit low-rank RIP:

    Gaussian, Rademacher, Uniform

    Some structured matrices:

    Rank-one $\boldsymbol{A}_i = \boldsymbol{a}_i \boldsymbol{b}_i^{*}$

    $\boldsymbol{A}_i$ with dependent entries

    Standard RIP usually doesn't hold, but other approaches exist

    Universal vs. instance-optimal guarantees

  • π’œ 𝑿 = π’š

    𝑿⋆

    argmin 𝑿

    𝑿 βˆ—

    subject to π’œ 𝑿 = π’š

     𝑿 βˆ— = πœŽπ‘–π‘–

    Theorem [Recht, Fazel, Parrilo] If π’œ obeys 𝛿5π‘Ÿ -RIP with 𝛿5π‘Ÿ < 0.1 then nuclear-norm minimization recovers any rank-π‘Ÿ target exactly.

    Nuclear-norm minimization

    19
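    A sketch of this program using the CVXPY modeling package (a Python analogue of the CVX toolbox mentioned earlier, not the tool used in the talk); the Gaussian measurement ensemble and problem sizes are illustrative assumptions:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n1, n2, r = 20, 20, 2
m = 3 * r * (n1 + n2 - r)                      # a few times the degrees of freedom

# Rank-r ground truth and generic Gaussian measurement matrices A_i
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A = rng.standard_normal((m, n1, n2))
y = np.einsum('kij,ij->k', A, X_true)          # y_i = <A_i, X*>

# Nuclear-norm minimization subject to the linear measurements
X = cp.Variable((n1, n2))
constraints = [cp.sum(cp.multiply(A[i], X)) == y[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

print(np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```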

  • Matrix completion

    Theorem [Candès and Recht] If $\boldsymbol{M}$ is a rank-$r$ matrix from the "random orthogonal model", then w.h.p. nuclear-norm minimization recovers $\boldsymbol{M}$ exactly from $\mathcal{O}(r\, n^{5/4} \log n)$ uniformly observed entries, where $n = n_1 \vee n_2$.

    Alice   4 5 ? 3 ? … ? 1
    Bob     ? 2 ? 4 5 … ? ?
    ⋮
    Yvon    5 ? 4 3 2 … 2 ?
    Zelda   3 2 ? ? 5 … 2 2
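    The theorem concerns the convex program above; a common lightweight alternative in practice alternates between imposing the observed entries and truncating to rank $r$. A heuristic sketch only, with synthetic data, and with no convergence guarantee implied by the snippet itself:

```python
import numpy as np

def complete(M_obs, mask, r, iters=200):
    """Toy matrix completion: alternate data consistency with a rank-r truncated SVD."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X = np.where(mask, M_obs, X)                       # keep observed entries fixed
        U, s, Vt = np.linalg.svd(X, full_matrices=False)   # project onto rank-r matrices
        X = (U[:, :r] * s[:r]) @ Vt[:r]
    return X

rng = np.random.default_rng(3)
n, r = 50, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.4                            # observe ~40% of the entries
M_hat = complete(mask * M, mask, r)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```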

  • Robust PCA (Candès et al.)

    Background modeling: ordinary PCA is sensitive to noise and outliers

    $\boldsymbol{M} = \boldsymbol{L} + \boldsymbol{S}$ (low-rank component + sparse component)

    $\operatorname*{argmin}_{\boldsymbol{L},\boldsymbol{S}}\ \|\boldsymbol{L}\|_{*} + \lambda \|\boldsymbol{S}\|_1 \quad \text{subject to } P_{\Omega_{\mathrm{obs}}}(\boldsymbol{L} + \boldsymbol{S}) = \boldsymbol{Y}$
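    A sketch of the fully observed case using CVXPY; the $\lambda = 1/\sqrt{n}$ weighting follows the robust PCA literature, while the synthetic data and solver choice are assumptions of this example:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n = 30
# Low-rank "background" plus sparse, large-magnitude "foreground" outliers
L_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
S_true = (rng.random((n, n)) < 0.05) * rng.standard_normal((n, n)) * 10
Y = L_true + S_true

L, S = cp.Variable((n, n)), cp.Variable((n, n))
lam = 1 / np.sqrt(n)
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                  [L + S == Y])          # fully observed case: P_Omega is the identity
prob.solve()

print(np.linalg.norm(L.value - L_true) / np.linalg.norm(L_true))
```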

  • Compressive multiplexing (Ahmed and Romberg)

    $x_m(t) = \sum_{\omega=-B}^{B} \alpha_m(\omega)\, e^{\mathrm{j} 2\pi \omega t}, \qquad W = 2B + 1$

    $\Omega \sim R\,(M + W) \log^{3}(MW)$
  • Nonlinear CS

  • Sparsity-constrained minimization

    Squared error isn't always appropriate (example: non-Gaussian noise models)

    $\operatorname*{argmin}_{\boldsymbol{x}}\ f(\boldsymbol{x}) \quad \text{subject to } \|\boldsymbol{x}\|_0 \le k$

    It's challenging in its general form

    Objectives with certain properties allow approximations:

    - convex relaxation: $\ell_1$-regularization

    - greedy methods (a hard-thresholding projection sketch follows below)
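    The constraint set $\{\boldsymbol{x} : \|\boldsymbol{x}\|_0 \le k\}$ is nonconvex, but projecting onto it is easy: keep the $k$ largest-magnitude entries. A minimal sketch; the `hard_threshold` helper is hypothetical and introduced only for illustration:

```python
import numpy as np

def hard_threshold(x, k):
    """Project x onto {x : ||x||_0 <= k} by keeping its k largest-magnitude entries."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out
```

    Iterative hard thresholding (IHT), for example, alternates a gradient step on $f$ with this projection; the sparse logistic regression sketch below reuses the same idea.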

  • Sparse Logistic Regression

    $\operatorname*{argmin}_{\boldsymbol{x}}\ \sum_{i=1}^{m} \left[ \log\!\left(1 + e^{\boldsymbol{a}_i^{\mathsf T}\boldsymbol{x}}\right) - y_i\, \boldsymbol{a}_i^{\mathsf T}\boldsymbol{x} \right] \quad \text{subject to } \|\boldsymbol{x}\|_0 \le k,\ \|\boldsymbol{x}\|_2 \le 1$

    Gene classification from DNA microarrays, model: $p(y \mid \boldsymbol{a}; \boldsymbol{x})$

    $\boldsymbol{x}$: sparse weights for genes

    $\boldsymbol{a}_i$: microarray sample

    $y_i$: binary sample label (healthy/diseased)
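    A simple projected-gradient sketch for this constrained problem; it is an illustration only, not the GraSP algorithm discussed at the end of the talk, and the step size and iteration count are arbitrary:

```python
import numpy as np

def sparse_logistic(A, y, k, step=0.1, iters=500):
    """Projected gradient for min sum_i log(1+exp(a_i'x)) - y_i a_i'x  s.t. ||x||_0<=k, ||x||_2<=1."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (1.0 / (1.0 + np.exp(-A @ x)) - y)   # gradient of the logistic loss
        x = x - step * grad
        # keep the k largest-magnitude entries (projection onto the l0 ball) ...
        keep = np.argsort(np.abs(x))[-k:]
        z = np.zeros(n)
        z[keep] = x[keep]
        # ... then rescale into the unit l2 ball
        x = z / max(1.0, np.linalg.norm(z))
    return x
```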

  • Imaging under photon noise (Harmany et al.)

    Photon noise follows a Poisson distribution

    $p(\boldsymbol{y} \mid \boldsymbol{A}; \boldsymbol{x}) = \prod_i \frac{(\boldsymbol{A}\boldsymbol{x})_i^{\,y_i}}{y_i!}\, e^{-(\boldsymbol{A}\boldsymbol{x})_i}$

    Physical constraints: $\boldsymbol{A} \ge 0$, $\boldsymbol{A}\boldsymbol{x} \ge 0$, $\boldsymbol{x} \ge 0$

    SPIRAL-TAP: $\operatorname*{argmin}_{\boldsymbol{x} \ge 0}\ -\log p(\boldsymbol{y} \mid \boldsymbol{A}; \boldsymbol{x}) + \tau R(\boldsymbol{x})$

    with $R(\boldsymbol{x})$ chosen as $\|\boldsymbol{x}\|_1$, $\|\boldsymbol{W}^{\mathsf T}\boldsymbol{x}\|_1$, $\|\boldsymbol{x}\|_{\mathrm{TV}}$, …
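    For reference, a short sketch of the negative log-likelihood in this model and its gradient, with the regularizer omitted; the small `eps` guard is a numerical convenience of the sketch, not part of the model:

```python
import numpy as np

def poisson_nll(x, A, y, eps=1e-12):
    """Negative log-likelihood of y ~ Poisson(Ax), up to the constant sum(log(y_i!))."""
    Ax = A @ x
    return np.sum(Ax - y * np.log(Ax + eps))

def poisson_nll_grad(x, A, y, eps=1e-12):
    """Gradient of the negative log-likelihood: A^T (1 - y / (Ax))."""
    Ax = A @ x
    return A.T @ (1.0 - y / (Ax + eps))
```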

  • Coherent diffractive imaging (Szameit et al.)

    $f(\boldsymbol{x}) = \left\| \, |\boldsymbol{L}\boldsymbol{F}\boldsymbol{C}\boldsymbol{x}|^{2} - \boldsymbol{I} \, \right\|_2^2$

    $\boldsymbol{I}$: diffracted field intensity

    $\boldsymbol{C}\boldsymbol{x}$: real image

    $\boldsymbol{C}$: shifts of generating function; $\boldsymbol{L}$: low-pass filter; $\boldsymbol{F}$: Fourier transform

    Image: Lima et al., ESRF (www.esrf.eu)

  • Generalizes CoSaMP

    1. Proxy vector $\boldsymbol{z} = \nabla f(\boldsymbol{x})$

    2. $\mathcal{Z} = \operatorname{supp}(\boldsymbol{z}_{2k})$

    3. $\mathcal{T} = \mathcal{Z} \cup \operatorname{supp}(\boldsymbol{x})$

    4. $\boldsymbol{b} = \operatorname*{argmin}_{\boldsymbol{x}}\ f(\boldsymbol{x}) \ \text{s.t.}\ \boldsymbol{x}\big|_{\mathcal{T}^{\mathrm c}} = \boldsymbol{0}$

    5. Update $\boldsymbol{x} \leftarrow \boldsymbol{b}_k$

    Converges under SRH (Stable Restricted Hessian), a restricted condition bound on the Hessian of $f$
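    A schematic sketch of this CoSaMP-style loop for a general smooth objective; the `f`, `grad`, and `embed` helpers are hypothetical, and the inner restricted minimization simply calls a generic BFGS solver rather than a dedicated routine:

```python
import numpy as np
from scipy.optimize import minimize

def embed(u, T, n):
    """Place the values u on the coordinates T of an otherwise zero length-n vector."""
    v = np.zeros(n)
    v[T] = u
    return v

def greedy_sparse_min(f, grad, n, k, iters=20):
    """Sketch of a CoSaMP-style scheme for  min f(x)  s.t.  ||x||_0 <= k."""
    x = np.zeros(n)
    for _ in range(iters):
        z = grad(x)                                         # 1. gradient proxy
        Z = np.argsort(np.abs(z))[-2 * k:]                  # 2. 2k largest proxy entries
        T = np.union1d(Z, np.nonzero(x)[0]).astype(int)     # 3. merge with current support
        obj = lambda u: f(embed(u, T, n))                   # 4. minimize f restricted to T
        b = embed(minimize(obj, x[T], method="BFGS").x, T, n)
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-k:]                   # 5. prune to the k largest entries
        x[keep] = b[keep]
    return x
```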