
  • Compressed Sensing and All That CS880 Final Project

    Duzhe Wang Department of Statistics, UW-Madison

    December 11, 2017

  • 1

    Outline

    Compressed Sensing 101: Traditional Sampling; Compressive Sampling

    RIP and JL: Distributional JL ⇐⇒ RIP; JL is Tight in the Linear Case

    Compressed Sensing MRI: Sparsity of Medical Imaging; Mathematical Framework of CSMRI; Image Reconstruction

    Fast and Efficient Compressed Sensing: Sensing Matrices; Structurally Random Ensemble System; Theoretical Analysis

    CS880 Final Project

  • 2

    Pressure from Medical Imaging

    I Nyquist–Shannon sampling theorem: no information loss if we sample at twice the signal bandwidth.

    I How do we reduce the data acquisition time?

    1 Comics courtesy of Michael Lustig


  • 3

    Traditional Sampling

    I The traditional sampling method: oversample, then remove redundancy to extract the information

    I The compressed sensing principle: directly acquire the significant part of the signal via nonadaptive linear measurements


  • 4

    Sparse Image Representation

    I Signal transform

    I For example, an image that is dense in the pixel domain can have a sparse representation in a transform domain (figure omitted)


  • 5

    Compressive Sampling

    I Signal x is K-sparse in basis Ψ. For example, Ψ = I.

    I Replace samples with linear projections y = Φx. Φ is called the sensing matrix.

    I Random measurements Φ will work, provided Φ and Ψ are incoherent.

    I The signal is local, but the measurements are global: each measurement picks up a little information about every component.
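These bullets can be made concrete with a short numerical sketch (the language and the sizes N, M, K below are my own illustrative choices, not from the slides): build a K-sparse signal with Ψ = I, draw a Gaussian sensing matrix Φ, and form the nonadaptive measurements y = Φx.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 4          # ambient dimension, measurements, sparsity (toy sizes)

x = np.zeros(N)               # K-sparse signal in the canonical basis (Psi = I)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)

Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))   # random sensing matrix
y = Phi @ x                   # M nonadaptive linear measurements

print(y.shape)                # (64,): far fewer numbers than the N = 256 samples
```

Each entry of y is a global mixture of all N components of x, which is exactly the "measurements are global" point above.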


  • 6

    Sensing Matrices

    I It’s possible to fully reconstruct any sparse signal if the sensing matrix Φ satisfies the Restricted Isometry Property (RIP).

    I A matrix Φ ∈ R^{M×N} is (ε, K)-RIP if for all x ≠ 0 with ||x||_0 ≤ K, we have

    (1 − ε)||x||_2^2 ≤ ||Φx||_2^2 ≤ (1 + ε)||x||_2^2


  • 7

    Theoretical Analysis

    Let Ψ be an arbitrary fixed N × N orthonormal matrix, let ε, δ be scalars in (0, 1), let s be an integer in [N], and let M be an integer that satisfies

    M ≥ 100 s ln(40N/(δε)) / ε^2

    Let Φ ∈ R^{M×N} be a matrix s.t. each element of Φ is distributed normally with zero mean and variance 1/M. Then, with probability at least 1 − δ over the choice of Φ, the matrix ΦΨ is (ε, s)-RIP.

    I The proof uses Distributional Johnson-Lindenstrauss Lemma.
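A quick numerical spot check of this statement (the sizes below are my own toy choices, far below the theorem's bound on M, and only randomly drawn sparse vectors are tested rather than all of them, so this is an illustration of the concentration, not a proof of RIP):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, s = 64, 80, 3                                  # toy sizes

Psi, _ = np.linalg.qr(rng.normal(size=(N, N)))       # an arbitrary orthonormal basis
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N)) # Gaussian sensing matrix
A = Phi @ Psi

worst = 0.0
for _ in range(200):                                 # random s-sparse unit vectors
    x = np.zeros(N)
    x[rng.choice(N, s, replace=False)] = rng.normal(size=s)
    x /= np.linalg.norm(x)
    worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
print(worst)                                         # small: ||A x||^2 stays near ||x||^2
```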


  • 8

    CS Signal Recovery

    I Mathematical notion of sparsity: B_q(R_q) = {θ ∈ R^N : Σ_{j=1}^N |θ_j|^q ≤ R_q}, q ∈ [0, 1]

    I Reconstruct via L1 minimization (basis pursuit):

    minimize_x ||x||_1 subject to y = Φx

    I If x is K-sparse, solving the above basis pursuit problem recovers x exactly when M is large enough. [Candes and Tao, ’04; Donoho, ’04]
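Basis pursuit is a linear program after the standard split x = u − v with u, v ≥ 0, since then ||x||_1 = 1ᵀ(u + v). A sketch using scipy.optimize.linprog (the sizes, seed, and solver are my own choices, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 50, 25, 3
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
y = Phi @ x_true

# min 1^T (u + v)  s.t.  [Phi, -Phi] @ [u; v] = y,  u, v >= 0
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_hat - x_true))   # near zero: exact recovery in this regime
```

With M = 25 Gaussian measurements of a 3-sparse signal in R^50 we are comfortably inside the recovery regime, so the LP returns x exactly up to solver tolerance.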


  • 9

    Theoretical Analysis

    [Candes ’08] Let ε < 1/(1 + √2) and let Φ be an (ε, 2s)-RIP matrix. Let x be an arbitrary vector, denote

    x_s ∈ argmin_{v : ||v||_0 ≤ s} ||x − v||_1

    and let

    x̂ ∈ argmin_{x : Φx = y} ||x||_1

    be the reconstructed vector. Then

    ||x̂ − x||_2 ≤ 2 (1 − ρ)^{−1} s^{−1/2} ||x − x_s||_1

    where ρ = √2 ε / (1 − ε).


  • 10

    RIP and Distributional JL

    A matrix Φ ∈ R^{M×N} is (ε, K)-RIP if for all x ≠ 0 with ||x||_0 ≤ K, we have

    (1 − ε)||x||_2^2 ≤ ||Φx||_2^2 ≤ (1 + ε)||x||_2^2

    A random matrix Φ ∈ R^{M×N} ~ D satisfies the (ε, δ)-distributional JL property if, for any fixed x ∈ R^N, with probability greater than 1 − δ,

    (1 − ε)||x||_2^2 ≤ ||Φx||_2^2 ≤ (1 + ε)||x||_2^2
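The difference between the two properties is the quantifier: RIP holds for all sparse x at once, while distributional JL holds for one fixed x over the random draw of Φ. The latter is easy to probe by Monte Carlo (a sketch; M, N, ε, and the trial count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, eps = 200, 200, 0.3
x = rng.normal(size=N)
x /= np.linalg.norm(x)            # one fixed unit vector, chosen before drawing Phi

trials, fails = 200, 0
for _ in range(trials):
    Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
    sq = np.linalg.norm(Phi @ x) ** 2
    if not (1.0 - eps <= sq <= 1.0 + eps):
        fails += 1
print(fails / trials)             # empirical delta: a small failure fraction
```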


  • 11

    Distributional JL ⇐⇒ RIP

    I Distributional JL =⇒ RIP:

    [Baraniuk-Davenport-DeVore-Wakin ’08] Suppose ε < 1 and M ≥ C_1(ε) K log(N/K). If Φ satisfies the (ε/2, δ)-distributional JL property with δ = e^{−Mε}, then with probability at least 1 − e^{−Mε/2}, Φ satisfies (ε, K)-RIP.

    I RIP =⇒ Distributional JL:

    [Krahmer-Ward ’11] Suppose Φ ∈ R^{M×N} satisfies (ε, 2K)-RIP. Let D_ξ = diag(ξ_1, ..., ξ_N) be a diagonal matrix of i.i.d. Rademacher random variables; then ΦD_ξ satisfies the (3ε, 3e^{−cK})-distributional JL property.
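The Krahmer-Ward direction can be sketched numerically: randomize the column signs of a fixed matrix and check that the result preserves the norm of a fixed vector. Here a Gaussian Φ stands in for a generic RIP matrix (Gaussian matrices are RIP with high probability); the sizes are my own choices.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 128, 512
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # stand-in for an RIP matrix
signs = rng.choice([-1.0, 1.0], size=N)               # i.i.d. Rademacher diagonal D_xi

x = rng.normal(size=N)                                # one fixed vector
ratio = np.linalg.norm(Phi @ (signs * x)) ** 2 / np.linalg.norm(x) ** 2
print(ratio)                                          # close to 1: a JL embedding
```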


  • 12

    Is JL Tight?

    I Distributional JL =⇒ JL: Suppose 0 < ε < 1/2, let {x_1, ..., x_N} ⊂ R^n, and m = 20 log N / ε^2. Then there exists a mapping f : R^n → R^m s.t. for any (i, j),

    (1 − ε)||x_i − x_j||_2^2 ≤ ||f(x_i) − f(x_j)||_2^2 ≤ (1 + ε)||x_i − x_j||_2^2

    I For any n > 1, 0 < ε < 1/2, and N > n^C for some constant C > 0, the JL lemma is optimal in the case where f is linear [Larsen-Nelson ’16]
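The mapping f in the JL lemma can be instantiated as a linear Gaussian projection. The sketch below measures the worst pairwise distortion of squared distances over a random point set (the point set, n, and m are my own choices, with m taken comfortably above 20 log N / ε² so the check passes with room to spare):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n, Npts, eps = 100, 30, 0.5
m = 400                                              # well above 20 log(Npts) / eps^2
X = rng.normal(size=(Npts, n))                       # the point set
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # linear JL map f(x) = A x
Y = X @ A.T

worst = 0.0
for i, j in combinations(range(Npts), 2):
    d0 = np.sum((X[i] - X[j]) ** 2)
    d1 = np.sum((Y[i] - Y[j]) ** 2)
    worst = max(worst, abs(d1 / d0 - 1.0))
print(worst)                                         # below eps for all pairs
```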


  • 13

    MRI

    I The signal measured by the MRI system is the Fourier transform of the magnitude of the magnetization (the image)

    I A traditional MRI reconstruction method is the inverse Fourier transform

    I Number of measurements ∝ scanning time

    I Reduce samples to reduce time


  • 14

    Sparsity of Medical Imaging

    2 Comics courtesy of Michael Lustig

  • 15

    Sparsity of Medical Imaging

    I In general, medical images are not sparse, but they have a sparse representation under a wavelet transform.

    3 This is an exercise from Michael Lustig’s webpage.
    4 Wavelet transform code is from Wavelab (David Donoho).
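The slides use Wavelab for the wavelet transform; as a self-contained stand-in, the sketch below implements a plain orthonormal Haar transform in NumPy and shows that a piecewise-constant signal, dense in the sample domain, has only two nonzero Haar coefficients (the signal itself is my own toy example):

```python
import numpy as np

def haar(v):
    """Full orthonormal Haar transform of a length-2^k vector."""
    out = np.asarray(v, dtype=float).copy()
    n = out.size
    while n > 1:
        a = (out[:n:2] + out[1:n:2]) / np.sqrt(2.0)   # pairwise averages
        d = (out[:n:2] - out[1:n:2]) / np.sqrt(2.0)   # pairwise details
        out[: n // 2] = a
        out[n // 2 : n] = d
        n //= 2
    return out

# piecewise-constant "image row": every sample is nonzero...
x = np.concatenate([np.full(32, 1.0), np.full(32, -2.0)])
c = haar(x)
# ...but almost all Haar coefficients vanish
print(np.count_nonzero(np.abs(x) > 1e-10), np.count_nonzero(np.abs(c) > 1e-10))  # 64 2
```

Because the transform is orthonormal, energy is preserved (||c||_2 = ||x||_2) while the representation becomes extremely sparse.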


  • 16

    Mathematical Framework of CSMRI

    θ̂ ∈ argmin_θ (1/2)||F_u(Ψθ) − y||_2^2 + λ||θ||_1    (1)

    where Ψ is a sparsifying basis, θ is the transform-coefficient vector, F_u is the undersampled Fourier transform, and y contains the samples acquired in K-space. Hence, Ψθ̂ is the estimated image.
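One standard way to solve a problem of the form (1) is iterative soft-thresholding (ISTA). Below is a minimal 1D sketch assuming Ψ = I (so θ is the signal itself) and an undersampled unitary DFT for F_u; all sizes, λ, and the iteration count are my own toy choices, and the slides' actual reconstruction uses POCS on 2D images.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, M = 128, 5, 48
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)

mask = np.zeros(N, dtype=bool)                  # random K-space sampling pattern
mask[rng.choice(N, M, replace=False)] = True

def Fu(v):                                      # undersampled unitary DFT
    return np.fft.fft(v, norm="ortho")[mask]

def FuH(w):                                     # adjoint: zero-fill, inverse DFT
    full = np.zeros(N, dtype=complex)
    full[mask] = w
    return np.fft.ifft(full, norm="ortho")

y = Fu(x_true)

lam = 0.01
theta = np.zeros(N, dtype=complex)
for _ in range(500):                            # ISTA: gradient step + soft-threshold
    g = theta - FuH(Fu(theta) - y)              # step size 1 is safe: ||Fu^H Fu|| <= 1
    theta = g * np.maximum(1.0 - lam / np.maximum(np.abs(g), 1e-12), 0.0)

err = np.linalg.norm(theta - x_true)
print(err)                                      # small: the sparse signal is recovered
```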


  • 17

    Compressed Sensing Reconstruction

    I Compute the 2D Fourier transform of the image, multiply by the non-uniform mask to get randomly sampled K-space data y

    I Implement the projection onto convex sets (POCS) algorithm to solve (1)

    I Left: original image; Right: compressed sensing reconstruction
