Quantization of Compressed Sensing Measurements. Özgür Yılmaz, Department of Mathematics, University of British Columbia. Jun 11, 2010



Page 1: Quantization of Compressed Sensing Measurements

Quantization of Compressed Sensing Measurements

Özgür Yılmaz

Department of Mathematics, University of British Columbia

Jun 11, 2010

Page 2: Quantization of Compressed Sensing Measurements

Collaborators

Joint work with:

• Sinan Güntürk (Courant)

• Mark Lammers (UNC Wilmington)

• Alex Powell (Vanderbilt)

• Rayan Saab (UBC)

Page 3: Quantization of Compressed Sensing Measurements

Motivation: Digital Signal Processing

Inherently analog signals: Audio, images, seismic, etc.

Objective: Use digital technology to store and process analog signals – find efficient digital representations of analog signals.

How this is done - classical approach:

Signal f (analog) → (I) Sampling → {f(nT): n ∈ I} → (II) Quantization → {f_q(n): n ∈ I} → (III) Compression → {b_j: j ∈ I′}

Stages (I) and (II) make up A/D conversion (measurement and truncation); stage (III) is source coding (truncation and compression, or other processing).

Page 4: Quantization of Compressed Sensing Measurements

Compressed sensing (CS)

Motivation of CS: If the signal is sparse or compressible, can we combine the sampling and compression stages into one compressed sensing stage?

What do we have so far? Sparse signals can be recovered from few non-adaptive, linear measurements. Major dimensionality reduction.

Missing link: Devise efficient quantization schemes for “compressed measurements”.

• Crucial if we want to compress.

• Little is known...

• This is the subject of this talk.

Page 5: Quantization of Compressed Sensing Measurements

Compressed sensing

Notations.

• x ∈ R^N is k-sparse if x has at most k non-zero entries.

• Σ_k := {x ∈ R^N : x is k-sparse}.

• Measurement matrix: Φ, an m × N real matrix.

• Measurements: y = Φx (noisy: y = Φx + e).

• Dimensional setting: k < m < N.

Objective of CS. Suppose x ∈ Σ_k or can be well approximated from Σ_k. Given the (noisy) measurements y = Φx + e,

• recover x exactly (approximately),

• in a computationally efficient manner.

Page 6: Quantization of Compressed Sensing Measurements

Compressed sensing – recovery by ℓ_1 minimization

The following “robust recovery” result is crucial for applications of CS (and for our results in this talk).

Theorem [Candès-Romberg-Tao], also [Donoho]

Assume x ∈ Σ_k and Φ satisfies RIP(k, δ) for sufficiently small δ. Let y = Φx + e where ‖e‖_2 ≤ ε. Define the decoder (BPDN)

Δ_1^ε(y) := arg min ‖z‖_1 subject to ‖Φz − y‖_2 ≤ ε.

Then

x# = Δ_1^ε(y) =⇒ ‖x − x#‖_2 ≤ C_0 ε.

More generally, for any x ∈ R^N, ‖x − x#‖_2 ≤ C_1 ε + C_2 σ_k(x)_{ℓ_1}/√k.

Remarks.

1. Gaussian random matrices Φ with Φ_{i,j} ∼ N(0, 1/m) satisfy RIP of the desired order.

2. Potential sources for measurement error include quantization error.
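The decoder Δ_1^ε above requires a convex-optimization solver. As a rough illustration of sparse recovery from few non-adaptive Gaussian measurements, the numpy sketch below swaps in greedy orthogonal matching pursuit (OMP) for the ℓ_1/BPDN decoder of the theorem; the dimensions, seed, and the OMP substitution are illustrative choices, not from the talk.

```python
import numpy as np

def omp(Phi, y, max_iter, tol=1e-10):
    """Greedy orthogonal matching pursuit: a stand-in for the l1/BPDN
    decoder of the theorem (which needs a convex-optimization solver)."""
    m, N = Phi.shape
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(max_iter):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, m, k = 256, 128, 5                 # dimensional setting: k < m < N
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, N))   # Phi_ij ~ N(0, 1/m)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)   # k-sparse x
y = Phi @ x                           # noiseless measurements y = Phi x
x_hat = omp(Phi, y, max_iter=2 * k)   # a little slack beyond k iterations
print(np.linalg.norm(x - x_hat))      # recovery error (tiny in this noiseless setup)
```

For noiseless measurements with m well above k, OMP typically recovers the support exactly; the theorem's ℓ_1 decoder gives the stronger robust guarantee quoted above.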

Page 7: Quantization of Compressed Sensing Measurements

Quantization of compressed sensing measurements

It is clear that compressed sensing is very effective for dimension reduction. However, one of the initial goals was to compress.

Need efficient quantization strategies!

Setting. Suppose x ∈ Σ_k and Φ is a compressed sensing matrix. Let y = Φx be the measurement of x.

Problem. Given a (discrete) quantization alphabet A, e.g., A = dZ (we stick to this alphabet throughout this talk),

x ∈ R^N −Φ→ y ∈ R^m −Q→ q ∈ A^m −Δ_Q→ x# ∈ R^N

i.e., find a quantizer Q : R^m → A^m and a decoder Δ_Q : A^m → R^N such that

• ‖x − Δ_Q(q)‖ is small whenever x ∈ Σ_k, and

• Δ_Q is computationally tractable.


Page 10: Quantization of Compressed Sensing Measurements

Quantization of CS measurements – PCM

The most intuitive quantizer. Round each measurement y_j to the nearest element of A = dZ (Pulse Code Modulation, or PCM):

Q_PCM : y ↦ q_PCM.

Note that, setting x#_PCM := Δ_1^ε(q_PCM),

‖y − q_PCM‖_2 ≤ (d/2)√m  =⇒ (robust recovery)  ‖x − x#_PCM‖_2 ≲ d√m.

This is counter-intuitive: more measurements =⇒ more error?

Not really! This is an artifact of our RIP-based choice of normalization for the measurement matrix Φ, i.e., Φ_{i,j} ∼ N(0, 1/m). The dynamic range of the measurements varies with m, which conflicts with a fixed A.

Fixed if we work with Φ such that Φ_{ij} ∼ N(0, 1). Then

‖y − q_PCM‖_2 ≤ (d/2)√m  =⇒ (robust recovery)  ‖x − x#_PCM‖_2 ≲ d.
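A minimal numpy sketch of this quantizer for A = dZ, with a numerical check of the entrywise error and of the bound ‖y − q_PCM‖_2 ≤ (d/2)√m used above; the values of m and d are illustrative.

```python
import numpy as np

def q_pcm(y, d):
    """PCM: round each measurement to the nearest point of A = dZ."""
    return d * np.round(y / d)

rng = np.random.default_rng(1)
m, d = 100, 0.1
y = rng.normal(size=m)               # measurements
q = q_pcm(y, d)
# each entry is off by at most d/2, hence the l2 bound (d/2) * sqrt(m)
print(np.max(np.abs(y - q)))
print(np.linalg.norm(y - q) <= (d / 2) * np.sqrt(m))
```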


Page 14: Quantization of Compressed Sensing Measurements

Quantization of CS measurements – PCM

• Still, the accuracy of reconstruction is given by ‖x − x#_PCM‖_2 ≲ d.

• No improvement if we increase the number of measurements m!

• Of course, this is just an upper bound, but...

Why should we expect the approximation to improve? If (once) the support of x is known (recovered), then we have effectively oversampled x (m > k measurements for a k-dimensional signal)!

• In traditional oversampled quantization theory, we can do significantly better!

Page 15: Quantization of CS measurements – PCM

[Figure: performance of various quantization/decoding schemes, k = 10. Log-log plot of the mean ℓ_2-norm of the error against m, comparing PCM with ℓ_1 decoding against Σ∆ schemes of order 1 and 2; the Σ∆ error curves track c λ^{−r} for r = 1, 2.]

Page 20: Quantization of Compressed Sensing Measurements

Compressed sensing: Undersampled or oversampled?

Consider the measurements y = Φx of a sparse x whose non-zero entries sit on the support T = {3, 4, 6}: only the columns of Φ indexed by T contribute to y.

If (once) the support T = {3, 4, 6} is known (recovered), the system collapses to

y = Φ_T x_T,

where Φ_T is the m × k submatrix of Φ formed by the columns indexed by T and x_T ∈ R^k collects the non-zero entries of x.

Observe:

• The rows of Φ_T form a frame for R^k with m > k vectors.

• The measurements y_j are the associated frame coefficients.

• This is a redundant frame quantization problem.
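This frame view can be checked numerically. In the sketch below (dimensions and support are illustrative; the support is a 0-based version of the slide's T = {3, 4, 6}), the rows of Φ_T form a frame for R^k exactly when Φ_T has full column rank.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, k = 8, 5, 3
Phi = rng.normal(size=(m, N))        # Gaussian measurement matrix
T = [2, 3, 5]                        # 0-based version of the support T = {3, 4, 6}
Phi_T = Phi[:, T]                    # m x k submatrix seen once T is known

# rows of Phi_T form a frame for R^k  <=>  Phi_T has full column rank
s = np.linalg.svd(Phi_T, compute_uv=False)
A, B = s[-1] ** 2, s[0] ** 2         # the frame bounds of this row frame
print(np.linalg.matrix_rank(Phi_T) == k, 0 < A <= B)
```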

Page 21: Quantization of Compressed Sensing Measurements

Perspective and limitations of PCM

• For processing purposes, one should keep in mind that the CS measurements are a redundant encoding of some low-dimensional signal.

• If a quantization scheme is not effective for quantizing redundant frame expansions, it will not be effective in the CS setting.

• The following theorem gives a lower bound on the PCM approximation error, even if the support of the sparse x is known and even if the reconstruction is done optimally.

Theorem (Goyal-Vetterli-Thao)

Let E be an m × k real matrix, and let Δ_opt be an optimal decoder. Then

[𝔼 ‖x − Δ_opt(Q_PCM(Ex))‖_2^2]^{1/2} ≳_k λ^{−1} d,

where the “oversampling rate” λ := m/k and the expectation is with respect to a probability measure on a bounded K ⊂ R^k that is, e.g., absolutely continuous.

Page 22: Quantization of Compressed Sensing Measurements

General frame quantization

Finite frames. A collection E = {e_j}_{j=1}^m in R^k is a frame for R^k if

A‖x‖_2^2 ≤ Σ_{j=1}^m |⟨x, e_j⟩|^2 ≤ B‖x‖_2^2, for all x ∈ R^k.

Identify E with the m × k matrix E whose rows are e_1^T, ..., e_m^T. Then,

• E is a frame for R^k if and only if E is full rank.

• The frame bounds: A = σ_min^2(E) and B = σ_max^2(E).

• The entries of y = Ex are the frame coefficients of x ∈ R^k.

• The columns of any left inverse F of E form a dual frame of E. In this case, we can reconstruct x from y via x = Fy.

• The Moore-Penrose pseudo-inverse of E, given by

F_can = E† = (E*E)^{−1}E*,

is the canonical dual of E.
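These facts are easy to verify numerically. The numpy sketch below (with an arbitrary small Gaussian frame, not from the talk) builds the canonical dual via the pseudo-inverse, reconstructs x from its frame coefficients, and checks the frame inequality.

```python
import numpy as np

rng = np.random.default_rng(3)
m, k = 7, 3
E = rng.normal(size=(m, k))              # rows = m frame vectors for R^k

# canonical dual: Moore-Penrose pseudo-inverse E^dagger = (E* E)^{-1} E*
F_can = np.linalg.pinv(E)                # k x m left inverse of E

x = rng.normal(size=k)
y = E @ x                                # frame coefficients of x
x_rec = F_can @ y                        # reconstruct: x = F y
print(np.allclose(x, x_rec))

# frame bounds from the extreme singular values of E
s = np.linalg.svd(E, compute_uv=False)
A, B = s[-1] ** 2, s[0] ** 2
energy = np.sum((E @ x) ** 2)            # sum_j |<x, e_j>|^2
print(A * (x @ x) <= energy <= B * (x @ x))
```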

Page 23: Quantization of Compressed Sensing Measurements

General frame quantization

Problem. Let E be a frame for R^k. We want to quantize the frame coefficients y = Ex of x ∈ R^k. As before, let A = dZ, and let Q be a quantizer, i.e.,

Q : y ∈ R^m ↦ q ∈ A^m.

Pick any left inverse (dual) F of E and set x̂ = Fq. Then

approximation error = x − x̂ = F(y − q).

Two criteria for quantizer design

• “Noise shaping” in this context: choose q such that y − q is close to Ker(F).

• Which dual F of E?

Page 24: Quantization of Compressed Sensing Measurements

Σ∆ quantization for finite frames

Fix the quantization method as r-th-order Σ∆ quantization:

(Δ^r u)_j = y_j − q_j.

Here q_j can be chosen using the greedy rule, which minimizes |u_j| given u_{j−1}, ..., u_{j−r} and y_j, provided A is sufficiently large. In this case,

y − q_Σ∆ = D^r u, with ‖u‖_∞ ≤ d/2,

where D is the m × m first-order difference matrix

⎡  1   0   0   0  ···   0 ⎤
⎢ −1   1   0   0  ···   0 ⎥
⎢  0  −1   1   0  ···   0 ⎥
⎢  ⋮        ⋱   ⋱       ⋮ ⎥
⎢  0   0  ···  −1   1   0 ⎥
⎣  0   0  ···   0  −1   1 ⎦

Note also that ‖y − q_Σ∆‖_∞ ≲_r d (this will be important later).
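The first-order (r = 1) greedy scheme fits in a few lines of numpy. The loop below runs the recursion u_j = u_{j−1} + y_j − q_j with q_j the nearest point of dZ, then checks the stability bound ‖u‖_∞ ≤ d/2 and the factorization y − q = Du (the input and step size are illustrative).

```python
import numpy as np

def sigma_delta_order1(y, d):
    """First-order greedy Sigma-Delta: u_j = u_{j-1} + y_j - q_j,
    with q_j in dZ chosen to minimize |u_j|."""
    q, u = np.empty_like(y), np.empty_like(y)
    u_prev = 0.0
    for j, yj in enumerate(y):
        q[j] = d * np.round((u_prev + yj) / d)  # nearest alphabet point
        u[j] = u_prev + yj - q[j]
        u_prev = u[j]
    return q, u

rng = np.random.default_rng(4)
m, d = 50, 0.5
y = rng.uniform(-1, 1, size=m)
q, u = sigma_delta_order1(y, d)
print(np.max(np.abs(u)) <= d / 2)            # stability of the state sequence
Du = np.concatenate(([u[0]], np.diff(u)))    # (D u)_j with u_0 = 0
print(np.allclose(y - q, Du))                # y - q = D u
```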

Page 25: Quantization of Compressed Sensing Measurements

Σ∆ quantization for finite frames – approximation error

From the last slide: y − q_Σ∆ = D^r u with ‖u‖_∞ ≲ d.

Reconstruct with some dual F of E: x_Σ∆ = F q_Σ∆. Then

x − x_Σ∆ = F(y − q_Σ∆) = F D^r u.

One way to bound the approximation error:

‖x − x_Σ∆‖ ≤ ‖u‖_∞ Σ_j ‖(F D^r)_j‖  ← frame variation

• Use Σ∆ quantization if E admits a dual frame F whose columns vary smoothly. Accuracy: O(λ^{−r}) when r = 1, 2, where λ = m/k. (Benedetto-Powell-Y)

• Even Σ∆ schemes of order 1 and 2 are significantly superior to PCM. (BPY; later by Bodmann-Paulsen, Boufounos-Oppenheim, ...)

• In these early results, the focus is on canonical-dual reconstructions.

Page 30: Quantization of Compressed Sensing Measurements

Σ∆ quantization for finite frames – Sobolev duals

Problem 1. Extension to higher-order schemes is non-trivial! (Negative result for, e.g., harmonic frames together with their canonical duals for schemes of order 3 or higher, by Lammers-Powell-Y.)

Remedy. Construct suitable alternative duals!

How? If we work with the ℓ_2 norm:

‖x − x_Σ∆‖_2 ≤ ‖F D^r‖_op ‖u‖_2.

Seek a dual F that minimizes ‖F D^r‖_op.

The solution is the r-th-order Sobolev dual of E, introduced by Blum-Lammers-Powell-Y, given explicitly by

F_sob,r = (D^{−r}E)† D^{−r}.

Theorem [BLPY].

If E is sufficiently smooth (e.g., sampled from a piecewise-C^1 frame path), an r-th-order Σ∆ scheme achieves accuracy O(λ^{−r}) if the reconstruction is done using F_sob,r.
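Since F_sob,r has the closed form above, it is simple to compute. The sketch below (hypothetical Gaussian frame, small illustrative sizes) checks that F_sob,r is a left inverse of E and that it matches or beats the canonical dual on the quantity ‖F D^r‖_op it is designed to minimize.

```python
import numpy as np

def sobolev_dual(E, r):
    """r-th order Sobolev dual F_sob,r = (D^{-r} E)^dagger D^{-r},
    where D is the m x m first-order difference matrix."""
    m = E.shape[0]
    D = np.eye(m) - np.eye(m, k=-1)          # 1 on the diagonal, -1 below it
    D_inv_r = np.linalg.matrix_power(np.linalg.inv(D), r)
    return np.linalg.pinv(D_inv_r @ E) @ D_inv_r

rng = np.random.default_rng(5)
m, k, r = 40, 4, 2
E = rng.normal(size=(m, k))                  # Gaussian frame, E_ij ~ N(0, 1)
F_sob = sobolev_dual(E, r)
print(np.allclose(F_sob @ E, np.eye(k)))     # a left inverse, hence a dual of E

D = np.eye(m) - np.eye(m, k=-1)
Dr = np.linalg.matrix_power(D, r)
opnorm = lambda M: np.linalg.norm(M, 2)      # operator (spectral) norm
print(opnorm(F_sob @ Dr) <= opnorm(np.linalg.pinv(E) @ Dr) + 1e-12)
```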

Page 34: Quantization of Compressed Sensing Measurements

Σ∆ quantization for random frames – Sobolev duals

Problem 2. What if the original frame E is not smooth, like our Φ_T, which is a “Gaussian random frame”?

This is our main problem. We will show that Sobolev duals can still be used for Σ∆ quantization of Gaussian random frames.

[Figure: scatter plots of a Gaussian frame in R^2 together with its canonical dual and its Sobolev dual of order r = 1.]

Page 39: Quantization of Compressed Sensing Measurements

Σ∆ quantization for random frames – Sobolev duals

A generic error bound.

Let E be an m × k full-rank matrix. If y = Ex is quantized via a (stable) r-th-order Σ∆ scheme, and x_Σ∆ = F_sob,r q, then

‖x − x_Σ∆‖_2 ≲_r d √m / σ_min(D^{−r}E).

In other words, we need to control σ_min(D^{−r}E) as a function of the oversampling rate λ = m/k.

Rest of the talk: Focus on the case when E is an m × k Gaussian random matrix, i.e., E_ij ∼ N(0, 1).

How does σ_min(D^{−r}E) behave when E is a Gaussian random matrix, depending on m, k, and r?

Page 41: Quantization of Compressed Sensing Measurements

Least (non-zero) singular value of D^{−r}E

Some important facts:

• For an m × k Gaussian E (in fact, also for sub-Gaussian), it is known [Rudelson-Vershynin] that, with high probability,

σ_min(E) ≈ √m − √(k − 1).

• The singular values of D are explicitly known (related to DCT-VIII, e.g., Strang).

• The eigenvalues of D^{*r}D^r can be estimated from the eigenvalues of (D*D)^r using Weyl's inequality. (Gunturk-Lammers-Powell-Saab-Y; GLPSY from now on.)

• In particular, σ_min(D^{−r}) ≈_r 1.

Page 44: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Σ∆ quantization for random frames – Sobolev duals

Try the product bound:

σmin(D−rE) ≥ σmin(D−r) σmin(E).

Using the observations above, this gives:

‖x − xΣ∆‖2 ≲r d.

This is no different from what we had for PCM-quantized measurements using ℓ1-minimization decoding. Can’t we do better?

Theorem I [GLPSY]

Let Eij ∼ N(0, 1). For any α ∈ (0, 1), if λ := m/k ≳α,r (log m)^{1/(1−α)}, then

σmin(D−rE) ≳r λ^{α(r−1/2)} √m

with probability at least 1 − exp(−c′mλ^{−α}). Hence,

‖x − xΣ∆‖2 ≲r d / λ^{α(r−1/2)}.


Least singular value of D−rE – numerical experiment

[Figure: log–log plot of √m / σmin(D−rE) versus λ = m/k for r = 0, 1, 2, 3, 4, with k = 50, 1 ≤ λ ≤ 25, Eij ∼ N(0, 1). The curves decay like λ^{−α(r−1/2)}.]
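A scaled-down version of this experiment is easy to run: two oversampling rates, a few random draws per point, and a check of the decay trend (not the exact exponent). The parameters below are illustrative, not those of the figure.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 50

def quantity(lam, r, trials=5):
    """Average sqrt(m) / sigma_min(D^{-r} E) over random Gaussian draws."""
    m = lam * k
    D = np.eye(m) - np.eye(m, k=-1)
    Dri = np.linalg.matrix_power(np.linalg.inv(D), r)
    vals = []
    for _ in range(trials):
        E = rng.standard_normal((m, k))
        smin = np.linalg.svd(Dri @ E, compute_uv=False)[-1]
        vals.append(np.sqrt(m) / smin)
    return np.mean(vals)

# Theorem I predicts decay ~ lambda^{-alpha(r - 1/2)} for r >= 1:
# the error factor should shrink as the oversampling rate grows
for r in (1, 2, 3):
    lo, hi = quantity(2, r), quantity(20, r)
    assert hi < lo
```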


Proof of Theorem I

Main ingredients of the proof:

I Weyl’s inequality for the singular value estimates, in particular to estimate the singular values of D−r.

I Unitary invariance of the i.i.d. Gaussian measure: reduces the problem to estimating σmin(ΣE), where Σ is diagonal with entries Σii estimated as described above.

I Concentration of measure for ΣE: estimate

P{γ‖x‖2 ≤ ‖ΣEx‖2 ≤ θ‖x‖2}.

I Pass to the singular values of ΣE by using a standard net argument.


Implications for compressed sensing quantization

Goal: Use Σ∆ quantization for compressed sensing measurements. More precisely, with k < m < N, let:

I Φ: an m × N Gaussian CS measurement matrix.

I x ∈ Σk: a k-sparse vector in RN, supported on T.

I y = Φx: CS measurement of x.

I qΣ∆: Σ∆-quantized (r th-order) measurements.

Problem 1. Can we estimate the support T of x from qΣ∆?

I If yes, then recall: y = Φx = ΦT xT, where ΦT is an m × k Gaussian random matrix.

I Thus Theorem I applies for each x with high probability.

Problem 2. Can we obtain a uniform guarantee, i.e., have Theorem I hold for each submatrix ΦT (with high probability)?


Answers

Answer 1. Yes! This follows from the robust recovery result for the ℓ1 decoder.

Proposition.

If |xj| ≳r d for all j ∈ T, then the largest k entries of x# = ∆1^ε(qΣ∆) are supported on T.

Answer 2. Yes! This follows using a union bound.

Theorem II [GLPSY]

Let Φ be a Gaussian random matrix. If λ ≳α,r (log N)^{1/(1−α)}, then Theorem I holds for ΦT, for all T with |T| ≤ k, w.h.p. on the draw of Φ.


Σ∆ quantization for compressed sensing

In light of all these results, we propose Σ∆ quantization and a two-stage recovery procedure for compressed sensing:

1. Coarse recovery: ℓ1-minimization (or any other robust recovery procedure) applied to qΣ∆ yields an initial, “coarse” approximation x#(qΣ∆) of x, and in particular, the exact (or otherwise approximate) support T of x.

2. Fine recovery: the Sobolev dual of the frame ΦT applied to qΣ∆ yields a finer, final approximation xΣ∆ of x.
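The two stages can be sketched numerically. To keep the sketch short, the coarse stage is replaced by an oracle for T (the Proposition guarantees the ℓ1 decoder recovers T under the size condition); the fine stage then compares the Sobolev dual of ΦT against the canonical dual. A first-order greedy Σ∆ scheme is assumed, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, k, m, d = 256, 5, 100, 1e-2   # ambient dim, sparsity, measurements, step

def sigma_delta1(y, d):
    """First-order greedy Sigma-Delta: returns q with D u = y - q, |u_n| <= d/2."""
    q = np.empty_like(y)
    s = 0.0
    for n, yn in enumerate(y):
        q[n] = d * np.round((s + yn) / d)
        s += yn - q[n]
    return q

err_can, err_sob = [], []
for _ in range(5):
    Phi = rng.standard_normal((m, N))
    T = rng.choice(N, size=k, replace=False)
    x = np.zeros(N)
    x[T] = rng.uniform(0.5, 1.0, k) * rng.choice([-1.0, 1.0], k)
    q = sigma_delta1(Phi @ x, d)

    # Coarse stage: the l1 decoder would recover the support T here
    # (Proposition); we take T as given and go straight to the fine stage.
    PhiT = Phi[:, T]

    # Fine stage: canonical dual vs. the Sobolev dual of Phi_T
    D = np.eye(m) - np.eye(m, k=-1)
    Dinv = np.linalg.inv(D)
    x_can = np.linalg.pinv(PhiT) @ q
    x_sob = np.linalg.pinv(Dinv @ PhiT) @ (Dinv @ q)
    err_can.append(np.linalg.norm(x[T] - x_can))
    err_sob.append(np.linalg.norm(x[T] - x_sob))

# The Sobolev dual exploits the noise shaping and should win on average
assert np.mean(err_sob) < np.mean(err_can)
```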


Combining all these:

Theorem III [GLPSY]

With high probability on the initial draw of Φ, and uniformly for all k-sparse x that satisfy the above size condition, we have

‖x − xΣ∆‖2 ≲r d / (m/k)^{α(r−1/2)}

if m ≳α,r k (log N)^{1/(1−α)}.


Decoder Performance on Sparse Signals

[Figure: two panels (k = 10 and k = 20) plotting the mean ℓ2-norm of the reconstruction error versus λ for various quantization/decoding schemes: PCM → ℓ1; PCM → ℓ1 → Fcan; Σ∆ (r = 1, 2, 3) → ℓ1 → Fsob,r; together with reference curves cλ^{−r}k^{1/2} for r = 0.5, 1, 2.]

Page 59: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Rate-distortion issues

Implications regarding the dependence of the approximation error on the bit budget?

Need to use finite-alphabet quantizers.

I Signals of interest in K := {x ∈ ΣNk : A ≤ |xj| ≤ ρ, ∀j ∈ T}, with A ≪ ρ.

I We use a Br-bit uniform quantizer with the largest allowable step size, say δr, for our support recovery result to hold.

I Above, choose Br = Br(ρ) so that the associated Σ∆ quantizer does not overload.

The approximation error (distortion) DΣ∆ after the fine recovery stage via Sobolev duals:

DΣ∆ ≲r λ^{−α(r−1/2)} δr ≈ λ^{−α(r−1/2)} A / 2^{r+1/2}.

How about PCM? The same step size δr along with the ℓ1 decoder requires roughly the same number of bits Br, but provides only the distortion bound

DPCM ≲ δr ≈ A / 2^{r+1/2}.
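Plugging sample numbers into the two bounds shows the size of the gap; the values below are purely illustrative, not from the talk.

```python
# D_PCM / D_SigmaDelta ~ lambda^{alpha (r - 1/2)} up to constants:
# for a hypothetical oversampling lambda = 64, alpha = 0.9 and a
# second-order scheme (r = 2), the Sigma-Delta bound is smaller by
gain = 64.0 ** (0.9 * (2 - 0.5))
print(gain)          # a few hundred: over two orders of magnitude
assert gain > 100
```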

Page 60: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Rate-distortion issues

Implications regarding the dependence of the approximation error on thebit-budget?

Need to use finite alphabet quantizers.

I Signals of interest in K := {x ∈ ΣNk : A ≤ |xj | ≤ ρ, ∀j ∈ T}, with A� ρ.

I We use a Br -bit uniform quantizer with the largest allowable step-size, sayδr , for our support recovery result to hold.

I Above, choose Br = Br (ρ) so that the associated Σ∆ quantizer does notoverload

The approximation error (distortion) DΣ∆ after the fine recovery stage via Sobolevduals:

DΣ∆ .r λ−α(r−1/2)δr ≈

λ−α(r−1/2)A

2r+1/2.

How about PCM? Same step size δr along with `1 decoder requires roughly thesame number of bits Br , but provides the distortion bound

DPCM . δr ≈A

2r+1/2.

Page 61: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Rate-distortion issues

Implications regarding the dependence of the approximation error on thebit-budget?

Need to use finite alphabet quantizers.

I Signals of interest in K := {x ∈ ΣNk : A ≤ |xj | ≤ ρ, ∀j ∈ T}, with A� ρ.

I We use a Br -bit uniform quantizer with the largest allowable step-size, sayδr , for our support recovery result to hold.

I Above, choose Br = Br (ρ) so that the associated Σ∆ quantizer does notoverload

The approximation error (distortion) DΣ∆ after the fine recovery stage via Sobolevduals:

DΣ∆ .r λ−α(r−1/2)δr ≈

λ−α(r−1/2)A

2r+1/2.

How about PCM? Same step size δr along with `1 decoder requires roughly thesame number of bits Br , but provides the distortion bound

DPCM . δr ≈A

2r+1/2.

Page 62: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Σ∆ Quantization for Compressed Sensing

Pros

1. More accurate than any known quantization scheme in this setting (even when sophisticated recovery algorithms are employed).

2. Modular: if the fine recovery stage is not available or impractical, then the standard (coarse) recovery procedure is applicable as is.

3. Progressive: if new measurements arrive (in any given order), noise shaping can be continued on these measurements as long as the state of the system (r real values for an r th-order scheme) has been stored.

4. Universal: it uses no information about the measurement matrix or the signal.

(Potential) Cons

More work for reconstruction. Out-of-band noise sensitivity to be sorted out, as well as the extension to compressible signals (work in progress).

Page 63: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Σ∆ Quantization for Compressed Sensing

Pros

1. More accurate than any known quantization scheme in this setting (evenwhen sophisticated recovery algorithms are employed).

2. Modular: If the fine recovery stage is not available or impractical, thenthe standard (coarse) recovery procedure is applicable as is.

3. Progressive: If new measurements arrive (in any given order), noiseshaping can be continued on these measurements as long as the state ofthe system (r real values for an r th order scheme) has been stored.

4. Universal: It uses no information about the measurement matrix or thesignal.

(Potential) ConsMore work for reconstruction. Out-of-band noise sensitivity to be sortedout as well as extension to compressible signals (work in progress).

Page 64: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Σ∆ Quantization for Compressed Sensing

Pros

1. More accurate than any known quantization scheme in this setting (evenwhen sophisticated recovery algorithms are employed).

2. Modular: If the fine recovery stage is not available or impractical, thenthe standard (coarse) recovery procedure is applicable as is.

3. Progressive: If new measurements arrive (in any given order), noiseshaping can be continued on these measurements as long as the state ofthe system (r real values for an r th order scheme) has been stored.

4. Universal: It uses no information about the measurement matrix or thesignal.

(Potential) ConsMore work for reconstruction. Out-of-band noise sensitivity to be sortedout as well as extension to compressible signals (work in progress).

Page 65: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Σ∆ Quantization for Compressed Sensing

Pros

1. More accurate than any known quantization scheme in this setting (evenwhen sophisticated recovery algorithms are employed).

2. Modular: If the fine recovery stage is not available or impractical, thenthe standard (coarse) recovery procedure is applicable as is.

3. Progressive: If new measurements arrive (in any given order), noiseshaping can be continued on these measurements as long as the state ofthe system (r real values for an r th order scheme) has been stored.

4. Universal: It uses no information about the measurement matrix or thesignal.

(Potential) ConsMore work for reconstruction. Out-of-band noise sensitivity to be sortedout as well as extension to compressible signals (work in progress).

Page 66: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Σ∆ Quantization for Compressed Sensing

Pros

1. More accurate than any known quantization scheme in this setting (evenwhen sophisticated recovery algorithms are employed).

2. Modular: If the fine recovery stage is not available or impractical, thenthe standard (coarse) recovery procedure is applicable as is.

3. Progressive: If new measurements arrive (in any given order), noiseshaping can be continued on these measurements as long as the state ofthe system (r real values for an r th order scheme) has been stored.

4. Universal: It uses no information about the measurement matrix or thesignal.

(Potential) ConsMore work for reconstruction. Out-of-band noise sensitivity to be sortedout as well as extension to compressible signals (work in progress).

Page 67: Quantization of Compressed Sensing Measurements...Quantization of compressed sensing measurements It is clear that compressed sensing is very e ective fordimension reduction. However,

Σ∆ Quantization for Compressed Sensing

Pros

1. More accurate than any known quantization scheme in this setting (evenwhen sophisticated recovery algorithms are employed).

2. Modular: If the fine recovery stage is not available or impractical, thenthe standard (coarse) recovery procedure is applicable as is.

3. Progressive: If new measurements arrive (in any given order), noiseshaping can be continued on these measurements as long as the state ofthe system (r real values for an r th order scheme) has been stored.

4. Universal: It uses no information about the measurement matrix or thesignal.

(Potential) ConsMore work for reconstruction. Out-of-band noise sensitivity to be sortedout as well as extension to compressible signals (work in progress).