Compressed Sensing and Generative Models

Ashish Bora Ajil Jalal Eric Price Alex Dimakis

UT Austin

Talk Outline

1 Compressed sensing

2 Using generative models for compressed sensing

3 Learning generative models from noisy data

Compressed Sensing

Want to recover a signal (e.g., an image) from noisy measurements.

Applications: medical imaging, astronomy, single-pixel cameras, oil exploration, genetic testing, streaming algorithms.

Linear measurements: see y = Ax, for A ∈ R^{m×n}.

How many measurements m to learn the signal?
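
A minimal numpy sketch of this measurement model (the dimensions and noise level are illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 100                      # signal dimension, number of measurements

x = rng.standard_normal(n)            # the unknown signal
A = rng.standard_normal((m, n))       # measurement matrix A in R^{m x n}
eta = 0.01 * rng.standard_normal(m)   # small measurement noise

y = A @ x + eta                       # we only get to see y (and A)
print(y.shape)                        # (100,)
```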

Compressed Sensing

Given linear measurements y = Ax, for A ∈ R^{m×n}.

How many measurements m to learn the signal x?

  - Naively: m ≥ n, or else underdetermined: multiple x possible.
  - But most x aren't plausible.

[Figure: image file sizes, 5 MB vs. 36 MB.]

  - This is why compression is possible.

Ideal answer:

  m ≈ (information in image) / (new info per measurement)

Compressed Sensing

Given linear measurements y = Ax, for A ∈ R^{m×n}.

How many measurements m to learn the signal x?

  m ≈ (information in image) / (new info per measurement)

Image "compressible" ⟹ information in image is small.

Measurements "incoherent" ⟹ most info is new.

Compressed Sensing

Want to estimate x ∈ R^n from m ≪ n linear measurements.

Suggestion: the "most compressible" image that fits the measurements.

How should we formalize that an image is "compressible"?

Short JPEG compression

  - Intractable to compute.

Standard compressed sensing: sparsity in some basis

Approximate sparsity is common

[Figure: coefficient decay (log-log plot) of the renormalized magnitude of the i-th largest coordinate. Music frequencies decay as i^{-0.56}, Wikipedia inlinks as i^{-0.72}, image wavelet coefficients as i^{-0.87}.]
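
The decay exponents above are the slopes of the sorted, renormalized coefficient magnitudes on a log-log plot. Below is a minimal numpy sketch of that fit on synthetic coefficients with a planted i^{-0.87} decay; the function and data are illustrative, not from the talk.

```python
import numpy as np

def decay_exponent(coeffs):
    """Fit alpha in |c|_(i) ~ i^(-alpha) from the sorted magnitudes."""
    mags = np.sort(np.abs(coeffs))[::-1]          # i-th largest magnitude
    mags = mags / mags[0]                         # renormalize: largest = 1
    i = np.arange(1, len(mags) + 1)
    keep = mags > 0                               # log-log fit needs positives
    slope, _ = np.polyfit(np.log(i[keep]), np.log(mags[keep]), 1)
    return -slope

# Synthetic coefficients with planted power-law decay i^{-0.87}.
coeffs = np.arange(1, 10**5 + 1) ** -0.87
print(f"fitted decay exponent: {decay_exponent(coeffs):.2f}")  # 0.87
```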

Sample complexity of sparse recovery

[Figure: the coefficient-decay plot from the previous slide.]

  m ≈ (information in image) / (new info per measurement)

If 99% of the energy is in the largest k coordinates...

  - Information in image is ≈ log (n choose k) ≈ k log n.
  - New info per measurement is hopefully ≈ log 100 = Θ(1).
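
As a quick numeric check of the information heuristic (the values of n and k here are illustrative, not from the talk), log C(n, k) can be computed via log-gamma:

```python
import math

def info_bits(n, k):
    """log2 C(n, k): bits needed to name a k-subset of n coordinates."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

n, k = 10**6, 10**3
print(f"log2 C(n, k)  = {info_bits(n, k):,.0f} bits")      # ~11,400
print(f"k*log2(n)     = {k * math.log2(n):,.0f} bits")     # ~19,900
print(f"k*log2(n/k)   = {k * math.log2(n / k):,.0f} bits") # ~9,966
```

The true count sits between k log(n/k) and k log n, matching the Θ(k log(n/k)) bound quoted on the next slide.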

Compressed Sensing Formalism ("compressible" = "sparse")

Want to estimate x from y = Ax + η, for A ∈ R^{m×n}.

  - For this talk: ignore η, so y = Ax.

Goal: x̂ with

  ‖x̂ − x‖₂ ≤ O(1) · min_{k-sparse x'} ‖x − x'‖₂   (1)

with high probability.

  - Reconstruction accuracy proportional to model accuracy.

Theorem [Candès-Romberg-Tao 2006]

  - m = Θ(k log(n/k)) suffices for (1).
  - Such an x̂ can be found efficiently with, e.g., the LASSO.
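
A minimal sketch of the recovery pipeline the theorem describes, assuming numpy and scikit-learn; the dimensions and the penalty alpha are illustrative choices, not from the talk.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k = 1000, 10
m = 5 * k * int(np.log(n / k))                 # m = Theta(k log(n/k)) measurements

x = np.zeros(n)                                # a k-sparse signal
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurements
y = A @ x                                      # noiseless: y = Ax

# l1-regularized least squares stands in for basis pursuit.
x_hat = Lasso(alpha=1e-4, max_iter=50_000).fit(A, y).coef_
print(f"m = {m}, relative error = "
      f"{np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3f}")
```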

Lower bound: k = 1

Hard case: x is a random e_i plus Gaussian noise w with ‖w‖₂ ≈ 1.

Robust recovery must locate i.

Observations: ⟨v, x⟩ = v_i + ⟨v, w⟩ = v_i + (‖v‖₂/√n)·z, for z ∼ N(0, 1).

1-sparse lower bound [P-Woodruff '11]

Observe ⟨v, x⟩ = v_i + (‖v‖₂/√n)·z, where z ∼ N(0, Θ(1)).

[Figure: two panels showing the coordinates v₁, v₂, v₃ of the measurement vector.]

Shannon 1948: the AWGN channel capacity gives

  I(i; ⟨v, x⟩) ≤ (1/2) · log(1 + SNR),

where SNR denotes the "signal-to-noise ratio,"

  SNR = E[signal²] / E[noise²] ≍ E[v_i²] / (‖v‖₂²/n) = 1.

  (info per measurement) = O(1)
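
A quick Monte Carlo check of the SNR = 1 computation (a numpy sketch; the vector v and the sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10**4, 10**6
v = rng.standard_normal(n)                    # any fixed measurement vector

i = rng.integers(n, size=trials)              # uniform signal location e_i
signal = v[i]                                 # contributes v_i to <v, x>
noise = np.linalg.norm(v) / np.sqrt(n) * rng.standard_normal(trials)

snr = np.mean(signal**2) / np.mean(noise**2)
print(f"empirical SNR  = {snr:.3f}")                          # ~1
print(f"capacity bound = {0.5 * np.log2(1 + snr):.3f} bits")  # ~0.5 = O(1)
```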

Lower bound [P-Woodruff '11]

  m ≈ (information in image) / (new info per measurement)

  (info per measurement) = O(1)

k = 1: (information in image) = log n  ⟹  m = Ω(log n).

General k: m = Ω(log (n choose k)) = Ω(k log(n/k)).

Talk Outline

1 Compressed sensing

2 Using generative models for compressed sensing

3 Learning generative models from noisy data

Alternatives to sparsity?

  m ≈ (information in image) / (new info per measurement)

MRI images are sparse in the wavelet basis.

Worldwide, 100 million MRIs taken per year.

Want a data-driven model.

  - Better structural understanding should give fewer measurements.

Best way to model images in 2019?

  - Deep convolutional neural networks.
  - In particular: generative models.

Generative Models

[Diagram: random noise z ∈ R^k → generator → image ∈ R^n.]

Training Generative Models

[Diagram: the same pipeline, random noise z ∈ R^k → generator → image ∈ R^n.]

Generative Models

Want to model a distribution D of images.

Function G: R^k → R^n.

When z ∼ N(0, I_k), then ideally G(z) ∼ D.

Generative Adversarial Networks (GANs) [Goodfellow et al. 2014]:

[Figure: GAN samples. Faces (Karras et al., 2018); astronomy (Schawinski et al., 2017); particle physics (Paganini et al., 2017).]

Variational Auto-Encoders (VAEs) [Kingma & Welling 2013].

Suggestion for compressed sensing:

  Replace "x is k-sparse" by "x is in the range of G: R^k → R^n".
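
A minimal PyTorch sketch of the G: R^k → R^n abstraction. The network below has random, untrained weights; in the talk's setting G would be a trained GAN generator or VAE decoder, so everything here is an illustrative stand-in.

```python
import torch

def make_generator(k: int, n: int, d: int) -> torch.nn.Module:
    """A d-layer ReLU network G: R^k -> R^n (random weights, untrained)."""
    layers, width = [], k
    for _ in range(d - 1):
        layers += [torch.nn.Linear(width, 2 * k), torch.nn.ReLU()]
        width = 2 * k
    layers.append(torch.nn.Linear(width, n))    # linear output layer
    return torch.nn.Sequential(*layers)

torch.manual_seed(0)
k, n, d = 20, 784, 3
G = make_generator(k, n, d)
z = torch.randn(k)       # z ~ N(0, I_k)
x = G(z)                 # a point in range(G), a subset of R^n
print(x.shape)           # torch.Size([784])
```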

Our Results ("compressible" = "near range(G)")

Want to estimate x from y = Ax, for A ∈ R^{m×n}.

Goal: x̂ with

  ‖x̂ − x‖₂ ≤ O(1) · min_{x' = G(z'), ‖z'‖₂ ≤ r} ‖x − x'‖₂ + δ   (2)

Main Theorem I: m = O(kd log n) suffices for (2).

  - G is a d-layer ReLU-based neural network.
  - When A is a random Gaussian matrix.

Main Theorem II:

  - For any L-Lipschitz G, m = O(k log(rL/δ)) suffices.
  - Morally the same O(kd log n) bound.

Our Results (II) ("compressible" = "near range(G)")

Want to estimate x from y = Ax, for A ∈ R^{m×n}.

Goal: x̂ with

  ‖x̂ − x‖₂ ≤ O(1) · min_{x' ∈ range(G)} ‖x − x'‖₂   (3)

m = O(kd log n) suffices for d-layer G.

  - Compared to O(k log n) for sparsity-based methods.
  - k here can be much smaller.

Find x̂ = G(ẑ) by gradient descent on ‖y − AG(z)‖₂ (see the sketch below).

  - Just like for training, no proof this converges.
  - An approximate solution approximately gives (3).
  - Can check that ‖x̂ − x‖₂ is small.
  - In practice, optimization error seems negligible.
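
A minimal PyTorch sketch of this recovery loop. The generator is a random untrained stand-in, and the optimizer, step count, and learning rate are illustrative assumptions rather than the talk's exact setup (which would use a trained G and several random restarts of z).

```python
import torch

torch.manual_seed(0)
k, n, m = 20, 784, 200

# Stand-in generator G: R^k -> R^n (untrained; a real G would be a trained
# GAN generator or VAE decoder).
G = torch.nn.Sequential(
    torch.nn.Linear(k, 2 * k), torch.nn.ReLU(),
    torch.nn.Linear(2 * k, n),
)
for p in G.parameters():
    p.requires_grad_(False)              # generator weights stay fixed

z_true = torch.randn(k)
x = G(z_true)                            # ground-truth signal in range(G)
A = torch.randn(m, n) / m**0.5           # random Gaussian measurement matrix
y = A @ x                                # observed measurements y = Ax

z = torch.randn(k, requires_grad=True)   # one random restart
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.norm(y - A @ G(z))      # minimize ||y - A G(z)||_2 over z
    loss.backward()
    opt.step()

x_hat = G(z).detach()
err = (torch.norm(x_hat - x) / torch.norm(x)).item()
print(f"relative recovery error: {err:.3f}")
```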

Related Work

Projections on manifolds (Baraniuk-Wakin ’09, Eftekhari-Wakin ’15)
▶ Conditions on the manifold for which recovery is possible.

Deep network models (Mousavi-Dasarathy-Baraniuk ’17, Chang et al. ’17)
▶ Train a deep network to encode and/or decode.


Experimental Results

Faces: n = 64 × 64 × 3 = 12288, m = 500.

[Image grid: reconstruction examples, rows labeled Original, Lasso (DCT), Lasso (Wavelet), DCGAN]


Experimental Results

MNIST: n = 28 × 28 = 784, m = 100.

[Image grid: reconstruction examples, rows labeled Original, Lasso, VAE]


Experimental Results

[Plot: reconstruction error (per pixel) vs. number of measurements. MNIST panel: 0–500 measurements, error 0.00–0.12, series Lasso and VAE. Faces panel: 0–2500 measurements, error 0.00–0.35, series Lasso (DCT), Lasso (Wavelet), and DCGAN.]


Proof Outline (ReLU-based networks)

Show range(G) lies within a union of n^{dk} k-dimensional hyperplanes.
▶ Then analogous to the proof for sparsity: (n choose k) ≤ 2^{k log(n/k)} hyperplanes.
▶ So dk log n Gaussian measurements suffice.

ReLU-based network:
▶ Each layer is z → ReLU(Aᵢz).
▶ ReLU(y)ᵢ = yᵢ if yᵢ ≥ 0, and 0 otherwise.

Input to layer 1: a single k-dimensional hyperplane.

Lemma
Layer 1’s output lies within a union of n^k k-dimensional hyperplanes.

Induction: the final output lies within n^{dk} k-dimensional hyperplanes.
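
A quick numerical check of the key step (a sketch with arbitrary sizes): within a fixed sign pattern of A₁z, ReLU just zeroes a fixed subset of coordinates, so ReLU(A₁z) = DA₁z for a fixed 0/1 diagonal matrix D, i.e. a linear map of the k-dimensional input.

import numpy as np

k, n = 3, 10
A1 = np.random.randn(n, k)
z = np.random.randn(k)
D = np.diag((A1 @ z >= 0).astype(float))               # 0/1 diagonal fixed by the sign pattern
assert np.allclose(np.maximum(A1 @ z, 0), D @ A1 @ z)  # ReLU(A1 z) = D A1 z on this region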


Proof of Lemma: Layer 1’s output lies within a union of n^k k-dimensional hyperplanes.

z is k-dimensional.

ReLU(A₁z) is linear within any constant region of sign(A₁z).

How many different patterns can sign(A₁z) take?

k = 2 version: how many regions can n lines partition the plane into?
▶ 1 + (1 + 2 + · · · + n) = (n² + n + 2)/2.
▶ In general, n half-spaces divide R^k into fewer than n^k regions.

Therefore a d-layer network has at most n^{dk} regions.
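
A Monte Carlo sanity check of the k = 2 count (a sketch: it follows the slide’s picture of n generic affine lines, and samples a bounded box, so it can slightly undercount tiny or far-away regions):

import numpy as np

rng = np.random.default_rng(0)
n = 6
A, b = rng.standard_normal((n, 2)), rng.standard_normal(n)  # n generic lines a·z + b = 0
Z = rng.uniform(-30, 30, size=(200_000, 2))                 # dense sample of the plane
patterns = {tuple(p) for p in (Z @ A.T + b >= 0)}           # distinct sign patterns hit
print(len(patterns), (n * n + n + 2) // 2)                  # empirical count vs (n² + n + 2)/2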


Summary of Compressed Sensing with Generative Models

m ≈ (information in image) / (new info. per measurement)

Generative models can bound the information content as O(kd log n).

Generative models are differentiable ⟹ can optimize in practice.

Gaussian measurements ensure independent information.
▶ O(1) approximation factor ⟺ O(1) SNR ⟺ O(1) bits each.


Follow-up work

Theorem
For any L-Lipschitz G: R^k → R^n, recovering x̂ from Ax satisfying

    ‖x − x̂‖₂ ≤ O(1) · min_{x′ = G(z′), ‖z′‖₂ ≤ r} ‖x − x′‖₂ + δ

is possible with m = O(k log(rL/δ)) linear measurements.
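
One way to connect this to the earlier O(kd log n) bound (an observation, not from the slides): a d-layer ReLU network whose weight matrices have spectral norm at most s is L-Lipschitz with L ≤ s^d, since ReLU is 1-Lipschitz, so k log(rL/δ) = O(kd log s + k log(r/δ)).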

Matching lower bounds: [Liu-Scarlett] (Poster outside!) and [Kamath-Karmalkar-Price].

Better results with better G: (Asim-Ahmed-Hand ’19).

Provably fast for random networks: (Hand-Voroninski ’18).


Extensions

Inpainting:
▶ A is diagonal, zeros and ones.

Deblurring:

Can apply even to nonlinear—but differentiable—measurements (sketch below).
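
A sketch of these measurement operators in PyTorch (function names and shapes are illustrative, not from the talk); recovery is unchanged: minimize ‖y − f(G(z))‖₂ over z with the same gradient-descent loop as before.

import torch
import torch.nn.functional as F

def f_inpaint(x, mask):        # inpainting: A diagonal with 0/1 entries
    return x * mask            # mask has x's shape, zeros where pixels are missing

def f_blur(img, kernel):       # deblurring: the measurement is a convolution
    # img: (batch, channels, H, W); kernel: (channels, channels, s, s) with s odd
    return F.conv2d(img, kernel, padding=kernel.shape[-1] // 2)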


Talk Outline

1 Compressed sensing

2 Using generative models for compressed sensing

3 Learning generative models from noisy data


Where does the generative model come from?

Training from lots of data.

Problem: If measuring images is hard/noisy, how do you collect a good data set?

Question: Can we learn a GAN from incomplete, noisy measurements?


GAN Architecture

[Diagram: Z → G → generated image; a generated or real image is fed to D, which outputs “real?”]


Generator G wants to fool the discriminator D.

If G, D are infinitely powerful: the only pure Nash equilibrium is when G(Z) equals the true distribution.

Empirically works for G, D being convolutional neural nets (a minimal sketch of one update follows).
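
A minimal sketch of one alternating GAN update (illustrative MLPs and sizes, not the talk’s convolutional architecture):

import torch

k, n = 16, 784
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(), torch.nn.Linear(64, n))
D = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = torch.nn.BCEWithLogitsLoss()

def gan_step(x_real):                       # x_real: (batch, n) real images
    z = torch.randn(x_real.shape[0], k)
    ones = torch.ones(x_real.shape[0], 1)
    zeros = torch.zeros(x_real.shape[0], 1)
    d_loss = bce(D(x_real), ones) + bce(D(G(z).detach()), zeros)  # D: spot the fakes
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(G(z)), ones)             # G: fool the (updated) discriminator
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()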


AmbientGAN training

[Diagram: Z → G → generated image → f → simulated measurement; real image → f → real measurement; D sees a measurement and outputs “real?”]

Discriminator must distinguish real measurements from simulated measurements of fake images.

Can try this for any measurement process f you understand.

Compatible with any GAN generator architecture.
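
Relative to the plain GAN step sketched above (a sketch reusing its G, D, optimizers, and bce; in a real setup D’s input size must match the measurement dimension), the only change is that the discriminator sees measurements on both sides, with f applied to the generator’s output:

def ambient_gan_step(y_real, f):            # y_real: a batch of real *measurements*
    z = torch.randn(y_real.shape[0], k)
    y_fake = f(G(z))                        # simulate measuring a generated image
    ones = torch.ones(y_real.shape[0], 1)
    zeros = torch.zeros(y_real.shape[0], 1)
    d_loss = bce(D(y_real), ones) + bce(D(y_fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(y_fake), ones)           # gradients flow back through f into G
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()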


Measurement: Gaussian blur + Gaussian noise

[Image grid: columns Measured, Wiener Baseline, AmbientGAN]

Gaussian blur + additive Gaussian noise attenuates high-frequency components.

Wiener baseline: deconvolve before learning a GAN.

AmbientGAN better preserves high-frequency components.

Theorem: in the limit of dataset size and G, D capacity → ∞, the Nash equilibrium of AmbientGAN is the true distribution.


Measurement: Obscured Square

[Image grid: columns Measured, Inpainting Baseline, AmbientGAN]

Obscure a random square containing 25% of the image.

Inpainting followed by GAN training reproduces inpainting artifacts.

AmbientGAN gives much smaller artifacts.

No theorem: the learned model doesn’t know that eyes should have the same color.


Measurement: Limited View

Motivation: learn the distribution of panoramas from the distribution of photos?

[Image grid: columns Measured, AmbientGAN]

Reveal a random square containing 25% of the image.

AmbientGAN still recovers faces.


Measurement: Dropout

[Image grid: columns Measured, Blurring Baseline, AmbientGAN]

Drop each pixel independently with probability p = 95%.

Simple baseline does terribly.

AmbientGAN can still learn faces.

Theorem: in the limit of dataset size and G, D capacity → ∞, the Nash equilibrium of AmbientGAN is the true distribution.
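
Sketches of the stochastic measurement processes from the last few slides (function names illustrative; the blur operator was sketched earlier under Extensions):

import torch

def f_obscure_square(img, size):            # zero out a random size×size patch
    _, _, h, w = img.shape                  # img: (batch, channels, H, W), size <= H, W
    i = int(torch.randint(0, h - size + 1, (1,)))
    j = int(torch.randint(0, w - size + 1, (1,)))
    y = img.clone()
    y[..., i:i + size, j:j + size] = 0
    return y

def f_dropout(img, p=0.95):                 # drop each pixel independently w.p. p
    return img * (torch.rand_like(img) > p).float()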


1D Projections

So far, measurements have all looked like images themselves.

What if we turn a 2D image into a 1D image?

Motivation: X-ray scans project 3D into 2D.

Face reconstruction is crude, but MNIST digits work decently. [Image: MNIST samples generated from 1D projections]
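
The projection itself is a one-liner (a sketch; here it collapses the vertical axis, the 2D analogue of an X-ray shadow):

def f_project_1d(img):       # img: (batch, channels, H, W) -> (batch, channels, W)
    return img.sum(dim=-2)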


Robustness to model mismatch

We assume we know the true measurement process.

What happens if we get it wrong?

On MNIST:

[Plot: Inception score (0–9) vs. assumed block probability p (0.0–0.9), with true p* = 0.5; series AmbientGAN (ours)]


Compressed sensing

Compressed sensing: learn an image x from a low-dimensional linear projection Ax.

AmbientGAN can learn the generative model from a dataset of projections (Ai, Ai xi).

Beats standard sparse recovery (e.g., Lasso).

[Figure: Reconstruction error (per pixel, 0.00 to 0.16) vs. number of measurements m (10 to 750); AmbientGAN (ours) attains lower error than Lasso.]

Theorem: the resulting GAN game has a unique Nash equilibrium in the limit.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 39 / 41
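
Once a generator G has been learned from projections, reconstructing a new signal from y = Ax amounts to a search over the latent space: minimize ||A G(z) - y||^2 over z by gradient descent. A minimal PyTorch sketch, assuming a pretrained differentiable generator G that maps a latent vector to a flattened image (all names here are illustrative):

    import torch

    def recover(G, A, y, latent_dim=100, steps=1000, lr=0.05):
        # Search the generator's range for an image whose measurements
        # match y, i.e. approximately solve min_z ||A G(z) - y||^2.
        z = torch.randn(latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((A @ G(z) - y) ** 2).sum()
            loss.backward()
            opt.step()
        return G(z).detach()   # reconstructed image

Because the objective is non-convex in z, random restarts help in practice. Lasso, by contrast, solves the convex problem min_x ||Ax - y||^2 + lambda ||x||_1 with no learned prior.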


AmbientGAN

[Diagram: latent code Z → generator G → generated image → measurement f → simulated measurement; discriminator D distinguishes simulated from real measurements.]

Plug the measurement process into the GAN architecture of your choice.

The generator learns the pre-measurement ground truth better than if you denoise the data before training.

Could let us learn distributions we have no direct data for.

Read the paper (“AmbientGAN”) for lots more experiments.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 40 / 41
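
The training loop differs from a standard GAN in one line: the discriminator compares real measurements against measured generator outputs f(G(z)), so the generator is never shown a clean image. A minimal PyTorch sketch of one update step, assuming a differentiable measurement function f and off-the-shelf generator G and discriminator D (a simplification of the actual training setup):

    import torch
    import torch.nn.functional as F

    def ambient_gan_step(G, D, f, y_real, opt_g, opt_d, latent_dim=100):
        batch = y_real.shape[0]
        ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

        z = torch.randn(batch, latent_dim)
        y_fake = f(G(z))   # simulated measurement of a generated image

        # Discriminator: real measurements -> 1, simulated -> 0.
        opt_d.zero_grad()
        d_loss = (F.binary_cross_entropy_with_logits(D(y_real), ones)
                  + F.binary_cross_entropy_with_logits(D(y_fake.detach()), zeros))
        d_loss.backward()
        opt_d.step()

        # Generator: gradients flow from D back through f into G.
        opt_g.zero_grad()
        g_loss = F.binary_cross_entropy_with_logits(D(y_fake), ones)
        g_loss.backward()
        opt_g.step()

The key requirement is that f be differentiable (or admit a differentiable relaxation), so the generator receives gradient through the measurement process.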


Conclusion and open questions

Main results:
- Can use lossy measurements to learn a generative model.
- Can use a generative model to reconstruct from lossy measurements.

Finite-sample theorems for learning the generative model?

A better measure of generative-model complexity than Lipschitzness?

More uses of differentiable compression?

Thank You

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 41 / 41
