
Page 1

On Missing Data Prediction using Sparse Signal Models: A Comparison of Atomic Decompositions with Iterated Denoising

Onur G. Guleryuz
DoCoMo USA Labs, San Jose, CA 95110
[email protected]
(google: onur guleryuz)

(Please view in full screen mode. The presentation tries to squeeze in too much, please feel free to email me any questions you may have.)

Page 2

Overview

• Problem statement: prediction of missing data.
• Formulation as a sparse linear expansion over an overcomplete basis.
• AD (l0 regularized) and ID formulations.
• Short simulation results (l1 regularized).
• Why ID is better than AD.
• Adaptive predictors on general data: all methods are mathematically the same. Key issues are basis selection, and utilizing what you have effectively.

Mini FAQ:
1. Is ID the same as l0? No.
2. Is ID the same as l1 (or lp), except implemented iteratively? No.
3. Are predictors that yield the sparsest set of expansion coefficients the best? No, predictors that yield the smallest mse are the best.
4. On images, look for performance over large missing chunks (with edges).

Some results from: Ivan W. Selesnick, Richard Van Slyke, and Onur G. Guleryuz, "Pixel Recovery via l1 Minimization in the Wavelet Domain," Proc. IEEE Int'l Conf. on Image Proc. (ICIP 2004), Singapore, Oct. 2004. (Some software available at my webpage.)

Pretty ID pictures: Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part II - Adaptive Algorithms," IEEE Tr. on IP, to appear.

Page 3

Problem Statement

[Figure: original image and the same image with a lost region.]

1. $x = \begin{bmatrix} x_0 \\ x_1 \end{bmatrix}$, where $x_0$ holds the $n_0$ available pixels and $x_1$ the $n_1$ lost region pixels (assume zero mean), with $n_0 + n_1 = N$.

2. $y = P_0\,x = \begin{bmatrix} x_0 \\ 0 \end{bmatrix}$, where $P_0$ is the available data projection ("mask").

3. Derive the predicted $\hat{x}_1$ from $y$.

(A code sketch of this notation follows.)
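A minimal numpy sketch of this notation, with an assumed toy signal length and mask (names such as `mask` and `P0` are mine, not the talk's):

```python
import numpy as np

N = 8
x = np.random.randn(N)                                   # original signal (zero mean assumed)
mask = np.array([1, 1, 0, 0, 0, 1, 1, 1], dtype=bool)    # True where pixels are available

P0 = np.diag(mask.astype(float))                         # available data projection ("mask")
y = P0 @ x                                               # observed signal: lost region zeroed out

n0, n1 = int(mask.sum()), int((~mask).sum())             # n0 + n1 = N
# Goal: derive a prediction x1_hat of the lost pixels x[~mask] from y alone.
```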

Page 4

Signal space view: $y = \begin{bmatrix} x_0 \\ 0 \end{bmatrix} = \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} + \begin{bmatrix} 0 \\ -x_1 \end{bmatrix}$, i.e., the observed $y$ is a noisy version of the original signal (with noise correlated with the data). Can we recover $x$ by iteratively denoising $y$? → type 1 iterations.

Page 5

Recipe for $y = \begin{bmatrix} x_0 \\ 0 \end{bmatrix}$:

1. Take an $N \times M$ matrix of overcomplete basis, $H = [\,h_1 \; h_2 \; \ldots \; h_M\,]$, $M > N$.
2. Write $y$ in terms of the basis: $y = Hc = \sum_{i=1}^{M} c_i h_i$.
3. Find "sparse" expansion coefficients $c$ (AD vs. ID).

(A toy construction is sketched below.)
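A toy sketch of steps 1-3, assuming an overcomplete dictionary built by stacking the identity next to a random orthonormal basis (the talk's dictionaries are DCTs/wavelets; this choice is only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8

# 1. N x M overcomplete "basis", M = 2N: identity atoms plus a random orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
H = np.hstack([np.eye(N), Q])                  # columns h_1 ... h_M

# 2. Write y in terms of the basis: y = H c. With M > N there are infinitely many such c;
#    the pseudo-inverse picks the minimum-l2 one.
y = rng.standard_normal(N)                     # stand-in for the zero-filled observation [x0; 0]
c = np.linalg.pinv(H) @ y
assert np.allclose(H @ c, y)

# 3. The real question is finding a *sparse* c consistent with the available data,
#    which is exactly where AD and ID part ways.
```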

Page 6

Any y has to be sparse

$y = \begin{bmatrix} x_0 \\ 0 \end{bmatrix}$ is determined by the $n_0 < N$ available pixels: $y = A\,x$ for a matrix $A$ with a null space of dimension $N - n_0 = n_1$, so in a suitable basis $y = \sum_{i=1}^{n_0} d_i g_i$, i.e., $y$ has at most $n_0$ nonzero expansion coefficients. $y$ has to be sparse.

The same goes for any estimate $\hat{x} = \begin{bmatrix} x_0 \\ \hat{x}_1(x_0) \end{bmatrix}$: it too is a function of only the $n_0$ available pixels, and hence is sparse in an equivalent basis.

Onur's trivial sparsity theorem: estimation algorithms ⟺ an equivalent basis in which the estimates are sparse.

Page 7

Who cares about y, what about the original x?

If successful prediction is possible, i.e., if $\|x - y\|^2$ is small, then $x$ also has to be ~sparse.

1. Predictable ⟹ sparse.
2. Sparsity of $x$ is a necessary leap of faith to make in estimation.

• Caveat: any estimator is putting up a sparse $y = \sum_i c_i h_i$. Assuming $x$ is sparse, the estimator that wins is the one that matches the sparsity "correctly"! Putting up sparse estimates is not the issue, putting up estimates that minimize mse is.
• Can we be proud of the formulation? Not really. It is honest, but ambitious.

Page 8

Getting to the heart of the matter:

AD: find the expansion coefficients that minimize the l0 norm,

$\min_c \|c\|_0$ subject to $\big\| P_0 \big( x - \sum_{i=1}^{M} c_i h_i \big) \big\|^2 \le T$

The l0 norm of the expansion coefficients is the regularization; the constraint enforces the available data.

Page 9

AD with Significance Sets

$\min_S \operatorname{card}(S)$ subject to $y = \sum_{i \in S} c_i h_i$ and $\| P_0 (y - x) \|^2 \le T$

Finds the sparsest (the most predictable) signal consistent with the available data.

Page 10

Iterated Denoising with Insignificant Sets

$\min_y \sum_{i \in I(T)} | h_i^T y |^2$ subject to $P_0\, y = P_0\, x$

(Once the insignificant set is determined, ID uses well defined denoising operators to construct mathematically sound equations.)

Recipe for using your transform based image denoiser (to justify progressions, think decaying coefficients):

1. Pick $I(T)$.
2. Progressions:

y = denoising_recons(x_0, H, T_0, ...)
y = denoising_recons(y, H, T_1 = T_0 - dT, ...)
y = denoising_recons(y, H, T_2 = T_0 - 2dT, ...)
...
y = denoising_recons(y, H, T_P = T_f, ...)

(A code sketch of this recipe follows.)
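A minimal sketch of the recipe above, assuming hard thresholding of an orthonormal 2-D DCT as the denoiser; the talk's ID uses overcomplete (shifted) DCTs and more careful threshold selection, so the block choice, thresholds, and iteration counts here are illustrative only:

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoising_recons(y, x_avail, mask, T, n_iter=5):
    """One layer of type-1 iterations at threshold T: hard-threshold the DCT coefficients
    (zero the insignificant set), reconstruct, then re-impose P0 y = P0 x."""
    y = y.copy()
    for _ in range(n_iter):
        c = dctn(y, norm='ortho')
        c[np.abs(c) < T] = 0.0            # keep only the significant coefficients
        y = idctn(c, norm='ortho')
        y[mask] = x_avail[mask]           # available data constraint
    return y

def iterated_denoising(x_avail, mask, T0=100.0, Tf=5.0, dT=5.0):
    """Progression: repeat denoising_recons with a decaying threshold T0, T0 - dT, ..., Tf."""
    y = np.where(mask, x_avail, 0.0)      # start from the zero-filled observation
    for T in np.arange(T0, Tf - 1e-9, -dT):
        y = denoising_recons(y, x_avail, mask, T)
    return y
```

Usage would be `y_hat = iterated_denoising(x_avail, mask)`, where `x_avail` holds the known pixel values and `mask` marks them.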

Page 11

Mini Formulation Comparison

ID (no progression): $\min_y \sum_{i \in I(T)} | h_i^T y |^2$ subject to $P_0\, y = P_0\, x$

AD: $\min_S \operatorname{card}(S)$ subject to $y = \sum_{i \in S} c_i h_i$, $\| P_0 (y - x) \|^2 \le T$

• If H is orthonormal the two formulations come close.
• The important thing is how you determine the sets/sparsity (ID: robust DSP, AD: sparsest).
• ID uses progressions, and progressions change everything!

Page 12

Simulation Comparison

ID (no layering and no selective thresholding) vs. AD (l1 relaxation of l0):

$\min_c \sum_{i=1}^{M} |c_i|$ subject to $\big\| P_0 \big( x - \sum_{i=1}^{M} c_i h_i \big) \big\|^2 \le T$

as in D. Donoho, M. Elad, and V. Temlyakov, "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise."

H: two times expansive (M = 2N), real, isotropic, dual-tree DWT. Real part of:
N. G. Kingsbury, "Complex wavelets for shift invariant analysis and filtering of signals," Appl. Comput. Harmon. Anal., 10(3):234-253, May 2002.

(An l1 solver sketch follows.)
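For reference, one simple way to attack the l1 side is plain iterative soft thresholding (ISTA) on the Lagrangian form $\min_c \|P_0(x - Hc)\|^2 + \lambda \|c\|_1$; this is only a sketch under assumed $\lambda$, step size, and iteration count, not the solver behind the numbers on the next page:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_ad(x_obs, mask, H, lam=0.1, n_iter=500):
    """ISTA for  min_c ||P0 (x_obs - H c)||^2 + lam ||c||_1,  P0 = projection onto 'mask'."""
    A = H[mask, :]                          # P0 H: the dictionary seen through the mask
    b = x_obs[mask]                         # P0 x: the available samples
    L = np.linalg.norm(A, 2) ** 2           # step 1/(2L) is safe for the gradient of the squared term
    c = np.zeros(H.shape[1])
    for _ in range(n_iter):
        c = soft(c - A.T @ (A @ c - b) / L, lam / (2.0 * L))
    return H @ c                            # the AD prediction x_hat = H c
```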

Page 13

Simulation Results

[Figure: two small (roughly 20x20 pixel) examples, each showing Original, Missing, the l1 reconstruction, and the ID reconstruction. Example 1: l1: 21.40 dB, ID: 30.38 dB. Example 2: l1: 23.49 dB, ID: 25.39 dB.]

(The l1 results are doctored!)

Page 14

What is wrong with AD?

Problems in l1? Yes and no.

• I will argue that even if we used an "l0 solver", ID will in general prevail.
• Specific issues with l1.
• How to fix the problems with l1 based AD.
• How to do better.

So let's assume we can solve the l0 problem ...

Page 15

Bottom Up (AD) vs. Top Down (ID)

Prediction as signal construction:

• AD is a builder that tries to accomplish constructions using as few bricks as possible. Requires a very good basis.
• ID is a sculptor that removes portions that do not belong in the final construction, using as many carving steps as needed. Requires good denoising.

[Figure: AD as a builder, ID as a sculptor.]

The application is not compression! ("Where will the probe hit the meteor?", "What is the value of the S&P 500 tomorrow?")

Page 16

Significance vs. Insignificance: The Sherlock Holmes Principle

"How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?" - Sherlock Holmes, in "The Sign of the Four"

• Both ID and AD do well with a very good basis. But ID can also use unintuitive basis for sophisticated results. E.g.: ID can use the "unsophisticated", "singularity unfriendly" DCT basis to recover singularities. AD cannot! Secret: DCTs are not great on singularities, but they are very good on everything else!
• DCTs are very good at eliminating non-singularities.
• ID is more robust to basis selection compared to AD (it can secretly violate coherency restrictions).
• You can add to the AD dictionary, but solvers won't be able to handle it.

Page 17

Sherlock Holmes Principle using overcomplete DCTs for elimination

Predicting missing edge pixels (basis: DCT 16x16).

Predicting missing wavelet coefficients over edges (basis: DCT 8x8):
Onur G. Guleryuz, "Predicting Wavelet Coefficients Over Edges Using Estimates Based on Nonlinear Approximants," Proc. Data Compression Conference, IEEE DCC-04, April 2004.

Do not abandon isotropic *lets; use a framework that can extract the most mileage from the chosen basis ("sparsest").

Page 18

Progressions

"Annealing" progressions (think decaying coefficients): type 1 iterations of simple denoising,

y = denoising_recons(x_0, H, T_0, ...)
y = denoising_recons(y, H, T_1 = T_0 - dT, ...)
...
y = denoising_recons(y, H, T_P = T_f, ...)

(Results figure: basis DCT 16x16, best threshold.)

Progressions generate up to tens of dBs. If the data was very sparse with respect to H, and if we were solving a convex problem, why should progressions matter? Modeling assumptions...

Page 19

Sparse Modeling Generates Non-Convex Problems

[Figure: a "two pixel" image shown in pixel coordinates (available pixel, missing pixel) and in transform coordinates ($c_1$, $c_2$); the available pixel constraint is a line, and it meets the sparse set in multiple, equally sparse solutions (a tiny numeric instance is sketched below).]
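A tiny worked instance of the picture above, with assumed numbers: a two-pixel signal, a 45-degree rotation as the orthonormal transform, and only the first pixel (value 1) available. Two feasible reconstructions are equally sparse yet predict the missing pixel differently, and their average is feasible but less sparse, so the sparsest-solution set is not convex:

```python
import numpy as np

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)      # orthonormal 2x2 transform, columns are atoms

c_a = np.array([np.sqrt(2.0), 0.0])             # one nonzero coefficient
c_b = np.array([0.0, np.sqrt(2.0)])             # one nonzero coefficient

for c in (c_a, c_b):
    x = H @ c
    print(x, "meets x[0] = 1:", np.isclose(x[0], 1.0), "| nonzeros:", np.count_nonzero(c))
# Both satisfy the available-pixel constraint and are equally sparse, but they predict the
# missing pixel as +1 and -1 respectively. Their midpoint (c_a + c_b) / 2 is also feasible
# yet has two nonzeros: the set of sparsest feasible solutions is not convex.
```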

Page 20

A more skeptical picture: how does this affect some "AD solvers", i.e., l1?

Geometry: [Figure: the l1 ball against the available pixel constraint, in three configurations (Case 1, Case 2, Case 3).]

The minimum l1 solution is found by a linear/quadratic program, ..., and in Case 3 it is not sparse!

Page 21

Case 3: the magic is gone...

You now have to argue: "Under an i.i.d. Laplacian model for the joint probability of the expansion coefficients, $\max_c p(c_1, c_2, \ldots, c_M) \iff \min \|c\|_1$."

Page 22

Problems with the l1 norm I: what about all the optimality/sparsest results?

Results such as D. Donoho et al., "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise," are very impressive, but they are closely tied to H providing the sparsest decomposition for x. Not every problem has this structure.

They are worst case noise robustness results, but here the noise is overwhelming: the observation error $w = x - y = \begin{bmatrix} 0 \\ x_1 \end{bmatrix}$ contains not only the modeling error but also the error due to the missing data, which grows with the size $n_1$ of the lost region.

Page 23

Problems with the l1 norm II

$\min_c \sum_{i=1}^{M} |c_i|$ subject to $\big\| P_0 \big( x - \sum_{i=1}^{M} c_i h_i \big) \big\|^2 = \big\| P_0 x - \sum_{i=1}^{M} c_i (P_0 h_i) \big\|^2 \le T$

$H = [\,h_1 \; h_2 \; \ldots \; h_M\,]$: the "nice", "decoherent" basis.
$P_0 H = [\,\tilde{h}_1 \; \tilde{h}_2 \; \ldots \; \tilde{h}_M\,]$: the basis the constraint actually sees; "not nice" (due to masking), it may become very "coherent" (a problem for l1).

Page 24

Example

$H = \begin{bmatrix} 1/\sqrt{3} & 1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & 0 & -2/\sqrt{6} \\ 1/\sqrt{3} & -1/\sqrt{2} & 1/\sqrt{6} \end{bmatrix}$: H orthonormal, coherency = 0.

$P_0 H = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1/\sqrt{3} & -1/\sqrt{2} & 1/\sqrt{6} \end{bmatrix}$: unnormalized coherency = $1/\sqrt{6}$, normalized coherency = 1 (worst possible).

The optimal solution sometimes tries to make the coefficients of scaling functions zero.
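The slide's coherency numbers can be checked directly; a short numpy verification (the `coherence` helper is mine):

```python
import numpy as np

s3, s2, s6 = np.sqrt(3.0), np.sqrt(2.0), np.sqrt(6.0)
H = np.array([[1/s3,  1/s2,  1/s6],
              [1/s3,  0.0,  -2/s6],
              [1/s3, -1/s2,  1/s6]])      # orthonormal columns: coherency 0

def coherence(A, normalized=True):
    """Largest |<a_i, a_j>| over distinct columns; optionally normalized by the column norms."""
    G = A.T @ A
    if normalized:
        norms = np.linalg.norm(A, axis=0)
        G = G / np.outer(norms, norms)
    np.fill_diagonal(G, 0.0)
    return np.abs(G).max()

mask = np.array([False, False, True])     # only one pixel survives the mask
P0H = H * mask[:, None]                   # the dictionary the data constraint actually sees

print(coherence(H))                       # ~0 (orthonormal)
print(coherence(P0H, normalized=False))   # 1/sqrt(6) ~ 0.408
print(coherence(P0H))                     # 1.0, the worst possible
```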

Page 25

Possible fix using Progressions

y = l1_recons(x_0, H, T_0, ...)
y = l1_recons(y, H, T_1 = T_0 - dT, ...)
...
y = l1_recons(y, H, T_P = T_f, ...)

where each l1_recons(., H, T):
1. solves $\min_c \sum_{i=1}^{M} |c_i|$ subject to $\big\| P_0 \big( x - \sum_{i=1}^{M} c_i h_i \big) \big\|^2 \le T$, and
2. enforces the available data.

• If you pick a large T, maybe you can pretend the first one is a convex problem.
• This is not an l1 problem! No single l1 solution will generate the final.
• After the first few solutions, you may start hitting l1 issues.

(A sketch of this wrapper follows.)
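A sketch of that wrapper, assuming some l1 solver (for instance the ISTA sketch from the simulation page) is available as `solve_l1`; the threshold schedule is illustrative:

```python
import numpy as np

def l1_recons(y, x_avail, mask, H, T, solve_l1):
    """1. Solve the l1 problem at 'temperature' T (delegated to solve_l1),
       2. enforce the available data."""
    y_new = solve_l1(y, mask, H, T)
    y_new[mask] = x_avail[mask]
    return y_new

def l1_with_progressions(x_avail, mask, H, solve_l1, T0=100.0, Tf=5.0, dT=5.0):
    """Chain of l1 solves with a decaying T; no single l1 solve produces the final answer."""
    y = np.where(mask, x_avail, 0.0)
    for T in np.arange(T0, Tf - 1e-9, -dT):
        y = l1_recons(y, x_avail, mask, H, T, solve_l1)
    return y
```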

Page 26

The fix is ID!

y = denoising_recons(y, H, T_1 = T_0 - dT, ...)   vs.   y = l1_recons(y, H, T_1 = T_0 - dT, ...)

l1_recons: you can do soft thresholding, "block descent", or the iterative schemes of
Daubechies, Defrise, De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint";
Figueiredo and Nowak, "An EM Algorithm for Wavelet-Based Image Restoration".

Experience suggests:
• There are many "denoising" techniques that discover the "true" sparsity.
• Pick the technique that is cross correlation robust.
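For reference, the two thresholding rules being contrasted: hard thresholding is the denoiser ID typically uses, while soft thresholding is the operator inside l1 / iterative-thresholding solvers (the conclusion notes hard tends to beat soft in this cross correlation sense):

```python
import numpy as np

def hard_threshold(c, T):
    """Keep coefficients with |c_i| >= T, zero the rest."""
    return np.where(np.abs(c) >= T, c, 0.0)

def soft_threshold(c, T):
    """Shrink every coefficient toward zero by T."""
    return np.sign(c) * np.maximum(np.abs(c) - T, 0.0)
```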

Page 27

Conclusion

• Smallest mse is not necessarily = sparsest. Somebody putting up really bad estimates may be very sparse (sparser than us) with respect to some basis.
• Good denoisers should be cross correlation robust (hard thresholding tends to beat soft).
• How many iterations you do within each l1_recons() or denoising_recons() is not very important.
• Progressions!
• Will l1 generate sparse results? In the sense of the trivial sparsity theorem, of course! (The sparsity may not be in terms of your intended basis :) Please check the assumptions for your problem!
• To see its limitations, go ahead and solve the real l1,

$\min_c \sum_{i=1}^{M} |c_i|$ subject to $\big\| P_0 \big( x - \sum_{i=1}^{M} c_i h_i \big) \big\|^2 \le T$

(with or without masking setups, you can even cheat on T), and compare to ID.

The trivial sparsity theorem is true. The prediction problem is all about the basis. ID simply allows the construction of a sophisticated, signal adaptive basis, by starting with a simple dictionary!