Slides for a course on signal and image processing.
Sparse Regularization of Inverse Problems
Gabriel Peyré
www.numerical-tours.com
Overview
• Inverse Problems Regularization
• Sparse Synthesis Regularization
• Examples: Sparse Wavelet Regularizations
• Iterative Soft Thresholding
• Sparse Seismic Deconvolution
Inverse Problems
Forward model: y = K f₀ + w ∈ ℝ^P
(y: observations; K : ℝ^Q → ℝ^P: operator; f₀: (unknown) input; w: noise)
Denoising: K = Id_Q, P = Q.
Inpainting: set Ω of missing pixels, (Kf)(x) = 0 if x ∈ Ω, f(x) if x ∉ Ω; P = Q − |Ω|.
Super-resolution: Kf = (f ⋆ k) ↓σ, P = Q/σ.
Inverse Problem in Medical Imaging
Magnetic resonance imaging (MRI): Kf = (f̂(ω))_{ω ∈ Ω}.
Other examples (MEG, EEG, ...): Kf = (⟨f, p_k⟩)_{1 ≤ k ≤ K}.
Inverse Problem Regularization
Noisy measurements: y = K f₀ + w.
Prior model: J : ℝ^Q → ℝ assigns a score to images.
  f⋆ ∈ argmin_{f ∈ ℝ^Q}  ½ ||y − K f||² + λ J(f)
  (data fidelity + regularity)
Choice of λ: tradeoff between the noise level ||w|| and the regularity J(f₀) of f₀.
No noise: λ → 0⁺, minimize
  f⋆ ∈ argmin_{f ∈ ℝ^Q, Kf = y} J(f)
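For the simplest quadratic prior J(f) = ||f||², the variational problem above has a closed-form solution, which makes the fidelity/regularity tradeoff concrete. A minimal numpy sketch (the random operator K, the noise level, and λ are illustrative assumptions, not from the slides):

```python
import numpy as np

def quadratic_regularization(K, y, lam):
    """Solve min_f 0.5*||y - K f||^2 + lam*||f||^2 in closed form.

    The first-order condition K^T (K f - y) + 2*lam*f = 0 gives
    f = (K^T K + 2*lam*Id)^{-1} K^T y.
    """
    Q = K.shape[1]
    return np.linalg.solve(K.T @ K + 2 * lam * np.eye(Q), K.T @ y)

# Toy under-determined problem: P = 30 observations of a Q = 50 signal.
rng = np.random.default_rng(0)
K = rng.standard_normal((30, 50))
f0 = rng.standard_normal(50)
y = K @ f0 + 0.05 * rng.standard_normal(30)
f_star = quadratic_regularization(K, y, lam=0.1)
```

Without the λ term the normal equations are singular here (P < Q); the regularizer is what makes the inversion well-posed.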
Smooth and Cartoon Priors
Sobolev prior (smooth images): J(f) = ∫ ||∇f(x)||² dx
Total variation prior (cartoon images): J(f) = ∫ ||∇f(x)|| dx = ∫_ℝ length(C_t) dt
(co-area formula, with C_t the level sets of f)
Inpainting Example
[Figure: input y = K f₀ + w; Sobolev vs. total variation inpainting]
Overview
• Inverse Problems Regularization
• Sparse Synthesis Regularization
• Examples: Sparse Wavelet Regularizations
• Iterative Soft Thresholding
• Sparse Seismic Deconvolution
Redundant Dictionaries
Dictionary Ψ = (ψ_m)_m ∈ ℝ^{Q×N}, N ≥ Q.
Fourier: ψ_m = e^{i⟨ω_m, ·⟩}, m indexes the frequency.
Wavelets: ψ_m = ψ(2^{−j} R_θ x − n), m = (j, θ, n) indexes scale, orientation, and position.
Also: DCT, curvelets, bandlets, ...
Synthesis: f = Σ_m x_m ψ_m = Ψ x  (coefficients x ↦ image f = Ψ x).
Sparse Priors
Ideal sparsity: for most m, x_m = 0.
  J₀(x) = #{m : x_m ≠ 0}
Sparse approximation: f = Ψ x where
  x ∈ argmin_{x ∈ ℝ^N} ||f₀ − Ψ x||² + T² J₀(x)
Orthogonal Ψ (Ψ Ψ* = Ψ* Ψ = Id_N): solved by hard thresholding,
  x_m = ⟨f₀, ψ_m⟩ if |⟨f₀, ψ_m⟩| > T, 0 otherwise,
i.e. f = Ψ ∘ S_T ∘ Ψ*(f₀).
Non-orthogonal Ψ: NP-hard.
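In the orthogonal case the sparse approximation is literally analysis, hard thresholding, synthesis. A small numpy sketch (the random orthogonal Ψ, the chosen support, and the threshold are illustrative assumptions):

```python
import numpy as np

def hard_threshold(x, T):
    """S_T: keep coefficients with magnitude > T, zero the rest."""
    return np.where(np.abs(x) > T, x, 0.0)

rng = np.random.default_rng(0)
# Orthogonal dictionary Psi (Psi^T Psi = Id), here from a QR factorization.
Psi, _ = np.linalg.qr(rng.standard_normal((64, 64)))

# A 5-sparse coefficient vector and the image it synthesizes.
coeffs = np.zeros(64)
coeffs[[2, 10, 25, 40, 60]] = [1.5, -2.0, 0.8, 3.0, -1.2]
f0 = Psi @ coeffs

x = Psi.T @ f0                        # analysis: x_m = <f0, psi_m>
f = Psi @ hard_threshold(x, T=0.5)    # synthesis of thresholded coefficients
```

Since f₀ is exactly 5-sparse in Ψ and all its coefficients exceed T, the thresholded synthesis reproduces f₀.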
Convex Relaxation: L1 Prior
Image with 2 pixels:
  J₀(x) = #{m : x_m ≠ 0}
  J₀(x) = 0 ⟺ null image; J₀(x) = 1 ⟺ sparse image; J₀(x) = 2 ⟺ non-sparse image.
ℓ^q priors: J_q(x) = Σ_m |x_m|^q  (convex for q ≥ 1)
[Figure: level sets of J_q in the (x₁, x₂) plane for q = 0, 1/2, 1, 3/2, 2]
Sparse ℓ¹ prior: J₁(x) = Σ_m |x_m|
L1 Regularization
Pipeline: coefficients x₀ ∈ ℝ^N → (Ψ) image f₀ = Ψ x₀ ∈ ℝ^Q → (K, noise w) observations y = K f₀ + w ∈ ℝ^P.
Define Φ = K Ψ ∈ ℝ^{P×N}.
Sparse recovery: f⋆ = Ψ x⋆ where x⋆ solves
  min_{x ∈ ℝ^N}  ½ ||y − Φ x||² + λ ||x||₁
  (fidelity + regularization)
Noiseless Sparse Regularization
Noiseless measurements: y = Φ x₀.
ℓ¹ recovery: x⋆ ∈ argmin_{Φx = y} Σ_m |x_m| — the minimizer tends to lie on a vertex of the ℓ¹ ball, hence is sparse.
Compare ℓ²: x⋆ ∈ argmin_{Φx = y} Σ_m |x_m|², which is generically non-sparse.
[Figure: ℓ¹ vs. ℓ² balls meeting the affine set Φx = y at x⋆]
Convex linear program. Interior points, cf. [Chen, Donoho, Saunders] "basis pursuit"; Douglas–Rachford splitting, see [Combettes, Pesquet].
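Since the noiseless ℓ¹ problem is a linear program, a tiny instance can be solved with a generic LP solver as a sanity check (the random Φ and the 3-sparse x₀ below are illustrative assumptions; practical solvers use the interior-point or splitting schemes cited above):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1 subject to Phi x = y, recast as a linear program.

    Introduce slack variables t with |x_m| <= t_m and minimize sum_m t_m.
    """
    P, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # objective: sum of t
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((P, N))])       # equality constraint Phi x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 40))      # P = 20 measurements, N = 40 coefficients
x0 = np.zeros(40)
x0[[3, 17, 31]] = [1.5, -2.0, 1.0]       # 3-sparse vector
x_star = basis_pursuit(Phi, Phi @ x0)
```

With 20 Gaussian measurements of a 3-sparse vector in dimension 40, ℓ¹ minimization recovers x₀ exactly with overwhelming probability.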
Noisy Sparse Regularization
Noisy measurements: y = Φ x₀ + w.
Penalized form: x⋆ ∈ argmin_{x ∈ ℝ^N} ½ ||y − Φ x||² + λ ||x||₁  (data fidelity + regularization)
Equivalent constrained form: x⋆ ∈ argmin_{||Φx − y|| ≤ ε} ||x||₁  (λ ↔ ε)
Algorithms: iterative soft thresholding / forward–backward splitting, see [Daubechies et al], [Pesquet et al]; Nesterov multi-step schemes.
Overview
• Inverse Problems Regularization
• Sparse Synthesis Regularization
• Examples: Sparse Wavelet Regularizations
• Iterative Soft Thresholding
• Sparse Seismic Deconvolution
Image De-blurring
Original f₀; observations y = h ⋆ f₀ + w.
Sobolev regularization:
  f⋆ = argmin_{f ∈ ℝ^N} ||f ⋆ h − y||² + λ ||∇f||²
diagonalized by the Fourier transform:
  f̂⋆(ω) = ĥ(ω) ŷ(ω) / (|ĥ(ω)|² + λ |ω|²)
Result: SNR = 22.7 dB.
Sparsity regularization: Ψ = translation invariant wavelets,
  x⋆ ∈ argmin_x ½ ||h ⋆ (Ψx) − y||² + λ ||x||₁,  f⋆ = Ψ x⋆.
Result: SNR = 24.7 dB.

Comparison of Regularizations
[Figure: SNR as a function of λ for each method, with the optimal λ_opt marked]
L2: SNR = 21.7 dB; Sobolev: SNR = 22.7 dB; Sparsity: SNR = 23.7 dB; Invariant sparsity: SNR = 24.7 dB.
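The Sobolev deconvolution formula is a single pointwise division in the Fourier domain. A minimal numpy sketch (the Gaussian blur, the test signal, and λ are illustrative assumptions; the conjugate ĥ(ω)* is used so the formula also covers non-symmetric kernels):

```python
import numpy as np

def sobolev_deconvolve(y, h, lam):
    """Sobolev-regularized deconvolution, diagonalized by the FFT:
    hat f(w) = conj(hat h(w)) * hat y(w) / (|hat h(w)|^2 + lam * |w|^2)."""
    N = y.size
    h_f = np.fft.fft(h)
    w = 2 * np.pi * np.fft.fftfreq(N)        # frequency grid in [-pi, pi)
    f_f = np.conj(h_f) * np.fft.fft(y) / (np.abs(h_f) ** 2 + lam * w ** 2)
    return np.real(np.fft.ifft(f_f))

# Toy example: Gaussian blur of a smooth signal plus noise.
rng = np.random.default_rng(0)
N = 256
t = np.arange(N)
f0 = np.sin(2 * np.pi * t / N) + 0.5 * np.sin(6 * np.pi * t / N)
h = np.exp(-0.5 * (np.minimum(t, N - t) / 4.0) ** 2)
h /= h.sum()                                 # normalized blur kernel
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))
y += 0.01 * rng.standard_normal(N)
f_rec = sobolev_deconvolve(y, h, lam=1e-3)
```

In the noiseless limit the estimator inverts the blur up to the small bias λ|ω|²/(|ĥ(ω)|² + λ|ω|²) at each frequency.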
Inpainting Problem
Measurements: y = K f₀ + w, with (Kf)(x) = 0 if x ∈ Ω, f(x) if x ∉ Ω.
Image Separation
Model: f = f₁ + f₂ + w, (f₁, f₂) components, w noise.
Union dictionary: Ψ = [Ψ₁, Ψ₂] ∈ ℝ^{Q×(N₁+N₂)}.
  (x⋆₁, x⋆₂) ∈ argmin_{x=(x₁,x₂) ∈ ℝ^N} ½ ||f − Ψx||² + λ ||x||₁
Recovered components: f⋆ᵢ = Ψᵢ x⋆ᵢ.
Examples of Decompositions
Cartoon+Texture Separation
Overview
• Inverse Problems Regularization
• Sparse Synthesis Regularization
• Examples: Sparse Wavelet Regularizations
• Iterative Soft Thresholding
• Sparse Seismic Deconvolution
Sparse Regularization Denoising
Denoising: y = x₀ + w ∈ ℝ^N, K = Id.
Orthogonal basis: Φ = Id_N (work directly on the coefficients x = Ψ* f).
Regularization-based denoising:
  x⋆ = argmin_{x ∈ ℝ^N} ½ ||x − y||² + λ J(x)
Sparse regularization: J(x) = Σ_m |x_m|^q  (with the convention |a|⁰ = 0 if a = 0, 1 otherwise)
Solution, coordinate by coordinate: x⋆_m = S^q_λ(y_m), a thresholding of the data.
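The two most useful thresholds, S¹ (soft, q = 1) and S⁰ (hard, q = 0), are one-liners. A numpy sketch (the sample vector and threshold are illustrative):

```python
import numpy as np

def soft_threshold(x, T):
    """S^1_T: shrink toward zero by T (proximal operator of T*||.||_1)."""
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

def hard_threshold(x, T):
    """S^0_T: keep entries with |x_m| > T, zero the rest (q = 0 case)."""
    return np.where(np.abs(x) > T, x, 0.0)

x = np.array([-3.0, -0.5, 0.2, 1.0, 2.5])
soft_threshold(x, 1.0)   # -> [-2., 0., 0., 0., 1.5]
hard_threshold(x, 1.0)   # -> [-3., 0., 0., 0., 2.5]
```

Soft thresholding shrinks the surviving coefficients by T, while hard thresholding leaves them untouched; only the former is the proximal map of a convex penalty.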
Surrogate Functionals
Sparse regularization: x⋆ ∈ argmin_{x ∈ ℝ^N} E(x) = ½ ||y − Φx||² + λ ||x||₁
Surrogate functional, for a fixed point x′ and step τ < 1/||Φ*Φ||:
  Ẽ(x, x′) = E(x) − ½ ||Φ(x − x′)||² + (1/2τ) ||x − x′||²
Theorem:  argmin_x Ẽ(x, x′) = S_{λτ}(u), where u = x′ − τ Φ*(Φx′ − y).
Proof: up to a constant, Ẽ(x, x′) ∝ (1/2τ) ||u − x||² + λ ||x||₁.
[Figure: E(·) and its majorant Ẽ(·, x′), minimized at S_{λτ}(u)]
Iterative Thresholding
Algorithm: initialize x⁽⁰⁾, set ℓ = 0; iterate
  u⁽ℓ⁾ = x⁽ℓ⁾ − τ Φ*(Φ x⁽ℓ⁾ − y)
  x⁽ℓ⁺¹⁾ = argmin_x Ẽ(x, x⁽ℓ⁾) = S_{λτ}(u⁽ℓ⁾)
Remark:
  x⁽ℓ⁾ ↦ u⁽ℓ⁾ is a gradient descent step on ||Φx − y||²;
  S_{λτ} is the proximal step associated to λ||x||₁.
Theorem: if τ < 2/||Φ*Φ||, then x⁽ℓ⁾ → x⋆.
[Figure: iterates x⁽⁰⁾, x⁽¹⁾, x⁽²⁾ decreasing E(·)]
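The iteration above, a gradient step followed by soft thresholding, can be sketched in a few lines of numpy (the random Φ, the 3-sparse x₀, λ, and the iteration count are illustrative assumptions, not from the slides):

```python
import numpy as np

def soft_threshold(x, T):
    """Proximal operator of T*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

def ista(Phi, y, lam, n_iter=2000):
    """Iterative soft thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2        # step size tau < 2/||Phi^T Phi||
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        u = x - tau * Phi.T @ (Phi @ x - y)        # gradient step on the fidelity
        x = soft_threshold(u, lam * tau)           # proximal step on the l1 term
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 60))
x0 = np.zeros(60)
x0[[5, 20, 41]] = [2.0, -1.5, 1.0]
y = Phi @ x0 + 0.01 * rng.standard_normal(30)
x_star = ista(Phi, y, lam=0.1)
```

Here `np.linalg.norm(Phi, 2)` is the largest singular value, so `tau` satisfies the convergence condition of the theorem with some margin.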
Overview
• Inverse Problems Regularization
• Sparse Synthesis Regularization
• Examples: Sparse Wavelet Regularizations
• Iterative Soft Thresholding
• Sparse Seismic Deconvolution
Seismic Imaging
1D propagation ⟹ convolution (1D idealization).
Initial condition: "wavelet" = band pass filter h.
  Φf = h ⋆ f
[Figure: h(x) and its Fourier transform ĥ(ω)]
Observations: y = f₀ ⋆ h + w.
Pseudo Inverse
  y = h ⋆ f₀ + w
Pseudo-inverse: f̂⁺(ω) = ŷ(ω)/ĥ(ω), i.e. f⁺ = h⁺ ⋆ y = f₀ + h⁺ ⋆ w, where ĥ⁺(ω) = ĥ(ω)⁻¹.
Stabilization: set ĥ⁺(ω) = 0 if |ĥ(ω)| ≤ ε.
[Figure: ĥ(ω), 1/ĥ(ω), and the resulting f⁺]
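The stabilized pseudo-inverse is a frequency mask plus a pointwise division. A minimal numpy sketch (the Gaussian low-pass h and the cutoff ε are illustrative assumptions; the slides' h is a band-pass wavelet):

```python
import numpy as np

def stabilized_pseudo_inverse(y, h, eps):
    """f+ = h+ * y, with hat h+(w) = 1/hat h(w) if |hat h(w)| > eps, 0 otherwise."""
    h_f = np.fft.fft(h)
    hp_f = np.zeros_like(h_f)
    kept = np.abs(h_f) > eps              # frequencies that survive the cutoff
    hp_f[kept] = 1.0 / h_f[kept]
    return np.real(np.fft.ifft(hp_f * np.fft.fft(y)))

rng = np.random.default_rng(0)
N = 128
t = np.arange(N)
h = np.exp(-0.5 * (np.minimum(t, N - t) / 3.0) ** 2)
h /= h.sum()                              # normalized filter, hat h(0) = 1
f0 = rng.standard_normal(N)
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))   # noiseless blur
f_plus = stabilized_pseudo_inverse(y, h, eps=1e-3)
```

On the kept frequencies the noiseless inversion is exact; everything below the cutoff is irrecoverably zeroed out, which is what motivates the sparse prior next.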
Sparse Spikes Deconvolution
Sparsity basis: Diracs, ψ_m[x] = δ[x − m]; f₀ has small ||f₀||₀.
  f⋆ = argmin_{f ∈ ℝ^N} ½ ||f ⋆ h − y||² + λ Σ_m |f[m]|
Algorithm (iterative soft thresholding):
  • Inversion: f̃⁽ᵏ⁾ = f⁽ᵏ⁾ − τ h ⋆ (h ⋆ f⁽ᵏ⁾ − y)
  • Thresholding: f⁽ᵏ⁺¹⁾[m] = S_{λτ}(f̃⁽ᵏ⁾[m])
  with τ < 2/||Φ*Φ|| = 2/max_ω |ĥ(ω)|².
Numerical Example
[Figure: y = f₀ ⋆ h + w, the input f₀, and the recovered f_λ]
Choosing the optimal λ: oracle, minimize ||f₀ − f_λ||, i.e. maximize SNR(f₀, f_λ).
Convergence Study
Energy: E(f) = ½ ||h ⋆ f − y||² + λ Σ_m |f[m]|.
Sparse deconvolution: f⋆ = argmin_{f ∈ ℝ^N} E(f).
[Figure: log₁₀(E(f⁽ᵏ⁾)/E(f⋆) − 1) and log₁₀(||f⁽ᵏ⁾ − f⋆||/||f₀||) as functions of k]
E is not strictly convex ⟹ no guaranteed convergence speed.
Conclusion