
  • Designing Optimal Spectral Filters and Low-Rank Matrices for Inverse Problems

    Julianne Chung
    Department of Mathematics, Virginia Tech

    Joint work with:
    Matthias Chung, Virginia Tech
    Dianne O’Leary, University of Maryland

  • What is an inverse problem?

    [Diagram: Input Signal → Physical System → Output Signal]

    Forward model: given the input signal and the physical system, predict the output signal.
    Inverse problem: given the output signal and the physical system, recover the input signal.

  • Discrete Linear Inverse Problem

    b = Aξ + δ

    where
      ξ ∈ R^n - unknown parameters
      A ∈ R^{m×n} - large, ill-conditioned matrix
      δ ∈ R^m - additive noise
      b ∈ R^m - observation

    Goal: Given b and A, compute an approximation of ξ
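To make the setup concrete, here is a minimal numpy sketch (not from the talk; all names and values are illustrative) that builds a small 1D deconvolution instance of b = Aξ + δ: a Gaussian convolution matrix A, a piecewise-constant true signal ξ, and noisy data b.

```python
import numpy as np

def gaussian_blur_matrix(n, width=2.0):
    """Dense 1D Gaussian convolution matrix (toy ill-conditioned forward model)."""
    i = np.arange(n)
    A = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * width**2))
    return A / A.sum(axis=1, keepdims=True)   # normalize rows

n = 150
rng = np.random.default_rng(0)
t = np.linspace(0, 1, n)
A = gaussian_blur_matrix(n)                   # large, ill-conditioned matrix
xi_true = np.where(t < 0.4, 1.0, 0.3)         # toy "unknown parameters"
delta = 0.01 * rng.standard_normal(n)         # additive noise
b = A @ xi_true + delta                       # observation
print(np.linalg.cond(A))                      # enormous condition number
```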

  • Application: Image Deblurring

    b = Aξ + δ

    Given: blurred image b and some information about the blurring, A
    Goal: Compute an approximation of the true image, ξ

  • Application: Super-Resolution Imaging

    b_i = A(y_i) ξ + δ_i

    Given: low-resolution (LR) images. Stacking them,

    [b_1; …; b_m] = [A(y_1); …; A(y_m)] ξ + [δ_1; …; δ_m],   i.e.,   b = A(y) ξ + δ

    Goal: Improve the parameters y and approximate the high-resolution (HR) image

    [Video: SRimages.avi]

  • Application: Limited-Angle Tomography

    X-ray Imaging · Digital Tomosynthesis · Computed Tomography

    [Video: tomomovie.mov]

  • Application: Tomosynthesis Reconstruction

    Given: 2D projection images
    Goal: Reconstruct a 3D volume

    b_i = Υ[A_i ξ] + δ_i

    where Υ[·] represents nonlinear, energy-dependent transmission tomography

    [Figure: true images]

  • What is an Ill-posed Inverse Problem?

    Hadamard (1923): A problem is ill-posed if the solution
      does not exist,
      is not unique, or
      does not depend continuously on the data.

    [Figure: true image x; blurred & noisy image b (forward problem); naive inverse solution A⁻¹b (inverse problem)]

  • Regularization

    Incorporate prior knowledge:
      1 Knowledge about the noise in the data
      2 Knowledge about the unknown solution

    Goals of this work:
      Incorporate probabilistic information in the form of training data
      Compute optimal regularization:
        Optimal Spectral Filters
        Optimal Low-Rank Inverse Matrices

  • Outline

    1 Designing Optimal Spectral Filters
        Background on Spectral Filters
        Computing Optimal Filters
        Numerical Results

    2 Designing Optimal Low-Rank Regularized Inverse Matrices
        Rank-Constrained Problem
        Bayes Problem: Theoretical Results
        Empirical Bayes Problem: Numerical Results

    3 Conclusions

  • Designing Optimal Spectral Filters

  • Regularization and Filtering

    Singular Value Decomposition: Let A = UΣV^T, where
      Σ = diag(σ_1, σ_2, …, σ_n),  σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_n ≥ 0
      U^T U = I,  V^T V = I

    For ill-posed inverse problems:
      singular values σ_i decrease to and cluster at 0
      there is no gap separating large and small singular values
      small singular values ⇒ highly oscillatory singular vectors

  • Discrete Picard Condition

    For the inverse solution A⁻¹b, investigate the behavior of:
      singular values σ_i
      singular vectors v_i
      SVD coefficients u_i^T b
      solution coefficients u_i^T b / σ_i

    Two toy problems:
      1D deconvolution
      Gravity surveying

    Hansen, Discrete Inverse Problems (2010)

  • SVD Analysis

    The naive inverse solution:

    ξ̂ = A⁻¹(b + δ) = VΣ⁻¹U^T(b + δ)
       = ∑_{i=1}^n (u_i^T b / σ_i) v_i + ∑_{i=1}^n (u_i^T δ / σ_i) v_i
       = ξ + error

    Division by the small σ_i amplifies the noise components u_i^T δ.
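Continuing the toy sketch from above (again my illustration, not the talk's code), the naive inverse solution exhibits this amplification directly:

```python
import numpy as np

# with A, b, xi_true from the earlier sketch:
U, s, Vt = np.linalg.svd(A, full_matrices=False)
xi_naive = Vt.T @ ((U.T @ b) / s)           # sum_i (u_i^T b / sigma_i) v_i
print(np.linalg.norm(xi_naive - xi_true))   # error dominated by u_i^T delta / sigma_i terms
```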

  • Regularization via Spectral Filtering

    Filtered solution:

    ξ_filter = ∑_{i=1}^n φ_i (u_i^T b / σ_i) v_i = VCΣ⁻¹φ

    where
      φ_i - filter factors
      φ ∈ R^n contains the φ_i
      C = diag(U^T b)

    [Figure: filter factors vs. singular values for TSVD and Tikhonov]

  • Some Filter Representations

    Truncated SVD (TSVD):
      φ_i^tsvd(α) = 1 if i ≤ α, 0 otherwise;   α ∈ {1, …, n}

    Tikhonov filter:
      φ_i^tik(α) = σ_i² / (σ_i² + α²),   α ∈ R

    Error filter:
      φ_i^err(α) = α_i,   α ∈ R^n

    Spline filter:
      φ_i^spl(α) = s(τ, α; σ_i),   α ∈ R^ℓ,  ℓ < n
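The filtered solution and these filter factors are easy to realize once the SVD is available. A minimal sketch (my own, reusing A and b from the earlier setup):

```python
import numpy as np

def filtered_solution(A, b, phi):
    """xi_filter = sum_i phi_i * (u_i^T b / sigma_i) * v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (phi * (U.T @ b) / s)

def phi_tsvd(s, alpha):
    """TSVD filter: keep the first alpha components."""
    return (np.arange(len(s)) < alpha).astype(float)

def phi_tik(s, alpha):
    """Tikhonov filter: sigma^2 / (sigma^2 + alpha^2)."""
    return s**2 / (s**2 + alpha**2)

# e.g., with A, b from the sketch above:
# s = np.linalg.svd(A, compute_uv=False)
# xi_tsvd = filtered_solution(A, b, phi_tsvd(s, 30))
# xi_tik  = filtered_solution(A, b, phi_tik(s, 0.05))
```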

  • How to choose α?

    Previous approaches (1 parameter):
      Discrepancy Principle
      Generalized Cross-Validation (GCV)
      L-Curve

    Our approach to compute optimal parameters:
      Stochastic programming formulation to incorporate probabilistic information
      Use training data and numerical optimization to minimize errors

    Shapiro, Dentcheva, Ruszczynski. SIAM, 2009.
    Vapnik. Wiley & Sons, 1998.
    Tenorio. SIAM Review, 2006.
    Horesh, Haber, Tenorio. Inverse Problems, 2008.

  • Some Assumptions

    Suppose we have
      a set of possible signals Ξ ⊆ R^n
      a set of possible noise samples ∆ ⊆ R^m

    Select
      a signal ξ ∈ Ξ according to probability distribution P_ξ
      a noise sample δ ∈ ∆ according to probability distribution P_δ

    Inverse Problem: determine ξ given A and b, where b = Aξ + δ

  • How to define error?

    Error vector: e(α, ξ, δ) = ξ_filter(α, ξ, δ) − ξ

    Error function: ρ : R^n → R₀⁺

    Error: err(α, ξ, δ) = ρ(e(α, ξ, δ))

    For example, ρ(x) = ‖x‖_p^p gives err(α, ξ, δ) = ‖e(α, ξ, δ)‖_p^p

    Goal: Find optimal parameters α that minimize the average error over the set of signals and noise

  • Bayes Risk Minimization

    Ideally, use the optimal filter φ(α̌), where

    α̌ = arg min_α E_{ξ,δ} { err(α, ξ, δ) }

    Remarks:
      Minimizing the expected value is difficult
      Use a Monte Carlo sampling approach

  • Empirical Bayes Risk Minimization

    If P_ξ and P_δ are known → use Monte Carlo samples as training data
    If P_ξ and P_δ are unknown but samples are given → use the samples as training data

    Training data:
      ξ^(1), …, ξ^(K): realizations of the random variable ξ
      δ^(1), …, δ^(K): realizations of the random variable δ
      b^(k) = Aξ^(k) + δ^(k),   k = 1, …, K

    Empirical Bayes risk:

    E_{ξ,δ} { err(α, ξ, δ) } ≈ (1/K) ∑_{k=1}^K ρ(e^(k)(α))

    Optimal filter φ(α̂), where

    α̂ = arg min_α (1/K) ∑_{k=1}^K ρ(e^(k)(α))
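For the one-parameter Tikhonov filter with ρ = ‖·‖₂², this training step reduces to a scalar minimization. A sketch (my own illustration; the method table on the next slide recommends specific optimizers per filter and error function, while this simply uses a generic bounded scalar minimizer and assumes a square A):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def train_tikhonov_alpha(A, xi_train, delta_train):
    """Find alpha minimizing the average squared error over training data."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    B = A @ xi_train + delta_train            # columns b^(k) = A xi^(k) + delta^(k)
    UtB = U.T @ B
    VtXi = Vt @ xi_train                      # V orthogonal: ||V c - xi|| = ||c - V^T xi||

    def risk(alpha):
        phi = s**2 / (s**2 + alpha**2)        # Tikhonov filter factors
        coeff = phi[:, None] * UtB / s[:, None]
        return np.mean(np.sum((coeff - VtXi) ** 2, axis=0))

    return minimize_scalar(risk, bounds=(1e-8, 1.0), method="bounded").x

# xi_train, delta_train: n x K arrays of training signals and noise samples
```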

  • Suggested Optimization Methods

    error function \ filter      tsvd   tik     spl     err
    Huber                        GSS    GN      GN      GN
    1 < p < 2 (smoothing)        GSS    GN      GN      GN
    p = 2                        GSS    GN      GN      LS
    p > 2                        GSS    GN      GN      GN
    p = ∞, p = 1                 GSS    IPM-N   IPM-N   IPM-L

    GSS - discrete golden section search algorithm
    GN - Gauss-Newton method
    LS - linear least squares system
    IPM-N / IPM-L - interior point methods for nonlinear / linear problems

  • Gauss-Newton Optimization

    ξ_filter^(k) = VC^(k)Σ⁻¹φ

    Jacobian:

    J = (1/K) [J^(1); …; J^(K)],   where J^(k) = VC^(k)Σ⁻¹φ_α

    φ_α - partial derivatives of φ with respect to α

    Gradient and GN Hessian:

    g = (1/K) ∑_{k=1}^K (J^(k))^T g^(k),   H = (1/K) ∑_{k=1}^K (J^(k))^T D^(k) J^(k)

    g^(k) and D^(k) contain information regarding derivatives of ρ

  • Special Case: 2-norm

    α̂ = arg min_α (1/(2K)) ∑_{k=1}^K ‖e^(k)‖₂²

    1 If δ ∼ N(0, β_δ I) and φ_i^err(α) = α_i, we approximate the Wiener filter
    2 If, in addition, ξ ∼ N(0, β_ξ I), we get the Tikhonov filter with α = β_δ/β_ξ

  • 1D Deconvolution

    [Figure: convolution kernel; columns used for training; blurred image]

  • Compare to Standard Approaches

    600 training signals
    2800 validation signals
    Noise level: 0.001 to 0.01

    For each validation signal, reconstruct with:
      1 opt-Tik
      2 opt-error
      3 Tik-GCV
      4 Tik-MSE

    [Figure: box and whisker plot of errors for opt-Tik, opt-error, Tik-GCV, and Tik-MSE]

  • Corresponding Error Images

    [Figure: error images for opt-Tik, opt-error, Tik-GCV, and Tik-MSE]

  • Pareto Curve

    [Figure: average RRE vs. number of training signals for opt-Tik-SVD, opt-error-SVD, and opt-Tik-GSVD]

    min_ξ ‖Aξ − b‖₂² + λ² ‖Lξ‖₂²,   L the bidiagonal first-difference operator with rows [1, −1]

  • 2D Deconvolution

    [Figure: training and validation images]

    800 training images
    800 validation images
    Gaussian point spread function
    Noise level: 0.1 to 0.15

    Filter φ(α): opt-TSVD, opt-Tik, opt-spline, opt-error, smooth

    Error function ρ(z): Huber function, 2-norm, 4-norm

  • Numerical Results

    [Figure: box plots of reconstruction errors for opt-TSVD, opt-Tik, opt-spline, opt-error, and smooth, under the Huber function, 2-norm, and 4-norm error measures]

  • Filter Factors and Error Images

    [Figure: filter factors vs. singular values for opt-error, opt-TSVD, opt-Tik, and opt-spline, with corresponding error images, under the Huber function, 2-norm, and 4-norm error measures]

  • Summary for Optimal Filters

    Computing good regularization parameters can be difficult

    Use training data to get optimal parameters/filters

    Different error measures and filter representations can be used

    Optimal filters can be computed off-line

    Chung, Chung, O’Leary. SISC, 2011.
    Chung, Chung, O’Leary. JMIV, 2012.
    Chung, Español, Nguyen. arXiv, 2014.

  • Designing Optimal Low-Rank Regularized Inverse Matrices

  • What is an optimal regularized inverse matrix (ORIM)?

    Let Z ∈ R^{n×m} be a reconstruction matrix

    Error vector: Zb − ξ

    Error function: ρ : R^n → R₀⁺

    Error: err(Z) = ρ(Zb − ξ) = ρ(Z(Aξ + δ) − ξ) = ρ((ZA − I_n)ξ + Zδ)

    For example, ρ(y) = ‖y‖_p^p gives err(Z) = ‖Zb − ξ‖_p^p

    Goal: Find a regularized inverse matrix Z ∈ R^{n×m} that minimizes

    min_Z ρ((ZA − I_n)ξ + Zδ)

  • Rank-Constrained Problem

    Basic idea: Enforce the ORIM Z to be low-rank

    Rank-constrained problem:

    arg min_{rank(Z) ≤ r} ρ((ZA − I_n)ξ + Zδ)

  • Why does a low-rank inverse approximation make sense?

    The rank-r truncated SVD (TSVD) solution can be written as

    ξ_TSVD = ∑_{i=1}^r (u_i^T b / σ_i) v_i = V_r Σ_r⁻¹ U_r^T b,

    where
      V_r and U_r contain the first r columns of V and U, respectively
      Σ_r is the r × r principal submatrix of Σ

    That is, the TSVD reconstruction is already a fixed rank-r inverse matrix, Z = V_r Σ_r⁻¹ U_r^T, applied to the data.

    [Matlab demo]
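The Matlab demo is not reproduced here; the following numpy sketch (my own) makes the same point, that TSVD applies a fixed rank-r matrix to b:

```python
import numpy as np

def tsvd_inverse_matrix(A, r):
    """Rank-r TSVD inverse Z_r = V_r Sigma_r^{-1} U_r^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

# xi_tsvd = tsvd_inverse_matrix(A, r=20) @ b   # same as the filtered TSVD sum
```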

  • Bayes Risk Minimization

    Suppose we
      treat ξ as a random variable with a given probability distribution
      treat δ as a random variable with a given probability distribution

    Define the Bayes risk:

    f(Z) = E_{ξ,δ}(ρ((ZA − I_n)ξ + Zδ))

    where E is the expected value.

    Rank-constrained Bayes risk minimization problem:

    min_{rank(Z) ≤ r} f(Z) = E_{ξ,δ}(ρ((ZA − I_n)ξ + Zδ))    (1)

  • Bayes Risk Minimization for ρ = ‖·‖₂²

    min_{rank(Z) ≤ r} f(Z) = E_{ξ,δ}(‖(ZA − I_n)ξ + Zδ‖₂²)

    Assume
      ξ and δ are statistically independent
      µ_ξ = 0 (mean) and C_ξ = M_ξ M_ξ^T for P_ξ
      µ_δ = 0 (mean) and C_δ = η² I_m for P_δ

    Then

    E_{ξ,δ}(‖(ZA − I_n)ξ + Zδ‖₂²) = ‖(ZA − I_n)M_ξ‖_F² + η² ‖Z‖_F²

    In addition, if C_ξ = β² I_n,

    E_{ξ,δ}(‖(ZA − I_n)ξ + Zδ‖₂²) = β² ‖ZA − I_n‖_F² + η² ‖Z‖_F²

  • A Theoretical Result

    Theorem. Given a matrix A ∈ R^{m×n} of rank r ≤ n ≤ m and an invertible matrix M ∈ R^{n×n}, let their generalized singular value decomposition be A = UΣG⁻¹, M = GS⁻¹V^T. Let η be a given parameter, nonzero if r < m. Let J ≤ r be a given positive integer. Define D = ΣS⁻²Σ^T + η²I_m. Let the symmetric matrix H = GS⁻⁴Σ^T D⁻¹ΣG^T have eigenvalue decomposition H = V̂ΛV̂^T, with eigenvalues ordered so that λ_j ≥ λ_i for j < i ≤ n. Then a global minimizer Ẑ ∈ R^{n×m} of the problem

    Ẑ = arg min_{rank(Z) ≤ J} ‖(ZA − I_n)M‖_F² + η² ‖Z‖_F²

    is

    Ẑ = V̂_J V̂_J^T G S⁻² Σ^T D⁻¹ U^T,

    where V̂_J contains the first J columns of V̂. Moreover, this Ẑ is the unique global minimizer if and only if λ_J > λ_{J+1}.

  • A Special Case

    Theorem. A global minimizer Ẑ ∈ R^{n×m} of the problem

    Ẑ = arg min_{rank(Z) ≤ r} ‖ZA − I_n‖_F² + α² ‖Z‖_F²

    is Ẑ = V_r Ψ_r U_r^T, where V_r contains the first r columns of V, U_r contains the first r columns of U, and Ψ_r = diag(σ_1/(σ_1² + α²), …, σ_r/(σ_r² + α²)). Moreover, Ẑ is unique if and only if σ_r > σ_{r+1}.

    Remarks on the Bayes problem:
      The expected value is difficult to evaluate
      In real applications, P_ξ and P_δ are unknown
      There is no theory for cases with ρ ≠ ‖·‖₂²

    Chung, Chung, and O’Leary (LAA 2014), Spantini et al. (arXiv 2014)
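A sketch (my own, using the theorem's notation) of this special-case minimizer; note it is exactly the rank-r truncation of the Tikhonov-regularized inverse:

```python
import numpy as np

def orim_special_case(A, r, alpha):
    """Z_hat = V_r Psi_r U_r^T with Psi_r = diag(sigma_i / (sigma_i^2 + alpha^2))."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    psi = s[:r] / (s[:r] ** 2 + alpha**2)
    return Vt[:r].T @ np.diag(psi) @ U[:, :r].T

# objective check: ||Z A - I_n||_F^2 + alpha^2 ||Z||_F^2
# Z = orim_special_case(A, r=20, alpha=0.05)
# val = np.linalg.norm(Z @ A - np.eye(A.shape[1]))**2 + 0.05**2 * np.linalg.norm(Z)**2
```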

  • Empirical Bayes Risk Minimization

    If P_ξ and P_δ are known → use Monte Carlo samples as training data
    If P_ξ and P_δ are unknown but samples are given → use the samples as training data

    Training data:
      ξ^(1), …, ξ^(K): realizations of the random variable ξ
      δ^(1), …, δ^(K): realizations of the random variable δ
      b^(k) = Aξ^(k) + δ^(k),   k = 1, …, K

    Empirical Bayes risk:

    E_{ξ,δ}(ρ((ZA − I_n)ξ + Zδ)) ≈ (1/K) ∑_{k=1}^K ρ(Zb^(k) − ξ^(k))

    Rank-constrained empirical Bayes minimization problem:

    Ẑ = arg min_{rank(Z) ≤ r} (1/K) ∑_{k=1}^K ρ(Zb^(k) − ξ^(k))

  • Empirical Bayes Minimization Problem

    Let Z be of rank r < min(m, n); then

    Z = XY^T = ∑_{j=1}^r x_j y_j^T

    where X = [x_1, …, x_r] ∈ R^{n×r} and Y = [y_1, …, y_r] ∈ R^{m×r}

    Rank-constrained problem:

    (X̂, Ŷ) = arg min_{X,Y} (1/K) ∑_{k=1}^K ρ(XY^T b^(k) − ξ^(k))

  • Special Case: 2-norm

    Let B = [b^(1), b^(2), …, b^(K)] and C = [ξ^(1), ξ^(2), …, ξ^(K)]; then

    min_{rank(Z) ≤ r} (1/K) ∑_{k=1}^K ‖Zb^(k) − ξ^(k)‖₂² = (1/K) ‖ZB − C‖_F²    (2)

    Theorem. Ẑ = P_r B† is a solution to the minimization problem (2), where Ṽ is the matrix of right singular vectors of B, P = CṼ_s(Ṽ_s)^T with s = rank(B), and P_r is the best rank-r approximation of P. This solution is unique if and only if either r ≥ rank(P), or 1 ≤ r ≤ rank(P) and σ̄_r > σ̄_{r+1}, where σ̄_r and σ̄_{r+1} denote the r-th and (r+1)-st singular values of P.

    Friedland and Torokhti (2007), Sondermann (1986), Chung and Chung (2013)
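A sketch (mine, following the theorem's recipe; the function name just echoes the ORIM2 label used in the experiments) that computes Ẑ = P_r B† from training data:

```python
import numpy as np

def orim2_from_training(B, C, r):
    """Z_hat = P_r B^dagger with P = C V_s V_s^T, s = rank(B)."""
    Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
    s_rank = int(np.sum(sb > sb[0] * max(B.shape) * np.finfo(float).eps))
    Vs = Vbt[:s_rank].T                            # right singular vectors of B
    P = C @ Vs @ Vs.T
    Up, sp, Vpt = np.linalg.svd(P, full_matrices=False)
    P_r = Up[:, :r] @ np.diag(sp[:r]) @ Vpt[:r]    # best rank-r approximation of P
    return P_r @ np.linalg.pinv(B)

# B: m x K training observations, C: n x K true signals
# Z_hat = orim2_from_training(B, C, r=30);  xi_hat = Z_hat @ b_new
```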

  • For general error measures and large-scale problems

    Rank-ℓ update approach for computing a solution:

    Initialize Ẑ = 0_{n×m}, X̂ = [ ], Ŷ = [ ]
    while rank(Ẑ) < r
      compute matrices X̂_ℓ ∈ R^{n×ℓ} and Ŷ_ℓ ∈ R^{m×ℓ} such that

        (X̂_ℓ, Ŷ_ℓ) = arg min_{X ∈ R^{n×ℓ}, Y ∈ R^{m×ℓ}} (1/K) ∑_{k=1}^K ρ((Ẑ + XY^T)b^(k) − ξ^(k))

      update the matrix inverse approximation: Ẑ ← Ẑ + X̂_ℓ Ŷ_ℓ^T
      update the solutions: X̂ ← [X̂, X̂_ℓ], Ŷ ← [Ŷ, Ŷ_ℓ]
    end

    Chung and Chung (Inverse Problems, 2014)
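A sketch of this update loop for a general (smooth) error measure ρ. The talk does not specify the inner solver; here a generic L-BFGS minimization over the stacked entries of (X, Y) stands in for it, and the while-loop over rank(Ẑ) is realized as a fixed number of rank-ℓ steps:

```python
import numpy as np
from scipy.optimize import minimize

def rho_smooth(e):
    """Example smooth error measure (squared 2-norm); swap in e.g. a Huber function."""
    return np.sum(e**2)

def rank_update_orim(B, C, r, ell=1, rho=rho_smooth, seed=0):
    """Greedy rank-ell updates Z <- Z + X_ell Y_ell^T until rank(Z) = r."""
    n, K = C.shape
    m = B.shape[0]
    Z = np.zeros((n, m))
    rng = np.random.default_rng(seed)
    for _ in range(r // ell):                  # "while rank(Z) < r" in the talk
        def risk(xy):
            X = xy[:n * ell].reshape(n, ell)
            Y = xy[n * ell:].reshape(m, ell)
            E = (Z + X @ Y.T) @ B - C          # columns: (Z + XY^T) b^(k) - xi^(k)
            return sum(rho(E[:, k]) for k in range(K)) / K
        res = minimize(risk, 1e-3 * rng.standard_normal((n + m) * ell),
                       method="L-BFGS-B")
        X_hat = res.x[:n * ell].reshape(n, ell)
        Y_hat = res.x[n * ell:].reshape(m, ell)
        Z = Z + X_hat @ Y_hat.T                # update the inverse approximation
    return Z
```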

  • Empirical Bayes Problem

    Remarks:
      Knowledge about the forward model can be incorporated, but it is not required
      An adequate amount of training data is needed
      Inverse problems are solved with only a matrix-vector multiplication: Ẑb
      The framework allows more general error measures and more realistic probability distributions

  • Numerical Results: 1D Example

    10,000 signal observations, length 150
    Gaussian blur, white noise level 0.01

    Compared methods:
      TSVD-Â: estimate Â from training data, then use TSVD
      TSVD-A: requires A
      ORIM2: uses training data

    [Figure: four true and blurred validation signals]

  • Reconstruction Errors for Validation Data

    [Figure: empirical risk f_K(Z) vs. rank r, and sample errors, for TSVD-Â, TSVD-A, and ORIM2]

  • Pareto Curves

    [Figure: sample error for the training set and for the validation set vs. number of training signals, showing the median error and the 25 to 75 percentile band]

  • Numerical Results: 2D Example

    6,000 observations, 128 × 128
    Spatially invariant Gaussian blur, reflexive boundary conditions
    Gaussian white noise, levels ranging from 0.1 to 0.15

    Compared methods:
      TSVD-Â: estimate Â from training data, then use TSVD
      TSVD-A: requires A
      ORIM2: uses training data

  • Reconstruction Errors for Validation Data

    [Figure: empirical risk f_K(Z) vs. rank r for TSVD-Â, TSVD-A, and ORIM2; density of sample errors for ORIM2 and TSVD-Â]

  • Reconstructed Images

    [Figure: true and observed images, with reconstructions by ORIM2, TSVD-A, and TSVD-Â]


  • Concluding Remarks

    New framework for solving inverse problems.

    Bayes problem:
      Theoretical results can be derived for the 2-norm
      The Bayes problem provides insight

    Empirical Bayes problem: use training data to get
      optimal spectral filters - for cases where A and its SVD are available
      an optimal low-rank regularized inverse matrix - for cases where the forward model is unknown

    Incorporate probabilistic information
    Optimal filters and matrices can be computed off-line
    Reconstruction can be done efficiently, and the quality is good

    Thank you!!

  • Some References on Inverse Problems

    Discrete Inverse Problems: Insights and Algorithms - Per Christian Hansen
    Deblurring Images: Matrices, Spectra, and Filtering - Hansen, Nagy, and O’Leary
    Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing - Calvetti and Somersalo
    Computational Methods for Inverse Problems - Vogel
    Introduction to Inverse Problems in Imaging - Bertero and Boccacci
    Linear and Nonlinear Inverse Problems with Practical Applications - Mueller and Siltanen
