AUGMENTED LAGRANGIAN METHOD FOR EULER’S ELASTICA BASED
VARIATIONAL MODELS
by
MENGPU CHEN
WEI ZHU, COMMITTEE CHAIR
SHAN ZHAO
DAVID HALPERN
LAYACHI HADJI
SHUHUI LI
A DISSERTATION
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Mathematics
in the Graduate School of The University of Alabama
TUSCALOOSA, ALABAMA
2016
Copyright Mengpu Chen 2016
ALL RIGHTS RESERVED
ABSTRACT
Euler's elastica is widely applied in digital image processing. Minimizing the Euler's elastica energy functional is very challenging due to the high-order derivative of the curvature term, and the computational cost of traditional time-marching methods is high. Hence the development of fast methods is necessary. In the literature, the augmented Lagrangian method (ALM) was used by Tai, Hahn, and Chung [41] to solve the minimization problem of the Euler's elastica functional and was shown to be more efficient than the gradient descent method. However, several auxiliary variables are introduced as relaxations, which means more penalty parameters must be handled and considerable effort is needed to choose them well. In this dissertation, we employ a novel technique by Bae, Tai, and Zhu [4], which treats curvature-dependent functionals using ALM with fewer Lagrange multipliers, and apply it to a wide range of imaging tasks, including image denoising, image inpainting, image zooming, and image deblurring. Numerical experiments demonstrate the efficiency of the proposed algorithm and also show that it gives better results with higher SNR/PSNR while making the choice of optimal parameters more convenient.
DEDICATION
This dissertation is dedicated to my loving parents. Their unwavering support and
encouragement have sustained me throughout the time taken to complete this work.
ACKNOWLEDGMENTS
I would like to thank the committee members who were more than generous with their
expertise and precious time. A special thanks to my advisor, Dr. Wei Zhu, for his excellent
guidance, caring, patience, and insightful suggestions throughout the entire process. Thank
you to Dr. Shan Zhao, Dr. David Halpern, Dr. Layachi Hadji, and Dr. Shuhui Li for their
help with my dissertation and academic progress.
I would also like to thank my friends, Wei Cui, Mingwei Sun, Xuan He, and many
others in my department for their support and help.
CONTENTS
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
DEDICATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Image Denoising . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Image Interpolation/Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Image Zooming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Image Deblurring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 AUGMENTED LAGRANGIAN METHOD FOR EULER’S ELASTICA BASED VARIATIONAL MODELS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 Augmented Lagrangian Method For Euler’s Elastica Based Variational Models By Tai et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 An Euler’s Elastica Based Segmentation Model By Zhu et al. . . . . . . . . . 18
2.3 A Novel Augmented Lagrangian Functional for Euler’s Elastica Based Variational Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4 Minimization of Sub-problems . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.1 Minimization of ε1(v) . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.2 Minimization of ε2(u) . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.3 Minimization of ε3(p) . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.4 Minimization of ε4(n) . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4.5 Update the Lagrangian Multipliers . . . . . . . . . . . . . . . . . . . 26
2.5 Numerical Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3 NUMERICAL RESULTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1 Image Denoising . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 Image Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3 Image Zooming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4 Image Deblurring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4 PARAMETER ANALYSIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5 CONCLUSION AND FUTURE RESEARCH . . . . . . . . . . . . . . . . . . . . 62
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
LIST OF TABLES
2.1 Augmented Lagrangian Method for Euler’s Elastica Based Variational Models 29
3.1 The Signal-to-Noise Ratio (SNR) and the Peak Signal-to-Noise Ratio (PSNR)of our proposed ALM algorithm compared to Tai’s ALM algorithm. . . . . . 41
3.2 SNR, PSNR, and computational time of proposed algorithm in image inpainting. 46
3.3 SNR, PSNR, and computational time of the deblurring results. . . . . . . . . 55
4.1 The SNR and PSNR of denoising results in Figure 4.1. . . . . . . . . . . . . 58
4.2 The SNR and PSNR of denoising results in Figure 4.2. . . . . . . . . . . . . 58
4.3 The SNR and PSNR of denoising results in Figure 4.3. . . . . . . . . . . . . 60
LIST OF FIGURES
1.1 TV inpainting result when the inpainting domain is too "wide". . . . . . . . 6
3.1 Gaussian noise. a = 1, b = 1, η = 17, r1 = 300, r3 = 0.5. . . . . . . . . . . . 32
3.2 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 32
3.3 Gaussian noise. a = 1, b = 1, η = 15, r1 = 1300, r3 = 1. . . . . . . . . . . . 33
3.4 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 33
3.5 Gaussian noise. a = 1, b = 1, η = 15, r1 = 1300, r3 = 1. . . . . . . . . . . . 34
3.6 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 34
3.7 Gaussian noise. a = 1, b = 1, η = 17, r1 = 300, r3 = 1. . . . . . . . . . . . . 35
3.8 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 35
3.9 Gaussian noise. a = 1, b = 1, η = 17, r1 = 1300, r3 = 5. . . . . . . . . . . . 36
3.10 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 36
3.11 Gaussian noise. a = 1, b = 1, η = 20, r1 = 500, r3 = 3. . . . . . . . . . . . . 37
3.12 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 37
3.13 Salt-and-pepper noise. a = 1, b = 15, η = 7, r1 = 1500, r2 = 100, r3 = 50. SNR = 29.06, PSNR = 30.60. . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.14 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 38
3.15 Salt-and-pepper noise. a = 1, b = 10, η = 3, r1 = 1000, r2 = 50, r3 = 5. SNR = 22.83, PSNR = 29.19. . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.16 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 39
3.17 Salt-and-pepper noise. a = 1, b = 10, η = 5, r1 = 1000, r2 = 50, r3 = 10. SNR = 24.44, PSNR = 30.10. . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.18 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 40
3.19 Inpainting of a synthetic image. . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.20 Inpainting of a synthetic image. . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.21 Inpainting result. a = 1, b = 1, η = 5, r1 = 40, r2 = 20, r3 = 1. . . . . . . . 42
3.22 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 43
3.23 Inpainting result. a = 1, b = 10, η = 10, r1 = 800, r2 = 20, r3 = 10. . . . . . 43
3.24 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 44
3.25 Inpainting result. a = 1, b = 1, η = 5, r1 = 40, r2 = 20, r3 = 1. . . . . . . . 44
3.26 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 45
3.27 Inpainting result. a = 1, b = 1, η = 5, r1 = 40, r2 = 10, r3 = 1. . . . . . . . 45
3.28 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 46
3.29 Checkerboard. Left: Original Image (256 × 256); Right: 2x Size Image (511 × 511). Parameters used: a = 0.1, b = 1, η = 10, r1 = 800, r2 = 10, r3 = 5. . . 47
3.30 2x size. Relative errors of Lagrange multipliers (left) and u (right). . . . . . 48
3.31 BMW Logo. Left: Original Image (256 × 256); Right: 2x Size Image (511 × 511). Parameters used: a = 0.1, b = 1, η = 8, r1 = 800, r2 = 10, r3 = 5. . . . 48
3.32 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 49
3.33 Einstein. Left: Original Image (256 × 256); Right: 2x Size Image (511 × 511). Parameters used: a = 0.1, b = 1, η = 7, r1 = 800, r2 = 10, r3 = 5. . . . . . . 49
3.34 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 50
3.35 Gaussian blur with a standard deviation of 4. a = 0.1, b = 1, η = 8000, r1 = 500, r3 = 10. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.36 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 52
3.37 Gaussian blur with a standard deviation of 4. a = 1, b = 1, η = 2000, r1 = 1000, r3 = 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.38 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 53
3.39 Out-of-focus blur kernel with a radius of 6. a = 1, b = 1, η = 2000, r1 = 1000, r3 = 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.40 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 54
3.41 Out-of-focus blur kernel with a radius of 6. a = 1, b = 1, η = 2000, r1 = 1000, r3 = 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.42 Corresponding residuals, relative errors, and functional energy. . . . . . . . . 55
4.1 Gaussian noise. (a) Original Image; (b) Noisy Image; (c) Denoised Image using r1 = 10; (d) Denoised Image using r1 = 50; (e) Denoised Image using r1 = 100; (f) Denoised Image using r1 = 500; (g) Denoised Image using r1 = 1000; (h) Denoised Image using r1 = 5000. . . . . . . . . . . . . . . . . 57
4.2 Gaussian noise. (a) Original Image; (b) Noisy Image; (c) Denoised Image using r3 = 0.1; (d) Denoised Image using r3 = 0.5; (e) Denoised Image using r3 = 1; (f) Denoised Image using r3 = 10; (g) Denoised Image using r3 = 25; (h) Denoised Image using r3 = 100. . . . . . . . . . . . . . . . . . . . . . . . 59
4.3 Salt-and-pepper noise with density 0.2. (a) Noisy Image; (b) Denoised Image using r2 = 5, r3 = 50; (c) Denoised Image using r2 = 10, r3 = 50; (d) Denoised Image using r2 = 25, r3 = 50; (e) Denoised Image using r2 = 25, r3 = 100; (f) Denoised Image using r2 = 50, r3 = 100; (g) Denoised Image using r2 = 100, r3 = 100; (h) Denoised Image using r2 = 200, r3 = 100. . . . . . . . . . 61
CHAPTER 1
INTRODUCTION
The Euler’s elastica functional has the form
$$E(\gamma) = \int_\gamma (a + b\kappa^2)\, ds \qquad (1.1)$$
where κ is the curvature of a curve γ and s is the arc length, as mentioned by Horn in [21].
It has a variety of applications in image processing.
One of the applications of Euler’s elastica functional is segmentation with depth,
which requires segmenting the objects in an image while identifying their occlusion relations.
Nitzberg, Mumford, and Shiota [32] proposed a variational model by minimizing the NMS
energy functional that includes the term
$$\int_{\partial R} \varphi(\kappa)\, ds \qquad (1.2)$$
where φ is a positive, convex, and even function of the curvature κ of the boundary of the
set R. Their model can detect both the shapes and the ordering of the objects.
Masnou and Morel defined disocclusion to be the process of interpolating the missing
areas from their surroundings [30]. They proposed a minimization problem using the angle
total variation along the level lines to perform disocclusion:
$$C = \sum_{i,j} \int_{L_{i,j}} \big(1 + |\sigma(s)|\big)\, ds \qquad (1.3)$$
where $L_{i,j}$ denotes the level lines. They also presented another minimization problem which
has a better performance in image restoration:
$$\min \int_\Omega \big(1 + |\mathrm{curv}\, v|^p\big)\, |Dv|, \quad p > 1 \qquad (1.4)$$
where Ω is the image domain, curv v is the curvature and v is the restored image. More
precisely, using the coarea formula and change of variable, (1.4) can be expressed as
$$\min \int_\Omega \left(1 + \left|\nabla\cdot\frac{\nabla u}{|\nabla u|}\right|^p\right) |\nabla u|, \quad p > 1 \qquad (1.5)$$
Chan, Kang, and Shen [39] also proposed an Euler’s elastica based variational in-
painting model (CKS model):
$$J_\lambda(u) = \int_{E \cup D} (a + b\kappa^2)\,|\nabla u| + \frac{\lambda}{2}\int_E (u - u_0)^2 \qquad (1.6)$$
where D is the inpainting domain, E is any closed domain in the complement of D, λ is a
constant, u is the inpainted image and u0 is the observed image. They derived the fourth-
order Euler-Lagrange equation corresponding to the variational elastica energy functional:
$$\frac{\partial u}{\partial t} = |\nabla u|\,\nabla\cdot\vec{V} - \lambda_E\,(u - u_0) \qquad (1.7)$$
and
$$\vec{V} = \kappa^2\,\vec{n} - \frac{\vec{t}}{|\nabla u|}\,\frac{\partial\big(2\kappa|\nabla u|\big)}{\partial\vec{t}} \qquad (1.8)$$
They then evolved equation (1.7) using the gradient descent method. The CKS method
can give good inpainting results. However, the computational cost is very high due to the
high-order derivatives of the equations.
Tai, Hahn, and Chung [40] proposed an Euler’s elastica based variational model:
$$\int_\Omega \left(a + b\left(\nabla\cdot\frac{\nabla u}{|\nabla u|}\right)^2\right)|\nabla u| + \frac{\eta}{s}\int_\Gamma |u - u_0|^s \qquad (1.9)$$
where η and s are both constants, Ω is the image domain and Γ is the set depending on
the imaging tasks. This model can be used in different tasks, such as image denoising, image
inpainting, and image deblurring.
In this dissertation, we consider an Euler’s elastica based variational model that can
be applied to image denoising, image inpainting, image zooming, and image deblurring. In
the following sub-sections, we will briefly talk about the development of each one in the
literature.
1.1 Image Denoising
Image noise is a random variation of brightness or color information. It can signif-
icantly degrade image quality. Reducing image noise is an important part in image process-
ing. Many algorithms have been developed to achieve this goal. However, it is very hard to
perfectly remove the noise while preserving fine, low-contrast detail that may have charac-
teristics similar to noise. One common kind of noise is the Gaussian noise. This kind of noise
arises during acquisition, e.g. sensor noise caused by poor illumination, high temperature,
or electronic circuit noise. The salt-and-pepper noise is another kind of noise which can be
caused by transmission errors. There are also other kinds of noise, such as the Poisson noise
and the speckle noise.
Huang, Yang, and Tang presented a median filtering algorithm for image denoising
[23]. The algorithm uses a non-linear filter that predicts the pixel in the true image by
using the median of its "neighbourhood", since the neighboring pixels are expected to have
similar values. The median filter works well on removing the salt-and-pepper noise because
one extreme value (noise) will not affect the median. However, it may not be ideal for other
types of image noise (e.g. Gaussian white noise).
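As a minimal illustration of this idea (not part of the algorithms developed later), the following Python sketch applies a 3 × 3 median filter to a synthetic image corrupted by salt-and-pepper noise; the test image, the noise level, and the window size are arbitrary choices made only for this example.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))    # smooth synthetic test image
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05                  # corrupt 5% of the pixels
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())  # salt or pepper values
denoised = median_filter(noisy, size=3)                # median over each 3x3 neighbourhood
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```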
Another filter is the mean filter. The idea of the mean filter is the same as the median
filter except that people take the mean of the neighbourhood instead of the median. The
mean filter works better on Gaussian white noise, but has its own disadvantages. While
removing the noise, details and edges of the true image are also reduced since all pixels are
treated the same way. One may use a weighted mean but the blurring effect still exists.
The Gaussian filter is a commonly used technique to remove the image noise. A
Gaussian filter modifies the noisy image by convolution with a Gaussian function. Although
Gaussian filtering is effective at removing Gaussian noise, it tends to over-smooth the
image. To avoid the blurring effect of the Gaussian filter, one can perform the
convolution only in the direction orthogonal to the gradient of the image. This is known as
the anisotropic diffusion (or the anisotropic filter), as mentioned in [34].
Tomasi and Manduchi proposed the bilateral filter [42] in 1998. Similar to the mean
filter, the intensity value of a given pixel is replaced by a weighted average of its neighbours.
The weights are given by a function that depends on the image distance and the color
intensity. The bilateral filter preserves sharp edges, but may cause staircase effects.
Rudin, Osher, and Fatemi introduced a PDE-based denoising method [36] that uses
the total variation of the image as regularizer, known as the ROF model:
$$u = \arg\min_u E_{rof}(u) = \int_\Omega |\nabla u| + \frac{\lambda}{2}\,\|K * u - f\|^2 \qquad (1.10)$$
where λ is a constant, f is the observed image, and u is the denoised image. The authors
obtained the corresponding Euler-Lagrange equation and used the gradient descent method
to evolve the equation:
$$\frac{\partial u}{\partial t} = \nabla\cdot\frac{\nabla u}{\sqrt{|\nabla u|^2 + \varepsilon}} + K^{*} * (f - K * u) \qquad (1.11)$$
where ε is a small positive number to avoid singularities and K^* is the complex conjugate
transpose of K. The ROF model preserves the edges very well while removing the image
noise. However, the strict constraint of time step size makes the algorithm slow and the
staircase effect arises since it utilizes only the first-order derivative. The total variation-
based model has been extended to higher order models, such as [14,27,28,47].
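To make the time-marching idea concrete, here is a small Python sketch of the explicit gradient descent (1.11) for the pure denoising case (K equal to the identity), using periodic finite differences; the step size, the regularization constant, the fidelity weight, and the iteration count are illustrative assumptions, and the tiny time step reflects exactly the stability restriction discussed above.

```python
import numpy as np

def rof_denoise(f, lam=0.1, eps=1e-2, dt=0.01, n_iter=500):
    """Explicit gradient-descent sketch of the ROF flow (1.11) with K = identity."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                 # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)          # regularised gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (div + lam * (f - u))              # curvature flow + fidelity term
    return u
```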
TV based denoising algorithms can also be extended to color images. In [9], Blomgren
and Chan derived a vectorial total variation (VTV) norm. For color images, one can
simply replace the TV norm with the VTV norm in both the fidelity term and the regular-
ization term of the minimization functional (1.6). As mentioned in [41], efficient techniques
such as augmented Lagrangian method, dual methods, and split Bregman iteration can be
applied to improve the original VTV model.
Tai, Hahn, and Chung [40] proposed a denoising model based on Euler’s elastica
functional:
$$\int_\Omega \left(a + b\left(\nabla\cdot\frac{\nabla u}{|\nabla u|}\right)^2\right)|\nabla u| + \frac{\eta}{s}\int_\Gamma |u - u_0|^s \qquad (1.12)$$
They chose s = 1 for the salt-and-pepper noise and s = 2 for the Gaussian white noise.
Their work shows that the Euler’s elastica based model prevents the staircase effect and
yields smoother results.
1.2 Image Interpolation/Inpainting
Another important task in image restoration is image interpolation (also known as
image inpainting). In the museum world, it is very likely that some valuable paintings or
photographs are damaged. Restoring the damaged part is usually carried out by a skilled
person. The same task can also be done using the technique of image inpainting. In image
inpainting, the damaged areas in an image need to be interpolated. The term "image
inpainting" was first introduced by Bertalmio, Sapiro, Caselles, and Ballester (BSCB) in [7].
The area Ω to be inpainted is interpolated along the isophote lines from the boundary ∂Ω.
The authors used the following formula to update the inpainting area:
$$I^{n+1}(i,j) = I^n(i,j) + \Delta t\, I^n_t(i,j), \quad \forall (i,j) \in \Omega \qquad (1.13)$$
where I(i,j) is the pixel value at the grid point (i,j), n indicates the iteration step, and the
Laplacian $I^n_{xx}(i,j) + I^n_{yy}(i,j)$ is used as $I^n_t(i,j)$ to update the image information. Only the
inpainting area needs to be given. However, the proposed model will not reproduce large
texture areas.
Caselles, Morel, and Sbert [13] proposed an axiomatic approach to image interpola-
tion. In their paper, the curvature operator is chosen to be $I^n_t(i,j)$ in equation (1.13).
The work of Chan and Shen in [38] focused on local inpainting for non-texture images.
The authors proposed three general principles for non-texture image inpaintings and also
proposed a TV inpainting model extended from the original ROF model [36]:
$$J^a_\lambda[u] = \int_{E\cup D} \sqrt{a^2 + |\nabla u|^2} + \frac{\lambda}{2}\int_E |u - u_0|^2 \qquad (1.14)$$
where a and λ are constants, D is the inpainting domain, and E is any closed domain in
$D^C$. To minimize the functional (1.14), they applied a Gauss-Jacobi type iterative scheme
instead of the time marching method, resulting in a faster and more stable process. However,
if the inpainting domain is too "wide", the TV inpainting scheme may not return a good
result (See Figure 1.1). A Mumford-Shah [31] segmentation-based inpainting model was also
introduced in their paper, but is numerically less convenient than the TV inpainting model.
Figure 1.1: TV inpainting result when the inpainting domain is too "wide".
In order to fix the issue that appeared in the TV model, Chan and Shen proposed a
new inpainting model based on curvature-driven diffusion (CDD) [15]. Since the curvature κ
goes to infinity at the edge of the disconnected remaining parts of an image to be inpainted,
the authors add an "annihilator" function to control large curvatures:
$$g(s) = \begin{cases} 0, & s = 0 \\ \infty, & s = \infty \\ s^p, & 0 < s < \infty,\ p \ge 1 \end{cases}$$
and the inpainting process becomes more stable.
Ballester, Bertalmio, Caselles, Sapiro, and Verdra [5] introduced an inpainting algo-
rithm by using the vector field and the gray values. Their minimization functional is given
by:
$$\int_\Omega |\mathrm{div}(\theta)|^p\,(a + b\,|\nabla k * u|) + \alpha \int_\Omega \big(|\nabla u| - \theta\cdot\nabla u\big) \qquad (1.15)$$
where θ is a vector field that satisfies |θ| < 1, k is a smooth kernel, and a, b, and α are positive
constants. Notice that if u is the characteristic function and p = 2, the first integral term
is exactly the Euler's elastica. The second integral term is a penalty term since in the ideal
situation θ and u will satisfy the condition θ = ∇u/|∇u|.
Interpolation via diffusion always blurs the filled-in area and hence is not effective for restoring
textures. Efros and Leung proposed a texture synthesis algorithm to fill in the missing
texture parts [19]. In their work, they assumed that the statistical information of the neigh-
bourhood of a given pixel (considered as a square window) is independent of the rest of the
image. They used heuristics to help synthesize textures pixel by pixel. The algorithm is
effective at restoring textures and filling holes, but it is slow.
Bertalmio, Vese, and Sapiro presented a structure and texture image inpainting
method in [8]. Instead of directly inpainting the given image, they first decomposed the
image into a sum of a structure based image and a texture based image. They used the
BSCB model [7] to reconstruct the structure based image and a texture synthesis algorithm
given by Efros and Leung [19] to restore the texture based image.
Criminisi, Perez, and Toyama proposed an effective and very efficient exemplar-based
inpainting method [18]. Usually the inpainting area is propagated inwards from the bound-
ary, but the authors introduced a "confidence value" to determine the filling order and en-
courage both structure and texture propagation. The algorithm automatically searches the
source exemplar to fill in the inpainting area pixel-by-pixel and then updates the confidence
value for the next iteration.
To deal with non-local geometric features such as long edges, Cao, Gousseau, Masnou,
and Perez presented an exemplar-based inpainting method using the geometric information
of the image [12]. The geometric sketch, which is a piecewise constant approximation, is
first computed as a guide of their algorithm by segmentation. They minimized the Euler’s
elastica functional by selecting a proper Euler’s spiral. The method can successfully handle
sharp discontinuities. However, as mentioned in the paper [12], it may not work well if the
sketch to be inpainted is too complicated.
Traditional image inpainting methods usually consider the image in the pixel domain.
Chan, Shen, and Zhou defined the inpainting problem in the wavelet domain [16]. Based on
the fact that the damaged pixels result in loss of wavelet coefficients in the wavelet domain,
they combined the TV model together with the standard wavelet transformation:
$$u(\beta, x) = \sum_{j,k} \beta_{j,k}\,\psi_{j,k}(x) \qquad (1.16)$$
The mathematical model then becomes:
$$\min_{\beta_{j,k}} F(u, z) = \int |\nabla u(\beta, x)| + \sum_{j,k} \lambda_{j,k}\,(\beta_{j,k} - \alpha_{j,k})^2 \qquad (1.17)$$
where $\alpha_{j,k}$ are the wavelet coefficients to be restored. Their model can successfully restore
the structure even if there is a large damaged area.
The CKS model introduced by Chan, Kang, and Shen [39] is:
$$J_\lambda(u) = \int_{E\cup D} (a + b\kappa^2)\,|\nabla u| + \frac{\lambda}{2}\int_E (u - u_0)^2 \qquad (1.18)$$
Since the model uses Euler's elastica as the regularizer, the inpainting results are good. However,
because of the high-order derivatives in the Euler-Lagrange equation and the constraint of
the time-step size when using the gradient descent method, the computational cost is very
high.
The inpainting model based on Euler’s elastica functional given by Tai, Hahn, and
Chung [40] is:
$$\int_\Omega \left(a + b\left(\nabla\cdot\frac{\nabla u}{|\nabla u|}\right)^2\right)|\nabla u| + \eta\int_\Gamma |u - u_0| \qquad (1.19)$$
where Ω is the image domain and Γ is the inpainting domain.
1.3 Image Zooming
Image zooming is closely related to image interpolation. People want to increase
the size of a given image while keeping the high resolution. Since the enlarged image has
many more pixels than the original one, the empty pixel values in the new image need to be
interpolated.
One simple interpolation method is the nearest neighbor (NN) interpolation. Using
this method, one only needs to replace the empty pixel value with the nearest known pixel
value. The NN scheme has proved to be very fast, but does not give good interpolation
result.
The bilinear filter (or bilinear interpolation) is another good image interpolation tech-
nique. For any pixel to be interpolated, the idea is to calculate the weighted average of the
closest 4 pixels (located in diagonal directions). In other words, one first performs the linear
interpolation in one direction and then in the other direction. This technique provides better
interpolation results than the NN interpolation at an acceptable computational cost, but
causes a blurring effect.
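As an illustration of the bilinear idea, the following Python sketch magnifies an M × N image by an integer factor r onto an [r(M − 1) + 1] × [r(N − 1) + 1] grid (the same grid convention used for zooming later in this dissertation); it is a generic bilinear implementation written for this example, not the method proposed here.

```python
import numpy as np

def bilinear_zoom(u0, r):
    """Bilinear magnification of an M x N image by an integer factor r (a sketch)."""
    M, N = u0.shape
    y = np.linspace(0, M - 1, r * (M - 1) + 1)        # fractional row coordinates
    x = np.linspace(0, N - 1, r * (N - 1) + 1)        # fractional column coordinates
    i0 = np.floor(y).astype(int); j0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, M - 1); j1 = np.minimum(j0 + 1, N - 1)
    wy = (y - i0)[:, None]; wx = (x - j0)[None, :]    # interpolation weights
    top = (1 - wx) * u0[np.ix_(i0, j0)] + wx * u0[np.ix_(i0, j1)]
    bot = (1 - wx) * u0[np.ix_(i1, j0)] + wx * u0[np.ix_(i1, j1)]
    return (1 - wy) * top + wy * bot                  # blend the two row interpolations
```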
The bicubic interpolation is a better choice when people only focus on the quality
of the interpolation result. Compared with the bilinear interpolation, which only considers
the 4 neighboring pixels, the bicubic interpolation fills in the empty pixels using 16 pixels.
Hence it results in smoother images and preserves more details, but may cause overshoot.
This technique can be done by cubic spline [22]. In [24] Keys also derived a cubic convolution
scheme to accomplish the bicubic interpolation, which is more efficient than the cubic spline
scheme.
For low resolution images with few colors (i.e. pixel art images), the hqx filters (hq2x,
hq3x, hq4x) will give better results. The hqx filters were developed by Maxim Stepin where
"hq" means "high quality" and "x" indicates the magnifying factor. It first compares the
color value of a given pixel with its 8 neighbors, determining whether they are similar or not
to the given one, and then fills in the missing pixels according to a predefined table.
Allebach and Wong proposed an edge-directed interpolation (EDI) in [1]. The idea
of using the edge information is to prevent smoothing across the edges. They first used a
center-on-surround-off (COSO) filter to obtain the sub-pixel edge map and then used the
bilinear filter to interpolate the image. The problem of their method is that it also produces
noisy artifacts.
To overcome the drawback of the original EDI method, Li and Orchard proposed a
new edge directed interpolation [25]. They take the local covariance characteristics in the
lower resolution image into account. Since higher local variance suggests a larger change,
it gives us the information about the edges. Thus interpolations along the edges without
crossing are guaranteed. This new EDI method is a good trade-off between the computational
cost and the image quality.
Based on the TV inpainting model, Chan and Shen proposed a TV zooming model
[38]:
$$J_\lambda[u] = \sum_{\alpha\in\Omega} |\nabla_\alpha u| + \sum_{\alpha\in\Omega} \frac{\lambda_\alpha}{2}\,(u_\alpha - u^0_\alpha)^2 \qquad (1.20)$$
where α = (i, j) is a grid point in the image domain Ω and $|\nabla_\alpha u|$ is the local variation.
In [40], Tai, Hahn, and Chung proposed an Euler's elastica based zooming model:
$$\int_\Omega \left(a + b\left(\nabla\cdot\frac{\nabla u}{|\nabla u|}\right)^2\right)|\nabla u| + \eta\int_\Gamma |u - u_0| \qquad (1.21)$$
where Ω is the domain of the enlarged image and Γ is the set containing the pixel values of
the original image.
1.4 Image Deblurring
There are several possible reasons that can cause the blurring of an image: long
exposure times; movements while capturing the image; out-of-focus optics; use of a wide-
angle lens; atmospheric turbulence; and scattered light distortion. The blurring effect can
be described as a convolution process, i.e.
$$g(x, y) = h(x, y) * f(x, y) + n(x, y) \qquad (1.22)$$
where g is the observed (blurry) image, f is the original image, h is the blurring kernel,
also called the point spread function (PSF), and n is the additive noise. Linear motion
kernels, Gaussian kernels, and disk kernels are some commonly used PSF’s to simulate the
motion blur, the Gaussian blur, and the out-of-focus blur, respectively. In the situations
where the PSF is unknown, people need to perform the blind deconvolution. More precisely,
the unknown PSF needs to be estimated based on statistics such as maximum likelihood
estimation (MLE). If the PSF is known, the deblurring problem is actually a deconvolution
problem.
In the literature, the inverse filter is the most straightforward deblurring technique.
According to the convolution theorem, equation (1.22) is equivalent to:
$$G(x, y) = H(x, y)\,F(x, y) + N(x, y) \qquad (1.23)$$
where G, H, F , and N are the corresponding Fourier transforms of g, h, f , and n, respec-
tively. Therefore, one may directly obtain the Fourier transform of the original image F
from (1.23), i.e.
$$\hat{F}(x, y) = \frac{G(x, y)}{H(x, y)} = F(x, y) + \frac{N(x, y)}{H(x, y)} \qquad (1.24)$$
However, most of the time the noise N exists and is unknown. The second term of (1.24) will
dominate the first term as H(x, y) becomes small. Thus the inverse filter will fail to give the
original image.
One basic deblurring method is the Wiener filter, which was first introduced by N.
Wiener during World War II [45]. The Wiener filter is based on the minimum mean square error
estimator and can be expressed as:
$$W(x, y) = \frac{H^*(x, y)\, S_f(x, y)}{|H(x, y)|^2\, S_f(x, y) + S_\eta(x, y)} \qquad (1.25)$$
where H^* is the complex conjugate of the blurring kernel H, and $S_f$ and $S_\eta$ are the power
spectra of the original image and the additive noise, respectively. The Wiener filter assumes
the signal and the additive noise to be stationary stochastic processes with known spectral
characteristics. Hence it does not work well when the SNR is low.
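A minimal frequency-domain sketch of the Wiener deconvolution (1.25) is given below; for simplicity it replaces the power-spectrum ratio $S_\eta/S_f$ with a constant noise-to-signal ratio `nsr`, and it assumes the PSF `h` has already been zero-padded to the image size and centred at the array origin. Both are assumptions of this illustration, not part of the original filter.

```python
import numpy as np

def wiener_deconvolve(g, h, nsr=0.01):
    """Simplified Wiener deconvolution sketch: g is the blurred image,
    h the PSF padded to g's shape, nsr an assumed constant noise-to-signal ratio."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(np.fft.ifftshift(h))        # PSF shifted so its centre sits at the origin
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # constant-nsr version of (1.25)
    return np.real(np.fft.ifft2(W * G))
```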
Deblurring can also be achieved by considering the Laplacian as a smoothness mea-
surement, i.e. minimizing the functional
$$C = \sum_x \sum_y |\nabla^2 f(x, y)|^2 \qquad (1.26)$$
together with the constraint:
$$\|G(x, y) - H(x, y)\,F(x, y)\| = \|N(x, y)\| \qquad (1.27)$$
This is the so-called constrained least squares filtering (CLS). According to the Lagrangian
method, one can convert the constrained minimization problem into an unconstrained one,
which yields the following solution:
$$M(x, y) = \frac{H^*(x, y)}{|H(x, y)|^2 + \gamma\,|P(x, y)|^2} \qquad (1.28)$$
where γ is the Lagrange multiplier and P(x, y) is the Fourier transform of the Laplacian
operator. The CLS filter requires the mean and the variance of the noise and is less efficient
than the Wiener filter, but gives better results.
The Richardson-Lucy (RL) algorithm was presented by Richardson in [35] and Lucy
in [26] respectively. They assumed that the pixels of the original image follow a Poisson
distribution. The iterative formula is
$$f^{n+1}(x, y) = f^n(x, y)\left[\frac{g(x, y)}{f^n(x, y) * H(x, y)} * H^*(x, y)\right] \qquad (1.29)$$
where n indicates the iteration and $f^0 = g$. It can be proved that the iteration converges
to the maximum likelihood solution. However, just like the Wiener filter, the RL algorithm
will fail when its assumption is not met, generating disturbing ringing artifacts.
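The plain RL iteration (1.29) can be sketched in a few lines of Python; the PSF normalization, the iteration count, and the small constant added to avoid division by zero are assumptions of this illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(g, h, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy iteration (1.29), written as a sketch.
    g is the blurred image, h the known PSF (assumed normalised to sum 1);
    no regularisation is used, so ringing can appear as described above."""
    f = g.copy()                            # f^0 = g, as in the text
    h_flip = h[::-1, ::-1]                  # adjoint (flipped) kernel H^*
    for _ in range(n_iter):
        blurred = fftconvolve(f, h, mode="same")
        ratio = g / (blurred + eps)         # eps guards against division by zero
        f = f * fftconvolve(ratio, h_flip, mode="same")
    return f
```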
To address the issue of the RL iteration algorithm, Yuan, Sun, Quan, and Shum pro-
posed the bilateral Richardson-Lucy (BRL) algorithm [48]. They added a new regularization
term $E_B(u)$ to help preserve the edges, which leads to a new iterative formula:
$$f^{n+1}(x, y) = \frac{f^n(x, y)}{1 + \lambda\,\nabla E_B(f^n(x, y))}\left[\frac{g(x, y)}{f^n(x, y) * H(x, y)} * H^*(x, y)\right] \qquad (1.30)$$
(1.30)
where the term ∇EBpfnpx, yqq can be computed by the bilateral filter. The BRL algorithm
is slower than the original RL algorithm because of the bilateral filter. It can reduce the
number of ringing artifacts, but also reduce some details and sharpen some edges.
Yuan et al. [48] also introduced a joint bilateral Richardson-Lucy (JBRL) algorithm.
They modified the regularization term in BRL by taking the "guide" image into account.
The idea is to first obtain a deconvolved image without ringing artifacts, then use this
image to guide the next deconvolution and get an image with higher resolution. This process
repeats until a well-deconvolved image is acquired.
Unlike the previously mentioned methods, the total variation (TV) deconvolution can
be used when there is no known point spread function, i.e. blind deconvolution. The TV
deconvolution model proposed by Vogel and Oman [43] is:
$$\min_{f \in BV} \|H * f - g\|^2 + \lambda_f\,\|f\|_{TV} \qquad (1.31)$$
Based on their work, Chan and Wong extended this model to the blind deconvolution case
in [17] using the following TV minimization:
$$\min_{f \in BV} \|k * f - g\|^2 + \lambda_f\,\|f\|_{TV} + \lambda_k\,\|k\|_{TV} \qquad (1.32)$$
where k is the unknown PSF. The TV deconvolution method does not create ringing artifacts,
but smooths the areas with high variation, so some details are lost.
In this dissertation, we will also try to extend the Euler’s elastica based variational
model to image deblurring.
CHAPTER 2
AUGMENTED LAGRANGIAN METHOD FOR EULER’S ELASTICA BASED VARIATIONAL MODELS
2.1 Augmented Lagrangian Method For Euler’s Elastica Based Variational Mod-
els By Tai et al.
Tai, Hahn, and Chung introduced an Euler’s elastica functional for image denoising,
image inpainting, and image zooming in [40]:
$$\int_\Omega \left(a + b\left(\nabla\cdot\frac{\nabla u}{|\nabla u|}\right)^2\right)|\nabla u| + \frac{\eta}{s}\int_\Gamma |u - u_0|^s \qquad (2.1)$$
|u´ u0|s (2.1)
where Ω is the image domain. For image denoising, Γ = Ω; for image inpainting, Γ is the
inpainting area; for image zooming, Ω is the enlarged image domain and Γ is the set of pixels
from the original image. The authors set s = 1 for salt-and-pepper noise removal, image
inpainting, and image zooming, and s = 2 for Gaussian white noise removal.
Traditionally, to minimize this functional, one needs to obtain the corresponding
Euler-Lagrange equation and then solve it using the gradient descent method. How-
ever, since the time step size must satisfy the CFL condition to guarantee the convergence,
the step size is required to be very small and the computational cost is very high. There-
fore, the development of fast and reliable methods is necessary. Fast methods, such as
the multigrid method [10], the augmented Lagrangian method [4, 40, 41], and the homo-
topy method [46] have drawn a lot of attention. In this dissertation, we will focus on the
augmented Lagrangian method (ALM).
The augmented Lagrangian method is a technique for turning constrained minimiza-
tion problems into unconstrained ones. That is, for a constrained problem:
$$\min F(x) \quad \text{subject to } C_i(x) = 0,\ i = 1, 2, 3, \ldots \qquad (2.2)$$
Using the ALM, the original problem (2.2) will be converted into an unconstrained problem
as follows:
$$\min G(x) = F(x) + \sum_i \frac{\mu_k}{2}\,C_i(x)^2 + \sum_i \lambda_i\,C_i(x) \qquad (2.3)$$
where the $\mu_k$'s are constants and the $\lambda_i$'s are the Lagrange multipliers. Then the minimization
problem (2.3) can be solved iteratively. That is, one finds the minimizer of (2.3) during each
iteration and then updates the Lagrange multipliers by
$$\lambda_i^k = \lambda_i^{k-1} + \mu_k\,C_i(x^k) \qquad (2.4)$$
where $x^k$ is the minimizer of (2.3) at the $k$th iteration.
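The following toy Python sketch illustrates the generic iteration (2.3)-(2.4) on a two-variable problem with a single linear constraint; the objective, penalty parameter, and iteration count are invented purely for this illustration and have nothing to do with the imaging models considered later.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize F(x) = (x0 - 1)^2 + (x1 - 2)^2 subject to C(x) = x0 + x1 - 1 = 0.
F = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
C = lambda x: x[0] + x[1] - 1.0

mu, lam = 10.0, 0.0                 # penalty parameter mu_k and multiplier lambda
x = np.zeros(2)
for k in range(20):
    G = lambda x: F(x) + 0.5 * mu * C(x) ** 2 + lam * C(x)   # augmented Lagrangian (2.3)
    x = minimize(G, x).x                                      # inner unconstrained minimization
    lam = lam + mu * C(x)                                     # multiplier update (2.4)

print(x, C(x))   # approaches the constrained minimizer (0, 1) with C(x) -> 0
```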
In order to apply the ALM, the authors of [40] first introduced four auxiliary variables
v, p, n, and m to turn the minimization problem of (2.1) into a constrained minimization
problem, i.e.
$$\min_{v, u, m, p, n} \int_\Omega \big(a + b(\nabla\cdot n)^2\big)|p| + \frac{\eta}{s}\int_\Gamma |v - u_0|^s$$
$$\text{s.t.}\quad v = u,\quad p = \nabla u,\quad n = m,\quad |p| = m\cdot p,\quad |m| \le 1. \qquad (2.5)$$
and then they obtained the corresponding Lagrangian functional
$$\begin{aligned}
L(v, u, p, m, n; \lambda_1, \lambda_2, \lambda_3, \lambda_4) = {} & \int_\Omega \big(a + b(\nabla\cdot n)^2\big)|p| + \frac{\eta}{s}\int_\Gamma |v - u_0|^s \\
& + r_1\int_\Omega \big(|p| - m\cdot p\big) + \int_\Omega \lambda_1\big(|p| - m\cdot p\big) \\
& + \frac{r_2}{2}\int_\Omega |p - \nabla u|^2 + \int_\Omega \lambda_2\cdot(p - \nabla u) \\
& + \frac{r_3}{2}\int_\Omega (v - u)^2 + \int_\Omega \lambda_3\,(v - u) \\
& + \frac{r_4}{2}\int_\Omega |m - n|^2 + \int_\Omega \lambda_4\cdot(m - n) + \delta_R(m)
\end{aligned} \qquad (2.6)$$
where
$$\delta_R(m) = \begin{cases} 0, & |m| \le 1 \\ \infty, & \text{otherwise} \end{cases}$$
Since the saddle points of the Lagrangian functional (2.6) correspond to its local
minimizers, during each iteration, one needs to solve the minimization problem of each
variable while keeping the others fixed. This leads to the following sub-problems of each
variable:
$$\varepsilon_1(v) = \frac{\eta}{s}\int_\Gamma |v - u_0|^s + \int_\Omega \left[\frac{r_3}{2}(v - u)^2 + \lambda_3 v\right]$$
$$\varepsilon_2(u) = \int_\Omega \left[\frac{r_2}{2}(p - \nabla u)^2 - \lambda_2\cdot\nabla u + \frac{r_3}{2}(v - u)^2 - \lambda_3 u\right]$$
$$\varepsilon_3(m) = \delta_R(m) + \int_\Omega \left[\frac{r_4}{2}(n - m)^2 - \lambda_4\cdot m\right]$$
$$\varepsilon_4(p) = \int_\Omega \left[\big(a + b(\nabla\cdot n)^2\big)|p| + (r_1 + \lambda_1)\big(|p| - m\cdot p\big) + \frac{r_2}{2}(p - \nabla u)^2 + \lambda_2\cdot p\right]$$
$$\varepsilon_5(n) = \int_\Omega \left[b(\nabla\cdot n)^2\,|p| + \frac{r_4}{2}(n - m)^2 + \lambda_4\cdot n\right] \qquad (2.7)$$
Then the Lagrange multipliers need to be updated.
Numerical experiments have shown that the ALM for the Euler’s elastica based
model is very accurate and efficient. However, since there are too many tuning parame-
ters ($r_i$'s, i = 1, 2, 3, 4), choosing the appropriate ones becomes time-consuming. Therefore,
we want to reduce the number of parameters to be tuned and keep the advantages of the
original algorithm (accuracy and efficiency).
2.2 An Euler’s Elastica Based Segmentation Model By Zhu et al.
Bae, Tai, and Zhu proposed an L1-Euler’s elastica based Chan-Vese segmentation
model in [4]:
$$E(\phi, c_1, c_2) = \int_\Omega (f - c_1)^2 H(\phi) + (f - c_2)^2\big(1 - H(\phi)\big) + \int_\Omega \left[a + b\left|\nabla\cdot\frac{\nabla\phi}{|\nabla\phi|}\right|\right] |\nabla H(\phi)| \qquad (2.8)$$
$$H(\phi) = \frac{1}{2} + \frac{1}{\pi}\arctan\frac{\phi}{\varepsilon}, \qquad \nabla H(\phi) = \frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^2 + \phi^2}\,\nabla\phi \qquad (2.9)$$
(2.9)
where H(φ) is a regularized Heaviside function, φ is the level set function, and ε is a small positive
number.
Based on the work in Tai et al.'s paper [40], they also introduced auxiliary variables to
obtain a constrained minimization problem. In their paper, they let n = ∇φ/|∇φ| and used
the relationship between the variables p and n, i.e. |p|n − p = 0. Thus the minimization
problem becomes:
$$\min_{\phi, q, p, n} \int_\Omega (f - c_1)^2 H(\phi) + (f - c_2)^2\big(1 - H(\phi)\big) + \int_\Omega (a + b|q|)\,\frac{\varepsilon}{\pi(\varepsilon^2 + \phi^2)}\,|p|$$
$$\text{s.t.}\quad p = \nabla\phi,\quad q = \nabla\cdot n,\quad |p|n - p = 0. \qquad (2.10)$$
Therefore they acquired an augmented Lagrangian functional with fewer Lagrange multipli-
ers:
$$\begin{aligned}
L(\phi, q, p, n, c_1, c_2; \lambda_1, \lambda_2, \lambda_3) = {} & \int_\Omega (f - c_1)^2\left(\frac{1}{2} + \frac{1}{\pi}\arctan\frac{\phi}{\varepsilon}\right) + (f - c_2)^2\left(\frac{1}{2} - \frac{1}{\pi}\arctan\frac{\phi}{\varepsilon}\right) \\
& + \int_\Omega (a + b|q|)\,\frac{\varepsilon}{\pi(\varepsilon^2 + \phi^2)}\,|p| \\
& + \frac{r_1}{2}\int_\Omega |p - \nabla\phi|^2 + \int_\Omega \lambda_1\cdot(p - \nabla\phi) \\
& + \frac{r_2}{2}\int_\Omega (q - \nabla\cdot n)^2 + \int_\Omega \lambda_2\,(q - \nabla\cdot n) \\
& + \frac{r_3}{2}\int_\Omega \big||p|n - p\big|^2 + \int_\Omega \lambda_3\cdot\big(|p|n - p\big)
\end{aligned} \qquad (2.11)$$
They used the same iterative strategy in Tai et al’s paper to find the saddle points of
the Lagrangian functional. Finding the minimizer of the sub-problem of p is tricky. They
rewrote the functional corresponding to p as:
$$\varepsilon_3(p) = \int_\Omega \left\{\left[\frac{(a + b|q|)\,\varepsilon}{\pi(\varepsilon^2 + \phi^2)} + \lambda_3\cdot n\right]|p| + \frac{r_1 + r_3(1 + |n|^2)}{2}\left|p - \frac{\lambda_3 + r_1\nabla\phi - \lambda_1}{r_1 + r_3(1 + |n|^2)}\right|^2\right\} + \int_\Omega r_3\,(p\cdot n)\,|p| + c, \qquad (2.12)$$
which has the form
$$g(x) = \lambda|x| + \frac{\mu}{2}|x - a|^2 + (\nu\cdot x)\,|x|. \qquad (2.13)$$
The authors then introduced a theorem to obtain the minimizer of the functional (2.13):

Theorem 1. Assume that μ > 2|ν|. Let θ be the angle between the vector a and the minimizer of g(x) in (2.13), and α the angle between a and ν. Then the following arguments hold:

• If λ ≥ μ|a|, then g(x) attains its minimum at x = 0.

• If λ < μ|a|, the minimum can be determined according to the following four cases:

1. If a = ν = 0, the minimum occurs at x = 0 if λ > 0 and at any vector of length −λ/μ if λ < 0;

2. If a ≠ 0, ν = 0, the minimum occurs at $\left(1 - \dfrac{\lambda}{\mu|a|}\right)a$;

3. If a = 0, ν ≠ 0, the minimum occurs at $\dfrac{\lambda}{\mu - 2|\nu|}\,\dfrac{\nu}{|\nu|}$;

4. If a ≠ 0, ν ≠ 0, the angles θ and α satisfy the equation
$$\mu^2|a|\sin\theta - \mu|\nu||a|\sin\theta\cos(\theta + \alpha) + \lambda|\nu|\sin(\theta + \alpha) - \mu|a||\nu|\sin\alpha = 0, \qquad (2.14)$$
and g(x) has its minimum at $\dfrac{[\mu(b\cdot a) - \lambda]\,b}{\mu + 2\nu\cdot b}$ with b a unit vector satisfying
$$b = \frac{1}{|a|}\begin{bmatrix} \cos\tilde{\theta} & -\sin\tilde{\theta} \\ \sin\tilde{\theta} & \cos\tilde{\theta} \end{bmatrix} a, \qquad (2.15)$$
where $\tilde{\theta} = \theta$ if det[ν a] ≥ 0 and $\tilde{\theta} = -\theta$ if det[ν a] < 0.
2.3 A Novel Augmented Lagrangian Functional for Euler’s Elastica Based Vari-
ational Models
Following the idea in Zhu et al.'s paper [4], we rewrite the minimization problem of
the Euler's elastica functional (2.1) as:
$$\min_{v, u, p, n} \int_\Omega \big(a + b(\nabla\cdot n)^2\big)|p| + \frac{\eta}{s}\int_\Gamma |v - u_0|^s$$
$$\text{s.t.}\quad v = u,\quad p = \nabla u,\quad |p|n - p = 0. \qquad (2.16)$$
Using the augmented Lagrangian method, we obtain a new augmented Lagrangian
functional for Euler’s elastica based variational model with fewer Lagrange multipliers:
$$\begin{aligned}
L(v, u, p, n; \lambda_1, \lambda_2, \lambda_3) = {} & \int_\Omega \big(a + b(\nabla\cdot n)^2\big)|p| + \frac{\eta}{s}\int_\Gamma |v - u_0|^s \\
& + \frac{r_1}{2}\int_\Omega (p - \nabla u)^2 + \int_\Omega \lambda_1\cdot(p - \nabla u) \\
& + \frac{r_2}{2}\int_\Omega (v - u)^2 + \int_\Omega \lambda_2\,(v - u) \\
& + \frac{r_3}{2}\int_\Omega \big||p|n - p\big|^2 + \int_\Omega \lambda_3\cdot\big(|p|n - p\big)
\end{aligned} \qquad (2.17)$$
where λ1, λ2, and λ3 are Lagrange multipliers and r1, r2, and r3 are positive penalty pa-
rameters.
We will apply our algorithm for different tasks, including image denoising, image
inpainting, image zooming, and image deblurring.
We use the same strategy for choosing s, as mentioned in [40]. For image inpainting,
image zooming, and salt-and-pepper noise removal, we set s = 1, then the Lagrangian
functional is:
$$\begin{aligned}
L(v, u, p, n; \lambda_1, \lambda_2, \lambda_3) = {} & \int_\Omega \big(a + b(\nabla\cdot n)^2\big)|p| + \eta\int_\Gamma |v - u_0| \\
& + \frac{r_1}{2}\int_\Omega (p - \nabla u)^2 + \int_\Omega \lambda_1\cdot(p - \nabla u) \\
& + \frac{r_2}{2}\int_\Omega (v - u)^2 + \int_\Omega \lambda_2\,(v - u) \\
& + \frac{r_3}{2}\int_\Omega \big||p|n - p\big|^2 + \int_\Omega \lambda_3\cdot\big(|p|n - p\big)
\end{aligned} \qquad (2.18)$$
For salt-and-pepper noise removal, Ω is the image domain and Γ = Ω; for image inpainting,
Ω is the image domain and Γ is the inpainting domain; for image zooming, given a factor r,
an M × N image will be magnified into the domain Ω = [r(M − 1) + 1] × [r(N − 1) + 1] and
Γ = {(i, j) ∈ Ω : i ≡ 1 (mod r), j ≡ 1 (mod r)}.
To find the saddle points of the augmented Lagrangian functional (2.18), we divide
it into the following sub-problems:
$$\varepsilon_1(v) = \eta\int_\Gamma |v - u_0| + \int_\Omega \left[\frac{r_2}{2}(v - u)^2 + \lambda_2 v\right]$$
$$\varepsilon_2(u) = \int_\Omega \left[\frac{r_1}{2}(p - \nabla u)^2 - \lambda_1\cdot\nabla u + \frac{r_2}{2}(v - u)^2 - \lambda_2 u\right]$$
$$\varepsilon_3(p) = \int_\Omega \left[\big(a + b(\nabla\cdot n)^2\big)|p| + \frac{r_1}{2}(p - \nabla u)^2 + \lambda_1\cdot p + \frac{r_3}{2}\big||p|n - p\big|^2 + \lambda_3\cdot\big(|p|n - p\big)\right]$$
$$\varepsilon_4(n) = \int_\Omega \left[b(\nabla\cdot n)^2\,|p| + \frac{r_3}{2}\big||p|n - p\big|^2 + \lambda_3\cdot|p|n\right] \qquad (2.19)$$
We set s = 2 for image deblurring and Gaussian white noise removal. Notice that the
auxiliary variable v is used to avoid the nonlinearity of the fitting term in the Lagrangian
functional when s = 1. Hence it is not necessary when s = 2.
For Gaussian noise removal, the Lagrangian functional can be written as:
$$\begin{aligned}
L(u, p, n; \lambda_1, \lambda_3) = {} & \int_\Omega \big(a + b(\nabla\cdot n)^2\big)|p| + \frac{\eta}{2}\int_\Omega |u - u_0|^2 \\
& + \frac{r_1}{2}\int_\Omega (p - \nabla u)^2 + \int_\Omega \lambda_1\cdot(p - \nabla u) \\
& + \frac{r_3}{2}\int_\Omega \big||p|n - p\big|^2 + \int_\Omega \lambda_3\cdot\big(|p|n - p\big)
\end{aligned} \qquad (2.20)$$
and the corresponding sub-problems are:
$$\varepsilon_1(u) = \int_\Omega \left[\frac{r_1}{2}(p - \nabla u)^2 - \lambda_1\cdot\nabla u + \frac{\eta}{2}|u - u_0|^2\right]$$
$$\varepsilon_2(p) = \int_\Omega \left[\big(a + b(\nabla\cdot n)^2\big)|p| + \frac{r_1}{2}(p - \nabla u)^2 + \lambda_1\cdot p + \frac{r_3}{2}\big||p|n - p\big|^2 + \lambda_3\cdot\big(|p|n - p\big)\right]$$
$$\varepsilon_3(n) = \int_\Omega \left[b(\nabla\cdot n)^2\,|p| + \frac{r_3}{2}\big||p|n - p\big|^2 + \lambda_3\cdot|p|n\right] \qquad (2.21)$$
In image deblurring, the blurred image can be expressed as
$$u_0 = K * u + n, \qquad (2.22)$$
where K is the convolution operator and n is the additive noise. We can then use
$$|K * u - u_0|^2 \qquad (2.23)$$
as the fitting term in the Euler’s elastica functional. Therefore the Lagrangian functional
becomes
$$\begin{aligned}
L(u, p, n; \lambda_1, \lambda_3) = {} & \int_\Omega \big(a + b(\nabla\cdot n)^2\big)|p| + \frac{\eta}{2}\int_\Gamma |K * u - u_0|^2 \\
& + \frac{r_1}{2}\int_\Omega (p - \nabla u)^2 + \int_\Omega \lambda_1\cdot(p - \nabla u) \\
& + \frac{r_3}{2}\int_\Omega \big||p|n - p\big|^2 + \int_\Omega \lambda_3\cdot\big(|p|n - p\big)
\end{aligned} \qquad (2.24)$$
with the corresponding sub-problems
$$\varepsilon_1(u) = \int_\Omega \left[\frac{r_1}{2}(p - \nabla u)^2 - \lambda_1\cdot\nabla u + \frac{\eta}{2}|K * u - u_0|^2\right]$$
$$\varepsilon_2(p) = \int_\Omega \left[\big(a + b(\nabla\cdot n)^2\big)|p| + \frac{r_1}{2}(p - \nabla u)^2 + \lambda_1\cdot p + \frac{r_3}{2}\big||p|n - p\big|^2 + \lambda_3\cdot\big(|p|n - p\big)\right]$$
$$\varepsilon_3(n) = \int_\Omega \left[b(\nabla\cdot n)^2\,|p| + \frac{r_3}{2}\big||p|n - p\big|^2 + \lambda_3\cdot|p|n\right] \qquad (2.25)$$
2.4 Minimization of Sub-problems
In this section, we discuss the solution of each sub-problem in (2.19). The
sub-problem ε1(v) can be solved in closed form, the sub-problems ε2(u) and
ε4(n) can be solved by the fast Fourier transform, and the sub-problem ε3(p) can be solved
using the theorem introduced by Bae, Tai, and Zhu [4]. In the case s = 2, only a few
modifications are needed to obtain the minimizers.
2.4.1 Minimization of ε1(v)

Let $w = u - \lambda_2/r_2$. Then we can rewrite ε1(v) in (2.19) as
$$\varepsilon_1(v) = \eta\int_\Gamma |v - u_0| + \frac{r_2}{2}\int_\Omega (v - w)^2 + C_1, \qquad (2.26)$$
where $C_1$ is independent of v.

In the domain Ω∖Γ, the minimizer is v = w. In the domain Γ, the minimizer is given by
$$v = u_0 + M\,(w - u_0), \qquad (2.27)$$
where
$$M = \max\left(0,\ 1 - \frac{\eta}{r_2\,|w - u_0|}\right). \qquad (2.28)$$
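A direct Python transcription of this closed-form update might look as follows; the boolean mask representing Γ and the small constant guarding against division by zero are implementation assumptions of this sketch.

```python
import numpy as np

def update_v(u, u0, lam2, r2, eta, gamma_mask):
    """Closed-form v-update (2.26)-(2.28), written as a sketch.
    gamma_mask is a boolean array marking the fitting domain Gamma."""
    w = u - lam2 / r2
    diff = w - u0
    shrink = np.maximum(0.0, 1.0 - eta / (r2 * np.abs(diff) + 1e-12))  # the factor M in (2.28)
    return np.where(gamma_mask, u0 + shrink * diff, w)                 # v = w outside Gamma
```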
2.4.2 Minimization of ε2(u)

To solve ε2(u), we first find the corresponding Euler-Lagrange equation:
$$-r_1\,\Delta u + r_2\,u = -r_1\,\mathrm{div}\, p - \mathrm{div}\,\lambda_1 + r_2\,v + \lambda_2. \qquad (2.29)$$
Since the coefficients on the left-hand side are constants, we can use the FFT to find the
minimizer.
2.4.3 Minimization of ε3(p)

The sub-problem ε3(p) can be reformulated as
$$\varepsilon_3(p) = \int_\Omega \left\{\big(a + b(\nabla\cdot n)^2 + \lambda_3\cdot n\big)|p| + \frac{r_1 + r_3(1 + |n|^2)}{2}\left|p - \frac{\lambda_3 + r_1\nabla u - \lambda_1}{r_1 + r_3(1 + |n|^2)}\right|^2\right\} + \int_\Omega r_3\,(p\cdot n)\,|p| + C_3, \qquad (2.30)$$
where $C_3$ is independent of p. This functional has the form
$$g(x) = \lambda|x| + \frac{\mu}{2}|x - a|^2 + (\nu\cdot x)\,|x|. \qquad (2.31)$$
In this sub-problem, $\mu = r_1 + r_3(1 + |n|^2)$ and $\nu = r_3 n$. It is clear that the condition
μ > 2|ν| is satisfied since $r_3$ is positive and $1 + |n|^2 \ge 2|n|$. Therefore, the minimizer can be
found using the theorem given by Zhu et al. [4], as mentioned in Section 2.2.
2.4.4 Minimization of ε4(n)

Similar to solving ε2(u), we obtain the Euler-Lagrange equation for ε4(n):
$$-2b\,|p|\,\nabla(\nabla\cdot n) + r_3\,|p|^2 n = r_3\,|p|\,p - \lambda_3\,|p|, \qquad (2.32)$$
i.e.
$$-2b\,\nabla(\nabla\cdot n) + r_3\,|p|\,n = r_3\,p - \lambda_3. \qquad (2.33)$$
Since the coefficients on the left-hand side of (2.33) are not constants, we can set
$$C = \max(r_3\,|p|) + \varepsilon_n, \qquad (2.34)$$
where $\varepsilon_n$ is a small number. Then (2.33) can be reformulated as:
$$-2b\,\nabla(\nabla\cdot n) + C\,n = (C - r_3\,|p|)\,n + r_3\,p - \lambda_3, \qquad (2.35)$$
where the coefficients on the left-hand side are constants, which allows us to use the Fourier
transform.
2.4.5 Update the Lagrangian Multipliers
During each iteration, after we find the minimizers of the variables v, u, p, and n,
we will update the Lagrange multipliers $\lambda_1 = (\lambda_{11}, \lambda_{12})$, $\lambda_2$, and $\lambda_3 = (\lambda_{31}, \lambda_{32})$ as follows:
$$\lambda_1^{k+1} = \lambda_1^k + r_1\,(p^k - \nabla u^k)$$
$$\lambda_2^{k+1} = \lambda_2^k + r_2\,(v^k - u^k)$$
$$\lambda_3^{k+1} = \lambda_3^k + r_3\,(|p^k|\,n^k - p^k) \qquad (2.36)$$
2.5 Numerical Implementation
Similar to the work in [4], we use a grid system to discretize the equations in those
sub-problems. Let the image domain be Ω = {(i, j) | 1 ≤ i ≤ N_x, 1 ≤ j ≤ N_y}, such that each
pair (i, j) denotes a single pixel.
In order to apply the fast Fourier transform, periodic boundary conditions are re-
quired. We discretize the forward and backward difference operators as follows:
$$\partial_x^- u(i,j) = \begin{cases} u(i,j) - u(i-1,j), & 1 < i \le N_x, \\ u(1,j) - u(N_x,j), & i = 1, \end{cases}
\qquad
\partial_x^+ u(i,j) = \begin{cases} u(i+1,j) - u(i,j), & 1 \le i < N_x, \\ u(1,j) - u(N_x,j), & i = N_x, \end{cases}$$
$$\partial_y^- u(i,j) = \begin{cases} u(i,j) - u(i,j-1), & 1 < j \le N_y, \\ u(i,1) - u(i,N_y), & j = 1, \end{cases}
\qquad
\partial_y^+ u(i,j) = \begin{cases} u(i,j+1) - u(i,j), & 1 \le j < N_y, \\ u(i,1) - u(i,N_y), & j = N_y. \end{cases} \qquad (2.37)$$
Thus, the forward and backward gradient operators and divergence operators can be defined
as:
$$\nabla^- u(i,j) = \big(\partial_x^- u(i,j),\ \partial_y^- u(i,j)\big), \qquad \nabla^+ u(i,j) = \big(\partial_x^+ u(i,j),\ \partial_y^+ u(i,j)\big),$$
$$\mathrm{div}^- u(i,j) = \partial_x^- u(i,j) + \partial_y^- u(i,j), \qquad \mathrm{div}^+ u(i,j) = \partial_x^+ u(i,j) + \partial_y^+ u(i,j). \qquad (2.38)$$
We also define the magnitude of the vector $p = (p_1, p_2)$ at each pixel (i, j) to be
$$|p(i,j)| = \sqrt{p_1(i,j)^2 + p_2(i,j)^2}. \qquad (2.39)$$
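Under the periodic boundary conditions above, the operators (2.37)-(2.39) can be written compactly with np.roll; the axis convention used here (axis 0 for the index i, axis 1 for the index j) is an implementation assumption of this sketch.

```python
import numpy as np

# Periodic forward/backward differences (2.37) via circular shifts.
def dx_plus(u):  return np.roll(u, -1, axis=0) - u
def dx_minus(u): return u - np.roll(u, 1, axis=0)
def dy_plus(u):  return np.roll(u, -1, axis=1) - u
def dy_minus(u): return u - np.roll(u, 1, axis=1)

def grad_plus(u):                    # forward gradient, (2.38)
    return dx_plus(u), dy_plus(u)

def div_minus(p1, p2):               # backward divergence, (2.38)
    return dx_minus(p1) + dy_minus(p2)

def magnitude(p1, p2):               # |p| at each pixel, (2.39)
    return np.sqrt(p1 ** 2 + p2 ** 2)
```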
To solve for u, we apply the FFT to the Euler-Lagrange equation (2.29) and obtain:
$$\big(-2r_1\,B(i,j) + r_2\big)\,\mathcal{F}\big(u\big)(i,j) = \mathcal{F}\big(g\big)(i,j), \qquad (2.40)$$
where $\mathcal{F}$ denotes the Fourier transform, $g(i,j) = -r_1\,\mathrm{div}^- p(i,j) - \mathrm{div}^-\lambda_1(i,j) + r_2\,v(i,j) + \lambda_2(i,j)$, and B is the Fourier symbol of the discrete Laplacian operator,
$$B(i,j) = \cos\frac{2\pi(i-1)}{N_x} + \cos\frac{2\pi(j-1)}{N_y} - 2, \qquad (2.41)$$
where 1 ≤ i ≤ N_x and 1 ≤ j ≤ N_y. Then we solve for u by taking the inverse Fourier
transform $\mathcal{F}^{-1}$:
$$u = \mathcal{F}^{-1}\left[\frac{\mathcal{F}(g)}{-2r_1\,B + r_2}\right]. \qquad (2.42)$$
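A sketch of this FFT solve in Python is given below; the array shapes, variable names, and the 0-based indexing (which plays the role of (i − 1), (j − 1) in (2.41)) are assumptions of the illustration.

```python
import numpy as np

def update_u(p1, p2, lam11, lam12, v, lam2, r1, r2):
    """FFT solve of the u-subproblem, following (2.40)-(2.42) (a sketch).
    All arrays share the same Nx x Ny shape; periodic boundaries are assumed."""
    def div_minus(q1, q2):                       # backward divergence, as in (2.38)
        return (q1 - np.roll(q1, 1, axis=0)) + (q2 - np.roll(q2, 1, axis=1))

    Nx, Ny = v.shape
    g = -r1 * div_minus(p1, p2) - div_minus(lam11, lam12) + r2 * v + lam2
    i = np.arange(Nx).reshape(-1, 1)             # 0-based index = (i - 1) in the text
    j = np.arange(Ny).reshape(1, -1)
    B = np.cos(2 * np.pi * i / Nx) + np.cos(2 * np.pi * j / Ny) - 2.0   # (2.41)
    return np.real(np.fft.ifft2(np.fft.fft2(g) / (-2.0 * r1 * B + r2))) # (2.42)
```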
For the variable n, we discretize equation (2.35) as:
$$-2b\,\nabla^+\big(\mathrm{div}^- n\big) + C\,n = (C - r_3\,|p|)\,n + r_3\,p - \lambda_3. \qquad (2.43)$$
Then we use the FFT and obtain the following system of linear equations:
$$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} \mathcal{F}(n_1) \\ \mathcal{F}(n_2) \end{pmatrix} = \begin{pmatrix} \mathcal{F}(f_1) \\ \mathcal{F}(f_2) \end{pmatrix} \qquad (2.44)$$
where
$$\begin{aligned}
a_{11}(i,j) &= C - 4b(\cos z_i - 1), \\
a_{12}(i,j) &= -2b\,(1 - \cos z_j + \sqrt{-1}\sin z_j)(-1 + \cos z_i + \sqrt{-1}\sin z_i), \\
a_{21}(i,j) &= -2b\,(1 - \cos z_i + \sqrt{-1}\sin z_i)(-1 + \cos z_j + \sqrt{-1}\sin z_j), \\
a_{22}(i,j) &= C - 4b(\cos z_j - 1), \\
D(i,j) &= C^2 - 4bC\,(\cos z_i + \cos z_j - 2),
\end{aligned} \qquad (2.45)$$
with $z_i = 2\pi(i-1)/N_x$ and $z_j = 2\pi(j-1)/N_y$,
and
$$f_1 = r_3\,p_1 - \lambda_{31} + (C - r_3\,|p|)\,n_1, \qquad f_2 = r_3\,p_2 - \lambda_{32} + (C - r_3\,|p|)\,n_2. \qquad (2.46)$$
(2.46)
Thus, the minimizer n “ pn1, n2q is given by:
n1 “ F´1
„
a22F pf1q ´ a12F pf2q
D
, n2 “ F´1
„
´a21F pf1q ` a11F pf2q
D
. (2.47)
Finally, we update the Lagrange multipliers by:
$$\begin{aligned}
\lambda_{11}^{k+1}(i,j) &= \lambda_{11}^k(i,j) + r_1\big(p_1(i,j) - \partial_x^+ u\big) \\
\lambda_{12}^{k+1}(i,j) &= \lambda_{12}^k(i,j) + r_1\big(p_2(i,j) - \partial_y^+ u\big) \\
\lambda_2^{k+1} &= \lambda_2^k + r_2\,(v - u) \\
\lambda_{31}^{k+1}(i,j) &= \lambda_{31}^k(i,j) + r_3\big(|p|\,n_1(i,j) - p_1(i,j)\big) \\
\lambda_{32}^{k+1}(i,j) &= \lambda_{32}^k(i,j) + r_3\big(|p|\,n_2(i,j) - p_2(i,j)\big)
\end{aligned} \qquad (2.48)$$
We end this chapter by giving a summary of our algorithm. See Table 2.1 for details.
Table 2.1: Augmented Lagrangian Method for Euler's Elastica Based Variational Models

• Initialize all variables and Lagrange multipliers: $v^0, u^0, p^0, n^0, \lambda_1^0, \lambda_2^0, \lambda_3^0$.

• Start iteration. For k = 1, 2, 3, ..., solve the following sub-problems to obtain the approximate minimizer $(v^k, u^k, p^k, n^k)$ with fixed Lagrange multipliers $(\lambda_1^{k-1}, \lambda_2^{k-1}, \lambda_3^{k-1})$:
$$\begin{aligned}
v^k &= \arg\min_v L(v, u^{k-1}, p^{k-1}, n^{k-1}; \lambda_1^{k-1}, \lambda_2^{k-1}, \lambda_3^{k-1}) \\
u^k &= \arg\min_u L(v^k, u, p^{k-1}, n^{k-1}; \lambda_1^{k-1}, \lambda_2^{k-1}, \lambda_3^{k-1}) \\
p^k &= \arg\min_p L(v^k, u^k, p, n^{k-1}; \lambda_1^{k-1}, \lambda_2^{k-1}, \lambda_3^{k-1}) \\
n^k &= \arg\min_n L(v^k, u^k, p^k, n; \lambda_1^{k-1}, \lambda_2^{k-1}, \lambda_3^{k-1})
\end{aligned} \qquad (2.49)$$

• At the end of each iteration, update the Lagrange multipliers by:
$$\begin{aligned}
\lambda_1^k &= \lambda_1^{k-1} + r_1\,(p^k - \nabla u^k) \\
\lambda_2^k &= \lambda_2^{k-1} + r_2\,(v^k - u^k) \\
\lambda_3^k &= \lambda_3^{k-1} + r_3\,(|p^k|\,n^k - p^k)
\end{aligned} \qquad (2.50)$$

• Measure the relative residuals and stop the iteration if they are smaller than a threshold $\varepsilon_r$.
CHAPTER 3
NUMERICAL RESULTS
In this chapter, we show numerical examples using the algorithm for image denoising,
image inpainting, image zooming, and image deblurring. In order to monitor the convergence
of our algorithm, we calculate the residuals, the relative errors of the Lagrange multipliers,
and the relative error of u as follows:
$$\big(R_1^k, R_2^k, R_3^k\big) = \frac{1}{|\Omega|}\Big(|p^k - \nabla u^k|_{L^1},\ |v^k - u^k|_{L^1},\ \big||p^k|\,n^k - p^k\big|_{L^1}\Big). \qquad (3.1)$$
$$\big(L_1^k, L_2^k, L_3^k\big) = \left(\frac{|\lambda_1^k - \lambda_1^{k-1}|_{L^1}}{|\lambda_1^{k-1}|_{L^1}},\ \frac{|\lambda_2^k - \lambda_2^{k-1}|_{L^1}}{|\lambda_2^{k-1}|_{L^1}},\ \frac{|\lambda_3^k - \lambda_3^{k-1}|_{L^1}}{|\lambda_3^{k-1}|_{L^1}}\right). \qquad (3.2)$$
$$\frac{|u^k - u^{k-1}|_{L^1}}{|u^{k-1}|_{L^1}}. \qquad (3.3)$$
where $|\cdot|_{L^1}$ is the $L^1$ norm and Ω is the image domain.
It is also natural to calculate the Signal-to-Noise Ratio (SNR) and the Peak Signal-
to-Noise Ratio (PSNR) to measure the quality of the results:
$$\mathrm{SNR} = 10\log_{10}\left(\frac{\sum_{i,j}\big(u^k(i,j) - a_1\big)^2}{\sum_{i,j}\big(|u^k(i,j) - u_c(i,j)| - a_2\big)^2}\right) \qquad (3.4)$$
where $u_c$ is the original image and $a_1$ and $a_2$ are the averages of $u^k$ and $u^k - u_c$, respectively [40].
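For completeness, a small Python sketch of these quality measures is given below; it follows the variance-ratio reading of (3.4) with the noise taken as $u^k - u_c$, and the PSNR definition assumes an 8-bit intensity range, which is an assumption of the sketch rather than a formula stated in the text.

```python
import numpy as np

def snr(u, u_clean):
    """SNR in the variance-ratio sense of (3.4), with noise = u - u_clean."""
    noise = u - u_clean
    num = ((u - u.mean()) ** 2).sum()
    den = ((noise - noise.mean()) ** 2).sum()
    return 10.0 * np.log10(num / den)

def psnr(u, u_clean, peak=255.0):
    """Standard PSNR; the 8-bit peak value is an assumption of this sketch."""
    mse = ((u - u_clean) ** 2).mean()
    return 10.0 * np.log10(peak ** 2 / mse)
```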
3.1 Image Denoising
In this sub-section, we show the numerical results of image denoising. In Figures 3.1,
3.3, 3.5, 3.7, 3.9 and 3.11, we add Gaussian white noise to the original images, and in Figures
3.13, 3.15, and 3.17, we add salt-and-pepper noise to the given images. As mentioned before,
we use s = 2 for Gaussian white noise and s = 1 for salt-and-pepper noise.
In Figures 3.1, 3.3, 3.5, and 3.7, we show the denoising results for some 256 × 256 pixel
real images with Gaussian white noise. The original images are shown on the left, the noisy
images are shown in the middle, and the restored images are shown on the right. Since the
Euler’s elastica based variational model involves higher-order derivatives, it can prevent the
staircase effect and give smoother results. The plots of relative errors and the functional
energies are also given in Figures 3.2, 3.4, 3.6, and 3.8. All plots are in log-scale. As can be
seen from these plots, the algorithm is stable and convergent.
In Figures 3.9 and 3.11, we apply our algorithm to some MRI (Magnetic Reso-
nance Imaging) images. In Figure 3.9, we use a cardiac MRI image of size 256 × 256, and
in Figure 3.11, we use a brain MRI image of size 204 × 204. It is clear that the proposed
algorithm has its application in medical image analysis.
In Table 3.1, we compare the SNR and PSNR of our algorithm with those of Tai’s
ALM algorithm. Since we use fewer relaxation variables, the variables are more closely related.
Thus we obtain reasonable results with slightly higher SNR and PSNR. From all the results,
we can conclude that our algorithm keeps the advantages of the original ALM algorithm for
Euler’s elastica model in [40].
Figure 3.13 shows the denoising result of a synthetic image. Figures 3.15 and 3.17
show the results of some real images. Salt-and-pepper noise with density 0.2 is added to
the original image. As with the original ALM for the Euler's elastica model, our algorithm can
handle jump discontinuities without causing the staircase effect.
Figure 3.1: Gaussian noise. a = 1, b = 1, η = 17, r1 = 300, r3 = 0.5.
Figure 3.2: Corresponding residuals, relative errors, and functional energy.
Figure 3.3: Gaussian noise. a = 1, b = 1, η = 15, r1 = 1300, r3 = 1.
Figure 3.4: Corresponding residuals, relative errors, and functional energy.
Figure 3.5: Gaussian noise. a = 1, b = 1, η = 15, r1 = 1300, r3 = 1.
Figure 3.6: Corresponding residuals, relative errors, and functional energy.
Figure 3.7: Gaussian noise. a = 1, b = 1, η = 17, r1 = 300, r3 = 1.
Figure 3.8: Corresponding residuals, relative errors, and functional energy.
Figure 3.9: Gaussian noise. a = 1, b = 1, η = 17, r1 = 1300, r3 = 5.
Figure 3.10: Corresponding residuals, relative errors, and functional energy.
Figure 3.11: Gaussian noise. a = 1, b = 1, η = 20, r1 = 500, r3 = 3.
Figure 3.12: Corresponding residuals, relative errors, and functional energy.
Figure 3.13: Salt-and-pepper noise. a = 1, b = 15, η = 7, r1 = 1500, r2 = 100, r3 = 50. SNR = 29.06, PSNR = 30.60.
Figure 3.14: Corresponding residuals, relative errors, and functional energy.
Figure 3.15: Salt-and-pepper noise. a = 1, b = 10, η = 3, r1 = 1000, r2 = 50, r3 = 5. SNR = 22.83, PSNR = 29.19.
Figure 3.16: Corresponding residuals, relative errors, and functional energy.
Figure 3.17: Salt-and-pepper noise. a = 1, b = 10, η = 5, r1 = 1000, r2 = 50, r3 = 10. SNR = 24.44, PSNR = 30.10.
Figure 3.18: Corresponding residuals, relative errors, and functional energy.
Images         Size      Our Algorithm        Tai's Algorithm
                         SNR      PSNR        SNR      PSNR
BMW Logo       256×256   18.64    25.44       18.24    25.05
Einstein       256×256   19.76    26.01       19.37    25.63
Pumpkin        256×256   18.85    27.32       18.32    26.78
TMNT           256×256   17.60    25.93       17.26    25.60
Cardiac MRI    256×256   18.43    27.71       17.18    26.45
Brain MRI      204×204   15.24    25.64       14.65    25.04

Table 3.1: The Signal-to-Noise Ratio (SNR) and the Peak Signal-to-Noise Ratio (PSNR) of our proposed ALM algorithm compared to Tai's ALM algorithm.
3.2 Image Inpainting
Image inpainting is the process of restoring the damaged areas of digital images. In
this subsection, we show the results of image inpainting using the Euler’s elastica functional.
In our experiments, the damaged areas D are shown in gray color.
We first test our algorithm in some extreme cases. In Figure 3.19, even if the inpainting
domain is large, our algorithm can still restore the damaged area. In Figure 3.20, the
inpainting area is wide. The Euler’s elastica model can handle this issue while the TV
inpainting model [38] will fail to connect the two ends of the black bar. The images used in
Figures 3.19 and 3.20 are 64 × 64 pixels and 128 × 128 pixels, respectively.
Figure 3.19: Inpainting of a synthetic image.
Figure 3.20: Inpainting of a synthetic image.
Figures 3.21, 3.23, 3.25, and 3.27 are the inpainting results of some 256 × 256 real
images. As before, the first row shows the original images, the second row shows the
damaged images, and the third row shows the restored images. The plots of corresponding
relative errors and energy can be found in Figures 3.22, 3.24, 3.26, and 3.28. Since the
Euler's elastica functional is not convex, there is no guaranteed convergence of the algorithm.
However, all inpainting results are visually good and as can be seen from the plots, all
residuals Ri’s and Lagrange multipliers λi’s converge at the same rate.
The SNR and PSNR, together with the computational time are shown in Table 3.2.
Figure 3.21: Inpainting result. a = 1, b = 1, η = 5, r1 = 40, r2 = 20, r3 = 1.
Figure 3.22: Corresponding residuals, relative errors, and functional energy.
Figure 3.23: Inpainting result. a = 1, b = 10, η = 10, r1 = 800, r2 = 20, r3 = 10.
Figure 3.24: Corresponding residuals, relative errors, and functional energy.
Figure 3.25: Inpainting result. a = 1, b = 1, η = 5, r1 = 40, r2 = 20, r3 = 1.
Figure 3.26: Corresponding residuals, relative errors, and functional energy.
Figure 3.27: Inpainting result. a = 1, b = 1, η = 5, r1 = 40, r2 = 10, r3 = 1.
Figure 3.28: Corresponding residuals, relative errors, and functional energy.
Image       Size      SNR      PSNR     Iterations   CPU Time (sec)
BMW Logo    256×256   24.74    31.54    500          32.20
Einstein    256×256   24.34    30.60    500          37.55
Pumpkin     256×256   29.57    38.02    500          30.76
TMNT        256×256   25.97    34.31    500          32.18

Table 3.2: SNR, PSNR, and computational time of the proposed algorithm in image inpainting.
3.3 Image Zooming
In this subsection, we show the results of image zooming. A given image u0 with
size M × N will be magnified into an [r(M − 1) + 1] × [r(N − 1) + 1] image, where r is a
positive integer. Hence we can define Γ = {(i, j) ∈ Ω : i ≡ 1 (mod r), j ≡ 1 (mod r)}, and u0 in
the elastica functional will be
$$
u_0(i, j) =
\begin{cases}
u_0\!\left(\dfrac{i-1}{r} + 1,\ \dfrac{j-1}{r} + 1\right), & (i, j) \in \Gamma, \\
0, & (i, j) \notin \Gamma.
\end{cases}
\qquad (3.5)
$$
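In other words, the given pixels are placed on the sub-grid Γ of the enlarged image, the remaining entries are initialized to zero, and the fidelity term is evaluated on Γ only, so the elastica regularizer fills in the missing pixels. A minimal NumPy sketch of this initialization (an illustration, not the code used for the experiments):

    import numpy as np

    def zoom_initialize(u0, r):
        """Embed an M x N image into the [r(M-1)+1] x [r(N-1)+1] grid as in (3.5).

        Returns the initialized image and a boolean mask of the sub-grid Gamma
        (True at pixels (i, j) with i ≡ 1 and j ≡ 1 (mod r), 1-based indexing).
        """
        M, N = u0.shape
        big = np.zeros((r * (M - 1) + 1, r * (N - 1) + 1), dtype=np.float64)
        big[::r, ::r] = u0            # known pixels land on Gamma
        gamma = np.zeros(big.shape, dtype=bool)
        gamma[::r, ::r] = True
        return big, gamma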
Figure 3.29 shows the zooming result of a checkerboard. The original image (left) is 256 × 256 and is magnified by a factor r = 2. The relative errors of u and of the Lagrange multipliers can be found in Figure 3.30.

Figures 3.31 and 3.33 show the zooming results of the BMW logo and the Einstein image. The original images (left) are both 256 × 256, and the enlarged images are 511 × 511. We plot the residuals, the relative errors of u and of the Lagrange multipliers, and the functional energy in Figures 3.32 and 3.34. Our algorithm keeps edges sharp and preserves details; no blurring is introduced in the enlarged images.
Figure 3.29: Checkerboard. Left: Original Image (256×256); Right: 2x Size Image (511×511). Parameters used: a = 0.1, b = 1, η = 10, r1 = 800, r2 = 10, r3 = 5.
Figure 3.30: 2x size. Relative errors of Lagrange multipliers (left) and u (right).
Figure 3.31: BMW Logo. Left: Original Image (256×256); Right: 2x Size Image (511×511). Parameters used: a = 0.1, b = 1, η = 8, r1 = 800, r2 = 10, r3 = 5.
Figure 3.32: Corresponding residuals, relative errors, and functional energy.
Figure 3.33: Einstein. Left: Original Image (256×256); Right: 2x Size Image (511×511). Parameters used: a = 0.1, b = 1, η = 7, r1 = 800, r2 = 10, r3 = 5.
Figure 3.34: Corresponding residuals, relative errors, and functional energy.
3.4 Image Deblurring
Our proposed algorithm can also be applied to reduce the blurring effect of a given image. In this subsection, we illustrate the deblurring results. Assume that u is the true image; then the blurred image can be expressed as

$$u_0 = K * u + n, \qquad (3.6)$$

where K is the convolution operator and n is additive noise. We can then use

$$|K * u - u_0|^2 \qquad (3.7)$$
as the fitting term in the Euler's elastica functional. Therefore, the Lagrangian functional to be minimized becomes
$$
\begin{aligned}
\mathcal{L}(v, u, \mathbf{p}, \mathbf{n}; \lambda_1, \lambda_2, \lambda_3)
={}& \int_\Omega \left(a + b\,(\nabla \cdot \mathbf{n})^2\right) |\mathbf{p}|
   + \frac{\eta}{2} \int_\Gamma |K * u - u_0|^2 \\
 & + \frac{r_1}{2} \int_\Omega (\mathbf{p} - \nabla u)^2
   + \int_\Omega \lambda_1 \cdot (\mathbf{p} - \nabla u) \\
 & + \frac{r_2}{2} \int_\Omega \big|\,|\mathbf{p}|\,\mathbf{n} - \mathbf{p}\,\big|^2
   + \int_\Omega \lambda_2 \cdot \big(|\mathbf{p}|\,\mathbf{n} - \mathbf{p}\big).
\end{aligned}
\qquad (3.8)
$$
Similarly, minimizing (3.8) is equivalent to solving the sub-problems for u, p, and n. Thus, using the same technique as in Chapter 2, we can obtain the minimizer of each sub-problem. Notice that the extra variable v is not needed in image deblurring, so there are fewer tuning parameters.
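For deblurring, the only sub-problem that changes relative to Chapter 2 is the one for u, which now contains the convolution K. Taking Γ = Ω, a standard computation gives the optimality condition η K^T(K*u − u0) − r1 Δu = −∇·(r1 p + λ1), where K^T denotes the adjoint of K; this is a linear equation with constant coefficients and can be solved in closed form with the FFT under periodic boundary conditions. The sketch below illustrates this step; the function name, variable names, and the periodic-boundary assumption are ours and are not taken from the dissertation's code.

    import numpy as np

    def solve_u_subproblem(u0, K_hat, rhs_div, eta, r1):
        """Closed-form u update for the deblurring model (illustrative sketch).

        Solves  eta*K^T K u - r1*Laplace(u) = eta*K^T u0 - rhs_div  in the
        Fourier domain, assuming periodic boundary conditions.
          K_hat   : FFT of the blur kernel, zero-padded to the image size and
                    circularly shifted so that its center sits at the origin.
          rhs_div : divergence of (r1*p + lambda1) evaluated on the grid.
        """
        M, N = u0.shape
        # Symbol of the negative 5-point discrete Laplacian (nonnegative).
        wx = 2.0 * np.pi * np.fft.fftfreq(M)
        wy = 2.0 * np.pi * np.fft.fftfreq(N)
        lap = (2.0 - 2.0 * np.cos(wx))[:, None] + (2.0 - 2.0 * np.cos(wy))[None, :]

        numerator = eta * np.conj(K_hat) * np.fft.fft2(u0) - np.fft.fft2(rhs_div)
        denominator = eta * np.abs(K_hat) ** 2 + r1 * lap
        denominator[0, 0] = max(denominator[0, 0], 1e-12)  # guard the DC term
        return np.real(np.fft.ifft2(numerator / denominator))

The p and n updates are the same as in the denoising case, so the overall iteration only swaps in this u update.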
We test our algorithm with two kinds of blur: Gaussian blur and out-of-focus blur. In Figures 3.35 and 3.37, we blur the original images using a Gaussian kernel with a standard deviation of 4. In Figures 3.39 and 3.41, we simulate the out-of-focus blur with a disk kernel of radius 6. The relative errors are shown in Figures 3.36, 3.38, 3.40, and 3.42, and the SNR and PSNR of the deblurring results are shown in Table 3.3. Our proposed algorithm restores the images without creating ringing artifacts.
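For reproducibility, the two degradations can be simulated, for example, with SciPy: a Gaussian blur with standard deviation 4 and an out-of-focus blur modeled as convolution with a normalized disk of radius 6. The sketch below only illustrates how such test inputs can be generated; the helper names are ours.

    import numpy as np
    from scipy.ndimage import convolve, gaussian_filter

    def disk_kernel(radius):
        """Normalized disk kernel modeling out-of-focus blur."""
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        disk = (x ** 2 + y ** 2 <= radius ** 2).astype(np.float64)
        return disk / disk.sum()

    def blur_image(u, kind="gaussian", sigma=4.0, radius=6, noise_std=0.0):
        """Simulate u0 = K*u + n for the two blur types used in our tests."""
        u = u.astype(np.float64)
        if kind == "gaussian":
            blurred = gaussian_filter(u, sigma=sigma)
        else:  # "out-of-focus"
            blurred = convolve(u, disk_kernel(radius), mode="wrap")
        return blurred + noise_std * np.random.randn(*u.shape)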
Figure 3.35: Gaussian blur with a standard deviation of 4. a = 0.1, b = 1, η = 8000, r1 = 500, r3 = 10.
Figure 3.36: Corresponding residuals, relative errors, and functional energy.
Figure 3.37: Gaussian blur with a standard deviation of 4. a = 1, b = 1, η = 2000, r1 = 1000, r3 = 1.
Figure 3.38: Corresponding residuals, relative errors, and functional energy.
Figure 3.39: Out-of-focus blur kernel with a radius of 6. a = 1, b = 1, η = 2000, r1 = 1000, r3 = 1.
Figure 3.40: Corresponding residuals, relative errors, and functional energy.
Figure 3.41: Out-of-focus blur kernel with a radius of 6. a = 1, b = 1, η = 2000, r1 = 1000, r3 = 1.
Figure 3.42: Corresponding residuals, relative errors, and functional energy.
Image                        Size      SNR     PSNR    Iterations   CPU Time (sec)
BMW Logo (Gaussian)          256×256   15.55   22.36   1500         175.22
BMW Logo (Out-of-focus)      256×256   18.53   25.34   1500         153.50
Cardiac MRI (Gaussian)       256×256   19.78   29.05   1500         180.41
Cardiac MRI (Out-of-focus)   256×256   21.64   30.91   1500         166.35
Table 3.3: SNR, PSNR, and computational time of the deblurring results.
CHAPTER 4
PARAMETER ANALYSIS
In this chapter, we briefly discuss how to choose the penalty parameters (ri's) of the Lagrangian functional (2.17). Consider the 256 × 256 cardiac MRI image with Gaussian white noise in Figure 3.1. The parameter η in the fitting term controls the difference between the restored image u and the noisy image u0, so we usually choose a small η value to remove the noise. We set a = 1, b = 10, η = 15, and let s = 2.

First, we fix r3 = 1 and apply our algorithm with different values of r1 for 500 iterations. The denoising results are shown in Figure 4.1, and the SNR/PSNR values are listed in Table 4.1. Comparing the denoising results in Figures 4.1(c)-(h), it is clear that as r1 becomes larger, the denoised image becomes smoother. A larger r1 also leads to a higher SNR/PSNR. However, some details are lost if the value of r1 is too large. In Figure 4.1(h), where r1 = 5000, the denoised image is slightly blurry and the small hole at the bottom of the image is almost gone. On the other hand, the choice of r1 is quite flexible, because there are only slight visual differences among Figures 4.1(d)-(g). One can simply choose a number between 50 and 1000 for r1 and increase or decrease the value if necessary.
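This kind of study is easy to automate: sweep r1 over a coarse grid, run a fixed number of iterations for each value, and record the SNR/PSNR, as in Table 4.1. The sketch below assumes a hypothetical driver denoise(u0, r1) wrapping our ALM solver (a = 1, b = 10, η = 15, r3 = 1, 500 iterations) and an SNR/PSNR helper such as the one sketched in Chapter 3; neither name comes from the dissertation's code.

    def sweep_r1(denoise, snr_psnr, u0, u_clean,
                 r1_values=(10, 50, 100, 500, 1000, 5000)):
        """Run the denoising routine for each r1 and report SNR/PSNR in dB."""
        results = {}
        for r1 in r1_values:
            u = denoise(u0, r1)                 # hypothetical ALM driver
            results[r1] = snr_psnr(u_clean, u)
        for r1, (snr, psnr) in sorted(results.items()):
            print(f"r1 = {r1:5d}:  SNR = {snr:5.2f},  PSNR = {psnr:5.2f}")
        return results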
Figure 4.1: Gaussian noise. (a) Original Image; (b) Noisy Image; (c) Denoised Image using r1 = 10; (d) Denoised Image using r1 = 50; (e) Denoised Image using r1 = 100; (f) Denoised Image using r1 = 500; (g) Denoised Image using r1 = 1000; (h) Denoised Image using r1 = 5000.
Image     SNR     PSNR       Image     SNR     PSNR
4.1(c)    17.47   26.74      4.1(d)    17.49   26.77
4.1(e)    17.55   26.83      4.1(f)    17.68   26.95
4.1(g)    17.74   27.02      4.1(h)    17.85   27.13
Table 4.1: The SNR and PSNR of denoising results in Figure 4.1.
In Figure 4.2, we fix r1 = 500 and run 500 iterations with different r3 values. Choosing a proper value of r3 is more crucial, as it determines the amount of detail preserved. As can be seen in Figure 4.2, the denoising results look good when the value of r3 is less than 10 (Figures 4.2(c)-(e)). However, more and more detail is lost as r3 becomes larger (Figures 4.2(f)-(h)). In fact, r3 = 1 works well in most of the numerical experiments in this dissertation. Thus one can set r3 = 1 and use a smaller r3 if more detail needs to be preserved. The SNR/PSNR values corresponding to Figure 4.2 can be found in Table 4.2.
Image     SNR     PSNR       Image     SNR     PSNR
4.2(c)    17.86   27.13      4.2(d)    17.74   27.01
4.2(e)    17.55   26.83      4.2(f)    17.28   26.55
4.2(g)    16.92   26.19      4.2(h)    15.75   25.02
Table 4.2: The SNR and PSNR of denoising results in Figure 4.2.
Figure 4.2: Gaussian noise. (a) Original Image; (b) Noisy Image; (c) Denoised Image using r3 = 0.1; (d) Denoised Image using r3 = 0.5; (e) Denoised Image using r3 = 1; (f) Denoised Image using r3 = 10; (g) Denoised Image using r3 = 25; (h) Denoised Image using r3 = 100.
Now let’s consider the denoising problem for the salt-and-pepper noise. In this case,
s “ 1 and there is one more tuning parameter r2. For simplicity, we set a “ 1, b “ 10, η “ 10,
and r1 “ 500. Figure 4.3 shows the denoising results under different values of r2 (from 5 to
200). All denoising results look good except 4.3(d) when a small white area appears. This
can be avoided by increasing the value of r3. In 4.2(b)-(d), we use r3 “ 50, and in 4.2(e)-(h),
we use r3 “ 100. The SNR/PSNR can be found in Table 4.3.
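For completeness, the salt-and-pepper test data used here (density 0.2, see Figure 4.3) can be generated as in the following sketch; the helper name and the assumption of 8-bit intensities (salt = 255, pepper = 0) are ours.

    import numpy as np

    def add_salt_and_pepper(u, density=0.2, rng=None):
        """Corrupt a grayscale image with salt-and-pepper noise of the given density."""
        rng = np.random.default_rng() if rng is None else rng
        noisy = u.astype(np.float64).copy()
        mask = rng.random(u.shape)
        noisy[mask < density / 2] = 0.0          # pepper
        noisy[mask > 1.0 - density / 2] = 255.0  # salt
        return noisy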
Compared with the original ALM algorithm in [40], our algorithm is less sensitive to the penalty parameters. More precisely, as can be seen in this chapter, there is a wide range of acceptable choices for the tuning parameters. Therefore, our algorithm is more convenient to use, and good results are easier to obtain.
Image     SNR     PSNR       Image     SNR     PSNR
4.3(b)    19.97   29.24      4.3(c)    20.40   29.67
4.3(d)    20.53   29.80      4.3(e)    20.49   29.77
4.3(f)    20.73   30.00      4.3(g)    20.81   30.09
4.3(h)    20.81   30.08
Table 4.3: The SNR and PSNR of denoising results in Figure 4.3.
Figure 4.3: Salt-and-pepper noise with density 0.2. (a) Noisy Image; (b) Denoised Image using r2 = 5, r3 = 50; (c) Denoised Image using r2 = 10, r3 = 50; (d) Denoised Image using r2 = 25, r3 = 50; (e) Denoised Image using r2 = 25, r3 = 100; (f) Denoised Image using r2 = 50, r3 = 100; (g) Denoised Image using r2 = 100, r3 = 100; (h) Denoised Image using r2 = 200, r3 = 100.
CHAPTER 5
CONCLUSION AND FUTURE RESEARCH
In this dissertation, we present a fast method for the Euler's elastica model in image processing based on the work of Tai et al. [40] and Zhu et al. [4]. The original algorithm given by Tai et al. utilizes the augmented Lagrangian method (ALM) to efficiently solve the minimization problem of the Euler's elastica functional. However, the large number of Lagrange multipliers makes it hard to choose optimal penalty parameters. Zhu et al. proposed an L1-Euler's elastica based Chan-Vese segmentation model and introduced an augmented Lagrangian functional with fewer Lagrange multipliers. We apply the technique from that paper and present a new augmented Lagrangian functional that involves fewer Lagrange multipliers and penalty parameters while keeping the advantages of Tai et al.'s fast algorithm.
Our proposed algorithm benefits from the use of fewer Lagrange multipliers. Moreover, since we use fewer relaxation variables than Tai et al.'s algorithm, the solution of each sub-problem (for u, v, p, and n) is more accurate, and the quality of the numerical results is therefore improved.
We extend the use of the Euler's elastica based variational model to image deblurring. Numerical results show that the proposed algorithm can solve the deconvolution problem without creating ringing artifacts.
We also analyze the choice of the penalty parameters r1, r2, and r3 in our algorithm. The algorithm is less sensitive to these parameters, so choosing suitable tuning parameters becomes more convenient.
In future research, we are interested in further reducing the number of Lagrange
multipliers for minimizing Euler’s elastica based variational models.
REFERENCES
[1] J. Allebach and P. W. Wong. Edge-directed interpolation. In Image Processing, 1996. Proceedings., International Conference on, volume 3, pages 707–710. IEEE, 1996.
[2] L. Ambrosio and S. Masnou. A direct variational approach to a problem arising in image reconstruction. Interfaces and Free Boundaries, 5(1):63–81, 2003.
[3] E. Bae, J. Shi, and X.-C. Tai. Graph cuts for curvature based image denoising. Image Processing, IEEE Transactions on, 20(5):1199–1210, 2011.
[4] E. Bae, X.-C. Tai, and W. Zhu. Augmented Lagrangian method for an Euler's elastica based segmentation model that promotes convex contours. Accepted by Inverse Problems and Imaging, 2016.
[5] C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro, and J. Verdera. Filling-in by joint interpolation of vector fields and gray levels. Image Processing, IEEE Transactions on, 10(8):1200–1211, 2001.
[6] C. Ballester, V. Caselles, and J. Verdera. Disocclusion by joint interpolation of vector fields and gray levels. Multiscale Modeling & Simulation, 2(1):80–123, 2003.
[7] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpainting. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 417–424. ACM Press/Addison-Wesley Publishing Co., 2000.
[8] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher. Simultaneous structure and texture image inpainting. Image Processing, IEEE Transactions on, 12(8):882–889, 2003.
[9] P. Blomgren and T. F. Chan. Color TV: total variation methods for restoration of vector-valued images. Image Processing, IEEE Transactions on, 7(3):304–309, 1998.
[10] C. Brito-Loeza and K. Chen. Multigrid algorithm for high order denoising. SIAM Journal on Imaging Sciences, 3(3):363–389, 2010.
[11] A. Buades, B. Coll, and J.-M. Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling & Simulation, 4(2):490–530, 2005.
[12] F. Cao, Y. Gousseau, S. Masnou, and P. Pérez. Geometrically guided exemplar-based inpainting. SIAM Journal on Imaging Sciences, 4(4):1143–1179, 2011.
[13] V. Caselles, J.-M. Morel, and C. Sbert. An axiomatic approach to image interpolation. Image Processing, IEEE Transactions on, 7(3):376–386, 1998.
[14] T. Chan, A. Marquina, and P. Mulet. High-order total variation-based image restoration. SIAM Journal on Scientific Computing, 22(2):503–516, 2000.
[15] T. F. Chan and J. Shen. Nontexture inpainting by curvature-driven diffusions. Journal of Visual Communication and Image Representation, 12(4):436–449, 2001.
[16] T. F. Chan, J. Shen, and H.-M. Zhou. Total variation wavelet inpainting. Journal of Mathematical Imaging and Vision, 25(1):107–125, 2006.
[17] T. F. Chan and C.-K. Wong. Total variation blind deconvolution. Image Processing, IEEE Transactions on, 7(3):370–375, 1998.
[18] A. Criminisi, P. Perez, and K. Toyama. Object removal by exemplar-based inpainting. In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, volume 2, pages II–721. IEEE, 2003.
[19] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pages 1033–1038. IEEE, 1999.
[20] S. Esedoglu and R. March. Segmentation with depth but without detecting junctions. Journal of Mathematical Imaging and Vision, 18(1):7–15, 2003.
[21] B. K. Horn. The curve of least energy. ACM Transactions on Mathematical Software (TOMS), 9(4):441–460, 1983.
[22] H. S. Hou and H. Andrews. Cubic splines for image interpolation and digital filtering. Acoustics, Speech and Signal Processing, IEEE Transactions on, 26(6):508–517, 1978.
[23] T. S. Huang, G. J. Yang, and G. Y. Tang. A fast two-dimensional median filtering algorithm. Acoustics, Speech and Signal Processing, IEEE Transactions on, 27(1):13–18, 1979.
[24] R. G. Keys. Cubic convolution interpolation for digital image processing. Acoustics, Speech and Signal Processing, IEEE Transactions on, 29(6):1153–1160, 1981.
[25] X. Li and M. T. Orchard. New edge-directed interpolation. Image Processing, IEEE Transactions on, 10(10):1521–1527, 2001.
[26] L. B. Lucy. An iterative technique for the rectification of observed distributions. The Astronomical Journal, 79:745, 1974.
[27] M. Lysaker, A. Lundervold, and X.-C. Tai. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. Image Processing, IEEE Transactions on, 12(12):1579–1590, 2003.
[28] M. Lysaker and X.-C. Tai. Iterative image restoration combining total variation minimization and a second-order functional. International Journal of Computer Vision, 66(1):5–18, 2006.
[29] S. Masnou and J. Morel. Singular interpolation and disocclusion, 1998.
[30] S. Masnou and J.-M. Morel. Level lines based disocclusion. In Image Processing, 1998. ICIP 98. Proceedings. 1998 International Conference on, pages 259–263. IEEE, 1998.
[31] D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. Communications on Pure and Applied Mathematics, 42(5):577–685, 1989.
[32] M. Nitzberg, D. Mumford, and T. Shiota. Filtering, segmentation and depth. 1993.
[33] R. Olivier and C. Hanqiang. Nearest neighbor value interpolation. arXiv preprint arXiv:1211.1768, 2012.
[34] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 12(7):629–639, 1990.
[35] W. H. Richardson. Bayesian-based iterative method of image restoration. JOSA, 62(1):55–59, 1972.
[36] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.
[37] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. In ACM Transactions on Graphics (TOG), volume 27, page 73. ACM, 2008.
[38] J. Shen and T. F. Chan. Mathematical models for local nontexture inpaintings. SIAM Journal on Applied Mathematics, 62(3):1019–1043, 2002.
[39] J. Shen, S. H. Kang, and T. F. Chan. Euler's elastica and curvature-based inpainting. SIAM Journal on Applied Mathematics, 63(2):564–592, 2003.
[40] X.-C. Tai, J. Hahn, and G. J. Chung. A fast algorithm for Euler's elastica model using augmented Lagrangian method. SIAM Journal on Imaging Sciences, 4(1):313–344, 2011.
[41] X.-C. Tai and C. Wu. Augmented Lagrangian method, dual methods and split Bregman iteration for ROF model. In Scale Space and Variational Methods in Computer Vision, pages 502–513. Springer, 2009.
[42] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Computer Vision, 1998. Sixth International Conference on, pages 839–846. IEEE, 1998.
[43] C. R. Vogel and M. E. Oman. Iterative methods for total variation denoising. SIAM Journal on Scientific Computing, 17(1):227–238, 1996.
[44] L.-Y. Wei and M. Levoy. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 479–488. ACM Press/Addison-Wesley Publishing Co., 2000.
[45] N. Wiener. Extrapolation, Interpolation, and Smoothing of Stationary Time Series, volume 2. MIT Press, Cambridge, MA, 1949.
[46] F. Yang, K. Chen, and B. Yu. Homotopy method for a mean curvature-based denoising model. Applied Numerical Mathematics, 62(3):185–200, 2012.
[47] Y.-L. You and M. Kaveh. Fourth-order partial differential equations for noise removal. Image Processing, IEEE Transactions on, 9(10):1723–1730, 2000.
[48] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum. Progressive inter-scale and intra-scale non-blind image deconvolution. In ACM Transactions on Graphics (TOG), volume 27, page 74. ACM, 2008.