HOWARD UNIVERSITY
Blind Image Deconvolution Via Penalized Maximum Likelihood Estimation
A Thesis Submitted to the Faculty of the
Graduate School
of
HOWARD UNIVERSITY
in partial fulfillment of the requirements for the
degree of
Master of Engineering
Department of Electrical and Computer Engineering
by
Khalid A. Abubaker
Washington, D.C.
May 2007
© 2007 Khalid A. Abubaker
All Rights Reserved
HOWARD UNIVERSITY
GRADUATE SCHOOL
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING
THESIS COMMITTEE
Mohamed Chouikha, Ph.D.
Chairman
John M.M. Anderson, Ph.D.
Charles J. Kim, Ph.D.
John M.M. Anderson, Ph.D.
Thesis Advisor
Candidate: Khalid A. Abubaker
Date of Defense: April 20, 2007
Dedication
I dedicate this work to my late father, Abdurahman Gidaya who passed away very
early in his life without enjoying the fruit of his work and to my late grand mother, Fatuma
Yonis whose wisdom has contributed so much to the success of my endeavors.
Acknowledgments
First of all, I would like to thank my academic advisor and mentor, Dr. John M.
M. Anderson, for his excellent guidance and support of my research work. Without Dr.
Anderson’s help, this research work could not have come to its final stage. I would also like
to thank my thesis defense committee members Dr. Mohamed Chouikha and Dr. Charles
J. Kim for their comments and corrections. I thank all the professors in the Department of
Electrical and Computer Engineering and the Department of Systems and Computer Science
for sharing their priceless knowledge with me.
Furthermore, I thank all my colleagues in the Electrical Engineering Lab for their
help and support. I would especially like to thank Aaron Jackson for providing me with his
Howard University thesis and dissertation LaTeX template and for all the technical
help he has given me.
I also would like to acknowledge the support provided by the U.S. Army High Perfor-
mance Computing Research Center located at the University of Minnesota.
Finally, my special thanks go to Muna Mahdi and the rest of my family for their
continued love and support throughout all the good and bad times.
Abstract
In this thesis, we present image restoration algorithms that were motivated by the
problem of reducing blur in hyperspectral images. We address the problem of interest,
known as the blind image deconvolution problem, by applying a penalized maximum
likelihood (PML) algorithm to each spectral plane of hyperspectral images. The quantities
to be estimated are the mean values of the true spectral plane, referred to as the reflectance
parameters, and the hyperspectral sensor's point spread function (PSF). In the PML method,
an objective function equal to the negative log likelihood function plus a penalty function
is minimized with respect to the unknown parameters. To minimize the PML objective
function, we use an iterative technique known as the iterative majorization algorithm. Given
initial estimates of the parameters, the PML objective function is decreased with respect to
the reflectance parameters while setting the PSF parameters to their current estimate. Then,
the PML objective function is decreased with respect to the PSF parameters while setting
the reflectance parameters to their current estimate. The algorithm alternates between the
two steps until some chosen stopping criterion is met. In experimental studies, the PML
algorithm was stable and produced promising results.
Table of Contents
Committee Approval Form . . . . . . . . . . . . . . . . . . . . . . . . . ii
Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Acknowledgments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
List of Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapters
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1. History of Hyperspectral Imagery . . . . . . . . . . . . . . . . . . . . . 1
1.2. Hyperspectral Image Data Collection . . . . . . . . . . . . . . . . . . . 2
1.3. Source of Image Degradation and Need for Restoration . . . . . . . . . . . 3
1.4. Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5. Background on a Poisson Model for Hyperspectral Image Data and Maximum Likelihood Image Deconvolution Algorithms . . . 9
1.5.1. A Poisson Model for Hyperspectral Image Data . . . . . . . . . . . 9
1.5.2. Outline of Maximum Likelihood Image Deconvolution Algorithm . . . 12
1.5.3. Outline of Maximum Likelihood Blind Image Deconvolution Algorithm 14
1.6. Outline of Penalized Maximum Likelihood Blind Image Deconvolution Algorithm . . . 15
2. Maximum Likelihood Expectation Maximization Image Deconvolution Algorithm . . . 19
3. Penalized Maximum Likelihood Image Deconvolution Algorithm . . . . . . 28
3.1. Penalized Maximum Likelihood Algorithm Via Majorizing Functions . . . . . 29
4. Maximum Likelihood Blind Image Deconvolution Algorithm . . . . . . . . 36
5. Penalized Maximum Likelihood Blind Image Deconvolution Algorithm . . . 42
5.1. Penalized Maximum Likelihood Algorithm for Estimating the Point Spread Function With Unity DC Gain Penalty Function . . . 43
6. Experimental Study and Simulation Results . . . . . . . . . . . . . . . . 49
6.1. Maximum Likelihood Image Deconvolution . . . . . . . . . . . . . . . . 49
6.2. Penalized Maximum Likelihood Image Deconvolution . . . . . . . . . . . 54
6.3. Maximum Likelihood Blind Image Deconvolution . . . . . . . . . . . . . 59
6.4. Penalized Maximum Likelihood Blind Image Deconvolution . . . . . . . . . 62
7. Conclusions and Future Work . . . . . . . . . . . . . . . . . . . . . . 67
7.1. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.2. Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Appendices
MATLAB Source Code for PML Blind Image Deconvolution Algorithm . . . 70
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
List of Figures
Figure
1.1. 2-Dimensional Convolution . . . 4
1.2. Lexicographical Representation of a 3 × 3 PSF . . . 10
1.3. Reflect-Extending an Image . . . 13
1.4. A 1-D Illustration of Iterative Majorization Technique . . . 17
2.1. A 9 by 9 example image for visualizing the elements of Sj when the PSF mask is centered at an interior pixel j = 42 . . . 24
2.2. Examples showing some of the elements of Sj when the PSF mask is not centered at an interior pixel j = 42 . . . 25
2.3. A 9 by 9 example image for visualizing the elements of Sj when the PSF mask is centered at a border pixel j = 1 . . . 26
2.4. Examples showing some of the elements of Sj when the PSF mask is not centered at border pixel j = 1 . . . 26
6.1. MLEM image deconvolution of an image with Poisson noise . . . 50
6.2. MLEM image deconvolution of an image with Poisson noise and plot of the objective function . . . 51
6.3. MLEM image deconvolution of an image without Poisson noise and plot of the objective function . . . 52
6.4. MLEM image deconvolution of an image without Poisson noise . . . 53
6.5. PML image deconvolution of an image with Poisson noise and plot of the objective function . . . 54
6.6. PML image deconvolution of an image without Poisson noise and plot of the objective function . . . 55
6.7. PML deblurred images with different β values and plot of the objective function for noisy observed image . . . 57
6.8. PML deblurred images with different β values and plot of the objective function for noiseless observed image . . . 58
6.9. ML BID results for an image blurred with Poisson noise . . . 60
6.10. ML BID results for landmine field image . . . 61
6.11. PML BID results for an image blurred with Poisson noise . . . 63
6.12. PML BID results for landmine field image with α = 0.1 . . . 65
6.13. PML BID results for the bright spiral galaxy Messier 61 (NGC 4303) in the Virgo cluster with unity DC gain penalty function when α = 0.0001 . . . 66
List of Abbreviations
PSF – Point Spread Function
HYDICE – Hyperspectral Digital Image Collection Experiment
AVIRIS – Airborne Visible/Infrared Imaging Spectrometer
ER – Earth Resources
ERS – Earth Resources Survey
MSS – Multispectral Scanner
TM – Thematic Mapper
CCD – Charge-Coupled Device
PET – Positron Emission Tomography
MLEM – Maximum Likelihood Expectation Maximization
PML – Penalized Maximum Likelihood
OSL – One-Step-Late
ENR – Edge-to-Noise Ratio
MRP – Median Root-Prior
SNR – Signal to Noise Ratio
BID – Blind Image Deconvolution
GMRF – Gauss-Markov Random Field
RHS – Right Hand Side
LHS – Left Hand Side
List of Symbols
F – Poisson random variable for which the observations are pixels of the true image of a scene
G – Poisson random variable for which the observations are pixels of the observed image
f – True image of a scene
g – Observed/blurred image
μ – Mean value of the true image of a scene
λ – Mean value of the observed/blurred image
h – Point spread function
N – Negative of the log likelihood function
Nm – Majorizing function for N when the PSF is known
P – PML objective function for estimating μ
Pm – Majorizing function for P
L – Edge-preserving penalty function
β – Penalty parameter controlling the edge-preserving penalty function L
L̄m – Huber's majorizing function for L
Lm – Chang and Anderson's majorizing function for L
ψ – A general neighbor smoothing function
ψ̄m – Huber's majorizing function for ψ
ψm – Proposed majorizing function for ψ
φ – Neighbor smoothing function used in L
φ̄m – Huber's majorizing function for φ
φm – Proposed majorizing function for φ
φ̇ – First derivative of φ
e – A non-increasing function derived from φ̇
∇ – Gradient of a function
Bi – Set of pixels in the neighborhood of the ith pixel
Q – Majorizing function for N when the PSF is unknown
R – PML objective function for estimating h
Rm – Majorizing function for R
K – Unity DC gain penalty function
α – Penalty parameter controlling the function K
Km – Majorizing function for K
σ – Standard deviation used for generating the Gaussian PSF
Convention: All bold symbols represent vectors, whereas all non-bold symbols
represent either a scalar value, a matrix, or a function name.
Chapter 1. Introduction
Our motivation for investigating blind image deconvolution algorithms is to restore
hyperspectral images on a plane-by-plane basis. It is our belief that the performance
of anomaly detection algorithms for hyperspectral images will improve when a suitable
restoration is applied.
1.1. History of Hyperspectral Imagery
In hyperspectral imaging, an airborne or spaceborne sensor captures images that provide
spectral information on the terrain under investigation. Every physical object, based on its
shape, molecular composition, and surrounding terrain, absorbs and reflects electromagnetic
radiation from the sun. The reflected electromagnetic energy, measured by a hyperspectral
sensor, is divided into hundreds of contiguous narrow frequency bands known as channels.
For each channel, the hyperspectral sensor generates an image, referred to as a spectral
plane, with pixel values that are proportional to the reflected electromagnetic energy that
lies within the channel’s frequency band. Therefore, in hyperspectral imaging, a "stack" of
spectral planes is generated for each scene captured by the sensor. Hyperspectral imaging
is used in civil, commercial, and military applications where the problem is to detect and
identify objects of interest, geological features, and topographical features. For example, an
important military and humanitarian de-mining application is landmine detection.
Space technology developments in the late 1950s [3] allowed artificial satellites to
observe the earth from space and acquire information useful to many science and
engineering developments in civil, commercial, and military remote sensing applications
such as civil engineering, petroleum exploration, mining, forestry, water management,
landmine detection, and military information gathering. Starting with the Earth Resources
Survey (ERS) program launched by the United States in the mid-1960s, NASA has been
the pioneer, launching the first ERS satellite, later renamed Landsat I, in July 1972. It
was on Landsat I, and on the later-launched Landsat II, that NASA used its successive
generations of multispectral scanner instruments, each having a few spectral bands, to
collect images of the earth remotely. Electro-optical remote sensing is the process of
gathering information about an object or scene by a sensor array without coming into
physical proximity with it. When the reflected electromagnetic radiation from an object or
scene arrives at the airborne or spaceborne image sensor array, it is measured at a
sufficiently high number of spectral channels for every pixel to form a multispectral image
of the object or scene. These multispectral images differ from images captured by
conventional devices in that they carry not only spatial information but also a wealth of
information from the different spectral dimensions, which are studied in the field of
spectroscopy.¹ Remote
sensing has included multispectral imaging devices (i.e., devices which image over a small
number of broad spectral bands) since Landsat I was launched in 1972. The two main
Landsat multispectral imaging instruments were the Multispectral Scanner (MSS) launched
aboard Landsat I, and the Thematic Mapper (TM), a second generation Landsat sensor. Both
instruments used a system of filters and discrete detectors to split image data into multiple
spectral bands, four filter/detector pairs in the case of the MSS, and seven for the TM.
1.2. Hyperspectral Image Data Collection
Hyperspectral image sensors, as the name indicates, differ from their predecessors (i.e.,
multispectral image sensors) in that the number of channels is much higher. The images most
often obtained in space imagery applications are hyperspectral images, which are images
acquired by hyperspectral image sensors. All the spectral sensors used onboard the NASA-
developed Landsat satellites, as well as all the others developed and used before the late
1980s, had only a few spectral bands, on the order of ten. The first sensor collecting data in
several hundred contiguous narrow spectral bands (a hyperspectral sensor) emerged in the
late 1980s.

¹ Spectroscopy is the study of matter and its properties by investigating light, sound, or particles that are emitted, absorbed, or scattered by the matter under investigation.

The airborne visible/infrared imaging spectrometer (AVIRIS), developed by NASA,
was tested in 1987, and then launched two years later in 1989. AVIRIS is mounted and flown
on the NASA ER-2 airplane at an altitude of 12.5 miles, and speed of 454 mi/h [4]. Among
the many hyperspectral sensors developed following the development of AVIRIS, was the
Hyperspectral Digital Imagery Collection Experiment (HYDICE), owned and operated by
the U.S. Naval Research Laboratory and used for surveillance missions [4]. HYDICE uses an
airborne camera to acquire 210-band images whose electromagnetic spectrum ranges from
visible light to the near infrared, over the wavelength range 0.4–2.5 μm. Real-data experiments
in this thesis are performed using HYDICE images obtained from the Countermine Division
of the Night Vision and Electronic Sensors Directorate in Fort Belvoir, Virginia.
1.3. Source of Image Degradation and Need for Restoration
Hyperspectral image processing is becoming increasingly popular due to the wealth
of information hyperspectral images can provide to civilian, commercial, and military
applications. By exploiting fine spectral differences between various natural and man-made
materials of interest, it can support improved detection and classification capabilities of
hyperspectral remote sensors. One of the important military and humanitarian de-mining
applications is landmine detection. According to the landmine facts published by United
Nation [5], it is estimated that there are more than 110 million active mines scattered in 70
countries and an equal number stockpiled around the world waiting to be planted. Over
2,000 people are killed or maimed by mine explosions every month. Most of the casualties
of mine explosions are civilians who are killed or injured after hostilities of war have ended.
Although the use of hyperspectral imagery for automatic target detection and recognition
is a fairly new area of research, it has high potential for detecting and identifying
man-made anomalies against their natural background. The literature reveals that the spectral
characteristics of man-made objects differ greatly from those of the natural
background [6], [7], [8]. However, due to the hyperspectral sensor’s optical system, atmospheric
conditions, and relative motion between the sensor and the scene, hyperspectral
images may suffer varying degrees of blur. If these images can be restored from the blur, then
landmines can be detected easily. Although some of the blur factors, such as aircraft motion
and non-uniformity of the photodetector, can be measured and corrected beforehand, the
biggest factor that makes the hyperspectral image restoration problem extremely challenging
is the continuously varying nature of atmospheric conditions, the medium through which the
reflected signal travels before being detected by the airborne camera. Because of this changing
nature of the atmosphere, atmospheric blur compensation has been a very challenging area of
research for the remote-sensing community. Mathematically speaking, image blurring happens
when a clean image is convolved with the system’s point spread function. The point spread
function h, in our case, models the blur due to motion, atmospheric distortion caused by
scattering of photons in the reflected electromagnetic radiation, and the sensor’s optical system.
A typical convolution process is shown in Figure 1.1.
Figure 1.1. 2-Dimensional Convolution: (a) blurred image; (b) clean image of a scene; (c) point spread function; (d) additive noise.
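The degradation model of Figure 1.1 can be sketched in code. The thesis's own implementation is in MATLAB (see the appendix); the following is an illustrative Python sketch of two-dimensional convolution with a normalized PSF, using hypothetical example values.

```python
def convolve2d(image, psf):
    """Convolve `image` with a centered PSF of odd dimensions (zero-padded borders),
    following lambda(m1, m2) = sum_{i,j} h(i, j) * mu(m1 - i, m2 - j)."""
    M1, M2 = len(image), len(image[0])
    L1, L2 = len(psf), len(psf[0])
    c1, c2 = L1 // 2, L2 // 2
    out = [[0.0] * M2 for _ in range(M1)]
    for m1 in range(M1):
        for m2 in range(M2):
            total = 0.0
            for i in range(L1):
                for j in range(L2):
                    # flip the kernel offsets for true convolution
                    r, c = m1 - (i - c1), m2 - (j - c2)
                    if 0 <= r < M1 and 0 <= c < M2:
                        total += psf[i][j] * image[r][c]
            out[m1][m2] = total
    return out

# A 3x3 uniform PSF (sums to one) spreads a point source over its 3x3 support.
psf = [[1.0 / 9.0] * 3 for _ in range(3)]
clean = [[0.0] * 5 for _ in range(5)]
clean[2][2] = 9.0
blurred = convolve2d(clean, psf)
```

Because the PSF sums to one, the blur redistributes intensity without creating or destroying it (away from the borders), which is the behavior panels (a)-(c) of the figure depict.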
Recovering the original scene image from a degraded or observed image using knowledge
about its nature is called image restoration. Image restoration is a useful technique
relevant in many fields, including medicine and astronomy. Because the image data
recorded by an airborne camera, miles away from the scene, has to pass through the different
blurring factors mentioned above, the acquired image data may be highly blurred and
noisy. Hence, image restoration plays a major role in hyperspectral imagery. HYDICE
images of landmine fields can have highly blurred mine signatures, which are very hard to
detect without restoring the images, especially small mine signatures. Therefore, it is highly
important to restore hyperspectral images before performing any image analysis or landmine
detection. Our objective in this thesis is to develop an algorithm to restore the original scene
image and, at the same time, estimate the blurring PSF merely using the observed image
and no prior information about the image or the PSF. Once the hyperspectral images are
restored, anomaly detector algorithms such as the RX algorithm [9] and the Gauss-Markov
Random Field (GMRF) algorithm [10] can be applied.
1.4. Literature Review
Image restoration is the process of recovering the original scene image from a degraded
or observed image using knowledge about its nature. Many image processing applications
critically depend on image restoration; it is a useful technique relevant in many fields
including medicine and astronomy. We classify image restoration into two major categories.
The first category, also called image deconvolution, is a linear image restoration problem
where the parameters of the true image are estimated using the observed or degraded image
and a known point spread function (PSF) [11], [12]. In reality, the degrading PSF of a
system is sometimes hard or even impossible to identify due to physical, economical, or
strategic limitations. The second category, which is called blind image deconvolution (BID),
is a more difficult image restoration problem where image recovery is performed with little
or no prior knowledge of the degrading PSF. Since our goal in this thesis is to find a BID
algorithm that works well for the hyperspectral image restoration problem, we focus on the
latter category.
Although the BID problem has been a focus of study for many researchers over the last
few decades, this notorious but very important problem remains open.
Many of the previously proposed BID methods are outlined in [13]. There are several
BID algorithms that estimate the blurring PSF from the observed image and then solve the
linear image restoration problem to estimate the true image using the estimated PSF and
the degraded or observed image. This approach, which was used in [14] and [15], is called
the "a priori blur identification" method because the blurring kernel is first estimated using
prior knowledge. Several other BID algorithms have been proposed for simultaneously
estimating the clean image and the blurring kernel [16], [17] (see [13] for a comprehensive
review). This latter approach usually employs maximum likelihood estimators, sometimes
with incorporated priors. The proposed method uses PML algorithms for the restoration
of spectral planes from blurred hyperspectral images. In the PML method, an objective
function equal to the negative log likelihood function plus a penalty function is minimized
with respect to the unknown parameters. The penalty function accounts for certain a priori
information about the unknown parameters. To solve the optimization problem, we use
an iterative technique commonly known as the alternating minimization technique initially
developed by Holmes [18]. Given initial estimates of the parameters, the PML objective
function is minimized with respect to the reflectance parameters (spectral plane parameters)
while setting the PSF parameters to their current estimate. Then, the PML objective function
is minimized with respect to the PSF parameters while setting the reflectance parameters to
their current estimate. The algorithm alternates between the two steps until some chosen
stopping criterion is met.
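The alternating scheme just described can be illustrated on a toy problem. The sketch below is not the thesis's PML objective; it is a hypothetical coupled quadratic, chosen so that each partial minimization has a closed form, and it shows the monotone decrease of the objective that makes a change-in-objective stopping criterion sensible.

```python
def F(x, y):
    # toy coupled objective standing in for the PML objective function
    return (x - 3.0) ** 2 + (y + 1.0) ** 2 + 0.5 * x * y

x, y = 0.0, 0.0                      # initial estimates
history = [F(x, y)]
for _ in range(100):
    x = 3.0 - 0.25 * y               # exact argmin over x, y held at its current estimate
    y = -1.0 - 0.25 * x              # exact argmin over y, x held at its current estimate
    history.append(F(x, y))
# history is non-increasing; (x, y) converges to the joint minimizer (52/15, -28/15)
```

Each sweep can only lower the objective, so the iteration may be stopped once the decrease falls below a threshold. In the PML algorithm of this thesis the two exact argmin steps are replaced by majorization-based decrease steps over the reflectance and PSF parameters.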
In a general sense, the a priori blur identification method is used only when enough
prior information is known about the blurring PSF. However, the proposed PML method can
be applied to any BID problem with little or no prior knowledge of either the original scene
image or the PSF. Moreover, the PML approach takes into account the statistical nature of
quantum photon emission and a basic rule of probability theory to enforce a nonnegativity
constraint on the reflectance means and the PSF respectively. A major advantage of the
approach is the ability to incorporate multiple regularizing penalty functions among which is
the very important noise suppression function. The problems associated with the alternating
PML method include the burden of computational complexity and determination of the
6
stopping criteria for the alternating state.
The most widely used BID approach is parametric PSF estimation, as opposed to
non-parametric estimation. In parametric PSF estimation, the degrading kernel is modeled
with a known statistical distribution so that the parameters of the model can be estimated.
A review of BID algorithms [13], [19], [20] reveals that almost all BID algorithms assume
an irreducible PSF. An irreducible signal is a signal that cannot be exactly expressed as the
convolution of two or more component signals of the same family, on the assumption that a
two-dimensional delta function is not a component signal. However, a Gaussian PSF, which
is the most natural image blurring kernel, is reducible, meaning that Gσ = Gσ₁ ∗ Gσ₂ for
σ² = σ₁² + σ₂². It is this reducible nature of the Gaussian PSF that renders the BID
of hyperspectral images most difficult. This problem, even in the noiseless case, is well
known as the ill-posed inverse problem of heat transfer and remains a challenge [21].
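The reducibility of the Gaussian can be checked numerically. For normalized discrete kernels, the variance of a convolution is exactly the sum of the component variances, so convolving two sampled Gaussians yields a kernel whose variance is σ₁² + σ₂². The sampled kernels and truncation radii below are illustrative choices.

```python
import math

def gaussian_kernel(sigma, radius):
    # Gaussian sampled at integers and normalized to sum to one
    w = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def convolve(a, b):
    # full discrete convolution of two kernels
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def variance(kernel, center):
    # variance of a kernel whose indices run from -center to +center
    mean = sum((k - center) * w for k, w in enumerate(kernel))
    return sum(((k - center) - mean) ** 2 * w for k, w in enumerate(kernel))

g1 = gaussian_kernel(1.0, 8)
g2 = gaussian_kernel(1.5, 12)
g = convolve(g1, g2)          # centered at index 8 + 12 = 20
v1, v2, v = variance(g1, 8), variance(g2, 12), variance(g, 20)
# v equals v1 + v2, i.e. sigma^2 = sigma_1^2 + sigma_2^2
```

This additivity is what makes the factorization of a Gaussian PSF non-unique: any split of σ² into σ₁² + σ₂² gives a valid decomposition, so the blind problem has a continuum of indistinguishable solutions.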
In 1982, Shepp and Vardi [22] proposed the MLEM algorithm for estimating emission
means in PET. The problem with the ML-based method for PET was that the images
produced were very noisy because, as discussed in Section 1.5.1, the data obey a Poisson
distribution, which makes the problem ill-posed. Different methods have been tried to address
this issue. One method used to obtain ML images with reduced noise is to stop
the MLEM algorithm before the objective function is completely minimized, but the
major problem associated with this method is that it is impossible to know exactly where
to stop the algorithm. Another approach is to obtain a noisy image from the MLEM algorithm
and then post-process the image with a lowpass filter, although there is ambiguity in how to
choose the filter. In 1990, Silverman [23] suggested filtering the image after
each iteration. An alternative way researchers have tried to remove noise from ML
images is by pre-processing the observed data. In [24] and [25], Lu and Anderson proposed
a pre-processing technique to denoise PET emission data before applying the ML algorithm.
In [20] and [21], Jian and Wang, assuming an isotropic Gaussian PSF, proposed an ML-based
alternating optimization technique to approximate an oblique computed tomography
image, iteratively maximizing the edge-to-noise ratio (ENR) to characterize the image quality
resulting from image deconvolution.
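For reference, the multiplicative MLEM (Richardson-Lucy) update that Shepp and Vardi's algorithm applies can be sketched in one dimension. This is a generic illustration with a hypothetical signal and PSF, not the thesis's 2-D implementation (which appears in the MATLAB appendix).

```python
def convolve_same(x, h):
    # 'same'-size 1-D convolution with zero padding; len(h) must be odd
    c, n = len(h) // 2, len(x)
    return [sum(h[k] * x[i - (k - c)] for k in range(len(h))
                if 0 <= i - (k - c) < n) for i in range(n)]

def mlem_deconvolve(g, h, iterations=200):
    n = len(g)
    h_flipped = h[::-1]                               # correlation = H^T via convolution
    sensitivity = convolve_same([1.0] * n, h_flipped)  # H^T 1, corrects the borders
    f = [sum(g) / n] * n                               # flat positive initial estimate
    for _ in range(iterations):
        blurred = convolve_same(f, h)                  # H f
        ratio = [gi / max(bi, 1e-12) for gi, bi in zip(g, blurred)]
        backproj = convolve_same(ratio, h_flipped)     # H^T (g / H f)
        f = [fi * bi / si for fi, bi, si in zip(f, backproj, sensitivity)]
    return f

h = [0.25, 0.5, 0.25]
true_signal = [0.0, 0.0, 5.0, 1.0, 0.0, 0.0, 3.0, 0.0]
g = convolve_same(true_signal, h)      # noiseless blurred data
estimate = mlem_deconvolve(g, h)
```

The multiplicative update keeps the estimate nonnegative automatically, and on noiseless data it steadily improves the fit to g; with Poisson noise it eventually amplifies noise instead, which is exactly the behavior that motivates early stopping, filtering, or the penalty functions discussed next.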
The most widely used regularization method is the introduction of penalty functions
that force estimates of neighboring pixels to have similar values. In the PML
algorithm, which is a maximum a posteriori (MAP) method, an objective function equal
to the negative log likelihood function plus a penalty function is minimized with respect
to the unknown parameters. The penalty function accounts for certain a priori information
about the unknown parameters. The median root-prior (MRP) algorithm, introduced by
Alenius et al. [26], suggests a Gaussian-like prior depending on the median of the pixels
within local neighborhoods. Although the MRP algorithm generates "good" images in terms
of the signal-to-noise ratio (SNR) of the reconstructed images, it cannot be considered a
PML algorithm because it is based on an iteration-dependent objective function.
In 1990, Green [27] proposed the one-step-late (OSL) PML algorithm, derived from the
Kuhn-Tucker equations [28] for the PET PML optimization problem. The OSL algorithm is
easy to implement, but, like most algorithms, it guarantees neither nonnegative estimates nor
convergence. Shepp and Vardi showed that the MLEM algorithm
could be derived from the same Kuhn-Tucker conditions by minimizing the Kullback-Leibler
distance between the expected emission mean values and the observed data. The PML
algorithm for PET proposed by Levitan and Herman [29] was based on the assumption that
the prior distribution of the true emission means is a multivariate Gaussian distribution.
The penalty function that followed from this assumption took the form of a weighted
least-squares distance between the true emission means and a reference image. The problem
is that the authors did not provide information on how to choose the reference image.
De Pierro [30, 31] developed an algorithm that minimizes majorizing functions
constructed for certain penalty functions. He used the fact that the negative log likelihood
function and penalty functions such as quadratic functions are convex [32, pp. 860-862] in
order to construct his majorizing functions. Closed-form expressions for the minimizers of
the majorizing functions do not exist except for quadratic penalty functions. Therefore,
Newton's method or a similar optimization method is required to find the minimizers of the
majorizing functions. A quadratic majorizing function was developed by Huber [1, pp. 184-186]
and later used by many researchers, including Chang and Anderson [2], and
Erdogan and Fessler [33].
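To make the idea of a majorizing function concrete, here is a sketch of Huber's function and its standard quadratic surrogate with curvature weight φ̇(t₀)/t₀. The surrogate touches the function at the expansion point t₀ and lies above it everywhere, which is the property the algorithms above exploit; the value δ = 1 and the expansion points are illustrative.

```python
def huber(t, delta):
    # Huber's edge-preserving function: quadratic near zero, linear in the tails
    a = abs(t)
    return 0.5 * t * t if a <= delta else delta * a - 0.5 * delta * delta

def huber_majorizer(t, t0, delta):
    # quadratic surrogate q(t; t0) = huber(t0) + (omega/2)(t^2 - t0^2),
    # with omega = huber'(t0)/t0; q touches huber at t0 and majorizes it elsewhere
    a0 = abs(t0)
    omega = 1.0 if a0 <= delta else delta / a0
    return huber(t0, delta) + 0.5 * omega * (t * t - t0 * t0)
```

Minimizing the surrogate (a simple quadratic) and re-expanding at the new point can only decrease the original function, which is the iterative-majorization principle used throughout this thesis.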
1.5. Background on a Poisson Model for Hyperspectral Image Data and Maximum Likelihood Image Deconvolution Algorithms
In this section, we first introduce a Poisson model for a single spectral plane from a
hyperspectral image data set. Then, we outline approaches for deriving image deconvolution
algorithms for the cases where the PSF is known and unknown.
1.5.1. A Poisson Model for Hyperspectral Image Data
Shepp and Vardi [22] proposed a Poisson model for positron emission tomography
(PET) data. Later, Snyder et al. extended Shepp and Vardi's model to hyperspectral images
[34]. We now describe the model we use for hyperspectral images. The differences
between our model and the one put forth by Snyder et al. are the way border pixels are
modeled and the fact that their model includes Gaussian readout noise.² Let f denote the true
M1 × M2 image. Further, let g denote the observed image (i.e., blurry image) obtained from an
airborne sensor. The pixel values f(m1, m2) and g(m1, m2) are assumed to be observations
of Poisson random variables F(m1, m2) and G(m1, m2), respectively. Moreover, the random
variables {F(m1, m2)} and {G(m1, m2)} are assumed to be independent. Let μ(m1, m2) and
λ(m1, m2) denote the mean values of F(m1, m2) and G(m1, m2), respectively. Additionally,
let the point spread function h model the blur due to motion, atmospheric distortion, and
the sensor's optical system. In the Poisson model, the key relationship is that λ equals the
convolution of μ and h. This relationship is what connects the blurry image to the true
image:

λ(m1, m2) = Σ_i Σ_j h(i, j) μ(m1 − i, m2 − j),  (1.1)

where the sums range over the support of the PSF.

² The additive Gaussian readout noise is often left out of the model, not only to keep the model simpler but also because the effect of the additive noise is usually minimal compared to the convolution of the input signal and the system's PSF.
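A hypothetical numeric sketch of this data model in one dimension: the observed mean λ is the convolution of μ with h (zero-padded at the borders), and each observed pixel is an independent Poisson draw with that mean. Knuth's multiplication method is adequate for the small means used here.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplication method for Poisson sampling (small means only)
    if lam <= 0.0:
        return 0
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

# mean of the observed image: lambda_i = sum_l h_l * mu_{i-l}
mu = [10.0, 40.0, 10.0]
h = [0.25, 0.5, 0.25]            # h_{-1}, h_0, h_1
lam = [sum(h[l + 1] * mu[i - l] for l in (-1, 0, 1) if 0 <= i - l < len(mu))
       for i in range(len(mu))]

rng = random.Random(0)
g = [poisson_sample(l, rng) for l in lam]   # one realization of the blurred, noisy image
```

Note that the noise here is signal-dependent (the variance of each Poisson pixel equals its mean), which is why the estimation algorithms in later chapters work with the Poisson likelihood rather than a least-squares criterion.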
The problem of estimating the mean values $\{\mu(m_1,m_2)\}$ of the true image given the observed blurry image and the PSF is known as a deconvolution problem. When the PSF is unknown, the problem is known as a blind deconvolution problem. It will be convenient to use vector notation and standard lexicographical ordering to convert the two-dimensional notation to one-dimensional notation. For example, an $M_1 \times M_2$ image $x(m_1,m_2)$ can be viewed as an $M_1 M_2 \times 1$ vector $x$ by stacking the rows of $x(m_1,m_2)$. In other words, the first $M_2$ elements of $x$ are the elements of the first row of $x(m_1,m_2)$, the next $M_2$ elements of $x$ are the elements of the second row of $x(m_1,m_2)$, and so on. Thus, with the lexicographical ordering just described, the images $g$, $f$, $\lambda$, and $\mu$ can be represented by the $I \times 1$ vectors $g$, $f$, $\lambda$, and $\mu$, respectively, where $I = M_1 M_2$, $g = [g_1, g_2, \ldots, g_I]$, $f = [f_1, f_2, \ldots, f_I]$, $\lambda = [\lambda_1, \lambda_2, \ldots, \lambda_I]$, and $\mu = [\mu_1, \mu_2, \ldots, \mu_I]$. Let the size of the PSF be $L_1$ by $L_2$, where $L_1$ and $L_2$ are both odd. Also, let $L \triangleq (L_1 L_2 - 1)/2$. Then, after rearranging it in lexicographical order, the PSF can be represented as the vector $h = [h_{-L}, \ldots, h_{-1}, h_0, h_1, \ldots, h_L]$.
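The lexicographical ordering just described can be sketched in a few lines of code. The snippet below is not from the thesis; the function names `lex_order` and `unlex_order` are illustrative:

```python
# A minimal sketch of lexicographical (row-stacking) ordering: an M1 x M2
# image, stored as a list of rows, becomes a single M1*M2-element vector.

def lex_order(image):
    """Stack the rows of a 2-D image (list of lists) into a 1-D vector."""
    return [pixel for row in image for pixel in row]

def unlex_order(vec, m1, m2):
    """Inverse operation: reshape an M1*M2 vector back into M1 rows of M2."""
    assert len(vec) == m1 * m2
    return [vec[r * m2:(r + 1) * m2] for r in range(m1)]

# Example: a 3 x 3 image; the first M2 = 3 vector elements are row 1, etc.
x = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
v = lex_order(x)
```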
We now depart from Snyder's model [34] and express the mean value of the $i$th pixel of the observed image as
\[
\lambda_i = \langle h, \mu_i \rangle, \quad i = 1, 2, \ldots, I, \tag{1.2}
\]
where $\langle h, \mu_i \rangle \triangleq \sum_{l=-L}^{L} h_l \mu_{i_l}$ and $\mu_i \triangleq [\mu_{i_{-L}}, \mu_{i_{-L+1}}, \ldots, \mu_{i_{L-1}}, \mu_{i_L}]$. The key question at this point is: how should the indices $\{i_l\}$ be defined for each $i$? We will define the indices $\{i_l\}$ so that each mean value $\lambda_i$ is expressed as a linear combination of all the PSF coefficients and certain mean values $\{\mu_{i_l}\}$ of the true image. The reason we define $\{i_l\}$ in this way is that when $i$ is a border pixel, the standard two-dimensional deconvolution is ill-posed. We will now consider some examples.

Figure 1.2. Lexicographical Representation of a 3 × 3 PSF.
Case 1: When pixel $i$ does not lie on the border, we choose the indices $\{i_l\}$ such that
\[
\langle h, \mu_i \rangle = \sum_{i'=-(L_1-1)/2}^{(L_1-1)/2}\;\sum_{j'=-(L_2-1)/2}^{(L_2-1)/2} h(i',j')\,\mu(m_1-i',\,m_2-j').
\]
To make this statement clearer, we consider an example where $M_1 = M_2 = 9$ and $L_1 = L_2 = 3$, and explicitly determine the indices $\{i_l\}$ for $i = 42$ (note: pixel 42 is not on the border of the image). Figure 1.3 (a) illustrates the lexicographical ordering for the $9 \times 9$ image $\mu$. Figure 1.3 (b) shows the location of the PSF mask that corresponds to the convolution expression in (1.1) for $i = 42$:
\begin{align}
\lambda_{42} &= h_{-4}\mu_{42_{-4}} + h_{-3}\mu_{42_{-3}} + h_{-2}\mu_{42_{-2}} + h_{-1}\mu_{42_{-1}} + h_0\mu_{42_0} + h_1\mu_{42_1} + h_2\mu_{42_2} + h_3\mu_{42_3} + h_4\mu_{42_4} \notag \\
&= h_{-4}\mu_{32} + h_{-3}\mu_{33} + h_{-2}\mu_{34} + h_{-1}\mu_{41} + h_0\mu_{42} + h_1\mu_{43} + h_2\mu_{50} + h_3\mu_{51} + h_4\mu_{52}. \tag{1.3}
\end{align}
As seen from Figure 1.3 (b), $\mu_{42} = [\mu_{32}, \mu_{33}, \mu_{34}, \mu_{41}, \mu_{42}, \mu_{43}, \mu_{50}, \mu_{51}, \mu_{52}]$.
Case 2: When $i$ is a border pixel, we modify the image so that the indices $\{i_l\}$ can be obtained. To do that, we symmetrically reflect-extend the blurred image before performing the deconvolution process. By reflect-extend, we mean reflecting some of the border rows and columns of the image and extending the image by the reflected data so that all $L_1 \times L_2$ elements of $\mu_i$ exist when $i$ is at the border and $h_0$ is located at $\mu_i$. The number of top and bottom rows that should be reflected is $(L_1-1)/2$, whereas the number of left and right columns to reflect is $(L_2-1)/2$. Using the example image from Case 1, where $M_1 = M_2 = 9$ and $L_1 = L_2 = 3$, Figure 1.3 (c) illustrates the reflect-extended example image. Figure 1.3 (d) shows the location of the PSF mask that corresponds to the convolution expression in (1.1) for $i = 1$:
\begin{align}
\lambda_1 &= h_{-4}\mu_{1_{-4}} + h_{-3}\mu_{1_{-3}} + h_{-2}\mu_{1_{-2}} + h_{-1}\mu_{1_{-1}} + h_0\mu_{1_0} + h_1\mu_{1_1} + h_2\mu_{1_2} + h_3\mu_{1_3} + h_4\mu_{1_4} \notag \\
&= h_{-4}\mu_1 + h_{-3}\mu_1 + h_{-2}\mu_2 + h_{-1}\mu_1 + h_0\mu_1 + h_1\mu_2 + h_2\mu_{10} + h_3\mu_{10} + h_4\mu_{11}. \tag{1.4}
\end{align}
As seen from Figure 1.3 (d), $\mu_1 = [\mu_1, \mu_1, \mu_2, \mu_1, \mu_1, \mu_2, \mu_{10}, \mu_{10}, \mu_{11}]$. At this point, by reflect-extending the blurred image, we have established a technique that allows us to use (1.2) for all $i$ regardless of the pixel's location in the image.
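The reflect-extension just described can be sketched as follows. This is not the thesis code; the convention matches the example in (1.4), where the border row and column are themselves duplicated:

```python
# A minimal sketch of symmetric reflect-extension: (L1-1)/2 rows are
# reflected across the top and bottom edges, and (L2-1)/2 columns across
# the left and right edges, so an L1 x L2 PSF mask centered at any original
# pixel lies entirely inside the extended image.

def reflect_extend(image, l1, l2):
    """Symmetrically reflect-extend a 2-D image (list of rows) for an L1 x L2 PSF."""
    r, c = (l1 - 1) // 2, (l2 - 1) // 2
    # Reflect the top and bottom rows (the border row itself is duplicated).
    rows = [image[i] for i in range(r - 1, -1, -1)] + image \
         + [image[i] for i in range(len(image) - 1, len(image) - 1 - r, -1)]
    # Reflect the left and right columns of every row.
    return [row[c - 1::-1] + row + row[:len(row) - 1 - c:-1] for row in rows]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ext = reflect_extend(img, 3, 3)   # 5 x 5 extended image
```

For the $9 \times 9$ example, the neighborhood of corner pixel 1 in the extended image is $[\mu_1, \mu_1, \mu_2; \mu_1, \mu_1, \mu_2; \mu_{10}, \mu_{10}, \mu_{11}]$, which reproduces (1.4).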
Figure 1.3. Reflect-Extending an Image: (a) original image, (b) when i is not a border pixel, (c) reflect-extended image, and (d) when i is a border pixel.

1.5.2. Outline of Maximum Likelihood Image Deconvolution Algorithm

Based on the model assumptions, the likelihood function of the observed data is given by
\begin{align}
\Pr[G = g \mid \mu] &= \Pr[G_1 = g_1, G_2 = g_2, \ldots, G_I = g_I \mid \mu] \tag{1.5} \\
&= \prod_{i=1}^{I} \Pr[G_i = g_i \mid \mu] \tag{1.6} \\
&= \prod_{i=1}^{I} \frac{\exp\!\Big(-\sum_{l=-L}^{L} h_l \mu_{i_l}\Big)\Big(\sum_{l=-L}^{L} h_l \mu_{i_l}\Big)^{g_i}}{g_i!}, \tag{1.7}
\end{align}
where $G = [G_1, G_2, \ldots, G_I]$. Therefore, maximum likelihood (ML) estimates for the unknown mean values $\{\mu_i\}$, which we now refer to as reflectance means, can be obtained by maximizing the likelihood function in (1.7) with respect to $\mu$ under the constraint $\mu \geq 0$. Equivalently, ML estimates can be determined by minimizing the negative log likelihood function $N$, defined below, for $\mu \geq 0$:
\begin{align}
N(\mu) &\triangleq -\log \Pr[G = g \mid \mu] \tag{1.8} \\
&= -\sum_{i=1}^{I} g_i \log\!\Big(\sum_{l=-L}^{L} h_l \mu_{i_l}\Big) + \sum_{i=1}^{I}\sum_{l=-L}^{L} h_l \mu_{i_l} + \sum_{i=1}^{I} \log g_i!. \tag{1.9}
\end{align}
Stated succinctly, ML estimates for the reflectance parameters are obtained by solving the following optimization problem:
\[
\hat{\mu} = \arg\min_{\mu \geq 0} N(\mu). \tag{1.10}
\]
Shepp and Vardi developed the Maximum Likelihood Expectation Maximization (MLEM) algorithm for solving (1.10). Their derivation of the MLEM algorithm will be provided in Chapter 2.
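The negative log likelihood in (1.9) is straightforward to evaluate numerically. The sketch below is not from the thesis; it computes $N$ for given vectors of observed counts $g$ and model means $\lambda_i = \langle h, \mu_i \rangle$, using `math.lgamma` for the $\log g_i!$ term:

```python
import math

def neg_log_likelihood(g, lam):
    """Negative Poisson log likelihood (1.9):
    -sum_i g_i log(lam_i) + sum_i lam_i + sum_i log(g_i!).

    g   -- list of observed (integer) counts
    lam -- list of mean values lambda_i = <h, mu_i>, all strictly positive
    """
    return sum(-gi * math.log(li) + li + math.lgamma(gi + 1)
               for gi, li in zip(g, lam))

# Each term is minimized when lam_i = g_i, so perturbing lam away from g
# can only increase N.
g = [3, 7, 2]
n_best = neg_log_likelihood(g, [3.0, 7.0, 2.0])
n_pert = neg_log_likelihood(g, [3.5, 6.0, 2.5])
```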
1.5.3. Outline of Maximum Likelihood Blind Image Deconvolution Algorithm

This section outlines a BID algorithm for the case where the PSF is unknown. Keeping in mind the discussion in Section 1.5.2, ML estimates for the reflectance means $\{\mu_i\}$ and the PSF coefficients $\{h_l\}$ can be found by maximizing the likelihood function $\Pr[G = g \mid \mu, h]$ or, equivalently, minimizing the negative log likelihood function under the constraints $\mu \geq 0$ and $h \geq 0$:
\begin{align}
N(\mu, h) &\triangleq -\log \Pr[G = g \mid \mu, h] \tag{1.11} \\
&= -\sum_{i=1}^{I} g_i \log\!\Big(\sum_{l=-L}^{L} h_l \mu_{i_l}\Big) + \sum_{i=1}^{I}\sum_{l=-L}^{L} h_l \mu_{i_l} + \sum_{i=1}^{I} \log g_i!. \tag{1.12}
\end{align}
In other words, the ML estimates for the reflectance means and the PSF coefficients can be expressed as
\[
(\hat{\mu}, \hat{h}) = \arg\min_{\mu \geq 0,\, h \geq 0} N(\mu, h). \tag{1.13}
\]
Since solving (1.13) involves the challenging BID problem of estimating both the reflectance means and the underlying PSF from the blurred (i.e., observed) image, no straightforward solution exists. Therefore, we use an alternating minimization approach initially suggested by Holmes [18] and later used by other researchers as well (see [21], for example). Given the current estimates for the reflectance means and PSF coefficients, we obtain an improved estimate for the reflectance means by minimizing $N(\mu, h^{(n)})$ with respect to $\mu$, where $h^{(n)}$ is the current estimate of the PSF coefficients. The resulting estimate, $\mu^{(n+1)}$, is then used to obtain the next estimate for the PSF coefficients, $h^{(n+1)}$.

Summarizing, a modified alternating minimization for BID consists of the following steps. Given $\mu^{(0)} > 0$ and $h^{(0)} > 0$:

Step 1: Find $\mu^{(n+1)} \geq 0$ such that $N(\mu^{(n+1)}, h^{(n)}) \leq N(\mu^{(n)}, h^{(n)})$. (1.14)

Step 2: Find $h^{(n+1)} \geq 0$ such that $N(\mu^{(n+1)}, h^{(n+1)}) \leq N(\mu^{(n+1)}, h^{(n)})$. (1.15)

Step 3: Iterate between Step 1 and Step 2 until the chosen convergence criterion is met.
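Both descent steps can be realized with multiplicative Richardson-Lucy-type updates, each of which decreases $N$ by the majorization argument developed in Chapter 2. The sketch below is not from the thesis: it uses circular convolution for brevity (an assumption made here, rather than the reflect-extension of Section 1.5.1) and alternates a $\mu$-update with $h$ fixed and an $h$-update with $\mu$ fixed on a 1-D toy example:

```python
import math

def cconv(a, b):
    """Circular convolution of two equal-length sequences."""
    n = len(a)
    return [sum(a[(i - j) % n] * b[j] for j in range(n)) for i in range(n)]

def rl_step(x, kernel, g):
    """One multiplicative RL update of x, with the other factor held fixed."""
    lam = cconv(x, kernel)                          # current model means
    ratio = [gi / li for gi, li in zip(g, lam)]     # data / model
    # Correlate the ratio with the kernel (= convolve with flipped kernel).
    back = cconv(ratio, kernel[:1] + kernel[:0:-1])
    s = sum(kernel)
    return [xi * bi / s for xi, bi in zip(x, back)]

def nll(mu_est, h_est):
    # Negative log likelihood up to the constant sum_i log g_i! terms.
    lam = cconv(mu_est, h_est)
    return sum(li - (gi * math.log(li) if gi > 0 else 0.0)
               for gi, li in zip(g, lam))

# True signal and PSF (zero-padded to the same length), noiseless data.
mu_true = [0.0, 4.0, 9.0, 1.0, 0.0, 0.0]
h_true = [0.5, 0.25, 0.0, 0.0, 0.0, 0.25]
g = cconv(mu_true, h_true)

mu, h = [1.0] * 6, [1.0 / 6] * 6
n_start = nll(mu, h)
for _ in range(200):
    mu = rl_step(mu, h, g)   # Step 1: update reflectance means
    h = rl_step(h, mu, g)    # Step 2: update PSF coefficients
n_end = nll(mu, h)
```

Because each sub-step is itself an EM step for its own sub-problem, the objective value `n_end` after the loop is no larger than `n_start`.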
1.6. Outline of Penalized Maximum Likelihood Blind Image Deconvolution Algorithm

The optimization problem in (1.13) is ill-posed in nature, meaning that a unique optimal solution does not exist. The most widely used method for addressing the ill-posed nature of the optimization problem is to introduce regularizing penalty functions. Therefore, based on a priori information, we add characteristic penalty functions for both the reflectance parameters and the PSF coefficients. The Penalized Maximum Likelihood (PML) estimates for the reflectance means and the PSF coefficients can then be expressed as
\[
(\hat{\mu}, \hat{h}) = \arg\min_{\mu \geq 0,\, h \geq 0} N(\mu, h) + \beta L(\mu) + \alpha K(h), \tag{1.16}
\]
where $L$ and $K$ are the penalty functions for $\mu$ and $h$, respectively, and $\beta$ and $\alpha$ are the corresponding penalty parameters. In Chapter 3, under the assumption that the PSF is known, we consider the problem of minimizing the negative log likelihood function plus a penalty function for the reflectance parameters. The penalty function is used to address the ill-posed nature of the problem of simply minimizing the negative log likelihood function.

Unfortunately, a closed form solution for the PML optimization problem does not exist, so iterative optimization techniques must be employed. The proposed PML algorithm is based on the iterative majorization technique [30, 31, 35-37], in which, at each iteration, a surrogate called a majorizing function is constructed for the objective function; the majorizing function satisfies certain conditions and is easier to minimize. The minimizer of the majorizing function is then used as the next iterate.
For demonstration purposes, consider a one-dimensional minimization problem that does not have a closed form solution:
\[
\hat{t} = \arg\min_{t > 0} f(t). \tag{1.17}
\]
Suppose a function $f_m$ can be determined such that

(C1.1) $f_m(t, t^{(n)}) \geq f(t)$ for $t > 0$;

(C1.2) $f_m(t^{(n)}, t^{(n)}) = f(t^{(n)})$.

Then, in the iterative majorization technique, the next iterate is defined to be
\[
t^{(n+1)} = \arg\min_{t > 0} f_m(t, t^{(n)}). \tag{1.18}
\]
In this example, and more generally, the primary motivation for the iterative majorization technique is that it produces a sequence of iterates that monotonically decreases the objective function.
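To make the idea concrete, here is a small numerical sketch (not from the thesis). It minimizes a hypothetical objective $f(t) = \sqrt{t^2+1} + (t-3)^2$, which has no closed-form minimizer, by majorizing the square-root term with the tangent of the concave function $\sqrt{x}$ at $x_n = t_n^2 + 1$; each surrogate is then a quadratic in $t$ whose minimizer is available in closed form, and the objective value never increases:

```python
import math

def f(t):
    # Hypothetical objective with no closed-form minimizer.
    return math.sqrt(t * t + 1.0) + (t - 3.0) ** 2

def mm_step(t_n):
    """Minimize the quadratic majorizer of f built at t_n.

    sqrt(x) <= sqrt(x_n) + (x - x_n) / (2 sqrt(x_n))  (tangent of a concave
    function), with x = t^2 + 1, gives the surrogate
        f_m(t; t_n) = t^2 / (2 s) + (t - 3)^2 + const,   s = sqrt(t_n^2 + 1),
    which satisfies (C1.1)-(C1.2) and is minimized at t = 6 s / (1 + 2 s).
    """
    s = math.sqrt(t_n * t_n + 1.0)
    return 6.0 * s / (1.0 + 2.0 * s)

t = 0.5                       # strictly positive initial estimate
values = [f(t)]
for _ in range(30):
    t = mm_step(t)
    values.append(f(t))       # monotonically non-increasing sequence
```

At convergence, the iterate satisfies the stationarity condition $t/\sqrt{t^2+1} + 2(t-3) = 0$.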
In summary, a modified alternating minimization for BID using PML-based algorithms consists of the following steps. Given $\mu^{(0)} > 0$ and $h^{(0)} > 0$:

Step 1: Find $\mu^{(n+1)} \geq 0$ such that $N(\mu^{(n+1)}, h^{(n)}) + \beta L(\mu^{(n+1)}) \leq N(\mu^{(n)}, h^{(n)}) + \beta L(\mu^{(n)})$. (1.19)

Step 2: Find $h^{(n+1)} \geq 0$ such that $N(\mu^{(n+1)}, h^{(n+1)}) + \alpha K(h^{(n+1)}) \leq N(\mu^{(n+1)}, h^{(n)}) + \alpha K(h^{(n)})$. (1.20)

Step 3: Iterate between Step 1 and Step 2 until the chosen convergence criterion is met.

Figure 1.4. A 1-D illustration of the iterative majorization technique. At each iteration, a majorizing function is obtained and minimized. The minimizer of the majorizing function is used as the next iterate of the algorithm.

In Chapter 2, we derive the MLEM algorithm and, in Chapter 3, a PML algorithm with a class of edge-preserving penalty functions, both for estimating the reflectance parameters under the assumption that the underlying PSF responsible for the degradation of the true image is known. In Chapter 4, we derive a BID algorithm based on the modified alternating minimization technique using the MLEM algorithm. In Chapter 5, we derive the proposed alternating BID algorithm, which is based on a PML algorithm. In the proposed method, we bring together the ideas discussed and/or used in the previous chapters and also derive a PML algorithm and associated penalty functions for estimating the PSF.

Summarizing the proposed BID algorithm, given strictly positive initial estimates $\mu^{(0)} > 0$ and $h^{(0)} > 0$, the steps for $n = 1, 2, \ldots$ are as follows:

Step 1: Let $\mu^{(0)} > 0$ and $h^{(0)} > 0$ be the initial estimates.

Step 2: Estimate $\mu^{(n)}$ using $h^{(n-1)}$.

Step 3: Estimate $h^{(n)}$ using $\mu^{(n)}$.

Step 4: Iterate between Steps 2 and 3 until the chosen convergence criterion is met.

At this point, it is important to note that Step 4 describes the main iteration of the algorithm, which includes Steps 2 and 3 within a single iteration. Although one can choose how many inner iterations to run for each sub-step (i.e., Steps 2 and 3), for practical reasons, a single sub-iteration usually suffices within each outer iteration.
Chapter 2. Maximum Likelihood Expectation Maximization Image Deconvolution Algorithm

In this chapter, we present an ML algorithm for estimating the reflectance means $\{\mu_j\}$ under the assumption that the PSF coefficients $\{h_l\}$ are known.

Recall from Section 1.5.2 that the negative log likelihood function is given by
\begin{align}
N(\mu) &\triangleq -\log \Pr[G = g \mid \mu] \tag{2.1} \\
&= -\sum_{i=1}^{I} g_i \log\!\Big(\sum_{l=-L}^{L} h_l \mu_{i_l}\Big) + \sum_{i=1}^{I}\sum_{l=-L}^{L} h_l \mu_{i_l} + \sum_{i=1}^{I} \log g_i! \tag{2.2} \\
&= -\sum_{i=1}^{I} g_i \log \langle h, \mu_i \rangle + \sum_{i=1}^{I} \langle h, \mu_i \rangle + \sum_{i=1}^{I} \log g_i!, \tag{2.3}
\end{align}
where $\langle h, \mu_i \rangle \triangleq \sum_{l=-L}^{L} h_l \mu_{i_l}$. Therefore, ML estimates of the reflectance parameters are obtained by determining a solution to the following optimization problem:
\[
\text{(P1)} \quad \hat{\mu}_{\mathrm{ML}} = \arg\min_{\mu \geq 0} N(\mu). \tag{2.4}
\]
Unfortunately, a closed form solution to problem (P1) does not exist. Consequently, iterative optimization techniques must be employed. In 1982, Shepp and Vardi used the expectation maximization algorithm [38] to develop an iterative algorithm for solving an optimization problem in positron emission tomography that is essentially identical to problem (P1) [22]. Their algorithm is known as the Maximum Likelihood Expectation Maximization (MLEM) algorithm.³ Later, De Pierro demonstrated that the MLEM algorithm could be derived using certain functions known as majorizing functions. We use De Pierro's approach to derive an algorithm for solving (P1).

³The Richardson-Lucy algorithm, also known as the Richardson-Lucy deconvolution algorithm, is an iterative procedure for recovering an image that has been blurred by a known point spread function [39], [40].

Consider an arbitrary $I \times 1$ vector $\mu' > 0$ and a function $N_m$ that satisfies the following conditions:
(C2.1) $N_m(\mu, \mu') \geq N(\mu)$ for all $\mu \geq 0$;

(C2.2) $N_m(\mu', \mu') = N(\mu')$ for $\mu' > 0$.

The function $N_m(\,\cdot\,, \mu')$ is said to be a majorizing function for the function $N$ at the point $\mu'$. Given an initial estimate for $\mu$ and a majorizing function $N_m$, an iterative algorithm for solving (P1) is
\[
\text{(P2)} \quad \mu^{(n+1)} = \arg\min_{\mu \geq 0} N_m(\mu, \mu^{(n)}), \tag{2.5}
\]
where $\mu^{(n)} > 0$ is the current estimate for $\mu$. The algorithm described above is known by several names, including optimization transfer, iterative majorization, and majorize-minimize (MM) algorithms.

We will now use properties (C2.1) and (C2.2) to show that the iterative majorization algorithm (2.5) monotonically decreases the negative log likelihood function with increasing iterations. The inequality $N(\mu^{(n+1)}) \leq N_m(\mu^{(n+1)}, \mu^{(n)})$ follows from (C2.1), and by (C2.2), the equality $N_m(\mu^{(n)}, \mu^{(n)}) = N(\mu^{(n)})$ holds. Now, from (2.5) it follows that $N_m(\mu^{(n+1)}, \mu^{(n)}) \leq N_m(\mu^{(n)}, \mu^{(n)})$. Therefore, we get the desired result:
\[
N(\mu^{(n+1)}) \leq N_m(\mu^{(n+1)}, \mu^{(n)}) \leq N_m(\mu^{(n)}, \mu^{(n)}) = N(\mu^{(n)}). \tag{2.6}
\]
We will now explicitly determine a majorizing function for the negative log likelihood function $N$ by exploiting the concavity of the log function. It will be convenient to express the negative log likelihood function as follows:
\[
N(\mu) = -\sum_{i=1}^{I} g_i \log\!\Bigg(\sum_{l=-L}^{L} h_l \mu_{i_l} \frac{\mu^{(n)}_{i_l}}{\mu^{(n)}_{i_l}} \frac{\langle h, \mu^{(n)}_i \rangle}{\langle h, \mu^{(n)}_i \rangle}\Bigg) + \sum_{i=1}^{I} \langle h, \mu_i \rangle + \sum_{i=1}^{I} \log g_i!, \tag{2.7}
\]
where $\langle h, \mu^{(n)}_i \rangle \triangleq \sum_{k=-L}^{L} h_k \mu^{(n)}_{i_k}$ and $\mu^{(n)} > 0$. Continuing, we express the negative log likelihood function as
\[
N(\mu) = -\sum_{i=1}^{I} g_i \log\!\Bigg(\sum_{l=-L}^{L} d^{(n)}_{i_l} \langle h, \mu^{(n)}_i \rangle \frac{\mu_{i_l}}{\mu^{(n)}_{i_l}}\Bigg) + \sum_{i=1}^{I} \langle h, \mu_i \rangle + \sum_{i=1}^{I} \log g_i!, \tag{2.8}
\]
where $d^{(n)}_{i_l} \triangleq \dfrac{h_l \mu^{(n)}_{i_l}}{\langle h, \mu^{(n)}_i \rangle}$. Using the concavity of the log function [32, pp. 860-862] and the fact that $\sum_{l=-L}^{L} d^{(n)}_{i_l} = 1$, we obtain the following inequality:
\[
\log\!\Bigg(\sum_{l=-L}^{L} d^{(n)}_{i_l} \langle h, \mu^{(n)}_i \rangle \frac{\mu_{i_l}}{\mu^{(n)}_{i_l}}\Bigg) \geq \sum_{l=-L}^{L} d^{(n)}_{i_l} \log\!\Bigg(\langle h, \mu^{(n)}_i \rangle \frac{\mu_{i_l}}{\mu^{(n)}_{i_l}}\Bigg). \tag{2.9}
\]
Therefore, it follows that
\[
N(\mu) \leq -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d^{(n)}_{i_l} \log\!\Bigg(\langle h, \mu^{(n)}_i \rangle \frac{\mu_{i_l}}{\mu^{(n)}_{i_l}}\Bigg) + \sum_{i=1}^{I} \langle h, \mu_i \rangle + \sum_{i=1}^{I} \log g_i!. \tag{2.10}
\]
We will now show that the function on the RHS of (2.10) is a majorizing function for the negative log likelihood function $N$.

Let $N_m(\,\cdot\,, \mu^{(n)})$ denote the function on the RHS of (2.10):
\[
N_m(\mu, \mu^{(n)}) = -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d^{(n)}_{i_l} \log\!\Bigg(\langle h, \mu^{(n)}_i \rangle \frac{\mu_{i_l}}{\mu^{(n)}_{i_l}}\Bigg) + \sum_{i=1}^{I} \langle h, \mu_i \rangle + \sum_{i=1}^{I} \log g_i!. \tag{2.11}
\]
Clearly, by construction, the function $N_m(\,\cdot\,, \mu^{(n)})$ satisfies (C2.1) for $\mu^{(n)} > 0$. We will now show that condition (C2.2) is also satisfied. Using straightforward calculations, it can be seen that
\[
N_m(\mu^{(n)}, \mu^{(n)}) = -\sum_{i=1}^{I} g_i \log \langle h, \mu^{(n)}_i \rangle \sum_{l=-L}^{L} d^{(n)}_{i_l} + \sum_{i=1}^{I} \langle h, \mu^{(n)}_i \rangle + \sum_{i=1}^{I} \log g_i!. \tag{2.12}
\]
Since $\sum_{l=-L}^{L} d^{(n)}_{i_l} = 1$, it follows that
\[
N_m(\mu^{(n)}, \mu^{(n)}) = -\sum_{i=1}^{I} g_i \log \langle h, \mu^{(n)}_i \rangle + \sum_{i=1}^{I} \langle h, \mu^{(n)}_i \rangle + \sum_{i=1}^{I} \log g_i! = N(\mu^{(n)}). \tag{2.13}
\]
Thus, the function $N_m(\,\cdot\,, \mu^{(n)})$ satisfies (C2.2).
At this point, we have constructed a majorizing function for the negative log likelihood function $N$. To obtain the next iterate $\mu^{(n+1)}$, the majorizing function for $N$ at the point $\mu^{(n)} > 0$ must be minimized (see (2.5)). In the remainder of the chapter, we derive an explicit expression for $\mu^{(n+1)}$ by initially ignoring the constraint $\mu \geq 0$ and solving the system of equations
\[
\frac{\partial}{\partial \mu_j} N_m(\mu, \mu^{(n)}) = 0, \quad j = 1, 2, \ldots, J. \tag{2.14}
\]
Straightforward calculations lead to the following expression for the derivative of $N_m(\,\cdot\,, \mu^{(n)})$ with respect to $\mu_j$:
\begin{align}
\frac{\partial}{\partial \mu_j} N_m(\mu, \mu^{(n)}) &= -\frac{\partial}{\partial \mu_j} \sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d^{(n)}_{i_l} \Big(\log \langle h, \mu^{(n)}_i \rangle + \log \mu_{i_l} - \log \mu^{(n)}_{i_l}\Big) \notag \\
&\quad + \frac{\partial}{\partial \mu_j} \sum_{i=1}^{I} \langle h, \mu_i \rangle + \frac{\partial}{\partial \mu_j} \sum_{i=1}^{I} \log g_i! \tag{2.15} \\
&= -\frac{\partial}{\partial \mu_j} \sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d^{(n)}_{i_l} \log \mu_{i_l} + \frac{\partial}{\partial \mu_j} \sum_{i=1}^{I}\sum_{l=-L}^{L} h_l \mu_{i_l}. \tag{2.16}
\end{align}
To complete the calculation, we must consider the values of $i$ and $l$ such that $i_l = j$. For $j = 1, 2, \ldots, J$, let $S_j \triangleq \{(i,l) : i_l = j\}$. It will be convenient to denote the elements of $S_j$ by the ordered pairs $(a_{j1}, b_{j1}), (a_{j2}, b_{j2}), \ldots, (a_{jT}, b_{jT})$, where $T \triangleq 2L+1$ is the number of PSF coefficients. Using this notation, we can express the derivative of $N_m(\,\cdot\,, \mu^{(n)})$ with respect to $\mu_j$ as follows:
\[
\frac{\partial}{\partial \mu_j} N_m(\mu, \mu^{(n)}) = -\frac{\partial}{\partial \mu_j} \Bigg(\sum_{t=1}^{T} g_{a_{jt}} d^{(n)}_{a_{jt} b_{jt}} \log \mu_j\Bigg) + \frac{\partial}{\partial \mu_j} \sum_{t=1}^{T} h_{b_{jt}} \mu_j. \tag{2.17}
\]
Note that $a_{jt}$ represents the index of a pixel, whereas $b_{jt}$ represents the index of a PSF coefficient.
Now, we will discuss the members of the set $S_j$ in more detail. The key idea is that the model represented by (1.2) is equivalent to the two-dimensional convolution model (1.1) when the two-dimensional PSF mask (i.e., $h[-m,-n]$) centered at a pixel $i$ lies within the spatial extent of $\mu(m,n)$. Pixels that satisfy this condition are said to be interior pixels. For pixels that do not meet this condition, an alternative model is used, and they are referred to as border pixels. The alternative model used for border pixels helps address the well-known "border problem" in image deconvolution. Finding the members of $S_j$ requires moving the PSF mask around the image and centering it at each pixel location $i$. If there is an $i$ value and a PSF coefficient index $l$ such that $i_l = j$, then $(i,l) \in S_j$. We will now explicitly demonstrate this point by considering examples for the two cases.

Interior Pixel Case:

When the pixel $j$ does not lie on the border of the image, the PSF mask lies entirely within the boundary of the image and all the elements of $S_j$ exist in the neighborhood of $j$. We will now demonstrate this case by finding the members of $S_j$ for $j = 42$, which serves as an example of a pixel that does not lie on the border. Consider a 9 by 9 image and a $3 \times 3$ PSF. Finding $S_j$ requires moving the PSF around the image, centering the mask at each $i$ value. If there is an $i$ and an associated $l$ value for which $i_l = 42$, then $(i,l)$ is an element of $S_{42}$.

Figure 2.1. A 9 by 9 example image for visualizing the elements of $S_j$ when the PSF mask is centered at the interior pixel $j = 42$.

From Figure 2.1, we see that when the mask is centered at $i = 42$, then $i = 42$ and $l = 0$ correspond to $j = 42$ such that $i_l = j$, which means that $(42, 0) \in S_{42}$. Note that for all $i$ and $j$, if $i = j$, then $i_0 = j$ and $(i, 0)$ is always an element of $S_j$. In Figure 2.2 below, we elaborate on the example above and show in more detail the elements of $S_{42}$ when the mask is not centered at the pixel in question. When the mask is centered at $i = 41$, as shown in Figure 2.2 (a), $j = 42$ corresponds to $i = 41$ and $l = 1$ such that $i_l = j$. Similarly, by centering the mask at $i = 43$, Figure 2.2 (b) shows that $i = 43$ and $l = -1$ satisfy $i_l = j$. Also, Figures 2.2 (c) and (d) tell us that $(32, 4) \in S_{42}$ and $(52, -4) \in S_{42}$, respectively. If we keep centering the mask at each $i$ value, we finally arrive at⁴ $S_{42} = \{(32, 4), (33, 3), (34, 2), (41, 1), (42, 0), (43, -1), (50, -2), (51, -3), (52, -4)\}$. Therefore, the ordered-pairs representation of $S_{42}$ corresponds to $a_{42,1} = 32$, $b_{42,1} = 4$, $a_{42,2} = 33$, $b_{42,2} = 3$, ..., $a_{42,T} = 52$, $b_{42,T} = -4$.

⁴It should be noticed that the PSF indices appear in reverse order when the $l$ values of $S_{42}$ are viewed separately. This reverse ordering is the basis for solving the inverse problem.

Figure 2.2. Examples showing some of the elements of $S_{42}$ when the PSF mask is not centered at the interior pixel $j = 42$: (a) $i = 41$ and $l = 1$, (b) $i = 43$ and $l = -1$, (c) $i = 32$ and $l = 4$, and (d) $i = 52$ and $l = -4$.

Border Pixel Case:

When $j$ is at the border of the image, a portion of the PSF mask lies outside of the range of the image and some members of $S_j$ do not exist within the boundary of the image. To deal with this border problem, we symmetrically reflect-extend the image as discussed in Section 1.5.1. We will now illustrate the use of the reflect-extend method with a $9 \times 9$ example image and a $3 \times 3$ PSF to find $S_j$ when $j = 1$.

Figure 2.3. A 9 by 9 example image for visualizing the elements of $S_j$ when the PSF mask is centered at the border pixel $j = 1$.

From Figure 2.3, we see that when the mask is centered at $i = 1$, then $i = 1$ and $l = 0$ correspond to $j = 1$ such that $i_l = j$, which means that $(1, 0) \in S_1$. In Figure 2.4 we give examples of the $(i, l)$ elements of $S_1$ when the mask is not centered at $j = 1$. When the mask is centered one pixel to the left of $j = 1$, as shown in Figure 2.4 (a), $j = 1$ corresponds to $i = 1$ and $l = 1$ from the extended section of the image such that $i_l = j$. Similarly, by centering the mask one pixel to the right at $i = 2$, Figure 2.4 (b) shows that $i = 2$ and $l = -1$ correspond to $j$ such that $i_l = j$. Also, Figures 2.4 (c) and (d) tell us that $(1, 4) \in S_1$ and $(11, -4) \in S_1$, respectively. If we keep centering the mask at each $i$ value, we finally arrive at $S_1 = \{(1, 4), (1, 3), (2, 2), (1, 1), (1, 0), (2, -1), (10, -2), (10, -3), (11, -4)\}$. Therefore, for $j = 1$, the ordered-pairs representation of $S_1$ is given by $a_{1,1} = 1$, $b_{1,1} = 4$, $a_{1,2} = 1$, $b_{1,2} = 3$, ..., $a_{1,T} = 11$, $b_{1,T} = -4$.

Figure 2.4. Examples showing some of the elements of $S_j$ when the PSF mask is not centered at the border pixel $j = 1$: (a) $i = 1$ and $l = 1$, (b) $i = 2$ and $l = -1$, (c) $i = 1$ and $l = 4$, and (d) $i = 11$ and $l = -4$.
From (2.17) and the discussion of $S_j$ above, it follows that the derivative of the majorizing function reduces to the following expression:
\begin{align}
\frac{\partial}{\partial \mu_j} N_m(\mu, \mu^{(n)}) &= -\frac{\partial}{\partial \mu_j} \Bigg(\sum_{t=1}^{T} g_{a_{jt}} d^{(n)}_{a_{jt} b_{jt}} \log \mu_j\Bigg) + \frac{\partial}{\partial \mu_j} \sum_{t=1}^{T} h_{b_{jt}} \mu_j \tag{2.18} \\
&= -\sum_{t=1}^{T} g_{a_{jt}} d^{(n)}_{a_{jt} b_{jt}} \frac{1}{\mu_j} + \sum_{t=1}^{T} h_{b_{jt}}. \tag{2.19}
\end{align}
A necessary condition for the minimum of $N_m(\,\cdot\,, \mu^{(n)})$ is
\[
\frac{\partial}{\partial \mu_j} N_m(\mu, \mu^{(n)}) \bigg|_{\mu_j = \mu^{(n+1)}_j} = 0, \quad j = 1, 2, \ldots, J, \tag{2.20}
\]
or, equivalently,
\[
-\frac{\sum_{t=1}^{T} g_{a_{jt}} d^{(n)}_{a_{jt} b_{jt}}}{\mu^{(n+1)}_j} + \sum_{t=1}^{T} h_{b_{jt}} = 0. \tag{2.21}
\]
Therefore, since $d^{(n)}_{i_l} \triangleq \dfrac{h_l \mu^{(n)}_{i_l}}{\langle h, \mu^{(n)}_i \rangle}$, the next iterate $\mu^{(n+1)}_j$ can be expressed as
\begin{align}
\mu^{(n+1)}_j &= \frac{\sum_{t=1}^{T} g_{a_{jt}} d^{(n)}_{a_{jt} b_{jt}}}{\sum_{t=1}^{T} h_{b_{jt}}} \tag{2.22} \\
&= \frac{\displaystyle\sum_{t=1}^{T} g_{a_{jt}} \frac{h_{b_{jt}} \mu^{(n)}_j}{\langle h, \mu^{(n)}_{a_{jt}} \rangle}}{\displaystyle\sum_{t=1}^{T} h_{b_{jt}}}. \tag{2.23}
\end{align}
Using the fact that $\sum_{t=1}^{T} h_{b_{jt}} = \sum_{l=-L}^{L} h_l$, the iterative majorization algorithm for obtaining the ML reflectance parameter estimates is
\[
\mu^{(n+1)}_j = \mu^{(n)}_j \frac{1}{\sum_{l=-L}^{L} h_l} \sum_{t=1}^{T} \frac{g_{a_{jt}} h_{b_{jt}}}{\langle h, \mu^{(n)}_{a_{jt}} \rangle}, \quad j = 1, 2, \ldots, J. \tag{2.24}
\]
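As a concrete one-dimensional illustration of the multiplicative update (2.24), the sketch below (not from the thesis) applies the MLEM iteration to a short signal with a known three-tap PSF, handling the borders by symmetric reflect-extension as in Section 1.5.1. The PSF here is chosen symmetric, an assumption that makes correlating the reflect-extended ratio with $h$ coincide exactly with the back-projection sum over $S_j$; by (2.6), the negative log likelihood never increases:

```python
import math

def forward(mu, h):
    """lambda_i = <h, mu_i> with symmetric reflect-extension at the borders."""
    r = (len(h) - 1) // 2
    ext = mu[r - 1::-1] + mu + mu[:len(mu) - 1 - r:-1]
    return [sum(h[l] * ext[i + l] for l in range(len(h)))
            for i in range(len(mu))]

def mlem_step(mu, h, g):
    """One MLEM iteration (2.24); exact for the symmetric PSF used here."""
    lam = forward(mu, h)
    ratio = [gi / li for gi, li in zip(g, lam)]
    back = forward(ratio, h)      # correlation == convolution (h symmetric)
    s = sum(h)
    return [m * b / s for m, b in zip(mu, back)]

# Noiseless example: a known signal blurred by a symmetric three-tap PSF.
h = [0.25, 0.5, 0.25]
mu_true = [2.0, 5.0, 3.0, 1.0, 4.0]
g = forward(mu_true, h)

def nll(mu_est):
    # Negative log likelihood up to the constant sum_i log g_i! terms.
    lam = forward(mu_est, h)
    return sum(li - gi * math.log(li) for gi, li in zip(g, lam))

mu = [1.0] * 5
history = [nll(mu)]
for _ in range(100):
    mu = mlem_step(mu, h, g)
    history.append(nll(mu))       # monotonically non-increasing, per (2.6)
```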
Chapter 3. Penalized Maximum Likelihood Image Deconvolution Algorithm

As discussed in Section 1.4, the MLEM algorithm developed in Chapter 2 can be used to restore/deblur hyperspectral images that suffer from many degradation factors, such as atmospheric blur, motion blur, and sensor noise (see [22] for usage with PET data). However, the problem of restoring hyperspectral images is ill-posed in nature due to the fact that the data obey Poisson statistics. Therefore, a unique optimal solution does not exist for the convex objective function in (1.10). As a result, the MLEM algorithm fails to recover the true image: after a certain number of iterations, the estimate becomes increasingly noisy. The most widely used method for addressing the ill-posed nature of the minimization problem is to introduce a penalty function. In the area of medical imaging, especially in Positron Emission Tomography (PET) image reconstruction, certain types of penalty functions are used for noise reduction. These penalty functions are designed so that estimates of neighboring pixels are forced to be similar in value unless there is an edge between neighbors. An edge occurs in an image wherever a group of connected pixels has activity distinctly different from that of the neighboring pixels. In a monochrome image, like the hyperspectral images used in this thesis, an edge can be identified by the difference in pixel values (i.e., contrast). The hyperspectral image restoration problem is similar to the PET image reconstruction problem in that the data obey Poisson statistics in both problems. Hence, the edge-preserving penalty functions developed for PET yield good results in the hyperspectral image restoration problem. Since, in the context of our research, restoring hyperspectral images matters mainly for the identification of anomalies in the image, it is extremely important to use a penalty function that reduces the noise while preserving the edges of the anomalies (landmines).

In Section 3.1, using majorizing functions, we will introduce a penalized maximum likelihood (PML) algorithm for image deconvolution.
3.1. Penalized Maximum Likelihood Algorithm Via Majorizing Functions

We now present a modification of an algorithm developed by Chang and Anderson [2] for PET that provides PML estimates of the reflectance means $\{\mu_j\}$ under the assumption that the PSF coefficients are known. Our derivation closely follows theirs, except that we use the model in (1.2). Recall from Section 1.5.2 that the negative log likelihood function is given by
\begin{align}
N(\mu) &\triangleq -\log \Pr[G = g \mid \mu] \tag{3.1} \\
&= -\sum_{i=1}^{I} g_i \log\!\Big(\sum_{l=-L}^{L} h_l \mu_{i_l}\Big) + \sum_{i=1}^{I}\sum_{l=-L}^{L} h_l \mu_{i_l} + \sum_{i=1}^{I} \log g_i!. \tag{3.2}
\end{align}
For a user-specified penalty function $L$, PML estimates of the reflectance parameters are obtained by solving the following optimization problem:
\[
\text{(P3)} \quad \hat{\mu}_{\mathrm{PML}} = \arg\min_{\mu \geq 0} P(\mu), \tag{3.3}
\]
where the PML objective function $P$ is given by
\[
P(\mu) \triangleq N(\mu) + \beta L(\mu). \tag{3.4}
\]
The constant $\beta$, which is called the penalty parameter, controls the degree of influence of the penalty function. Ideally, the penalty function $L$ is edge-preserving in the sense that it forces neighboring pixels to have similar values, unless they occur at an edge. Like a number of researchers, we consider penalty functions of the form
\[
L(\mu) = \sum_{i=1}^{I} \sum_{k \in B_i} w_{ik}\, \psi(\mu_i, \mu_k), \tag{3.5}
\]
where $\psi$ is a cost function and $B_i$ is a set of pixels in the neighborhood of the $i$th pixel that excludes the $i$th pixel itself. Further, the neighborhoods $\{B_i\}$ are defined in such a way that, for a pixel $k$ in $B_i$, it is always true that pixel $i$ is in $B_k$. In our formulation, $B_i$ is the set of eight nearest neighbors of pixel $i$. The weight $w_{ik}$ is chosen to be inversely proportional to the distance between pixels $i$ and $k$.
Let $\psi(u, v) \triangleq \phi(u - v)$, where the function $\phi$ satisfies the following assumptions:

(AS1) $\phi(t)$ is symmetric;

(AS2) $\phi(t)$ is differentiable everywhere;

(AS3) $\dot{\phi}(t) \triangleq \frac{d}{dt}\phi(t)$ is increasing for all $t$;

(AS4) $e(t) \triangleq \dot{\phi}(t)/t$ is a non-increasing function for all $t > 0$;

(AS5) $e(0) \triangleq \lim_{t \to 0} e(t)$ is finite and nonzero;

(AS6) $\phi(t)$ is bounded from below.

Note that assumption (AS3) implies that $\phi$ is a strictly convex function, and assumption (AS6) implies that $L(\mu)$ is bounded from below. Functions that satisfy (AS1)-(AS6) include Green's log-cosh function [27], $\phi(t) = \log(\cosh(t))$, and the quadratic function $\phi(t) = t^2$.
Since a closed form solution for the minimization problem (P3) cannot be found, we use the iterative majorization method to minimize the PML objective function $P$. The PML objective function in (3.4) has two components: the negative log likelihood function $N$ and the penalty function $L$. To determine a majorizing function for $P$, we use De Pierro's majorizing function for $N$ and the majorizing function for $L$ developed in [2].

Now, we introduce a majorizing function for the penalty function $L$. In [1], Huber developed a majorizing function for $\phi$ under the assumptions (AS1)-(AS6). Given an arbitrary point $t^{(n)}$, Huber's majorizing function for $\phi$ is defined by
\[
\bar{\phi}_m(t, t^{(n)}) \triangleq \phi(t^{(n)}) + \dot{\phi}(t^{(n)})\big(t - t^{(n)}\big) + \tfrac{1}{2} e(t^{(n)}) \big(t - t^{(n)}\big)^2, \tag{3.6}
\]
where $e(t) \triangleq \dot{\phi}(t)/t$. The majorizing function has the following three properties:

(C3.1) $\bar{\phi}_m(t, t^{(n)}) \geq \phi(t)$ for all $t$;

(C3.2) $\bar{\phi}_m(t^{(n)}, t^{(n)}) = \phi(t^{(n)})$;

(C3.3) $\dot{\bar{\phi}}_m(t^{(n)}, t^{(n)}) = \dot{\phi}(t^{(n)})$.
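As a quick numerical check (not from the thesis), the sketch below builds Huber's majorizer (3.6) for Green's log-cosh function, for which $\dot{\phi}(t) = \tanh(t)$ and $e(t) = \tanh(t)/t$ with $e(0) = 1$, and verifies properties (C3.1) and (C3.2) on a grid of points:

```python
import math

def phi(t):
    # Green's log-cosh penalty.
    return math.log(math.cosh(t))

def e(t):
    # e(t) = phi_dot(t) / t, with the limit e(0) = 1 (finite and nonzero).
    return 1.0 if t == 0.0 else math.tanh(t) / t

def phi_m(t, t_n):
    """Huber's quadratic majorizer (3.6) of phi at the point t_n."""
    return (phi(t_n) + math.tanh(t_n) * (t - t_n)
            + 0.5 * e(t_n) * (t - t_n) ** 2)

t_n = 1.3
grid = [-3.0 + 0.1 * k for k in range(61)]
assert all(phi_m(t, t_n) >= phi(t) - 1e-12 for t in grid)   # (C3.1)
assert abs(phi_m(t_n, t_n) - phi(t_n)) < 1e-12              # (C3.2)
```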
Note that the dot over a function denotes its first derivative. Replacing $t$ by $\mu_i - \mu_k$ and $t^{(n)}$ by $\mu^{(n)}_i - \mu^{(n)}_k$, a majorizing function for $L$ is given by
\[
\bar{L}_m(\mu, \mu^{(n)}) = \sum_{i=1}^{I} \sum_{k \in B_i} w_{ik}\, \bar{\psi}_m\big(\mu_i, \mu_k; \mu^{(n)}_i, \mu^{(n)}_k\big), \tag{3.7}
\]
where $\bar{\psi}_m(u, v; u^{(n)}, v^{(n)}) \triangleq \bar{\phi}_m(u - v,\, u^{(n)} - v^{(n)})$. By the convexity of the square function, it is clear that the following inequality holds:
\begin{align}
\Big[\mu_i - \mu_k - \big(\mu^{(n)}_i - \mu^{(n)}_k\big)\Big]^2 &= \Big[\tfrac{1}{2}\big(2\mu_i - 2\mu^{(n)}_i\big) + \tfrac{1}{2}\big(2\mu^{(n)}_k - 2\mu_k\big)\Big]^2 \notag \\
&\leq \tfrac{1}{2}\big(2\mu_i - 2\mu^{(n)}_i\big)^2 + \tfrac{1}{2}\big(2\mu^{(n)}_k - 2\mu_k\big)^2. \tag{3.8}
\end{align}
Using Huber's majorizing function $\bar{\phi}_m$ [1] as a starting point, Chang and Anderson [2] used the above inequality to construct a separable majorizing function for $L$, defined as follows:
\begin{align}
\psi_m(\mu_i, \mu_k) &\triangleq \phi\big(\mu^{(n)}_i - \mu^{(n)}_k\big) + \dot{\phi}\big(\mu^{(n)}_i - \mu^{(n)}_k\big)\Big[\big(\mu_i - \mu^{(n)}_i\big) - \big(\mu_k - \mu^{(n)}_k\big)\Big] \notag \\
&\quad + \tfrac{1}{4} e\big(\mu^{(n)}_i - \mu^{(n)}_k\big)\Big[\big(2\mu_i - 2\mu^{(n)}_i\big)^2 + \big(2\mu_k - 2\mu^{(n)}_k\big)^2\Big]. \tag{3.9}
\end{align}
By construction, it is clear that the following statements hold: (1) $\psi_m(\mu_i, \mu_k) \geq \psi(\mu_i, \mu_k)$ for all $\mu_i \geq 0$ and $k \in B_i$; (2) $\psi_m\big(\mu^{(n)}_i, \mu^{(n)}_k\big) = \psi\big(\mu^{(n)}_i, \mu^{(n)}_k\big)$. The difference between $\psi_m$ and $\bar{\psi}_m$ is that $\psi_m$ is de-coupled, or separable, in the sense that $\psi_m$ does not contain coupled terms of the form $\mu_i \mu_k$. This de-coupled nature of $\psi_m$ will later help us find a closed form expression for the next iterate of the PML iteration. Given the advantages of $\psi_m$ over $\bar{\psi}_m$, the majorizing function for $L$ at the point $\mu^{(n)}$ that we use is
\[
L_m(\mu, \mu^{(n)}) = \sum_{i=1}^{I} \sum_{k \in B_i} w_{ik}\, \psi_m\big(\mu_i, \mu_k; \mu^{(n)}_i, \mu^{(n)}_k\big). \tag{3.10}
\]
Using the majorizing function $N_m$ for the negative log likelihood function $N$ (see (2.11)) and the majorizing function $L_m$ for the penalty function, the majorizing function for the PML objective function $P$ at the point $\mu^{(n)}$ is given by
\[
P_m(\mu, \mu^{(n)}) = N_m(\mu, \mu^{(n)}) + \beta L_m(\mu, \mu^{(n)}). \tag{3.11}
\]
From the properties of $N_m$ and $L_m$, it follows that the majorizing function $P_m$ satisfies:

(C3.4) $P_m(\mu, \mu^{(n)}) \geq P(\mu)$ for all $\mu \geq 0$;

(C3.5) $P_m(\mu^{(n)}, \mu^{(n)}) = P(\mu^{(n)})$ for $\mu^{(n)} > 0$.

Now that we have developed a majorizing function for our PML objective function $P$, given the previous iterate $\mu^{(n)}$, the next iterate $\mu^{(n+1)}$ can be found by minimizing the majorizing function $P_m$ at the point $\mu^{(n)}$. Stated mathematically, an iterative algorithm for solving (P3) is
\[
\text{(P4)} \quad \mu^{(n+1)} = \arg\min_{\mu \geq 0} P_m(\mu, \mu^{(n)}). \tag{3.12}
\]
We will now solve the minimization problem (P4). In [2], it is shown that $L_m$ can be written as
\[
L_m(\mu, \mu^{(n)}) = 2 \sum_{j=1}^{J} \sum_{k \in B_j} \delta^{(n)}_{jk}(\mu_j) + C^{(n)}_2, \tag{3.13}
\]
where
\begin{align}
\delta^{(n)}_{jk}(\mu_j) &\triangleq w_{jk}\, e\big(\mu^{(n)}_j - \mu^{(n)}_k\big) \big(\mu_j - \bar{\mu}^{(n)}_{jk}\big)^2, \tag{3.14} \\
\bar{\mu}^{(n)}_{jk} &\triangleq \frac{\mu^{(n)}_j + \mu^{(n)}_k}{2}, \tag{3.15} \\
C^{(n)}_2 &\triangleq \sum_{j=1}^{J} \sum_{k \in B_j} w_{jk} \Big[\phi\big(\mu^{(n)}_j - \mu^{(n)}_k\big) - \tfrac{1}{2} \dot{\phi}\big(\mu^{(n)}_j - \mu^{(n)}_k\big)\big(\mu^{(n)}_j - \mu^{(n)}_k\big)\Big]. \tag{3.16}
\end{align}
We can also rewrite the majorizing function for the negative log likelihood function in (2.11) as
\[
N_m(\mu, \mu^{(n)}) = -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d^{(n)}_{i_l} \log(\mu_{i_l}) + \sum_{i=1}^{I} \langle h, \mu_i \rangle + C^{(n)}_1, \tag{3.17}
\]
where
\[
C^{(n)}_1 \triangleq -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d^{(n)}_{i_l} \log\!\Bigg(\frac{\langle h, \mu^{(n)}_i \rangle}{\mu^{(n)}_{i_l}}\Bigg) + \sum_{i=1}^{I} \log g_i!. \tag{3.18}
\]
From (3.13) and (3.17), $P_m$ can be written as
\begin{align}
P_m(\mu, \mu^{(n)}) &= N_m(\mu, \mu^{(n)}) + \beta L_m(\mu, \mu^{(n)}) \tag{3.19} \\
&= \sum_{i=1}^{I} \Bigg(\langle h, \mu_i \rangle - g_i \sum_{l=-L}^{L} d^{(n)}_{i_l} \log(\mu_{i_l})\Bigg) + 2\beta \sum_{j=1}^{J} \sum_{k \in B_j} \delta^{(n)}_{jk}(\mu_j) + C^{(n)}_1 + C^{(n)}_2. \tag{3.20}
\end{align}
In order to simplify (3.20), we must find the $i$ and $l$ values such that $i_l = j$. From the discussion in Chapter 2 of the set $S_j \triangleq \{(i,l) : i_l = j\}$ (see the discussion preceding (2.18)), the majorizing function for the PML objective function can be written as follows:
\begin{align}
P_m(\mu, \mu^{(n)}) &= \sum_{j=1}^{J} \Bigg\{\sum_{t=1}^{T} h_{b_{jt}} \mu_j - \sum_{t=1}^{T} g_{a_{jt}} d^{(n)}_{a_{jt} b_{jt}} \log \mu_j\Bigg\} \notag \\
&\quad + 2\beta \sum_{j=1}^{J} \Bigg\{\sum_{k \in B_j} w_{jk}\, e\big(\mu^{(n)}_j - \mu^{(n)}_k\big) \Big(\mu_j^2 - 2\bar{\mu}^{(n)}_{jk}\mu_j + \big(\bar{\mu}^{(n)}_{jk}\big)^2\Big)\Bigg\} + C^{(n)}_1 + C^{(n)}_2 \tag{3.21} \\
&= \sum_{j=1}^{J} \Big\{X^{(n)}_j \log \mu_j + Y^{(n)}_j \mu_j^2 + Z^{(n)}_j \mu_j\Big\} + C^{(n)}_3, \tag{3.22}
\end{align}
where
\begin{align}
X^{(n)}_j &\triangleq -\sum_{t=1}^{T} g_{a_{jt}} d^{(n)}_{a_{jt} b_{jt}}, \tag{3.23} \\
Y^{(n)}_j &\triangleq 2\beta \sum_{k \in B_j} w_{jk}\, e\big(\mu^{(n)}_j - \mu^{(n)}_k\big), \tag{3.24} \\
Z^{(n)}_j &\triangleq \sum_{t=1}^{T} h_{b_{jt}} - 4\beta \sum_{k \in B_j} w_{jk}\, e\big(\mu^{(n)}_j - \mu^{(n)}_k\big)\, \bar{\mu}^{(n)}_{jk}, \tag{3.25} \\
C^{(n)}_3 &\triangleq C^{(n)}_1 + C^{(n)}_2 + 2\beta \sum_{i=1}^{I} \sum_{k \in B_i} w_{ik}\, e\big(\mu^{(n)}_i - \mu^{(n)}_k\big) \big(\bar{\mu}^{(n)}_{ik}\big)^2. \tag{3.26}
\end{align}
Since Pm
��;�.n/
�is de-coupled, as can be seen from (3.22), the solution to (3.12) is given
by
(P5) �.nC1/j D arg min
�>0Pm;j
��j
�; j D 1; 2; :::;J: (3.27)
where Pm;j is defined to be
Pm;j .t/ , X.n/
j log .t/C Y.n/
j t2 CZ.n/j t . (3.28)
Before we solve (P5), let us first show that the function $P_{m,j}$ is strictly convex for all $j$ and $n$ under the assumption that $\lambda_j^{(n)} > 0$ for all $j$ and $n$. In [2], a similar function was proven to be strictly convex by showing that its second derivative is positive when $\lambda_j^{(n)} > 0$ for all $j$ and $n$. For completeness and the reader's convenience, we now provide a proof that mimics theirs. First note that $X_j^{(n)}$ is negative and $Y_j^{(n)}$ is positive for all $j$ and $n$. The fact that the function $e$, the weights $\{w_{jk}\}$, and $\beta$ are all positive is the reason why $Y_j^{(n)} > 0$. Recall from (AS1) and (AS3) that $\phi(t)$ is a symmetric and strictly convex function. It follows that $\dot{\phi}(t) > 0$ over $(0,\infty)$ and $\dot{\phi}(t) < 0$ over $(-\infty,0)$. Using the fact that $e(0)$ is finite and nonzero (see (AS5)), we have that $e(t) > 0$ for $-\infty < t < \infty$. With straightforward calculations, the second derivative of $P_{m,j}$ can now be shown to be $\ddot{P}_{m,j}(t) = -X_j^{(n)}/t^2 + 2Y_j^{(n)}$, where the double dot over a function denotes its second derivative. Since $Y_j^{(n)} > 0$ and $X_j^{(n)} < 0$, the second derivative of $P_{m,j}$ is positive for all $j$ and $n$. Therefore, $P_{m,j}$ is strictly convex for all $j$ and $t > 0$. It can then be concluded from (3.22) that $P_m$ is strictly convex over the set $\{\lambda : \lambda > 0\}$. Since $P_m\left(\lambda\right)$ is de-coupled, $P_{m,j}(t)$ is strictly convex, and $P_{m,j}(t) \to \infty$ as $t \to 0^+$, it follows that
\[
\lambda_j^{(n+1)} > 0 \quad \text{and} \quad \dot{P}_{m,j}\left(\lambda_j^{(n+1)}\right) = 0. \tag{3.29}
\]
Note that (3.29) ensures that the assumption $\lambda_j^{(n)} > 0$ holds for all $j$ and $n$.
Now that we have shown strict convexity of the majorizing function for the PML objective function, the minimization problem (P5) can easily be solved by computing the first derivative of $P_{m,j}$ and setting it to zero. Since $X_j^{(n)} < 0$ and $Y_j^{(n)} > 0$, the root of the resulting quadratic equation that satisfies the non-negativity constraint is
\[
\lambda_j^{(n+1)} = \frac{-Z_j^{(n)} + \sqrt{\left(Z_j^{(n)}\right)^2 - 8 X_j^{(n)} Y_j^{(n)}}}{4 Y_j^{(n)}}, \quad j = 1, 2, \ldots, J. \tag{3.30}
\]
Given a strictly positive initial estimate $\lambda^{(0)} > 0$, the PML algorithm can be summarized as follows:

For $n = 0, 1, 2, \ldots$
Step 1: Construct the majorizing function for $P$ from the current iterate $\lambda^{(n)}$ using (3.22)--(3.25).
Step 2: Compute $\lambda^{(n+1)}$ using (3.30).
Until the stopping criterion is satisfied.
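The closed-form update (3.30) is straightforward to implement once the coefficients $X_j^{(n)}$, $Y_j^{(n)}$, and $Z_j^{(n)}$ are available. The following is a minimal NumPy sketch, not the thesis Matlab code; the array values are hypothetical and assume only the signs established above, $X_j^{(n)} < 0$ and $Y_j^{(n)} > 0$:

```python
import numpy as np

def pml_pixel_update(X, Y, Z):
    """Element-wise minimizer of X*log(t) + Y*t**2 + Z*t over t > 0,
    i.e., the positive root in (3.30); assumes X < 0 and Y > 0."""
    return (-Z + np.sqrt(Z**2 - 8.0 * X * Y)) / (4.0 * Y)

# Hypothetical coefficients for three de-coupled pixel problems.
X = np.array([-3.0, -1.0, -0.5])
Y = np.array([2.0, 0.7, 1.2])
Z = np.array([0.5, -0.2, 1.0])
lam_next = pml_pixel_update(X, Y, Z)
# At the returned roots, the derivative X/t + 2*Y*t + Z vanishes.
```

Because (3.22) de-couples across pixels, the whole image update is a single vectorized operation.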
Chapter 4. Maximum Likelihood Blind Image Deconvolution Algorithm
In this chapter, we use the ML method to develop a BID algorithm for jointly estimating
the reflectance means and PSF coefficients.
From the discussion in Section 1.5.3, it follows that the negative log likelihood function under the assumption that the PSF coefficients are unknown is given by
\[
N(\lambda, h) \triangleq -\log \Pr\left[G = g \mid \lambda, h\right] \tag{4.1}
\]
\[
= -\sum_{i=1}^{I} g_i \log\left( \sum_{l=-L}^{L} h_l \lambda_{il} \right) + \sum_{i=1}^{I} \sum_{l=-L}^{L} h_l \lambda_{il} + \sum_{i=1}^{I} \log\left(g_i!\right) \tag{4.2}
\]
\[
= -\sum_{i=1}^{I} g_i \log \langle h, \lambda_i \rangle + \sum_{i=1}^{I} \langle h, \lambda_i \rangle + \sum_{i=1}^{I} \log\left(g_i!\right), \tag{4.3}
\]
where $\langle h, \lambda_i \rangle \triangleq \sum_{l=-L}^{L} h_l \lambda_{il}$. The optimization problem for obtaining ML estimates of the reflectance means and the PSF coefficients follows:
\[
\text{(P6)} \qquad \left(\hat{\lambda}, \hat{h}\right) = \arg\min_{\lambda > 0,\, h > 0} N(\lambda, h). \tag{4.4}
\]
To solve (P6), we use an alternating minimization approach that has been used by Holmes [18] and other researchers (see [21], for example). Given the current estimates of the reflectance means $\lambda^{(n)}$ and the PSF coefficients $h^{(n)}$, we obtain an improved estimate of the reflectance means by minimizing $N\left(\lambda, h^{(n)}\right)$ with respect to $\lambda$. The resulting estimate $\lambda^{(n+1)}$ is then used to obtain the next estimate of the PSF coefficients $h^{(n+1)}$ by minimizing $N\left(\lambda^{(n+1)}, h\right)$ with respect to $h$. Summarizing, the ML BID algorithm consists of the following steps:
Get initial estimates $\lambda^{(0)} > 0$ and $h^{(0)} > 0$.
\[
\text{Step 1.} \quad \lambda^{(n+1)} = \arg\min_{\lambda > 0} N\left(\lambda, h^{(n)}\right) \tag{4.5}
\]
\[
\text{Step 2.} \quad h^{(n+1)} = \arg\min_{h > 0} N\left(\lambda^{(n+1)}, h\right) \tag{4.6}
\]
Step 3. Iterate between Step 1 and Step 2 until the chosen convergence criterion is met.

The optimization problems in Steps 1 and 2 are difficult to solve. Therefore, for practical reasons, we replace Step 1 and Step 2 above with Step 1a and Step 2a, respectively:
\[
\text{Step 1a: Find } \lambda^{(n+1)} \geq 0 \text{ such that } N\left(\lambda^{(n+1)}, h^{(n)}\right) \leq N\left(\lambda^{(n)}, h^{(n)}\right) \tag{4.7}
\]
\[
\text{Step 2a: Find } h^{(n+1)} \geq 0 \text{ such that } N\left(\lambda^{(n+1)}, h^{(n+1)}\right) \leq N\left(\lambda^{(n+1)}, h^{(n)}\right) \tag{4.8}
\]
In Chapter 2, where the PSF coefficients were assumed to be known, we constructed a majorizing function $N_m$ that enabled us to determine iterates such that $N\left(\lambda^{(n+1)}\right) \leq N\left(\lambda^{(n)}\right)$. Thus, by viewing $h^{(n)}$ as the known PSF coefficients in Step 1a, it follows from (2.24) that a solution to Step 1a is
\[
\lambda_j^{(n+1)} = \lambda_j^{(n)} \frac{1}{\sum_{l=-L}^{L} h_l^{(n)}} \sum_{t=1}^{T} \frac{g_{a_{jt}} h_{b_{jt}}^{(n)}}{\langle h^{(n)}, \lambda_{a_{jt}}^{(n)} \rangle}, \quad j = 1, 2, \ldots, J. \tag{4.9}
\]
The focus of the remainder of this chapter is to determine a solution to Step 2a. Toward this end, we construct a majorizing function for the function $N\left(\lambda^{(n+1)}, \cdot\,\right)$ at an arbitrary point $h'$, where $h'$ is $(2L+1) \times 1$, that satisfies the following conditions:

(C4.1) $Q\left(\lambda^{(n+1)}, h, h'\right) \geq N\left(\lambda^{(n+1)}, h\right)$, for all $h > 0$

(C4.2) $Q\left(\lambda^{(n+1)}, h', h'\right) = N\left(\lambda^{(n+1)}, h'\right)$, for $h' > 0$.
From (4.1), $N\left(\lambda^{(n+1)}, h\right)$ can be expressed as
\[
N\left(\lambda^{(n+1)}, h\right) = -\sum_{i=1}^{I} g_i \log \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right) \tag{4.10}
\]
\[
= -\sum_{i=1}^{I} g_i \log\left( \sum_{l=-L}^{L} h_l \lambda_{il}^{(n+1)} \right) + \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right) \tag{4.11}
\]
\[
= -\sum_{i=1}^{I} g_i \log\left( \sum_{l=-L}^{L} \frac{h_l^{(n)} \lambda_{il}^{(n+1)}}{\langle h^{(n)}, \lambda_i^{(n+1)} \rangle} \cdot \frac{h_l \langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right) + \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right), \tag{4.12}
\]
where $\langle h, \lambda_i^{(n+1)} \rangle \triangleq \sum_{l=-L}^{L} h_l \lambda_{il}^{(n+1)}$, $\langle h^{(n)}, \lambda_i^{(n+1)} \rangle \triangleq \sum_{k=-L}^{L} h_k^{(n)} \lambda_{ik}^{(n+1)}$, $h^{(n)} \geq 0$, and $\lambda^{(n+1)} \geq 0$. Continuing, we write $N\left(\lambda^{(n+1)}, h\right)$ as
\[
N\left(\lambda^{(n+1)}, h\right) = -\sum_{i=1}^{I} g_i \log\left( \sum_{l=-L}^{L} d_{il}^{(n)} \frac{h_l \langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right) + \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right), \tag{4.13}
\]
where $d_{il}^{(n)} \triangleq h_l^{(n)} \lambda_{il}^{(n+1)} \Big/ \sum_{k=-L}^{L} h_k^{(n)} \lambda_{ik}^{(n+1)}$. Using the concavity of the logarithm [32, pp. 860--862] and the fact that $\sum_{l=-L}^{L} d_{il}^{(n)} = 1$, we obtain the following inequality:
\[
\log\left( \sum_{l=-L}^{L} d_{il}^{(n)} \frac{h_l \langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right) \geq \sum_{l=-L}^{L} d_{il}^{(n)} \log\left( \frac{h_l \langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right). \tag{4.14}
\]
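The inequality in (4.14) is an instance of Jensen's inequality for the concave logarithm. A small numerical illustration with made-up convex weights and positive arguments (standing in for the bracketed terms) confirms its direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Convex weights d_l (non-negative, summing to one) and positive arguments
# x_l, standing in for h_l <h^(n), lambda_i^(n+1)> / h_l^(n) in (4.14).
d = rng.random(7)
d /= d.sum()
x = rng.random(7) + 0.1

lhs = np.log(np.dot(d, x))   # log of the convex combination
rhs = np.dot(d, np.log(x))   # convex combination of the logs
# Concavity of log gives lhs >= rhs, with equality iff all x_l are equal.
```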
Applying the inequality in (4.14) to (4.13) yields the desired result:
\[
N\left(\lambda^{(n+1)}, h\right) \leq -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d_{il}^{(n)} \log\left( \frac{h_l \langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right) + \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right). \tag{4.15}
\]
We will now show that the function on the RHS of (4.15), which we denote by $Q\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$, is a majorizing function for the function $N\left(\lambda^{(n+1)}, \cdot\,\right)$ at the point $h^{(n)}$:
\[
Q\left(\lambda^{(n+1)}, h, h^{(n)}\right) \triangleq -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d_{il}^{(n)} \log\left( \frac{h_l \langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right) + \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right). \tag{4.16}
\]
By construction, it is clear that the function $Q\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$ satisfies (C4.1) for $\lambda^{(n+1)} \geq 0$ and $h^{(n)} \geq 0$. Next, we show that (C4.2) is also satisfied. Straightforward calculations lead to the result below:
\[
Q\left(\lambda^{(n+1)}, h^{(n)}, h^{(n)}\right) = -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d_{il}^{(n)} \log \langle h^{(n)}, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \sum_{l=-L}^{L} h_l^{(n)} \lambda_{il}^{(n+1)} + \sum_{i=1}^{I} \log\left(g_i!\right) \tag{4.17}
\]
\[
= -\sum_{i=1}^{I} g_i \log \langle h^{(n)}, \lambda_i^{(n+1)} \rangle \sum_{l=-L}^{L} d_{il}^{(n)} + \sum_{i=1}^{I} \langle h^{(n)}, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right). \tag{4.18}
\]
Using the fact that $\sum_{l=-L}^{L} d_{il}^{(n)} = 1$, it follows that
40
Q��.nC1/;h.n/;h.n/
�D �
IXiD1
gi log < h.n/;�.nC1/i >
CIX
iD1
< h.n/;�.nC1/i >C
IXiD1
log gi ! (4.19)
D N.�.nC1/;h.n//. (4.20)
Thus, the function Q��.nC1/; � ;h.n/� satisfies (C4.2).
At this point, we have developed a majorizing function for $N\left(\lambda^{(n+1)}, \cdot\,\right)$ using ideas from De Pierro's work [30, 31]. To obtain the next iterate $h^{(n+1)}$ defined in Step 2a, we must solve the following optimization problem:
\[
\text{(P7)} \qquad h^{(n+1)} = \arg\min_{h > 0} Q\left(\lambda^{(n+1)}, h, h^{(n)}\right). \tag{4.21}
\]
For $s = -L, -L+1, \ldots, L-1, L$, the derivative of the majorizing function $Q\left(\lambda^{(n+1)}, h, h^{(n)}\right)$ with respect to $h_s$ is given by
\[
\frac{\partial}{\partial h_s} Q\left(\lambda^{(n+1)}, h, h^{(n)}\right) = \frac{\partial}{\partial h_s} \left[ -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d_{il}^{(n)} \left( \log h_l + \log \frac{\langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right) \right] + \frac{\partial}{\partial h_s} \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \frac{\partial}{\partial h_s} \sum_{i=1}^{I} \log\left(g_i!\right) \tag{4.22}
\]
\[
= -\sum_{i=1}^{I} g_i d_{is}^{(n)} \frac{1}{h_s} + \sum_{i=1}^{I} \lambda_{is}^{(n+1)}. \tag{4.23}
\]
Ignoring the constraint $h \geq 0$ momentarily, a necessary condition for $h^{(n+1)}$ is
\[
\left. \frac{\partial}{\partial h_s} Q\left(\lambda^{(n+1)}, h, h^{(n)}\right) \right|_{h_s = h_s^{(n+1)}} = 0, \tag{4.24}
\]
or equivalently,
\[
-\sum_{i=1}^{I} g_i d_{is}^{(n)} \frac{1}{h_s^{(n+1)}} + \sum_{i=1}^{I} \lambda_{is}^{(n+1)} = 0. \tag{4.25}
\]
After replacing $d_{is}^{(n)}$ by $h_s^{(n)} \lambda_{is}^{(n+1)} / \langle h^{(n)}, \lambda_i^{(n+1)} \rangle$ in (4.25) and solving the resulting equation, a solution to Step 2a is given by
\[
h_s^{(n+1)} = h_s^{(n)} \frac{1}{\sum_{i=1}^{I} \lambda_{is}^{(n+1)}} \sum_{i=1}^{I} \frac{g_i \lambda_{is}^{(n+1)}}{\langle h^{(n)}, \lambda_i^{(n+1)} \rangle}, \quad s = -L, -L+1, \ldots, L-1, L. \tag{4.26}
\]
Since $g_i \geq 0$ for all $i$, it can be seen from (4.26) that $h_s^{(n+1)} \geq 0$ for all $s$ provided that $\lambda^{(0)} \geq 0$ and $h^{(0)} \geq 0$.
The ML BID algorithm can be summarized as follows:

Get initial estimates $\lambda^{(0)} > 0$ and $h^{(0)} > 0$.
For $n = 0, 1, 2, \ldots$
Step 1: Estimate $\lambda^{(n+1)}$ from $\lambda^{(n)}$ and $h^{(n)}$ using (4.9).
Step 2: Estimate $h^{(n+1)}$ from $\lambda^{(n+1)}$ and $h^{(n)}$ using (4.26).
Iterate between Steps 1 and 2 until the chosen convergence criterion is met.
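For intuition, the two multiplicative updates (4.9) and (4.26) can be exercised on a toy 1-D problem. The sketch below is an illustrative NumPy re-implementation under simplifying assumptions (circular convolution, a hypothetical length-3 PSF, noiseless data); it is not the thesis Matlab code. Both steps are EM-type updates, so the Poisson negative log likelihood cannot increase:

```python
import numpy as np

def cconv(a, b, n):
    """Circular convolution of length n via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)))

rng = np.random.default_rng(1)
n = 64
lam_true = rng.random(n) + 0.5           # hypothetical reflectance means
h_true = np.array([0.25, 0.5, 0.25])     # hypothetical true PSF
g = cconv(lam_true, h_true, n)           # noiseless blurred observation

lam = np.ones(n)                         # flat initial image estimate
h = np.ones(3) / 3.0                     # uniform initial PSF estimate

def nll(lam, h):
    # Poisson negative log likelihood up to the constant sum(log g_i!).
    mu = cconv(lam, h, n)
    return (mu - g * np.log(mu)).sum()

nll_start = nll(lam, h)
for _ in range(100):
    # Step 1 (cf. (4.9)): multiplicative update of the reflectance means.
    ratio = g / cconv(lam, h, n)
    back = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(np.fft.fft(h, n))))
    lam *= back / h.sum()
    # Step 2 (cf. (4.26)): multiplicative update of the PSF coefficients.
    ratio = g / cconv(lam, h, n)
    corr = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(np.fft.fft(lam, n))))
    h *= corr[:3] / lam.sum()
nll_end = nll(lam, h)
```

With noiseless data the fitted blur $h * \lambda$ approaches $g$, while the usual blind-deconvolution scale and shift ambiguities remain.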
Chapter 5. Penalized Maximum Likelihood Blind Image Deconvolution Algorithm
In this chapter, we present a BID algorithm where both the reflectance parameters and
PSF coefficients are estimated using the penalized maximum likelihood method.
As discussed in Chapter 4, the negative log likelihood function when the PSF coefficients are unknown is given by
\[
N(\lambda, h) \triangleq -\log \Pr\left[G = g \mid \lambda, h\right] \tag{5.1}
\]
\[
= -\sum_{i=1}^{I} g_i \log \langle h, \lambda_i \rangle + \sum_{i=1}^{I} \langle h, \lambda_i \rangle + \sum_{i=1}^{I} \log\left(g_i!\right). \tag{5.2}
\]
Unfortunately, the problem of minimizing $N(\lambda, h)$ with respect to $\lambda$ and $h$ is ill-posed. Therefore, we propose a penalized maximum likelihood method that incorporates penalty functions $L$ and $K$ for the reflectance parameters and PSF coefficients, respectively:
\[
\text{(P8)} \qquad \left(\hat{\lambda}, \hat{h}\right) = \arg\min_{\lambda > 0,\, h > 0} N(\lambda, h) + \beta L(\lambda) + \alpha K(h). \tag{5.3}
\]
The strictly positive constants $\beta$ and $\alpha$ are penalty parameters that control the influence of the penalty functions $L$ and $K$, respectively.
Given initial estimates $\lambda^{(0)} > 0$ and $h^{(0)} > 0$, an alternating minimization algorithm for solving the optimization problem (P8) is
\[
\text{Step 1.} \quad \lambda^{(n+1)} = \arg\min_{\lambda > 0} N\left(\lambda, h^{(n)}\right) + \beta L(\lambda) \tag{5.4}
\]
\[
\text{Step 2.} \quad h^{(n+1)} = \arg\min_{h > 0} N\left(\lambda^{(n+1)}, h\right) + \alpha K(h) \tag{5.5}
\]
Step 3. Iterate between Step 1 and Step 2 until the chosen convergence criterion is met.

For convenience, we define the objective function in the optimization problem of Step 2 as follows:
\[
R\left(\lambda^{(n+1)}, h\right) \triangleq N\left(\lambda^{(n+1)}, h\right) + \alpha K(h). \tag{5.6}
\]
Closed-form expressions for the solutions of the optimization problems in Steps 1 and 2 are unavailable. Consequently, we replace Step 1 and Step 2 with Step 1b and Step 2b, respectively:

Step 1b: Find $\lambda^{(n+1)}$ such that $N\left(\lambda^{(n+1)}, h^{(n)}\right) + \beta L\left(\lambda^{(n+1)}\right) \leq N\left(\lambda^{(n)}, h^{(n)}\right) + \beta L\left(\lambda^{(n)}\right)$

Step 2b: Find $h^{(n+1)}$ such that $R\left(\lambda^{(n+1)}, h^{(n+1)}\right) \leq R\left(\lambda^{(n+1)}, h^{(n)}\right)$. \hfill (5.7)

Exploiting the results in Chapter 3 and viewing $h^{(n)}$ as the known PSF coefficients in Step 1b, a solution to Step 1b is given by (3.30) with $h$ replaced by $h^{(n)}$.
5.1. Penalized Maximum Likelihood Algorithm for Estimating the Point Spread Function with a Unity DC Gain Penalty Function
In this section, we determine a solution to Step 2b when the penalty function $K$ is chosen so that the sum of the PSF coefficients approximately equals one. We choose $K$ in this way so that the sums of the observed and restored images are approximately equal; the implication is that the overall brightness of the observed and restored images is the same. We refer to the penalty function as the unity DC gain penalty function because $\sum_{l=-L}^{L} h_l = 1$ implies that the Fourier transform of the PSF at $\omega = 0$ equals one. Given the above discussion, the unity DC gain penalty function is defined as follows:
\[
K(h) \triangleq \left( \sum_{l=-L}^{L} h_l - 1 \right)^2. \tag{5.8}
\]
We will now find a majorizing function $R_m$ for the objective function $R\left(\lambda^{(n+1)}, \cdot\,\right)$ at the point $h^{(n)}$. Since $R\left(\lambda^{(n+1)}, \cdot\,\right)$ is the sum of the two functions $N\left(\lambda^{(n+1)}, \cdot\,\right)$ and $\alpha K$, we must find a majorizing function for each of them. Using the results in Chapter 4, a majorizing function for $N\left(\lambda^{(n+1)}, \cdot\,\right)$ at the point $h^{(n)}$ is given by
\[
Q\left(\lambda^{(n+1)}, h, h^{(n)}\right) \triangleq -\sum_{i=1}^{I} g_i \sum_{l=-L}^{L} d_{il}^{(n)} \log\left( \frac{h_l \langle h^{(n)}, \lambda_i^{(n+1)} \rangle}{h_l^{(n)}} \right) + \sum_{i=1}^{I} \langle h, \lambda_i^{(n+1)} \rangle + \sum_{i=1}^{I} \log\left(g_i!\right), \tag{5.9}
\]
where $d_{il}^{(n)} \triangleq h_l^{(n)} \lambda_{il}^{(n+1)} \Big/ \sum_{k=-L}^{L} h_k^{(n)} \lambda_{ik}^{(n+1)}$. Once we find a majorizing function $K_m$ for the penalty term $K$, a solution to Step 2b is the minimizer of the function $R_m\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$:
\[
\text{(P10)} \qquad h^{(n+1)} = \arg\min_{h > 0} R_m\left(\lambda^{(n+1)}, h, h^{(n)}\right), \tag{5.10}
\]
where the majorizing function for $R\left(\lambda^{(n+1)}, \cdot\,\right)$ at the point $h^{(n)}$ is defined as
\[
R_m\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right) \triangleq Q\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right) + \alpha K_m\left(\cdot\,, h^{(n)}\right). \tag{5.11}
\]
We will now start from the definition of $K$ and construct a majorizing function $K_m$ for the penalty function. We first express $K$ in the following manner:
\[
K(h) \triangleq \left( \sum_{l=-L}^{L} h_l - 1 \right)^2 \tag{5.12}
\]
\[
= \left( \sum_{l=-L}^{L} h_l \right)^2 - 2\bar{h} + 1, \tag{5.13}
\]
where $\bar{h} \triangleq \sum_{l=-L}^{L} h_l$. Now, it will be convenient to express the penalty function as
\[
K(h) = \left( \sum_{l=-L}^{L} \frac{h_l}{h_l^{(n)}} \left( \frac{h_l^{(n)} \bar{h}^{(n)}}{\bar{h}^{(n)}} \right) \right)^2 - 2\bar{h} + 1 \tag{5.14}
\]
\[
= \left( \sum_{l=-L}^{L} a_l^{(n)} \left( \frac{\bar{h}^{(n)}}{h_l^{(n)}} h_l \right) \right)^2 - 2\bar{h} + 1, \tag{5.15}
\]
where $\bar{h}^{(n)} \triangleq \sum_{k=-L}^{L} h_k^{(n)}$ and $a_l^{(n)} \triangleq h_l^{(n)} / \bar{h}^{(n)}$. Using the convexity of the square function and the fact that $\sum_{l=-L}^{L} a_l^{(n)} = 1$, the following inequality holds true:
\[
\left( \sum_{l=-L}^{L} a_l^{(n)} \left( \frac{\bar{h}^{(n)}}{h_l^{(n)}} h_l \right) \right)^2 \leq \sum_{l=-L}^{L} a_l^{(n)} \left( \frac{\bar{h}^{(n)}}{h_l^{(n)}} h_l \right)^2. \tag{5.16}
\]
Therefore, it follows that
\[
K(h) \leq \left(\bar{h}^{(n)}\right)^2 \sum_{l=-L}^{L} a_l^{(n)} \left( \frac{h_l}{h_l^{(n)}} \right)^2 - 2\bar{h} + 1. \tag{5.17}
\]
We will now show that the function on the RHS of (5.17) is a majorizing function for the penalty function $K$. Let $K_m\left(\cdot\,, h^{(n)}\right)$ denote the function on the RHS of (5.17):
\[
K_m\left(h, h^{(n)}\right) \triangleq \left(\bar{h}^{(n)}\right)^2 \sum_{l=-L}^{L} a_l^{(n)} \left( \frac{h_l}{h_l^{(n)}} \right)^2 - 2\bar{h} + 1. \tag{5.18}
\]
In order for $K_m$ to be a majorizing function for the penalty function, it must satisfy the following two conditions:

(C5.1) $K_m\left(h, h^{(n)}\right) \geq K(h)$, for all $h > 0$

(C5.2) $K_m\left(h^{(n)}, h^{(n)}\right) = K\left(h^{(n)}\right)$, for $h^{(n)} > 0$.

Clearly, by construction, (C5.1) is satisfied for $h^{(n)} \geq 0$. All that remains is to show that (C5.2) is satisfied as well. Since $\sum_{l=-L}^{L} a_l^{(n)} = 1$, evaluating $K_m\left(h, h^{(n)}\right)$ at $h^{(n)}$ yields the following expression:
\[
K_m\left(h^{(n)}, h^{(n)}\right) = \left(\bar{h}^{(n)}\right)^2 \sum_{l=-L}^{L} a_l^{(n)} \left( \frac{h_l^{(n)}}{h_l^{(n)}} \right)^2 - 2\bar{h}^{(n)} + 1 \tag{5.19}
\]
\[
= \left(\bar{h}^{(n)}\right)^2 - 2\bar{h}^{(n)} + 1 \tag{5.20}
\]
\[
= K\left(h^{(n)}\right). \tag{5.21}
\]
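Conditions (C5.1) and (C5.2) can also be spot-checked numerically. The sketch below uses arbitrary positive vectors (purely illustrative values, not thesis data) and verifies that $K_m\left(\cdot\,, h^{(n)}\right)$ touches $K$ at $h^{(n)}$ and lies above it elsewhere:

```python
import numpy as np

def K(h):
    # Unity DC gain penalty (5.8).
    return (h.sum() - 1.0) ** 2

def Km(h, hn):
    # Majorizer (5.18) built at the expansion point hn > 0.
    hbar_n = hn.sum()
    a = hn / hbar_n
    return hbar_n**2 * np.sum(a * (h / hn) ** 2) - 2.0 * h.sum() + 1.0

rng = np.random.default_rng(2)
hn = rng.random(5) + 0.1                 # expansion point, strictly positive
touch = Km(hn, hn) - K(hn)               # (C5.2): should be zero
trial = rng.random((200, 5)) + 0.01      # random positive test points
gaps = np.array([Km(h, hn) - K(h) for h in trial])   # (C5.1): should be >= 0
```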
Since both $Q$ and $K_m$ satisfy the requisite conditions for majorizing functions, it follows by construction that a majorizing function for $R$ is
\[
R_m\left(\lambda^{(n+1)}, h, h^{(n)}\right) \triangleq Q\left(\lambda^{(n+1)}, h, h^{(n)}\right) + \alpha K_m\left(h, h^{(n)}\right). \tag{5.22}
\]
The reason is that $R_m\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$ satisfies the following conditions:

(C5.3) $R_m\left(\lambda^{(n+1)}, h, h^{(n)}\right) \geq R\left(\lambda^{(n+1)}, h\right)$, for all $h > 0$

(C5.4) $R_m\left(\lambda^{(n+1)}, h^{(n)}, h^{(n)}\right) = R\left(\lambda^{(n+1)}, h^{(n)}\right)$, for $h^{(n)} > 0$.
All that remains now is to solve the minimization problem in (5.10). We will find the minimizer of $R_m\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$ by initially ignoring the constraint $h \geq 0$, computing its derivative with respect to $h_s$, and setting the result to zero. The derivative of $R_m\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$ with respect to $h_s$ can be written as
\[
\frac{\partial}{\partial h_s} R_m\left(\lambda^{(n+1)}, h, h^{(n)}\right) = \frac{\partial}{\partial h_s} Q\left(\lambda^{(n+1)}, h, h^{(n)}\right) + \alpha \frac{\partial}{\partial h_s} K_m\left(h, h^{(n)}\right). \tag{5.23}
\]
In Chapter 4, we showed that the derivative of the majorizing function $Q\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$ with respect to $h_s$ is given by (see (4.23))
\[
\frac{\partial}{\partial h_s} Q\left(\lambda^{(n+1)}, h, h^{(n)}\right) = -\sum_{i=1}^{I} g_i d_{is}^{(n)} \frac{1}{h_s} + \sum_{i=1}^{I} \lambda_{is}^{(n+1)}. \tag{5.24}
\]
Straightforward calculations lead to the following expression for the derivative of the majorizing function $K_m$ with respect to $h_s$:
\[
\frac{\partial}{\partial h_s} K_m\left(h, h^{(n)}\right) = \frac{\partial}{\partial h_s} \left[ \left(\bar{h}^{(n)}\right)^2 \sum_{l=-L}^{L} a_l^{(n)} \left( \frac{h_l}{h_l^{(n)}} \right)^2 - 2\bar{h} + 1 \right] \tag{5.25}
\]
\[
= \left(\bar{h}^{(n)}\right)^2 a_s^{(n)} \left( \frac{1}{h_s^{(n)}} \right)^2 2 h_s - 2. \tag{5.26}
\]
After replacing $a_s^{(n)}$ by $h_s^{(n)} / \bar{h}^{(n)}$, we arrive at the following:
\[
\frac{\partial}{\partial h_s} K_m\left(h, h^{(n)}\right) = \bar{h}^{(n)} \frac{2 h_s}{h_s^{(n)}} - 2. \tag{5.27}
\]
Substituting (5.24) and (5.27) into (5.23) yields the derivative of the majorizing function $R_m\left(\lambda^{(n+1)}, \cdot\,, h^{(n)}\right)$ with respect to $h_s$:
\[
\frac{\partial}{\partial h_s} R_m\left(\lambda^{(n+1)}, h, h^{(n)}\right) = -\sum_{i=1}^{I} g_i d_{is}^{(n)} \frac{1}{h_s} + \sum_{i=1}^{I} \lambda_{is}^{(n+1)} \tag{5.28}
\]
\[
\qquad + \left( \sum_{k=-L}^{L} h_k^{(n)} \right) \frac{2\alpha h_s}{h_s^{(n)}} - 2\alpha. \tag{5.29}
\]
Ignoring the constraint $h \geq 0$ momentarily, the necessary condition for the minimizer in (P10) is
\[
\left. \frac{\partial}{\partial h_s} R_m\left(\lambda^{(n+1)}, h, h^{(n)}\right) \right|_{h_s = h_s^{(n+1)}} = 0, \tag{5.30}
\]
or, equivalently,
\[
-\sum_{i=1}^{I} g_i d_{is}^{(n)} \frac{1}{h_s^{(n+1)}} + \sum_{i=1}^{I} \lambda_{is}^{(n+1)} + \left( \sum_{k=-L}^{L} h_k^{(n)} \right) \frac{2\alpha h_s^{(n+1)}}{h_s^{(n)}} - 2\alpha = 0. \tag{5.31}
\]
After replacing $d_{is}^{(n)}$ in (5.31) by $h_s^{(n)} \lambda_{is}^{(n+1)} \Big/ \sum_{l=-L}^{L} h_l^{(n)} \lambda_{il}^{(n+1)}$, we arrive at the following equivalent expression:
\[
\left( \frac{2\alpha \sum_{k=-L}^{L} h_k^{(n)}}{h_s^{(n)}} \right) \left( h_s^{(n+1)} \right)^2 + \left( \sum_{i=1}^{I} \lambda_{is}^{(n+1)} - 2\alpha \right) h_s^{(n+1)} - \sum_{i=1}^{I} g_i \left( \frac{h_s^{(n)} \lambda_{is}^{(n+1)}}{\sum_{l=-L}^{L} h_l^{(n)} \lambda_{il}^{(n+1)}} \right) = 0. \tag{5.32}
\]
For convenience, we introduce the following simplified notation:
\[
a \triangleq \frac{2\alpha \sum_{k=-L}^{L} h_k^{(n)}}{h_s^{(n)}}, \tag{5.33}
\]
\[
b \triangleq \sum_{i=1}^{I} \lambda_{is}^{(n+1)} - 2\alpha, \tag{5.34}
\]
\[
c \triangleq h_s^{(n)} \sum_{i=1}^{I} \left( \frac{-g_i \lambda_{is}^{(n+1)}}{\sum_{l=-L}^{L} h_l^{(n)} \lambda_{il}^{(n+1)}} \right). \tag{5.35}
\]
Considering the non-negativity constraint $h \geq 0$, we are only interested in the positive root of the quadratic equation in (5.32). For $h^{(0)} > 0$ and $\lambda^{(0)} > 0$, it follows that $a > 0$ and $c \leq 0$. Therefore, the next iterate for $h_s$ is given by
\[
h_s^{(n+1)} = \frac{-b + \sqrt{b^2 - 4ac}}{2a}, \quad s = -L, -L+1, \ldots, L-1, L. \tag{5.36}
\]
Summarizing, the proposed PML BID algorithm consists of the steps below:

Get initial estimates $\lambda^{(0)} > 0$ and $h^{(0)} > 0$.
For $n = 0, 1, 2, \ldots$
Step 1: Estimate $\lambda^{(n+1)}$ from $\lambda^{(n)}$ and $h^{(n)}$ using (3.30).
Step 2: Estimate $h^{(n+1)}$ from $\lambda^{(n+1)}$ and $h^{(n)}$ using (5.33)--(5.36).
Until the stopping criterion is satisfied.
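The PSF step amounts to solving, for each $s$, the scalar quadratic (5.32) with the coefficients (5.33)-(5.35) and keeping the positive root (5.36). A minimal NumPy sketch of this step follows (hypothetical sizes and values, not the thesis Matlab code):

```python
import numpy as np

def pml_psf_update(h_n, lam, g, alpha):
    """One PML PSF update: for each s, form a, b, c of (5.33)-(5.35) and take
    the positive root (5.36) of the quadratic (5.32). lam is an I x (2L+1)
    array whose column s holds lambda_{is}^{(n+1)}."""
    denom = lam @ h_n                    # <h^(n), lambda_i^(n+1)> per pixel i
    h_next = np.empty_like(h_n)
    for s in range(h_n.size):
        a = 2.0 * alpha * h_n.sum() / h_n[s]
        b = lam[:, s].sum() - 2.0 * alpha
        c = -h_n[s] * np.sum(g * lam[:, s] / denom)
        h_next[s] = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return h_next

# Tiny made-up example: I = 4 pixels, 2L + 1 = 3 PSF coefficients.
rng = np.random.default_rng(3)
lam = rng.random((4, 3)) + 0.5
h_n = np.array([0.2, 0.5, 0.3])
g = rng.random(4) * 5.0 + 1.0
h_next = pml_psf_update(h_n, lam, g, alpha=1.0)
# Since a > 0 and c < 0 here, each root is strictly positive.
```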
Chapter 6. Experimental Study and Simulation Results
The algorithms developed in Chapters 2, 3, 4, and 5 were implemented in Matlab, and their performance was assessed using images with real and synthetic blur. To test the image deconvolution algorithms of Chapters 2 and 3, we convolved the well-known Lena image with a lowpass filter. A landmine field image as well as an image of the spiral galaxy Messier 61 (NGC 4303) were used in the experiments to test the BID algorithms in Chapters 4 and 5. We now present the simulation results for each of the algorithms developed.
6.1. Maximum Likelihood Image Deconvolution
The MLEM algorithm developed in Chapter 2 was applied to Lena, an image widely used to test image processing algorithms. In this experiment, the true PSF coefficients are assumed to be known, and the problem is to estimate the reflectance means $\{\lambda_i\}$. To produce the observed image, Lena was blurred with a known $5 \times 5$ Gaussian PSF with $\sigma = 3$, and Poisson noise was then applied. The MLEM algorithm produces iterates that become noisier and noisier with increasing iterations. Figure 6.1 shows the result of the MLEM algorithm after applying it to the blurred Lena image. Figures 6.1 (a) and (b) show the true and observed images, respectively. Observing Figures 6.1 (c)-(h), it can be seen that the best quality reconstruction occurs at the 10th iteration. After the 10th iteration, the ML estimates become noisier and noisier.
Figure 6.1. MLEM results: (a) true image, (b) blurred image with Poisson noise, (c) MLEM result at 10th iteration, (d) MLEM result at 20th iteration, (e) MLEM result at 30th iteration, (f) MLEM result at 40th iteration, (g) MLEM result at 50th iteration, and (h) MLEM result at 60th iteration.
In Figure 6.2 (d), we show the plot of the ML objective function, which is the negative log likelihood function. The true, observed, and restored images are shown in Figures 6.2 (a), (b), and (c), respectively. As mathematically proven in Chapter 2, the objective function monotonically decreases with increasing iterations.
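This monotonic decrease is easy to reproduce on a toy 1-D problem. The following sketch is illustrative NumPy code under a circular-convolution assumption, not the thesis Matlab experiments; it runs MLEM with a known PSF and records the Poisson negative log likelihood at each iteration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
h = np.array([0.25, 0.5, 0.25])          # known PSF (hypothetical values)
lam_true = rng.random(n) * 10.0 + 1.0    # hypothetical true reflectance means

def blur(x):
    # Circular convolution with h via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)))

g = rng.poisson(blur(lam_true)).astype(float)   # Poisson-noisy observation

def nll(lam):
    # Poisson negative log likelihood up to the constant sum(log g_i!).
    mu = blur(lam)
    return (mu - g * np.log(mu)).sum()

lam = np.ones(n)
history = [nll(lam)]
for _ in range(50):
    ratio = g / blur(lam)
    # Correlation with h (the adjoint of the blur), as in the MLEM update.
    back = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(np.fft.fft(h, n))))
    lam *= back / h.sum()
    history.append(nll(lam))
# history is non-increasing: each MLEM step cannot increase the objective.
```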
Figure 6.2. MLEM results: (a) true image, (b) blurred image with Poisson noise, (c) MLEM result at the 10th iteration, and (d) objective function plot ($\times 10^7$).
We now investigate the performance of the MLEM algorithm when it is applied to the blurred Lena image alone (i.e., with no noise). The true image, noiseless blurred image, and 20th MLEM iterate are shown in Figures 6.3 (a), (b), and (c), respectively. As expected, the MLEM algorithm is more effective when it is applied to blurred images without Poisson noise. As in the noisy case, the plot of the negative log likelihood function shown in Figure 6.3 (d) decreases monotonically under the MLEM algorithm.
We see from Figure 6.4 that the MLEM algorithm did not produce noisy iterates when Poisson noise was absent. Figures 6.4 (a) and (b) show the true and observed Lena images, respectively, whereas Figures 6.4 (c)-(h) show the result of the MLEM algorithm at different iterations.
Figure 6.3. MLEM results: (a) true image, (b) blurred image without Poisson noise, (c) MLEM result at 20th iteration, and (d) objective function plot ($\times 10^7$).
Figure 6.4. MLEM results: (a) true image, (b) blurred image without Poisson noise, (c) MLEM result at 10th iteration, (d) MLEM result at 20th iteration, (e) MLEM result at 30th iteration, (f) MLEM result at 40th iteration, (g) MLEM result at 50th iteration, and (h) MLEM result at 60th iteration.
6.2. Penalized Maximum Likelihood Image Deconvolution
In Chapter 3, a PML algorithm with a class of separable majorizing functions was presented for estimating the reflectance means $\{\lambda_i\}$ under the assumption that the PSF coefficients are known. We now present the simulation results of the PML algorithm applied to the same image used in Section 6.1. The true and blurred images are shown in Figures 6.5 (a) and (b), respectively, whereas Figure 6.5 (c) shows the restored image at convergence. Note that the convergence criterion is defined by the user. In this and all PML based algorithms to follow, we stop the algorithm once the difference between the mean square errors of two successive iterates falls below $\epsilon = 10^{-5}$. As seen from Figure 6.5 (d), the PML objective function monotonically decreases with increasing iterations. The synthetic image in Figure 6.6 (a) was blurred without Poisson
Figure 6.5. PML results: (a) true image, (b) blurred image with Poisson noise, (c) PML result at 129th iteration, $\beta = 500$, and (d) objective function plot ($\times 10^7$).
noise (see Figure 6.6 (b)). The blurred image was then restored using the PML algorithm. The optimal $\beta$ value for the noiseless case was found to be $\beta = 20$. Figure 6.6 (c) shows the reconstructed image at convergence. As expected, the objective function shown in Figure 6.6 (d) decreases monotonically.
Figure 6.6. PML results: (a) true image, (b) blurred image without Poisson noise, (c) PML result at 60th iteration, $\beta = 20$, and (d) objective function plot ($\times 10^7$).
The choice of $\beta$ dictates the outcome of the PML algorithm. Since $\beta$ controls the influence of the penalty function $L$ on the PML objective function $P$, using too large a $\beta$ value makes the algorithm give heavier weight to the neighbor-smoothing penalty function $L$ rather than to minimizing the negative log likelihood function. The result is an image smoothed so much that it loses some contrast information. Such an image is shown in Figure 6.7 (e). Figures 6.7 (a) and (b) show the true and observed images, respectively. Conversely, by using too small a $\beta$ value, we make the algorithm give little weight to the penalty term and focus on minimizing the negative log likelihood function. Note that the smaller the $\beta$ value, the closer the PML algorithm gets to the MLEM algorithm, where we encountered the noise build-up problem. Figure 6.7 (f) shows a noisy image reconstructed using too small a $\beta$ value. The most important question, then, is how to determine the penalty parameter value $\beta \geq 0$ that balances image deconvolution against noise reduction. To date, no theoretical method for determining the optimal $\beta$ value is available; therefore, $\beta$ must be determined by experiment. After trying several $\beta$ values, the optimal value that balances image deconvolution against noise reduction was found to be $\beta = 500$ for the synthetic image with Poisson noise and $\beta = 20$ for the noiseless case. The reconstructed image with the optimal $\beta$ value and the plot of its objective function are shown in Figures 6.7 (c) and (d), respectively, for the noisy image. Also, note that the number of iterations required for convergence differs with the $\beta$ value. In the case where too large a value, $\beta = 1000$, was used for the noisy image, 87 iterations were enough to converge. The required number of iterations was 129 for the optimal value of $\beta = 500$. In contrast, 322 iterations were required for the noisy image in Figure 6.7 (f) to converge when $\beta = 5$ was used, a value too small for the algorithm to give the penalty enough weight to suppress the noise in the reconstructed image. A comparison of different $\beta$ values was also carried out for the case where the blurred image has no Poisson noise. The reconstructed image with the optimal value, $\beta = 20$, and the plot of its objective function are shown in Figures 6.8 (c) and (d), respectively, for the noiseless blurred image. Figure 6.8 (b) shows the blurred image without Poisson noise, whereas Figures 6.8 (e) and (f) show PML images reconstructed with too large and too small $\beta$ values, respectively. The true image is shown in Figure 6.8 (a). Note that for the case where $\beta$ was too small, the reconstructed image resembles that of the MLEM algorithm. Recall from Section 6.1 that MLEM reconstruction was not as bad for noiseless blurred images as it was for blurred images with Poisson noise.
Figure 6.7. PML deblurred images with different $\beta$ values and plot of the objective function for the noisy observed image: (a) true image, (b) blurred image with Poisson noise, (c) PML result at 129th iteration with $\beta = 500$, (d) objective function plot, (e) PML result at 87th iteration with $\beta = 1000$, and (f) PML result at 322nd iteration with $\beta = 5$.
Figure 6.8. PML deblurred images with different $\beta$ values and plot of the objective function for the noiseless observed image: (a) true image, (b) blurred image without Poisson noise, (c) PML result at 60th iteration with $\beta = 20$, (d) objective function plot, (e) PML result at 47th iteration with $\beta = 200$, and (f) PML result at 55th iteration with $\beta = 0.002$.
6.3. Maximum Likelihood Blind Image Deconvolution
In Chapter 4, an alternating BID algorithm was developed to jointly estimate the reflectance means $\{\lambda_i\}$ and the PSF coefficients $\{h_l\}$ based on the ML estimation method without incorporating any penalty function. Although the alternating BID algorithm estimates both $\{\lambda_i\}$ and $\{h_l\}$ at the same time, the noise build-up problem of the ML estimation method still exists. Figure 6.9 shows the different iterates of the ML BID algorithm. Figures 6.9 (a), (b), (g), and (h) show the true image, the observed image, the initial PSF, and the estimated PSF, respectively.

It is obvious from the images in Figures 6.9 (c)-(f) that the noise build-up is not any better than with the MLEM algorithm and a known PSF. However, the one advantage of the ML BID algorithm is that we do not need to know the actual PSF responsible for the blur. Instead, we can simply start the algorithm with a uniform PSF as the initial guess. Nevertheless, the closer the initial guess is to the true PSF, the better the final ML estimate will be. Figure 6.10 shows the result of the ML BID algorithm applied to a real landmine field image (HYDICE image) obtained from the Countermine Division of the Night Vision and Electronic Sensors Directorate in Fort Belvoir, Virginia. Visual comparison of the observed image with the reconstructed image at the 50th iteration shows that the reconstructed image in Figure 6.10 (f) reveals fewer mine signatures than the observed image in Figure 6.10 (a). The reason for the disappearance of the small mine signatures is that the noise build-up covers up the contrast of the mine signatures. Observing Figures 6.10 (b) through (f), it is evident that the mine signatures disappear with increasing iterations. Figures 6.10 (g) and (h) depict the initial guess and the final estimate of the PSF.
Figure 6.9. ML BID results for an image blurred with Poisson noise: (a) true image, (b) blurred image with Poisson noise, (c) blind deblurred at 10th iteration, (d) blind deblurred at 20th iteration, (e) blind deblurred at 30th iteration, (f) blind deblurred at 40th iteration, (g) starting PSF, and (h) estimated PSF (sum = 1.3).
Figure 6.10. ML BID results for landmine field image: (a) observed image, (b) blind deblurred at 10th iteration, (c) blind deblurred at 20th iteration, (d) blind deblurred at 30th iteration, (e) blind deblurred at 40th iteration, (f) blind deblurred at 50th iteration, (g) starting uniform PSF, and (h) estimated PSF (sum = 1.002).
6.4. Penalized Maximum Likelihood Blind Image Deconvolution
A PML based alternating BID algorithm was developed in Chapter 5 for the joint estimation of the reflectance means $\{\lambda_i\}$ and the PSF coefficients $\{h_l\}$. In Section 6.2, we identified $\beta = 500$ as the optimal penalty parameter value for the neighbor-smoothing penalty function $L$ with the noisy blurred synthetic image. In this section, we use $\beta = 500$ for the PML BID experiment with the synthetic data. For the landmine and astronomical images, the optimal $\beta$ values were experimentally determined to be $\beta = 100$ and $\beta = 400$, respectively. Together with $\beta$, the penalty parameter values $\alpha = 1000$, $\alpha = 0.1$, and $\alpha = 0.0001$ for the synthetic, landmine, and astronomical images, respectively, were used to control the influence of the unity DC gain penalty function $K$ on the PML minimization problem. Again, just like $\beta$, $\alpha$ was determined by experiment. Figures 6.11 (a) and (b) show the true and observed images. Also, the plot of the initial flat PSF and the final estimate of the PSF are shown in Figures 6.11 (e) and (f), respectively. It is obvious that the image in Figure 6.11 (c), which is restored blindly without knowing the blurring PSF, is almost identical to the image in Figure 6.5 (c), which was deblurred with the true PSF. The plot of the monotonically decreasing objective function is shown in Figure 6.11 (d).

The PML based alternating BID algorithm was also tested using the real landmine image (HYDICE image) shown in Figure 6.12 (a). As shown in Figure 6.12 (c), the algorithm did not lead to improved contrast as hoped. We believe this is because the observed image did not have significant blur and because the intensities of the pixels with small mine signatures are very close to the pixel intensities of their background. Furthermore, the unknown blurring PSF might be a reducible Gaussian PSF. Figure 6.12 (b) shows the result of using too small a $\beta$ value, in which case the deblurred image is much noisier, with lost contrast, compared to the blurred/observed image in (a). Also, when too large a penalty parameter, $\beta = 5000$, was used to deblur the landmine image, Figure 6.12 (e) reveals a highly smoothed image in which most of the contrast is lost because of too much smoothing due to the penalty function. The image in Figure 6.12 (c) is deblurred with
Figure 6.11. PML BID results for an image blurred with Poisson noise: (a) true image, (b) blurred image with Poisson noise, (c) PML result at 110th iteration with β = 500 and α = 1000, (d) objective function, (e) starting uniform PSF, and (f) estimated PSF (Sum = 1.002).
the optimal β value, which was found to be 100. The plot of the objective function using the optimal β value is shown in Figure 6.12 (d), whereas (e) of the same figure shows another restored image with too large a β value.
We claim that the PML BID algorithm developed in Chapter 5 works at least as well for astronomical images captured with CCD cameras as it does for a spectral plane of hyperspectral images. To demonstrate this, we present, in Figure 6.13 (a), a blurred/observed image of the bright spiral galaxy Messier 61 (NGC 4303) taken by NASA's Hubble Space Telescope, and, in Figure 6.13 (c), the image restored using the optimal β value, along with the plot of the monotonically decreasing objective function in Figure 6.13 (d). It is evident from the images that some of the small or hazy stars that do not show in Figure 6.13 (a) appear in Figure 6.13 (c). Moreover, the restored image appears to have less noise and interstellar dust than the observed image. The smallest α value that keeps the sum of the PSF at unity was found by experiment to be α = 0.0001 for the astronomical image, and the optimal β value for the same data was found to be β = 400. In Figure 6.13 (f), we show the galaxy image restored with too small a β value (β = 4), demonstrating that the farther β falls below the optimal value, the noisier the restored image becomes. In contrast, Figure 6.13 (e) illustrates the image deblurred with too large a β value (β = 4000). The restored image in Figure 6.13 (e), and even more so in Figure 6.13 (b), is smoothed so heavily that its contrast information is lost.
Figure 6.12. PML BID results for landmine field image with α = 0.1: (a) observed image, (b) blind PML result with β = 5000 after 97 iterations, (c) blind PML result with β = 100 after 119 iterations, (d) objective function plot when β = 100 (optimal β value), (e) blind PML result with β = 1000 after 88 iterations, and (f) blind PML result with β = 5 after 100 iterations.
Figure 6.13. PML BID results for the bright spiral galaxy Messier 61 (NGC 4303) in the Virgo cluster with unity DC gain penalty function when α = 0.0001: (a) observed image, (b) blind PML result with β = 400000 after 80 iterations, (c) blind PML result with β = 400 after 99 iterations, (d) objective function plot when β = 400 (optimal β value), (e) blind PML result with β = 4000 after 67 iterations, and (f) blind PML result with β = 4 after 101 iterations.
Chapter 7. Conclusions and Future Work
7.1. Conclusions
In Chapter 2, we derived a modification of Shepp and Vardi's MLEM algorithm for estimating the reflectance means {λ_j} under the assumption that the PSF coefficients {h_l} are known. The optimization problem that must be solved to obtain the desired ML estimates is ill-posed, and the negative log-likelihood function is convex but not strictly convex. As a result, after a certain number of iterations, the reconstructed images become progressively noisier with increasing iterations.
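This noise build-up is easy to reproduce with a minimal 1-D NumPy sketch of a Shepp-Vardi style multiplicative update for a known PSF. This is an illustrative reimplementation, not the thesis code (which operates on 2-D reflect-extended images); the function and variable names here are ours:

```python
import numpy as np

def mlem_deconvolve(y, h, n_iter):
    """Multiplicative ML-EM update for Poisson data y ~ Poisson(h * lam).

    y: observed 1-D signal; h: known PSF with unit sum.
    Each iteration multiplies the current estimate by the back-projected
    ratio of the data to the current blurred estimate.
    """
    lam = np.full_like(y, y.mean(), dtype=float)  # flat initial estimate
    h_flip = h[::-1]                              # adjoint of the convolution
    for _ in range(n_iter):
        blurred = np.convolve(lam, h, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)    # guard against division by zero
        lam *= np.convolve(ratio, h_flip, mode="same")  # h sums to 1, so no renormalization
    return lam
```

With noiseless data the iterates sharpen toward the true reflectance; with noisy y they sharpen for the first iterations and then grow progressively noisier, which is exactly the behavior described above.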
The second algorithm presented was the PML algorithm developed in Chapter 3 for estimating the reflectance means {λ_j} under the assumption that the PSF coefficients {h_l} are known. The PML algorithm can incorporate a class of penalty functions of the form

    L(\lambda) = \sum_{i=1}^{I} \sum_{k \in B_i} w_{ik}\, \phi(\lambda_i - \lambda_k),    (7.1)

where the function φ satisfies assumptions (AS1)-(AS6). The penalty functions are designed so that the estimates of neighboring pixels are forced to be similar in value unless there is an edge within the neighborhood. Our choice for φ was φ(t) = log cosh(t).
As shown in Section 6.2, the PML algorithm, unlike the MLEM algorithm, can produce reconstructed images without excessive noise. However, the biggest challenge lies in choosing the penalty parameter β. To date, no one has developed a theoretical way to choose the penalty parameter; thus, in practice, it must be obtained experimentally through trial and error.
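The penalty (7.1) with φ(t) = log cosh(t) can be evaluated over an 8-neighborhood as in the following NumPy sketch. The weights are an assumption for illustration (unit weight for horizontal/vertical pairs, √0.5 for diagonal pairs, a common convention); the thesis's exact w_ik are defined in Chapter 3:

```python
import numpy as np

def logcosh_penalty(lam, w_diag=np.sqrt(0.5)):
    """L(lam) = sum over neighbor pairs of w_ik * log(cosh(lam_i - lam_k)).

    Each unordered pair is counted once here; the double sum in (7.1)
    counts every pair twice, so scale by 2 for that convention.
    """
    # Overflow-safe log(cosh(t)) = logaddexp(t, -t) - log(2)
    phi = lambda t: np.logaddexp(t, -t) - np.log(2.0)
    total = 0.0
    # Horizontal and vertical neighbors (weight 1, assumed)
    total += phi(lam[:, 1:] - lam[:, :-1]).sum()
    total += phi(lam[1:, :] - lam[:-1, :]).sum()
    # Diagonal neighbors (weight sqrt(0.5), assumed)
    total += w_diag * phi(lam[1:, 1:] - lam[:-1, :-1]).sum()
    total += w_diag * phi(lam[1:, :-1] - lam[:-1, 1:]).sum()
    return total
```

Because φ grows roughly linearly for large |t| (rather than quadratically), large neighbor differences at true edges are penalized less severely than they would be by a quadratic penalty, which is what preserves edges while suppressing noise.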
In Chapter 4, we derived an ML BID algorithm that uses an alternating minimization technique to jointly estimate the reflectance means {λ_j} and the PSF coefficients {h_l}. Like the ML image deconvolution algorithm discussed in Chapter 2, the ML BID algorithm produces images that become noisier with increasing iterations. Moreover, the reducible nature of the Gaussian PSF implies that the ML BID algorithm cannot be used for Gaussian PSFs.
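The alternating structure can be sketched in 1-D with two interleaved multiplicative updates, one for the image with the PSF fixed and one for the PSF with the image fixed, renormalizing the PSF to unit sum each pass. This is an illustrative sketch under our own naming and centering conventions, not the thesis implementation:

```python
import numpy as np

def ml_bid(y, psf_size, n_outer=30):
    """Alternating ML blind deconvolution sketch (1-D, illustrative only)."""
    c = psf_size // 2
    lam = y.astype(float).copy()           # image estimate (nonnegative)
    h = np.full(psf_size, 1.0 / psf_size)  # flat initial PSF
    for _ in range(n_outer):
        # Image step: Richardson-Lucy style update with the current PSF fixed.
        blurred = np.convolve(lam, h, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)
        lam *= np.convolve(ratio, h[::-1], mode="same")
        # PSF step with the image fixed: correlate the data ratio with the image.
        blurred = np.convolve(lam, h, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)
        cc = np.correlate(ratio, lam, mode="full")
        h *= cc[len(y) - 1 - c: len(y) + c] / lam.sum()  # lags -c..c
        h /= h.sum()                       # enforce unit DC gain
    return lam, h
```

Both updates preserve nonnegativity because they only multiply nonnegative quantities, which mirrors the nonnegativity property of the algorithms derived in Chapters 4 and 5.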
Finally, a PML BID algorithm was developed in Chapter 5. In Section 6.4, the algorithm was shown to work as well as the PML algorithm with known PSF while, at the same time, estimating the blurring PSF using the unity DC gain penalty function together with its penalty parameter α. The limitation due to the reducible nature of a Gaussian PSF still applies to the PML BID algorithm, and the challenge of obtaining optimal β and α values remains unsolved. In simulation studies, the PML BID algorithm was shown to address the noise build-up problem of the ML BID algorithm. In summary,
1. The blind PML algorithm produces nonnegative estimates.
2. The blind PML algorithm monotonically decreases the PML objective function with increasing iterations.
3. The blind PML algorithm can incorporate a class of penalty functions for noise suppression.
4. The performance of the PSF estimation indicates that additional a priori information is needed.
5. Preliminary experiments suggest that the blind PML algorithm is promising.
7.2. Future Work
A weakness of the blind PML algorithm is that it produces poor estimates of the PSF coefficients. Recently, Bioucas-Dias et al. [41] developed a nonseparable majorizing function for the total variation penalty function, which has been used successfully in a number of estimation problems (see [42] for an example). Therefore, in the future, we intend to incorporate the total variation penalty function into the blind PML algorithm. A separable majorizing function for the total variation can be constructed by combining Bioucas-Dias's result with Huber's [1] majorizing function for quadratic functions.
Additionally, we plan to:
1. Perform additional experiments to assess the performance of the deconvolution algo-
rithms.
2. Investigate approaches for automatically determining penalty parameters ˇ and ˛
from the data.
3. Asses if the algorithms developed in this thesis could improve anomaly detection
algorithms that identity landmines in hyperspectral images.
4. Determine more accurate initial estimate for the PSF coefficients.
5. Investigate approaches for estimating the spatial extent of unknown PSF.
Appendix
Matlab Source Code for PML Blind Image Deconvolution Algorithm
clear all                                 % Clear all variables
load lamd.mat;                            % Load blurred image
psfsize=5;                                % Size of deblurring PSF
sigvalue=5;
psf=ones(psfsize,psfsize)./(psfsize.^2);  % Initial guess for the deblurring PSF
Bigiterations=500;   % Max number of iterations to run if the convergence criterion is not met early
smallNumber=0.0000000001;
alpha=1;             % Parameter for unity DC gain penalty function
weight=sqrt(0.5);
beta=500;            % Parameter for log(cosh) neighbor smoothing penalty function
delta=1;             % Scale of the log(cosh) penalty (value assumed; not legible in the original listing)
converge=0;          % Convergence flag
input=im(129:640,1:512);   % Take 512x512 part of the blurred image
[M N]=size(input);
Npixels=M*N;         % Number of pixels in the lexicographically ordered image
[n1 n2]=size(psf);   % Size of PSF
inputimg=input;
blrd=inputimg;
origi=reshape(input,Npixels,1);

% Reflect-extend the input image
for i=1:M,
  for j=1:N,
    locmatrix(i,j)=(i-1)*N+j;
  end,
end;
extndlocmatrix=zeros(M+(n1-1),N+(n2-1));
extndlocmatrix((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2)=locmatrix;

% Vertical extension
extndlocmatrix(1:(n1-1)/2,:)=flipdim(extndlocmatrix((n1-1)/2+1:n1-1,:),1);
extndlocmatrix(M+(n1-1)/2+1:M+(n1-1),:)=flipdim(extndlocmatrix(M+1:M+(n1-1)-(n1-1)/2,:),1);

% Horizontal extension
extndlocmatrix(:,1:(n2-1)/2)=flipdim(extndlocmatrix(:,(n2-1)/2+1:n2-1),2);
extndlocmatrix(:,N+(n2-1)/2+1:N+(n2-1))=flipdim(extndlocmatrix(:,N+1:N+(n2-1)-(n2-1)/2),2);
reference=reshape(extndlocmatrix',size(extndlocmatrix,1)*size(extndlocmatrix,2),1);
data=reshape(inputimg,Npixels,1);  % data is in the format needed by the program
originaldata=data;
mlOutput=inputimg;  % Deblurred image is the initial image at the start

[m n]=size(extndlocmatrix);      % Size of the extended image
numcoefficients=n1*n2;           % Number of PSF coefficients
dumm=(n1-1)/2;
dumn=(n2-1)/2;
numpixels=m*n;                   % Number of pixels in the extended image
repPSF=zeros(numpixels,numcoefficients);    % Matrix of probabilities
repIndex=zeros(numpixels,numcoefficients);  % Matrix of pixels
outputimg=zeros(m,n);
flatImg=ones(m,n);
onesPSF=ones(n1,n2);
weights=conv2(flatImg,onesPSF,'same');
for i=0:m-1,
  for j=0:n-1,
    SUM=0;
    counter=0;
    outpixelnum=(i*n+j)+1;  % Pixel number of current output image
    if (weights(i+1,j+1)==numcoefficients),
      for k1=-dumm:dumm,
        for k2=-dumn:dumn,
          val1=i-k1;
          val2=j-k2;
          if val1>=0 & val1<=m-1 & val2>=0 & val2<=n-1,
            counter=counter+1;
            index_k1=k1+dumm+1;  % +1 terms are due to Matlab's array convention
            index_k2=k2+dumn+1;
            index_val1=val1+1;
            index_val2=val2+1;
            inpixelnum=(val1*n+val2)+1;
            repPSF(outpixelnum,counter)=psf(index_k1,index_k2);
            repIndex(outpixelnum,counter)=reference(inpixelnum);
          end
        end
      end
    end
  end
end
Pn=reshape(repPSF,m,n,numcoefficients);
PT=Pn((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2,1:numcoefficients);
P=reshape(PT,M*N,numcoefficients);
Vn=reshape(repIndex,m,n,numcoefficients);
VT=Vn((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2,1:numcoefficients);
V=reshape(VT,M*N,numcoefficients);

for outerloop=1:Bigiterations,  % Start the main iteration
  if(converge==1),  % Don't do anything if the PML algorithm has converged
    continue,
  end;
  inputimg=mlOutput;
  gausspsf=customgaussian(psfsize,sigvalue);  % Blurring (customgaussian is a helper function)
  sigmavalue(outerloop)=sigvalue;
  Zs=find(inputimg==0);
  inputimg(Zs)=smallNumber;  % If there is a zero value in the starting image, replace it with smallNumber
  sm(outerloop)=sum(sum(psf));
  smx(outerloop)=sum(sum(mlOutput));
  niterations=1;  % Number of sub-iterations to run for estimating reflectance parameters
  blurryimg=inputimg;
  [m n]=size(blurryimg);
  Npixels=m*n;
  innerProduct=zeros(Npixels,1);  % Inner product of PSF and certain reflectance values

  % Get initial estimate
  x=reshape(blurryimg,Npixels,1);  % x is in the format needed by the program
  meanval=mean(blurryimg(:));

  % Get scale factor so that sum = 1
  scale=sum(psf);  % Update term of ML algorithm

  % Reflect-extend the current image
  TMPextended=zeros(M+(n1-1),N+(n2-1));
  TMPextended((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2)=blurryimg;

  % Vertical extension
  TMPextended(1:(n1-1)/2,:)=flipdim(TMPextended((n1-1)/2+1:n1-1,:),1);
  TMPextended(M+(n1-1)/2+1:M+(n1-1),:)=flipdim(TMPextended(M+1:M+(n1-1)-(n1-1)/2,:),1);

  % Horizontal extension
  TMPextended(:,1:(n2-1)/2)=flipdim(TMPextended(:,(n2-1)/2+1:n2-1),2);
  TMPextended(:,N+(n2-1)/2+1:N+(n2-1))=flipdim(TMPextended(:,N+1:N+(n2-1)-(n2-1)/2),2);
  input=TMPextended((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2);  % Extract the non-extended part
  innerWide=conv2(TMPextended,psf,'same');
  innerProduct=reshape(innerWide((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2),Npixels,1);
  cost(outerloop)=sum(innerProduct-(originaldata.*log(innerProduct)));  % Cost function
  for i=1:niterations,  % Estimate reflectance parameters
    update=zeros(Npixels,1);  % Update term of the ML algorithm
    % Compute update term
    for tnum=1:Npixels,
      dum=length(find(V(tnum,:)));
      Vvalues=V(tnum,1:dum)';    % Transpose => column vectors
      PSFvalues=P(tnum,1:dum)';  % Transpose => column vectors (the original listing named this Pvalues but used PSFvalues)
      update(Vvalues)=update(Vvalues)+PSFvalues*(data(tnum)/innerProduct(tnum));
    end
    Tmpx=TMPextended';
    [g1 g2]=size(Tmpx);
    Ftemp=zeros(g1,g2);
    Gtemp=zeros(g1,g2);
    E=zeros(Npixels,1);  % Update term of the PML algorithm
    tnum=0;
    for mt=2:g1-1,
      for nt=2:g2-1,
        Gt=0;
        gam=0;
        sumgammaFn=0;
        tnum=tnum+1;
        for k1=-1:1,
          index_k1=mt+k1;
          for k2=-1:1,
            index_k2=nt+k2;
            if (~((k1==0)&(k2==0)))
              dif=Tmpx(mt,nt)-Tmpx(index_k1,index_k2);
              if(abs(dif)>=smallNumber)
                gam=(tanh(dif/delta))/(delta*dif);
              else gam=1/(delta*delta);
              end;
              sumgammaFn=sumgammaFn+gam;
              Gt=Gt+gam.*((Tmpx(mt,nt)+Tmpx(index_k1,index_k2))/2);
            end;  % if (~((k1==0)&(k2==0)))
          end;
        end;
        Ftemp(mt,nt)=sumgammaFn;
        Gtemp(mt,nt)=Gt;
      end;
    end;
    E=-x.*update;
    F=2*beta.*reshape((Ftemp((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2))',Npixels,1);
    G=(scale-4*beta.*reshape((Gtemp((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2))',Npixels,1));
    Deriv=(Ftemp((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2))';
    Deriv=reshape(Deriv,Npixels,1);
    for tnum=1:Npixels,
      dum=length(find(V(tnum,:)));
      pixellocations=V(tnum,1:dum)';  % Transpose => column vectors
      PSFvalues=P(tnum,1:dum)';       % Transpose => column vectors
    end
    xnew=((-G+sqrt((G.^2)-(8.*E.*F)))./(4.*F));  % PML estimate of the reflectance parameters
    MSE(outerloop)=sqrt(sum((abs(xnew.^2-x.^2))./((abs(xnew)).^2)));
    if (outerloop>=2),
      if ((MSE(outerloop-1)-MSE(outerloop))<=0.000000001),
        converge=1; 'PML solution found',
      end;
    end;
    x=xnew;  % Update current iterate
    x=x*(meanval/mean(x(:)));  % Scale iterate so that perceived brightness is retained
    for tnum=1:Npixels,
      dum=length(find(V(tnum,:)));
      pixellocations=V(tnum,1:dum)';  % Transpose => column vectors
      PSFvalues=P(tnum,1:dum)';       % Transpose => column vectors
      innerProduct(tnum,1)=x(pixellocations)'*PSFvalues;
    end

  end  % End of iterations for estimating reflectance parameters

  % Next, we estimate the PSF
  Inp=conv2(TMPextended,psf,'same');  % (the original listing referenced an undefined `extended`; TMPextended is assumed)
  convd=Inp((n1-1)/2+1:M+(n1-1)/2,(n2-1)/2+1:N+(n2-1)/2);
  diff(outerloop)=sum(sum((blrd-convd).^2));
  psf=reshape(psf,numcoefficients,1);
  for tnum=1:numcoefficients,
    dum=length(find(V(:,tnum)));
    pixellocations=V(1:dum,tnum)';  % Transpose => column vectors
    PSFvalues=P(1:dum,tnum)';  % Transpose => column vectors
    pro(pixellocations)=x(pixellocations)'.*PSFvalues;
    sumX(tnum)=sum(x(pixellocations));
    updt=0;
    upd=0;
    for len=1:dum,
      updt=updt-data(pixellocations(len)).*x(pixellocations(len))./innerProduct(pixellocations(len));
      upd=upd+data(pixellocations(len)).*x(pixellocations(len))./innerProduct(pixellocations(len));
    end;  % len
    C=psf(tnum)*updt;
    B=sumX(tnum)-2*alpha;
    A=2*alpha*sum(psf)/psf(tnum);
    r1=(-B+sqrt(B.^2-4*A*C))/(2*A);
    r2=(-B-sqrt(B.^2-4*A*C))/(2*A);
    if r1>0,
      psf(tnum)=r1;  % Positive root
    else, psf(tnum)=r2;  % Negative root
    end;
    updateValue(tnum)=upd/sumX(tnum);
  end;  % End of for loop, tnum=1:numcoefficients

  P=repmat(psf',Npixels,1);  % Replicate lexicographical PSF vector
  psf=reshape(psf,n1,n2);    % Reshape PSF back to 2 dimensions
  psf=psf';
  mlOutput=reshape(x,m,n);   % Current iterate of the PML algorithm (updated image)
end;  % End of main iteration
76
References
[1] P. J. Huber. Robust Statistics. John Wiley and Sons, New York, NY, 1981.

[2] Ji-Ho Chang and John M. M. Anderson. Regularized image reconstruction algorithms for positron emission tomography. IEEE Trans. Medical Imaging, 23(9):1165–1175, September 2004.

[3] Toygar Akgun, Yucel Altunbasak, and Russell M. Mersereau. Super-resolution reconstruction of hyperspectral images. IEEE Trans. Image Processing, 14(11), November 2005.

[4] Susan M. Schweizer and José M. F. Moura. Hyperspectral imagery: Cluster adaptation in anomaly detection. IEEE Trans. Information Theory, 46(5):1855–1871, August 2000.

[5] http://www.un.org/Depts/dha/mct/facts.htm. Technical report.

[6] M. T. Eismann, J. H. Seldin, C. R. Schwartz, J. R. Maxwell, K. K. Ellis, J. N. Cederquist, A. D. Stocker, A. Oshagan, R. O. Johnson, W. A. Shaffer, M. R. Surette, M. J. McHugh, A. P. Schaum, and L. B. Stotts. Target detection in desert backgrounds: Infrared hyperspectral measurements and analysis. SPIE: Signal and Data Processing of Small Targets, 2561:80–97, 1995.

[7] A. D. Stocker, A. Oshagan, J. H. Seldin, J. N. Cederquist, and C. R. Schwartz. Analysis of infrared multi-spectral target/background field measurements. SPIE: Signal and Data Processing of Small Targets, 2235:148–161, 1994.

[8] A. Stocker, X. Yu, E. Winter, and L. E. Hoff. Adaptive detection of subpixel targets using multi-band frame sequences. SPIE: Signal and Data Processing of Small Targets, 1481:156–169, April 1991.

[9] X. Yu, L. E. Hoff, I. S. Reed, A. M. Chen, and L. B. Stotts. Automatic target detection and recognition in multiband imagery: A unified ML detection and estimation approach. IEEE Trans. Image Processing, 6:143–156, January 1997.

[10] S. M. Schweizer and J. M. F. Moura. Efficient detection in hyperspectral imagery. IEEE Trans. Image Processing, 10(4), April 2001.

[11] H. C. Andrews and B. R. Hunt. Digital Image Restoration. Prentice-Hall, Inc., New Jersey, 1977.

[12] A. K. Katsaggelos, editor. Digital Image Restoration. Springer Verlag, New York, 1991.
[13] D. Kundur and D. Hatzinakos. Blind image deconvolution. IEEE Signal Process. Mag., 5:43–64, May 1996.

[14] R. Fabian and D. Malah. Robust identification of motion and out-of-focus blur parameters from blurred and noisy images. CVGIP: Graph. Models Image Process., 53(5):403–412, July 1991.

[15] M. Cannon. Blind deconvolution of spatially invariant image blurs with phase. IEEE Trans. Acoust., Speech, Signal Process., ASSP-24(1):58–63, February 1976.

[16] E. Thiébaut and J.-M. Conan. Strict a priori constraints for maximum likelihood blind deconvolution. Journal of Opt. Soc. Amer. A, 12(3):485–492, 1995.

[17] T. F. Chan and C. K. Wong. Total variation blind deconvolution. Tech. Report, 1996.

[18] T. J. Holmes. Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 9(7):1052–1061, July 1992.

[19] D. Kundur and D. Hatzinakos. Blind image deconvolution revisited. IEEE Signal Process. Mag., 13:61–63, November 1996.

[20] M. Jiang and G. Wang. Development of blind image deconvolution and its applications. J. X-ray Sci. Technol., 11(1):13–19, 2003.

[21] Ming Jiang, Ge Wang, Margaret W. Skinner, Jay T. Rubinstein, and Michael W. Vannier. Blind deblurring of spiral CT images. IEEE Trans. Medical Imaging, 22(7), July 2003.

[22] L. A. Shepp and Y. Vardi. Maximum likelihood reconstruction for emission tomography. IEEE Trans. Medical Imaging, 1:113–121, 1982.

[23] B. W. Silverman, M. C. Jones, J. D. Wilson, and D. W. Nychka. A smoothed EM approach to indirect estimation problems, with particular reference to stereology and emission tomography (with discussion). Journal of the Royal Statistical Society B, 52:271–324, 1990.

[24] H. Lu and J. M. M. Anderson. Image reconstruction of PET images using denoised data. IEEE Nuclear Science Symposium Conference, pages 1746–1749, 2001.

[25] Ji-Ho Chang and John M. M. Anderson. Improved Poisson intensity estimation: denoising application using Poisson data. IEEE Trans. Image Processing, 13:1128–1135, August 2004.

[26] S. Alenius, U. Ruotsalainen, and J. Astola. Using local median as the location of the prior distribution in iterative emission tomography image reconstruction. IEEE Trans. Nucl. Sci., 45:3097–3104, December 1998.

[27] P. J. Green. Bayesian reconstruction from emission tomography data using a modified EM algorithm. IEEE Trans. Medical Imaging, 9:84–93, March 1990.
[28] W. I. Zangwill. Nonlinear Programming: A Unified Approach. Prentice-Hall, Englewood Cliffs, NJ, 1969.

[29] E. Levitan and G. T. Herman. A maximum a posteriori probability expectation maximization algorithm for image reconstruction in emission tomography. IEEE Trans. Medical Imaging, 6:185–192, September 1987.

[30] A. R. De Pierro. On the relation between the ISRA and EM algorithms for positron emission tomography. IEEE Trans. Medical Imaging, 12:328–333, June 1993.

[31] A. R. De Pierro. A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography. IEEE Trans. Medical Imaging, 14:132–137, March 1995.

[32] T. K. Moon and W. C. Stirling. Mathematical Methods and Algorithms. Prentice-Hall, Upper Saddle River, NJ, 2000.

[33] H. Erdogan and J. A. Fessler. Monotonic algorithms for transmission tomography. IEEE Trans. Medical Imaging, 18:801–814, September 1999.

[34] D. L. Snyder, A. M. Hammoud, and R. L. White. Image recovery from data acquired with a charge-coupled-device camera. Journal of the Optical Society of America A, 10:1014–1023, 1993.

[35] H. Erdogan and J. A. Fessler. Ordered subset algorithms for transmission tomography. Physics in Medicine and Biology, 44:2835–2851, May 1999.

[36] I. T. Hsiao, A. Rangarajan, and G. Gindi. A new convergent MAP reconstruction algorithm for emission tomography using ordered subsets and separable surrogates. IEEE International Symposium on Biomedical Imaging, pages 409–412, 2002.

[37] K. Lange, D. R. Hunter, and I. Yang. Optimization transfer using surrogate functions. Journal of Computational and Graphical Statistics, 9:1–20, March 2000.

[38] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38, 1977.

[39] W. H. Richardson. Bayesian-based iterative method of image restoration. Journal of Opt. Soc. Amer., 62:55–59, 1972.

[40] L. B. Lucy. An iterative technique for the rectification of observed distributions. Astronomical Journal, 79:745–754, 1974.

[41] José M. Bioucas-Dias, Mário A. T. Figueiredo, and João P. Oliveira. Total variation-based image deconvolution: A majorization-minimization approach.

[42] Hussein A. Aly and Eric Dubois. Image up-sampling using total-variation regularization with a new observation model. IEEE Trans. Image Processing, 14(10), October 2005.