
Smart Computing Review, vol. 4, no. 3, June 2014

DOI: 10.6029/smartcr.2014.03.005



AN OVERVIEW OF SR TECHNIQUES APPLIED TO IMAGES, VIDEOS AND MAGNETIC RESONANCE IMAGES

Kathiravan S and Kanakaraj J

Dept. of EEE, PSG College of Technology / Peelamedu, Coimbatore, Tamilnadu, India – 641 004 / [email protected], [email protected]

* Corresponding Author: Kathiravan S

Received March 20, 2013; Revised May 25, 2014; Accepted June 01, 2014; Published June 30, 2014

Abstract: In this paper, we summarize the statistical theory necessary for image reconstruction and discuss the idea of super-resolution as an inverse problem. Moreover, we present the contributions of various techniques for improving the spatial resolution of images, video, and magnetic resonance images using super-resolution methods. Subsequently, we portray the major findings pertaining to the study. Additionally, we give a glimpse of the numerous factors influencing the performance of super-resolution.

Keywords: super-resolution, MR image, image and video

Statistical Approach

■ Well-Posed Problems

According to the classification of Jacques Hadamard, a well-posed problem is one whose solution has the following properties:

Solution existence – For all data, a solution of the problem or model must exist.

Solution uniqueness – For all data, if a solution exists, it must be unique.


Stability of the solution process – The solution of the problem or the model must depend continuously on the data.

■ Ill-Posed Problems

The problems that fail to satisfy the above-mentioned properties are considered to be ill-posed. Inverse problems are often

ill-posed, since the process of computing an inverse solution can be extremely unstable in that a small change in observed

data can lead to a great change in the estimated model. The condition of stability is often violated and the problems have to

be reformulated for numerical treatment. Many super-resolution (SR) methods are ill-posed. In this work, we consider regularization of the SR reconstruction by utilizing additional information (a priori knowledge) to compensate for

the loss of information.
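The instability condition can be made concrete with a toy numerical sketch (illustrative only, not from the paper): solving a nearly singular 2×2 linear system shows how a data perturbation of 1e-4 moves the recovered solution by order 1.

```python
# Illustrative sketch of ill-posedness: a tiny change in the observed
# data of an almost-singular linear system causes a large change in the
# recovered solution. All numbers are invented for illustration.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Nearly singular forward model: the two rows are almost dependent.
A = (1.0, 1.0, 1.0, 1.0001)

x1 = solve_2x2(*A, 2.0, 2.0001)   # "clean" observed data
x2 = solve_2x2(*A, 2.0, 2.0002)   # data perturbed by only 1e-4

print(x1)   # close to (1.0, 1.0)
print(x2)   # close to (0.0, 2.0): the solution jumped by ~1 per component
```

The forward (direct) problem here is perfectly stable; it is the inversion that amplifies the perturbation, which is exactly why SR reconstruction needs regularization.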

■ Inverse Problems

Inverse problems deal with finding the cause of some results; that is, we have results and we want to find the cause that produced them. The term inverse problem refers to inferring the values of the parameters characterizing the system under investigation from the results of actual observations. Inverse problems do not fulfil Hadamard's definition of well-posedness: an inverse problem fails to satisfy one or more of the well-posed properties. In the context of SR, the results are low-resolution (LR) images and the causes are high-resolution (HR) images. We have LR images and we want to find the HR image that produced the LR images.

Inverse problems are typically ill-posed. A classical example of an inverse problem involves polynomials. Given a

polynomial of order p, the direct problem would be to find the p roots of the polynomial, and the inverse problem would be

concerned with finding the polynomial given the p roots. This example illustrates that the data of the direct problem is the

solution of the inverse problem and vice versa. SR reconstruction is also an inverse problem. In this case, the HR image

that is the source of information is estimated from the observed data (the LR images). Figure 1 illustrates SR as an inverse

problem where a given number of LR frames of the same scene are combined into a single frame with improved

resolution.
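The polynomial example above can be sketched in a few lines (an illustrative toy with made-up roots): the inverse problem expands the monic polynomial from its p roots, and evaluating the result at those roots recovers the direct problem's data.

```python
# Toy illustration of the polynomial inverse problem: rebuild the monic
# polynomial's coefficients from its roots, then check that the roots
# are indeed zeros of the result. Roots here are invented example data.

def poly_from_roots(roots):
    """Inverse problem: expand prod(x - r) into coefficient form.
    coeffs[i] is the coefficient of x**i (lowest degree first)."""
    coeffs = [1.0]                        # the constant polynomial 1
    for r in roots:
        shifted = [0.0] + coeffs          # coefficients of x * p(x)
        scaled = [-r * c for c in coeffs] + [0.0]   # of -r * p(x)
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

def eval_poly(coeffs, x):
    """Direct problem's data: evaluate the polynomial at x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

coeffs = poly_from_roots([1.0, 2.0])      # roots -> polynomial
print(coeffs)                             # [2.0, -3.0, 1.0], i.e. x^2 - 3x + 2
print(eval_poly(coeffs, 1.0), eval_poly(coeffs, 2.0))   # 0.0 0.0
```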

Figure 1. Super-resolution as an inverse problem

■ Bayesian Theorem

The Bayesian theorem describes the relationship between two conditional probabilities. It expresses the posterior probability of a hypothesis H (the probability of H after evidence E is observed) in terms of the prior probability of H and the conditional probability of E given H. The mathematical expression is presented as Equation (1) in appendix A.

In this case, the HR image f is considered the hypothesis and the LR images gk are the evidence. The SR reconstruction

that employs the Bayesian approach has to find the estimate of the HR image, given the LR observations. The mathematical

expression is presented as Equation (2) in appendix A.
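As a minimal numeric sketch of the theorem (the probabilities below are made-up values, not from the paper), the posterior of a hypothesis H given evidence E can be computed from the prior and the two likelihoods via the law of total probability:

```python
# Bayes' theorem on a toy binary hypothesis: P(H | E) is computed from
# the prior P(H) and the likelihoods P(E | H) and P(E | not H).

def posterior(prior_H, like_E_given_H, like_E_given_notH):
    """P(H | E) = P(E | H) P(H) / P(E), with P(E) by total probability."""
    p_E = like_E_given_H * prior_H + like_E_given_notH * (1.0 - prior_H)
    return like_E_given_H * prior_H / p_E

# Illustrative numbers: an even prior, and evidence four times more
# likely under H than under not-H.
p = posterior(prior_H=0.5, like_E_given_H=0.8, like_E_given_notH=0.2)
print(p)   # 0.8
```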

■ Markov Property

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only on the current state, and not on past states. In other words, the Markov property means “an absence of


memory” of a random process. The mathematical expression is presented as Equation (3) in appendix A.
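The “absence of memory” can be illustrated with a toy two-state chain (states and transition probabilities invented for illustration): the next state is sampled from a distribution that consults only the current state, never the path taken to reach it.

```python
import random

# A two-state Markov chain: the transition table fully determines the
# next-state distribution given the current state, so the history of
# earlier states is never consulted.

TRANSITIONS = {                       # P(next | current)
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def step(state, rng):
    """Sample the next state; only `state` matters, not the past."""
    states, probs = zip(*TRANSITIONS[state])
    return rng.choices(states, weights=probs)[0]

rng = random.Random(0)                # fixed seed for reproducibility
state, path = "sunny", ["sunny"]
for _ in range(5):
    state = step(state, rng)          # `path` is never passed in
    path.append(state)
print(path)
```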

■ Markov Random Fields

Let S = {1, 2, ..., N} be the set of indexes and R = {Ri, i ∈ S} denote a family of random variables indexed by S, in which each random variable Ri takes a value ri in its state space. Such a family R is called a random field. According to Rangarajan, Markov Random Fields (MRF) extend the Markov property to two or more dimensions and provide a convenient way for modelling context-dependent entities such as image pixels. Mutual influences among the entities are characterized using conditional MRF distributions.

In an MRF, the sites in S are related to one another via a neighborhood system, which can be multidimensional. Let it be defined by N = {Ni, i ∈ S}, where Ni is the set of sites neighboring i, with i ∉ Ni. The mathematical expressions are presented as Equations (4) and (5) in appendix A.

Figure 2. Example of an MRF of order one

Consider the grid of Figure 2 as a random field with a first-order neighborhood system. If the probability of site S given all the remaining grid sites equals the probability of site S given only the neighbors of S, then the process is an MRF.

■ Gibbs Random Fields

The Hammersley-Clifford theorem establishes that an MRF can equivalently be characterized using a Gibbs distribution. R is a Gibbs random field (GRF) with respect to the neighborhood system G = {Gx, x ∈ S} if and only if Equations (6) and (7) presented in appendix A are true.

Many scientists have proposed Gibbs prior models to regularize images in the ill-posed SR reconstruction problem. Unfortunately, the hyper-parameters used to specify Gibbs priors can greatly influence the degree of regularity imposed by such priors (Geman & Geman 1984).

■ Maximum A-Posteriori Estimation

The Maximum A-Posteriori (MAP) estimate uses the prior probability distribution along with the likelihood to find the best fit of hypothesis H to evidence E. The MAP estimate is the value that occurs most frequently, called the mode, of the posterior distribution. It is a Bayesian approach that exploits not only the experimental data, but also the a priori statistical information available on the unknown parameter vector. This information can come either from sound scientific knowledge or from previous empirical evidence.
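A one-dimensional sketch (with illustrative numbers, not from the paper) shows how MAP estimation blends prior knowledge with data: for a Gaussian prior N(mu0, s0²) on the unknown and Gaussian observation noise, the posterior mode is a precision-weighted average of the prior mean and the sample mean.

```python
# Scalar MAP estimation with conjugate Gaussians: the posterior mode
# (which equals the posterior mean here) weights the prior mean and the
# data mean by their respective precisions. All numbers are toy values.

def map_estimate(obs, mu0, s0_sq, noise_sq):
    """MAP estimate for a Gaussian prior N(mu0, s0_sq) and i.i.d.
    Gaussian observations with variance noise_sq."""
    n = len(obs)
    prior_prec = 1.0 / s0_sq          # precision of the prior
    like_prec = n / noise_sq          # precision contributed by the data
    data_mean = sum(obs) / n
    return (prior_prec * mu0 + like_prec * data_mean) / (prior_prec + like_prec)

est = map_estimate([2.0, 2.2, 1.8], mu0=0.0, s0_sq=1.0, noise_sq=0.5)
print(est)   # pulled from the data mean 2.0 toward the prior mean 0.0
```

With more observations the likelihood precision grows and the estimate approaches the plain data mean, i.e., the prior matters most when the data are scarce, which is exactly the regime of SR reconstruction.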

Image Acquisition

A camera is the most common way of capturing an image or scene. The camera got its name from a phenomenon called

camera obscura (Latin for dark room), in which light travelling through a pinhole forms an inverted image on a screen

inside a dark room. Although the projected image is in the analog domain, it is often converted to a digital representation

for further analysis.

A video camera employs a lens to focus light from the scene onto an array of sensors called photosites (Jaehne et al.

2000) that measure light intensity at different locations on the image plane. Video sequences are obtained by repeating this


spatial measurement process over time. Therefore, we can visualize the scene as a spatio-temporal continuous signal of

arbitrary bandwidth that is to be measured by a discrete sampling device. As illustrated in Figure 3, light from the scene is

affected by different elements on its way to the image sensor. In particular, light has to travel through different media that

usually behave as low-pass filters by attenuating high frequency components, i.e., details, of the observed scene. The

blurring effect of these media, which may include the atmosphere, water and optical material, is often characterized by a

point spread function (PSF), which models the degradation of a point light source by an imaging device (Gonzalez &

Woods 2008). The spatial frequency response of the PSF, also known as its modulation transfer function (MTF),

determines how the different frequency components of the scene are attenuated during the imaging process. Another source

of degradation is chromatic aberration (Jaehne et al. 2000), which results from the failure of a lens to focus all colors to the

same point. This phenomenon is due to the fact that the refractive index of a lens decreases with increasing light

wavelength.

The resulting low-pass filtered image is then sampled by the array of photosites. Since a photosite is not an ideal

sampler, it cannot measure light at an infinitely precise location in the scene. Instead, a pixel sensor measures light intensity

by integrating photons over its surface area. Consequently, the measured signal undergoes another blurring effect at this

stage. Furthermore, spatial aliasing may occur during this sampling process when the sensor density is not high enough to

capture all frequencies that have passed through the lens and other media. Similarly, temporal aliasing can occur when the frame rate of the camera is too low; one typical example of this phenomenon is the illusion of a wheel spinning backwards in a video. The sampling process is not instantaneous; to produce a meaningful value, each photosite must integrate light

during the exposure, which results in temporal blur. Furthermore, relative motion between the camera and observed objects

occurring during the exposure period produces a smearing effect called motion blur.

Figure 3. Image acquisition with a digital device

Observation Model

The observation model establishes a relationship between the original HR image and the observed LR images. Figure 4 illustrates the observation model, where the observed LR images are warped, blurred, down-sampled and noisy versions of the original HR image. In this section, two observation models of the imaging system are presented. They formulate the

of the original HR image. In this section, two observation models of the imaging system are presented. They formulate the

SR image reconstruction problem and the SR video reconstruction problem, respectively.

Figure 4. The observation model


■ Observation Model for Image Super-Resolution

As depicted in Figure 4, the image acquisition process is modeled by the following four operations:

Geometric transformation

Blurring effect

Down-sampling by a factor of q1 × q2

White Gaussian noise

Note that the geometric transformation includes translation, rotation, and scaling. Various blurs (such as motion blur

and out-of-focus blur) are usually modeled by convolving the image with a low-pass filter, which is modeled by a PSF.
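The four operations above can be sketched as a toy forward model, reduced to 1D for brevity (the 3-tap kernel, factor-2 decimation, and noise level are illustrative stand-ins for the PSF, the q1 × q2 down-sampling, and the white Gaussian noise; the geometric transformation is omitted):

```python
import random

# Toy 1D observation model: LR = downsample(blur(HR)) + noise.
# Kernel weights, decimation factor and noise level are invented.

def observe(hr, kernel=(0.25, 0.5, 0.25), factor=2, noise_std=0.01, seed=0):
    rng = random.Random(seed)
    # Blur: convolve with a low-pass kernel (replicate-pad the borders),
    # standing in for the PSF of the optics.
    padded = [hr[0]] + list(hr) + [hr[-1]]
    blurred = [sum(k * padded[i + j] for j, k in enumerate(kernel))
               for i in range(len(hr))]
    # Down-sample by the given factor, then add white Gaussian noise.
    return [blurred[i] + rng.gauss(0.0, noise_std)
            for i in range(0, len(blurred), factor)]

hr = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]   # toy HR "image"
lr = observe(hr)
print(len(lr))   # 4: only half the HR samples survive decimation
```

SR reconstruction is the inverse of this pipeline: given several such `lr` observations with different shifts, estimate `hr`.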

It is important to note that there is another observation model commonly used in the literature (Chiang et al. 2000) (Patti

et al. 2001) (Lertrattanapanich et al. 2002) (Wang et al. 2004). The only difference is that the order of warping and blurring

operations is reversed. When the imaging blur is spatio-temporally invariant and only global translational motion is

involved among multiple observed low-resolution images, the blur matrix and the motion matrix are commutable.

Consequently, these two models coincide. However, when the imaging blur is spatio-temporally variant, it is more

appropriate to use the second model. The determination of the mathematical model for formulating the SR computation

should coincide with the imaging physics (i.e., the physical process to capture LR images from the original HR ones)

(Katsaggelos et al. 2007).

■ Observation Model for Video Super-Resolution

The observation model for SR video computation can be formulated by applying the observation model of the SR image at

each temporal instant.

Super-Resolution Image Reconstruction

The objective of SR image reconstruction is to produce an image with a higher resolution using one or a set of images

captured from the same scene. In general, the SR image techniques are classified into four classes:

Frequency domain-based approach

Interpolation-based approach

Regularization-based approach

Learning-based approach

The first three categories get a HR image from a set of LR input images, while the last one achieves the same objective

by exploiting the information provided by an image database.

■ Frequency Domain Based Image Super-Resolution Approach

The frequency domain approach makes explicit use of the aliasing that exists in each LR image to reconstruct an HR image.

The first frequency-domain SR method can be credited to Tsai and Huang (1984), where they considered the SR

computation for noise-free LR images. They used the frequency domain approach to demonstrate the ability to reconstruct

one improved resolution image from several down-sampled noise-free versions of it, based on the spatial aliasing effect.

They proposed to first transform the LR image data into the Discrete Fourier transform (DFT) domain and then combine

them according to the relationship between the aliased DFT coefficients of the observed LR images and that of the

unknown HR image. The combined data is then transformed back to the spatial domain where the new image could have a

higher resolution than that of the input images. Next, Kim et al. (1990) proposed a frequency domain recursive algorithm for the restoration of SR images from noisy and blurred measurements. Kim and Su (1993) also explicitly incorporated the de-blurring computation into the HR image reconstruction process, because the separate de-blurring of input frames would introduce undesirable phase and high-wavenumber distortions into the DFT of those frames. Rhee and Kang (1999)

exploited the discrete cosine transform (DCT) to perform fast image de-convolution for SR image computation.
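The aliasing relation these frequency-domain methods exploit can be checked numerically (an illustrative sketch with a made-up signal, not any cited paper's algorithm): each DFT coefficient of a factor-2 down-sampled signal is the average of two "folded" DFT coefficients of the original signal.

```python
import cmath

# Verify the DFT folding (aliasing) identity for factor-2 decimation:
# Y[k] = (X[k] + X[k + n/2]) / 2, where X is the DFT of the original
# length-n signal and Y is the DFT of its even samples.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

x = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]   # toy length-8 HR signal
X = dft(x)
y = x[::2]                                         # factor-2 down-sampling
Y = dft(y)

for k in range(4):
    folded = 0.5 * (X[k] + X[k + 4])               # spectrum folds onto itself
    assert abs(Y[k] - folded) < 1e-9
print("aliasing relation verified")
```

Several shifted LR images give several such folded equations with different phase terms, and solving that linear system for the unfolded coefficients is, in essence, the Tsai-Huang approach.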

The frequency-domain-based SR approaches have a number of advantages. First, they offer an intuitive way to enhance the details (usually the high-frequency information) of the images by extrapolating the high-frequency information present in


the LR images. Secondly, these frequency-domain-based SR approaches have low computational complexity. However, the frequency-domain-based SR methods are insufficient for real-world applications, since they require that only a global displacement exist between the observed images and that the blur be linear space-invariant during the image

acquisition process. Recently, many researchers have begun to investigate the use of the wavelet transform for addressing

the SR problem to recover detailed information (usually the high-frequency information) that is lost or degraded during the

image acquisition process. This is motivated by the fact that the wavelet transform provides a powerful and efficient multi-

scale representation of the image for recovering high-frequency information (Nguyen et al. 2000). These approaches

typically treat the observed LR images as low-pass filtered sub-bands of the unknown wavelet-transformed HR image. The

aim is to estimate the finer scale sub-band coefficients, followed by applying the inverse wavelet transform to produce the

HR image. Woods et al. (2006) presented an iterative expectation maximization (EM) algorithm (Dempster et al. 1977) for

simultaneously performing the registration, blind de-convolution, and interpolation operations. El-Khamy et al. (2005, 2006)

proposed to first register multiple LR images in the wavelet domain, then fuse the registered LR wavelet coefficients to

obtain a single image, followed by performing interpolation to get a HR image. Chappalli and Bose (2005) incorporated a

de-noising stage into the conventional wavelet-domain SR framework to develop a simultaneous de-noising and SR

reconstruction approach. Ji and Fermuller (2006, 2009) proposed a robust wavelet SR approach to handle the error incurred

in both the registration computation and the blur identification computation. Although the frequency domain methods are

intuitively simple and computationally cheap, the observational model is restricted to only global translational motion and

LSI blur. Due to the lack of data correlation in the frequency domain, it is also difficult to apply spatial domain a priori

knowledge for regularization.

■ Interpolation Based Image Super-Resolution Approach

The interpolation-based SR approach constructs a HR image by projecting all the acquired LR images to the reference

image. Then it fuses together all the information available from each image, since each LR image provides additional information about the scene. Finally, it de-blurs the image. Note that a single image interpolation algorithm

cannot handle the SR problem well, since it cannot produce those high-frequency components that were lost during the

image acquisition process. The quality of the interpolated image generated by applying any single input image interpolation

algorithm is inherently limited by the amount of data available in the image. The interpolation-based SR approach usually

consists of the following three stages: (i) the registration stage for aligning the LR input images, (ii) the interpolation stage

for producing a HR image, and (iii) the de-blurring stage for enhancing the reconstructed HR image produced in Step (ii).

The interpolation stage plays a key role in this framework. There are various ways to perform interpolation. The simplest interpolation algorithm is the nearest-neighbor algorithm, where each unknown pixel is assigned the intensity value of its nearest neighboring pixel. However, this method tends to produce images with a blocky appearance.
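The nearest-neighbor baseline can be sketched in a few lines (toy 2×2 input, illustrative only); the repeated pixel blocks in the output are exactly the blocky appearance noted above.

```python
# Nearest-neighbor up-scaling: every output pixel copies the closest
# input pixel, which for an integer factor means integer division of
# the output coordinates.

def nearest_neighbor(img, factor):
    """Up-scale a 2D list of pixel values by an integer factor."""
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

small = [[1, 2],
         [3, 4]]
big = nearest_neighbor(small, 2)
for row in big:
    print(row)
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```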

Ur and Gross (1992) performed a non-uniform interpolation of a set of spatially-shifted LR images by utilizing the

generalized multichannel sampling theorem. The advantage of this approach is that it has low computational load, which is

thus quite suitable for real-time applications. However, the optimality of the entire reconstruction process is not guaranteed,

since the interpolation errors are not taken into account. Bose and Ahuja (2006) used the Moving Least Square (MLS)

method to estimate the intensity value at each pixel position of the HR image via a polynomial approximation using the

pixels in a defined neighborhood of the pixel position under consideration. Furthermore, the coefficients and the order of

the polynomial approximation are adaptively adjusted for each pixel position.

Irani and Peleg (1991) proposed an Iterative Back-Projection (IBP) algorithm, where the HR image is estimated by iteratively back-projecting the difference between the observed LR images and the simulated LR images. However, this method

might not yield a unique solution due to the ill-posed nature of the SR problem. A projection onto convex sets (POCS)

formulation of the SR reconstruction was first suggested by Stark and Oskoui (1989), and was extended by Patti and Tekalp

(1997) to develop a set-theoretic algorithm to produce the HR image that is consistent with the information arising from the

observed LR images and the prior image model. This information is associated with the constraint sets in the solution space.

The intersection of these sets represents the space of permissible solutions.

The POCS approach describes an alternative iterative approach to incorporating prior knowledge about the solution into

the reconstruction process. With the estimates of registration parameters, this algorithm simultaneously solves the

restoration and interpolation problem to estimate the SR image (Combettes and Civanlar 1991, Combettes 1993). By

projecting an initial estimate of the unknown HR image onto these constraint sets iteratively, a good solution can be

obtained. POCS is simple and allows a convenient inclusion of a priori information. This kind of method is easy to implement, but has the disadvantages of non-uniqueness, slow convergence, and high computational cost.
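One iteration of the back-projection idea can be sketched in 1D (heavily simplified and illustrative: warping, the PSF, and the back-projection kernel are all omitted, and a zero-order hold stands in for up-sampling):

```python
# Simplified 1D iterative back-projection: correct the HR estimate by
# up-sampling the residual between the observed LR signal and the LR
# signal simulated from the current estimate.

def downsample(x, factor=2):
    return x[::factor]

def upsample(x, factor=2):
    out = []
    for v in x:
        out.extend([v] * factor)      # zero-order hold
    return out

def ibp_step(hr_est, lr_obs, factor=2, step=1.0):
    simulated = downsample(hr_est, factor)            # forward model
    error = [o - s for o, s in zip(lr_obs, simulated)]
    correction = upsample(error, factor)              # back-projection
    return [h + step * c for h, c in zip(hr_est, correction)]

lr = [1.0, 3.0]                     # toy observed LR signal
hr = [0.0, 0.0, 0.0, 0.0]           # initial HR estimate
for _ in range(3):
    hr = ibp_step(hr, lr)
print(downsample(hr))               # [1.0, 3.0]: consistent with the data
```

Consistency with the observations is reached quickly here, but as the text notes, many HR signals reproduce the same LR data, so without a prior the solution is not unique.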

■ Regularization Based Image Super-Resolution Approach

Motivated by the fact that the SR computation is, in essence, an ill-posed inverse problem (Bertero et al. 1998), numerous


regularization-based SR algorithms have been developed for addressing this issue (Tom et al. 1995, Tian et al. 2010,

Schultz et al. 1996, Suresh et al. 2007, Belekos et al. 2010, Shen et al. 2007). The basic idea of these regularization-based

SR approaches is to use the regularization strategy to incorporate the prior knowledge of the unknown HR image. From the

Bayesian point of view, the information that can be extracted from the observations (i.e., the LR images) about the

unknown signal (i.e., the HR image) is contained in the probability distribution of the unknown. Then, the unknown HR

image can be estimated via some statistics of a probability distribution of the unknown HR image, which is established by

applying Bayesian inference to exploit the information provided by both the observed LR images and the prior knowledge

of the unknown HR image.

Typically, SR reconstruction is an ill-posed problem due to an insufficient number of LR images and ill-conditioned blur operators. Procedures adopted to stabilize the inversion of an ill-posed problem are called regularization. In this section, deterministic and stochastic regularization approaches for the SR reconstruction algorithm are presented.

Traditionally, Constrained Least Squares (CLS) (Haykin et al. 2002) and MAP SR image reconstruction methods are

introduced (Vaseghi et al. 1996).
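The regularization trade-off can be reduced to a scalar sketch (a Tikhonov/CLS-flavored toy with made-up numbers, not any cited method): minimizing a data-fidelity term plus a weighted penalty has a closed form whose behavior shows the role of the regularization parameter.

```python
# Scalar regularized least squares: minimize (g - h*f)^2 + lam * f^2
# over f, where g is the observation, h the blur gain, and lam the
# regularization parameter. Setting the derivative to zero gives the
# closed form below. Numbers are purely illustrative.

def regularized_estimate(g, h, lam):
    """argmin_f (g - h f)^2 + lam f^2  =  h g / (h^2 + lam)."""
    return h * g / (h * h + lam)

g, h = 2.0, 1.0
print(regularized_estimate(g, h, lam=0.0))   # 2.0: plain least squares
print(regularized_estimate(g, h, lam=1.0))   # 1.0: the prior shrinks the estimate
```

As `h` shrinks (a badly conditioned blur), the unregularized estimate `g / h` blows up with the noise, while `lam > 0` keeps the estimate bounded; choosing `lam` is the parameter-selection problem discussed below.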

MRFs, or Markov/Gibbs random fields, were proposed and developed for modeling image texture during 1990-1994 (Elfadel and Picard 1990, 1993, 1994, Picard and Elfadel 1994, Picard et al. 1991, Picard 1992, Popat and Picard 1993, 1994). Because an MRF can model image characteristics, especially image textures, Bouman and Sauer (1993) proposed a single image restoration algorithm using a MAP estimator with a Generalized Gaussian-Markov Random Field (GGMRF) prior. Later, Stevenson et al. (1994) proposed a single image restoration algorithm using an ML estimator with discontinuity-preserving regularization. Schultz and Stevenson (1994) proposed a single image restoration algorithm using a MAP estimator with the Huber-Markov Random Field (HMRF) prior. Next, a super-resolution reconstruction algorithm using the MAP estimator (or the regularized ML estimator) with the HMRF prior was proposed by Schultz and Stevenson (1996). The blur of the measured images is assumed to be simple averaging, and the measurements' additive noise is assumed to be an independent and identically distributed (i.i.d.) Gaussian vector. Pan

and Reeves (2006) proposed a single image MAP restoration algorithm with an efficient HMRF prior, using

decomposition-enabled edge-preserving image restoration in order to reduce the computational demand.

Typically, the regularized ML estimation (or MAP) is used in image restoration; therefore, the determination of the regularization parameter is an important issue in image restoration. Thompson et al. (1991) proposed methods for choosing the smoothing parameter in image restoration by regularized ML. Next, Mesarovic et al. (1995) proposed single image restoration using regularized ML for an unknown linear space-invariant (LSI) point spread function (PSF). Subsequently, Geman and Yang (1995) proposed single image restoration using regularized ML with robust nonlinear regularization. This approach can be implemented efficiently by Monte Carlo methods, for example by annealing in the FFT domain using a Markov chain that alternates between (global) transitions from one array to the other.

Later, Kang and Katsaggelos (1995) proposed the use of a single image regularization functional, defined in terms of the restored image at each iteration step, instead of a constant regularization parameter, and Kang and Katsaggelos

(1997) proposed regularized ML for SRR, in which no prior knowledge of the noise variance at each frame or the degree of

smoothness of the original image is required. Molina (1995) and Molina et al. (1999) proposed the application of the

hierarchical ML with Laplacian regularization to the single image restoration problem and derived expressions for the

iterative evaluation of the two hyper-parameters (regularization parameters), applying the evidence and MAP analysis within

the hierarchical regularized ML paradigm. Molina et al. (2003) proposed the multi-frame super-resolution reconstruction

using ML with Laplacian regularization. The regularization parameter is defined in terms of the restored image at each iteration

step. Next, Rajan and Chaudhuri (2003) proposed a super-resolution approach, based on ML with MRF regularization, to simultaneously estimate the depth map and the focused image of a scene, both at super-resolution, from its defocused observed images.

Subsequently, He and Kondi (2004, 2004b) proposed image resolution enhancement with adaptively-weighted LR

images (channels) and the simultaneous estimation of the regularization parameter. He and Kondi (2005) also later

proposed a generalized framework of regularized image/video Iterative Blind De-convolution/Super-Resolution (IBD-SR) algorithms, using information from the more mature blind de-convolution techniques in image restoration. Later,

He and Kondi (2006) proposed an SRR algorithm that takes into account inaccurate estimates of the registration parameters

and the point spread function. Vega et al. (2006) addressed the problem of de-convolving color images observed with a single Charge-Coupled Device (CCD) from the SR point of view. Utilizing the regularized ML paradigm, an estimate of the reconstructed image and the model parameters is generated.

Elad and Feuer (1997) proposed a hybrid method combining ML and non-ellipsoid constraints for SR restoration, and Elad and Feuer (1999) proposed an adaptive filtering approach for SR reconstruction. Next, Elad and Feuer (1999b) proposed two iterative algorithms, the R-SD and the R-LMS, to generate the desired image sequence at practical computational complexity. These algorithms assume knowledge of the blur, the down-sampling, the sequence's motion, and the measurement noise characteristics, and apply a sequential reconstruction process to the whole sequence. Subsequently, the special case of SR reconstruction, where the warps are pure translations, the blur is space invariant and the same for all the images, and the noise is white, was addressed by Elad and Hel-Or (2001) for fast SR


Reconstruction (SRR).

Later, Nguyen (2000) and Nguyen et al. (2001) proposed a fast SRR algorithm using regularized ML, employing efficient block circulant pre-conditioners and the conjugate gradient method. Elad (2002) developed the theory of the bilateral filter, showing how it can be improved and extended to treat more general reconstruction problems. Consequently, an alternative SR approach, using an L1-norm estimator and robust regularization based on Bilateral Total Variation (BTV), was presented by Farsiu et al. (2004, 2004b). The performance of this approach is superior to what was proposed earlier, and it converges quickly. However, this SRR algorithm is only effective for AWGN models. Next, Farsiu et al. (2006) proposed fast SRR of color images using an ML estimator with BTV regularization for the luminance component and Tikhonov regularization for the chrominance component.

Subsequently, Farsiu et al. (2006b) addressed the dynamic SR problem of reconstructing a high-quality set of monochromatic or color super-resolved images from low-quality monochromatic, color or mosaicked frames. This

approach includes a joint method for simultaneous SR, de-blurring and demosaicing. It takes into account practical color

measurements encountered in video sequences. Later, Patanavijit and Jitapunkul (2006) proposed the SRR using a

regularized ML estimator with affine block-based registration for the real image sequence. Moreover, Rochefort et al.

(2006) proposed an SR approach based on regularized ML for an extended observation model devoted to the case of non-isometric interframe motion, such as affine motion.

■ Learning Based Image Super-Resolution Approach

Recently, learning-based techniques were proposed to tackle the SR problem (Hertzmann et al. 2001, Chang et al. 2004,

Freeman et al. 2000, Pickup et al. 2003). In these approaches, the high frequency information of the given single LR image

is enhanced by retrieving the most likely high-frequency information from the given training image samples based on the

local features of the input LR image. Hertzmann et al. (2001) proposed an image analogy method to create the high-

frequency details for the observed LR image from a training image database. It contains two stages: an off-line training

stage and an SR reconstruction stage. In the off-line training stage, the image patches serve as ground truth and are used to

generate LR patches by simulating the image acquisition model. Pairs of LR patches and the corresponding (ground truth)

high-frequency patches are collected. In the SR reconstruction stage, the patches extracted from the input LR images are

compared with those stored in the database. Then, the best matching patches are selected according to a certain similarity

measurement criterion (e.g., the nearest distance) as the corresponding high frequency patches used for producing the HR

image.

Chang et al. (2004) proposed that the generation of the HR image patch depends on multiple nearest neighbors in the

training set in a way similar to the concept of manifold learning methods, particularly the locally linear embedding (LLE)

method. In contrast to the generation of an HR image patch, which depends on only one of the nearest neighbors in the

training set as used in the aforementioned SR approaches (Hertzmann et al. 2001, Freeman et al. 2000, Pickup et al. 2003),

this method requires fewer training samples. Another way is to jointly exploit the information learnt from a given HR

training data set, as well as that provided by multiple LR observations.

Datsenko and Elad (2007) first assigned several high quality candidate patches at each pixel position in the observed LR

image. These are found as the nearest-neighbors in an image database that contains pairs of corresponding LR and HR

image patches. These found patches are used as the prior image model and then merged into an MAP cost function to arrive

at the closed-form solution of the desired HR image. Recent research on image statistics suggests that image

patches can be represented as a sparse linear combination of elements from an over-complete image patch dictionary

(Wang et al. 2010, Kim et al. 2010, Yang et al. 2010). The idea is to seek a sparse representation for each patch of the LR

input, followed by exploiting this representation to generate the HR output. By jointly training two dictionaries for the LR

and HR image patches, the sparse representation of a LR image patch can be applied with the HR image patch dictionary to

generate an HR image patch.
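As an illustration of this coupled-dictionary idea, the sketch below sparse-codes an LR patch over an LR dictionary and applies the same coefficients to a paired HR dictionary. The dictionaries, patch sizes, and the toy greedy pursuit are hypothetical choices for illustration, not the trained dictionaries of the cited works.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D (atoms in columns)."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if idx not in support:
            support.append(idx)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # least squares on support
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef

rng = np.random.default_rng(0)
n_atoms = 64
D_lr = rng.standard_normal((16, n_atoms))   # hypothetical dictionary for 4x4 LR patches
D_lr /= np.linalg.norm(D_lr, axis=0)        # unit-norm atoms
D_hr = rng.standard_normal((64, n_atoms))   # coupled dictionary for 8x8 HR patches

lr_patch = 2.0 * D_lr[:, 3] + D_lr[:, 10]   # synthetic 2-sparse LR patch
alpha = omp(D_lr, lr_patch, k=2)            # sparse code from the LR dictionary
hr_patch = D_hr @ alpha                     # same code applied to the HR dictionary
```

In practice the two dictionaries are trained jointly so that corresponding LR/HR patches share one sparse code; the random matrices above merely stand in for that training.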

Super-Resolution Video Reconstruction

SR video approaches reconstruct an HR image sequence from a group of adjacent lower-resolution image frames, either uncompressed or compressed. The major challenges in the SR video problem lie in several

parts. The first challenge is how to extract additional information from each LR image to improve the quality of the final

HR images. The second challenge is how to consider the uncertainty (i.e., estimation error) incurred in motion estimation of

the SR computation. Reliable motion estimation is essential for achieving high-quality SR reconstruction, and SR algorithms are usually required to be robust with respect to these errors (Schultz et al. 1998). To that end, various motion estimation methods have been developed, including parametric modeling (Protter and Elad 2009), block matching (Barreto

et al. 2005, Callico et al. 2008), and optical flow estimation (Baker and Kanade 1999, Zhao and Sawhney 2002, Lin et al.

2005, Le and Seetharaman 2006, Fransens et al. 2007). The existing SR video approaches can be classified into the

following four categories: (i) the sliding-window-based SR video approach (Suresh et al. 2007, Narayanan et al. 2007, Ng

et al. 2007, Patanavijit and Jitapunkul 2007), (ii) the simultaneous SR video approach (Borman and Stevenson 1999, Zibetti

and Mayer 2007, Alvarez et al. 2003), (iii) the sequential SR video approach (Elad and Feuer 1999, Elad and Feuer 1999b,

Farsiu et al. 2006, Costa et al. 2007, Tian et al. 2009), and (iv) the learning-based SR video approach (Bishop et al. 2003,

Dedeoglu et al. 2004, Kong et al. 2006). A review of each category is presented below. Figure 5 portrays the basic example

for SR video reconstruction.

Figure 5. Example for SR video reconstruction

■ Sliding-Window Based Video Super-Resolution Approach

The sliding-window-based approach (Suresh et al. 2007, Narayanan et al. 2007, Ng et al. 2007, Patanavijit and Jitapunkul

2007) is the most commonly used and most direct approach to SR video. The sliding window selects a set of

consecutive LR frames for producing one HR image frame. That is, the window is moved across the input frames to

produce successive HR frames sequentially. The major drawback of this approach is that the temporal correlations among

the consecutively reconstructed HR images are not considered.
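The windowing logic itself can be sketched in a few lines; the fusion of each window into an HR frame is left abstract, and the frame names are placeholders.

```python
def sliding_windows(frames, window=5):
    """Yield (center_index, group) pairs; each group of LR frames yields one HR frame."""
    half = window // 2
    for t in range(half, len(frames) - half):
        yield t, frames[t - half:t + half + 1]

lr_frames = [f"frame_{t}" for t in range(8)]
for t, group in sliding_windows(lr_frames, window=5):
    pass  # here each `group` of 5 LR frames would be fused into HR frame t
```

Note that consecutive windows overlap in all but one frame, yet each HR frame is reconstructed independently, which is exactly the drawback noted above: temporal correlations between the reconstructed HR frames go unused.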

■ Simultaneous Video Super-Resolution Approach

Several works (Borman and Stevenson 1999, Zibetti and Mayer 2007, Alvarez et al. 2003) have attempted to reconstruct multiple HR images simultaneously. Borman and Stevenson (1999) proposed an algorithm to reconstruct multiple HR

images simultaneously by imposing the temporal smoothness constraint on the prior image model, while Zibetti and Mayer

(2007) reduced its computational complexity by only considering the temporally-smooth LR frames in the observation

model. Alvarez et al. (2003) proposed a multi-channel SR approach for the compressed image sequence, which applies the

sliding window to select a set of LR frames to reconstruct another set (same number of frames) of HR frames

simultaneously. However, all these three algorithms require large memory storage, since these multiple HR and LR images

are required to be available at the same time during the reconstruction process. Moreover, the number of HR frames that

should be exploited for simultaneous reconstructions is another important issue which has not been addressed.

■ Sequential Video Super-Resolution Approach

The major challenge in the SR video problem is the difficulty of exploiting the temporally correlated information provided by the previously established HR images and the available temporally correlated LR images to improve the quality of the desired

HR images. Elad and Feuer (1999, 1999b) proposed an SR image sequence algorithm based on adaptive filtering theory, which

exploits the correlation information among the HR images. However, the information provided by the previously observed

LR images is neglected. That is, only a single LR image is used to compute the least-squares estimation for producing one

HR image. On the other hand, the computational complexity and the convergence issues of the SR algorithm developed by

Elad and Feuer (1999) are analyzed by Costa and Bermudez (2007).

Tian and Ma (2009) proposed a three-equation-based state-space filter, incorporating an extra observation equation into the framework of conventional two-equation-based Kalman filtering. They provide a full mathematical derivation arriving at a closed-form solution,

which exploits information from the previously reconstructed HR frame, the currently observed LR frame, and the

previously observed LR frame to produce the next HR frame. The above-mentioned steps will be sequentially processed

across the image frames. This is in contrast to the conventional two-equation-based Kalman filtering approach, in which

only the previously reconstructed HR frame and the currently observed LR frame are exploited.
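For reference, the conventional two-equation Kalman recursion that the three-equation model extends can be sketched for a scalar state. This is an illustrative toy with arbitrary parameter values, not the algorithm of Tian and Ma (2009).

```python
def kalman_sr_step(x_prev, P_prev, y_obs, F=1.0, H=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar two-equation Kalman filter:
    state equation x_t = F * x_{t-1} + w,  observation equation y_t = H * x_t + v."""
    # predict from the previously reconstructed estimate
    x_pred = F * x_prev
    P_pred = F * P_prev * F + Q
    # update with the currently observed LR measurement
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (y_obs - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# filter a short sequence of noisy scalar observations of a constant state
estimate, P = 0.0, 1.0
for y in (0.9, 1.1, 1.0):
    estimate, P = kalman_sr_step(estimate, P, y)
```

The three-equation formulation adds a second observation equation so that the previously observed LR frame also enters each update, rather than only the current one as here.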

■ Learning Based Video Super-Resolution Approach

Bishop et al. (2003) proposed a learning-based SR video method based on the principle of the learning-based SR still image

approach developed in Freeman et al. (2000). Their method uses a learned data set of image patches capturing the

relationship between the low-frequency and high-frequency bands of natural images, and uses a prior image model over

such patches. Furthermore, their method uses the previously enhanced frame to provide part of the training set for

producing the current HR frame. Dedeoglu et al. (2004) and Kong et al. (2006) exploit the same idea, while enforcing both

the spatial and the temporal smoothness constraints among the reconstructed HR frames.

Previous Super-resolution Research in Magnetic Resonance Imaging

In Figure 6, the first row represents the Single Image SR techniques aimed at estimating an HR image using only one LR

image as input. The second row portrays the Multi-frame SR technique, where the SR estimation is done jointly using the

LR image and a reference HR image (Rousseau 2010). The current limits in HR imagery and high SNR in MRI are not due

to the acquisition system resolution, but rather to the acquisition time. The acquisition times for 3-D HR imaging are

impractical for the desired T2-weighted contrast. Herment et al. (2000, 2003) were among the first to experiment

with SR in MRIs. They combined partial k-space data of the same object but with different frequency domain sampling

boundaries using three successive 3-D MRI volumes. To reconstruct an image using k-space data, their method zero-pads

the unknown regions contained in the bounding box from the union of the three k-space data volumes.

Figure 6. Single image SR framework (first row) and the Multi-frame SR approach (second row)

The shared center cube portion of the acquired k-space samples is averaged, while the remaining exclusive partial

volumes of acquired k-space samples are left unchanged for Fourier reconstruction. The total acquisition time of the three

3-D MRI volumes is shorter than the acquisition time of an equivalent 3-D MRI scan with k-space data extending to the

bounding box from the union of the three volumes. Their results show anisotropic HR imagery, but only in the directions

shared by the high-frequency k-space data samples. This makes their method readily useful for imaging tissues with

specific directions such as arteries, but not for brain imaging where isotropic resolution is desired.

Peled and Yeshurun (2001) applied the IBP algorithm to a set of eight spatially shifted LR diffusion tensor images with

equal resolutions and fields of view using 2-D multi-slice acquisitions. Diffusion tensor imaging relies on the Brownian

motion of water molecules in brain tissue, which helps visualize white matter fibers (or tracts) of the brain as a way to

detect strokes (Hacke et al. 2000). While Peled and Yeshurun (2001) claimed resolution improvement in the frequency and

phase encoding directions, their results were subsequently invalidated by Scheffler (2002).

Greenspan et al. (2002) verified the statements of Scheffler (2002) by applying SRR techniques to 2-D multi-slice MRI

scans. They also showed that the SR reconstruction results based on the Peled and Yeshurun (2001) experiments can be

replicated with zero-padding interpolation from the LR images. The slice selection and phase and frequency-encoding

directions of the LR images shared uniformly spaced voxel shifts. Applying the 3-D iterative back-projection algorithm, the

frequency spectrum of the HR estimates showed a sharp cut-off in the phase encoded direction. Conversely, spectrum

analysis in the slice selection direction revealed approximately twice the extent of bandwidth, thus providing a basis for SR

reconstruction in this direction.

Consequently, further SR reconstruction experiments of Greenspan et al. (2002) relied on spatial shifts in only the slice

selection direction from both real and phantom 2-D multi-slice MRI data. To account for the PSF needed in the iterative

back projection algorithm, Greenspan et al. (2002) measured the slice profile to be well approximated by Gaussian

functions, where the slice thickness is its full width at half maximum. The SNR per unit acquisition time or SNR efficiency

of the LR image data sets was greater than that of an equivalent scan with spatial resolution equal to the HR estimate. Their results also show that the edge width in the slice selection direction is comparable to that of the HR image.
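The Gaussian PSF parameter follows directly from the measured slice profile: for a Gaussian, the standard deviation relates to the full width at half maximum as sigma = FWHM / (2 sqrt(2 ln 2)). A small helper makes the conversion explicit; the 3 mm slice thickness below is a hypothetical example, not a value from the cited experiments.

```python
import math

def gaussian_sigma_from_fwhm(fwhm_mm):
    """Standard deviation of a Gaussian slice-profile PSF whose full width
    at half maximum equals the slice thickness."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

sigma = gaussian_sigma_from_fwhm(3.0)   # hypothetical 3 mm slice thickness
```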

Peeters et al. (2004) also considered SR in the slice selection direction, but for functional MRI data used in the 2-D

multi-slice acquisition. As opposed to anatomical structures, a functional MRI visualizes the temporal activity or

physiology of the brain, which gives a dynamic time series of 3-D activation areas. Peeters et al. (2004) used an additive

model, computing the volume of shared space from any given LR and HR image pixels. Carmi et al. (2006) explored SR

further using 2-D multi-slice MRI data sets, and recognized emerging problems caused by spatially-shifted LR images like

those used by Greenspan et al. (2002). Their main contribution is a new sampling condition for the LR image that goes

beyond MRI and into SR in general.

Specifically, they show how a set of LR images with equal sampling periods and uniform spatial shifts can propagate

localized spatial errors globally to all the pixels of the HR estimate during the SR reconstruction process. They also show

that in fully-determined SR scenarios, pixels of the HR estimate may remain unresolved despite the absence of any errors.

If input errors are limited to some physical location, for example due to scanner vibrations or non-rigid motions of the object, then it may be desirable to keep this spatial error localized.

This means that any pixel from the estimate of the HR images should be expressed by only a small combination of

neighboring pixels in the LR images. In other words, each row in the inverse of the E & F matrix, when non-singular,

would only have a small number of non-zero entries. While a small support region of the PSF equates to a small number of

HR pixels expressed in the linear combination of any pixel in the LR images, it does not, however, guarantee sparsity for its

inverse.

Bai et al. (2004) presented one application of the SR algorithm for the reconstruction of HR brain images from several

LR MRI data sets with a resolution in slice selection direction that is lower than the original in-plane resolution. They

employed SR reconstruction to improve the SNR and the resolution in all directions of the MR brain images. Through the

proper combination of two orthogonal scans of the same subjects using the SR algorithm, they produce a single image with

improved SNR and resolution that approximates the original in-plane resolution in all directions. The authors of this work

follow the SR approach with the same MAP cost function as Hardie et al. (1997).

Rahman and Wesarg (2010) employed the SR reconstruction for cardiac MR images. The cardiac images are highly

anisotropic (1.5mm x 1.5mm x 8mm) and valuable information is missed along the slice-selection direction because of the

poor resolution in that direction. Rahman and Wesarg (2010) make use of two orthogonal sets of image volumes, which

have different slice-selection directions and recover the missing information using SR reconstruction. They adopt the

approach proposed by Geman and Geman (1984) and exploit the equivalence between MRF and the Gibbs distribution to

express the probability distribution of the prior. They employ a Bayesian approach for maximizing the posterior probability.

The optimization function they use is the one proposed in Hardie et al. (1994). The temperature parameter is assigned the

constant value 10 for the experimental results.

Rahman and Wesarg (2010) manage to show that the quality of the cardiac images after applying the SR algorithm is

dramatically enhanced. In their work, they show the improvement of the quality of the reconstructed images after

combining short-axis (SA) and long-axis (LA) cardiac MR images by applying an SR algorithm. They compare the results with a simple averaging of the SA and LA volumes, and it can be easily seen that the super-resolved images are of

much better quality. In addition, Rahman and Wesarg (2010) use both deformable and rigid registration in the preprocessing step of the SR reconstruction and provide results showing that SRR with deformable registration produces better reconstructions, owing to the motion of the heart and the breathing of the patient. In Rahman and Wesarg (2010b) they also investigate SRR by combining three orthogonal volumes, namely short-axis, long-axis 2-chamber (2CH), and 4-chamber (4CH) views, instead of combining only SA and 2CH. According to their results, this combination is not recommended; they attribute this to imperfect registration results caused by the heart beat and breathing.

Factors Limiting Performance of Super-Resolution

Recently, there has been some criticism towards the overall efficiency of the SR process. The skepticism is fueled by quick

advances in sensor technology. In fact, some argue that the most direct and cost effective solution to increase spatial

resolution is to reduce the pixel size through sensor manufacturing techniques. However, as the pixel size decreases, the number of photons incident on each pixel decreases, and the resulting shot noise significantly degrades the SNR. In practice, this

performance problem will be most noticeable in low-light conditions, where the noise becomes a major problem. In fact,

besides the reduction in photon conversion efficiency, there are other fundamental optical limits that become increasingly

important in the overall imaging process, which place a practical lower limit on pixel size (Hecht 2002, Komatsu et al.

1993). Therefore, in these applications, SR processing might be the solution to overcome the future limitations of sensing

technologies.

Besides digital cameras, several applications might benefit from SR. In the previous section, we listed some potential applications, although we are convinced that there are more that have not been considered. Therefore, the real questions that we should try to address concern not the applicability of SR, but rather its performance.

In the following section, we discuss the performance limits of SR. First, we consider the theoretical bounds that limit

the performance of SR algorithms. Next, we raise some practical issues that limit the performance of the derived algorithms.

■ Necessity of Aliasing

In the frequency domain setting, SR processing restores the DFT samples at a finer resolution and extrapolates frequency

content such that the restored spectrum is wider than any of the observed LR images. The extrapolation is the operation that

is most specific to SR, since it explains the recovery of the lost information during the sampling process. Hunt (1999)

explained the recovery of information beyond the diffraction-limit cut-off by noting that the sinc function (due to rectangular spatial sampling) is infinite in extent, so components of the spectrum above the diffraction limit are mirrored into the spectrum below the cut-off. Thus it is the presence of the aliased

high frequencies that make SR feasible. In practice, this means that the images need to be under-sampled at the sensor level,

without undergoing excessive low pass filtering due to optical blurring or motion blur.
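A one-dimensional sketch makes the point concrete: decimating a signal without low-pass filtering folds an above-Nyquist component to a predictable lower frequency, where it remains available to SR rather than being destroyed. The signal length and frequency below are arbitrary choices for illustration.

```python
import numpy as np

n = 256
t = np.arange(n)
f_true = 100                                  # cycles per n samples; above the
signal = np.cos(2 * np.pi * f_true * t / n)   # Nyquist (64) of the 2x-decimated grid

decimated = signal[::2]                       # undersample without low-pass filtering
spectrum = np.abs(np.fft.rfft(decimated))
f_alias = int(np.argmax(spectrum))            # dominant bin of the LR spectrum

# the 100-cycle component folds to 128 - 100 = 28 cycles per 128 samples,
# so the high-frequency information survives, merely relocated by aliasing
assert f_alias == 28
```

Had the signal been low-pass filtered before decimation, the 100-cycle component would have been removed entirely and no SR algorithm could recover it.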

■ Ill-Posed Inverse Problem

The fundamental problem that is addressed in SR is a typical example of an ill-posed inverse problem. This means that

explicit regularization strategies need to be employed in order to achieve meaningful solutions. In practice, regularization is

incorporated in the solution as terms that express a priori assumptions about the structure of the imaged scene. Most, if not

all, SR algorithms are based on the reconstruction constraints, which assume that the LR images are generated from a single

HR image. Baker and Kanade (2002) derived some analytical results that show that the reconstruction constraints provide

less and less useful information as the magnification factor increases. Most noticeable is their remark that even if the

reconstruction constraints are invertible (which they generally are not), the condition number grows at least quadratically with the target magnification factor, which indicates that the ill-posedness of the inverse problem grows rapidly with the magnification. This means that the overall estimate of the HR solution, especially when

combined with smoothness priors, becomes increasingly irrelevant at large magnification factors. In one experiment, Baker and Kanade (2002) showed that for a magnification factor of 16, the smoothness prior provides more information than the reconstruction

constraints. This illustrates how fast the inverse problem can become ill-conditioned.
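This growth can be illustrated numerically. Assume, purely for illustration, a Gaussian PSF whose width scales with the magnification q (sigma = q/2) and q integer-shifted LR frames with q-fold decimation; the stacked multi-frame operator is then the full circulant blur matrix, whose singular values are the magnitudes of the PSF's DFT.

```python
import numpy as np

def sr_condition_number(N, q):
    """Condition number of the stacked multi-frame SR operator for an N-pixel
    HR signal at magnification q.  With q integer-shifted LR frames and q-fold
    decimation, the stacked operator is the full circulant blur matrix, so its
    singular values are the magnitudes of the PSF's DFT.  The Gaussian PSF
    width sigma = q / 2 is an assumption made for illustration."""
    d = np.minimum(np.arange(N), N - np.arange(N))   # circular distance from 0
    psf = np.exp(-0.5 * (d / (q / 2.0)) ** 2)
    psf /= psf.sum()
    sv = np.abs(np.fft.fft(psf))
    return sv.max() / sv.min()

conds = [sr_condition_number(64, q) for q in (1, 2, 4)]
assert conds[0] < conds[1] < conds[2]   # ill-conditioning grows rapidly with q
```

Even in this fully determined toy setting the conditioning deteriorates by orders of magnitude between q = 2 and q = 4, consistent with the analytical remarks above.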

■ Simplistic Modeling

Most of the proposed methods for SR in the literature suffer from simplistic assumptions. Besides the simplification in the

image formation model, sub-pixel motion is usually assumed to be exactly known. In practice, the displacement between

the consecutive frames has to be separately estimated, which makes the SR reconstruction a compound process that heavily

depends on the precision and implementation details of the motion estimation. Additionally, motion blur, which has a

substantial degrading effect on the performance of SR (Ben-Ezra et al. 2004), is usually skipped in the image formation

model. In fact, few publications dissect the implementation of the different algorithms required in SR in detail. Rather,

most publications concentrate on the simulated inverse problem and the associated regularization strategies. We believe there is a need to investigate and understand the problems posed when considering SR as an entire process, which

integrates the motion estimation process and also tackles the dependent problems such as motion outliers, motion blur and

internal camera settings.

Nonlinearities: The most evident simplification lies in the image formation model itself, which is usually assumed

linear for the tractability of the solution. Usually the employed models do not cater to the different processes that

happen during the sensing of the scene, which typically result in the nonlinearities of the model. For instance,

besides the model recently presented in Gunturk and Gevrekci (2006), which considers the limited dynamic

range and the non-linear sensor response in the SR model, most of the existing literature assumes linear sensing

models and constant camera exposure time. Still, there are several additional physical and processing parameters

which are usually skipped in the modeling, e.g., uneven sampling of the different colors, uneven color response,

different gain levels used with each picture and the resulting signal-dependent amplification of noise, different

optical aberrations such as vignetting or geometrical distortions, etc. Even when these processes are adequately linearized by applying point-wise or spatial processing to the captured images, some additional

enhancement algorithms are usually applied inside the camera to reduce noise or to improve the sharpness of the

color contrast. The resulting process is extremely difficult to capture using a simple model. This means that the

linear models used in the formulations of image SR are approximate at best.

Noise modeling: Usually Gaussian noise models are used in the modeling of the image formation process. It is

well accepted that this is an over-simplification, since in reality the sensor's noise is due to a combination of several

sources, e.g., shot noise, photonic noise, dark current noise, dark signal level, or thermal noise. The processing on

the sensor itself might complicate the noise modeling by introducing errors due to fixed-pattern noise, photon-

response non-uniformity, amplifier noise, circuit noise, pixel cross-talk, correlated double sampling, quantization

noise, or chromatic conversions (Costantini and Susstrunk 2004). Most of the research literature available on detailed sensor noise analysis is developed within the electronics community, and typically comes to conclusions

and models that are useful only for the purpose of electronic hardware design and integration. These fragmented

models are particularly inadequate for image processing applications, since accurate pixel-wise knowledge of the

noise model is required in order to properly restore the image details. In general, signal-dependent noise models

need to be considered in order to improve the fidelity of the reconstruction process in SR. For example, recently Foi

et al. (2005) investigated the effect of precise Poissonian noise modeling on the performance of image de-blurring. It

was found that the assumption of signal-dependent noise, which is closer to reality, significantly improves the

performance of the image restoration process, especially for images captured with sensors having a small pixel size.

In a further development, Foi et al. (2006) proposed a de-convolution technique for observations corrupted by

signal-dependent noise. The de-blurring is performed in the transform-domain and is applied on varying size blocks.

The results demonstrate the good performance of the proposed method, which can be easily combined with other

transform-domain processing.
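The signal-dependent character of shot noise is easy to verify with simulated photon counts; the mean count levels below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated photon counts for a dark and a bright region (arbitrary mean levels)
dark = rng.poisson(lam=10.0, size=100_000)
bright = rng.poisson(lam=1000.0, size=100_000)

# Poisson (shot) noise is signal-dependent: the variance equals the mean,
# so SNR = mean / std grows like the square root of the signal level
snr_dark = dark.mean() / dark.std()
snr_bright = bright.mean() / bright.std()
assert snr_bright > snr_dark
```

A Gaussian model with a single fixed variance cannot capture this dependence, which is why signal-dependent noise models improve restoration fidelity, particularly in the dark regions of small-pixel sensors.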

■ Algorithmic Performance

Precise motion estimation: One critical requirement to achieve good performance in image SR is the availability of

accurate registration parameters. In fact, sub-pixel precision in the motion field is needed to achieve the desired

improvement. In real-life electronic imaging applications the motion occurring between frames is not precisely

known, since perfect control over the data acquisition process is rarely available. Thus, motion estimates must be

computed to determine pixel displacements between frames. To achieve practical implementations of SR, the

problems of sub-pixel image registration and outlier robustness need to be investigated in more detail. The employed

algorithm for motion estimation needs to balance the following properties: precision of registration, noise robustness, locality of motion estimates, robustness to motion outliers, and reasonable computational complexity. It is well accepted that motion estimation is the most challenging task in SR because it is affected by aliasing and degradation in the image formation process, which are precisely the factors that SR aims to resolve. It is well known that accurate estimation of the motion blur parameters is a non-trivial problem and requires strong

assumptions about the camera motion during integration (Tico 2006). It was shown in Ben-Ezra et al. (2004) that even

when an accurate estimate of the motion blur parameters is available, motion blur has a significant influence on the

SR result. The overall performance of SR algorithms is particularly degraded in the presence of persistent outliers,

for which registration has failed. The artifacts caused by an incorrectly registered image are much more visually

disturbing than the intrinsic poor spatial resolution of a single input image. To enhance the robustness of the

processing against this problem, SR algorithms need to integrate adaptive filtering strategies in order to reject the

outlier image regions.
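One standard registration building block is phase correlation, sketched here for integer translations; the sub-pixel refinement that SR actually requires (e.g., interpolation of the correlation peak) is omitted, and the test image is synthetic.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer translation estimate between images a and b via phase correlation.
    (Sub-pixel refinement of the correlation peak is omitted for brevity.)"""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.maximum(np.abs(F), 1e-12)                 # keep phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))          # known circular shift
assert phase_correlation_shift(shifted, img) == (3, -5)
```

Because the method works on whitened spectra, it is comparatively robust to illumination changes, but like all registration schemes it degrades under the aliasing and blur discussed above.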

Computational requirements: SR is a computationally intensive process. Since the initial problem is numerically ill-

posed, most solutions require iterative processing in order to reach an acceptable solution. Additionally, this filtering

technique requires several processing stages (motion estimation, restoration, interpolation) that are usually

interdependent. This makes the overall implementation quite complex, requiring a lot of memory to store the

intermediate results, as well as considerable computational resources to calculate the final result. When the targeted

filtering is to run in mobile imaging devices, it is desirable to have real-time operations. This might be quite

challenging to achieve, especially since portable devices are currently constrained by limited memory and computational resources. Additionally, if the number of input images is large, or the output image size is

large, the processing delay needed to perform the overall filtering might be unreasonably long. On the other hand,

the computational power is increasing all the time, and the rate of improvement is even faster for portable devices.

User expectations of image and video quality are also rising all the time. This means that the opportunity for

integrating heavy processing techniques such as SR will be possible in the future. To this end, SR algorithms can first be introduced by scaling down the processing, for instance by reducing the number of iterations, thus favoring acceptable quality improvement in real-time operation over the best possible quality at slow operation.

Summary

In this work, the current state-of-the-art in SR imaging techniques was reviewed. The background of a statistical approach

and SR as an inverse problem was also presented. The observation model for both SR images and video was established.

The chronological development of methods in this area was studied. Also, the limitations and drawbacks of these

techniques were portrayed. Previous SR research in Magnetic Resonance Imaging was also reviewed.

Appendix A

Bayes' Theorem

Bayes' theorem relates the posterior probability of a hypothesis H given evidence E to the prior probabilities. In mathematical terms, this means:

P(H | E) = P(E | H) P(H) / P(E)    (1)

P(H) is called the prior probability of H, since it does not depend on E. P(H | E) is the conditional probability of H given E, also called the posterior probability. P(E | H) is the conditional probability of E given H, also called the likelihood. P(E) is the prior probability of E. Expanding P(E) over a set of mutually exclusive hypotheses H_i, Bayes' theorem is expressed as follows:

P(H_i | E) = P(E | H_i) P(H_i) / Σ_j P(E | H_j) P(H_j)    (2)

Markov Property

In the case that the process takes discrete values and is indexed by discrete time, the Markov property can be formally expressed as:

P(X_n = x_n | X_{n-1} = x_{n-1}, . . . , X_0 = x_0) = P(X_n = x_n | X_{n-1} = x_{n-1})    (3)

Markov Random Fields

A random field X is said to be an MRF on S with respect to a neighborhood system N if and only if equations (4) and (5)

hold:

P(x) > 0 for all configurations x  (positivity)    (4)

P(x_i | x_{S \ {i}}) = P(x_i | x_{N_i})  (Markovianity)    (5)

where N_i denotes the set of neighbors of site i.

Gibbs Random Fields

A random field X is a Gibbs random field on S with respect to N if and only if its configurations obey a Gibbs distribution:

P(x) = (1/Z) exp(-E(x))    (6)

E(x) = Σ_{c ∈ C} V_c(x)    (7)

where Z is a normalization constant called the partition function and E(x) is the energy function, which is a sum of clique potentials V_c(x) over all possible cliques c in C. The value of the clique potential V_c(x) depends on the local configuration of the clique c. A clique c is defined as a subset of sites in S in which every pair of distinct sites are neighbors, except for single-site cliques.

Maximum A-posteriori Estimation

Let the unknown parameter be f, and the known information be g. We can encode the prior information in terms of a

probability distribution function on the parameter to be estimated. The associated probability P(f) is called the prior

probability. Bayes' theorem shows the way to incorporate prior information into the estimation process:

P(f | g) = P(g | f) P(f) / P(g)    (8)

The term P(f | g) is the posterior, P(g | f) is the likelihood, and the denominator P(g) serves as a normalization term. Since P(g) does not depend on f, maximizing the posterior over f gives the MAP estimate:

f_MAP = argmax_f P(g | f) P(f)    (9)
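Under the common assumption of a Gaussian likelihood and a zero-mean Gaussian prior on f, the MAP estimate has a closed form equivalent to Tikhonov-regularized least squares. The sketch below uses a random forward operator and an illustrative regularization weight; it is a toy instance, not any specific SR formulation from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10))                   # forward (observation) operator
f_true = rng.standard_normal(10)
g = A @ f_true + 0.01 * rng.standard_normal(20)     # noisy observation g = A f + n

# With a Gaussian likelihood and a zero-mean Gaussian prior on f, maximizing
# P(g|f) P(f) is equivalent to Tikhonov-regularized least squares
lam = 0.1                                           # noise-to-prior variance ratio
f_map = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ g)
```

Setting lam = 0 recovers the maximum-likelihood (least-squares) estimate; the prior term is what stabilizes the ill-posed case.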

Observation Model for Image Super-Resolution

The given image (say, with a size of M1 × M2) is considered as the HR ground truth, which is to be compared with the HR image reconstructed from a set of LR images (say, with a size of L1 × L2 each; that is, L1 = M1/q1 and L2 = M2/q2) for conducting performance evaluation. To summarize mathematically,

y_k = D B_k M_k x + n_k    (10)

y_k = W_k x + n_k    (11)

where y_k and x denote the k-th L1×L2 LR image and the original M1×M2 HR image, respectively, and k = 1, 2, . . . , ρ. Furthermore, both y_k and x are represented in lexicographic-ordered vector form, with sizes of L1L2 × 1 and M1M2 × 1, respectively; each L1×L2 image can be transformed (i.e., lexicographically ordered) into an L1L2 × 1 column vector, obtained by ordering the image row by row. D is the decimation matrix with a size of L1L2 × M1M2, B_k is the blurring matrix of size M1M2 × M1M2, and M_k is the warping matrix of size M1M2 × M1M2. Consequently, the three operations can be combined into one transform matrix W_k = D B_k M_k with a size of L1L2 × M1M2. Lastly, n_k is an L1L2 × 1 vector representing the white Gaussian noise encountered during the image acquisition process. Note that n_k is assumed to be independent of x. Over a period of time, one can capture a set of (say, ρ) observations {y_1, y_2, . . . , y_ρ}. With that established, the goal of SR image reconstruction is to produce one HR image x based on {y_1, y_2, . . . , y_ρ}.
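A one-dimensional analogue of this observation model can be assembled explicitly; the signal length, blur kernel, and shift below are arbitrary illustrative choices, with circular boundary handling for simplicity.

```python
import numpy as np

def decimation_matrix(L, q):
    """(L x qL) matrix averaging each block of q HR pixels into one LR pixel."""
    D = np.zeros((L, q * L))
    for i in range(L):
        D[i, i * q:(i + 1) * q] = 1.0 / q
    return D

def blur_matrix(N, kernel=(0.25, 0.5, 0.25)):
    """(N x N) circulant convolution matrix for a small 1-D blur kernel."""
    B = np.zeros((N, N))
    for i in range(N):
        for k, w in zip((-1, 0, 1), kernel):
            B[i, (i + k) % N] = w
    return B

def shift_matrix(N, s):
    """(N x N) circular warping matrix shifting the signal by s pixels."""
    return np.roll(np.eye(N), s, axis=1)

L, q = 4, 2
N = L * q
x = np.arange(N, dtype=float)                    # 1-D stand-in for the HR image
W = decimation_matrix(L, q) @ blur_matrix(N) @ shift_matrix(N, 1)
y = W @ x                                        # one noiseless LR observation
```

Each LR observation corresponds to one such combined operator with its own blur and warp; stacking several of them (plus noise) yields the multi-frame inverse problem described above.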

Observation Model for Video Super-Resolution

More specifically, at each temporal instant n, the relationship between the LR image y_n and the HR image x_n can be mathematically formulated as

y_n = W_n x_n + v_n    (12)

where W_n is defined as in (Katsaggelos et al. 2007), and y_n and x_n represent an L1×L2 LR image and an M1×M2 HR image at temporal instant n, respectively. Both images are represented in lexicographic-ordered vector form, with sizes of L1L2 × 1 and M1M2 × 1, respectively. Additionally, v_n is modeled as a zero-mean white Gaussian noise vector (with a size of L1L2 × 1) encountered during the image acquisition process. Furthermore, v_n is assumed to be independent of x_n. In summary, the SR video problem is an inverse problem, and its goal is to make use of a set of LR images {y_n}, each with a dimension of L1×L2, to compute the HR images {x_n}, each with a dimension of M1×M2 (i.e., the resolution enlargement ratio is q1 × q2).

References

[1] S. Geman, D. Geman, “Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 6, pp. 721-741, 1984.
[2] B. Jaehne, H. Haussecker, “Computer Vision and Applications: A Guide for Students and Practitioners,” Academic Press, Orlando, 2000.
[3] R. C. Gonzalez, R. E. Woods, “Digital Image Processing,” Prentice-Hall, New Jersey, 2008.
[4] M. C. Chiang, T. E. Boult, “Efficient super-resolution via image warping,” Image and Vision Computing, vol. 18, no. 10, pp. 761-771, 2000.
[5] A. Patti, Y. Altunbasak, “Artifact reduction for set theoretic super resolution image reconstruction with edge adaptive constraints and higher-order interpolants,” IEEE Transactions on Image Processing, vol. 10, no. 1, pp. 179-186, 2001.
[6] S. Lertrattanapanich, N. K. Bose, “High resolution image formation from low resolution frames using Delaunay triangulation,” IEEE Transactions on Image Processing, vol. 11, no. 12, pp. 1427-1441, 2002.
[7] Z. Wang, F. Qi, “On ambiguities in super-resolution modeling,” IEEE Signal Processing Letters, vol. 11, no. 8, pp. 678-681, 2004.
[8] A. K. Katsaggelos, R. Molina, J. Mateos, “Super Resolution of Images and Video,” Morgan & Claypool, San Rafael, 2007.
[9] T. S. Huang, “Advances in Computer Vision and Image Processing,” JAI Press, Greenwich, 1988.
[10] S. P. Kim, N. K. Bose, H. M. Valenzuela, “Recursive reconstruction of high resolution image from noisy undersampled multiframes,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, no. 6, pp. 1013-1027, 1990.
[11] S. P. Kim, W. Y. Su, “Recursive high-resolution reconstruction of blurred multiframe images,” IEEE Transactions on Image Processing, vol. 2, no. 4, pp. 534-539, 1993.
[12] S. Rhee, M. G. Kang, “Discrete cosine transform based regularized high-resolution image reconstruction algorithm,” Optical Engineering, vol. 38, no. 8, pp. 1348-1356, 1999.
[13] N. Nguyen, P. Milanfar, “A wavelet-based interpolation-restoration method for superresolution,”

Circuits, Systems and Signal Processing, vol. 19, no. 4, pp. 321-338, 2000. Article (CrossRef Link)

[14] N. A. Woods, N. P. Galatsanos, A. K. Katsaggelos, “Stochastic methods for joint registration, restoration, and

interpolation of multiple undersampled images,” IEEE Transactions on Image Processing, vol. 15, no.1, pp. 201-

213, 2006. Article (CrossRef Link)

[15] A. P. Dempster, N. M. Laird, D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal

of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1-38, 1977.

[16] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, F. E. El-Samie, “Regularized super-resolution

reconstruction of images using wavelet fusion,” Optical Engineering, vol. 44, no. 9, 2005.

[17] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, F. E. El-Samie, “Wavelet fusion: a tool to break the

limits on LMMSE image super-resolution,” International Journal of Wavelets, Multiresolution and Information

Processing, vol. 4, no. 1, pp. 105-118, 2006. Article (CrossRef Link)

[18] M. B. Chappalli, N. K. Bose, “Simultaneous noise filtering and super-resolution with second-generation wavelets,”

IEEE Signal Processing Letters, vol. 12, no.11, pp. 772-775, 2005. Article (CrossRef Link)

[19] H. Ji, C. Fermuller, “Robust wavelet-based super-resolution reconstruction: theory and algorithm,” IEEE

Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 4, pp. 649-660, 2009. Article (CrossRef

Link)

[20] H. Ur, D. Gross, “Improved resolution from subpixel shifted pictures,” CVGIP Graphical Models Image Processing,

vol. 54, no. 2, pp. 181–186, 1992. Article (CrossRef Link)

[21] N. K. Bose, N. A. Ahuja, “Superresolution and noise filtering using moving least squares,” IEEE Transactions on

Image Processing, vol. 15, no. 8, pp. 2239-2248, 2006. Article (CrossRef Link)

[22] M. Irani, S. Peleg, “Improving resolution by image registration,” CVGIP Graphical Models Image Processing, vol.

53, no. 3, pp. 231-239, 1991. Article (CrossRef Link)

[23] H. Stark, P. Oskoui, “High resolution image recovery from image-plane arrays, using convex projections,” Journal

of the Optical Society of America A, vol. 6, no. 11, pp. 1715-1726, 1989. Article (CrossRef Link)

[24] A. J. Patti, A. M. Tekalp, “Super resolution video reconstruction with arbitrary sampling lattices and nonzero

aperture time,” IEEE Transactions on Image Processing, vol. 6, no. 8, pp. 1446-1451, 1997. Article (CrossRef Link)


[25] P. L. Combettes, “The foundations of set theoretic estimation,” Proceedings of the IEEE, vol. 81, no. 2, pp. 182-208,

1993. Article (CrossRef Link)

[26] M. Bertero, P. Boccacci, “Introduction to Inverse Problems in Imaging,” CRC Press, Florida, 1998. Article

(CrossRef Link)

[27] B. C. Tom, A. K. Katsaggelos, “Reconstruction of a high-resolution image by simultaneous registration, restoration,

and interpolation of low-resolution images,” in Proc. of the IEEE international conference on image processing, pp.

539-542, 1995. Article (CrossRef Link)

[28] J. Tian, K. K. Ma, “Stochastic super-resolution image reconstruction,” Journal of Visual Communication and Image

Representation, vol. 21, no. 3, pp. 232-244, 2010. Article (CrossRef Link)

[29] R. R. Schultz, R. L. Stevenson, “Extraction of high-resolution frames from video sequences,” IEEE Transactions on

Image Processing, vol. 5, no. 6, pp. 996-1011, 1996. Article (CrossRef Link)

[30] K. V. Suresh, A. N. Rajagopalan, “Robust and computationally efficient superresolution algorithm,” Journal of the

Optical Society of America A, vol. 24, no. 4, pp. 984-992, 2007. Article (CrossRef Link)

[31] S. Belekos, N. P. Galatsanos, A. K. Katsaggelos, “Maximum a posteriori video super-resolution using a new

multichannel image prior,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1451-1464, 2010. Article

(CrossRef Link)

[32] H. Shen, L. Zhang, B. Huang, P. Li, “A MAP approach for joint motion estimation, segmentation, and super

resolution,” IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 479-490, 2007. Article (CrossRef Link)

[33] S. Haykin, “Adaptive Filter Theory,” Prentice-Hall, New Jersey, 2002.

[34] S. V. Vaseghi, “Advanced Signal Processing and Digital Noise Reduction,” John Wiley & Sons, New York, 1996.

Article (CrossRef Link)

[35] I. M. Elfadel, R. W. Picard, “Gibbs random fields, cooccurrences, and texture modeling,” IEEE Transactions on

Pattern Analysis and Machine Intelligence, vol. 16, no.1, pp. 24-37, 1994. Article (CrossRef Link)

[36] C. Bouman, K. Sauer, “A generalized Gaussian image model for edge-preserving MAP estimation,” IEEE

Transactions on Image Processing, vol. 2, no. 3, pp. 296-310, 1993. Article (CrossRef Link)

[37] R. L. Stevenson, B. E. Schmitz, E. J. Delp, “Discontinuity preserving regularization of inverse visual problems,”

IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 3, pp. 455-469, 1994. Article (CrossRef Link)

[38] R. R. Schultz, R. L. Stevenson, “A Bayesian approach to image expansion for improved definition,” IEEE

Transactions on Image Processing, vol. 3, no. 3, pp. 233-242, 1994. Article (CrossRef Link)

[39] R. Pan, S. J. Reeves, “Efficient Huber-Markov edge-preserving image restoration,” IEEE Transactions on Image

Processing, vol. 15, no. 12, pp. 3728-3735, 2006. Article (CrossRef Link)

[40] A. M. Thompson, J. C. Brown, J. W. Kay, D. M. Titterington, “A study of methods of choosing the smoothing

parameter in image restoration by regularization,” IEEE Transactions on Pattern Analysis and Machine Intelligence,

vol. 13, no. 4, pp. 326-339, 1991. Article (CrossRef Link)

[41] V. Z. Mesarovic, N. P. Galatsanos, A. K. Katsaggelos, “Regularized constrained total least squares image

restoration,” IEEE Transactions on Image Processing, vol. 4, no. 8, pp. 1096-1108, 1995. Article (CrossRef Link)

[42] D. Geman, C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Transactions on Image

Processing, vol. 4, no. 7, pp. 932-946, 1995. Article (CrossRef Link)

[43] M. G. Kang, A. K. Katsaggelos, “General choice of the regularization functional in regularized image restoration,”

IEEE Transactions on Image Processing, vol. 4, no. 5, pp. 594-602, 1995. Article (CrossRef Link)

[44] M. G. Kang, A. K. Katsaggelos, “Simultaneous multichannel image reconstruction and estimation of the

regularization parameters,” IEEE Transactions on Image Processing, vol. 6, no. 5, pp. 774-778, 1997. Article

(CrossRef Link)

[45] R. Molina, “On the hierarchical Bayesian approach to image restoration: applications to astronomical images,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 11, pp. 1122-1128, 1994. Article (CrossRef

Link)

[46] R. Molina, A. K. Katsaggelos, J. Mateos, “Bayesian and regularization methods for hyperparameter estimation in

image restoration,” IEEE Transactions on Image Processing, vol. 8, no. 2, pp. 231-246, 1999. Article (CrossRef

Link)

[47] R. Molina, M. Vega, J. Abad, A. K. Katsaggelos, “Parameter estimation in Bayesian high-resolution image

reconstruction with multisensors,” IEEE Transactions on Image Processing, vol. 12, no. 12, pp.1655-1667, 2003.

Article (CrossRef Link)

[48] D. Rajan, S. Chaudhuri, “Simultaneous estimation of super-resolution scene and depth map from low resolution

defocused observations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1102-

1117, 2003. Article (CrossRef Link)


[49] H. He, L. P. Kondi, “Resolution enhancement of video sequences with adaptively weighted low-resolution images

and simultaneous estimation of the regularization parameter,” in Proc. of the IEEE international conference on

acoustics, speech, and signal processing, pp. 213-216, 2004.

[50] H. He, L. P. Kondi, “A regularization framework for joint blur estimation and super-resolution of video sequences,”

in Proc. of the IEEE international conference on image processing, pp. 329-332, 2005.

[51] H. He, L. P. Kondi, “An image super-resolution algorithm for different error levels per frame,” IEEE Transactions

on Image Processing, vol. 15, no. 3, pp. 592-603, 2006. Article (CrossRef Link)

[52] M. Vega, R. Molina, A. K. Katsaggelos, “A Bayesian super-resolution approach to demosaicing of blurred image,”

EURASIP Journal on Advances in Signal Processing, vol. 2006, article ID 25072, pp. 1-12, 2006.

[53] M. Elad, A. Feuer, “Restoration of a single superresolution image from several blurred, noisy, and undersampled

measured images,” IEEE Transactions on Image Processing, vol. 6, no. 12, pp. 1646-1658, 1997. Article (CrossRef

Link)

[54] M. Elad, A. Feuer, “Superresolution restoration of an image sequence: adaptive filtering approach,” IEEE

Transactions on Image Processing, vol. 8, no. 3, pp. 387-395, 1999. Article (CrossRef Link)

[55] M. Elad, A. Feuer, “Super-resolution reconstruction of image sequences,” IEEE Transactions on Pattern Analysis

and Machine Intelligence, vol. 21, no. 9, pp. 817-834, 1999. Article (CrossRef Link)

[56] M. Elad, Y. Hel-Or, “A fast super-resolution reconstruction algorithm for pure translational motion and common

space-invariant blur,” IEEE Transactions on Image Processing, vol. 10, no. 8, pp. 1187-1193, 2001. Article (CrossRef

Link)

[57] N. Nguyen, P. Milanfar, G. Golub, “A computationally efficient superresolution image reconstruction algorithm,”

IEEE Transactions on Image Processing, vol. 10, no. 4, pp. 573-583, 2001. Article (CrossRef Link)

[58] M. Elad, “On the origin of the bilateral filter and ways to improve it,” IEEE Transactions on Image Processing, vol.

11, no. 10, pp. 1141-1151, 2002. Article (CrossRef Link)

[59] S. Farsiu, M. D. Robinson, M. Elad, P. Milanfar, “Advances and challenges in super-resolution,” International

Journal of Imaging Systems and Technology, vol. 14, no.2, pp. 47-57, 2004. Article (CrossRef Link)

[60] S. Farsiu, M. D. Robinson, M. Elad, P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Transactions

on Image Processing, vol. 13, no. 10, pp. 1327-1344, 2004. Article (CrossRef Link)

[61] S. Farsiu, M. Elad, P. Milanfar, “Multiframe demosaicing and super resolution of color images,” IEEE Transactions

on Image Processing, vol. 15, no. 1, pp. 141-159, 2006. Article (CrossRef Link)

[62] S. Farsiu, M. Elad, P. Milanfar, “Video-to-video dynamic super resolution for grayscale and color sequences,”

EURASIP Journal on Advances in Signal Processing, vol. 2006, article ID 61859, pp. 1-15, 2006.

[63] V. Patanavijit, S. Jitapunkul, “An iterative super-resolution reconstruction of image sequences using a Bayesian

approach and affine block-based registration,” in Proc. of the fourteenth EUSIPCO European signal processing

conference, pp. 1-5, 2006.

[64] G. Rochefort, F. Champagnat, G. Le Besnerais, J. F. Giovannelli, “An improved observation model for super-

resolution under affine motion,” IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3325-3337, 2006.

Article (CrossRef Link)

[65] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, D. H. Salesin, “Image analogies,” in Proc. of the SIGGRAPH

28th annual conference on computer graphics and interactive techniques, pp. 327-340, 2001. Article (CrossRef Link)

[66] H. Chang, D. Y. Yeung, Y. Xiong, “Super-resolution through neighbor embedding,” in Proc. of the IEEE

international conference on computer vision and pattern recognition, pp. 275-282, 2004.

[67] W. T. Freeman, E. C. Pasztor, O. T. Carmichael, “Learning low level vision,” International Journal of Computer

Vision, vol. 40, no. 1, pp. 25-47, 2000. Article (CrossRef Link)

[68] L. C. Pickup, S. J. Roberts, A. Zisserman, “A sampled texture prior for image super-resolution,” Advances in neural

information processing systems, pp. 1587-1594, 2003.

[69] D. Datsenko, M. Elad, “Example-based single document image super-resolution: a global MAP approach with

outlier rejection,” Multidimensional Systems and Signal Processing, vol. 18, no. 2-3, pp. 103-121, 2007. Article

(CrossRef Link)

[70] J. Wang, S. Zhu, Y. Gong, “Resolution enhancement based on learning the sparse association of image patches,”

Pattern Recognition Letters, vol. 31, no.1, pp. 1-10, 2010. Article (CrossRef Link)

[71] K. I. Kim, Y. Kwon, “Single-image super-resolution using sparse regression and natural image prior,” IEEE

Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1127-1133, 2010. Article (CrossRef

Link)

[72] J. Yang, J. Wright, T. Huang, Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on

Image Processing, vol. 19, no. 11, pp. 2861-2873, 2010. Article (CrossRef Link)


[73] R. R. Schultz, M. Li, R. L. Stevenson, “Subpixel motion estimation for super-resolution image sequence

enhancement,” Journal of Visual Communication and Image Representation, vol. 9, no. 1, pp. 38-50, 1998. Article

(CrossRef Link)

[74] M. Protter, M. Elad, “Super resolution with probabilistic motion estimation,” IEEE Transactions on Image

Processing, vol. 18, no. 8, pp. 1899-1904, 2009. Article (CrossRef Link)

[75] D. Barreto, L. D. Alvarez, J. Abad, “Motion estimation techniques in super-resolution image reconstruction: a

performance evaluation,” Proceeding of the Virtual Observatory: Plate Content Digitalization, Archive Mining and

Image Sequence Processing, pp. 254-268, 2005.

[76] G. Callico, S. Lopez, O. Sosa, J. F. Lopez, R. Sarmiento, “Analysis of fast block matching motion estimation

algorithms for video super-resolution systems,” IEEE Transactions on Consumer Electronics, vol. 54, no. 3, pp.

1430-1438, 2008. Article (CrossRef Link)

[77] S. Baker, T. Kanade, “Super resolution optical flow,” Technical report, Carnegie Mellon University, Pittsburgh,

1999.

[78] W. Zhao, H. Sawhney, “Is super-resolution with optical flow feasible?,” in Proc. of the European conference on

computer vision, pp. 599-613, 2002.

[79] F. Lin, C. Fookes, V. Chandran, S. Sridharan, “Investigation into optical flow super-resolution for surveillance

applications,” in Proc. of the APRS workshop on digital image computing, pp. 73-78, 2005.

[80] H. V. Le, G. Seetharaman, “A super-resolution imaging method based on dense subpixel-accurate motion fields,”

Journal of VLSI Signal Processing, vol. 42, no. 1, pp. 79-89, 2006. Article (CrossRef Link)

[81] R. Fransens, C. Strecha, L. V. Gool, “Optical flow based superresolution: a probabilistic approach,” Computer

Vision and Image Understanding, vol. 106, no. 1, pp. 106-115, 2007. Article (CrossRef Link)

[82] B. Narayanan, R. C. Hardie, K. E. Barner, M. Shao, “A computationally efficient super-resolution algorithm for

video processing using partition filters,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17,

no. 5, pp. 621-634, 2007. Article (CrossRef Link)

[83] M. K. Ng, H. F. Shen, L. P. Zhang, E. Lam, “A total variation regularization based super-resolution reconstruction

algorithm for digital video,” EURASIP Journal on Advances in Signal Processing, vol. 2007, article ID 74585, pp.

1-16, 2007.

[84] V. Patanavijit, S. Jitapunkul, “A Lorentzian stochastic estimation for a robust iterative multiframe super-resolution

reconstruction with Lorentzian-Tikhonov regularization,” EURASIP Journal on Advances in Signal Processing, vol.

2007, article ID 34821, pp. 1-21, 2007.

[85] S. Borman, R. L. Stevenson, “Simultaneous multi-frame MAP super-resolution video enhancement using spatio-

temporal priors,” in Proc. of the IEEE international conference on image processing, pp. 469-473, 1999.

[86] M. V. W. Zibetti, J. Mayer, “A robust and computationally efficient simultaneous super-resolution scheme for image

sequences,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 10, pp. 1288-1300, 2007.

Article (CrossRef Link)

[87] L. D. Alvarez, R. Molina, A. K. Katsaggelos, “Multi-channel reconstruction of video sequences from low-resolution

and compressed observations,” in Proc. of the eighth Iberoamerican congress on pattern recognition, pp. 46-53,

2003.

[88] G. H. Costa, J. C. M. Bermudez, “Statistical analysis of the LMS algorithm applied to super-resolution image

reconstruction,” IEEE Transactions on Signal Processing, vol. 55, no. 5, pp. 2084-2095, 2007. Article (CrossRef

Link)

[89] J. Tian, K. K. Ma, “A state-space super-resolution approach for video reconstruction,” Signal, Image and Video

Processing, vol. 3, no. 3, pp. 217-240, 2009. Article (CrossRef Link)

[90] C. M. Bishop, A. Blake, B. Marthi, “Super-resolution enhancement of video,” in Proc. of the international

conference on artificial intelligence and statistics, pp. 1-8, 2003.

[91] G. Dedeoglu, T. Kanade, J. August, “High-zoom video hallucination by exploiting spatio-temporal regularities,” in

Proc. of the IEEE international conference on computer vision and pattern recognition, pp. 151–158, 2004.

[92] D. Kong, M. Han, W. Xu, H. Tao, Y. Gong, “Video superresolution with scene-specific priors,” in Proc. of the

British machine vision association conference, p. 549, 2006.

[93] E. Roullot, A. Herment, M. Bloch, M. Nikolova, E. Mousseaux, “Regularized reconstruction of 3D high-resolution

magnetic resonance images from acquisitions of anisotropically degraded resolutions,” in Proc. of fifteenth

international conference on pattern recognition, pp. 346-349, 2000.

[94] A. Herment, E. Roullot, I. Bloch, O. Jolivet, A. D. Cesare, F. Frouin, J. Bittoun, E. Mousseaux, “Local

reconstruction of stenosed sections of artery using multiple MRA acquisitions,” Magnetic Resonance in Medicine,

vol. 49, no.4, pp. 731-742, 2003. Article (CrossRef Link)


[95] S. Peled, H. Yeshurun, “Superresolution in MRI: Application to human white matter fiber tract visualization by

diffusion tensor imaging,” Magnetic Resonance in Medicine, vol. 45, no.1, pp. 29-35, 2001. Article (CrossRef Link)

[96] W. Hacke, S. Warach, “Diffusion-weighted MRI as an evolving standard of care in acute stroke,” Neurology, vol. 54,

no. 8, pp. 1548-1549, 2000. Article (CrossRef Link)

[97] K. Scheffler, “Superresolution in MRI?,” Magnetic Resonance in Medicine, vol. 48, no. 2, p. 408, 2002. Article

(CrossRef Link)

[98] H. Greenspan, G. Oz, N. Kiryati, S. Peled, “MRI inter-slice reconstruction using super-resolution,” Magnetic

Resonance Imaging, vol. 20, no.5, pp. 437-446, 2002. Article (CrossRef Link)

[99] R. Peeters, P. Kornprobst, M. Nikolova, S. Sunaert, T. Vieville, G. Malandain, R. Deriche, O. Faugeras, M. N. P.

Hecke, “The use of superresolution techniques to reduce slice thickness in functional MRI,” International Journal of

Imaging Systems and Technology, vol. 14, no. 3, pp. 131-138, 2004. Article (CrossRef Link)

[100] E. Carmi, S. Liu, N. Alon, A. Fiat, D. Fiat, “Resolution enhancement in MRI,” Magnetic Resonance Imaging, vol.

24, no. 2, pp. 133-154, 2006. Article (CrossRef Link)

[101] Y. Bai, X. Han, J. L. Prince, “Super-resolution reconstruction of MR brain images,” in Proc. of thirty-eighth annual

conference on information sciences and systems, pp. 1358-1363, 2004.

[102] R. C. Hardie, K. J. Barnard, E. E. Armstrong, “Joint MAP registration and high-resolution image estimation using a

sequence of undersampled images,” IEEE Transactions on Image Processing, vol. 6, no. 12, pp. 1621-1633, 1997.

Article (CrossRef Link)

[103] S. U. Rahman, S. Wesarg, “Combining short-axis and long-axis cardiac MR images by applying a super-resolution

reconstruction algorithm,” in Proc. of SPIE conference in Medical Imaging: Image Processing, San Diego, pp. 1-12,

2010.

[104] S. U. Rahman, S. Wesarg, “Upsampling of cardiac MR images: Comparison of averaging and super-resolution for

the combination of multiple views,” in Proc. of tenth IEEE international conference on information technology and

applications in biomedicine, pp. 1-4, 2010. Article (CrossRef Link)

[105] E. Hecht, “Optics,” Pearson, Harlow, 2002.

[106] T. Komatsu, K. Aizawa, T. Igarashi, T. Saito, “Signal-processing based method for acquiring very high resolution

image with multiple cameras and its theoretical analysis,” IEE Proceedings I: Communications, Speech and Vision, vol.

140, no. 1, pp. 19-24, 1993.

[107] B. Hunt, “Super-resolution of imagery: understanding the basis for recovery of spatial frequencies beyond the

diffraction limit,” in Proc. of IEEE conference on Information, Decision and Control, pp. 243-248, 1999.

[108] S. Baker, T. Kanade, “Limits on super-resolution and how to break them,” IEEE Transactions on Pattern Analysis

and Machine Intelligence, vol. 24, no. 9, pp. 1167-1183, 2002. Article (CrossRef Link)

[109] M. Ben-Ezra, A. Zomet, S. Nayar, “Jitter camera: High resolution video from a low resolution detector,” in Proc. of

IEEE computer society conference on computer vision and pattern recognition, pp. 135-142, 2004.

[110] B. Gunturk, M. Gevrekci, “High-resolution image reconstruction from multiple differently exposed images,” IEEE

Signal Processing Letters, vol. 13, no. 4, pp. 197-200, 2006. Article (CrossRef Link)

[111] R. Costantini, S. Susstrunk, “Virtual sensor design,” in Proc. of SPIE electronic imaging 2004: Sensors and camera

systems for scientific, industrial, and digital photography applications, San Jose, pp. 408-419, 2004.

[112] A. Foi, S. Alenius, M. Trimeche, V. Katkovnik, K. Egiazarian, “A spatially adaptive poissonian image deblurring,”

in Proc. of IEEE international conference on image processing, pp. 925-928, 2005.

[113] A. Foi, S. Alenius, M. Trimeche, V. Katkovnik, “Adaptive-size block transforms for poissonian image deblurring,”

in Proc. of international TICSP workshop on spectral methods and multirate signal processing, pp. 1-8, 2006.

[114] M. Tico, M. Trimeche, M. Vehvilainen, “Motion blur identification based on differently exposed images,” in Proc.

of IEEE international conference on image processing, pp. 2021-2024, 2006.

Kathiravan S received his B.E. degree in Electronics and Communication Engineering and M.E. degree in Communication Systems Engineering from Anna University, Chennai. He is currently working toward his PhD degree in Electronics and Communication Engineering at PSG College of Technology, Anna University, Chennai, Tamil Nadu, India. His research interests include super-resolution, image de-noising, image segmentation, and image enhancement.


Dr. J. Kanakaraj received his B.E. and M.E. degrees in Electrical and Electronics Engineering from PSG College of Technology, Coimbatore, and his PhD in Electrical and Electronics Engineering from the same institution. He is currently an Associate Professor at PSG College of Technology, Coimbatore. His research interests include super-resolution, image de-noising, image segmentation, image enhancement, control systems, instrumentation, and electrical machines.

Copyright © 2014 KAIS