Advanced Digital Signal Processing – Term Paper (Tutorial)
Image Deblurring
Pei-Yu Chao 趙珮妤 D00945005
Abstract
Digital images are subject to blurring due to many hardware limitations, such as atmospheric disturbance, device noise, and poor focus quality. In order to reveal the detailed information carried in a digital image, image deblurring or restoration is necessary. Image deblurring has wide applications, from consumer photography, e.g., removing motion blur due to camera shake, to radar imaging and tomography, e.g., removing the effect of the imaging system response. This tutorial aims to provide basic knowledge of the image degradation and restoration process. Specifically, the models and techniques used in both cases will be presented, followed by MATLAB simulations for justification.
CHAPTER 1: Introduction
Digital images are electronic snapshots of a scene, typically composed of picture elements, known as pixels, arranged in a grid. Each pixel holds a quantized value that represents the tone at a specific point.
Theoretically, the formation of digital images can be described by the pinhole camera model (Fig. 1), where the camera aperture is treated as a point and only the light from the scene that passes through the aperture is captured on the image plane. In reality, however, the camera aperture has a finite size and is often paired with a lens to focus light from the scene. Hence, the digital image is only an approximation of the real scene.
Fig. 1 Illustration of pinhole camera model
The deviation of the digital image from the actual depiction of the scene is described as distortion, which may result from both hardware and software processing limitations. The software processing limitation is mainly due to the compression technique used, i.e., lossy compression, during image storage and transmission, whereas the hardware limitations have many causes, e.g.:
- Focus quality of the camera lens and the effect of finite camera aperture size.
- Scattered and reflected light from the environment, and device noise from the CCD/CMOS sensor and circuitry, which cause random variation of the pixel intensity on the image sensor.
- Digital quantization noise.
These hardware limitations contribute to the loss of sharpness in the image, which decreases the amount of detailed information, i.e., produces a blurry image.
In this tutorial, image models will be introduced, followed by a discussion of some common image processing techniques used to deblur or restore images. Lastly, MATLAB simulations will be used to demonstrate the effect of the techniques presented.
CHAPTER 2: Image Degradation Model
For a linear shift-invariant system, the observed/distorted image g(x, y) can be modeled as the convolution of the object function f(x, y), which is the actual object in the scene, with the image degradation function h(x, y), which is also commonly known as the point spread function:

g(x, y) = h(x, y) * f(x, y) + n(x, y),

where n(x, y) is an additive noise function that describes the random variation of the pixel intensity.

According to the convolution theorem, a convolution of two spatial functions can be expressed as the product of their respective Fourier transforms in the frequency domain. Thus, the image degradation model can be written as

G(u, v) = H(u, v) F(u, v) + N(u, v).
In the simplest image degradation model, the degradation function is modeled as a low-pass filter, which results in a blurring effect. Fig. 2 shows the block diagram of the image degradation and restoration process. Fundamentally, the image restoration process involves reversing the distortion effects.
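The degradation model above can be sketched numerically. The following NumPy fragment is an illustrative sketch (the paper's simulations are in MATLAB, and the image, kernel, and noise level here are arbitrary choices) that forms g = h * f + n by multiplying Fourier transforms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Object function f(x, y): a synthetic 64x64 "scene" (hypothetical test image).
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0

# Degradation function h(x, y): a 5x5 uniform blur kernel,
# zero-padded to the image size so the convolution theorem applies.
h = np.zeros_like(f)
h[:5, :5] = 1.0 / 25.0

# Convolution theorem: G(u, v) = H(u, v) F(u, v); then add noise n(x, y).
G = np.fft.fft2(h) * np.fft.fft2(f)
g = np.real(np.fft.ifft2(G)) + rng.normal(0.0, 0.01, f.shape)  # observed image
```

The circular convolution implied by the FFT is a common simplification; a real camera would require boundary handling.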
Fig. 2 Block diagram of image degradation and restoration process
CHAPTER 3: Image Restoration Model
As mentioned previously, image restoration can be achieved by inverting the image degradation process, i.e.,

F̂(u, v) = H⁻¹(u, v) G(u, v),

where H⁻¹(u, v) = 1/H(u, v) is the inverse filter, and F̂(u, v) is the Fourier transform of the recovered image f̂(x, y).

Although the concept is relatively simple, the actual implementation is difficult to achieve, as it requires prior knowledge or identification of the unknown degradation function H(u, v) and the unknown noise source n(x, y).
In the following sections, common noise models and methods of estimating the degradation function are presented.
3.1. Noise Model
Since the main sources of noise in digital images result from atmospheric disturbance and image sensor circuitry, the following assumptions can be made:
- The noise model is spatially invariant, i.e., independent of spatial location.
- The noise model is uncorrelated with the object function.
Some commonly used noise models can be categorized into two groups: additive noise and multiplicative noise.
A. Additive noise models
In this case, the noise is superimposed upon the image, which results in variation of the image signal. Some common noise distributions are:
- Gaussian noise distribution
- Rayleigh noise distribution
- Gamma(a, b) noise distribution
- Exponential noise distribution
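The four additive models listed above can be sampled directly with NumPy's random generator. This is a sketch with arbitrary parameter values, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
sz = (512, 512)  # hypothetical image size

# One sample field per additive noise model listed above;
# all distribution parameters are arbitrary illustration values.
noise = {
    "gaussian":    rng.normal(loc=0.0, scale=10.0, size=sz),
    "rayleigh":    rng.rayleigh(scale=10.0, size=sz),
    "gamma":       rng.gamma(2.0, 5.0, size=sz),   # Gamma(a=2, b=5)
    "exponential": rng.exponential(scale=10.0, size=sz),
}

# An additive model simply superimposes the noise field on the image.
img = np.full(sz, 128.0)
noisy = {name: img + n for name, n in noise.items()}
```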
B. Multiplicative noise models
In this case, the noise is signal dependent and is multiplied with the image. Two commonly discussed multiplicative noise models are:
- Salt-and-pepper noise
- Speckle noise, g(x, y) = f(x, y) + n(x, y) f(x, y), where the components of n(x, y) are independent, zero-mean Gaussian random variables.
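Both multiplicative models can be simulated in a few lines. This NumPy sketch uses arbitrary choices for the image, corruption fraction, and noise variance:

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.full((256, 256), 0.5)  # hypothetical constant test image in [0, 1]

# Salt-and-pepper: a fraction p of pixels is forced to 0 (pepper) or 1 (salt).
p = 0.05
u = rng.random(f.shape)
g_sp = f.copy()
g_sp[u < p / 2] = 0.0                   # pepper
g_sp[(u >= p / 2) & (u < p)] = 1.0      # salt

# Speckle: signal-dependent noise, g = f + n * f, with zero-mean Gaussian n.
n = rng.normal(0.0, 0.1, f.shape)
g_speckle = f + n * f
```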
Fig. 3 shows the effect of additive and multiplicative noise on digital images.
Fig. 3 (a) Original image with (b) ‘salt and pepper’ noise and (c) Gaussian noise [3].
If only noise is present in the digital image, i.e., without considering the degradation function, the following techniques can be used to reduce the noise effect:
Mean filter
For every pixel in the image, the pixel value is replaced by the weighted mean of its neighboring pixels. This results in a smoothing effect in the image.
Median filter
For every pixel in the image, the pixel value is replaced by the statistical median of its neighboring pixels. Although the median filter also provides a smoothing effect, it is better at preserving detailed image information, e.g., edges.
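The two filters can be compared on an impulse-corrupted image. This pure-NumPy sketch (edge pixels are skipped for brevity, and the 3-by-3 window is an illustrative choice) shows why the median preserves structure better:

```python
import numpy as np

def neighborhoods3x3(img):
    """Stack the 3x3 neighborhood of every interior pixel."""
    r, c = img.shape
    return np.stack([img[i:i + r - 2, j:j + c - 2]
                     for i in range(3) for j in range(3)])

def mean_filter(img):
    return neighborhoods3x3(img).mean(axis=0)

def median_filter(img):
    return np.median(neighborhoods3x3(img), axis=0)

# A flat image corrupted by a single "salt" impulse.
img = np.full((9, 9), 0.5)
img[4, 4] = 1.0

smoothed = mean_filter(img)    # impulse smeared over its neighbors
cleaned = median_filter(img)   # impulse rejected outright
```

The impulse survives (spread out) in the mean-filtered result but disappears entirely from the median-filtered one, illustrating the edge-preservation argument above.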
Homomorphic filter
In the case of multiplicative noise, one cannot simply apply a smoothing filter to the observed noisy image g(x, y) = f(x, y) n(x, y), as the Fourier transform of a product of two functions is not separable. To overcome this issue, a logarithmic representation of the image model is used instead, i.e.,

ln g(x, y) = ln f(x, y) + ln n(x, y).

The Fourier transform of the logarithmic representation is

F{ln g(x, y)} = F{ln f(x, y)} + F{ln n(x, y)}.

Since the noise model becomes additive through the logarithmic operation, a smoothing filter can then be applied to remove the noise effect. Fig. 4 shows the homomorphic filtering process. Since multiplicative noise tends to have slow spatial variation compared to the image of the object, one can design a filter that reduces intensity variation across the image while highlighting details. A mathematical example of a homomorphic filter is

H(u, v) = γ_L + (γ_H − γ_L)[1 − exp(−D²(u, v) / (2 D0²))],  with γ_L < 1 < γ_H,

where D(u, v) is the distance from the origin of the frequency plane; its graphical representation is shown in Fig. 5.
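A minimal homomorphic filter can be sketched as log → FFT → high-emphasis transfer function → inverse FFT → exp. This is a NumPy illustration; the gain and cutoff values are arbitrary assumptions, not taken from the paper:

```python
import numpy as np

def homomorphic(img, gamma_low=0.5, gamma_high=1.5, cutoff=10.0):
    """Log-domain frequency filtering: attenuate slow multiplicative
    variation (low frequencies) and emphasize detail (high frequencies).
    Gains and cutoff are arbitrary illustration values."""
    rows, cols = img.shape
    z = np.log(img + 1e-6)                  # multiplicative model -> additive
    Z = np.fft.fftshift(np.fft.fft2(z))
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2  # squared distance from DC
    # Gaussian-shaped high-emphasis transfer function (cf. Fig. 5).
    H = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))
    out = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    return np.exp(out)                      # back to the intensity domain

rng = np.random.default_rng(4)
img = np.ones((64, 64)) * np.exp(rng.normal(0, 0.1, (64, 64)))  # speckle-like input
flat = homomorphic(img)
```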
Fig. 4 Block diagram illustrate homomorphic filtering process
Fig. 5 Graphical representation of the homomorphic filter
3.2. Degradation / Point-Spread Function Estimation
The usual assumptions on the degradation function are:
- Linearity, and
- Position invariance, i.e., the response at any point in the image depends only on the value of the input at that point, not on its position.
For a finite camera aperture size, the impulse response of the camera can be described by the Airy function, shown in Fig. 6. One can see that when a point object is imaged, the observed image is affected by the point spread function of the imaging system. Fig. 7 shows a degraded image caused by the point spread function.
Fig. 6 Point spread function (Airy disk pattern) of a point source. The left figure represents the perceived light distribution when viewing an Airy disk (shown on the right figure).
Fig. 7 (Left) An impulse of light. (Right) Degraded image of the light impulse.
Similar to the idea of the impulse response in signals and systems, the imaging system response can be described by the point spread function, which is obtained by passing a point object δ(x, y) through the system H:

h(x, y) = H{δ(x, y)}.

One must realize that imaging a point object is impractical; hence, instead of finding the PSF directly, the edge spread function (ESF) is computed. A normalized edge is defined as a step function,

ESF(x) = 0 for x < 0, and ESF(x) = 1 for x ≥ 0.

By differentiating the edge spread function, a line spread function (LSF) along the x direction is obtained. Using the line spread functions acquired in both the axial and azimuth directions, a two-dimensional PSF can then be obtained, i.e.,

PSF(x, y) ≈ LSF(x) · LSF(y).

In the process of estimating the PSF, the line spread functions are computed by evaluating the gradient of an edge in the image along the axial and azimuth directions. Fig. 8 shows the process of estimating the point spread function of the imaging system.
Fig. 8 Process of estimating 2-D point spread function
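The ESF → LSF → PSF chain of Fig. 8 can be followed numerically. In this sketch the edge is simulated with a known Gaussian blur, and the same line profile is reused for both directions (an assumption made for illustration):

```python
import numpy as np

x = np.arange(-16, 17, dtype=float)
sigma = 2.0  # hypothetical blur width

# Simulated edge spread function: integral of a Gaussian line profile.
line = np.exp(-x**2 / (2 * sigma**2))
esf = np.cumsum(line)
esf /= esf[-1]          # normalized edge, rising from 0 to 1

# Line spread function = derivative of the ESF (numerical gradient).
lsf = np.gradient(esf)

# 2-D PSF as the separable outer product of the axial and azimuth LSFs
# (here the same profile is reused for both directions).
psf = np.outer(lsf, lsf)
psf /= psf.sum()        # normalize to unit energy
```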
CHAPTER 4: Image Deblurring / Restoration Operation
In this chapter, the implementation of image restoration will be presented.
4.1. Restoration – Inverse Filter
As shown in the previous chapter, image restoration can be achieved by applying an inverse filter to the observed blurry image, i.e.,

F̂(u, v) = G(u, v) / H(u, v).

Theoretically, the inverse filter is the inverse of the degradation function. However, if this were implemented directly, the inverse filter would amplify high-frequency noise. This is because most degradation functions have a low-pass nature and hence relatively low power in the high-frequency spectrum; the corresponding inverse filter has a high-pass nature, which causes the restored image to carry magnified high-frequency noise.

Therefore, a practical implementation of the inverse filter needs to consider the nature of the degradation function and the blurred image, e.g., by inverting H(u, v) only where its magnitude is sufficiently large:

F̂(u, v) = G(u, v) / H(u, v) where |H(u, v)| ≥ T(α), and F̂(u, v) = 0 otherwise,

where α is a threshold parameter used to mitigate the effect of zeros in the degradation function H(u, v).
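A thresholded (pseudo-)inverse filter can be sketched as follows. The rule "invert only where |H| ≥ 1/α, zero elsewhere" is one common choice and an assumption here, not necessarily the exact rule used in the paper's simulation:

```python
import numpy as np

def inverse_filter(G, H, alpha):
    """Pseudo-inverse restoration: invert H(u, v) only where its magnitude
    is safely away from zero; remaining frequencies are set to zero."""
    F_hat = np.zeros_like(G)
    ok = np.abs(H) >= 1.0 / alpha       # frequencies where inversion is stable
    F_hat[ok] = G[ok] / H[ok]
    return F_hat

# Noiseless demo: blur a square with a 4x4 averaging kernel, then restore.
f = np.zeros((32, 32))
f[12:20, 12:20] = 1.0
h = np.zeros_like(f)
h[:4, :4] = 1.0 / 16.0
H = np.fft.fft2(h)
G = H * np.fft.fft2(f)
f_hat = np.fft.ifft2(inverse_filter(G, H, alpha=100.0)).real
```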
4.2. Restoration – Wiener Filter
The effect of the noise distribution was not considered in the inverse filter operation. Let the additive noise power spectrum be Sn(u, v) and the image power spectrum be Sf(u, v). Generally, Sn(u, v) has a dominant effect over Sf(u, v) in the high-frequency region, as Sf(u, v) tends to be concentrated in the low-frequency spectrum.

In the Fourier domain, the Wiener filter is expressed as

W(u, v) = H*(u, v) / (|H(u, v)|² + Sn(u, v)/Sf(u, v)),

where Sn(u, v)/Sf(u, v) is the noise-to-signal ratio. One can see that in the high-frequency region, the noise-to-signal ratio becomes relatively large, i.e., |H(u, v)|² + Sn(u, v)/Sf(u, v) ≈ Sn(u, v)/Sf(u, v). Consequently, the high-frequency response of the restoration filter is suppressed. Fig. 9 shows the power spectrum characteristics of the inverse filter and the Wiener filter.

Fig. 9 Comparison between inverse filter and Wiener filter

It is worth noting that if no noise is present, i.e., Sn(u, v) = 0, the Wiener filter reduces to the inverse filter:

W(u, v) = H*(u, v) / |H(u, v)|² = 1 / H(u, v).
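The Wiener formula above translates directly into code. In this NumPy sketch the noise-to-signal ratio is approximated by a single constant (a common simplification and an assumption here), rather than the full spectra Sn/Sf:

```python
import numpy as np

def wiener_filter(G, H, nsr):
    """Frequency-domain Wiener deconvolution; `nsr` approximates the
    noise-to-signal ratio Sn/Sf by a single constant (an assumption)."""
    return np.conj(H) / (np.abs(H) ** 2 + nsr) * G

rng = np.random.default_rng(5)
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0                  # hypothetical scene
h = np.zeros_like(f)
h[:5, :5] = 1.0 / 25.0                 # 5x5 averaging blur
H = np.fft.fft2(h)
g = np.fft.ifft2(H * np.fft.fft2(f)).real + rng.normal(0, 0.002, f.shape)
G = np.fft.fft2(g)

f_hat = np.fft.ifft2(wiener_filter(G, H, nsr=1e-3)).real  # restored image
```

With nsr = 0 the expression reduces to 1/H, matching the noiseless limit above.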
4.3. Restoration – Blind Deconvolution
Both the inverse and Wiener filters require an accurate estimate of the degradation function and knowledge of the noise model for image restoration. In practice, however, these are often hard to obtain.

Blind deconvolution is a method in which an accurate estimate of the degradation function is not required; an initial guess of the degradation function is sufficient. Fig. 10 shows the iterative process of blind deconvolution, which is usually carried out in the frequency domain. The blind deconvolution procedure not only attempts to restore the blurred image, but also estimates the degradation function through maximum a posteriori (MAP) estimation.
Fig. 10 Block diagram of blind deconvolution process
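The iterative loop of Fig. 10 can be sketched, in much-simplified form, as alternating frequency-domain estimates of the image and the PSF under simple constraints (non-negativity, finite PSF support). This is an illustrative Ayers–Dainty-style sketch, not the MAP formulation referenced above:

```python
import numpy as np

def blind_deconv(g, psf_size, iters=20, eps=1e-3):
    """Alternate between estimating the image given the current PSF and
    estimating the PSF given the current image (regularized divisions),
    enforcing non-negativity and a finite PSF support each iteration."""
    G = np.fft.fft2(g)
    h = np.zeros_like(g)
    h[:psf_size, :psf_size] = 1.0 / psf_size**2   # initial PSF guess
    for _ in range(iters):
        H = np.fft.fft2(h)
        f = np.fft.ifft2(G * np.conj(H) / (np.abs(H)**2 + eps)).real
        f = np.clip(f, 0.0, None)                 # image non-negativity
        F = np.fft.fft2(f)
        h = np.fft.ifft2(G * np.conj(F) / (np.abs(F)**2 + eps)).real
        h = np.clip(h, 0.0, None)
        h[psf_size:, :] = 0.0                     # finite PSF support
        h[:, psf_size:] = 0.0
        s = h.sum()
        h /= s if s > 0 else 1.0                  # unit-energy PSF
    return f, h

# Demo on a synthetically blurred square (true PSF unknown to the routine).
f_true = np.zeros((32, 32))
f_true[10:22, 10:22] = 1.0
h_true = np.zeros_like(f_true)
h_true[:3, :3] = 1.0 / 9.0
g = np.fft.ifft2(np.fft.fft2(h_true) * np.fft.fft2(f_true)).real
f_est, h_est = blind_deconv(g, psf_size=3)
```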
CHAPTER 5: Simulation
In this chapter, MATLAB simulations of inverse and Wiener filtering are performed and the effect of image deblurring is explored. Fig. 11 shows the original clear image used in the simulation.
Fig. 11 Original Image
A 2-dimensional, 10-by-10-point low-pass filter is used as the degradation model. Fig. 12 shows the blurred image where only the degradation model is applied.
Fig. 12 Blurred image
Both the inverse filter and the Wiener filter are applied to Fig. 12. Threshold values of 1, 5, and 10 are used to evaluate the effect of threshold selection on the restoration quality.
Fig. 13 Restored images, using inverse filter with α = 1, 5 and 10
Fig. 14 Restored images, using Wiener filter with α = 1.
One can see that without noise, the inverse filter and the Wiener filter have the same restoration performance. Furthermore, a high α value suppresses the high-frequency response of the restoration filter, which affects how much image detail is restored.
Fig. 15 shows the case where Gaussian noise (σ = 15) is added to the blurred image.
Fig. 15 Blurred image with additive Gaussian noise
The restored images using the inverse and Wiener filters are shown in Fig. 16 and Fig. 17. One can see that the Wiener filter is more robust to noise while preserving high-frequency details.
Fig. 16 Restored images, using inverse filter with α = 1, 5 and 10
Fig. 17 Restored images, using Wiener filter with α = 1
CHAPTER 6: Conclusion
Image deblurring or restoration has wide applications, from consumer photography, e.g., removing motion blur due to camera shake, to radar imaging and tomography, e.g., removing the effect of the imaging system response. This tutorial has presented the image degradation and restoration models. The related sub-models and model estimation methods have also been introduced. Although the concept of image restoration is relatively simple, the actual implementation is somewhat difficult, as prior knowledge of the degradation function and the noise model is often not available. Through simulation, one realizes that even with prior knowledge of the degradation function and the noise model, the quality of image restoration varies, depending on the choice of the threshold value α.
References
[1] “Moving Theory into Practice: Digital Imaging Tutorial”, URL: http://www.library.cornell.edu/preservation/tutorial/contents.html
[2] “Image quality”, URL: http://en.wikipedia.org/wiki/Image_quality
[3] C. Solomon and T. Breckon, “Fundamentals of Digital Image Processing,” John Wiley & Sons, Ltd, 2011.
[4] V. Roth and P. Cattin, “Biomedical Image Analysis: Homomorphic Filtering and Applications for PET”, Lecture Notes, Universität Basel.
[5] L. Yang and J. Ren, “Remote sensing image restoration using estimated point spread function,” 2010 International Conference on Information, Networking and Automation (ICINA), IEEE, 2010.
[6] R. Jiřík and T. Taxt, “Two-Dimensional Blind Bayesian Deconvolution of Medical Ultrasound Images,” IEEE Transactions on UFFC, Vol. 55, No. 10, October 2008.