Project Report on Cloud Removal


Enhancement of cloud associated shadow areas in satellite images using wavelet image fusion

CHAPTER 1: INTRODUCTION

1.1 Background

A significant obstacle to extracting information from remote sensing imagery is the missing information caused by clouds and their shadows in the images. According to one study, the average percentage of cloud cover in the equatorial region is on the order of 75%. Reported climate statistics show that northwestern Europe, even during the least clouded months, still has cloud coverage of about 40%. Some radar satellites do not have cloud contamination problems because they operate in the microwave range of the electromagnetic spectrum, and microwave imagery is available from some of these satellites going back to 1991. But these kinds of images cannot replace the information provided by optical remote sensing data. The radiation emitted in the microwave range is very low, while in the visible range the maximum energy is emitted. Consequently, in order to obtain imagery in the microwave region and measure these weak signals, large areas must be imaged, which results in relatively poor spatial resolution. By contrast, images in the visible range have a high resolution.

Many conventional methods for removing clouds and their shadows from images are based on time series, but methods that use spatial information give significantly better estimates, and methods that combine both spatial and temporal information perform slightly better still. Two common methods for replacing clouded pixels and their associated shadows are the threshold method and the subtraction method.

A relatively new approach for dealing with cloud contamination uses the wavelet transform to obtain the image in the frequency domain. The wavelet transform cuts the image up into different frequency components, and it can be performed for multiple levels; multi-level wavelet decomposition produces images at different frequencies. This method of cloud and shadow detection is therefore based on thresholding the wavelet coefficients at the different levels of decomposition, and consequently at different frequencies. Based on the DN (digital number) values, a decision map is produced that marks the cloud and shadow pixels. If a pixel belongs to a cloud or shadow, the wavelet coefficients of the original noisy image are replaced by the wavelet coefficients from the clear image. The image must then be obtained again in the time domain, so the inverse two-dimensional discrete wavelet transform is performed.
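A minimal MATLAB sketch of this coefficient-replacement idea is given below, using a single decomposition level with dwt2/idwt2 from the Wavelet Toolbox. The file names, the single-band assumption and the fixed 0.9/0.1 decision thresholds are illustrative assumptions, not the exact procedure developed later in this report.

% Sketch: replace cloud/shadow wavelet coefficients of a noisy image with
% coefficients from a co-registered clear image (single band, one level).
noisy = im2double(imread('cloudy.tif'));    % cloud-contaminated scene (assumed file)
clr   = im2double(imread('clear.tif'));     % cloud-free scene of the same area

[cAn, cHn, cVn, cDn] = dwt2(noisy, 'db8');  % decompose both images
[cAc, cHc, cVc, cDc] = dwt2(clr,   'db8');

% Decision map from the approximation band (illustrative thresholds)
cloudMask  = cAn > 0.9 * max(cAn(:));       % very bright -> cloud
shadowMask = cAn < 0.1 * max(cAn(:));       % very dark   -> shadow
bad = cloudMask | shadowMask;

cAn(bad) = cAc(bad);                        % swap in coefficients from the clear image
cHn(bad) = cHc(bad);  cVn(bad) = cVc(bad);  cDn(bad) = cDc(bad);

restored = idwt2(cAn, cHn, cVn, cDn, 'db8');  % back to the spatial (time) domain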


1.2 Objectives

The main objective of this project is to explore the possibilities of wavelet analysis for solving the problem of missing information caused by clouds covering satellite images.

1.3 Methodology

The methodology of this project has two principal stages: pre-processing and processing.

1.3.1 Pre-processing

• ASTER image acquisition: a number of ASTER satellite images corresponding to the upper zone of the Mira watershed in northern Ecuador were obtained.

• Atmospheric corrections: it is necessary to apply atmospheric corrections. Striping, haze and sun-angle corrections are applied to the multi-temporal data set.

1.3.2 Processing

Apply wavelet decomposition to obtain the image in the frequency domain, using multi-level decomposition to obtain images at different frequencies.

Perform an image fusion procedure to fill in the missing information caused by clouds and their shadows.

Apply wavelet reconstruction to obtain the image in time domain.

1.4 Resources

• Data: ASTER images are used in this project. A multi-temporal data set of ASTER images at Level-1A is selected. The study area is located in northern Ecuador, a mountainous region of the Carchi and Imbabura provinces.

• Software: MATLAB software package version 7.1 with the Wavelet Toolbox module.


1.5 Digital Image Processing

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of digital computing. The digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels and pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images, including ultrasound, gamma rays, electron microscopy and computer-generated images. Thus DIP encompasses a wide and varied field of applications.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid- and high-level processes. Low-level processes involve primitive operations such as image pre-processing to reduce noise, contrast enhancement and image sharpening; a low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing of an image involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects; a mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images. High-level processing involves "making sense" of an ensemble of recognized objects, performing the cognitive functions normally associated with human vision. Based on the preceding comments, digital image processing encompasses processes whose inputs and outputs are images and, in addition, processes that extract attributes from images, up to and including the recognition of individual objects. Digital image processing consists of the following fundamental steps:

• Image acquisition

• Image enhancement

• Image restoration

• Pattern recognition

• Wavelets and multi-resolution processing

• Wavelet image fusion


CHAPTER 2: LITERATURE SURVEY

A significant obstacle to extracting information from remote sensing imagery is the missing information caused by clouds and their shadows in the images. According to one study, the average percentage of cloud cover in the equatorial region is on the order of 75%. Reported climate statistics show that northwestern Europe, even during the least clouded months, still has cloud coverage of about 40%. Some radar satellites do not have cloud contamination problems because they operate in the microwave range of the electromagnetic spectrum, and microwave imagery is available from some of these satellites going back to 1991. But these kinds of images cannot replace the information provided by optical remote sensing data. The radiation emitted in the microwave range is very low, while in the visible range the maximum energy is emitted. Consequently, in order to obtain imagery in the microwave region and measure these weak signals, large areas must be imaged, which results in relatively poor spatial resolution. By contrast, images in the visible range have a high resolution.

2.1 ASTER Images

ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) is an advanced multispectral imager that was launched on board NASA's Terra spacecraft in December 1999.

The objective of the ASTER project is a better understanding of local and regional phenomena on the Earth's surface and in its atmosphere. More specific areas of scientific research include:

• Geology and soils

• Volcano monitoring

• Carbon cycle and marine ecosystem

• Aerosol and cloud studies

• Hydrology

• Vegetation and ecosystem dynamics, and

• Land surface climatology

ASTER's instruments cover a wide spectral region, from the visible to the thermal infrared, with 14 spectral bands, each with high spatial, spectral and radiometric resolution. The spatial resolution varies with wavelength: 15 m in the visible and near-infrared (VNIR), 30 m in the short-wave infrared (SWIR), and 90 m in the thermal infrared (TIR).


VNIR has three bands and a system with two telescopes. SWIR has six bands; and TIR has five bands. Each subsystem operates in a different spectral region with its own telescope(s).

2.2 CURRENTLY USED TECHNIQUES

Many conventional methods for removing clouds and their shadows from images are based on time series, but methods that use spatial information give significantly better estimates, and methods that combine both spatial and temporal information perform slightly better still. Two common methods for replacing clouded pixels and their associated shadows are the threshold method and the subtraction method.

1. Threshold method

The threshold method is the simplest gray-level segmentation process for detecting the areas of clouds and shadows. Two threshold values (TH1 and TH2) are proposed for detecting shadows and clouds. In this case the cloud pixels have brightness values from 230 to 255, and the shadow pixels have brightness values from 0 to 40.
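As a concrete illustration, a minimal MATLAB sketch of this brightness thresholding is shown below; the 0-40 and 230-255 ranges come from the text, while the file name and the use of the first band are assumptions.

% Threshold method (sketch): dark pixels are labelled shadow, bright pixels cloud.
img  = imread('scene.tif');        % assumed 8-bit satellite image
band = double(img(:,:,1));         % first band (assumption)

TH1 = 40;  TH2 = 230;              % shadow and cloud thresholds from the text
shadowMask = band <= TH1;          % brightness 0-40    -> shadows
cloudMask  = band >= TH2;          % brightness 230-255 -> clouds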

2. Subtraction method

In the subtraction method, the two images (the original noisy image and a clear image) are subtracted to obtain a difference image. If the original noisy image is subtracted from the clear image, the absolute values of the differences over similar areas will be zero or small, while over areas of clouds or shadows the differences will be high, so the cloud and shadow areas can be detected.
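A minimal MATLAB sketch of the subtraction method follows; the difference threshold T and the file names are assumptions for illustration.

% Subtraction method (sketch): large absolute differences mark clouds/shadows.
noisy = double(imread('cloudy.tif'));   % original noisy image
clr   = double(imread('clear.tif'));    % co-registered clear image

diffImg    = abs(clr - noisy);          % small over similar areas, large over defects
T          = 50;                        % assumed threshold on the difference
defectMask = diffImg > T;               % detected cloud and shadow pixels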

2.2.1 LIMITATIONS OF CURRENTLY USED TECHNIQUES

The results of the threshold and subtraction methods still have some defects. There is false recognition of clouds, where some land areas are detected as clouds because they have the same brightness properties as the clouds. These methods also fail to detect some shadows, especially when some areas have similar brightness and the same position as the shadows in the noisy image. A method that can detect and remove clouds and shadows efficiently is therefore still needed.

2.3 RECENT TRENDS AND DEVELOPMENTS IN THE FIELD

Many signal transforms are used for different applications, such as the Hilbert transform, the short-time Fourier transform, Wigner distributions, the Radon transform, the Fourier transform and the wavelet transform. In order to understand what the wavelet transform is, let us first take a look at one of the most common transforms, the Fourier transform.


2.3.1 Fourier transforms (FT)

The Fourier transform is a mathematical technique for transforming a time-based signal into a frequency-based signal. It breaks a signal down into constituent sinusoids of different frequencies. Most signals are in the time domain in their raw format: when plotted, one axis is time (the independent variable) and the other is amplitude (the dependent variable). After the FT, the signal is expressed as a frequency spectrum, in which frequency is the independent variable and amplitude is the dependent variable.

Figure 2.1: Time amplitude representation and frequency spectrum

The FT has an important disadvantage: in transforming a signal from the time-amplitude representation to the frequency-domain representation, time information is lost, so it is not possible to say when an event occurred. The FT is, however, a reversible transform; it allows going back and forth between the raw and transformed signals. Depending on the type of application, it is often not important to have information related to time. This is the case when a signal is stationary, i.e. the frequency content of the signal does not change in time, or changes only minimally in a way we know in advance. Sometimes, however, the frequency content of a signal that we need to analyze changes constantly with time. Such signals are called non-stationary, and the FT is not a suitable technique in that case. The FT decomposes a signal as a linear combination of two basic functions, sine and cosine, with different amplitudes, phases and frequencies.

In exponential form, it can be expressed as equation 2.2, which represents the Fourier transform of f(t), while equation 2.3 represents the inverse Fourier transform of F(ω):
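The standard forms of this transform pair (reconstructed here from the usual definitions, with the numbering used in the text) are:

F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt \qquad (2.2)

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{j\omega t}\, d\omega \qquad (2.3)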


where t represents time, ω represents frequency, f denotes the signal in the time domain and F denotes the signal in the frequency domain.

2.3.2 Short-Time Fourier Transform (STFT)

One way to solve the FT's problem with time is to analyze only a section of the signal at a time. This is called windowing the signal. It maps a signal into a two-dimensional function of time and frequency, which provides information about when and at what frequency an event occurs. But this information is limited by the size of the window.

The size of the window must be the same for all frequencies but many signals require a more flexible analysis.
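For reference, a common continuous-time definition of the STFT, with w(t) denoting the analysis window, is (this particular formula is supplied here, not taken from the report):

\mathrm{STFT}\{f\}(\tau,\omega) = \int_{-\infty}^{\infty} f(t)\, w(t-\tau)\, e^{-j\omega t}\, dt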

Figure 2.2: Short Time Fourier Transformation (STFT)

2.3.3 Wavelet Transform (WT)

The WT was developed as an alternative to the STFT and is capable of providing a time and frequency representation of the signal through a process called decomposition. This is done by passing the time-domain signal through various high-pass filters, which extract the high-frequency portions of the signal, and low-pass filters, which extract the low-frequency portions. This process is repeated several times, and each time some frequencies are removed from the signal. Decomposition continues until the signal has been decomposed to a certain pre-defined level. After that process it is possible to obtain many signals (which together represent the raw signal), each corresponding to a different frequency band. If we plot those signals on a 3-D graph, we will have time on one axis, frequency on the second and amplitude on the third axis.

In reality, the WT does not use a time-frequency region but a time-scale region. Scale is the inverse of frequency (high scale = low frequency and low scale = high frequency).

Using the WT it is possible to know which spectral components exist within any given interval of time, but not exactly which spectral component exists at a particular time instant. Higher frequencies are better resolved in the time domain and lower frequencies are better resolved in the frequency domain.


Figure 2.3: Wavelet Transformation

Continuous Wavelet Transform (CWT)

The CWT is the sum over the whole duration of the signal multiplied by scaled, shifted versions of the wavelet function Ψ. This process produces wavelet coefficients that are a function of scale and position.

Scaling a wavelet simply means stretching (or compressing) it. It is done by keeping the shape while changing the one-dimensional time scale a (a > 0).

Shifting a wavelet simply means delaying its onset; in other words, moving the basic shape along the time axis, translating it to position b.

Translation and change of scale in the one-dimensional context are then combined as follows (from equations 2.5 and 2.6), and continuous analysis is done using equation 2.7:
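The standard forms of these relations (reconstructed from the usual CWT definitions; the 1/\sqrt{a} factor is the common normalization) are:

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t-b}{a}\right), \qquad a > 0,\ b \in \mathbb{R}

C(a,b) = \int_{-\infty}^{\infty} f(t)\, \psi_{a,b}^{*}(t)\, dt \qquad (2.7)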

Discrete Wavelet Transform (DWT)

Information provided by the CWT is highly redundant as far as the reconstruction of the signal is concerned. The DWT provides enough information for both analysis and synthesis, with an important reduction in computation time. A time-scale representation of a signal is obtained using filtering techniques: filters of different cutoff frequencies are used at different scales. High-pass filters are used to analyze high frequencies and low-pass filters to analyze low frequencies. After the signal passes through the filters, its resolution is changed by downsampling and upsampling operations. Downsampling removes some of the samples of the signal, and upsampling adds new samples to the signal. We limit our choice of a and b values to the following dyadic discrete set for the one-dimensional context (equation 2.8):

Applying this to equation 2.7, it is possible to obtain the discrete wavelet (equation 2.9):
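In the standard dyadic form (reconstructed here; the numbering follows the text):

a = 2^{j}, \qquad b = k\,2^{j}, \qquad j, k \in \mathbb{Z} \qquad (2.8)

\psi_{j,k}(t) = 2^{-j/2}\, \psi\!\left(2^{-j}t - k\right) \qquad (2.9)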

The DWT decomposes the signal into a coarse approximation and detail information. The DWT employs two sets of functions, called scaling functions and wavelet functions, which are related to low-pass and high-pass filters respectively. Figure 2.4 shows the DWT process.

Figure 2.4: Discrete Wavelet Transformation

The DWT process can be iterated with successive approximations, so that the original signal is broken down into many lower-resolution components. This process is called multi-level wavelet analysis. Figure 2.5 shows this process.


Figure 2.5: Multi-level wavelet analysis

Wavelet Reconstruction

Wavelet reconstruction is known as synthesis, whereas the DWT decomposition process described above is called analysis. The wavelet reconstruction process uses the coefficients obtained from wavelet decomposition. As mentioned, wavelet analysis involves filtering and downsampling, whereas wavelet synthesis involves upsampling and filtering.

The low-pass and high-pass decomposition filters (L and H), together with their associated reconstruction filters (L' and H'), form a system called quadrature mirror filters.

Figure 2.6: Wavelet reconstruction

2.3.4 Wavelet families

There are different types of wavelet families, each of them having different characteristics. Table 2.2 lists some of the wavelet families.


Daubechies wavelets

The names of the Daubechies family wavelets are written dbN, where N is the order and db is the "surname" of the wavelet. The db1 wavelet is the same as the Haar wavelet. Here are the wavelet functions psi of the next nine members of the family:

These wavelets have no explicit expression except for db1, which is the Haar wavelet. However, the square modulus of the transfer function of h is explicit and fairly simple.


The support length of Ψ and Φ is 2N - 1. The number of vanishing moments of Ψ is N. Most dbN are not symmetrical.

In this project we will explore the advantages of the Daubechies wavelet family for detecting clouds and their shadows. In the final procedure of this project we will also use the Daubechies wavelet family for the image fusion procedure, in order to fill in the missing information.

2.4 IMAGE FUSION

There are many definitions of image fusion in the remote sensing field. A general definition is: "Image fusion is the combination of two or more different images to form a new image by using a certain algorithm".

The words 'data' and 'image' are used here with a similar meaning: often not only remote sensing images are fused, but also other spatial data such as GPS coordinates, topographic maps, etc.

Image fusion has been used to combine high-spatial-resolution panchromatic imagery with multispectral imagery of lower resolution. In this way, the high spatial resolution is incorporated into the spectral information. It is also possible to fuse images of the same sensor but with different spatial resolution.

There are three different levels to perform image fusion. They depend on the stage at which fusion takes place: 1) pixel; 2) feature; 3) decision level.

Image fusion at the pixel level means fusion at the lowest level, referring to the merging of measured physical parameters. It uses raster data that is at least co-registered, but most commonly geocoded. The following approaches are used to produce image fusion at the pixel level:

• RGB colour composites;


• Intensity Hue Saturation (IHS) transformation;

• Lightness-Hue-Saturation (LHS)

• Arithmetic combinations;

• Principal components analysis;

• Wavelets;

This project is focused on wavelet image fusion. After the wavelet decomposition of an image, the coefficients play an important role in determining the structural characteristics at a certain scale and at a certain location. Two images of different spatial resolution are decomposed, and a transformation model can be derived to determine the missing wavelet coefficients of the lower-resolution image. Using this, it is possible to create a synthetic image from the lower-resolution image at the higher spatial resolution. The resulting image preserves the spectral information while having higher resolution, and hence shows more spatial detail.


CHAPTER 3: SIGNIFICANCE

A significant obstacle for extracting information from remote sensing imagery is the missing information caused by clouds and their shadows in the images.

Many conventional methods for removing clouds and their shadows from images are based on time series, but methods that use spatial information give significantly better estimates, and methods that combine both spatial and temporal information perform slightly better still. Two common methods for replacing clouded pixels and their associated shadows are the threshold method and the subtraction method.

1. Threshold method

The threshold method is the simplest gray-level segmentation process for detecting the areas of clouds and shadows. Two threshold values (TH1 and TH2) are proposed for detecting shadows and clouds. In this case the cloud pixels have brightness values from 230 to 255, and the shadow pixels have brightness values from 0 to 40.

2. Subtraction method


In the subtraction method, the two images (the original noisy image and a clear image) are subtracted to obtain a difference image. If the original noisy image is subtracted from the clear image, the absolute values of the differences over similar areas will be zero or small, while over areas of clouds or shadows the differences will be high, so the cloud and shadow areas can be detected.

The results of the threshold and subtraction methods still have some defects. There is false recognition of clouds, where some land areas are detected as clouds because they have the same brightness properties as the clouds. These methods also fail to detect some shadows, especially when some areas have similar brightness and the same position as the shadows in the noisy image. A method that can detect and remove clouds and shadows efficiently is therefore still needed.

A relatively new approach for dealing with cloud contamination has been developed using the wavelet transform technique. Wavelet analysis is a refinement of Fourier analysis. Fourier analysis is based on the description of an input signal (or function) in terms of its frequency components; it can select frequencies from a signal consisting of many frequencies. However, the disadvantage of this analysis is that it cannot deal with a signal that is changing over time. Wavelet analysis, on the other hand, can control the amount of localization in time and in frequency: a narrow time window is used to examine high-frequency content, while a wide time window is allowed when investigating low-frequency components.


CHAPTER 4: SYSTEM OVERVIEW

4.1 BLOCK DIAGRAM

4.1.1 THRESHOLD METHOD

The image acquisition module represents the first step in the system. The images used in our project are downloaded from NASA's EOS.

Block diagram: Image Acquisition → Preprocessing → Cloud detection using threshold values → Cloud removal using cloud mask


In the pre-processing stage, radiometric correction is done, which consists of cosmetic corrections and atmospheric corrections. Cosmetic corrections are operations that solve problems of visible error and noise in the images, such as line dropouts, line striping and random noise. Atmospheric corrections are done because atmospheric conditions influence the pixel values registered by the sensor; these corrections are related to the influence of haze, sun angle and skylight.

In the next step, different threshold values for the intensity, saturation and variance of the input image are set. Using mathematical calculations, clouds are detected and a cloud mask is generated.

Cloud removal is done using the cloud mask with the help of mathematical calculations.

4.1.2 WAVELET IMAGE FUSION

The image acquisition module represents the first step in the system. The images used in our project are downloaded from NASA's EOS and are taken by the ASTER satellite. The image corresponds to a region in the northern part of Ecuador; specifically, it is located in the Mira watershed, in the Carchi and Imbabura provinces. This is a mountainous region affected by cloud coverage throughout the year, where clouds coming from the Amazon forest (east) and the Pacific Ocean (west) meet.

Block diagram: Image Acquisition → Preprocessing → Wavelet decomposition → Image Fusion → Wavelet Reconstruction


In the pre-processing stage, radiometric correction is done, which consists of cosmetic corrections and atmospheric corrections. Cosmetic corrections are operations that solve problems of visible error and noise in the images. Atmospheric corrections are done because atmospheric conditions influence the pixel values registered by the sensor; these corrections are related to the influence of haze, sun angle and skylight.

In the wavelet decomposition stage, the wavelet transform is applied to obtain the image in the frequency domain, where it is cut into different frequency components. The transform is computed for several levels.

In the next stage, image fusion is performed, i.e. elimination of the clouds and their shadows and filling in of the missing information in the cloud-affected image.

Then it is required to obtain the image again in the time domain, so the inverse of two dimensional discrete wavelet transform is performed.

4.2 BLOCK DIAGRAM DESCRIPTION

4.2.1 Threshold method

1. Image acquisition

The image is imported from NASA’s EOS.


Fig 4.2.1a Original cloudy image 1

Fig 4.2.1b Original cloudy image 2

2. Pre-Processing

We have obtained the already pre-processed images from NASA's EOS (ASTER satellite). These images were pre-processed using ERDAS Imagine 8.6 and ILWIS 3.2, image processing software that is used in satellite imagery research.


3. Cloud detection

In this approach, fixed values of intensity, saturation and variance which reflect the clouded pixels are set, and a cloud mask is generated by comparing the above-mentioned parameter values. The values are adjusted so as to get the best result.

Cloud detection algorithm

1. Read the input RGB image and convert it into the HSV colour model.

2. Set the variance, intensity and saturation threshold values in the range [0, 1] to get the best result.

3. Compare the input image and its variance with the set values to get the cloud mask, i.e. if the variance of the image is less than the set variance value, it is detected as cloud, and similar operations are performed on the intensity and saturation values of the input image pixels. (A MATLAB sketch of these steps is given below.)
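A minimal MATLAB sketch of these detection steps follows. The threshold values are the best values reported in Chapter 6; the file name, the use of stdfilt for the local variance, and the directions of the saturation/intensity comparisons (clouds assumed bright and unsaturated) are illustrative assumptions.

% Cloud detection by thresholding intensity, saturation and local variance (sketch).
rgb = im2double(imread('cloudy_scene.jpg'));   % assumed input RGB image
hsv = rgb2hsv(rgb);                            % step 1: RGB -> HSV
S = hsv(:,:,2);  V = hsv(:,:,3);               % saturation and intensity (value)

localVar = stdfilt(V).^2;                      % local variance of the intensity band

varTH = 0.001;  satTH = 0.3;  intTH = 0.8;     % step 2: thresholds in [0,1] (Chapter 6 values)
cloudMask = (localVar < varTH) & (S < satTH) & (V > intTH);   % step 3: flat, unsaturated, bright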

FLOW CHART FOR CLOUD DETECTION USING THRESHOLD METHOD


GUI

Fig 4.2.1c GUI showing cloud detection and removal in cloudy image 1


Fig 4.2.1d GUI showing cloud detection and removal in cloudy image 2


Result of cloud detection

Fig 4.2.1e Cloud mask of original cloudy image 1

Fig 4.2.1f Cloud mask of original cloudy image 2


4. Cloud removal

In this step, the cloud mask is compared with the original image and, based on the result of the mathematical formulation, the neighbouring pixels detected as cloud are averaged. The averaged image is then subtracted from the original scaled image to get the final cloud-free image.

Algorithm for Cloud removal using cloud detection

1. Read the input RGB image and convert it into the HSV colour model.

2. Set the saturation rate to get the best result and perform v1 = x + y*rate, where x is the input image and y the cloud mask.

3. Perform the mathematical operation to calculate the average of the neighbouring cloud-mask pixels on the original image.

4. Subtract the above result from the scaled version of the original image to get the high-boost filtered image v2.

5. Check whether the saturation component of the image satisfies v1 > 1 and whether the intensity component satisfies v2 > 1 or v2 < 0, while the hue component is not changed. Neglect all other pixels.

6. Convert the HSV image into RGB to get the cloud-free image. (A MATLAB sketch of these operations is given below.)
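The sketch below continues from the detection sketch above (reusing hsv, S, V and cloudMask) and follows the steps just listed. The exact arithmetic of steps 3-5 is not fully specified in this report, so the averaging window, the high-boost scale factor and the replacement rule for the flagged pixels are assumed interpretations.

% Cloud removal using the cloud mask (sketch; rate = 0.5 is the best value from Chapter 6).
rate = 0.5;
v1 = S + rate * double(cloudMask);            % step 2: v1 = x + y*rate (x taken as saturation)

avgV = imfilter(V, fspecial('average', 5));   % step 3: average of neighbouring pixels (5x5 assumed)
v2   = 2 * V - avgV;                          % step 4: high-boost filtered intensity (factor 2 assumed)

newS = S;  newV = V;
newS(v1 > 1) = avgV(v1 > 1);                  % step 5: replace flagged saturation pixels (assumed rule)
newV(v2 > 1 | v2 < 0) = avgV(v2 > 1 | v2 < 0);%         replace flagged intensity pixels (assumed rule)

cloudFree = hsv2rgb(cat(3, hsv(:,:,1), newS, newV));   % step 6: hue unchanged, back to RGB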


FLOW CHART FOR CLOUD REMOVAL USING THRESHOLD METHOD


Result of cloud removal

Fig 4.2.1g Cloud free image of original image 1

Fig 4.2.1h Cloud free image of original image 2

4.2.2 Wavelet Image Fusion


1. Image Acquisition (Selection of Input Images)

All images are imported from NASA’s EOS. Seven different images are taken that have overlapping areas.

Figure 4.2.2a: (Left): Image 030101 (Center): Image 021001 (Right): Image 181001


Figure 4.2.2b: (Left): Image 211201 (Center): Image 280402 (Right): Image 211002

Figure 4.2.2c: Image 170503


2. Pre-Processing

We have obtained the already pre-processed images from NASA's EOS (ASTER satellite). These images were pre-processed using ERDAS Imagine 8.6 and ILWIS 3.2, image processing software that is used in satellite imagery research.

The following stages were involved in the pre-processing of the images:

a. Denoising

Filtering of the image is done to reduce the effect of noise in the input image. The following images show the effect of noise as well as its removal, respectively.

(Left): Noisy image (Right): Denoised image

b. Radiometric Correction

It involves cosmetic and atmospheric corrections. Cosmetic corrections are operations that solve problems of visible error and noise in the images. The atmospheric corrections are done because atmospheric conditions influence the pixel values registered by the sensor. These corrections are related to the influence of haze, sun angle and skylight.

3. Image Fusion Using Discrete Wavelet Transform:

In this approach, both the defective and the cloud-free images are decomposed using the discrete wavelet transform to create different high- and low-frequency components. The high-frequency components contain image details such as noise and edges, while the low-frequency (approximation) components contain the basic image information. Considering the wavelet decomposition of the shadow areas of an image, it can easily be shown that the detail components contain the image information located in these areas. Information beneath shadows could be preserved if the detail information in these areas were used in the image reconstruction process while neglecting the approximation component. However, neglecting the approximation component produces an image with only high-frequency information. To solve this problem, the approximation component of another, cloud-free image is taken to replace the approximation component of the defective image dominated primarily by the cloud-associated shadows. The wavelet-based image fusion technique is represented schematically in figure 4.2.2d.

Figure 4.2.2d Wavelet Image Fusion Algorithm Chart

The steps of shadow detection using multiple levels of the discrete wavelet transform are:

1. The input images, the original noisy image Original_image(i, j) and the cloud-free image Clear(i, j), both of size 2N x 2N, are decomposed into their high- and low-frequency sub-images.

2. To detect the shadows, a threshold is applied to the wavelet coefficients at the different levels of decomposition. The shadow pixels are set to one and the other pixels are set to zero.

3. The wavelet coefficients in the shadow areas of the original noisy image are replaced by the wavelet coefficients from the clear image.

4. Steps 1-3 are repeated to detect the clouds in the original noisy image and replace their wavelet coefficients by the wavelet coefficients from the clear image. (A MATLAB sketch of this procedure is given below.)
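A minimal MATLAB sketch of this fusion procedure is given below. It uses the level-4 db8 decomposition adopted in Chapter 6 and swaps the coarsest approximation band as described above; per-level replacement of the detail coefficients (steps 2-4) would follow the same pattern using detcoef2. The file names and the bright/dark decision thresholds are assumptions.

% Wavelet image fusion of a cloudy and a clear scene (sketch, single band assumed).
noisy = im2double(imread('cloudy.tif'));      % Original_image(i, j), assumed file name
clr   = im2double(imread('clear.tif'));       % Clear(i, j), co-registered cloud-free scene

lev = 4;  wname = 'db8';                      % level and mother wavelet from Chapter 6
[Cn, Sn] = wavedec2(noisy, lev, wname);       % multi-level decompositions
[Cc, Sc] = wavedec2(clr,   lev, wname);

An = appcoef2(Cn, Sn, wname, lev);            % coarsest approximation of the noisy image
Ac = appcoef2(Cc, Sc, wname, lev);            % and of the clear image
bad = (An > 0.9*max(An(:))) | (An < 0.1*max(An(:)));   % assumed cloud/shadow decision map

An(bad) = Ac(bad);                            % replace defective approximation coefficients
Cn(1:numel(An)) = An(:);                      % approximation occupies the start of the vector

fused = waverec2(Cn, Sn, wname);              % wavelet reconstruction back to the image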


FLOWCHART FOR CLOUD DETECTION AND REMOVAL USING THE WAVELET IMAGE FUSION BASED ALGORITHM


a. Wavelet decomposition of original cloud free image

Fig 4.2.2e Original cloud free image

Fig 4.2.2f Decomposition of original image at level 4 using db8 wavelet


b. Wavelet decomposition of cloudy image

Fig 4.2.2g Cloudy image

Fig 4.2.2h Decomposition of cloudy image at level 4 using db8 wavelet


c. Cloud free image after fusion

Fig 4.2.2i Cloud free image after fusion


4.3 SPECIFICATIONS

4.3.1 Data Collection

The images in our project are real satellite images collected from NASA's EOS (ASTER satellite). The collected images are all pre-processed using satellite imagery software and are available in the ASTER satellite image gallery (http://asterweb.jpl.nasa.gov/).

4.3.2 Software

We are using MATLAB 7.0 to develop the algorithms for our project. In MATLAB we make use of the Image Processing Toolbox 4.2.


CHAPTER 5: SOFTWARE DESIGN

5.1 INTRODUCTION TO THE SOFTWARE: ‘MATLAB’

MATLAB® (MATrix LABoratory) is an interactive computer program that serves as a convenient "laboratory" for computations involving matrices. It is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

The MATLAB language was created by Prof. Cleve B. Moler, Professor of Computer Science (a specialist in numerical analysis) at the University of New Mexico. It has since spawned several commercial and open-source derivatives of the original MATLAB language.

MATLAB started life in the 1970s as a user-friendly interface to certain clever but complicated programs for solving large systems of equations. The idea behind MATLAB was to provide a simple way of using these programs that hid many of the complications. The idea was appealing to scientists who needed to use high-performance software but had neither the time nor the inclination (nor in some cases the ability) to write it from scratch. Since its introduction, MATLAB has expanded to cover a very wide range of applications and can now be used as a very simple and transparent programming language where each line of code looks very much like the mathematical statement it is designed to implement.

MATLAB is a complete environment for high-level programming, as well as interactive data analysis. MATLAB integrates numerical analysis, matrix computation, signal processing, and graphics in an easy-to-use environment without traditional programming. Results can be made available both numerically and as excellent graphics.

MATLAB has evolved over the years with input from many users. MATLAB toolboxes are specialized collections of MATLAB files designed for solving particular classes of problems.


5.1.1 The MATLAB System

The MATLAB system consists of five main parts:

1. Development Environment

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

2. The MATLAB Mathematical Function Library

This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

3. The MATLAB Language

This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create large and complex application programs.

4. Graphics

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.

5. The MATLAB Application Program Interface (API)

This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.

5.2 IMAGE PROCESSING TOOLBOX

The Image Processing Toolbox is a collection of functions that extend the capability of the MATLAB® numeric computing environment. The toolbox supports a wide range of image processing operations, including:

• Spatial image transformations

• Morphological operations

• Neighbourhood and block operations

• Linear filtering and filter design

• Transforms

• Image analysis and enhancement

• Image registration

• Region of interest operations


CHAPTER 6: TESTING AND DEBUGGING

We have tested our code with two different methods:

A. THRESHOLD METHOD

We have tried different threshold values of intensity, saturation and variance to get the best output. The output for different threshold values is shown below:

Fig 6.1 Cloud detection for variance 0.001, saturation 0.2 and intensity 0.5

Fig 6.2 Cloud detection for variance 0.001, saturation 0.3 and intensity 0.8

Fig 6.3 Cloud removal for saturation rate 0.2

Fig 6.4 Cloud removal for saturation rate 0.5


Thus, by trial and error, we found the best values of intensity, variance and saturation to be 0.8, 0.001 and 0.3 for cloud detection, and the best saturation rate to be 0.5 for cloud removal.

B. WAVELET IMAGE FUSION

1. Decomposition level: We have calculated the maximum level of decomposition for the input image using the 'wmaxlev' function in MATLAB, which is found to be 4. So we tried 2, 3 and 4 levels of decomposition.

2. Mother wavelet: a number of wavelet filters, such as haar and dbN, were studied to select the best mother wavelet for cloud removal.
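A minimal MATLAB check of the maximum decomposition level is shown below; wmaxlev is the standard Wavelet Toolbox call, while the file name and the single-band assumption are illustrative.

img    = im2double(imread('cloudy.tif'));        % assumed input scene
maxLev = wmaxlev(size(img(:,:,1)), 'db8');       % maximum usable level for db8 (4 for our images)
[C, S] = wavedec2(img(:,:,1), maxLev, 'db8');    % decomposition at that level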

Haar wavelet and Daubechies 4 (db4) wavelet

Orthogonal Daubechies coefficients (normalized to have sum 2)


Table 6.1 dbN coefficients for N=2 to 20

Since the properties of clouds are similar to the dbN filter, owing to the irregularities in cloud dimensions, we have used the db8 filter. The results for 2, 3 and 4 levels of decomposition are shown below:


Fig 6.5.1 Decomposition level 2

Fig 6.5.2 Decomposition at level 3


Fig 6.5.3 Decomposition at level 4

Table 6.2 Cloudy image parameters

Parameters of Approximation & Detail coefficients


Table 6.3 Parameters of Approximation coefficient at level 4

Table 6.4 Parameters of Horizontal coefficient at level 4

Table 6.5 Parameters of Vertical coefficient at level 4

Table 6.6 Parameters of Diagonal coefficient at level 4


CHAPTER 7: FUTURE SCOPE & APPLICATION


7.1 APPLICATION

Remote Sensing

Remote sensing is the acquisition of data about an object or scene by a sensor that is far from the object. Aerial photography, satellite imagery, and radar are all forms of remotely sensed data.

Clouds and their shadows are a common feature of remote sensing images. They cause serious problems for different applications such as detection of changes on the Earth's surface, classification, crop yield estimation, and environmental monitoring. For these reasons, removing clouds and their shadows from satellite images is very necessary.

7.2 FUTURE SCOPE

Because clouds and shadows severely hamper the detection and study of objects in satellite imagery, we aim to remove these unwanted artefacts and fill in the missing information. A number of algorithms have been proposed for the detection of clouds and shadows, and we are using the discrete wavelet transform to implement this.

A future stage of this project would be to identify and implement other wavelet-based algorithms to detect clouds and shadows, which could result in more accurate and more efficient detection. Different wavelet families can be considered as the mother wavelet (basis function), and different wavelet transform methods will be considered, so as to arrive at an efficient method for detecting clouds and shadows.


CHAPTER 8: APPENDIX


REFERENCES

R. C. Gonzalez and R. E. Woods, Digital Image Processing.

R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB.

IEEE paper: "Clouds and Shadows Detection and Removing from Remote Sensing Images".

ASTER satellite image gallery: http://asterweb.jpl.nasa.gov/

Wavelet tutorials: http://lycee-ledantec.ac-rennes.fr/upload/PO/Ondelettes/POLIKAR%201.htm

Google Search Engine

MATLAB Help
