
Pixel-based Image Fusion Using Wavelet Transform for SPOT and ETM+ Image

Hongbo Wu
Center for Forest Operations and Environment
Northeast Forestry University
Harbin, P.R. China
E-mail: [email protected]

Yanqiu Xing
Center for Forest Operations and Environment
Northeast Forestry University
Harbin, P.R. China
E-mail: [email protected]

Abstract—Image fusion means merging two or more images in such a way as to preserve the most desirable characteristics of each image. Standard image fusion methods are often successful at adding spatial detail to multispectral imagery, but they distort the colour information in the fusion process. This paper therefore presents an approach for multi-resolution image fusion of a high-resolution SPOT (Satellite Pour l'Observation de la Terre) panchromatic image and a low-resolution Landsat 7 ETM+ multispectral image based on the wavelet transform (WT) combined with filtering in the Fourier domain. First, the images were decomposed into wavelet coefficients by the Mallat algorithm; second, the wavelet coefficients of the SPOT PAN image were added to the ETM+ image; finally, the fused image was reconstructed by performing the inverse wavelet transform, yielding multispectral images of higher spatial resolution. To evaluate the quality of the fused images, three quantitative indicators were defined in the paper: gradients, RMSE and correlation coefficients. At the same time, the results from the WT scheme were compared with those of the intensity-hue-saturation (IHS) and high-pass filter (HPF) methods, which were also used to synthesize the Landsat ETM+ and SPOT-5 PAN data. The evaluation showed that the WT method performs the fusion of the SPOT PAN and ETM+ images better than IHS and HPF, especially in preserving both spectral and spatial information. The experimental results showed that the proposed WT fusion algorithm works well in multi-resolution fusion and also preserves the original color or spectral characteristics of the input image data.

Keywords—wavelet transform; SPOT; assessment; image fusion; pixel

I. INTRODUCTION

A recent research focus in remote sensing is the development of methods for applying high-resolution satellite imagery in different fields [1]. Images from different sensors are combined to form a single image through a judicious selection of pixels and regions in the different images; this process is known as multifocus image fusion. The standard image fusion methods include high-pass filtering (HPF), the WT, the Brovey transform (BT), principal component analysis (PCA), and intensity-hue-saturation (IHS)-like methods. These image fusion methods can be classified into spectral domain, spatial domain, and scale space techniques: HPF, BT and PCA are spectral and spatial domain fusion methods, while the IHS-like image fusion methods may be regarded as scale space techniques.



Image fusion takes two or more images and synthesizes them into one that contains all the significant or clear information from each input image. These images may be acquired from different sensing devices, or they may be of the same scene with focus on different parts of it; the characteristics of interest include spatial and spectral resolution, quantity of information, and details of features of interest. Image fusion can be performed at the pixel, feature, and symbol levels. The detail information that is extracted from one image using wavelet transforms can be injected into another image using one of a number of methods, for example substitution, addition, or a selection method based on either the frequency or the spatial domain. Furthermore, the wavelet function used in the transform can be designed to have specific properties that are useful in the particular application of the transform [2, 3].

Moreover, the wavelet transform (WT) is an extension of the idea of high-pass filtering. The WT provides a multi-resolution framework in which the signal being analyzed is decomposed into several components, each of which captures the information present at a given scale [4]. This enables the introduction of the concept of details between successive levels of scale or resolution, and if the process is inverted, the original image can be exactly reconstructed from one approximation and from the different wavelet coefficients [5]. Multi-sensor image fusion using the WT approach can provide a conceptual framework for improving the spatial resolution with minimal distortion of the spectral content of the source image.

The main objective of this paper is to present a comprehensive framework based on the WT method to synthesize SPOT-5 and ETM+ images; the image fusion was evaluated using the gradient, RMSE and correlation coefficient indicators. Also, to assess the performance of the WT fusion method, the paper compares the WT-fused results with those from the IHS and HPF methods.

II. METHODS

A. Image Resources

SPOT (Satellite Pour l'Observation de la Terre) is a high-resolution, optical imaging Earth observation satellite system operating from space. It is run by Spot Image, based in Toulouse, France. SPOT-5 was launched on May 4, 2002. The high-resolution SPOT-5 PAN images of the Wangqing forest area in Jilin Province of northeastern China, recorded on June 31, 2002, were used for image fusion. There are four

Page 2: Pixel-based Image Fusion Using Wavelet Transform for SPOT and …mcs.csueastbay.edu/~grewe/pubs/DistSensorNetworkBook2011/... · 2011. 2. 26. · Pixel-based Image Fusion Using Wavelet

bands in the SPOT-5 image data, and their key parameters are shown in TABLE I. The PAN image provides 10 m × 10 m resolution. Meanwhile, the Landsat satellite data used for this study provide lower-resolution (30 m × 30 m) multispectral data, which were obtained from the Earth Resources Observation and Science Center (EROS) and downloaded from http://glovis.usgs.gov/. TABLE I also shows the key parameters of the Landsat 7 ETM+ image. The ETM+ images employed for the study area were acquired on September 1, 2007.

TABLE I. PARAMETERS OF SPOT-5 AND LANDSAT 7 ETM+ SPECTRAL BANDS

Spectral band   SPOT-5 band   Wavelength (µm)   Resolution (m)   ETM+ band   Wavelength (µm)   Resolution (m)
Blue            -             -                 -                TM1         0.450-0.515       30
Green           B1            0.50-0.59         2.5-10           TM2         0.525-0.605       30
Red             B2            0.61-0.68         10               TM3         0.630-0.690       30
NIR             B3            0.78-0.89         10               TM4         0.775-0.900       30
MIR             B4*           1.58-1.75         10               TM5         1.550-1.750       30
Far-infrared    -             -                 -                TM6         10.40-12.50       60
MIR             -             -                 -                TM7         2.09-2.35         30
Panchromatic    PAN           0.48-0.71         5                PAN         0.52-0.90         15

*B4 denotes the SWIR (short-wave infrared) band; NIR denotes the near-infrared band.

B. Image Pre-processing

Although the available images had been corrected for aerosol scattering, data affected by thick clouds still had to be removed. We extracted the information on clouds and generated masks of cloud cover for all time periods of the image datasets using the quality control flags in the ETM+ and SPOT image files, and the pixels labelled as clouds were removed. An additional restriction was that pixels with a blue-band reflectance of ≥0.2 were removed as abnormal data. Geometric, radiometric and atmospheric corrections had been applied to the remote sensing images used here. A resulting false color image (TM7, TM4, TM1) is shown in Fig. 1.
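For illustration, this masking step can be sketched in a few lines of Python. It is a minimal sketch under assumed variable names (`blue` for the blue-band reflectance array and `cloud_flag` for a boolean cloud mask decoded from the quality flags; neither name comes from the paper):

    import numpy as np

    def valid_pixel_mask(blue, cloud_flag, blue_max=0.2):
        """True where a pixel is kept: not flagged as cloud and with
        blue-band reflectance below the 0.2 threshold used in the paper."""
        return (~cloud_flag) & (blue < blue_max)

    # Example: blank out rejected pixels in a band before further processing.
    # band = np.where(valid_pixel_mask(blue, cloud_flag), band, np.nan)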

"" _Red: levu_7 _arlin. lay.,_41 _Blue; lllye'

(a)

Figure 1. Images of the Wangqing forest area: (a) multispectral Landsat 7 ETM+ image with closer-look area (highlighted by rectangle), (b) SPOT-5 PAN image.

C. Image Fusion Methods

Three kinds of methods were used to perform the fusion between the SPOT-5 PAN image and the Landsat 7 ETM+ images: the IHS, HPF and WT methods.

1) IHS transform


The IHS transform is one of the common image fusion methods in the remote sensing fields [6]. Three bands (TM1, TM2, TM4) of the source multispectral ETM+ image are mapped into the RGB color space and the RGB color space is then transformed to the IHS color space:

H = \arctan(v_2 / v_1)          (2)

S = \sqrt{v_1^2 + v_2^2}          (3)

where I is the intensity component, H is the hue component, S is the saturation component, and v_1 and v_2 are intermediate variables obtained, together with I, from a linear transform of the R, G and B values. Fusion is performed by replacing I with the source panchromatic image. Finally, the fused image is obtained by performing the inverse IHS transform. The IHS transform based image fusion algorithm can preserve the same spatial resolution as the source panchromatic image, but it seriously distorts the spectral (color) information of the source multispectral image.
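A minimal sketch of this substitution in Python follows. The specific transform matrix is an assumption (one common linear IHS-like variant); the paper does not print its matrix, and the statistical matching of the PAN band to I is a standard practical step rather than something stated in the text:

    import numpy as np

    # Assumed linear transform mapping (R, G, B) to (I, v1, v2).
    M = np.array([[1/3,            1/3,           1/3],
                  [-np.sqrt(2)/6, -np.sqrt(2)/6,  np.sqrt(2)/3],
                  [1/np.sqrt(2),  -1/np.sqrt(2),  0.0]])
    M_INV = np.linalg.inv(M)

    def ihs_fusion(rgb, pan):
        """Replace the intensity component I with the PAN band and invert.
        rgb: (3, H, W) multispectral composite resampled to the PAN grid;
        pan: (H, W) panchromatic band; both float arrays."""
        h, w = pan.shape
        ivv = M @ rgb.reshape(3, -1)            # forward transform: I, v1, v2
        i = ivv[0]
        # Match PAN statistics to I to limit the radiometric shift.
        pan_m = (pan.ravel() - pan.mean()) / (pan.std() + 1e-12) * i.std() + i.mean()
        ivv[0] = pan_m                          # substitute I with PAN
        return (M_INV @ ivv).reshape(3, h, w)   # inverse IHS-like transform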

2) HPF fusion method

The HPF method was previously used by Chavez [7] to fuse ETM+ and SPOT PAN images. In the HPF method, a small high-pass filter is applied to the higher spatial resolution data. The result of the high-pass filter contains the high-frequency information that is related to spatial detail, since the spatial filter removes most of the spectral information. The HPF result is added, pixel by pixel, to the lower spatial resolution dataset; the process thus merges the spatial information of the high spatial resolution dataset into the multispectral data.

The mathematical model is

DN_{fus} = DN_{MS} + (DN_{PAN}^{h} - DN_{PAN}^{l})          (4)

where DN_{PAN}^{l} = DN_{PAN}^{h} * h_0 and h_0 is a low-pass filter such as a matrix (box) filter. A 3 × 3 convolution mask is suitable for 1:2 fusion only, since the frequency response should have a -6 dB cutoff (halved amplitude) at f_n = 0.25, where f_n is the spatial frequency normalized to the sampling frequency [8].
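The method is compact enough to sketch directly. The sketch below assumes a 3 × 3 box filter as h_0 and co-registered arrays on the PAN grid; both choices are illustrative rather than taken from the paper:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def hpf_fusion(ms_band, pan, size=3):
        """Eq. (4): add the high-pass detail of the PAN band to one
        (already upsampled) multispectral band."""
        pan = pan.astype(float)
        pan_low = uniform_filter(pan, size=size)   # h0: box low-pass filter
        return ms_band + (pan - pan_low)           # MS + high-frequency detail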

3) Wavelet transform

The WT is suitable for image fusion not only because it enables one to fuse image features separately at different scales, but also because it produces large coefficients near edges in the transformed image and reveals relevant spatial information [9]. The WT decomposes the signal based on elementary functions: the wavelets. Wavelets can be described in terms of two groups of functions: wavelet functions and scaling functions. It is also common to call the wavelet function the "mother wavelet" and the scaling function the "father wavelet", so the transformations of the parent wavelets are "daughter" and "son" wavelets. In the one-dimensional case, the continuous wavelet transform of a distribution f(t) can be expressed as

WT(a, b) = \frac{1}{\sqrt{a}} \int f(t) \, \psi\!\left(\frac{t-b}{a}\right) dt          (5)

where WT(a, b) is the wavelet coefficient of the function f(t), \psi is the analyzing wavelet, and a (a > 0) and b are the scaling and translational parameters, respectively. Each basis function is a scaled and translated version of a function \psi(t) called the mother wavelet.


Currently used wavelet-based image fusion methods are mostly based on two algorithms: the Mallat algorithm [10] and the à trous algorithm [11]. The Mallat algorithm-based dyadic wavelet transform, which uses decimation, is not shift-invariant and exhibits artifacts due to aliasing in the fused image [12]. The WT method allows the decomposition of the image into a set of wavelet and approximation planes, according to the theory of the multiresolution wavelet transform given by Mallat. Each wavelet plane contains the wavelet coefficients, where the amplitude of a coefficient defines the scale and information of the local features. Formally, the wavelet coefficients are computed by means of the following equation:

W_j(k,l) = P_{j-1}(k,l) - P_j(k,l)          (6)

j = 1, ..., N, where j is the scale index, N is the number of decomposition levels, P_0(k,l) corresponds to the original image P(k,l), and P_j(k,l) is the filtered version of the image produced by means of the following equation:

P_j(k,l) = \sum_n \sum_m h(n,m) \, P_{j-1}(n + 2^{j-1}k, \; m + 2^{j-1}l)          (7)

where h(n,m) are the filter coefficients. In particular, for the decomposition of the ETM+ image,

W_j(k,l) = P_{j-1}(k,l) - P_j(k,l)          (8)

j = 1, 2, ..., N, where P_0(k,l) corresponds to the original ETM+ image P(k,l) and P_j(k,l) is the filtered version of that image.
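The successive filtering and differencing of eqs. (6)-(8) can be sketched as follows. This sketch uses an undecimated (à trous style) implementation with a B3-spline kernel; the kernel choice is an assumption, since the paper does not specify h(n,m):

    import numpy as np
    from scipy.ndimage import convolve

    # Assumed B3-spline smoothing kernel standing in for h(n,m).
    _h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    H0 = np.outer(_h, _h)

    def wavelet_planes(img, levels):
        """Return the wavelet planes W_j = P_{j-1} - P_j and the residual P_N."""
        planes, prev, kernel = [], img.astype(float), H0
        for _ in range(levels):
            cur = convolve(prev, kernel, mode='nearest')  # P_j, eq. (7)
            planes.append(prev - cur)                     # W_j, eqs. (6)/(8)
            prev = cur
            # Dilate the kernel with zeros so the next pass smooths a coarser scale.
            dilated = np.zeros((2 * kernel.shape[0] - 1, 2 * kernel.shape[1] - 1))
            dilated[::2, ::2] = kernel
            kernel = dilated
        return planes, prev  # img == prev + sum(planes) exactly (telescoping sum)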

a) Rules based on neighborhood pixels

Since the useful features in an image are usually larger than one pixel, rules based on a single pixel may not be the most appropriate; rules based on the neighborhood features of a pixel are more suitable. This kind of rule uses the neighborhood features of a pixel to guide the selection of coefficients at that location. The neighborhood window is set to 3 × 3 in this paper. Suppose A and B are the high-frequency sub-images waiting to be fused and F is the fused sub-image; then

F(x,y) = A(x,y) if \sigma_A(x,y) \geq \sigma_B(x,y),
F(x,y) = B(x,y) if \sigma_A(x,y) < \sigma_B(x,y)          (9)

where \sigma is the local variance.
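This selection rule is straightforward to implement; a minimal sketch, computing the local variance over the 3 × 3 window named in the text:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(x, size=3):
        """Variance over a size x size neighborhood (3 x 3 as in the paper)."""
        x = x.astype(float)
        m = uniform_filter(x, size=size)
        return uniform_filter(x * x, size=size) - m * m

    def select_by_variance(A, B, size=3):
        """Eq. (9): at each location keep the coefficient whose neighborhood
        has the larger local variance."""
        return np.where(local_variance(A, size) >= local_variance(B, size), A, B)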

b) Wavelet fusion scheme


Thus, an image can be decomposed into many levels, where each level produces four sub-images I_LL, I_LH, I_HL and I_HH. Fig. 2 shows a two-level wavelet decomposition of an image. I_LL represents the coarse approximation signal, while I_LH, I_HL and I_HH represent the detail signals corresponding to the horizontal, vertical and diagonal directions. For example, to fuse SPOT PAN images and Landsat ETM+ images, the wavelet transform is first applied to decompose the input images, then specific fusion rules are used to combine the coefficients from the wavelet coefficients of the different input sources, and the fused image is finally obtained by performing the inverse decomposition process. The scheme of the WT fusion method is shown in Fig. 3; Fig. 2 and Fig. 3 indicate that not only the modality of the wavelet transform but also the fusion rules are very important for the fusion results. A sketch of this pipeline is given below.
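The sketch assembles the whole scheme with PyWavelets, reusing select_by_variance from the previous sketch. The wavelet ('db4') and the use of a decimated Mallat-style transform here are illustrative assumptions; the approximation sub-image is taken from the multispectral band to preserve its spectral content:

    import pywt  # PyWavelets

    def wavelet_fuse(ms_band, pan, wavelet='db4', levels=3):
        """Fig. 3 scheme: decompose both inputs, keep the MS approximation,
        fuse the detail sub-images (I_LH, I_HL, I_HH) with eq. (9), invert.
        ms_band is assumed already resampled to the PAN grid."""
        c_ms = pywt.wavedec2(ms_band, wavelet, level=levels)
        c_pan = pywt.wavedec2(pan, wavelet, level=levels)
        fused = [c_ms[0]]  # approximation plane from the multispectral band
        for ms_d, pan_d in zip(c_ms[1:], c_pan[1:]):
            fused.append(tuple(select_by_variance(m, p)
                               for m, p in zip(ms_d, pan_d)))
        return pywt.waverec2(fused, wavelet)

    # Per-band use: fuse each ETM+ band with the PAN image and stack the results.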



Figure 2. Two-level wavelet decomposition of an image into sub-images I_LL, I_LH, I_HL and I_HH.

Figure 3. The scheme of image fusion using the wavelet transform: (1) apply the WT to the SPOT image, (2) add the SPOT image details to the TM image, (3) perform the inverse WT.

D. Indicators

The quality evaluation was based on quantitative measures associated with three indicators. The first indicator, the gradient, relates to the quality of the spatial information of a fused image. Gradients are useful tools to measure the variation of intensity with respect to the immediately neighboring points or pixels of an image [13, 14]; a pixel possesses a high gradient value when it is sharply focused, so the gradient or maximum gradient can be used to measure the spatial resolution of the fused image. Hence, for an ideal fused image, its maximum gradient approaches the value 1, and a larger gradient means a higher spatial resolution. Second, the correlation coefficient and RMSE were applied to quantify the spectral and spatial differences between each fused image and the original ETM+ image; they were calculated between the ETM+ image and each fused image using a pixel-based comparison. These indicators can be computed as sketched below.
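A minimal sketch of the three indicators follows. The paper does not print its exact gradient-similarity normalization (used in TABLE II), so the gradient function below is a plain mean gradient magnitude offered as an assumption:

    import numpy as np

    def mean_gradient(img):
        """Average gradient magnitude; larger values mean sharper spatial detail."""
        gy, gx = np.gradient(img.astype(float))
        return float(np.mean(np.hypot(gx, gy)))

    def rmse(ref, test):
        d = ref.astype(float) - test.astype(float)
        return float(np.sqrt(np.mean(d * d)))

    def corr_coef(ref, test):
        return float(np.corrcoef(ref.ravel(), test.ravel())[0, 1])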

III. RESULTS AND DISCUSSION

A. The Results of Imagery Fusion

This study evaluated whether images with a large ratio of spatial resolutions could be fused and assessed the potential of using such fused images. The 10 m resolution SPOT-5 PAN image and the 30 m resolution multispectral image (shown as a color image by mapping three bands of the multispectral image into the RGB color space) of Wangqing County, Jilin Province, China, are shown in Fig. 4(a) and (b), respectively.

During the fusion process, Landsat ETM+ bands 1-7 were each decomposed into three levels. Based on the three fusion methods described in Section II, the paper applied the IHS, HPF and WT fusion methods to the ETM+ image bands and the SPOT-5 PAN image; the fused results of these methods are shown in Fig. 5. In addition, enlarged subsets were extracted from the fused results. For further comparison of pixel-based characteristics, two more enlarged subsets of the fused images are examined in Fig. 6.


Figure 4. (a) High-resolution SPOT PAN image, (b) multispectral Landsat 7 ETM+ image.

Figure 5. (a) Fused result of SPOT PAN and Landsat TM image by IHS, (b) fused result by HPF, (c) fused result by WT.

From Fig. 5, it can be seen that the resolution of the fused images obtained with the HPF, IHS and WT fusion methods is 20 m, 15 m and 10 m, respectively. Meanwhile, artifacts such as blocking effects are noticed in some regions of the fused images. This is a common phenomenon in pixel-based image fusion using a multiresolution approach and happens because error introduced at the topmost level is amplified during image reconstruction. In our case, these effects are not obvious in those regions of the source images, and they are present in the image fused using WT as well as in those of the other two methods.

Careful visual inspection of Fig. 6 shows that the WT method gives the best visual effect, while the HPF method shows the worst performance in this case. Fig. 6(c) presents clearer texture of the objects on the surface than Fig. 6(a) and Fig. 6(b).



Figure 6. Details from the fused results of SPOT PAN and Landsat ETM+ image by (a) IHS, (b) HPF, (c) WT.

B. Performance Evaluation Using Quantitative Measures

Quantitative comparisons of the fused images are shown in TABLE II and TABLE III. The quantitative indicators were used to evaluate each fused result against the original ETM+ image.

The similarity between the maximum gradient images and the gradient images of the results fused using the three methods is listed in TABLE II. The table reflects the spatial discrepancies between the images obtained by the different fusion algorithms and the source multispectral image. Besides this, the gradients of the fused images obtained by the different fusion algorithms indicate that the non-linear WT proposed here possesses the following invariance property in the spatial domain: adding a certain value to all pixel values in the original data, and adding that value to the scaled image data during fusion, leaves the details unchanged.

A good fusion scheme should preserve the spectral characteristics of the source multispectral image as well as the high spatial resolution characteristics of the source panchromatic image. From TABLE II we can conclude that the WT fusion algorithm preserves the high spatial resolution characteristics of the original PAN image and also retains more useful information than the IHS and HPF fusion methods. In addition, the spectral distortion introduced by the proposed WT fusion method is less than that of the traditional algorithms based on the IHS transform and the HPF.


TABLE II. SIMILARITY BETWEEN MAXIMUM GRADIENT AND FUSED GRADIENT IMAGES

Methods   TM1     TM2     TM3     TM4     TM5     TM6     TM7
IHS       0.861   0.824   0.853   0.866   0.819   0.899   0.845
HPF       0.796   0.821   0.803   0.834   0.789   0.845   0.840
WT        0.896   0.868   0.902   0.921   0.911   0.931   0.914

The RMSE values for all the images fused by the IHS, HPF and WT methods, together with the correlation coefficients per band between the ETM+ image and each fused image, are presented in TABLE III. These measures were selected in order to evaluate the similarity at the pixel level between the ETM+ image and the fused images. The correlation coefficient should be as close as possible to 1 and the RMSE should be as low as possible.

TABLE III. CORRELATION COEFFICIENT (CORR.) AND RMSE BETWEEN THE ORIGINAL ETM+ BANDS 1-7 AND THE FUSED IMAGES

Methods   Indicator   b1      b2      b3      b4      b5      b6      b7
IHS       Corr.       0.87    0.91    0.88    0.89    0.84    0.92    0.93
          RMSE        10.81   7.33    9.24    8.48    16.34   6.65    5.76
HPF       Corr.       0.69    0.71    0.75    0.78    0.73    0.79    0.81
          RMSE        68.80   53.54   46.29   34.84   51.55   30.50   28.90
WT        Corr.       0.89    0.91    0.92    0.94    0.86    0.93    0.95
          RMSE        6.80    4.60    3.84    2.70    12.51   2.28    1.86

a. b1-b7: spectral bands of ETM+ band 1-band 7, each fused with the SPOT-5 PAN image.

From TABLE III, the WT presented higher correlation coefficients and lower RMSE values for all bands compared to the IHS and the HPF, which shows the superiority of the WT considering this set of quality measures. These fused results using the WT indicate not only that the spatial features are preserved but also that the spectral content is similar to that of the original ETM+ image (Fig. 4(b)). In the WT method the decomposition is carried out with the Mallat decomposition algorithm in the Fourier domain. This frequency information (especially the relationships between neighboring pixels) was particularly appropriate for capturing useful scale-related characteristics during the decomposition of the ETM+ image, resulting in a fused image of better quality.

IV. CONCLUSIONS

The fusion results showed that, in general, the best fusion performance for ETM+ images and SPOT-5 PAN images was achieved in this study by the wavelet-based technique, followed by the Fourier-based technique. The WT fusion method proposed in this paper facilitates efficient feature detection, retains the local features of each input image, and suppresses the accumulation of noise from noisy input images. Hence, this fusion method can enhance the spatial quality of the multispectral image while preserving its spectral characteristics much better than the IHS and HPF fusion methods.

By analysis of the quantitative results and by visual inspection, it can be seen that the experimental results are consistent with the theoretical analysis and that the WT method produces the fused images closest to what the corresponding ETM+ sensor would observe at the high-resolution pixel level.

ACKNOWLEDGMENT

This research was funded by the Natural Science Foundation of China (4087119), the Foundation of the Advanced Programs of the State Human Resource Ministry for Scientific and Technical Activities of Returned Overseas Chinese Scholars, the Fundamental Research Funds for the Central Universities (Grant: DL09CA08), the Harbin Youth Science and Technology Innovation Talents program (2008RFQXN003), and a project foundation (Gram09) of the graduate school of Northeast Forestry University (NEFU). We would like to thank the National Snow and Ice Data Center for providing ICESat/GLAS data.

REFERENCES

[1] K. Amolins, Y. Zhang and P. Dare, "Wavelet based image fusion techniques -- An introduction, review and comparison," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, pp. 249-263, September 2007.

[2] G. Pajares and J. Manuel de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, pp. 1855-1872, September 2004.

[3] J. R. Carr, "Computational considerations in digital image fusion via wavelets," Computers & Geosciences, vol. 31, pp. 527-530, May 2005.

[4] S. Li and B. Yang, "Multifocus image fusion by combining curvelet and wavelet transform," Pattern Recognition Letters, vol. 29, pp. 1295-1301, July 2008.

[5] H. Li, B. S. Manjunath and S. K. Mitra, "Multisensor Image Fusion Using the Wavelet Transform," Graphical Models and Image Processing, vol. 57, pp. 235-245, May 1995.

[6] Y. Chibani and A. Houacine, "Redundant versus orthogonal wavelet decomposition for multisensor image fusion," Pattern Recognition, vol. 36, pp. 879-887, April 2003.

[7] I. De and B. Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets," Signal Processing, vol. 86, pp. 924-936, May 2006.

[8] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull and N. Canagarajah, "Pixel- and region-based image fusion with complex wavelets," Information Fusion, vol. 8, pp. 119-130, April 2007.

[9] J. Nunez, X. Otazu, O. Fors and A. Prades, "Simultaneous image fusion and reconstruction using wavelets; applications to SPOT + LANDSAT images," Vistas in Astronomy, vol. 41, pp. 351-357, 1997.

[10] Z. Li, Z. Jing, X. Yang and S. Sun, "Color transfer based remote sensing image fusion using non-separable wavelet frame transform," Pattern Recognition Letters, vol. 26, pp. 2006-2014, October 2005.

[11] P. L. Lin and P. Y. Huang, "Fusion methods based on dynamic-segmented morphological wavelet or cut and paste for multifocus images," Signal Processing, vol. 88, pp. 1511-1527, June 2008.

[12] A. Loza, D. Bull, N. Canagarajah and A. Achim, "Non-Gaussian model-based fusion of noisy images in the wavelet domain," Computer Vision and Image Understanding, vol. 114, pp. 54-65, January 2010.

[13] F. W. Acerbi-Junior, J. G. P. W. Clevers and M. E. Schaepman, "The assessment of multi-sensor image fusion using wavelet transforms for mapping the Brazilian Savanna," International Journal of Applied Earth Observation and Geoinformation, vol. 8, pp. 278-288, December 2006.

[14] W. Shi, C. Zhu, Y. Tian and J. Nichol, "Wavelet-based image fusion and quality assessment," International Journal of Applied Earth Observation and Geoinformation, vol. 6, pp. 241-251, March 2005.