2009 International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, Kerala, India, 27-28 October 2009.


Performance Evaluation of Information Theoretic Image Fusion metrics over Quantitative Metrics

Arathi T Computational Engineering and Networking

Amrita Vishwa Vidyapeetham Coimbatore, India.

[email protected]

Soman K P Computational Engineering and Networking

Amrita Vishwa Vidyapeetham Coimbatore, India.

[email protected]

Abstract—This paper evaluates four information theoretic image fusion quality assessment metrics and compares their performance with some of the existing quantitative metrics. The information theoretic fusion metrics evaluated are: Fusion Factor (FF), Fusion Symmetry (FS), Image Fusion Performance Measure (IFPM) and Renyi Entropy (RE). Even though traditional quality assessment metrics such as Mean Square Error (MSE) and Correlation Coefficient (CC) have been improved by incorporating edge information, similarity measures between the images, and luminance and contrast measures, most quantitative approaches still do not give satisfactory performance, since they do not take into account the information content of the images. Here, we illustrate how the information theoretic metrics are superior to the quantitative metrics for grayscale image fusion.

Keywords—Mutual Information, Entropy, Quantitative metrics

I. INTRODUCTION
Combining two low quality images into a single image of higher quality than either input is the crux of image fusion. Since a variety of image fusion algorithms have been presented in the literature, it becomes necessary to have some means of assessing these algorithms, so as to determine which gives a better fused image. Traditionally, the available quality assessment metrics were quantitative. Some of the quantitative metrics used were: Mean Square Error, Average Difference, Correlation Coefficient, etc. These measures were mostly based on the quantitative evaluation of the pixel deviation between the original image and the fused image. Later on, they were improved by incorporating many changes into these basic metrics. In [1], the edge information of the input images and the fused image is considered when calculating the quality index. [2] gives another metric, where the luminance and contrast information in the images is used for quality assessment. The similarity measure between the images is used for calculating the quality index for a fusion algorithm in [3]. However, image fusion aims at integrating the complementary information from the input images, so that the fused image is more suitable for visual perception and computer processing. It therefore becomes mandatory that the quality metric also estimate the amount of information obtained from the individual input images. This led to the
development of fusion quality metrics based on the principles of information theory. In this paper, four information theoretic quality metrics are discussed and their performance analyzed and compared with some quantitative metrics, for the following fusion algorithms: Averaging Method, Principal Component Analysis Method, Gradient Pyramid Method, Laplacian Pyramid Method and Daubechies-4 tap Wavelet Transform Method.

II. INFORMATION THEORETIC QUALITY METRICS
A brief description of the four information theoretic quality metrics is given in this section. All are non-reference metrics, i.e. they do not require a reference image for the quality index evaluation.

A. Fusion Factor
Mutual Information (MI) is a concept from information theory that measures the statistical dependence between two random variables. When used as an index for assessing fusion performance, it is referred to as the Fusion Factor (FF) [4]. Mutual information is the amount of information that one image contains about the other. This inspires its use as a measure of image fusion performance. If A and B are the two input images and F is the fused image, the amount of information that F contains about A and B is given as:

FF_{FA}(f,a) = \sum_{f,a} p_{FA}(f,a) \log \frac{p_{FA}(f,a)}{p_F(f)\, p_A(a)}    (1)

FF_{FB}(f,b) = \sum_{f,b} p_{FB}(f,b) \log \frac{p_{FB}(f,b)}{p_F(f)\, p_B(b)}    (2)

Then the FF performance measure is defined as:

FF_F^{AB} = FF_{FA}(f,a) + FF_{FB}(f,b)    (3)

A large value of FF indicates that more information has been transferred from the source images to the fused image. However, a disadvantage of this measure is that a large FF cannot indicate whether the source images are fused symmetrically. To account for this, the concept of Fusion Symmetry was introduced.
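As a concrete illustration, the FF of Eqs. (1)-(3) can be estimated from joint intensity histograms. This is a minimal sketch, assuming Python with NumPy; the function names are illustrative, not from the paper:

```python
import numpy as np

def mutual_information(img1, img2, bins=256):
    """Estimate MI (in nats) between two grayscale images
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability p(f, a)
    px = pxy.sum(axis=1)               # marginal p(f)
    py = pxy.sum(axis=0)               # marginal p(a)
    outer = px[:, None] * py[None, :]  # product of marginals
    nz = pxy > 0                       # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))

def fusion_factor(a, b, fused):
    """FF = MI(F, A) + MI(F, B), as in Eq. (3)."""
    return mutual_information(fused, a) + mutual_information(fused, b)
```

A larger FF for one algorithm's fused output than for another's indicates greater information transfer, but, as noted above, says nothing about symmetry.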

2009 International Conference on Advances in Recent Technologies in Communication and Computing

978-0-7695-3845-7/09 $25.00 © 2009 IEEE

DOI 10.1109/ARTCom.2009.192




B. Fusion Symmetry
The concept of Fusion Symmetry (FS) is given by the equation:

FS = \left| \frac{MI_{FA}(f,a)}{MI_{FA}(f,a) + MI_{FB}(f,b)} - 0.5 \right|    (3)
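Given precomputed mutual-information terms, the FS computation is a one-liner. A minimal sketch in Python, where `mi_fa` and `mi_fb` stand for the terms MI_FA(f,a) and MI_FB(f,b):

```python
def fusion_symmetry(mi_fa, mi_fb):
    """FS = |MI_FA / (MI_FA + MI_FB) - 0.5|.
    Smaller values mean the two sources contributed more symmetrically."""
    return abs(mi_fa / (mi_fa + mi_fb) - 0.5)
```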

It denotes the symmetry of the fusion process with respect to the two input images. The smaller the FS, the better the performance of the fusion process. The FF measure is given importance when one of the two sensors used for capturing the images is inferior. When both sensors are of high quality, the FS parameter also becomes important, and an algorithm with a relatively smaller FS is said to perform better.

C. Image Fusion Performance Measure
When information from the source images is used for calculating the performance measure, only the common information between each source image and the fused image is considered, and no attention is given to the overlapping information of the source images [5]. The Image Fusion Performance Measure (IFPM) employs mutual information and conditional mutual information to represent the amount of information transferred from the source images to the final fused grayscale image. Hence, the overlapping information contained in the source images is considered only once in the formation of the final image [6]. In the IFPM calculation, each source image Xi is treated as a discrete random variable with a probability density function p(xi). The resulting fused image is denoted as Y, and p(y) is the corresponding probability density function. The mutual information I(X1;Y) describes the common information between the source image X1 and the final fused image Y. The conditional mutual information I(X2;Y|X1) describes the common information between X2 and Y, given X1. In this way, only the information that is present in X2, but not in X1, is considered in evaluating the common information between X2 and Y.

For N input images, the sum of all the conditional information terms represents the total amount of common information CI transferred from the source images Xi to the final fused image Y, and is expressed as:

CI = I(X_1; Y) + \sum_{i=2}^{N} I(X_i; Y \mid X_{i-1}, \ldots, X_1)    (4)

Each term of CI can be calculated using the following relations:

I(X_1; Y) = H(X_1) - H(X_1 \mid Y)  for n = 1    (5)

I(X_2; Y \mid X_1) = H(X_1, X_2) - H(X_1, X_2 \mid Y) - I(X_1; Y)  for n = 2    (6)

CI represents the amount of common information between the source images and the final fused image. The joint entropy H(X1, X2, …, Xn) represents the total amount of information for the source images. The IFPM is then defined as:

IFPM = \frac{CI}{H(X_1, X_2, \ldots, X_n)}    (7)
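For the common two-source case (N = 2), Eqs. (4)-(7) reduce to joint-entropy arithmetic. The following is a sketch under the assumption that all entropies are estimated from joint intensity histograms (Python with NumPy; function names illustrative):

```python
import numpy as np

def entropy(*imgs, bins=64):
    """Joint Shannon entropy (nats) of one or more grayscale images,
    estimated from their joint intensity histogram."""
    data = np.stack([i.ravel() for i in imgs], axis=1)
    hist, _ = np.histogramdd(data, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute nothing
    return float(-np.sum(p * np.log(p)))

def ifpm(x1, x2, y, bins=64):
    """IFPM of Eq. (7) for two sources x1, x2 and fused image y:
    CI = I(X1;Y) + I(X2;Y|X1), normalized by H(X1, X2)."""
    i_x1_y = entropy(x1, bins=bins) + entropy(y, bins=bins) - entropy(x1, y, bins=bins)
    h_x1x2 = entropy(x1, x2, bins=bins)
    h_x1x2_given_y = entropy(x1, x2, y, bins=bins) - entropy(y, bins=bins)
    i_x2_y_given_x1 = h_x1x2 - h_x1x2_given_y - i_x1_y   # Eq. (6)
    return (i_x1_y + i_x2_y_given_x1) / h_x1x2           # Eqs. (4), (7)
```

Note that, by the chain rule, CI collapses to I(X1, X2; Y); the terms are kept separate here to mirror Eqs. (5) and (6).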

IFPM takes a value in the range [0, 1]. Zero corresponds to a total lack of common information between the source images and the fused image, and one corresponds to an extremely effective fusion process that transfers all the information from the source images to the fused image.

D. Renyi Entropy Measure
Like Shannon entropy, Renyi Entropy (RE) [7] also measures the total amount of information that the fused image contains about the source images. The overlapping-information problem is addressed by using the Generalized Normalized MI, which avoids its influence. Renyi entropy is a generalized form of entropy with an additional parameter α, and is defined as:

R_{\alpha}(X) = \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{n} p(x_i)^{\alpha} \right)    (8)

where α ≥ 0 and α ≠ 1. RE is capable of detecting the presence of noise in an image, since its value decreases when noise is present. RE can also be applied to the multimodal registration problem, improving the speed and accuracy of intensity-based alignment. For two source images S1 and S2 and the corresponding fused image F, the MI between the source and fused images, based on RE, is calculated as:

I^{\alpha}_{FS_1}(f, s_1) = \frac{1}{\alpha - 1} \log \left( \sum_{f, s_1} p_{FS_1}(f, s_1)^{\alpha} \, \big( p_F(f)\, p_{S_1}(s_1) \big)^{1-\alpha} \right)    (9)

I^{\alpha}_{FS_2}(f, s_2) = \frac{1}{\alpha - 1} \log \left( \sum_{f, s_2} p_{FS_2}(f, s_2)^{\alpha} \, \big( p_F(f)\, p_{S_2}(s_2) \big)^{1-\alpha} \right)    (10)

The source images have strong correlations, as the same area is being covered by complementary imaging features. Thus, the overlapping information gets calculated more than once. To reduce the influence of overlapping information, the Generalized Normalized MI is utilized to give:

R_{\alpha}^{Re}(F, S) = \frac{I_{\alpha}(F; F) + I_{\alpha}(S; S)}{I_{\alpha}(F; F) + I_{\alpha}(S; S) - I_{\alpha}(F; S)}    (11)

The RE performance metric is given as:

MI^{\alpha}_{F S_1 S_2} = R_{\alpha}^{Re}(F, S_1) + R_{\alpha}^{Re}(F, S_2)    (12)
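The Rényi entropy of Eq. (8), on which the above metric is built, can be sketched as follows (Python with NumPy assumed; the MI terms of Eqs. (9)-(11) are estimated from joint histograms in the same spirit):

```python
import numpy as np

def renyi_entropy(img, alpha=0.5, bins=256):
    """Renyi entropy of Eq. (8): log(sum p^alpha) / (1 - alpha),
    defined for alpha >= 0, alpha != 1 (alpha -> 1 recovers Shannon)."""
    hist, _ = np.histogram(np.asarray(img).ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                   # zero-probability bins contribute nothing
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))
```

As a sanity check, a uniform distribution over k gray levels gives R_alpha = log k for every admissible alpha, matching Shannon entropy.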

III. EXPERIMENTS The information theoretic fusion quality metrics detailed above were calculated and analyzed for a set of grayscale images, using the following fusion algorithms: Averaging Method, Principal Component Analysis Method, Gradient Pyramid Method, Laplacian Pyramid Method and Daubechies-4 tap Wavelet Transform Method. The fusion
quality metrics proposed in [1], [2] and [3] were also calculated for the same set of images, using the same algorithms. While calculating the RE metric, the parameter α was given a value of 0.5. Also, while evaluating the metric proposed in [1], the saliency was taken as the variance of the individual images, calculated using a window of size 8x8. The experiments were conducted for six sets of grayscale images. All were found to give almost the same results. The results of one such set, which gave the best comparative evaluation, are given here.
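The 8x8-window variance saliency used here when evaluating the metric of [1] can be computed per block. This sketch assumes non-overlapping 8x8 blocks, which is one reading of the window in the text (Python with NumPy; the function name is illustrative):

```python
import numpy as np

def block_variance_saliency(img, win=8):
    """Variance of each non-overlapping win x win block of a
    grayscale image; any remainder rows/columns are cropped."""
    h = (img.shape[0] // win) * win
    w = (img.shape[1] // win) * win
    # Reshape to (rows of blocks, cols of blocks, win*win) and take variance.
    blocks = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return blocks.reshape(h // win, w // win, -1).var(axis=-1)
```

A sliding (overlapping) window is an equally plausible reading; only the block arrangement changes, not the per-window variance itself.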

IV. RESULTS The input grayscale images and the corresponding fusion outputs for the various algorithms are shown below:

Figure: (1, 2) Input images; (3-7) fusion outputs: (3) Averaging Method, (4) Principal Component Analysis, (5) Gradient Pyramid Method, (6) Laplacian Pyramid Method, (7) Daubechies Wavelet Transform Method.

The tables below show the values of the various information theoretic and quantitative quality metrics for the various spatial and spectral domain algorithms.

TABLE I. COMPARISON OF FUSION ALGORITHMS USING QUANTITATIVE QUALITY METRICS

Metrics     Averaging   PCA       Gradient   Laplacian   DBSS
GHindex     0.0482      0.0482    0.0483     0.0487      0.0483
Simindex    0.9998      0.9998    0.9998     0.9994      0.9998
LCindex     11.3779     11.3812   11.4815    12.0924     11.7525
Wmetric     0.7264      0.7261    0.7599     0.7929      0.6262

TABLE II. COMPARISON OF FUSION ALGORITHMS USING INFORMATION-THEORETIC QUALITY METRICS

Metrics     Averaging   PCA       Gradient   Laplacian   DBSS
FF          0.3545      0.5732    0.0659     0.0938      0
FS          0.0018      0.0031    0.00026    0.0014      0.00001
IFPM        0.9373      0.9373    0.9538     0.9412      0.9451
RE          0.7953      0.8585    2.0812     2.3925      2

From Table I, it can be seen that the quantitative quality metrics do not show much difference in value between the various fusion algorithms; i.e., the separation between the good and bad results is not clearly evident from the quality metric values. Table II gives the information theoretic quality index values. Here, the values suggest that the Daubechies-4 tap wavelet transform based fusion performs the best, when evaluated by the FF and IFPM methods. There is also a clear separation between the good and the bad results, which helps in ranking the various fusion algorithms in order of performance, something not possible with the quantitative approaches. This suggests that information theoretic quality assessment metrics show superior performance for the evaluation of fusion algorithms, compared to quantitative methods.

V. CONCLUSION
This paper discusses various information theoretic fusion quality assessment metrics and how they perform for a set of grayscale images. It experimentally verifies that the information theoretic measures perform better than the quantitative metrics when comparing various fusion algorithms. As future work, these metrics can be applied to color images, to verify whether they perform equally well there. However, the computational time for the evaluation will be higher, as the different bands will need to be evaluated separately.

REFERENCES

[1] G. Piella and H. Heijmans, "A new quality metric for image fusion," CWI, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands.

[2] S. Li, R. Hong, and X. Wu, "A novel similarity based quality metric for image fusion," EEIS Department, University of Science and Technology of China.

[3] N. Cvejic, A. Loza, D. Bull, and N. Canagarajah, "A similarity metric for assessment of image fusion algorithms," International Journal of Signal Processing, vol. 2, no. 3, Summer 2006.

[4] C. Ramesh and T. Ranjith, "Fusion performance measures and a lifting wavelet transform based algorithm for image fusion," in Proc. 5th International Conference on Information Fusion, vol. 1, 2002, pp. 317-320.

[5] G. Qu, D. Zhang, and P. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, pp. 313-315, 2002.

[6] V. Tsagaris and V. Anastassopoulos, "An information measure for assessing pixel-level fusion methods," Electronics and Computers Division, Physics Department, University of Patras, Greece, 26500.

[7] Y. Zheng and X. Hou, "A novel objective image quality metric for image fusion based on Renyi entropy," Department of Computer Science and Technology, Tsinghua University, Beijing, China.
