Image processing for NDT images

#Prachetaa R, *Dr. B. P. C. Rao

#BITS Pilani, K. K. Birla Goa Campus, NH-17B, Airport Road, Zuari Nagar, Goa, INDIA. PIN 403726.
[email protected]

*Indira Gandhi Centre for Atomic Research, Kalpakkam, Tamil Nadu, INDIA. PIN 603102

Abstract: The fusion of images is the process of combining two or more images into a single image while retaining the important features of each. Image fusion is an important technique in nondestructive testing [1-3], wherein no damage is done to the material being tested. There are two categories of image fusion: fusion of images from the same sensor, and multi-sensor image fusion. Many methods of fusion have been developed, such as wavelet-based image fusion and the Dempster-Shafer method of fusion. This work investigates, using performance metrics, which technique gives the better end result. It has been observed that no single method can be termed the best for image fusion, since performance depends on the conditions of the image; hence the need to judiciously select the best method for the given source images.

Key words: image fusion, image denoising, wavelet transform, Dempster-Shafer theory, evidence, image registration, feature detection.

I. INTRODUCTION

Image fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. Fusion is required when high spatial and high spectral information are needed simultaneously. Image fusion has many applications in fields such as medicine, remote sensing, computer vision and robotics.

Many fusion techniques have been developed, ranging from the simplest method of pixel averaging to methods such as wavelet-transform-based image fusion. The images may be fused in the spatial domain or in the frequency domain, with the help of transforms such as the Fourier transform or the wavelet transform.

The process of image fusion must ensure that all the salient information present in the source images is transferred into the fused image. Fusion can be performed at three different levels: element, attribute and decision. Element-level fusion employs pixels and uses basic information. Attribute-level fusion is an intermediate level that uses information derived from pixels or from image primitives. Decision-level fusion uses merging rules and is a high-level fusion method.

The Bayesian fusion methodology, which rests on a solid mathematical theory, provides a rich ensemble of methods and allows an intuitive interpretation of the fusion process. Within the Bayesian framework, the final fusion result is extracted from the Bayesian posterior distribution using an adequate Bayes estimator from decision theory. Prior knowledge, as well as artificial constraints on the fusion result, can be incorporated via the prior distribution.

Dempster-Shafer theory is based on the concept of attaching weights to the states of the system being measured. Dempster-Shafer theory [4] allows alternative scenarios, such as treating equally the sets of alternatives that have a nonzero intersection.

Many such methods of fusion are available, and the need of the day is to select the best possible method so that the final image carries the relevant information from the input images. Our work focuses on selecting the best method of image fusion for the given images to be fused.

The remainder of the paper is organized as follows. First we review the work done in this field; then the image fusion scheme is described, wherein four different algorithms and their implementation are discussed; finally, feature detection and results are discussed and conclusions are drawn.

II. RELATED WORK

Image fusion has been an area of research for a long time. It started with sensor image fusion, was followed by pyramid-decomposition-based image fusion, and has now arrived at wavelet-transform-based image fusion. Several types of pyramid decomposition have been developed, such as the Laplacian pyramid, the ratio-of-low-pass pyramid and the gradient pyramid.

A number of pixel-level fusion techniques [5] have been developed in which the input images are processed and fused at the pixel level; they range from simple averaging to complex methods. Image fusion has since evolved to region-based techniques, in which the source images are first segmented to yield a set of regions (decided by the user or even pre-defined) that constitute the image, followed by fusion of the corresponding regions. The main problem in all of these methods is misregistration.

The Discrete Wavelet Transform (DWT) is widely used since it helps in capturing the features of an image not only at different resolutions but also at different orientations. However, the DWT is shift-variant due to the sub-sampling at each level of decomposition.

The motivation for region-based fusion derives from the fact that information resides in a region rather than in a pixel, and that a pixel usually belongs to a region of the image. It is therefore more logical to consider regions rather than pixels. A region map comprising all the regions that constitute the image is obtained through image segmentation, and a set of fusion rules is applied to the corresponding regions depending on various measures.

III. IMAGE FUSION SCHEME

The aim of this paper is to apply the best fusion method so that the fused image carries the maximum possible information. This is achieved by determining, from the metrics, the best fusion technique for the given source images, thus making the fusion intelligent.

A. Fusion algorithms

The images used in this paper were obtained from eddy current testing of stainless steel plates, wherein the sensor measures the impedance of the plate based on the principle of eddy currents, and the values are normalized to the [0, 255] range to obtain a grayscale image. Eddy current sensing detects defects present in the specimen based on the principle of electromagnetic induction: the impedance changes in the presence of a defect owing to changes in material properties such as permeability and conductivity.
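The paper does not spell out the normalization step; a minimal NumPy sketch of one plausible linear mapping (the original work was done in Matlab, and the function name here is ours) is:

```python
import numpy as np

def to_grayscale(impedance):
    """Linearly map raw impedance values to the [0, 255] grayscale range."""
    z = np.asarray(impedance, dtype=float)
    z = (z - z.min()) / (z.max() - z.min() + 1e-12)  # rescale to [0, 1]
    return np.uint8(np.round(z * 255))               # quantize to 8-bit gray
```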

The first step is image registration: the process of transforming different sets of data into one coordinate system, i.e. establishing pixel-to-pixel correspondence between the source images. Registration is necessary in order to compare or integrate data obtained from different measurements, such as multiple photographs, data from different sensors, from different times, or from different viewpoints; it geometrically aligns the reference image and the sensed images. Registration is performed here using cross-correlation: an initial estimate of the cross-correlation peak is obtained with the Fast Fourier Transform, and the shift estimate is then refined by up-sampling the Discrete Fourier Transform only in a small neighborhood of that estimate, by means of a matrix-multiply Discrete Fourier Transform.
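As a rough illustration of the first stage of this procedure, here is a minimal NumPy sketch that estimates a whole-pixel shift from the FFT cross-correlation peak; the sub-pixel refinement via an up-sampled, matrix-multiply DFT described above is omitted, and the function name is ours:

```python
import numpy as np

def estimate_shift(ref, moving):
    """Coarse translation estimate from the FFT cross-correlation peak."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(moving)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    shape = np.array(ref.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > shape / 2] -= shape[shift > shape / 2]  # wrap to signed offsets
    return shift  # (row, col) shift of `moving` relative to `ref`
```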

Four different algorithms have been implemented, namely spatial-frequency-based image fusion, wavelet-based image fusion, the Bayesian method of fusion and the Dempster-Shafer method of fusion. Each of these is explained briefly below. All processing and display are done in Matlab; the image fusion toolbox for Matlab [6] was initially used to check the image fusion.

1) Spatial-frequency-based image fusion: Spatial frequency (SF) [1] measures the overall information level in an image. For an image I of dimension M x N it is defined as follows:

RF = \sqrt{ \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=1}^{N-1} [ I(i,j) - I(i,j-1) ]^2 }   (1)

CF = \sqrt{ \frac{1}{MN} \sum_{j=0}^{N-1} \sum_{i=1}^{M-1} [ I(i,j) - I(i-1,j) ]^2 }   (2)

SF = \sqrt{ RF^2 + CF^2 }   (3)

Equation (3) gives the spatial frequency from the row frequency (1) and the column frequency (2), where i and j are pixel positions in the image I. Once the spatial frequencies of the source images are calculated, they are normalized so that the normalized spatial frequencies sum to 1. These normalized spatial frequencies act as weights on the input images (or on regions of the input images) and fusion is performed as a weighted combination. The intuition is that if the variation of pixel values is high, the image contains more information, and a higher spatial frequency value is obtained for that image.
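A minimal sketch of this scheme for two whole source images (NumPy rather than the paper's Matlab; the weighted-average fusion rule follows the description above):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of an M x N image, following equations (1)-(3)."""
    I = img.astype(float)
    rf2 = np.sum((I[:, 1:] - I[:, :-1]) ** 2) / I.size  # RF^2, eq. (1)
    cf2 = np.sum((I[1:, :] - I[:-1, :]) ** 2) / I.size  # CF^2, eq. (2)
    return np.sqrt(rf2 + cf2)                           # SF,   eq. (3)

def fuse_sf(img_a, img_b):
    """Weighted average with normalized spatial frequencies as weights."""
    sa, sb = spatial_frequency(img_a), spatial_frequency(img_b)
    w = sa / (sa + sb + 1e-12)                          # weights sum to 1
    return w * img_a.astype(float) + (1 - w) * img_b.astype(float)
```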

2) Wavelet-based image fusion: Wavelet transform techniques [7] have been compared with other fusion techniques, and results have shown that wavelet-based image fusion improves the spatial resolution with minimal distortion of the spectral content of the original image.

Fig. 1 Wavelet-based image fusion.

The wavelet transform is a mathematical tool developed in the field of signal processing (Figure 1). It decomposes a signal over elementary functions, the wavelets. Using it, a digital image is decomposed into a set of multi-resolution images with wavelet coefficients; for each level, the coefficients contain the spatial differences between two successive resolution levels.

The Discrete Wavelet Transform of a signal x is calculated by passing it through a series of high-pass and low-pass filters:

y[n] = (x * g)[n] = \sum_{k=-\infty}^{\infty} x[k] \, g[n-k]   (4)

First the samples are passed through a low-pass filter with impulse response g, resulting in the convolution of the two as shown in (4). The signal is simultaneously decomposed using a high-pass filter h. The outputs give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter).


Moreover, the wavelet transform is well suited to image fusion. With this method it is possible to consider and fuse image features separately at different scales, producing sets of coefficients in the transformed image. When images are merged in wavelet space, different frequency ranges can be processed differently; for example, high-frequency information from one image can be combined with lower-frequency information from another to perform edge enhancement.

With the required number of decompositions and the chosen type of wavelet, the wavelet coefficients of the source images are obtained and a fusion rule is specified. Once the wavelet-transformed images are fused, the fused wavelet coefficients are obtained and the Inverse Wavelet Transform is performed to get the fused image.
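A minimal sketch of such a scheme using the PyWavelets package (the paper used Matlab; the fusion rule here, averaging the approximation band and keeping the larger-magnitude detail coefficient, is one common choice consistent with the "Wavelet (max)" label in Fig. 7, while the wavelet and decomposition level are our assumptions):

```python
import numpy as np
import pywt  # PyWavelets

def fuse_wavelet(img_a, img_b, wavelet="db2", level=2):
    """DWT fusion: average approximations, keep max-magnitude details."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # approximation band
    for da, db in zip(ca[1:], cb[1:]):               # (H, V, D) bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)             # inverse transform
```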

3) Bayesian method of image fusion: How best to estimate an unknown parameter x from a set of data divides statisticians. For example, we might wish to identify a defect based on a set of measurements of useful parameters. Two important estimates of the best value of x are: the maximum likelihood estimate, the value of x that maximizes P(data | x); and the maximum a posteriori estimate, the value of x that maximizes P(x | data).

These two estimates can differ, but they can always be related using Bayes' theorem. A standard difficulty with Bayes' theorem lies in supplying values for the so-called prior probabilities:

P(H | D) = P(D | H) P(H) / P(D)   (5)

Here H is a hypothesis and D is the data:
- P(H) is the prior probability of H: the probability that H is correct before the data D was seen.
- P(D | H) is the conditional probability of seeing the data D given that the hypothesis H is true; it is called the likelihood.
- P(D) is the marginal probability of D.
- P(H | D) is the posterior probability: the probability that the hypothesis is true given the data and the previous state of belief about the hypothesis.

The prior probabilities that need to be supplied are the probability of the hypothesis, P(H), taken as 0.5 if we do not know with certainty whether the hypothesis is true or false; the probability of the data from the sensor, P(D); and the probability of the data given the hypothesis, P(D | H). With these inputs, (5) gives the probability that the hypothesis is true given the data from the sensor, P(H | D). These posteriors act as weights on the source images, or on regions of the source images if fusion is done after segmentation.
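A minimal sketch of this weighting scheme (NumPy; how the likelihoods are obtained from the sensors is not specified in the paper, so they appear here as inputs, and the function names are ours):

```python
import numpy as np

def posterior(p_d_given_h, p_h=0.5, p_d_given_not_h=0.5):
    """Bayes' theorem, eq. (5), with P(D) expanded over H and not-H."""
    p_d = p_d_given_h * p_h + p_d_given_not_h * (1.0 - p_h)
    return p_d_given_h * p_h / p_d

def fuse_bayes(img_a, img_b, post_a, post_b):
    """Normalized posteriors act as weights on the source images."""
    w = post_a / (post_a + post_b)
    return w * img_a.astype(float) + (1 - w) * img_b.astype(float)
```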

4) Dempster-Shafer method of image fusion: The Dempster-Shafer method [8] involves attaching weights, called masses, to the input source images; the sum of all the masses must be 1. Let X be the universal set: the set of all states under consideration. The power set P(X) is the set of all possible subsets of X, including the empty set Ø. The elements of the power set can be taken to represent propositions that we might be interested in, a proposition being represented by the set containing all and only the states in which it is true. By definition, the mass of the empty set is zero:

m(Ø) = 0 (6)

The mass m(A) of a given member A of the power set expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs to A but to no particular subset of A. The value of m(A) pertains only to the set A and makes no additional claims about any of its subsets, each of which has, by definition, its own mass. The problem is how to combine two independent sets of mass assignments m1 and m2. The combination is calculated as follows:

m_{1,2}(Ø) = 0

m_{1,2}(A) = \frac{1}{1 - K} \sum_{B \cap C = A \neq Ø} m_1(B) \, m_2(C)   (7)

where

K = \sum_{B \cap C = Ø} m_1(B) \, m_2(C)   (8)

K is a measure of the amount of conflict between the two mass sets. The normalization factor 1 - K has the effect of completely ignoring conflict and attributing any mass associated with conflict to the null set. Dempster's rule of combination (7) is a generalization of Bayes' theorem for the case where the events are independent.
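A minimal sketch of the combination rule (masses are represented as dictionaries over frozensets; the hypothesis labels and mass values in the example are hypothetical):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination, eqs. (7)-(8)."""
    k = sum(v1 * v2
            for (s1, v1), (s2, v2) in product(m1.items(), m2.items())
            if not (s1 & s2))                        # conflict mass K, eq. (8)
    fused = {}
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:                                    # eq. (7); m(empty) stays 0
            fused[inter] = fused.get(inter, 0.0) + v1 * v2 / (1.0 - k)
    return fused

# Example with the three hypotheses used below: defect, no defect, doubt.
m1 = {frozenset({"defect"}): 0.6, frozenset({"ok"}): 0.1,
      frozenset({"defect", "ok"}): 0.3}
m2 = {frozenset({"defect"}): 0.5, frozenset({"ok"}): 0.2,
      frozenset({"defect", "ok"}): 0.3}
print(dempster_combine(m1, m2))
```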

On each pixel we consider three hypotheses: the hypothesis of defect presence, associated with an evidence mass called the positive evidence; the hypothesis of defect absence, associated with an evidence mass called the negative evidence; and the hypothesis of defect presence or absence, associated with an evidence mass called the doubt evidence.

The calculation stages are as follows. We calculate the amplitude average and the standard deviation over a neighborhood, and from them two indicators: a low limit, corresponding to the amplitude average minus the standard deviation, and a high limit, corresponding to the amplitude average plus the standard deviation. We suppose that the global amplitude distribution is described by a normal distribution.


Fig. 2 Definition of evidence masses. [8]

The area under this curve up to the low limit corresponds to the positive evidence, the area under the curve beyond the high limit corresponds to the negative evidence, and the area between the low and high limits corresponds to the doubt (Fig. 2).
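A minimal per-pixel sketch of these stages (NumPy/SciPy; the neighborhood size is our assumption):

```python
import numpy as np
from scipy.stats import norm
from scipy.ndimage import uniform_filter

def evidence_masses(img, win=5):
    """Positive / negative / doubt evidence per pixel, following Fig. 2."""
    I = img.astype(float)
    local_mean = uniform_filter(I, win)              # neighborhood average
    local_sq = uniform_filter(I ** 2, win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    mu, sigma = I.mean(), I.std() + 1e-12            # global normal model
    positive = norm.cdf(local_mean - local_std, mu, sigma)        # up to low limit
    negative = 1.0 - norm.cdf(local_mean + local_std, mu, sigma)  # beyond high limit
    doubt = 1.0 - positive - negative                # between the limits
    return positive, negative, doubt                 # masses sum to 1 per pixel
```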

B. Fusion process

The above algorithms are implemented for image fusion, and their performance has been evaluated with the help of metrics. This paper aims at developing an intelligent image fusion scheme; the procedure for implementing such a scheme is discussed below. The fusion scheme operates region by region, with the image segmented first. Once the image is segmented with the required (feasible) window size, all four algorithms are run separately on a region; the same process is repeated for the other regions, so that the final result is the whole image. We thus obtain a fused image containing the best and most relevant information present. The performance of the algorithms has been observed in different environments and is discussed in the results section through the calculation of metrics.
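A minimal sketch of this region-wise selection (NumPy; the paper does not state the selection criterion or window size precisely, so here each region keeps the candidate with the highest SNR against a reference image, with `fusers` being any list of two-image fusion functions such as the sketches above):

```python
import numpy as np

def snr_db(ref, img):
    err = ref.astype(float) - img.astype(float)
    return 20.0 * np.log10(np.linalg.norm(ref) / (np.linalg.norm(err) + 1e-12))

def fuse_best_per_region(img_a, img_b, ref, fusers, win=32):
    """Per region, run every fusion algorithm and keep the best by SNR."""
    out = np.zeros(img_a.shape, dtype=float)
    for r in range(0, img_a.shape[0], win):
        for c in range(0, img_a.shape[1], win):
            sl = (slice(r, r + win), slice(c, c + win))
            results = [f(img_a[sl], img_b[sl]) for f in fusers]
            out[sl] = max(results, key=lambda x: snr_db(ref[sl], x))
    return out
```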

C. Feature detection

The post-processing stage involves detecting the features present in the image. Gradient-based edge detection methods, such as Sobel edge detection [9], are sensitive to variations in image illumination, blurring and magnification. A model of feature perception, the Local Energy Model developed by Morrone et al., postulates that features are perceived at points where the Fourier components are maximally in phase. Values of phase congruency vary from a maximum of 1 (indicating a very significant feature) down to 0 (indicating no significance). This allows one to specify a threshold to pick out features before an image is seen.

The measurement of phase congruency at a point in a signal can be seen geometrically in Figure 3. The local, complex-valued Fourier components at a location x in the signal each have an amplitude A_n(x) and a phase angle φ_n(x). Figure 3 plots these local Fourier components as complex vectors adding head to tail. The magnitude of the vector from the origin to the end point is the local energy, |E(x)|.

Fig. 3 Polar diagram showing the local Fourier components at a location x in the signal, plotted head to tail; the vector from the origin to their end point is the local energy E(x). The noise circle represents the level of |E(x)| one can expect just from the noise in the signal. [10]

The measure of phase congruency developed by Morrone et al. [10] is

PC_1(x) = \frac{|E(x)|}{\sum_n A_n(x)}   (9)

Under this definition, phase congruency is the ratio of |E(x)| to the overall path length taken by the local Fourier components in reaching the end point. If all the Fourier components are in phase, all the complex vectors are aligned and the ratio |E(x)| / \sum_n A_n(x) is 1; if there is no coherence of phase, the ratio falls to a minimum of 0. Phase congruency thus provides a measure that is independent of the overall magnitude of the signal, making it invariant to variations in image illumination and/or contrast.
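For a one-dimensional signal, PC_1 of (9) can be computed directly from the global Fourier decomposition, since the magnitude of the analytic signal is exactly |E(x)|. A small NumPy/SciPy sketch of this (the function name is ours; Kovesi's practical measure [10] instead uses localized wavelet filters):

```python
import numpy as np
from scipy.signal import hilbert

def phase_congruency_1d(signal):
    """PC1(x) = |E(x)| / sum_n A_n(x), eq. (9), for a 1-D signal."""
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()                      # remove the DC component
    local_energy = np.abs(hilbert(s))     # |E(x)| at every position x
    spec = np.fft.rfft(s)
    amp_sum = 2.0 * np.sum(np.abs(spec)) / len(s)  # sum of component amplitudes
    return local_energy / (amp_sum + 1e-12)

# A pure cosine is maximally in phase everywhere, so PC1 is close to 1:
x = np.cos(2 * np.pi * 5 * np.linspace(0, 1, 512, endpoint=False))
print(phase_congruency_1d(x).round(2))
```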

IV. RESULTS AND DISCUSSION

The performance of the algorithms has been observed with the help of different metrics: Signal-to-Noise Ratio (SNR), Root Mean Square Error (RMSE), Percentage Fit Error (PFE) and Mean Absolute Error (MAE), which are also described in [11]. The higher the SNR value and the lower the error values, the better the fusion. These metrics have been calculated for all four algorithms in different environments. Among them, SNR is the most useful for comparison.
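The paper does not reproduce the exact formulas; here is a sketch using common definitions of these metrics (cf. [11]), with the reference image as an input:

```python
import numpy as np

def fusion_metrics(ref, fused):
    """SNR, RMSE, MAE and PFE between a reference and a fused image."""
    r = ref.astype(float)
    e = r - fused.astype(float)
    return {
        "SNR":  20.0 * np.log10(np.linalg.norm(r) / (np.linalg.norm(e) + 1e-12)),
        "RMSE": np.sqrt(np.mean(e ** 2)),     # root mean square error
        "MAE":  np.mean(np.abs(e)),           # mean absolute error
        "PFE":  100.0 * np.linalg.norm(e) / np.linalg.norm(r),  # percentage fit error
    }
```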

Fig. 4 First image (left) showing the defects at 75 kHz, and second image (right) at 300 kHz, obtained using the eddy current method of nondestructive testing.


Fig. 5 First image (left) showing the defects at 75 kHz, and second image (right) at 300 kHz, with added Gaussian noise of 0.1 standard deviation.

(a) (b) (c) (d)

Fig. 6 Image fusion using (a) the spatial frequency method, (b) the wavelet-based method and (c) the Bayesian method, and (d) the evidence images from the Dempster-Shafer method of fusion (negative, doubt and positive evidence images). All processing and display were done in Matlab.

It can be seen in Fig. 4 that two features present on the right of the images are faint, whereas those features are clearly visible in Fig. 6, showing the necessity of image fusion. Fusion was tried out in various environments and the following was observed using Matlab.

Fig. 7 Metrics plotted for the fusion of the images in Fig. 4 and of the noisy images in Fig. 5.

It is seen from the metrics that wavelet-based image fusion performs much better in the absence of noise. When Gaussian noise of 0.1 standard deviation is added to the images (to simulate surface roughness), the wavelet method proves quite sensitive to noise and its performance drops drastically, from an SNR of nearly 40 to an SNR of less than 8. Spatial-frequency-based fusion then turns out to be the most efficient of the four, with a higher SNR and lower MAE, RMSE and PFE values. The spatial frequency method does better in the noisy case because it builds on the premise that the more the variation in an image, the more its information content.

Similarly, when fusion is done for multi-sensor images (the probe diameter is varied and images are obtained at different frequencies), we observe that all four algorithms perform equally well in the different environments, except for one case: the Dempster-Shafer method of fusion performs much better than the other algorithms when information is to be extracted at higher frequency ranges. There the SNR for the Dempster-Shafer method is nearly double that of the other methods of fusion: slightly more than 27, whereas the other methods come out below 15.

One easily observable result is that multi-sensor fusion is better than single-sensor fusion. This is to be expected, since different sensors are capable of sensing different aspects, so the final image contains more information than the source images. It can thus be observed that a particular image fusion algorithm need not be the best under all conditions, and selection of the best fusion scheme is therefore a necessity. Hence a hybrid, and at the same time intelligent, fusion scheme is proposed as future work.

[Fig. 7 bar charts: SNR, RMSE, MAE and PFE values (vertical scale 0 to 45) for the SF, Wavelet (max), Bayesian and Dempster-Shafer methods.]


V. CONCLUSIONS

With the development of new sensors for imaging, image fusion has become an important area of research, being a technique capable of quickly merging massive volumes of data while preserving most of the information. Until now, these contemporary methods have only been implemented individually; implementing them together might lead to better results. The methods have been evaluated with statistical metrics and compared quantitatively. When the contemporary methods are used individually, each performs differently; hence a combination of the methods is the best way to carry as much information as possible from the given images into the final image.

The statistics show that no particular method of fusion can be termed the best, because performance depends on the conditions, and different algorithms perform better under different conditions. Hence there is a need to select the best method of fusion judiciously and carefully.

In the future, we plan to implement a hybrid version of image fusion wherein the best fusion algorithm for a particular region of an image is determined from the metrics and that fusion scheme is then applied; the same process can be performed for the remaining regions that constitute the image, thereby obtaining an overall image containing almost all the information from all the source images.

REFERENCES

[1] "...wavelet decomposition," in Proc. IEEE Geosci. Remote Sens. Symp., vol. 2, pp. 1383-1385, July 2003.
[2] M. A. Hurn, K. V. Mardia, T. J. Hainsworth, and J. Kirkbride, "Bayesian fused classification of medical images," IEEE Trans. Med. Imag., vol. 15, no. 6, pp. 850-858, December 1996.
[3] X. E. Gros, Z. Liu, K. Tsukada, and K. Hanasaki, "Experimenting with pixel-level NDT data fusion techniques," IEEE Trans. Instrum. Meas., vol. 49, no. 5, pp. 1083-1090, October 2000.
[4] "...sensor Data Fusion..."
[5] C. Pohl and J. L. van Genderen, "Multisensor image fusion in remote sensing: concepts, methods and applications," Int. J. Remote Sens., vol. 19, no. 5, pp. 823-854, 1998.
[6] Image fusion toolbox for Matlab 5.x. [Online]. Available: http://www.metapix.de/toolbox.htm
[7] ...
[8] ...
[9] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vision Conference, Manchester, 1988, pp. 147-151.
[10] P. D. Kovesi, "Phase congruency: A low-level image invariant," Psychological Research, vol. 64, pp. 136-148, 2000.
[11] N. Cvejic, A. Loza, D. Bull, and N. Canagarajah, "A similarity metric for assessment of image fusion algorithms," Int. J. Signal Process., vol. 2, no. 3, pp. 178-182, 2005.
