Interpretation of Multisensor Remote Sensing Images: Multiapproach Fusion of Uncertain Information

Imed Riadh Farah, Wadii Boulila, Karim Saheb Ettabaâ, Basel Solaiman, and Mohamed Ben Ahmed

Abstract—Land cover interpretation using multisensor remote sensing images is an important task that allows the extraction of information that is useful for several applications. However, satellite images are usually characterized by several types of imperfection, such as uncertainty, imprecision, and ignorance. Using additional sensors can help improve the image interpretation process and decrease the associated imperfections. Fusion methods such as the probability, possibility, and evidence methods can be used to combine information coming from these sensors. An extensive literature has accumulated during the last decade to resolve the issue of choosing the best fusion method, particularly for satellite images. In this paper, we present a semiautomatic approach based on case-based reasoning (CBR) and rule-based reasoning, allowing intelligent fusion method retrieval. This approach takes into account the advantage of data stored in the case base, allowing a more efficient processing and a decrease in image imperfections. The proposed approach incorporates three modules. The first is a learning module based on evaluating three fusion methods (probability, possibility, and evidence) applied to the given satellite images. The second looks for the best fusion method using CBR. The last is devoted to the fusion of multisensor images using the method retrieved by CBR. We validate our approach on a set of optical images coming from the Satellite Pour l'Observation de la Terre 4 and radar images coming from the European Remote Sensing Satellite 2 (ERS-2) representing a central Tunisian region.

Index Terms—Classification, data fusion, data imprecision, data uncertainty, evidence theory, interpretation, land cover detection, possibility theory, probability theory.

Manuscript received October 30, 2007; revised February 11, 2008 and March 24, 2008. Current version published November 26, 2008.

I. R. Farah is with the University of Jendouba, Jendouba 8100, Tunisia (e-mail: [email protected]).

W. Boulila, K. S. Ettabaâ, and M. B. Ahmed are with Laboratory RIADI, National School of Computer Sciences Engineering, 2010 Manouba, Tunisia (e-mail: [email protected]; [email protected]; [email protected]).

B. Solaiman is with Laboratoire ITI, TELECOM-Bretagne, Technopôle Brest Iroise, 29238 Brest, France (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TGRS.2008.2000817

I. INTRODUCTION

INTERPRETATION of multisensor remote sensing images is in full evolution, allowing the generation of up-to-date land cover information. Due to the development of various image sensors (visible, infrared, synthetic aperture radar (SAR), etc.), interpretation of the scene can be done through the fusion of data provided by these sensors (multisensor fusion) [1]. However, the interpretation process is generally characterized by numerous types of imperfection [2]. Therefore, the problem of managing imprecise and uncertain data is growing; in fact, imprecision and uncertainty are becoming more complex in multisensor fusion [1]. Interpretation systems should be able to deal with this kind of information [3]. We have to choose between two possible solutions: either representing uncertainty and imprecision or working with uncertain and imprecise information. In this paper, we present the three most commonly used mathematical frameworks that try to overcome the problem of imperfection accompanying the image interpretation process, i.e., the probability, possibility, and evidence theories [1]. Each of these three models has its own operations to combine and process information and is more appropriate for a specific situation of given information and for a particular type of imperfection [4]. The most commonly used image interpretation systems do not take into account the imperfection accompanying these images, and the few systems that do use only one theory with very restricted parameters [3].

In this paper, we present a semiautomatic interpretation of remotely sensed images based on case-based reasoning (CBR), rule-based reasoning (RBR), and a multiagent system. The choice of such an architecture improves the automatic interpretation aspect, develops the parallelism of tasks, and offers better efficiency and flexibility. Our approach seems to be fruitful since it constructs a flexible and extensible system that improves the capability and efficiency of processing image imperfections with different approaches; additionally, it provides classified images that are close to the ground truth image. We decided to follow CBR to simulate expert reasoning, which offers better efficiency to our system when choosing the most adapted fusion method. In order to validate our approach, high-resolution imagery is needed for an accurate analysis of the land cover [5]. Thus, we used images coming from two different satellites: SAR images from the European Remote Sensing Satellite 2 (ERS-2) and Satellite Pour l'Observation de la Terre 4 (SPOT 4) images obtained at approximately the same date in 1998. We present an example consisting of five images taken by the satellites previously mentioned (Fig. 1). The study area is located in the central region of Tunisia (North Africa).

Fig. 1. Multisensor satellite images.

II. MULTIAPPROACH FUSION METHOD

Much work has investigated the problem of image interpretation, particularly the issue of incomplete information provided by a single sensor resulting in image misclassification. To solve this problem, many authors consider the use of data fusion, which is defined as the process of combining data coming from several sources in order to refine the prediction [1], [6] and to give complementary attributes providing information about land cover for the remote sensing community [7]. Indeed, the redundancy and complementarity of data provided by fusion reduce the imperfection and give a suitable image interpretation. Thus, interpretation systems have to deal with these types of data in order to decrease imprecision and uncertainty. Several models such as the probability, possibility, and evidence methods are used in uncertainty and imprecision representation. Many studies have explored this field, among which we can list Hudelot, who presents a distributed architecture founded on the cooperation of three knowledge-based systems to manage and separate different sources of knowledge and reasoning [3]. In this architecture, imperfection is modeled through the possibility approach. In [8], Colliot et al. recommended modeling spatial relationships in a fuzzy set framework. The application is the recognition of brain internal structures in magnetic resonance images. Cleynenbreugel et al. used the evidence method in the expert systems judgment field. In this context, the evidence method is used to validate segment matching in multitemporal images [9].

In several image fusion studies, interpretation systems use a single model to process all types of imperfection associated with images. Our approach, however, uses the three fusion methods mentioned earlier (the probability, possibility, and evidence methods). Let us consider a general fusion problem for which we have $l$ heterogeneous sources $S_1, S_2, \ldots, S_l$. Our goal is to select one decision, given a set of $n$ possible decisions $d_1, d_2, \ldots, d_n$ [6].

The fusion process has four main steps: 1) modeling; 2) estimation; 3) combination; and 4) decision. The modeling step includes choosing the fusion method and finding a way to express information within this method. The estimation step involves determining the numerical distributions needed to estimate the information to fuse. In the third step, the choice of an appropriate combination operator is made. The decision step involves choosing a decision based on the information provided by the sources. The fusion architecture depends on the way these steps are arranged.

We have essentially four types of fusion architecture. The first type is global fusion, where a decision is made based on all the sources, taking into account all types of information. The second fusion model consists of taking a separate decision for each source: a decision $d_j$ is taken based only on the information provided by source $S_j$, and then, as a second step, these local decisions are merged into a global decision. This is a decentralized fusion model; it is most adapted when the sources are not simultaneously available [1]. The third fusion model involves combining the sources relative to the same decision with a function $F$; afterward, a decision is taken [4]. Indeed, we combine all the images for each class; we get $k$ possible decisions of class adherence ($k$ is the number of classes). According to the fixed decision criterion, we choose the most suitable decision. In this case, there is no need to take an intermediate decision, and information is manipulated in the chosen formalism until the last step, decreasing contradictions and conflicts. This type, like the global model, is a centralized model that requires having all sources available at the same time; however, it is simpler [1]. The last fusion model is an intermediate hybrid model, which involves choosing adaptive information according to the sources' features. This model introduces symbolic knowledge about the images and zones. For example, the class "river" is well identified in radar images, unlike in optical images. Multiagent architectures fit this model very well. Among these four fusion types, the three fusion methods (probability, possibility, and evidence) usually belong to the third model [4].

A. Probability Method

The probability method models uncertain information. Information is represented by a conditional probability $p(x \in C_i \mid I_j)$, i.e., the probability that a pixel belongs to a particular class $C_i$, considering the available images $I_j$.

The combination rule used in this theory is Bayes' rule [1], [10], which assumes the use of independent sensors. It can be written as follows:

$$p(x \in C_i \mid I_1, \ldots, I_l) = \frac{\prod_{j=1}^{l} p(I_j \mid x \in C_i)\; p(x \in C_i)}{p(I_1, \ldots, I_l)} \qquad (1)$$

where
$p(I_j \mid x \in C_i)$ is the conditional probability of observing image $I_j$ given that element $x$ belongs to class $C_i$ [10];
$p(I_1, \ldots, I_l)$ is a normalization term, which is constant for all events (it does not depend on contextual information); when using independent sources, this term is equal to $p(I_1) \cdots p(I_l)$, where $p(I_j)$ represents the probability that an element taken at random belongs to the $j$th image ($1 \le j \le l$);
$p(x \in C_i)$ is the prior probability that $x \in C_i$.

The most common decision rule in the probability approach is the maximum a posteriori (MAP) rule, defined as

$$x \in C_i \quad \text{if} \quad p(x \in C_i \mid I_1, \ldots, I_l) = \max\left\{p(x \in C_k \mid I_1, \ldots, I_l),\ 1 \le k \le n\right\}. \qquad (2)$$
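To make the combination (1) and decision (2) steps concrete, here is a minimal numpy sketch (our illustration, not code from the paper; the array layout is an assumption): it fuses per-image class likelihoods for one pixel and returns the MAP class.

```python
import numpy as np

def bayes_fuse(likelihoods, priors):
    """Fuse per-image class likelihoods with Bayes' rule (1).

    likelihoods: (l, n) array of p(I_j | x in C_i) for l images, n classes.
    priors:      (n,) array of p(x in C_i).
    Returns the posterior p(x in C_i | I_1..I_l) and the MAP class of (2).
    """
    joint = priors * np.prod(likelihoods, axis=0)  # numerator of (1)
    posterior = joint / joint.sum()                # normalization p(I_1..I_l)
    return posterior, int(np.argmax(posterior))    # decision rule (2)
```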

B. Possibility Method

The possibility approach is used to model either uncertain or imprecise information [11]. Information in this theory is modeled by $\pi_j^x(C_i)$, i.e., the degree of possibility that the class to which $x$ belongs is class $C_i$ when referring to an image $j$ [1].

There are a variety of combination operators that can deal with heterogeneous information. This approach provides great flexibility when choosing the operator; we can list t-norms, t-conorms, means, etc. These operators have the same behavior regardless of the value of the information to combine, and they are computed without any contextual information [4].

The decision is usually taken by the maximum degree of membership after the combination step, defined as follows:

$$x \in C_i \quad \text{if} \quad \pi^x(C_i) = \max\left\{\pi^x(C_k),\ 1 \le k \le n\right\}. \qquad (3)$$
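As an illustration of this flexibility, the following sketch (ours; shapes are assumptions) combines the per-image possibility distributions with one of the three operator families named above and applies the maximum-membership rule (3).

```python
import numpy as np

def possibilistic_fuse(pi, operator="t-norm"):
    """Combine possibility distributions pi_j^x(C_i) from l images.

    pi: (l, n) array, one possibility distribution per image.
    min and max are used here as the prototypical t-norm and t-conorm.
    """
    if operator == "t-norm":       # conjunctive: all sources reliable
        combined = pi.min(axis=0)
    elif operator == "t-conorm":   # disjunctive: at least one source reliable
        combined = pi.max(axis=0)
    else:                          # mean: compromise between sources
        combined = pi.mean(axis=0)
    return combined, int(np.argmax(combined))  # decision rule (3)
```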

C. Evidence Method

Dempster–Shafer theory, or evidence theory, allows the representation of imprecision and uncertainty using the mass, plausibility, and belief functions [1], [12], [13].

The mass functions are defined on $2^D$, where $D = \{C_1, C_2, \ldots, C_n\}$, $C_i$ is class $i$, and $n$ is the number of classes.

— The mass functions are $m: 2^D \to [0,1]$, with $m(\emptyset) = 0$ and
$$\sum_{A \subseteq D} m(A) = 1. \qquad (4)$$

— The belief functions are
$$\mathrm{Bel}(A) = \sum_{B \subseteq A,\, B \neq \emptyset} m(B) \qquad \forall A \in 2^D. \qquad (5)$$

— The plausibility functions are
$$\mathrm{Pls}(A) = \sum_{B \cap A \neq \emptyset} m(B) = 1 - \mathrm{Bel}(A^{C}) \qquad \forall A \in 2^D. \qquad (6)$$

In this method, the mass functions are combined using Dempster's orthogonal rule [1]

$$m(A) = (m_1 \oplus m_2 \oplus \cdots \oplus m_l)(A) = \frac{\sum_{B_1 \cap \cdots \cap B_l = A} m_1(B_1)\, m_2(B_2) \cdots m_l(B_l)}{1 - K} \qquad (7)$$

where

$$K = \sum_{B_1 \cap \cdots \cap B_l = \emptyset} m_1(B_1)\, m_2(B_2) \cdots m_l(B_l). \qquad (8)$$

$K$ represents the degree of conflict between the $l$ sources.

After combination, the decision is taken using one of several rules [4].

— The maximum of plausibility is defined as
$$x \in C_i \quad \text{if} \quad \mathrm{Pls}(C_i)(x) = \max\left\{\mathrm{Pls}(C_k)(x),\ 1 \le k \le n\right\}. \qquad (9)$$

— The maximum of belief is defined as
$$x \in C_i \quad \text{if} \quad \mathrm{Bel}(C_i)(x) = \max\left\{\mathrm{Bel}(C_k)(x),\ 1 \le k \le n\right\}. \qquad (10)$$
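For readers who want to trace (7) and (8) on a toy example, here is a small sketch of Dempster's rule for two sources (our illustration; focal elements are encoded as frozensets, and the masses shown are made up).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule (7)-(8).

    m1, m2: dicts mapping focal elements (frozensets of class labels)
            to masses; masses are assumed to sum to 1 in each source.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # K in (8)
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Hypothetical masses for an optical and a radar source over {urban, water}:
m_opt = {frozenset({"urban"}): 0.6, frozenset({"urban", "water"}): 0.4}
m_sar = {frozenset({"water"}): 0.3, frozenset({"urban", "water"}): 0.7}
print(dempster_combine(m_opt, m_sar))
```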

III. INTERPRETATION OF THE REMOTE SENSING IMAGE APPROACH

We propose a modular approach based on CBR and a multiagent system, as shown in Fig. 2. The proposed system contains a set of agents such as the fusion agents (probability, possibility, and evidence agents) and the zone detection agents (urban, cultivated, humid, sebkha, and parcel agents).

Fig. 2. Proposed architecture.

Satellite images contain four types of uncertainty and imprecision.

1) Limit positioning errors (small zones are erroneous or added to the neighboring zones).
2) Pixels belonging to a specific class can appear locally in a zone identified by another land cover type (for example, some pixels of a river in a zone representing a road).
3) Overflows of the neighboring agriculture.
4) Inability to clearly determine the land cover type of a zone because two or several land cover types share the same zone.


Fig. 3. Learning module.

A. Reasoning Steps

Our approach is divided into the following three main steps:

1) a learning step based on evaluating three fusion methods (probability, possibility, and evidence) applied to the given satellite images;
2) a retrieval step that helps us find the closest case from the experiences made by experts during the learning step;
3) a fusion step, in which images presented to our system are fused by the method provided by the retrieval step.

We have chosen to do reasoning based on cases since it simulates human analysis [14]; it allows us to compare new cases to experiments already made. Thus, CBR has the advantage of minimizing the expert's intervention in the image interpretation process [15].

CBR has several advantages over approaches based on generalized knowledge. Indeed, it is extremely difficult to find an expert who is able to formalize his decision processes and communicate his knowledge [16]. He is frequently led to judge situations through his intuition and imagination. Moreover, we choose to add RBR to CBR in order to justify knowledge represented by cases [17]. Mixing CBR and RBR improves the reliability and efficiency of our image interpretation system.

IV. STEP 1: LEARNING

Learning in CBR is done from experience since it is usually easier to learn by retaining a concrete problem-solving experience than to generalize from it. In the learning step, CBR reaches some maturity by acquiring a set of expert opinions after evaluating the three applied fusion methods (Fig. 3).

As it is generally hard for an expert to select the right approach based on his experience and intuition, we choose to apply all three fusion approaches (probability, possibility, and evidence) and pick one based on the evaluation module. The goal of that component is to support the expert when learning. Validating the best fusion method is reserved for the expert.

In order to accomplish this, we fixed several criteria allowing the comparison of the thematic images obtained by the three fusion methods and the ground truth image. The fixed criteria are as follows: average absolute difference (AAD), signal-to-noise ratio (SNR), and mean square error (MSE) [18].

Let us consider $I(x, y)$ as the original (ground truth) image, $I'(x, y)$ as its approximated version (here, the fused image), and $M, N$ as the image dimensions.

TABLE I
IDEAL VALUES OF GOOD IDENTIFICATION OF DIFFERENT APPROACHES

1) AAD: AAD is the absolute average of the error [18]. It is defined as follows:

$$\mathrm{AAD} = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left|I(m,n) - I'(m,n)\right|. \qquad (11)$$

A small value of AAD indicates that a fused image is very close to the ground truth image.

2) SNR: SNR is a measure of the noise rate in the image [18]. SNR is defined as

$$\mathrm{SNR} = \sum_{m=1}^{M} \sum_{n=1}^{N} I(m,n)^2 \Big/ \sum_{m=1}^{M} \sum_{n=1}^{N} \left[I(m,n) - I'(m,n)\right]^2. \qquad (12)$$

A very large value of SNR indicates that the fused image (using one of the fusion methods) is very close to the ground truth image.

3) MSE: MSE is the cumulative square error between the original image (ground truth image) and the fused image [18]. MSE is defined as

$$\mathrm{MSE} = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left[I(m,n) - I'(m,n)\right]^2. \qquad (13)$$

A low MSE value describes a small error. An image that is a perfect reproduction of the original one will have an MSE equal to zero, whereas an image that greatly differs from the original one will have a large MSE.
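The three criteria are straightforward to compute; a minimal sketch (ours) for two same-sized grayscale arrays follows.

```python
import numpy as np

def evaluate_fusion(truth, fused):
    """Compare a fused image to the ground truth with (11)-(13)."""
    truth = truth.astype(float)
    err = truth - fused.astype(float)
    aad = np.abs(err).mean()                      # AAD, (11)
    snr = (truth ** 2).sum() / (err ** 2).sum()   # SNR, (12)
    mse = (err ** 2).mean()                       # MSE, (13)
    return {"AAD": aad, "SNR": snr, "MSE": mse}
```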

In addition to validating the best fusion method, the expert has the task of assigning a weight to each feature and a degree of relevance to each case. Additionally, he is required to adapt these weights and degrees to perform case retrieval (Table I).

V. STEP 2: FUSION METHOD RETRIEVAL

A. CBR

The CBR approach is particularly useful for applications where we lack sufficient knowledge for either formal representation or parameter estimation [15]. CBR presents cases related to similar problems that were previously handled and suggests the solution adopted in similar situations. It also determines what the previous cases can provide to deal with the current problem. This process is similar to what happens inside the human brain and to the method by which many experts handle a difficult problem.

CBR supports, in an intuitively reasonable and understandable way, the handling of application domains where the notion of a case already exists naturally. This is the case in our work context: image fusion and interpretation, which is characterized by repetitive information.

Fig. 4. Cycle of CBR. 1: Case building = {features, ?}. 2: Retained case = {features, fusion method}. 3: Confirmed target case = {revised features, confirmed fusion method}. 4: Suggested target case = {adapted features, suggested fusion method}.

The CBR cycle includes four tasks: 1) feature processing; 2) case retrieval or similarity measurement; 3) adaptation; and 4) adjustment, as shown in Fig. 4.

1) Feature Processing: Among the difficulties in building CBR is finding the right features, allowing pertinent retrieval of a target case from a set of source cases. These features enable the CBR system to better differentiate between the available cases. We choose to work with two types of descriptors: global and local. Whereas global descriptors use the statistical features of the whole image, local descriptors use the visual features of regions to describe the image content. To obtain the local visual descriptors, an image is often divided into parts using an appropriate segmentation method.

a) Global descriptors: They include textual, color, and texture features [19]–[21].

• Textual Features: The objective of these features is to locate the work setting of satellite images for our interpretation system. They describe the zones to extract, the sensor information, and the nature of imperfection:

Textual features = {Nature of imperfection (imprecision or uncertainty); Nature of zones; Sensor information = {Type; Spectral interval; Degree of certainty; Conflict}}

• Color Features: These features are related to the color in the processed images. The histogram [21] is an example of these features.

• Texture Features: They are related to the contrast of the processed images. Among these features, we used entropy [22].

b) Local descriptors: These descriptors are based on shape features, which are extracted from the regions resulting from the segmentation of images. There is no universal segmentation method that could be used for all image types [23]–[25]. In our approach, we choose to apply the fuzzy c-means (FCM) method, which is one of the most popular clustering methods [26]. Moreover, this algorithm has proven to be quite appropriate for dealing with the imprecise nature of satellite images. FCM is an unsupervised algorithm, allowing each point to belong to a particular class with a degree of membership. The number of classes is fixed without human intervention; it may rely on a histogram-based preprocessing [26]. Among the local descriptors, we used averages, variance, and Zernike and Fourier–Mellin moments [27], [28].
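For reference, a compact numpy sketch of the standard FCM update is given below (our illustration; the histogram-based preprocessing used to fix the number of classes is not reproduced, and a random initialization is assumed).

```python
import numpy as np

def fcm(pixels, n_classes, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means on a (n_points, n_features) array.

    Returns the cluster centers and the fuzzy membership matrix U
    (n_points, n_classes): each pixel belongs to every class with a
    degree of membership.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(pixels), n_classes))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ pixels / um.sum(axis=0)[:, None]
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        d = np.fmax(d, 1e-12)                      # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # standard FCM update
    return centers, u
```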

2) Case Retrieval (Similarity Measures): Case retrieval is based on similarity measures. Generally, the efficiency of CBR is a result of choosing appropriate similarity measures. The purpose of this step is to obtain a set of cases that are similar enough to the new case.

Cases that match all input features are, evidently, good candidates for matching, but those that match only one part of the problem features may also be retrieved. By selecting only certain features for matching and enforcing constraints on feature values, a context allows the recognition of a partial match: Cases that satisfy the specified constraints for the context are considered similar and are relevant to the context.

For the output of this step, we get a set of cases having the form

Retained case = {features, fusion method}.

In our approach, we are faced with a problem regarding content-based image retrieval. Indeed, as the database grows, the difficulty of finding relevant images grows as well. To improve the performance of image queries, we split our work into two parts: global-based and region-based image retrieval.

The first part deals with filtering images according to their global features (textual, texture, and color features) and therefore decreases the number of images to be compared by region. In the second part, region-based image retrieval is performed on the images resulting from the first part. The main benefit of using this approach in case retrieval is the reduction of the time and effort required to obtain image-based information. To accomplish this, each image in the database is split into different regions by the FCM method described in Section V-A1b; then, local descriptors for each region are computed.

Let $S$ denote the input image and $B$ denote an image in the base. $S$ and $B$ contain $M$ and $N$ regions, respectively. A Bhattacharyya distance is used to compute the similarity measure according to (14). Let $\delta$ and $\delta'$ denote the areas of the respective regions in $S$ and $B$. The distance between two regions $R_i$ and $R'_j$ is described by

$$d_r(R_i, R'_j) = -w_i \log \sum_{p} \sqrt{F_i(p)\, F'_j(p)} \qquad (14)$$

where $F_i$ and $F'_j$ are the feature vectors. These features encode local descriptors about each region.


Fig. 5. Comparison of features over regions.

$w_i$ is defined as follows:

$$w_i = \frac{\min(\delta_i, \delta'_j)}{\sum_{i=1}^{M} \delta_i}. \qquad (15)$$

In our approach, each region in $S$ is compared with all the regions in $B$. This requires matching $k$ regions of $S$ to $k$ regions of $B$, where $k \le \min\{M, N\}$ (Fig. 5) [29]. The distance between two images is computed using the distances between the most resembling couples, keeping only those having a distance of less than a given threshold $th$ according to [29]

$$d(S,B) = \frac{1}{k} \sum_{i=1}^{M} \min_{j \in [1,N]} \left\{ d_r(R_i, R'_j) \cdot \mathbb{1}_{\{(i,j)\,/\,d_r(R_i, R'_j) < th\}} \right\} \qquad (16)$$

where

$$\mathbb{1}_{\{(i,j)\,/\,d_r(R_i, R'_j) < th\}} = \begin{cases} 1, & \text{if } d_r(R_i, R'_j) < th \\ 0, & \text{else.} \end{cases} \qquad (17)$$
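A rough sketch of the region-matching distance (14)–(17) is given below (ours; region features are assumed to be normalized histograms, and $k$ is taken as the number of retained couples).

```python
import numpy as np

def region_distance(F_i, F_j, w_i):
    """Bhattacharyya-style distance between two region feature
    vectors, as in (14); vectors are assumed normalized histograms."""
    return -w_i * np.log(np.sum(np.sqrt(F_i * F_j)) + 1e-12)

def image_distance(regions_s, regions_b, weights, th):
    """Distance (16)-(17) between an input image S and a base image B,
    each given as a list of region feature vectors."""
    kept = []
    for F_i, w_i in zip(regions_s, weights):
        d_min = min(region_distance(F_i, F_j, w_i) for F_j in regions_b)
        if d_min < th:             # indicator (17): keep close couples only
            kept.append(d_min)
    return sum(kept) / len(kept) if kept else float("inf")
```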

To address the problem of image retrieval on the basis of the features previously described, we choose to apply a tree-structured clustering technique. This technique is used to speed up retrieval. Many researchers have attempted to describe this technique [29], [30]. Such a technique allows the representation of image features in a multilevel structure. Each level contains a set of nodes, which is reserved for a specific feature (textual, color, texture, or shape). The tree-structured technique allows us to filter images while progressively increasing the detail level.

As the first step, the images in our base are globally compared to the requested image. The global features are color, textual, and texture. If the global similarity to the requested image is lower than a given threshold, the subregions of these images are compared. This technique allows global image retrieval and a comparison according to the similarity between regions.

The distance between two images represented by multilevel features is obtained by an exhaustive comparison of all homologous region features.

3) Adaptation: The adaptation process in CBR manipulates the retrieved case to better fit the input case. It is a technique that allows the retained case to be altered to produce a new solution to a new problem. The retained case can be changed to make it suit a new use. The goal is to improve the problem-resolution capacity of CBR [31]. A CBR system retrieves cases corresponding to similar problems from its case base. The adaptation step must recognize differences between the new and retrieved problems and refine the retrieved case to reflect these differences. This problem is solved by a set of adaptation rules.

An adaptation is a process where an action is taken, depending on a given situation. The situation contains the differences between the new and retrieved problems. The action captures the update for the retrieved case. We can list two major cases of adaptation: substitution (new values for the reused case) and transformation (components to be added, deleted, or changed) [17].

As the result of this step, we get a set of cases having the form

Suggested target case = {adapted features, suggested fusion method}.

4) Adjustment: Adjustment is an expert evaluation step [32]. Indeed, during the life cycle of CBR, experts recommend several strategies to integrate new solutions into the case base and to modify the CBR structures accordingly [17], [33].

The adjustment, which is also called revision, involves two tasks: The first task is to evaluate the case solution; the second task is to repair the case solution by referring it to an expert. The evaluation task examines the result obtained. Case repair involves detecting errors in the current solution and retrieving or generating explanations for them.

As the result of this step, we get a set of cases having the form

Confirmed target case = {revised features, confirmed fusion method}.

5) Case Base: In our approach, each case has three components: 1) the problem description; 2) the image fusion method; and 3) the case relevance. The problem description characterizes the problem, the image fusion method gives a solution to a given problem, and the case relevance is provided by an expert. During the learning phase, we weighted each problem feature according to its importance in characterizing the problem.

Each case in our case base is described as follows:

$$C_x = \left(\{f_1(C_x), p_1\}, \{f_2(C_x), p_2\}, \ldots, \{f_i(C_x), p_i\}, \ldots, \{f_l(C_x), p_l\}, M_i, \mathrm{conf}\right)$$

where
$C_x\ (x = 1, \ldots, m)$ is case $x$ in our case base;
$f_i(C_x)$ are the features of image $i$ in case $x$;
$p_i$ are the feature weights of image $i$ in case $x$;
$M_i$ is the most adapted fusion method for case $x$, which can be the probability, possibility, or evidence fusion method;
$\mathrm{conf}$ is the confidence that the expert grants to case $x$.
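One possible encoding of such a case record is sketched below (our illustration; field names and types are assumptions).

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Case:
    """One entry of the case base: weighted features per image,
    the validated fusion method, and the expert's confidence."""
    features: List[Dict[str, float]]  # f_i(C_x): one feature dict per image
    weights: List[float]              # p_i: expert-assigned feature weights
    fusion_method: str                # "probability" | "possibility" | "evidence"
    confidence: float                 # conf granted by the expert
```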


B. Rule Base

We chose to use RBR to support the CBR in retrieving the best fusion method. The integration of CBR with RBR is closely related to the general issue of our architecture for unified problem solving and learning. This choice is justified by the need to automate the retrieval of the image fusion method. RBR can guide CBR in the similarity measure step by justifying the choice of one candidate among a set of possible cases. In addition, rules are used in the case adaptation step, where a new case is constructed from an old one. The rules in our system aim to improve the problem resolution capacity. The rule base is composed of two types of rules: retrieval and adaptation rules [17].

1) Retrieval Rules: These rules have the following form:

If $\left(\sum_{j=1}^{m} \left|{\rm Carac}_x(i,j)\ \text{of target case} - {\rm Carac}_x(i,j)\ \text{of source case}\right| \cdot w_j\right) \Big/ \sum_{k=1}^{m} w_k < \text{threshold}$
then (possibility method): $m_x$.

Here, ${\rm Carac}_x(i,j)$ is characteristic $j$ of image $i$ for case $x$; $w_j$ is the weight of feature $j$ for case $x$, which is fixed by an expert; $m_x$ is the confidence that the expert grants to case $x$; and the threshold is a value fixed by the expert, controlling the relevance of the search.
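The rule amounts to thresholding a normalized weighted feature distance; a small sketch (ours; cases are flattened to numeric feature lists for simplicity) follows.

```python
def weighted_distance(target, source, weights):
    """Retrieval-rule distance: normalized weighted sum of absolute
    differences between homologous features."""
    num = sum(w * abs(t - s) for t, s, w in zip(target, source, weights))
    return num / sum(weights)

def retrieve(target, cases, threshold):
    """Keep the source cases close enough to the target case.

    cases: iterable of (features, weights, method, confidence) tuples.
    Returns the candidate fusion methods with the expert confidences m_x.
    """
    return [(method, conf) for features, weights, method, conf in cases
            if weighted_distance(target, features, weights) < threshold]
```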

2) Adaptation Rules: There are three types of adaptation rules [17].

1) Rules of copy: These rules copy the identical case.
2) Transformational rules: The retrieved case is modified, suppressed, or added to the case base.
3) Generative rules: They reuse the process leading to the solution.

Examples are given as follows, where $f_i(c_x)$ are the features of image $i$ for case $x$:

If ($f_i(c_x)$ are source case features but are not target case features) then (add $f_i(c_x)$ to the target case features).
If ($f_i(c_x)$ are target case features but are not source case features) then (suppress $f_i(c_x)$ from the target case features).

VI. STEP 3: IMAGE FUSION

The task of the image fusion module is to fuse a set of satellite images according to the method resulting from the adaptation step (the probability, possibility, or evidence method).

This module is divided into two submodules: supervised learning and fusion by the suggested fusion method (Fig. 6). The goal of the fusion module is to obtain a thematic image fused using the method given by the CBR.

A. Learning Module

Learning in our approach is supervised; it plays a double role. First, it allows the acquisition of new knowledge provided by an expert in the image interpretation domain. This expert chooses the regions of interest and gives specific information about them.

Fig. 6. Architecture of the fusion module.

Second, the learning process allows the initialization of the fusion process. Indeed, the expert should choose samples representing the zones to identify; these samples will be used to estimate the conditional probabilities, possibility degrees, and masses, which are needed in the fusion step.

In order to estimate the probability distribution, we choose to use the Parzen method [35], which estimates the conditional densities $p(x \mid C_i)$ for each class $C_i$.

Initially, this method requires defining the number $N$ of classes $C_i$. We calculate the statistical features of each class: the mean $\bar{x}_i$ and standard deviation $\sigma_i$. The conditional probability $p(x \mid C_i)$ is calculated by

$$p(x \mid C_i) = \frac{1}{\sigma_i \sqrt{2\pi}}\, e^{-\frac{(x - \bar{x}_i)^2}{2\sigma_i^2}}. \qquad (18)$$
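As printed, (18) is a single Gaussian per class rather than a full Parzen kernel sum; the sketch below (ours) follows the formula as printed, fitting the class statistics from the expert-chosen samples.

```python
import numpy as np

def fit_class_conditionals(samples_per_class):
    """Fit the per-class Gaussian of (18) and return p(x | C_i)."""
    stats = [(np.mean(s), np.std(s)) for s in samples_per_class]

    def p(x, i):
        mu, sigma = stats[i]
        return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) \
            / (sigma * np.sqrt(2 * np.pi))

    return p
```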

To estimate the possibility degrees and masses, we opt to apply the possibilistic c-means (PCM) algorithm [36]. Let us consider $f_{ik}$ as the membership of $x_i$ to class $C_k$, where $f_{ik} \in [0,1]$ for all $i$.

The algorithm consists of minimizing $U$ given by the following equation:

$$U(F, V, w) = \sum_{k=1}^{c} \sum_{i=1}^{n} f_{ik}^{m}\, d_{ik}^{2} + \sum_{k=1}^{c} w_k \sum_{i=1}^{n} (1 - f_{ik})^{m} \qquad (19)$$

where
$F$ is the fuzzy matrix of elements $f_{ik}$;
$V = \{V_1, V_2, \ldots, V_c\}$ are the centers of the samples representing the zones to identify;
$d_{ik}$ represents the distance between a sample and $x_i$;
$w = \{w_1, \ldots, w_c\}$ is the set of penalties of the PCM algorithm;
$m$ is the fuzzy coefficient controlling the fuzzy quantity in the partition ($m > 1$).

B. Fusion by the Suggested Fusion Method

The fusion module needs the information given by the learning module to accomplish its task. In our approach, we use a mixed fusion architecture that benefits from the modularity of the decentralized architecture while preserving classification rates comparable to those of the centralized architecture. This module is based on three agents: the probability, possibility, and evidence agents. Each agent is adapted to a specific situation (type of imperfection, type of sensor, type of object to extract, etc.).

Fig. 7. Query image.

VII. VALIDATION

We choose to validate our approach on an example consisting of five satellite images representing the zone of Kairouan in central Tunisia (North Africa), which is approximately 100 km south of the capital Tunis. The environment of our studied zone represents many classes of land cover. The images used in this paper come from the SPOT 4 and ERS-2 satellites; we used two image sources in order to benefit from the complementarity of optical and radar images in spectral representation. A description of the images used is given here.

1) The SAR image of ERS-2, acquired on April 24, 1998, presents a spatial resolution of 12.5 m.
2) The optical image of SPOT 4, acquired on May 31, 1998, has a spatial resolution of 20 × 20 m for the High-Resolution Visible and Infrared (HRVIR) instrument; the spectral bands range from the visible [0.50 μm, 0.59 μm] to the mid-infrared [1.58 μm, 1.75 μm], with two intermediate bands.

Following a retrieval step in its case base, CBR identifies the cases that are most similar to the previously presented example.

For instance, we are going to describe the retrieval step for the first image in our example (Fig. 7).

As previously explained, the aim of the global retrieval step is to decrease the number of images to be compared by region. The number of discarded images greatly depends on the threshold chosen to measure similarity between images. With the threshold set to 0.1, 38% of the images in our database are eliminated. Some of the images resulting from global retrieval are shown in Fig. 8.

The images resulting from global retrieval are already segmented by FCM, and the number of classes is predetermined with a histogram-based preprocessing. After that, the query image is segmented, and five segments are extracted (Fig. 9). The next step is to decompose each segment into regions (Fig. 10). Each region of this image (Fig. 11) is compared with all the regions of the images resulting from the global retrieval.

Local retrieval permits the fourth image to be identified as the image most similar to the query image.

The same process is performed on the other four images in our example. The most adapted fusion method is chosen by a vote among the methods corresponding to the retrieved images. For the aforementioned example, the probability method is the most adapted fusion method (Figs. 12–14).

Fig. 8. Some of the images resulting from the global retrieval.

Fig. 9. Query image segmented by FCM.

Fig. 10. Segments extracted from the query image.

As shown in Fig. 15, our image interpretation system identifies five land cover types, i.e., humid, parcel, cultivated, urban, and sebkha.


Fig. 11. Decomposed regions from each segment of the query image.

Fig. 12. Fourth global-retrieval image segmented by FCM.

Fig. 13. Segments extracted from the fourth global-retrieval image.

Fig. 14. Decomposed regions from each segment of the fourth global-retrieval image.

The most adapted fusion method retrieved by our CBR is the probability method; this result allowed us to decide about the nature of the imperfection accompanying the processed images. In fact, the probability theory is most adapted to uncertainty caused by unreliable sensors or by spatial and temporal constraints, such as climate (cloud, rain, pressure, obstacles, etc.).

To evaluate these results, we choose to fuse the images from the previous example using the three fusion methods (probability, possibility, and evidence). For the probability method, we choose the case of equiprobability between the five images. In addition, we decide to apply three types of combination operators for the possibility method (t-norms, t-conorms, and mean) and to use an unsupervised fusion for evidence theory.

Fig. 15. Land cover types of extracted objects.

Fig. 16. Classified images. 1: Possibility method using the t-norms operator. 2: Possibility method using the t-conorms operator. 3: Possibility method using the mean operator.

TABLE II
VALUES OF GOOD IDENTIFICATION OF DIFFERENT APPROACHES

The results are five images classified according to the three fusion methods (Fig. 16).

After the fusion stage, we obtained five types of land cover, i.e., humid, parcel, cultivated, urban, and sebkha (Fig. 15).

Table II shows the best fusion method by comparing the resulting images to the ground truth image.

As we can see, the probability method has the best values of the comparison criteria. Indeed, for AAD, 116.093039 is the smallest value. For SNR, 0.214791 is the highest value. For MSE, 16446.131449 is the smallest value. To assess the best fusion method, we choose to keep the one having the maximum of the best difference rankings. In our example, we decide to keep the probability method.

The results obtained by comparing the fused images to the ground truth image are in accordance with those given by CBR. This result shows the efficiency and utility of the CBR developed in our approach. It allows expert intervention in the image interpretation process to be minimized. Furthermore, our image interpretation system avoids the need for ground truth images, which are generally difficult to get. Moreover, based on the experiments made at the learning step, our system acquires a reasoning capability that improves its automatic aspect by minimizing recourse to experts.


VIII. CONCLUSION

In this paper, we have presented a semiautomatic approach for image interpretation, taking into account the imperfections accompanying this process. This approach is based on three main steps: 1) supervised learning; 2) retrieval of the best fusion method by the CBR module; and 3) image fusion. The proposed approach seems to be rewarding for several reasons. First, we have developed an image interpretation system that is able to work with different types of imperfections accompanying these images. Second, using the CBR, we have tried to improve the automatic aspect of image interpretation, for which we lack sufficient knowledge of either formal representation or parameter estimation. An additional contribution of this paper is the combination of CBR and RBR, which improves the retrieval of the image fusion method. Indeed, having rules together with the cases has allowed two innovations in CBR technology: First, the rules provide a natural way to index the cases, and second, they assist case adaptation. Moreover, we use a multiagent system offering flexibility and parallelism for interpretation tasks that are generally long. In addition, we use several theories for modeling image imperfections, permitting good classification rates, unlike existing interpretation systems. The a priori knowledge required for this includes the characteristics of the sensors and the description of the objects to detect. A confidence factor associated with each detected object is added to express the belief degree about its existence in the real scene.

However, the use of case-based reasoning presents some limitations. The lack of cases for new problems is, for example, a limitation of this approach. In order to overcome the absence of similar cases, we have developed an evaluation module. This module allows the comparison of the fused images resulting from the three fusion methods (probability, possibility, and evidence) to the ground truth image. The evaluation step is ensured by three criteria, i.e., AAD, SNR, and MSE.

The developed system is evaluated by comparing the result obtained to the ground truth image.

As a perspective for this paper, we might replace the RBR component in the proposed approach with another reasoning method, such as neural networks.

REFERENCES

[1] I. Bloch and A. Hunter, "Fusion: General concepts and characteristics," Int. J. Intell. Syst., vol. 16, no. 10, pp. 1107–1134, Oct. 2001.

[2] S. French, "Uncertainty and imprecision: Modelling and analysis," J. Oper. Res. Soc., vol. 46, no. 1, pp. 70–79, Jan. 1995.

[3] C. Hudelot, "Towards a cognitive vision platform for semantic image interpretation; application to the recognition of biological organisms," Computer Science thesis, Nice-Sophia Antipolis Univ., Nice, France, Apr. 2005.

[4] I. Bloch, "Information combination operators for data fusion: A comparative review with classification," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 26, no. 1, pp. 52–67, Jan. 1996.

[5] J. Inglada and G. Mercier, "A new statistical similarity measure for change detection in multitemporal SAR images and its extension to multiscale change analysis," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1432–1445, May 2007.

[6] I. Bloch, "Fusion of numerical and structural image information in medical imaging in the framework of fuzzy sets," in Fuzzy Systems in Medicine, ser. Studies in Fuzziness and Soft Computing. New York: Springer-Verlag, 2000, pp. 429–447.

[7] Y. L. Chang, L. S. Liang, C. C. Han, J. P. Fang, W. Y. Liang, and K.-S. Chen, "Multisource data fusion for landslide classification using generalized positive Boolean functions," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 6, pp. 1697–1708, Jun. 2007.

[8] O. Colliot, O. Camara, R. Dewynter, and I. Bloch, "Description of brain internal structures by means of spatial relations for MR image segmentation," in Proc. SPIE Medical Imaging, San Diego, CA, 2004, vol. 5370, pp. 444–455.

[9] J. Van Cleynenbreugel, S. A. Osinga, F. Fierens, P. Suetens, and A. Oosterlinck, "Road extraction from multi-temporal satellite images by an evidential reasoning approach," Pattern Recognit. Lett., vol. 12, no. 6, pp. 371–380, Jun. 1991.

[10] M. Törmä, "Classification of natural areas in Northern Finland using optical remote sensing images and data fusion," in Proc. IEEE Geosci. Remote Sens. Symp., Barcelona, Spain, Jul. 2007, pp. 3078–3081.

[11] N. Milisavljevic and I. Bloch, "Possibilistic multi-sensor fusion for humanitarian demining," in Proc. IEEE Geosci. Remote Sens. Symp., Barcelona, Spain, Jul. 2007, pp. 14–17.

[12] S. H. Lee, "Multisensor fusion based on Dempster–Shafer evidence using beta mass function," in Proc. IEEE Geosci. Remote Sens. Symp., Barcelona, Spain, Jul. 2007, pp. 3112–3114.

[13] S. Le Hégarat-Mascle, I. Bloch, and D. Vidal-Madjar, "Application of Dempster–Shafer evidence theory to unsupervised classification in multisource remote sensing," IEEE Trans. Geosci. Remote Sens., vol. 35, no. 4, pp. 1018–1031, Jul. 1997.

[14] J. Jarmulak, E. J. H. Kerckhoffs, and P. P. Van't Veen, "Case-based reasoning for interpretation of data from non-destructive testing," Eng. Appl. Artif. Intell., vol. 14, no. 4, pp. 401–417, Aug. 2001.

[15] I. Jurisica and J. Glasgow, "Applications of case-based reasoning in molecular biology," Artif. Intell. Mag. (Special Issue on Bioinformatics), vol. 25, no. 1, pp. 85–95, 2004.

[16] K. J. Yang and Y. M. Chen, "Ontology-based knowledge retrieval in organizational memory," in Proc. 1st Int. Conf. Innovative Comput., Inf. Control, 2006, vol. 1, pp. 566–569.

[17] S. Craw, N. Wiratunga, and R. C. Rowe, "Learning adaptation knowledge to improve case-based reasoning," Artif. Intell., vol. 170, no. 16/17, pp. 1175–1192, Nov. 2006.

[18] E. Christophe, D. Léger, and C. Mailhes, "Quality criteria benchmark for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 9, pp. 2103–2114, Sep. 2005.

[19] L. Desmecht, "Reconnaissance d'objets 3D par leurs caractéristiques clés," in Proc. 12th Meeting Classification Francophone Soc., Montreal, QC, Canada, May 30–Jun. 1, 2005.

[20] H. C. Shih and C. L. Huang, "A semantic network modelling for understanding baseball," in Proc. ICASSP, 2003, vol. 5, pp. 820–823.

[21] E. G. M. Petrakis, C. Faloutsos, and K. I. Lin, "ImageMap: An image indexing method based on spatial similarity," IEEE Trans. Knowl. Data Eng., vol. 14, no. 5, pp. 979–987, Sep./Oct. 2002.

[22] E. Vansteenkiste, S. Gautama, and W. Philips, "Analysing multi-spectral textures in very high resolution satellite images," in Proc. IEEE Geosci. Remote Sens. Symp., Toulouse, France, 2004, vol. 5, pp. 3062–3065.

[23] A. P. Carleer, O. Debeir, and E. Wolff, "Assessment of very high spatial resolution satellite image segmentations," Photogramm. Eng. Remote Sens., vol. 71, no. 11, pp. 1285–1294, Nov. 2005.

[24] G. Hay, G. Castilla, M. Wulder, and J. Ruiz, "An automated object-based approach for the multiscale image segmentation of forest scenes," Int. J. Appl. Earth Observation Geoinformation, vol. 7, no. 4, pp. 339–359, Dec. 2005.

[25] I. Gath and A. B. Geva, "Unsupervised optimal fuzzy clustering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 7, pp. 773–781, Jul. 1989.

[26] J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, "Unsupervised multiresolution segmentation for images with low depth of field," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 1, pp. 85–90, Jan. 2001.

[27] S. Derrode, R. Mezhoud, and F. Ghorbel, "Reconnaissance de formes par invariants complets et convergents: Application à l'indexation de bases d'objets à niveaux de gris," in Proc. GRETSI 17th Symp. Signal Process. Images, 1999, pp. 119–122.

[28] F. Chaker, M. T. Bannour, and F. Ghorbel, "A complete and stable set of affine-invariant Fourier descriptors," in Proc. 12th Int. Conf. Image Anal. Process., Sep. 17–19, 2003, pp. 578–581.

[29] A. Hafiane, S. Chaudhuri, G. Seetharaman, and B. Zavidovique, "Region-based CBIR in GIS with local space filling curves to spatial representation," Pattern Recognit. Lett., vol. 27, no. 4, pp. 259–267, Mar. 2006.

[30] Y. Du and J. Z. Wang, "A scalable integrated region-based image retrieval system," in Proc. IEEE ICIP, 2001, pp. 22–25.

[31] B. Wang, X. Zhang, and N. Li, "Relevance feedback technique for content-based image retrieval using neural network learning," in Proc. 5th Int. Conf. Mach. Learn. Cybern., Dalian, China, Aug. 13–16, 2006, pp. 3692–3696.

[32] F. T. S. Chan, "Application of a hybrid case-based reasoning approach in electroplating industry," Expert Syst. Appl., vol. 29, no. 1, pp. 121–130, Jul. 2005.

[33] Y. Li, S. C. K. Shiu, S. K. Pal, and J. N. K. Liu, "A rough set-based case-based reasoner for text categorization," Int. J. Approx. Reason., vol. 41, no. 2, pp. 229–255, Feb. 2006.

[34] I. R. Farah, K. Saheb Ettabaa, and M. Ben Ahmed, "A generic multiagent system for analyzing spatial-temporal geographic information," Int. J. Comput. Sci. Netw. Security, vol. 6, no. 8, pp. 4–10, Aug. 2006.

[35] S. Aksoy, K. Koperski, C. Tusk, G. Marchisio, and J. C. Tilton, "Learning Bayesian classifiers for scene classification with a visual grammar," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 3, pp. 581–589, Mar. 2005.

[36] L. Bentabet, S. Jodouin, and A. Boudraa, "Estimation of mass functions in Dempster–Shafer evidence theory using fuzzy clustering and spatial information for gray level based image fusion," Opt. Eng., vol. 41, no. 4, pp. 760–770, Apr. 2002.

Imed Riadh Farah received the M.D. degree from the ISG Institute of Computer Sciences, Tunis, Tunisia, in 1995 and the Dr. Eng. degree from ENSI, Manouba, Tunisia, in 2003.

He was with the Laboratory RIADI, National School of Computer Sciences Engineering, Manouba, Tunisia, where he became a Permanent Researcher in 1995 and a Research Assistant in 1996. Since 2004, he has been an Assistant Professor with the University of Jendouba, Jendouba, Tunisia. His research interest includes image processing, pattern recognition, artificial intelligence, data mining, and their application to remote sensing.

Prof. Farah is a member of Arts-Pi Tunisia.

Wadii Boulila received the M.E. degree in engineering from the National School of Computer Sciences Engineering, Manouba, Tunisia, in 2007.

Since 2006, he has been a Permanent Researcher with the Laboratory RIADI, National School of Computer Sciences Engineering. His research interest includes image processing, image fusion, artificial intelligence, data mining, pattern recognition, and their application to remote sensing.

Karim Saheb Ettabaâ received the M.E. and Dr. Eng. degrees from the National School of Computer Sciences Engineering, Manouba, Tunisia, in 2004 and 2007, respectively.

Since 2004, he has been a Permanent Researcher with the Laboratory RIADI, National School of Computer Sciences Engineering. His research interest includes image processing, data mining, artificial intelligence, pattern recognition, and their application to remote sensing.

Basel Solaiman received the M.E. degree in telecommunication engineering from the Ecole Nationale Supérieure des Télécommunications de Bretagne (TELECOM-Bretagne), Brest, France, in 1983 and the Ph.D. degree from the Université de Rennes I, Rennes, France, in 1989.

From 1984 to 1985, he was a Research Assistant with the Communication Group, Centre d'Etudes et de Recherche Scientifique, Damascus, Syria. In 1992, he joined the Laboratoire ITI, TELECOM-Bretagne, Technopôle Brest Iroise, Brest. His current research interests include the fields of remote sensing, medical image processing, pattern recognition, neural networks, and artificial intelligence.

Mohamed Ben Ahmed is a Professor Emeritus with the Laboratory RIADI, National School of Computer Sciences Engineering, Manouba, Tunisia. His research interest includes image processing, data mining, artificial intelligence, pattern recognition, and ontology.