
ISPRS Journal of Photogrammetry & Remote Sensing 56 (2001) 13–28. www.elsevier.com/locate/isprsjprs

An improved model for automatic feature-based registration of SAR and SPOT images

Paul Dare a,*, Ian Dowman b

a Department of Geomatics, University of Melbourne, Melbourne, Victoria 3010, Australia
b Department of Geomatic Engineering, University College London, Gower Street, London, WC1E 6BT, UK

Received 7 February 2000; accepted 15 May 2001

Abstract

Accurate image registration is essential for the successful creation and interpretation of multi-image spatial information products. However, the enormous increase in the volume of remotely sensed data that is being acquired by an ever growing number of Earth Observation satellites means that automation now plays a more vital role than ever before in the registration procedure. This paper introduces an improved model for automatic feature-based image registration, and presents a robust system for automatically registering SAR and SPOT imagery. The model incorporates multiple feature extraction and feature matching algorithms which operate together to identify common features in the multi-sensor images, from which accurate tie points can be derived. Application of the proposed automatic registration model to both small and large images showed that in each case a large number of accurate tie points could be identified fully automatically across the images. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Image registration; Feature-based matching; Multi-sensor data integration; SPOT; ERS SAR

1. Introduction

Image registration is fundamental to remote sensing. With the ever increasing number of remote sensing satellites (Harris, 1997), advances in data fusion (Pohl and van Genderen, 1998) and the functionality of modern geographic information systems, use of multi-image spatial information products is swiftly becoming the norm. However, in order to meet the requirements of the user, each individual image making up the multi-image product needs to be expressed in the same geometric reference frame.

* Corresponding author. Fax: +61-3-9347-2916. E-mail address: [email protected] (P. Dare).

This means the images have to be accurately registered to each other and preferably (but not necessarily) expressed in a local geodetic co-ordinate system. Manual image registration is well established, but the procedure can lead to inaccurate results, and can be slow to execute, especially if a large number of images need to be registered. The subject of automatic image registration addresses, and in many cases solves, the problems associated with manual image registration. However, there still exist a number of scenarios where automatic image registration is not well developed and robust paradigms have not been established for multi-source image registration and image-to-map registration (Dowman and Dare, 1999). This paper addresses the problem of automatic registration of SAR and SPOT images, and

0924-2716/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0924-2716(01)00031-4

P. Dare, I. Dowman / ISPRS Journal of Photogrammetry & Remote Sensing 56 (2001) 13–28

proposes an innovative, robust model that is shown to produce reliable and accurate results.

The traditional procedure for manually registering a pair of images requires the manual selection of tie points in each image. The points are used to determine the parameters of a transformation function, which is subsequently used to register the "slave" image to the "master" image, via a resampling scheme. Automation of this procedure requires the replacement of manual tie point selection with automatic algorithms for locating corresponding points in both images (Brown, 1992). In feature-based matching (which is particularly suited to multi-source image registration) this is achieved by extracting specific features (such as points, lines and patches) from each image, matching corresponding features and deriving a set of tie points.

Many different methods have been proposed for solving the problem of automatically locating tie points in multi-source images using feature-based matching (Fonseca and Manjunath, 1996). The common theme to all of these techniques is that each method relies on a single feature extraction algorithm for extracting the primitives to be matched. The obvious consequence is that the potential for finding common features in a pair of images is severely limited. The principal difference of the automatic image registration model described in this paper is that the procedure incorporates numerous feature extraction algorithms, rather than relying on just one. The results clearly and unambiguously show that by utilising multiple feature extraction algorithms the number of tie points located in a pair of multi-source images can be significantly increased.

A second aspect of the new model is that it combines feature extraction and feature matching into an iterative procedure, so that the results of the feature matching algorithm are used to re-extract features which are subsequently re-matched. Results show that this technique can further increase the number of matched patches, and hence the number of tie points located in the pair of images.

Finally, the issue of automation needs to be addressed. It would be desirable to develop an automatic image registration system which could operate from beginning to end without any user intervention whatsoever. However, with current processing techniques this is not feasible. Typically, user intervention is required to select optimal parameters for the feature extraction algorithms, and to verify the quality of the results. The latter procedure (verification of the results) cannot (and possibly should not) be automated at this stage. However, to ensure that user intervention is kept to a minimum, the selection of optimal parameters must be automated. The proposed new model differs significantly from previous techniques since it incorporates algorithms for automatically selecting optimal feature extraction algorithm parameters. Results confirm that a reasonable number of tie points can be found without the need for manual intervention at any point.

2. Feature-based registration of multi-source images

Spatial domain feature-based registration relies on corresponding features (features in overlapping images which represent the same object on the ground) being matched with each other in order to derive a set of tie points. The basic model is illustrated in Fig. 1. It differs from area-based matching in that it is the geometric distribution of the pixels making up the feature which is used in the matching, rather than their radiometric attributes. Consequently, spatial domain feature-based matching is the ideal method for registering pairs of multi-source images where the radiometry is very different (as is the case with SAR and SPOT imagery).

Fig. 1. Fundamental model for automatic image registration.

Any features, from both the spatial and frequency

domains, can be used as matching primitives. Frequency domain features (such as wavelets) are more commonly used in single sensor (or similar sensor) image registration, so they are not applicable in this example. The three fundamental and most commonly used spatial domain features are points, lines and patches. Points can be extracted using an interest operator (Förstner and Gülch, 1987; Moravec, 1977); lines can be extracted using derivative-based edge detectors (Pratt, 1991) or line extraction algorithms such as the Hough transform (Illingworth and Kittler, 1988); patches can be extracted using thresholding, classification or segmentation algorithms (Gonzalez and Woods, 1992), amongst other methods.

The particular method used to match the primitives is dependent on the nature of the primitives themselves. Point features can be matched either by considering the radiometric properties of the surrounding pixels, or the geometric distribution of the whole set of point features across the whole image (Boardman et al., 1996). Needless to say, point features extracted from multi-source imagery where the radiometry is very different will be difficult to match. However, if the radiometry is significantly different it would be unlikely that point extraction algorithms would be able to extract the same point features from the imagery. Consequently, it is clear that point features are not suitable primitives when the images to be registered are radiometrically very different. Linear or areal features are commonly matched by comparing differences in attributes (Morgado and Dowman, 1997). Attributes such as length or radius of curvature (for lines), and area or perimeter length (for patches), are compared for features extracted from each image. A correct match is most likely to occur when differences in attributes are small. Numerous recent studies, many of which are described in Fonseca and Manjunath (1996), have shown that matching of linear or areal features extracted from radiometrically different multi-source imagery can successfully be used for registering those images.

After the matching has taken place, a set of tie points can be derived from the set of conjugate features. Selection of the transformation function and resampling method is usually made manually, according to the geometric properties of the master and slave images, and the requirements of the final multi-image product.

Two principal problems associated with feature-based image registration which have to be addressed by any robust automatic image registration system are:

• the difficulty of recognising features acquired from multiple sensors; and,

• application of image registration algorithms to imagery with differing attributes (such as resolution or radiometry).

The problem of recognising similarities between pairs of images is fundamental to automatic image registration and perhaps the most difficult problem to solve. Fig. 2 shows two images of the same region in Southern France. The image on the left is a SPOT panchromatic image and the image on the right is an ERS-1 SAR image. There are one or two features that can be recognised in both images, but since the imaging geometry is different, corresponding features have different sizes and shapes. Furthermore, since the images were acquired using different sensors, the grey values of common features are also quite different.

Even when a pair of images are accurately co-registered it can still be difficult to recognise common features. Fig. 3 shows a region extracted from the above pair of images of Southern France. The SPOT image (left) has been registered to the SAR image (right) using manually selected tie points and a first order transformation function. Even though the images have the same reference co-ordinate system, similar resolution and cover the same ground area, there are few features that can clearly be identified in both images.

The pairs of images shown above highlight some of the difficulties of matching features in pairs of multi-source images. The problems persist at both small scales (for approximate alignment of large images) and large scales (for accurate registration of smaller scenes).

Fig. 2. Optical (left) and radar (right) images of the Rhône Valley, Southern France.

The second difficulty introduced above is that of global application of algorithms. Many of the proposed automatic image registration algorithms described in the literature perform very well with the test datasets provided by the authors, but sometimes these algorithms will fail when used with different datasets. The problem that has to be overcome is the development of automatic registration models that can be applied to almost any dataset and still produce accurate results.

Fig. 3. Co-registered optical (left) and radar (right) images of Southern France.


The automatic registration model proposed in this paper addresses the two difficulties described above by incorporating multiple feature extraction algorithms. Firstly, this ensures that many more features are extracted than if a single extraction algorithm were used, and thus the number of matched corresponding features is increased. Secondly, it increases the chances of yielding useful results when applied to a wide range of imagery.

3. A new model for automatic image registration

3.1. Overview

The image registration model described in this paper is based on a three-step procedure in which registration accuracy increases at each stage of the processing. The three processing steps are:

1. initial alignment using manually selected tie points;

2. approximate registration using patch matching; and,

3. accurate registration using edge matching.

In the first step the slave image is approximately aligned with the master image (using a few manually selected tie points) in order to remove gross differences in scale and rotation, which cannot be treated in the subsequent patch matching. Although this step has the disadvantage of introducing manual intervention into the processing, it also has two distinct advantages. Firstly, it simplifies the subsequent processing to such an extent that it enhances the efficiency of the overall procedure. Secondly, it provides the user with the opportunity to view the images being registered, to ensure there is sufficient overlap and coverage of the area of interest on the ground. The alignment only needs to be very approximate (in the order of 10 to 20 pixels), so three or four tie points and either a similarity or affine transformation function are sufficient.
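To make the first step concrete, a least-squares affine fit to a handful of tie points can be sketched as below. This is only an illustrative sketch, not the authors' implementation: the function names, the use of the normal equations and the sample points are all assumptions.

```python
def fit_affine(master_pts, slave_pts):
    """Least-squares affine transform mapping slave co-ordinates to master.

    Solves x' = a*x + b*y + c and y' = d*x + e*y + f from three or more
    tie points via the normal equations (no external libraries needed).
    """
    def solve3(m, v):
        # Gaussian elimination with partial pivoting on a 3x3 system.
        a = [row[:] + [v[i]] for i, row in enumerate(m)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
            a[col], a[piv] = a[piv], a[col]
            for r in range(col + 1, 3):
                f = a[r][col] / a[col][col]
                for k in range(col, 4):
                    a[r][k] -= f * a[col][k]
        x = [0.0] * 3
        for r in range(2, -1, -1):
            x[r] = (a[r][3] - sum(a[r][k] * x[k] for k in range(r + 1, 3))) / a[r][r]
        return x

    # Accumulate A^T A and A^T b for the x' and y' equations separately.
    ata = [[0.0] * 3 for _ in range(3)]
    atx = [0.0] * 3
    aty = [0.0] * 3
    for (mx, my), (sx, sy) in zip(master_pts, slave_pts):
        row = (sx, sy, 1.0)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atx[i] += row[i] * mx
            aty[i] += row[i] * my
    return solve3(ata, atx), solve3(ata, aty)


def apply_affine(params, pt):
    """Apply a fitted affine transform to a single (x, y) point."""
    (a, b, c), (d, e, f) = params
    x, y = pt
    return a * x + b * y + c, d * x + e * y + f
```

With three tie points the fit is exact; with four or more, residuals are distributed in the least-squares sense, which is enough for the 10 to 20 pixel accuracy this step requires.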

The second step in the processing improves the accuracy of the initial registration using tie points located by automatically extracting and matching areal features (or patches). In the automatic registration system described in this paper, four patch extraction algorithms have been used (although more could have been used, four were sufficient to provide an insight into algorithm redundancy, while still giving a manageable number of results). Results (i.e. extracted patches) from each of these algorithms are combined before proceeding to the matching stage of the processing. Feature matching is subdivided into three steps. The first step uses a cost function to match as many patches as possible, even though it is possible that some of the matches may be incorrect. The second and third steps refine the matches using the shape and spatial distribution of the matched patches, respectively, until only correct matches remain. The centroids of the matched patches form a set of tie points which are used to generate the parameters of a transformation function, which is subsequently used to register the images. At this point, it is possible to increase the quantity and/or quality of the matches by using the patches extracted from one of the images to guide the patch extraction in the other image, and vice versa. Patches are re-extracted and combined with the patches extracted originally, and the matching is repeated. The result is a more refined set of tie points.

The third step in the proposed image registration model improves and refines the accuracy of the registration by extracting and matching edge features. A derivative-based edge extraction algorithm extracts edges which are matched using an algorithm based on dynamic programming (Newton et al., 1994). The advantage of using this particular edge matching algorithm is that it provides a much larger number of tie points, with a more extensive spatial distribution, than the patch matching algorithm. However, the images have to be closely aligned with each other (maximum offset of about three pixels, with very small rotation and scale differences) for the algorithm to produce reliable results. It is for this reason that patch matching is used to initially register the images.

The result of these three processing steps is a large set of tie points distributed across the master and slave images. The user is now able to select the most appropriate transformation function (according to the type and geometry of the imaging sensors) and resampling algorithm to meet the requirements of the application, and register the images.


3.2. Patch extraction

The three fundamental principles of patch extraction for feature-based image registration are:

• similar patches must be extracted from each image, even if the images have very different radiometric properties; but,

• not every single patch needs to be extracted from both images; and,

• not all extracted patches need to be matched.

Redundancy is a key factor in the patch extraction and matching procedure. Although not all patches need to be extracted from each image, as many as possible must be extracted to ensure a redundancy of patches; and although not all the extracted patches need to be matched, as many as possible should be matched to ensure a redundancy of matches. It is by utilising this redundancy that poor quality matches and blunders can be eliminated from the automatic registration procedure, hence yielding a more accurate result. To ensure a redundancy of patches, four patch extraction algorithms were used: automatic thresholding, homogeneous patch extraction and two different region growing segmentation algorithms.

An automatic thresholding algorithm was developed based on the established technique of bimodal histogram splitting (Weszka, 1978). Although the quality of the results depends on the content of the image (i.e. if there is a distinct object feature on a fairly homogeneous background), it was found that reliable features could be extracted from a range of small (512×512 pixels) SAR and SPOT images. Reliability and robustness for SAR images were increased by pre-processing the images with smoothing or speckle reduction filters. Extraction of features using this technique of automatic thresholding required no user intervention whatsoever.
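The idea of bimodal histogram splitting can be illustrated with a toy threshold selector. This sketch is not Weszka's (1978) algorithm or the authors' implementation: the smoothing window and the peak-picking rule are invented for illustration, and a genuinely bimodal histogram is assumed.

```python
def bimodal_threshold(pixels, bins=256, smooth=5):
    """Pick a threshold at the valley between the two main histogram modes.

    Build a grey-level histogram, smooth it with a moving average to
    suppress spurious local maxima, locate the two strongest peaks, and
    return the bin with the lowest smoothed count between them.
    """
    hist = [0] * bins
    for p in pixels:
        hist[min(max(int(p), 0), bins - 1)] += 1
    half = smooth // 2
    # Moving-average smoothing of the histogram.
    sm = [sum(hist[max(0, i - half):i + half + 1]) /
          len(hist[max(0, i - half):i + half + 1]) for i in range(bins)]
    # Local maxima of the smoothed histogram, strongest first.
    peaks = [i for i in range(1, bins - 1) if sm[i - 1] < sm[i] >= sm[i + 1]]
    peaks.sort(key=lambda i: sm[i], reverse=True)
    lo, hi = sorted(peaks[:2])
    # Threshold = deepest valley between the two dominant peaks.
    return min(range(lo, hi + 1), key=lambda i: sm[i])
```

On an image with a distinct object against a fairly homogeneous background (the case the paper identifies as favourable), the two modes are well separated and the valley is unambiguous.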

Homogeneous patch extraction (Abbasi-Dezfouli and Freeman, 1994) detects and extracts patches by scanning the image for homogeneous regions. In this implementation, homogeneity has been defined by a manually set parameter, which specifies the maximum variation of pixel values within a patch. Results showed that patches could be extracted from a range of SAR and SPOT images featuring many different types of landcover. No pre-smoothing of the SAR images to reduce speckle was required.
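A toy version of homogeneity-constrained patch growing might look like the following flood fill. The parameter names, the 4-connected growth and the running min/max homogeneity test are illustrative assumptions, not the algorithm of Abbasi-Dezfouli and Freeman (1994).

```python
from collections import deque

def homogeneous_patches(img, max_var, min_size=4):
    """Group 4-connected pixels whose values stay within max_var of the
    patch's running min/max: a toy stand-in for homogeneous patch extraction."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    patches = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx]:
                continue
            lo = hi = img[sy][sx]
            patch, queue = [], deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                patch.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx]:
                        v = img[ny][nx]
                        # Only grow while the patch stays homogeneous.
                        if max(hi, v) - min(lo, v) <= max_var:
                            seen[ny][nx] = True
                            lo, hi = min(lo, v), max(hi, v)
                            queue.append((ny, nx))
            if len(patch) >= min_size:
                patches.append(patch)
    return patches
```

The single `max_var` parameter plays the role of the manually set homogeneity parameter described above.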

Four different segmentation algorithms were used in the patch extraction procedure: MUM (Cook et al., 1994) and RWSEG (White, 1991) for SAR images, and REGSEG (Kai and Muller, 1991) and OPTISEG (Ruskoné and Dowman, 1997) for SPOT images. Each one operates in a slightly different way, and therefore produces slightly different results. However, a common theme with all of them is that to obtain the best segmentation results (for a particular application) the user has to manually experiment with different parameters (usually by trial and error) until the desired result is achieved.

To overcome the problem of manual parameter selection, a "brute force" technique was employed: the extraction algorithm was repeatedly applied to the image using a different parameter each time. Therefore, a discrete range of parameters, rather than a single parameter, was used to generate results. Although this technique has the disadvantage of increased processing, this is outweighed by the two advantages that the method offers. Primarily, it eliminates all manual intervention in the feature extraction process. Secondly, since it increases the total quantity of extracted patches, it follows that the number of matched corresponding features is similarly increased.
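The brute-force sweep itself is generic and can be sketched in a few lines. Here `extract` stands in for whichever extraction algorithm is being driven, and the deduplication rule (rounded centroid plus area) is our own assumption, added so that patches which reappear unchanged at several parameter settings are not pooled twice.

```python
def sweep_parameters(image, extract, params):
    """Apply one patch extraction algorithm over a discrete parameter
    range and pool the results, skipping patches that reappear with a
    near-identical centroid and the same pixel count."""
    pooled = []
    for p in params:
        for patch in extract(image, p):
            cy = sum(y for y, _ in patch) / len(patch)
            cx = sum(x for _, x in patch) / len(patch)
            key = (round(cy), round(cx), len(patch))
            if all(key != k for _, k in pooled):
                pooled.append((patch, key))
    return [patch for patch, _ in pooled]
```

The cost is linear in the number of parameter settings, which is the "increased processing" trade-off the paragraph above accepts.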

The aim of the feature extraction processing step is to pass on as many patches as possible to the feature matching step. Two procedures (the use of multiple feature extraction algorithms, and the use of multiple parameters for those algorithms) have been incorporated to meet this aim, and as a consequence the system can be considered to be both automatic and robust.

As a final step in the patch extraction processing stage, the extracted patches are filtered to remove those which are unlikely to yield successful matches. The single criterion used in the filtering process is patch size: very small or very large patches were found to be less likely to represent real objects on the ground, and therefore unlikely to be successfully matched. Previous work (Dare et al., 1997) has suggested the possibility of using multiple criteria (such as shape and relative location in the image) in a more rigorous filtration procedure, but further research and development is required.
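A size-only filter of the kind described is straightforward; the numeric bounds below are invented placeholders, since the paper gives no limits.

```python
def filter_patches(patches, min_pixels=50, max_pixels=50000):
    """Discard patches unlikely to match: the single criterion is size.
    Each patch is a list of (y, x) pixel co-ordinates."""
    return [p for p in patches if min_pixels <= len(p) <= max_pixels]
```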


3.3. Patch matching

The patch matching algorithm operates in a stepwise manner where the results are refined at each stage of the processing (Fig. 4). The first step matches all the patches extracted from the original images with each other by minimising a cost function (Eq. (1)) based on four patch attributes: area, perimeter length, and width and height of the minimum bounding rectangle.

G = [|a_1 − a_2| / (a_1 + a_2)]^(1/2) + |p_1 − p_2| / (p_1 + p_2) + |r_1 − r_2| / (r_1 + r_2) + |c_1 − c_2| / (c_1 + c_2)    (1)

where a_i is the area of patch i, p_i is the perimeter length, r_i is the length of the minimum bounding rectangle, and c_i is the width of the minimum bounding rectangle. Note that the square root of the area component has been taken in order to reduce its influence on the value of G (otherwise this term dominates the cost function), and each term has been normalised so that consistent results can be achieved for both small and large patches.

Fig. 4. Patch matching algorithm.

Matching results for the first step of the patch matching algorithm are generated as follows. The cost function is repeatedly determined using the first patch in image 1 and all the patches in image 2, one by one. The pair of patches which gives the minimum value is accepted as the best match. The process is repeated for the second, third and all subsequent patches in image 1 until they have all been matched with patches in image 2. In the situation where a patch from image 2 has been matched with two different patches from image 1, the match with the lowest cost function is accepted as the correct one. The result is that all the patches in image 1 have been matched, but not necessarily all of the patches in image 2 have been matched. In order to ensure that all patches in image 2 have the opportunity to be matched with patches in image 1, the process is repeated with the order of the images reversed. The first patch in image 2 is matched with all the patches in image 1, as is the second, third, and so on. Multiple matches are again eliminated using the value of the cost function. Before the matching results can be refined in the second step of the matching procedure, the results from the two processes described above have to be combined. The result is that every patch extracted from each image has been matched with its most likely corresponding patch from the other image. Therefore, although the set of matches will be large and contain many false matches, it should also contain most correct matches. The aim of the next two processing steps is to separate the correct matches from the false ones.
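The first matching step can be sketched as follows, assuming each patch has been reduced to its four attribute values (area, perimeter, and bounding-rectangle length and width). The greedy one-to-one resolution of duplicate pairings by cost is our reading of the elimination rule described above, not code from the paper.

```python
def cost(p1, p2):
    """Eq. (1): normalised attribute differences; the area term is
    square-rooted to curb its influence on the total."""
    a1, pe1, r1, c1 = p1
    a2, pe2, r2, c2 = p2
    return ((abs(a1 - a2) / (a1 + a2)) ** 0.5
            + abs(pe1 - pe2) / (pe1 + pe2)
            + abs(r1 - r2) / (r1 + r2)
            + abs(c1 - c2) / (c1 + c2))


def match_patches(patches1, patches2):
    """First matching step: best partner for every patch, in both
    directions, with multiple matches resolved by lowest cost."""
    pairs = {}
    # Forward pass: best image-2 partner for every image-1 patch...
    for i, p in enumerate(patches1):
        j = min(range(len(patches2)), key=lambda k: cost(p, patches2[k]))
        pairs[(i, j)] = cost(p, patches2[j])
    # ...and the reverse pass, so every image-2 patch also gets a chance.
    for j, q in enumerate(patches2):
        i = min(range(len(patches1)), key=lambda k: cost(patches1[k], q))
        pairs[(i, j)] = cost(patches1[i], q)
    # Resolve multiple matches: each patch keeps only its cheapest pairing.
    kept = {}
    for (i, j), g in sorted(pairs.items(), key=lambda kv: kv[1]):
        if all(i != a for a, _ in kept) and all(j != b for _, b in kept):
            kept[(i, j)] = g
    return list(kept)
```

As the paragraph above notes, this deliberately over-matches: false pairings survive here and are removed by the shape and separation tests that follow.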

The second step in the patch matching algorithm refines matches by comparing patch shape. Previous studies (Abbasi-Dezfouli and Freeman, 1994; Morgado and Dowman, 1997) have used chain coding as a shape descriptor to match patches. However, unlike in these previous studies, this procedure does not produce reliable results for patches extracted from SAR and SPOT imagery, so a much simpler technique is introduced here. To detect a false match, a pair of matched patches is laid on top of each other with their centroids coinciding, and the number of pixels that they have in common is counted. Patches with similar shapes will have a large number of pixels in common, whereas patches with different shapes will have fewer pixels in common. In order that results from different pairs of patches can be


compared, the results are expressed as a ratio of pixels in common to the total number of pixels in the two patches. By using a fixed threshold for the minimum amount of overlap, it is simple to separate the correct matches from the false matches. Obviously, it is possible that some false matches may still be accepted as true matches, but now the correct matches outnumber the false matches. The third stage of the matching procedure further refines the results to ensure that no false matches remain.
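The centroid-overlap shape test might be sketched like this. The exact normalisation is not given in the text, so a Dice-style ratio (equal to 1.0 for identical shapes) is assumed here.

```python
def overlap_ratio(patch1, patch2):
    """Shape test: overlay two patches with centroids coinciding and
    return a normalised count of the pixels they have in common.
    Each patch is a list of (y, x) pixel co-ordinates."""
    def centred(patch):
        # Shift the patch so its (rounded) centroid sits at the origin.
        cy = round(sum(y for y, _ in patch) / len(patch))
        cx = round(sum(x for _, x in patch) / len(patch))
        return {(y - cy, x - cx) for y, x in patch}
    s1, s2 = centred(patch1), centred(patch2)
    # Dice-style ratio: 1.0 for identical shapes, near 0 for disjoint ones.
    return 2 * len(s1 & s2) / (len(s1) + len(s2))
```

A fixed minimum-overlap threshold on this ratio then separates plausible matches from mismatched shapes, as described above.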

The third stage of the matching process removes the remaining false matches (if any still remain) by comparing the patch separation distances (the difference between the centroid co-ordinates for a pair of matched patches) for all the matched patches. Since the images being matched are approximately aligned with each other, the patch separation for all correctly matched patches should be very similar (where terrain relief is low). By plotting a histogram of all the patch separations, a cluster will exist which represents the correct matches. The patch separations for the false matches will be distributed randomly. Therefore, by eliminating all the matches with a patch separation distance outside that cluster, only the correct matches remain. However, for this method to produce reliable results, there must be a significant number of correctly matched patches in order to identify the cluster.
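The separation-histogram refinement can be sketched with a coarse two-dimensional histogram of centroid offsets. The bin size and the one-bin tolerance are invented parameters; the principle is simply that correct matches cluster around one modal offset while false matches scatter.

```python
def filter_by_separation(matches, bin_size=2.0, tolerance=1):
    """Third refinement step: histogram the centroid offsets of all
    matched pairs and keep only the matches near the modal offset.
    Each match is a pair of (y, x) centroids, one per image."""
    bins = {}
    for m in matches:
        (y1, x1), (y2, x2) = m
        key = (round((y2 - y1) / bin_size), round((x2 - x1) / bin_size))
        bins.setdefault(key, []).append(m)
    # The most populated bin corresponds to the cluster of correct matches.
    mode = max(bins, key=lambda k: len(bins[k]))
    keep = []
    for key, ms in bins.items():
        # Accept bins within `tolerance` bins of the modal offset.
        if abs(key[0] - mode[0]) <= tolerance and abs(key[1] - mode[1]) <= tolerance:
            keep.extend(ms)
    return keep
```

As noted above, the mode is only meaningful if correct matches are numerous enough to dominate their bin.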

The result of applying the patch matching algorithm is a set of tie points derived from the co-ordinates of the centroids of the matched patches. Assuming that sufficient patches have been matched, the tie points can be used to register the images using a first order polynomial transformation function.

It is now possible to improve the matching results (i.e. increase the quantity of matched patches) by repeating the feature extraction procedure for each image, but this time using the information contained in the co-registered image. In other words, features extracted from the slave image are used to improve the extraction of features from the master image (to which the slave image has been registered), and vice versa. This is possible since some segmentation algorithms can utilise data from more than one source to segment an image; in this case the sources are the master image and the co-registered slave image. The newly extracted features (typically numbering about 10% of the total number of correctly matched features) are matched by following the matching procedure described above, and the resulting tie points are used to register the images using a first order transformation function. The resulting registration accuracy is then refined in the final step of the registration algorithm using edge extraction and matching.

3.4. Edge extraction and matching

Edge extraction and matching are the final steps in the proposed automatic registration model. Edge matching is used to increase the quantity, and hence the spatial distribution, of tie points and therefore refine the quality of the registration. It is a necessary step in the processing chain since the patch matching procedure will only produce a single pair of tie points from each pair of matched patches, so the number of tie points found will always be limited by the number of significant areal features in the images. However, although edge matching will produce a larger quantity of tie points than patch matching, it cannot be used in isolation: to produce reliable results, the images have to be approximately registered in advance, and matched patches have to have been found in the pair of images.

An important aspect of edge matching, as with patch matching, is that it is essential that only extracted edges which represent the same object in the scene are matched. To ensure that this is the case, only those pixels in the boundaries of patches which are already known to represent the same object on the ground are used in the edge matching procedure. This is done by using the boundaries of matched patches as masks to eliminate all other unwanted edge information from the images.

Edges are extracted from the master and slave images using the Sobel operator (Gonzalez and Woods, 1992). Smoothing the optical image and reducing speckle in the SAR image improves the final results slightly, but the difference is not profound. After extraction, the edge strength images are processed using a non-maximal suppression algorithm (Lewis, 1988), a thresholding algorithm and a clutter removal algorithm (Dare, 1999). As a result, all weak and insignificant edges are eliminated. The corresponding edge direction images are filtered so

( )P. Dare, I. Dowmanr ISPRS Journal of Photogrammetry & Remote Sensing 56 2001 13–28 21

that the only pixels remaining are those associatedwith edge strength pixels.

The edge matching algorithm (Newton et al., 1994) uses dynamic programming and is loosely based on a method described by Maître and Wu (1989). The algorithm operates by moving along each edge in the master image pixel by pixel, and by using the transformation function it predicts where each pixel will be located in the slave image. For each edge pixel in the master image, a search area is set up around the predicted location in the corresponding slave image. A cost function, based on pixel separation and difference in direction, is evaluated for all slave image edge pixels within a certain radius (3 pixels) of the predicted position of the master image edge pixel, and the slave pixel with the lowest cost function value is accepted as the match. Only one slave image edge pixel can be matched to each master image edge pixel, and vice versa.

An important aspect of this algorithm is that when all the edges in the master image have been considered, the edge with the lowest average cost function value is used to update the initial transformation. The whole matching process is then repeated, but this time from improved starting conditions. In addition, the edge used to update the transformation is excluded from the matching. This process is repeated until all of the edges in the master image have been used to update the transformation, and no edges remain.

As a result of applying the edge extraction and matching algorithm, the number of new tie points (the matched edge pixels) found in the images is greatly increased. Depending on the application, it is left to the user to select the most appropriate transformation function.

4. Automatic registration of small SAR and SPOT images

The proposed automatic registration model was evaluated using ERS-1 SAR images and SPOT PAN 1A images. Two small (512 × 512 pixels) sub-images (Camargue and Istres) were extracted from each of the full scene images. The terrain height variation in one pair of images (Istres) was approximately 25 m, whilst in the other (Camargue) it was less than 5 m.

The SPOT images were chosen to be the slave images and were therefore aligned with the SAR images using four manually selected tie points and an affine transformation function (Figs. 5 and 6).
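The affine alignment from a handful of tie points amounts to a small least-squares problem, which can be sketched in numpy (the tie-point coordinates below are invented for the toy check):

```python
import numpy as np

def fit_affine(master_pts, slave_pts):
    """Least-squares affine transformation mapping master (x, y) to slave
    (x', y'). At least three non-collinear tie points are needed; four
    were used for the initial alignment described above."""
    m = np.asarray(master_pts, float)
    s = np.asarray(slave_pts, float)
    A = np.column_stack([m, np.ones(len(m))])        # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, s, rcond=None)   # 3x2 parameter matrix
    return coeffs

def apply_affine(coeffs, pts):
    p = np.asarray(pts, float)
    return np.column_stack([p, np.ones(len(p))]) @ coeffs

# toy check: four tie points related by a scale of 2 and a shift of (5, 10)
master = [(0, 0), (100, 0), (0, 100), (100, 100)]
slave = [(5, 10), (205, 10), (5, 210), (205, 210)]
T = fit_affine(master, slave)
pred = apply_affine(T, [(50, 50)])
```

With more than three tie points the system is overdetermined and the least-squares fit absorbs small matching errors.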

Fig. 5. SAR and SPOT test images of Camargue.


Fig. 6. SAR and SPOT test images of Istres.

4.1. Patch extraction and matching

In order to validate the proposed model of patch extraction and matching, two tests were performed:

1. patch extraction and matching using a single patch extraction algorithm with manually selected parameters for each image; and

2. patch extraction and matching using multiple patch extraction algorithms with automatically selected parameters.

The purpose was to investigate how well the proposed method of automatic registration (represented by test 2) compared to the more commonly used method (represented by test 1). For the first test, it was found by manual investigation that the best combination of patch extraction algorithms and parameters (i.e. the combination which yielded the greatest number of correctly matched patches) was different for each pair of test images. For the Camargue images, the greatest number of correct matches was found using the MUM algorithm for the SAR image and the REGSEG algorithm for the SPOT image, whereas for the Istres images the greatest number of correct matches was found using the RWSEG algorithm for the SAR image and the homogeneous patch extraction algorithm for the SPOT image. These algorithms were used to extract patches, which were subsequently matched. The matched patches are shown in Figs. 7 and 8.

Fig. 7. Matched patches found in SAR and SPOT images of Camargue.

Fig. 8. Matched patches found in SAR and SPOT images of Istres.

The second test (using multiple extraction algorithms and a range of parameters) operated fully automatically, with no user intervention, and gave improved matching results (Figs. 9 and 10).

A comparison of these results clearly shows that not only is the new model able to successfully extract and match patches, but that the new aspects of the model have greatly improved the results. Figs. 7 and 9 show that by utilising four patch extraction algorithms rather than just one, the number of correct patches found in the Camargue images has increased from four to seven. For the Istres images the number of matches has only increased by one, but a comparison of the shapes of corresponding patches in Figs. 8 and 10 shows that the quality of the matches has improved: pairs of matched patches in Fig. 10 have a more similar shape than pairs of matched patches in Fig. 8. This is also true of the results for the Camargue images, but the differences are a little less obvious. Finally, by using multiple extraction algorithms and a brute force method of parameter selection, the extraction and matching procedures operate fully automatically. The manual work required in test 1 to determine the best combination of algorithms took significantly longer than the automatic method of test 2. The processing time for patch extraction and matching for each pair of images was less than 10 min on a SUN SPARC workstation. The tie points derived from the centroids of the matched patches shown in Figs. 9 and 10 were used to register the images, using an affine transformation function.

Fig. 9. Matched patches found in SAR and SPOT images of Camargue using multiple extraction algorithms.

Fig. 10. Matched patches found in SAR and SPOT images of Istres using multiple extraction algorithms.

4.2. Edge extraction and matching

Edges were subsequently extracted from each image using the Sobel operator and post-processed according to the method described above. Extracted edges are shown in Figs. 11 and 12 for Camargue and Istres, respectively.

For each image pair, the edges were matched using the edge matching algorithm described above. For the Camargue images 1692 matched points were found (24% of the total number of edge pixels in both images), whereas for the Istres images 1150 matched points were found (23% of the total number of edge pixels). For each image pair the matched points were randomly split into two equal groups: one was used to register the images using an affine transformation function, whilst the other was used as an independent set of check points. From these check points it was possible to calculate the root mean square (RMS) residual of the transformation. The RMS residual for the Camargue images was 1.6 pixels, and for the Istres images it was 1.5 pixels. The spatial distribution of the matched points and the directional distribution of the residuals are shown for each pair of images in Fig. 13 (note that for clarity only 10% of the points have been plotted, and the magnitude of the residuals has been increased by a factor of 80). Fig. 13 shows that the matched points are reasonably well distributed across each image and that the magnitude and directional distribution of the residuals appear to be random; there is no obvious systematic error in the results. It should be remembered that the affine transformation function used to generate these results cannot model all the geometric distortions present in the images.

Fig. 11. Post-processed edges extracted from SAR and SPOT images of Camargue.

Fig. 12. Post-processed edges extracted from SAR and SPOT images of Istres.
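The control/check split and RMS residual computation can be sketched on synthetic matched points (the affine parameters and the 1.5-pixel matching noise below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic matched points: a known affine relation plus matching noise
master = rng.uniform(0, 512, size=(200, 2))
true_T = np.array([[1.01, 0.02], [-0.02, 0.99], [3.0, -2.0]])  # 3x2 affine
slave = np.column_stack([master, np.ones(200)]) @ true_T
slave += rng.normal(0.0, 1.5, slave.shape)  # ~1.5-pixel noise per axis

# random split into equal control and check groups
idx = rng.permutation(200)
ctrl, chk = idx[:100], idx[100:]

# fit the affine transformation on the control half only
A = np.column_stack([master[ctrl], np.ones(100)])
T, *_ = np.linalg.lstsq(A, slave[ctrl], rcond=None)

# RMS residual of the independent check points
pred = np.column_stack([master[chk], np.ones(100)]) @ T
rms = np.sqrt(np.mean(np.sum((slave[chk] - pred) ** 2, axis=1)))
```

Because the check points take no part in the fit, their RMS residual reflects the matching noise rather than the quality of the fit itself.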

4.3. Conclusions

There are a number of important conclusions which can be drawn from an analysis of the small-image registration results. The first concerns the comparison of the new registration model with the traditional one. It was found almost immediately that, to obtain the best results from different images, different combinations of patch extraction algorithms are required. This fact alone justifies the use of multiple feature extraction algorithms. Combined with the brute force method of parameter selection, the strategy becomes not only more robust but also fully automatic, unlike the traditional method.

The edge extraction and matching algorithm, which also operates fully automatically, was able to locate between 1000 and 2000 matched points in each pair of images, hundreds of times more than could be achieved manually. In each example the RMS residual was approximately 1.5 pixels.

Fig. 13. Matched points and residuals for Camargue and Istres test images.


Although this appears to be a reasonably accurate result, it does not give a complete picture of the situation: it really only signifies that the process is free of blunders (which is further confirmed by the plot of residuals in Fig. 13). To achieve a more accurate registration result (i.e. a lower RMS residual) it is necessary to use a more appropriate transformation function (perhaps a model which incorporates terrain corrections) rather than "better" tie points.

The question of the number of tie points found should also be addressed. In a pair of small images, 2000 tie points is far in excess of what is required to calculate the parameters of a simple transformation function. However, there is no reason why a smaller set of points with an even spatial distribution cannot be selected from the total number and used for the registration, with the remainder being discarded.
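One simple way to select a smaller, evenly distributed subset is to keep at most one tie point per cell of a coarse grid. This is an illustrative scheme, not one prescribed by the paper:

```python
def thin_by_grid(points, image_shape, cells=8):
    """Keep at most one tie point per cell of a coarse grid, preserving
    spatial coverage while discarding redundant points."""
    h, w = image_shape
    kept, occupied = [], set()
    for x, y in points:
        cell = (int(x * cells / w), int(y * cells / h))
        if cell not in occupied:
            occupied.add(cell)
            kept.append((x, y))
    return kept

# two near-duplicate clusters collapse to one representative each
pts = [(10, 10), (12, 11), (300, 300), (305, 302), (500, 40)]
kept = thin_by_grid(pts, (512, 512))
```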

5. Automatic registration of full scene SAR and SPOT images

To further evaluate the automatic registration model, tests were performed using full scene SAR and SPOT images. As with the small images, the full scene images were initially approximately aligned using some manual tie points. In order to minimise processing time, the images were subdivided into tiles and processed on a tile by tile basis. For each pair of coarsely corresponding tiles, patches were extracted and matched using the methods described above. As a result, a total of 39 correctly matched patches were identified across the images.
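The tile subdivision can be sketched as a generator of tile windows; the tile size and overlap below are assumptions, as the paper does not state the values used:

```python
def iter_tiles(height, width, tile=512, overlap=64):
    """Yield (row, col, tile_height, tile_width) windows covering a scene.
    Overlapping tiles keep features near tile borders matchable; the tile
    size and overlap here are illustrative assumptions."""
    step = tile - overlap
    for r in range(0, height, step):
        for c in range(0, width, step):
            yield r, c, min(tile, height - r), min(tile, width - c)

# a full scene is processed tile by tile, pairing each tile with the
# coarsely corresponding tile in the other image
tiles = list(iter_tiles(1200, 1000))
```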

In order to quantitatively assess the quality of the matched patches and the accuracy of the subsequent registration, the matched points were split into two equal groups: one group was used to generate the parameters of an affine transformation function, and the other group was used as check points. In order to confirm the results, the process was then repeated with the two groups interchanged: the check points were now used as tie points, and the tie points were used as check points. The RMS residuals are shown in Table 1.

Table 1. Results from patch matching: RMS residuals (in pixels) of check points for two tests, where control and check points were exchanged.

                       Test 1   Test 2
RMS residual in x       14.2     12.7
RMS residual in y        7.5      4.4
Total RMS residual      16.0     13.4

Table 2. Results from edge matching: RMS residuals (in pixels) of check points for two tests, where control and check points were exchanged.

                       Test 1   Test 2
RMS residual in x        9.9      9.8
RMS residual in y        4.5      4.5
Total RMS residual      10.9     10.8

The results show that the proposed new model for patch extraction and matching can be successfully applied to full scene multi-sensor images. However, the difficulty of extracting similar features from both datasets led to only 39 patches being successfully matched across the images. Even so, this is a sufficient number to transform a set of check points and calculate RMS residuals. In x (the cross-track direction), the check point residuals ranged from approximately −33 to 20 pixels, and in y (the along-track direction) from −17 to 21 pixels. The means of both x and y were close to 0, implying that there is no systematic error across the whole scene. However, it is evident that the y residuals are smaller than the x residuals by a factor of two. Results from the two tests were slightly different, but not enough to warrant concern. The results in Table 1 (and Table 2 below) refer to the RMS residuals for all check points across the full scene images. However, patches and edges were matched on a tile by tile basis, as mentioned above. Tie points found in individual tiles would have had much smaller RMS residuals than the ones in Table 1, sufficient in fact to meet the approximate alignment of ca. 3 pixels required by the edge matching algorithm.

In order to refine the results using edge matching, the edge extraction/matching algorithm described above was applied to the SAR and SPOT imagery. A total of 3648 matched points were found using the edge matching technique. Their spatial distribution across the image was similar to that of the matched points found using the patch matching technique. Half of these matched points were used to generate the parameters of an affine transformation function, which in turn was used to register the images. The other half were used as check points. As with the previous tests on points derived from matched patches, these two groups were interchanged to confirm that the results were consistent. The RMS residuals are shown in Table 2.

The edge extraction and matching algorithm provided almost 100 times more matched points than the patch extraction/matching algorithm. As expected, Table 2 shows that the total RMS residuals are smaller than those generated from patch matching. The results of the two tests were very similar (confirming the consistency of the results), but it is still apparent that the RMS residuals in y are significantly smaller than the RMS residuals in x, once again by a factor of two.

The most likely explanation is that a linear transformation function has been used to correct for non-linear distortions in the images, particularly distortions caused by terrain relief. By comparing the images with a topographic map of the region, it was estimated that the tie points in the images represented points on the ground with a vertical distribution of about 50 m. In the SPOT image, terrain induced distortions are limited to the cross-track (in this case x) direction, whilst in the SAR image, foreshortening due to terrain is also manifested in the cross-track (again x) direction. Therefore, it is expected that the x (cross-track) residuals will be greater than the y (along-track) residuals.

However, rather than concentrating on the RMS residuals derived from a low order transformation function, attention should be focussed on the fact that an algorithm has been able to locate almost 4000 matched points across two radiometrically very different images, fully automatically. The processing time for the entire procedure (patch and edge extraction and matching) was of the order of a few tens of minutes (depending on image content) on a SUN SPARC workstation.

6. Discussion

This paper has presented an improved model for automatic multi-source image registration based on feature matching, and the model has been validated using both small scene and full scene SAR and SPOT images. The proposed strategy is based on a procedure in which images that are approximately aligned are registered progressively more accurately using patch matching and edge matching.

The traditional approach to patch matching has been followed, but a number of significant improvements have been incorporated into the procedure, namely the use of multiple patch extraction algorithms and brute force parameter selection, and the use of features extracted from each image to improve extraction of features from the opposing image. Results show that not only do these changes increase the quantity and quality of matched patches, but they also allow the system to operate fully automatically. The use of edge matching in conjunction with patch matching has been shown to improve results considerably in terms of the number of tie points found in the two images.

Acknowledgements

The research presented in this paper was carried out as part of a PhD project (Dare, 1999) at the Department of Geomatic Engineering, University College London, and was supported by the Natural Environment Research Council, UK, grant number GT4/95/207D. SAR data was provided by the European Space Agency. SPOT data was provided by SPOT Image for an OEEPE project on aerial triangulation of SPOT data. The authors extend their thanks to the reviewers for their helpful and constructive comments.

References

Abbasi-Dezfouli, M., Freeman, T.G., 1994. Stereo-image registration based on uniform patches. International Archives of Photogrammetry and Remote Sensing 30 (3), 1–8.

Boardman, D., Dowman, I., Chamberlain, A., Fritsch, D., Newton, W., 1996. An automated image registration system for SPOT data. International Archives of Photogrammetry and Remote Sensing 31 (4), 128–133.

Brown, L., 1992. A survey of image registration techniques. ACM Computing Surveys 24 (4), 325–376.

Cook, R., McConnell, I., Oliver, C., Welbourne, E., 1994. MUM (Merge Using Moments) segmentation for SAR images. In: Franchetti, G. (Ed.), SAR Data Processing for Remote Sensing. Proc. SPIE, vol. 2316, pp. 92–103.

Dare, P., 1999. New techniques for the automatic registration of microwave and optical remotely sensed images. PhD Thesis, Department of Geomatic Engineering, University College London, UK.

Dare, P., Ruskoné, R., Dowman, I., 1997. Algorithm development for the automatic registration of satellite images. Proc. Image Registration Workshop, NASA Goddard Space Flight Center, Maryland, USA, pp. 83–88.

Dowman, I., Dare, P., 1999. Automated procedures for multisensor registration and orthorectification of satellite images. International Archives of Photogrammetry and Remote Sensing 32 (7-4-3 W6), 37–44.

Fonseca, L., Manjunath, B., 1996. Registration techniques for multisensor remotely sensed imagery. Photogrammetric Engineering and Remote Sensing 62 (9), 1049–1056.

Förstner, W., Gülch, E., 1987. A fast operator for detection and precise location of distinct points, corners and centres of circular features. Proc. ISPRS Intercommission Workshop on Fast Processing of Photogrammetric Data, Interlaken, Switzerland, pp. 281–305.

Gonzalez, R., Woods, R., 1992. Digital Image Processing. Addison-Wesley, Reading, MA, USA.

Harris, R., 1997. Trends in remote sensing. Proc. Remote Sensing Society Annual Student Meeting "Developing Space '97", University College London, UK, pp. 5–6.

Illingworth, J., Kittler, J., 1988. A survey of the Hough transform. Computer Vision, Graphics and Image Processing 44 (1), 87–116.

Kai, L., Muller, J.-P., 1991. Segmenting satellite imagery: a region growing scheme. Proc. IEEE 11th Annual International Geoscience and Remote Sensing Symposium "Remote sensing: global monitoring for earth management", Espoo, Finland, 3–6 June 1991, vol. 2, pp. 1075–1078.

Lewis, P., 1988. On creating an initial segmentation of remotely sensed images as an input to an automated knowledge based segmentation system. MSc Thesis, Department of Photogrammetry and Surveying, University College London, UK.

Maître, H., Wu, Y., 1989. A dynamic programming algorithm for elastic registration of distorted images based on autoregressive models. IEEE Transactions on Acoustics, Speech and Signal Processing 37 (2), 288–297.

Moravec, H.P., 1977. Towards automatic visual obstacle avoidance. Proc. 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, USA, pp. 584–592.

Morgado, A., Dowman, I., 1997. A procedure for automatic absolute orientation using aerial photographs and a map. ISPRS Journal of Photogrammetry and Remote Sensing 52 (4), 169–182.

Newton, W., Gurney, C., Sloggett, D., Dowman, I., 1994. An approach to the automatic identification of forests and forest change in remotely sensed images. International Archives of Photogrammetry and Remote Sensing 30 (3), 607–614.

Pohl, C., van Genderen, J., 1998. Multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing 19 (5), 823–854.

Pratt, W., 1991. Digital Image Processing, 2nd edn. Wiley, New York, USA.

Ruskoné, R., Dowman, I., 1997. Segmentation design for an automatic multisource registration. In: McKeown, D., McGlone, C., Jamet, O. (Eds.), Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision III. Proc. SPIE, vol. 3072, pp. 307–317.

Weszka, J., 1978. A survey of threshold selection techniques. Computer Graphics and Image Processing 7 (2), 259–265.

White, R., 1991. Change detection in SAR imagery. International Journal of Remote Sensing 12 (2), 339–360.