GRAPH MATCHING BASED CHANGE DETECTION IN SATELLITE IMAGES
Murat Ilsever and Cem Unsalan
Computer Vision Research Laboratory, Department of Electrical and Electronics Engineering
Yeditepe University
Istanbul, 34755 TURKEY
ABSTRACT
Change detection from bitemporal satellite images (taken
from the same region in different times) may be used in
various applications such as forest monitoring, earthquake
damage assessment, and detection of unlawful occupation. There are
various approaches, based on different principles, to detect
changes from satellite images. In this study, we propose
a novel change detection method based on structure information. Therefore, our method can be called structural change detection. To summarize the structure in an image,
we benefit from local features and their graph based rep-
resentation. Extracting the structure from both images, we
benefit from graph matching to detect changes. We tested our
method on 18 Ikonos image pairs and discuss its strengths
and weaknesses.
1. INTRODUCTION
Change detection from bitemporal satellite images may be
used to solve various remote sensing problems. The most
important of these are earthquake damage assessment, for-
est monitoring, and detecting unlawful occupation. To detect
changes from bitemporal images, various methods are pro-
posed in the literature. These can be broadly grouped as pixel,
texture, spectrum, and structure based methods. There are ex-
cellent review papers on these topics [1, 2, 3].
Initial change detection studies mostly focused on pixel
based methods. Recent satellite images have sufficient resolution such that object details can be observed. Therefore, detailed change detection can be performed on them. Thomas et al. [4] benefit from this detailed information to detect hurricane damage. In our previous studies, we benefit
from graph theory and local feature representations to grade
changes [5, 6]. In this study, our focus is detecting changes
using local features in a graph formalism. To represent the
structure, we extract local features from both images. Then,
we represent each local feature set (extracted from different
images) in a graph formation separately. This allows us to
detect changes using graph matching.
This work is supported by TUBITAK under project no 110E302.
2. LOCAL FEATURE EXTRACTION AND GRAPH REPRESENTATION
This section summarizes how we represent the structure in
the image. In the following section, we will benefit from this
representation to detect changes in bitemporal images.
2.1. Local Feature Extraction
We use the features from accelerated segment test (FAST) method [7] to detect local features in images in a fast and reliable manner. This method can briefly be explained as follows.
For each candidate pixel, its 16 neighbors are checked. If
there exist nine contiguous pixels passing a set of tests, the
candidate pixel is labeled as a local feature. These tests are
done using machine learning techniques to speed up the op-
eration. We used FAST based local features in our previous
studies and obtained good results [8]. Therefore, we also use
them in this study.
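The segment test above can be sketched as follows. This is a minimal, pure-Python illustration of the idea (a 16-pixel ring around the candidate, nine contiguous pixels all brighter or all darker than the center), not the learned decision-tree implementation of [7]; the threshold t and the sample ring values are assumptions for illustration.

```python
def has_contiguous_run(flags, n=9):
    """Check for n contiguous True values on a circular 16-pixel ring."""
    extended = flags + flags[:n - 1]  # unwrap the circle so runs can wrap around
    run = 0
    for f in extended:
        run = run + 1 if f else 0
        if run >= n:
            return True
    return False

def fast_segment_test(center, ring, t=20, n=9):
    """Simplified FAST test: the candidate passes if n contiguous ring
    pixels are all brighter than center + t, or all darker than center - t."""
    brighter = [p > center + t for p in ring]
    darker = [p < center - t for p in ring]
    return has_contiguous_run(brighter, n) or has_contiguous_run(darker, n)

# A corner-like ring: 10 bright pixels in a row, the rest near the center value.
print(fast_segment_test(100, [200] * 10 + [100] * 6))  # True
print(fast_segment_test(100, [100] * 16))              # False (uniform patch)
```

The actual FAST detector orders these comparisons with a learned decision tree so most candidates are rejected after a few pixel reads; the brute-force loop above only conveys the acceptance criterion.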
2.2. Graph Representation
To extract the structure information from local features, we
represent them in a graph form. A graph G is represented
as G = (V,E), where V is the vertex set and E is the edge
matrix showing the relations between these vertices. Here
vertices are local features extracted by FAST. The edges are formed between vertices based solely on their spatial distance: if the distance between two vertices is smaller than a threshold, there is an edge between them. In this study, we set this distance threshold to 10 pixels, depending on the characteristics of the objects in the image.
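The distance-thresholded graph construction above can be sketched as follows (pure Python; the point coordinates are made up for illustration, and the 10-pixel threshold is the value used in this study):

```python
from math import dist  # Euclidean distance, Python 3.8+

def build_graph(points, max_dist=10.0):
    """Connect two local features by an edge when their Euclidean
    distance is below max_dist (10 pixels in this study)."""
    n = len(points)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if dist(points[i], points[j]) < max_dist:
                edges.add((i, j))
    return list(range(n)), edges

vertices, edges = build_graph([(0, 0), (3, 4), (50, 50)])
print(edges)  # {(0, 1)} -- distance 5 < 10; the point (50, 50) stays isolated
```

The O(n^2) pairwise loop is fine for a sketch; with many features a spatial index (e.g. a k-d tree) would be the natural replacement.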
3. DETECTING CHANGES BY GRAPH MATCHING
Having formed graphs from both images separately, we apply graph matching between them. In matching the graphs, we apply constraints both in the spatial domain and in the neighborhood.
We can summarize this method as follows. Let the graph
formed from the first and second images be represented as
G1(V1, E1) and G2(V2, E2). In these representations, V1 =
978-1-4673-1159-5/12/$31.00 ©2012 IEEE IGARSS 2012
{f1, ..., fn} holds the local features from the first image and
V2 = {g1, ..., gm} holds the local features from the second
image. We first take spatial constraints in graph matching.
We assume that two vertices match if the spatial distance be-
tween them is smaller than a threshold. In other words, fi
and gj are said to be matched if ||fi − gj || < δ, δ being a
threshold. This threshold adds a tolerance to possible image
registration errors. Non-matched vertices from both graphs
represent possible changed objects (represented by their lo-
cal features). We can also add neighborhood information to
graph matching. To do so, we first eliminate vertices having fewer neighbors than a given number. Then, we match these refined
vertices. This way, we eliminate some local features having
no neighbors (possible noise regions).
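The two matching constraints above can be sketched as follows. The tolerance delta, the neighborhood radius, and the example coordinates are assumptions for illustration; the paper fixes a 10-pixel edge distance and (in the experiments) a three-neighborhood constraint, but delta depends on the registration quality of the image pair.

```python
from math import dist

def match_features(feats1, feats2, delta=5.0):
    """Spatial constraint: a feature is matched if some feature in the
    other image lies within delta pixels of it. Unmatched features from
    either image indicate possible changed objects."""
    unmatched1 = [f for f in feats1
                  if all(dist(f, g) >= delta for g in feats2)]
    unmatched2 = [g for g in feats2
                  if all(dist(f, g) >= delta for f in feats1)]
    return unmatched1, unmatched2

def filter_by_neighbors(feats, min_neighbors=3, radius=10.0):
    """Neighborhood constraint: drop features with fewer than
    min_neighbors other features within radius (possible noise)."""
    return [f for f in feats
            if sum(1 for g in feats
                   if g != f and dist(f, g) < radius) >= min_neighbors]

feats1 = [(0, 0), (1, 1), (100, 100)]
feats2 = [(0, 1), (1, 0)]
u1, u2 = match_features(feats1, feats2)
print(u1)  # [(100, 100)] -- a changed-object candidate
```

In the full method the neighborhood filter is applied to each graph before matching, so isolated noise features never reach the spatial test.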
4. EXPERIMENTS
We tested our method on 18 registered Ikonos image sets ob-
tained from different regions of Turkey. They hold a total of
717 changed objects. In this section, we provide a sample
change detection result. We also provide the quantitative ob-
ject based change detection results.
4.1. A Sample Test Result
We provide a sample test image set from Adana region. We
provide the local features extracted from both images in
Fig. 1(a). We also provide the change detection results using only spatial constraints in Fig. 1(b) and using the three-neighborhood constraint in Fig. 1(c). As can be seen, non-matched
local features indicate the changed objects.
Fig. 1. The Adana test image set and change detection results: (a) extracted local features for the test image set; (b) using only spatial constraints; (c) using the three-neighborhood constraint.
4.2. Object based Change Detection Results
We finally quantify object based change detection results
in this section. We benefit from two previously introduced performance criteria, Detection Performance (DP) and Branching Factor (BF), defined in [9]. These criteria are
defined as
DP = TP / (TP + FN)    (1)

BF = FP / (TP + FP)    (2)
where TP is the number of truly detected changed objects in
the ground truth image. Changed objects are assumed to be
truly detected if any object in the resulting image overlaps the
ground truth object. FN is the number of changed objects in
the ground truth image which are not detected. FP refers to
the extra object labels. For DP , we obtain these numbers in
terms of objects. However, for BF they can only be calcu-
lated in terms of extracted local features.
The performance results are as follows. Over 717 changed
objects, we obtain TP = 432 and FN = 285. Therefore, we
obtain the detection performance as DP = 0.6025. For the
branching factor, we obtain BF = 0.4614. These results are
promising taking into account the object level complexity and
image types in our test set.
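Eqs. (1) and (2) and the reported object-level counts can be checked directly. Only DP is reproduced here, since the paper computes FP at the level of extracted local features and the feature-level count is not listed.

```python
def detection_performance(tp, fn):
    """Eq. (1): DP = TP / (TP + FN)."""
    return tp / (tp + fn)

def branching_factor(tp, fp):
    """Eq. (2): BF = FP / (TP + FP)."""
    return fp / (tp + fp)

# Object-level counts reported in the experiments: TP = 432, FN = 285
# (432 + 285 = 717 changed objects in total).
print(round(detection_performance(432, 285), 4))  # 0.6025
```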
5. CONCLUSIONS
In this study, we propose a novel change detection method
based on local features and graph matching. This method can
detect object based changes. The idea behind the method is as
follows. From each image, local feature points are extracted.
They are represented by two separate graphs. These graphs
are matched to eliminate similar local features. While match-
ing local features, neighborhood constraints are imposed
on them. Non-matched feature locations represent possible
changed regions. We tested our method on 18 sets of Ikonos
images and obtained promising results. This study may be
extended further by adding segmentation information. Then,
the local features labeled as change can be used to select
changed segments as well.
6. REFERENCES
[1] A. Singh, “Digital change detection techniques using
remotely-sensed data,” International Journal of Remote Sensing, vol. 10, no. 6, pp. 989–1003, 1989.
[2] J. F. Mas, “Monitoring land-cover changes: a comparison
of change detection techniques,” International Journal of Remote Sensing, vol. 20, no. 1, pp. 139–152, 1999.
[3] D. Lu, P. Mausel, E. Brondizio, and E. Moran, “Change
detection techniques,” International Journal of Remote Sensing, vol. 25, no. 12, pp. 2365–2401, 2004.
[4] J. Thomas, A. Kareem, and K. W. Bowyer, “Towards a ro-
bust automated hurricane damage assessment from high-
resolution images,” in 13th International Conference on Wind Engineering (ICWE 13), 2011.
[5] C. Unsalan, “Measuring land development in urban re-
gions using graph theoretical and conditional statistical
features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 12, pp. 3989–3999, 2007.
[6] B. Sırmacek and C. Unsalan, “Using local features to
measure land development in urban regions,” Pattern Recognition Letters, vol. 31, pp. 1155–1159, 2010.
[7] E. Rosten, R. Porter, and T. Drummond, “Faster and
better: a machine learning approach to corner detec-
tion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 105–119, 2010.
[8] B. Sırmacek and C. Unsalan, “A probabilistic framework
to detect buildings in aerial and satellite images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49,
no. 1, pp. 211–221, 2011.
[9] C. Lin and R. Nevatia, “Building detection and descrip-
tion from a single intensity image,” Computer Vision and Image Understanding, vol. 72, no. 2, pp. 101–121, 1998.