8/10/2019 AAIC Description
Applied Analysis Image Calibrator (AAIC): Automatic Retrieval of Ground
Reflectance from Spectral Imagery
R.L. Huguenin, M.S. Bouchard, C.A. Penney, E.A. Conlon, and G.R. Waddington
Applied Analysis Inc., Westford, MA
November 2013
Abstract
A spectral image processing application, AAIC, is described that automatically transforms spectral imagery from its native digital number (DN) units directly to pixel reflectance units. The process uses a physics-constrained image statistics approach that is entirely scene-based and requires no user inputs. Pixel reflectance values are nominally within 0.05 reflectance units of the actual ground directional spectral reflectance value (absolute band-averaged difference), based on tests with diverse images having targets with known field-measured reflectance spectra. A Calibration Confidence metric is automatically generated, estimating the probability that retrieved pixel reflectance values will be within the nominal accuracy range. The consistency of accuracy performance from image to image makes possible autonomous application and scene-to-scene reuse of spectral signatures and indices to detect specific materials of interest and monitor changes.
Introduction
We describe here a spectral image processing application, AAIC, for automatically transforming spectral imagery from its native digital number (DN) units to units of ground material reflectance. The process is a stand-alone application that is fully automatic, requiring no user interaction. Only the image file name is required as user input for most images. The sensor and data type are automatically retrieved from the image header, when available, while all the other needed information is derived automatically from the scene data alone.
The image-statistics approach eliminates dependence on user knowledge and judgment to correctly estimate input parameters and adjust them in accordance with scene-to-scene variations in environmental and acquisition conditions. As such, it not only significantly reduces the levels of user skill and experience normally required to achieve the accuracies of physics-constrained atmospheric correction models, but also reduces the potential for user errors and enhances scene-to-scene uniformity of accuracy
performance. Furthermore, since the results for any given scene are fully repeatable, it
allows investigators to reproduce or meaningfully extend the results reported previously
or by others for that scene.
Algorithm description
AAIC uses a physics-constrained image statistics approach, i.e., it employs algorithms that are fully traceable to standard atmospheric radiation transfer and sensor phenomenology in the reflective solar wavelength regime (350-2500 nm). Two scene-average n-band calibration spectra, ACF(n) and SCF(n), are derived from the image data. These calibration spectra relate the pixel Digital Number values for each spectral band, DN(n), to ground reflectance, Ref(n), according to the expression
DN(n) = ACF(n) + SCF(n)*Ref(n) (1)
DN(n) corresponds to the reported pixel intensity value in the raw Digital Number image in spectral band n. The two terms on the right-hand side of the expression correspond to the two principal radiance components that comprise the reported DN(n) spectrum for a cloud-free pixel. The first term, ACF(n), corresponds to the scene-average atmospherically scattered solar radiance component, expressed in DN units. The second term, SCF(n)*Ref(n), represents the ground radiance component expressed in DN units.
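Once ACF(n) and SCF(n) are known, Expression 1 can be inverted per band to recover reflectance, Ref(n) = (DN(n) - ACF(n)) / SCF(n). A minimal sketch of that inversion (the function name and array layout are illustrative, not AAIC's interface):

```python
import numpy as np

def dn_to_reflectance(dn_cube, acf, scf):
    """Invert Expression 1 per band: Ref(n) = (DN(n) - ACF(n)) / SCF(n).

    dn_cube: (rows, cols, n_bands) raw DN image
    acf, scf: length-n_bands calibration spectra, in DN units
    """
    return (dn_cube - acf) / scf

# Tiny illustration: one pixel, two bands, ACF = [100, 50], SCF = [2000, 1000].
dn = np.array([[[500.0, 250.0]]])
ref = dn_to_reflectance(dn, np.array([100.0, 50.0]), np.array([2000.0, 1000.0]))
# (500-100)/2000 = 0.20 and (250-50)/1000 = 0.20
```

Because ACF and SCF are scene-average spectra, the same two vectors transform every pixel in the image.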
The ACF(n) component represents the sum of the atmospherically scattered solar
radiation, A(n), and sensor spectral transfer function offset, O(n), contributions to image
pixel spectra:
ACF(n) = A(n) + O(n) (2)
O(n) corresponds to the DN(n) value for a sensor dark field (no incoming radiance). A(n) includes the scene-average atmospherically backscattered component of incident solar radiation, expressed in DN units. ACF(n) can be viewed as simulating the pixel spectrum of a deployed black calibration panel (no ground contribution to the pixel spectrum).
The second term in Expression 1 is a product of two terms, SCF(n) and the desired ground directional spectral reflectance term, Ref(n). SCF(n) represents the product of the sensor spectral transfer function gain factors, G(n), and the scene-average spectrum of the two-path atmospherically attenuated incident and reflected solar radiance, S(n):
SCF(n) = G(n)*S(n) (3)
G(n) corresponds to the DN(n) value for a sensor flat field minus the dark field value, O(n). S(n) represents the scene-average integrated two-path atmospheric attenuation of the incident and emerging (reflected in the direction of the sensor) solar radiance. SCF(n)
then selected. From these four spectra, the Phase I approximations of ACF(n) and SCF(n)
for the image of interest are calculated.
The Phase I process makes the assumption that the relatively bright matched pair of DN(n) spectra have common reflectance spectra, i.e., Ref_bA = Ref_bB. Similarly, for the relatively dark matched pair, the process assumes Ref_dA = Ref_dB. Here A and B refer to the reference image and the image of interest, respectively, and b and d refer to the relatively bright and relatively dark background reference spectra, respectively. SCF_A and ACF_A (reference image) are known accurately, while SCF_B and ACF_B (image of interest) are the unknowns solved for. Following this initial (Phase I) estimate, ACF_B(n) and SCF_B(n) are progressively iterated using refined matches with a large number of matched pairs and a progression based on the ARAD(n) and SUN(n) generated by each prior iteration. The iterations continue until ACF_B(n) and SCF_B(n) reach convergence, yielding an intermediate (Phase II) estimate of ACF(n) and SCF(n) for the image of interest.
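Under the Phase I assumptions, each spectral band reduces to two linear equations (bright and dark pair) in the two per-band unknowns ACF_B and SCF_B. A toy sketch of that per-band solve, directly from the stated relations (a synthetic illustration, not the AAIC implementation):

```python
def phase1_solve(dn_bA, dn_dA, dn_bB, dn_dB, acf_A, scf_A):
    """Estimate ACF_B, SCF_B for one band from a bright (b) and dark (d)
    matched pair, assuming Ref_bA = Ref_bB and Ref_dA = Ref_dB."""
    ref_b = (dn_bA - acf_A) / scf_A          # bright reflectance from reference image
    ref_d = (dn_dA - acf_A) / scf_A          # dark reflectance from reference image
    scf_B = (dn_bB - dn_dB) / (ref_b - ref_d)
    acf_B = dn_bB - scf_B * ref_b            # from DN_bB = ACF_B + SCF_B * Ref_b
    return acf_B, scf_B

# Synthetic check: reference image has ACF_A = 80, SCF_A = 2000; the image of
# interest has (unknown) ACF_B = 60, SCF_B = 1500; Ref_b = 0.40, Ref_d = 0.05.
acf_B, scf_B = phase1_solve(dn_bA=880.0, dn_dA=180.0,
                            dn_bB=660.0, dn_dB=135.0,
                            acf_A=80.0, scf_A=2000.0)
# recovers acf_B = 60.0, scf_B = 1500.0
```

The Phase II iterations refine exactly these quantities using many matched pairs rather than a single bright/dark pair.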
The final (Phase III) estimate of ACF(n) and SCF(n) for the image of interest uses an automated subpixel process. The Phase III process uses a set of probe signatures derived from the reference images. These signatures are derived from multiple reference scenes, containing a broad cross-section of land cover materials. The probe signatures are transformations of the reference image ground cover spectra, enabling the detection of subpixel occurrences of those materials in the image of interest. The corresponding signatures in the image of interest are derived automatically using a process based on the Adaptive Signature Kernel (ASK) application (Huguenin et al., 1998). ASK derives a new child signature from subpixel detections of the probe signature in the image of interest. It does this by clustering the detected subpixel occurrences according to spectral similarity using an approach modeled after that employed by ISODATA, and it computes means and standard deviations of the clusters. Using criteria based on the populations and standard deviations of the clusters, the best cluster is identified and its mean spectrum becomes the child signature. ASK then artificially inserts the signature into the image over a range of subpixel fractions and background types, and the detections of these "doped" occurrences are used to derive the signature tolerances. Sets of matched probe-child signatures are then used to solve for ACF_B(n) and SCF_B(n) in the same way as for Phases I and II.
This Phase III subpixel step can significantly improve accuracy and image-to-image robustness over that of the Phase II whole-pixel step. This is due in part to the fact that the ground cover spectra represented by the probe signatures typically correspond to naturally associated mixtures of ground covers, rather than pure materials. The spectra of these naturally associated mixtures can vary from pixel to pixel and image to image due to natural variations in the relative fractions of constituent components present in the mixtures. These variations can cause the pixel spectra to deviate, potentially significantly, from the probe signature spectrum. The subpixel step is able to retrieve the spectrum of the particular set of constituent fractions to which the probe signature corresponds by identifying and suppressing the spectral contributions from the excess constituent
fractions in each pixel. It also effectively suppresses differences introduced by regional variations in the inherent spectral properties of the individual constituents of the mixture (e.g., oaks versus maples). This latter effect contributes significantly to the observed robustness of the process across a broad diversity of geographical and environmental settings. Finally, the subpixel step identifies and suppresses contributions from extraneous background components in the pixel that can frequently mask the signature of the target ground cover material. The net result is a significantly improved set of spectral matches to the signature spectra across scenes.
Errors in calculations of ACF(n) and SCF(n) due to scene-to-scene material differences between the probe and child signatures are suppressed through 1) clustering of the retrieved ACF(n) and SCF(n); 2) identifying the best cluster (by population, diversity of source signatures in the cluster, and low standard deviation); and 3) averaging the derived ACF(n) and SCF(n) values within the cluster. The best error suppression and adherence to the underlying phenomenology occur when clusters contain ACF(n) and SCF(n) values that are common to probe-child signature pairs across multiple signature materials.
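Steps 1-3 can be sketched as selecting and averaging the winning cluster; the scoring formula below is an illustrative stand-in for AAIC's population/diversity/standard-deviation criteria, not the actual algorithm:

```python
import numpy as np

def best_cluster(clusters):
    """Pick the cluster of retrieved ACF/SCF spectra that best balances
    population, diversity of source signatures, and low spread, then
    return its average. Each cluster: {'values': list of spectra,
    'sources': list of contributing signature ids}."""
    def score(c):
        vals = np.asarray(c['values'], dtype=float)
        population = len(vals)
        diversity = len(set(c['sources']))
        spread = float(np.mean(np.std(vals, axis=0)))
        return population * diversity / (1.0 + spread)
    winner = max(clusters, key=score)
    return np.mean(np.asarray(winner['values'], dtype=float), axis=0)

clusters = [
    {'values': [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]], 'sources': ['veg', 'soil', 'road']},
    {'values': [[5.0, 5.0]], 'sources': ['veg']},   # sparse outlier cluster
]
acf_scf_estimate = best_cluster(clusters)   # averages the tight, diverse cluster
```

The averaging in step 3 is what suppresses residual per-signature errors, since values common across multiple signature materials dominate the mean.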
After completing Phase III, the process reports an internally generated performance metric that serves as a first-order estimate of image quality and calibration accuracy. The Calibration Confidence metric is based on the spectral similarity (absolute mean spectral difference) of the specific probe signatures, Probe_p(n), used for calculating ACF(n) and SCF(n) to the child signatures, Child_p(n), generated by those probe signatures. Here p is the probe signature identifier and n is the spectral band number. With Probe_p(n) and Child_p(n) in reflectance units,
Calibration Confidence = (1.0 − meanDiff) x 100%, (5)
meanDiff = Σ_p absDiff_p / P
absDiff_p = abs[ Σ_n (Probe_p(n) − Child_p(n)) / N ]
P = number of probe signatures, N = number of spectral bands
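Expression 5 translates directly into code; a sketch assuming the matched probe and child spectra are stacked as (P, N) arrays in reflectance units:

```python
import numpy as np

def calibration_confidence(probes, children):
    """Expression 5: Calibration Confidence = (1 - meanDiff) x 100%,
    where meanDiff averages absDiff_p = |band-averaged probe-minus-child
    difference| over the P probe signatures."""
    probes = np.asarray(probes, dtype=float)
    children = np.asarray(children, dtype=float)
    abs_diff = np.abs(np.mean(probes - children, axis=1))   # absDiff_p, length P
    return (1.0 - np.mean(abs_diff)) * 100.0

# Two probe-child pairs, two bands each: absDiff = [0.10, 0.00] -> 95.0%
conf = calibration_confidence([[0.30, 0.50], [0.20, 0.20]],
                              [[0.25, 0.15]][0:0] or [[0.20, 0.40], [0.25, 0.15]])
```

Note that the band average is taken before the absolute value, so band-to-band differences of opposite sign can cancel within a single probe, as in the second pair above.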
Calibration Confidence is a measure of the goodness of the spectral match of the probe materials to the actual materials in the image of interest, thereby providing an estimate of the likelihood that materials spectrally similar to the probe materials were indeed in the image and that an accurate calibration could be generated. As long as there are at least three good matches between the probe and scene materials, the process was designed to accurately retrieve ACF(n) and SCF(n) even in scenes with a low diversity of cover characteristics.
Although the approach is relatively insensitive to image-to-image variations in scene content, not all images would be expected to produce high-accuracy calibrations. Images with temporal artifacts (discussed below), atmospheric dust palls, non-uniform
haze, or other image quality aberrations can sometimes cause spectral matches between the probe and image materials to degrade to the point that calibration accuracies drop below expected levels. Although these aberrations do not necessarily lead to poor calibrations, they can significantly lower the probability of achieving an accurate calibration. As discussed below, data suggest that images with significant quality problems might be expected to report Calibration Confidence values below a nominal threshold value. Accepting only those images whose Calibration Confidence values exceed the threshold provides a capability for ensuring that those images were of high enough quality that the derivation of ACF(n) and SCF(n) properly adhered to the underlying phenomenology throughout the processing sequence.
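In operation, this screening rule amounts to a one-line acceptance test (the 80% threshold is discussed later in the text; the function name is illustrative):

```python
def accept_calibration(confidence_pct, threshold=80.0):
    """Image-quality screen: accept a calibration only if its reported
    Calibration Confidence meets the nominal threshold."""
    return confidence_pct >= threshold

accept_calibration(88.5)   # accepted
accept_calibration(72.0)   # rejected
```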
An illustration of calibration results for an airborne AVIRIS hyperspectral image of the Stennis Space Center (SSC), MS is shown in Figures 1-4. In Figure 1 is shown the average DN(n) spectrum (184 bands) of the image pixels comprising a small grassy area of interest (AOI) selected by NASA as one of several field data sites in the SSC Target Field (Holekamp, 2004). In Figure 2 are shown the calibration spectra, ACF(n) and SCF(n), retrieved by AAIC from the image, and used to automatically transform the image to units of reflectance (Equation 4). In Figure 3 is shown the resultant average Ref(n) spectrum (184 bands) of the grassy AOI, retrieved from the derived reflectance image.
Comparison of the spectrum in Figure 3 to the one in Figure 1 reveals the extent of the transformation from DN(n) to Ref(n). Also shown in Figure 3 is the field-measured reflectance spectrum (labeled GT) provided by NASA for this AOI. Comparison of the two spectra reveals good general agreement, with the greatest differences occurring in the 800-1500 nm region. The latter differences are likely due at least in part to the non-uniform cover characteristics and different coverage included in the larger image AOI versus those of the field measurements. Even with these differences, however, the mean (band-averaged) absolute difference between the two spectra is only .01376 reflectance units (0.0-1.0 reflectance scale), and the correlation coefficient (0.9978) between the two spectra is quite high. The reported Calibration Confidence value for the image was 88.5%.
Figure 1. Average DN(n) spectrum (184 bands) of the image pixels comprising a grassy area of interest (AOI) designated by NASA as one of several field data sites in the AVIRIS image cube of the Stennis Space Center, MS, acquired on 10/27/1998.

Figure 2. ACF(n) (left) and SCF(n) (right) spectra automatically retrieved by AAIC from the AVIRIS image cube of the Stennis Space Center, MS, acquired on 10/27/1998.
[Figure 1 plot: DN (0-4500) versus wavelength, 400-2400 nm. Figure 2 plots: ACF in DN (0-1000) and SCF in DN (0-45000) versus spectral band, 1-181.]
Figure 3. Comparison of the image-derived and field-measured reflectance spectra of the same grassy AOI for which the raw data spectrum is shown in Figure 1. The image-derived spectrum is the dotted curve, and the field-measured spectrum is the solid curve.
Quantitative Assessment of Reflectance Accuracy
Quantitative accuracy of the reflectance images produced by AAIC was assessed using a suite of images containing deployed panels and other ground materials having known field-measured spectra. In addition to the AVIRIS hyperspectral image of Stennis Space Center, MS discussed above, there were four HYDICE hyperspectral images (Desert Radiance II Runs 03 and 13 of the Yuma Proving Ground, AZ, and Littoral Radiance II Runs 28 and 47 of Red Beach and Camp Pendleton, CA, respectively); IKONOS and QuickBird 4-band multispectral images of the Stennis Space Center Target Field; and a RapidEye 5-band multispectral image of the Railroad Valley Playa Test Site, NV. The HYDICE images each contained 4-7 deployed reflectance calibration panels spanning from low reflectance values (.02-.04) to high reflectance values (.60-.64). The AVIRIS, IKONOS, and QuickBird images contained a variety of background materials, and the RapidEye image contained two natural playa areas at the Railroad Valley site. The images spanned a relatively wide range of representative land cover conditions. The accuracy was measured in terms of spectral mean absolute reflectance difference (average across spectral bands) between the image-derived and field-measured spectra for each panel or ground material in the image. The difference values for the suite of panels or background
materials in an image were then averaged to produce a mean absolute reflectance
difference value for the image.
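The two-stage averaging just described (over bands per target, then over targets per image) can be sketched as follows; the array names and layout are assumptions:

```python
import numpy as np

def image_mean_abs_difference(derived, measured):
    """Accuracy metric from the text: for each panel or ground material,
    take the mean over bands of |image-derived - field-measured|
    reflectance, then average those per-target values across the image.
    derived, measured: (n_targets, n_bands) arrays in reflectance units."""
    per_target = np.mean(np.abs(np.asarray(derived) - np.asarray(measured)), axis=1)
    return float(per_target.mean())

derived = [[0.10, 0.20], [0.50, 0.50]]
measured = [[0.12, 0.18], [0.50, 0.54]]
img_diff = image_mean_abs_difference(derived, measured)   # -> 0.02
```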
Note that the results in Table 1 for the RapidEye Railroad Valley Playa image were not based on use of the full image for calibration. We used a subset that included only the southern half of the image to retrieve ACF(n) and SCF(n), which were then applied to the entire DN(n) image to obtain the Ref(n) image using Equation 4. This was done to avoid anomalous temporal spectral artifacts in the northern part of the image, produced as a consequence of RapidEye's focal plane configuration. The images acquired in the different spectral bands by RapidEye are spatially offset in the focal plane, causing them to be temporally offset from one another. Pixels containing scene features that were moving relative to the background terrain at the time of image acquisition were not properly corrected for sensor motion, which adversely impacted their spectral integrity by superimposing spectral band values from multiple locations. The sources of these temporal spectral artifacts include moving atmospheric features, such as clouds, plumes, flying aircraft, and contrails; moving aquatic features, such as waves and boats; and dynamic land features, such as moving vehicles. There were enough moving cloud pixels in the northern part of the Railroad Valley Playa image to corrupt the calibration, so the calibration was based instead on the subset (southern half) containing no clouds.
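The subset-calibration workaround can be sketched as follows, with `solve_acf_scf` standing in for the AAIC retrieval (a hypothetical callable, not a real AAIC API):

```python
import numpy as np

def calibrate_from_subset(dn_cube, clean_rows, solve_acf_scf):
    """Derive ACF(n)/SCF(n) from an artifact-free spatial subset (e.g.
    the cloud-free southern half), then apply them to the full cube."""
    acf, scf = solve_acf_scf(dn_cube[clean_rows])   # retrieval on subset only
    return (dn_cube - acf) / scf                    # Ref(n) for every pixel

# Toy retrieval that returns fixed calibration spectra for a 2-band cube.
toy_solver = lambda cube: (np.array([100.0, 50.0]), np.array([2000.0, 1000.0]))
dn_cube = np.full((4, 3, 2), [500.0, 250.0])        # 4 rows x 3 cols x 2 bands
ref = calibrate_from_subset(dn_cube, slice(2, 4), toy_solver)
```

This works because ACF(n) and SCF(n) are scene-average quantities: once derived from any representative, artifact-free region, they apply to every pixel.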
A similar temporal spectral artifact problem can arise with eight-band WorldView2 imagery. This sensor contains two sets of detector arrays in the focal plane. One set is the same as used for the four-band WorldView2 imagery. The second set is spatially offset from the first in a manner that leads to anomalous temporal spectral artifacts similar to the ones produced by the RapidEye sensor. The AAIC algorithm was specially modified to handle the temporal offset in the eight-band WorldView2 imagery. The two WorldView2 image band sets are processed independently, and the two reflectance images are merged into a single eight-band reflectance image to compensate for the temporal offset. For this manuscript, we were unable to locate an eight-band WorldView2 image of an area containing areas with companion field-measured spectra. To illustrate the algorithmic approach used by AAIC, images were simulated for both a four-band and an eight-band configuration using subsetted band sets from the HYDICE hyperspectral Run 47 image at the published WorldView2 band passes. AAIC was used to process both images, and we calculated the spectral mean absolute reflectance differences for the set of panels in each. The resultant difference and Calibration Confidence values are shown in Table 1. Although AAIC reports Calibration Confidence values for each of the two four-band subsets of the eight-band WorldView2 imagery, we report the simple average of the two confidence values in Table 1.

The mean absolute difference values for the 14 images in Table 1 ranged from .01376 to .05050, with a mean of .02916 and standard deviation of .00948. Although the sample set is relatively small, the values are relatively uniformly distributed about the mean. This suggests that the pixel spectral mean (band-averaged) absolute reflectance values for ~95% (mean ± 2 standard deviations) of images with comparable Calibration
Confidence values would be expected to be within 0.05 reflectance units of the actual
ground spectral mean reflectance.
The corresponding Calibration Confidence values for the images in Table 1 had a mean and standard deviation of 89.26% and 4.19%, respectively. The correlation coefficient between the errors and Calibration Confidence values was only 0.4089, however, confirming that Calibration Confidence is not an effective predictor of relative scene-to-scene accuracy. Rather, it serves only as a predictor of the probability that the accuracy standard (spectral mean calculated reflectance within 0.05 of actual mean reflectance) was met.
Table 1.
Sensor Location Bands *Difference Cal. Conf.
AVIRIS Stennis Space Center, MS 184 .01376 88.55
HYDICE Yuma Proving Ground, AZ, Run 03 180 .01699 81.41
HYDICE Yuma Proving Ground, AZ, Run 13 180 .03669 83.67
HYDICE Red Beach, CA, Run 28 156 .02749 85.93
HYDICE Camp Pendleton, CA, Run 47 180 .03079 83.88
Landsat TM5 Railroad Valley Playa, NV (6/03/00) 6 .02679 94.00
Landsat TM5 Railroad Valley Playa, NV (6/19/00) 6 .03418 93.51
Landsat TM7 Railroad Valley Playa, NV (6/11/00) 6 .02363 91.37
**RapidEye Railroad Valley Playa, NV 5 .02805 88.67
QuickBird Stennis Space Center, MS 4 .01786 89.05
IKONOS Stennis Space Center, MS 4 .03603 90.61
WorldView2 (4-band) Railroad Valley Playa, NV 4 .05050 94.89
***WorldView2 (4-band) Simulated from Run 47 4 .03404 91.75
***WorldView2 (8-band) Simulated from Run 47 8 .03140 92.30
Mean .02916 89.26
Standard Deviation .00948 4.19
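The summary statistics in Table 1, and the two-standard-deviation ranges quoted in the surrounding text, can be reproduced directly from the two columns:

```python
import numpy as np

# *Difference and Cal. Conf. columns of Table 1 (14 images, in row order).
diffs = np.array([.01376, .01699, .03669, .02749, .03079, .02679, .03418,
                  .02363, .02805, .01786, .03603, .05050, .03404, .03140])
confs = np.array([88.55, 81.41, 83.67, 85.93, 83.88, 94.00, 93.51,
                  91.37, 88.67, 89.05, 90.61, 94.89, 91.75, 92.30])

print(round(diffs.mean(), 5), round(diffs.std(ddof=1), 5))   # 0.02916 0.00948
print(round(confs.mean(), 2), round(confs.std(ddof=1), 2))   # 89.26 4.19
# mean + 2 sd of the differences stays under the 0.05 accuracy standard:
print(diffs.mean() + 2 * diffs.std(ddof=1) < 0.05)           # True
```

Matching the reported values requires the sample standard deviation (`ddof=1`), which is the appropriate estimator for this small sample.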
As discussed above, to achieve the accuracy standard, image quality needs to be high enough that at least some good matches will occur between the probe and scene materials so that the process can retrieve accurate ACF(n) and SCF(n) values. Since Calibration Confidence is an indicator of the quality of the matches, this suggests that there may be a threshold for Calibration Confidence below which it is unlikely that the accuracy standard can be met. As for the mean absolute reflectance difference values, the Calibration Confidence values in Table 1 are relatively evenly spread about the mean,
suggesting that a nominal operating range for Calibration Confidence can be estimated based on the mean and standard deviation, namely 80.9-97.6% (mean ± 2 standard deviations) for 95% of the images. Outside of this range, there is an increased likelihood that image quality is inadequate to achieve a reliably convergent estimate of ACF(n) and/or SCF(n) and meet the accuracy standard. This is supported by numerous observations (see below), indicating that Calibration Confidence can serve as an effective image quality screen. In particular, by accepting only images having Calibration Confidence values above a threshold value of 80%, the likelihood significantly increases that the image-retrieved reflectance values will meet the nominal accuracy standard.
Comparisons of the image-derived and field-measured spectra of the reference materials for several of the images listed in Table 1 are shown in Figures 4-7 for illustration. These plots provide examples of the kinds of spectral errors that are represented by the mean absolute reflectance difference values in Table 1. In Figure 4 are shown plots of the image-derived versus field-measured spectra for the five deployed reflectance reference panels in the 26 June 1995 HYDICE hyperspectral image of a site in the Castle Dome area of the US Army Yuma Proving Ground in Arizona (Run 03, Football Field Target Array, HYMSMO Desert Radiance II data collect). The labels refer to AAIC-derived and field-measured spectra for the 2%, 12%, 24%, 36% and 48% reflectance reference panels. The image-retrieved spectra are averages of pixel reflectance spectra for areas of interest (AOI) within the interiors of the panels, excluding mixed pixels along the panel edges. The field spectra were measured by the US Army Topographic Engineering Center using a GER Field Spectroradiometer (350-2500 nm) in a nadir viewing position on axis with the sun (Evans et al., 1995). Field spectra were collected at three different sample areas within each panel, averaged, and spectral reflectance was computed relative to a Spectralon reflectance standard. In general there was good agreement between the image-retrieved and field-measured spectra with respect to both spectral shape and intensity. The differences for the 36% panel appear anomalous, however, suggesting that the average field spectrum for that panel may be less representative than for the other panels. The spectral absolute mean reflectance difference value listed for this image in Table 1 was relatively low (.01699), even though the Calibration Confidence value (81.41%) was close to the nominal threshold value.
Figure 4. Comparison of the image-derived and field-measured reflectance spectra of five reflectance calibration panels deployed during Run 03 of the Desert Radiance II exercise at the Yuma Proving Ground on 26 June 1995. The airborne HYDICE hyperspectral image-derived spectra are shown as dotted curves, and the field-measured spectra are shown as solid curves.
In Figure 5 are shown the results for the 5-band RapidEye image of the Railroad Valley Playa, NV site. Here the material of interest is a natural playa deposit. Although field-measured reflectance spectra were acquired at the time of overpass (Naughton et al., 2011), they were not available for the present study. As a substitute we used the vicarious measurements acquired by the University of Arizona Optical Sciences Center (7 June and 10 June 2000). The image was acquired on 15 October 2009, which was a different season and year, and the playa had likely undergone significant changes over the intervening interval. Thome (2001) described the test site and described separately (Thome, private communication, 2012) its history of spectral variability with changing environmental conditions (standing water, erosional redistribution of salts, soil moisture levels, etc.). Consequently, comparison of the image-retrieved pixel spectra to vicarious field-measured spectra as a means of quantitatively assessing calibration accuracy at the Railroad Valley playa and similar sites needs to be done with extraordinary caution. The expected differences in soil conditions can explain much of the spectral difference between spectra in Figure 5. The spectral mean absolute reflectance difference (.02805)
[Figure 4 plot: reflectance (0.0-1.0) versus wavelength, 400-2400 nm, with field-measured (Field) and AAIC-derived (IC) curves for the 2%, 12%, 24%, 36%, and 48% panels.]
and Calibration Confidence value (88.67%) were both relatively close to the mean of the values in Table 1, however.

In Figure 6 are shown the results for another image of the Railroad Valley Playa site, this one measured by the WorldView2 (4-band) sensor. This image had a comparatively large reported spectral mean absolute difference (.05050) relative to the field measurement, even though it had a relatively high Calibration Confidence value (94.89%). The flattening and higher overall reflectance of the image-retrieved spectrum relative to the field-measured spectrum is consistent with the expected differences in playa conditions (dryness and increased amount of exposed salt) on the image date (21 October 2011) versus the field measurement date (10 June 2000).
Figure 5. Comparison of the image-retrieved and field-measured reflectance spectra of the Railroad Valley Playa, NV, measured by the commercial 5-band RapidEye satellite sensor on 15 October 2009.
In Figure 7 are shown the image-retrieved versus field-measured spectra for the same grassy AOI shown in Figure 3, but this time from a 4-band QuickBird image of the Stennis Space Center test site, acquired on 10 January 2004. Note that the reported spectral mean absolute reflectance difference (.01786) and Calibration Confidence (90.61%) are generally similar to the AVIRIS hyperspectral image values for this site,
[Figure 5 plot: reflectance (0.0-1.0) versus wavelength, 450-850 nm, with field-measured (Field) and image-retrieved (IC) playa spectra.]
even though the sensor and platform were very different and the grass is susceptible to
seasonal variability.
Figure 6. Comparison of the image-derived and field-measured reflectance spectra of the Railroad Valley Playa, NV, measured by the commercial 4-band WorldView2 satellite sensor on 21 October 2011.

Figure 7. Comparison of the image-derived and field-measured reflectance spectra of the Stennis Space Center, MS test site measured by the commercial 4-band QuickBird satellite sensor on 10 January 2004.
[Figure 6 and Figure 7 plots: reflectance (0.0-1.0) versus wavelength, 450-850 nm, with field-measured (Field) and image-derived (IC) curves.]
Figure 9. Aerial (left) and ground (right) photographs of the blue tarps at the Lynnhaven Bay, VA construction site. The photos were acquired a few days prior to the image in Figure 8 (photos courtesy of Google Earth).
To illustrate the stability of the process between images having varying scene content, we applied the same signature to a second image measured by the same sensor only one second after the image in Figure 8, and it included the identical set of blue tarp targets. The two images are shown in Figure 10. The yellow arrows indicate the location of the construction site. The scene content of the earlier image (left) is dominated by ocean, while the later one (right) is dominated by land with different land cover content than the one on the left. Furthermore, the land portion in the earlier image is dominantly in a coastal setting, while in the later image the land portion had more inland content. These differences were manifested in the differences in the derived AAIC correction spectra shown in Table 2. The extent of scattering (ACF) was somewhat higher in the earlier image, while the path attenuation fraction (SCF) was lower, consistent with the higher atmospheric aerosol content in the coastal versus inland environments for the land portions of the two images sampled by AAIC. The Calibration Confidence values were essentially the same. In Figure 11 we compare the blue tarp detection results for the two images using the same signature, same difference metric (Spectral Angle Mapper), and same signature tolerance value (0.30). The detection results were essentially unchanged between the two images, illustrating the stability of the process under the varying scene content conditions used by AAIC for deriving the transformation to reflectance.
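The detection step uses the Spectral Angle Mapper metric named above; a minimal sketch of signature matching against reflectance pixel spectra (the function name and data layout are illustrative, and the signature values are invented for the example):

```python
import numpy as np

def sam_detect(pixels, signature, tolerance=0.30):
    """Spectral Angle Mapper: flag pixels whose spectral angle (radians)
    to the signature is within the tolerance.
    pixels: (n_pixels, n_bands); signature: (n_bands,)."""
    pixels = np.asarray(pixels, dtype=float)
    sig = np.asarray(signature, dtype=float)
    cos = pixels @ sig / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(sig))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return angles <= tolerance

sig = [0.10, 0.20, 0.60, 0.30]                    # hypothetical 4-band signature
pixels = [[0.15, 0.30, 0.90, 0.45],               # same shape, brighter: angle 0
          [0.60, 0.30, 0.05, 0.05]]               # dissimilar spectrum
hits = sam_detect(pixels, sig)                    # -> [True, False]
```

Because SAM compares only spectral shape and ignores overall brightness, a signature expressed in reflectance units ports across scenes and sensors once the scene-dependent ACF/SCF terms have been removed.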
Figure 10. WorldView2 images containing the Lynnhaven Bay, VA construction site
(yellow arrow). The image on the left was acquired on 9 January 2011 at 15:57:39 GMT,
and the image on the right was acquired one second later.
Table 2
Band 1 Band 2 Band 3 Band 4
Earlier Image
ACF = 261.695 192.800 42.606 54.297
SCF = 2436.213 3609.275 2335.121 2410.562
Later Image
ACF = 244.424 153.838 37.187 51.386
SCF = 3670.382 4897.893 2630.227 2022.749
Figure 11. The blue tarp detection results, highlighted in green, for the WorldView2 image of Lynnhaven Bay, VA acquired 9 Jan 2011 at 15:57:39 GMT (Figure 10, left) are shown on the left. The results for the companion WorldView2 image acquired one second later (Figure 10, right) are shown on the right.
We next processed a third image of the same construction site, this time measured by a different sensor (GeoEye) and several months later (12 May 2011). By this date much of the construction that was underway in January had been completed, and only a few small tarps remained in the north corner of the site. The same blue tarp signature and Spectral Angle Mapper distance metric were used, but this time the signature tolerance needed to be increased to 0.38 to detect the tarps in the north corner. The detection results (highlighted in green) are shown in Figure 12 (left). The yellow arrow indicates the location of the known tarps, and the viewport in the right image confirms that it was the persistent tarps that were detected. Other apparent tarp occurrences were detected in the scene, including two new ones on the roof of the newly erected structure at the construction site and several in the surrounding area associated with recreational pools. There are several possible reasons why a larger signature tolerance was required to detect the persistent tarps at the construction site in this image, including a possible error in reflectance accuracy produced by AAIC or a change in the signature properties of the tarps due to aging and/or development of a coating over time from dust or mold in the construction site environment, among others. Although there may possibly have been an error in reflectance accuracy, the reported Calibration Confidence value was relatively high (94.3%) and comparable to those of the other images. Furthermore, features detected at a tolerance of 0.30 had the same characteristic blue visual color as the tarps in the earlier images, and were recently deployed (not present in the earlier image). Although this supports the possibility that weathering may have been responsible for the higher required signature tolerance, it is currently unknown why the tolerance needed to be increased in this particular image.
Figure 12. Blue tarp detection results (left) for a GeoEye image of the Lynnhaven Bay,
VA construction site acquired on 12 May 2011, approximately five months after the
image in Figure 11. The portal (right) confirms the detection of blue tarps.
To further illustrate the scene-to-scene and sensor-to-sensor portability of the
signature enabled by AAIC, we repeated the exercise with a series of QuickBird images
of the Gulfport, MS area following Hurricane Katrina. In Figure 13 (left) we
show the signature-based detection results for a scene acquired on 6 September 2005,
only a week after the 29 August 2005 landfall. The same library signature, Spectral Angle
Mapper distance metric, and signature tolerance (0.30) as used for the Lynnhaven Bay
construction site were used for this image, and comparable detection accuracies were
achieved based on visual comparison to the image using the portal utility (Figure 13,
right). For the most part, the detections represented tarps deployed on damaged roofs.
Two additional QuickBird images of the Gulfport, MS area were acquired five weeks
later, both on 12 October 2005, each covering a different part of the city.
Again the same signature and tolerance values were used in both. Although there was a
significantly greater number of detected tarps in these two images than in the earlier
image, the detection accuracies of all three images appeared to be comparable based on a
detailed survey of the images using the portal utility. In Figure 14 we show a
comparison of the blue-tarp detection results for an area of Gulfport on the two dates,
illustrating the potential for quantitative multi-date change analysis enabled by the scene-
to-scene accuracy consistency of AAIC.
Figure 13. Detection of blue tarps in an image of Gulfport, MS acquired by the
QuickBird sensor on 6 September 2005 (left). The portal confirmed the detection of blue
tarps (right).
Figure 14. Multi-date comparison of blue tarp detections for Ward 2C of Gulfport, MS
(yellow boxes). The QuickBird image on the left was acquired only a week after landfall,
on 6 September 2005, and the QuickBird image on the right was acquired five weeks
later, on 12 October 2005.
Comparison to other processes.
As an autonomous in-scene derivation process that is physics-constrained, AAIC uses a
fundamentally different approach to atmospheric correction than those of other available
applications. Many of the other applications use predictive physics-based radiation
transfer models rather than in-scene data, and they require user estimation of input
parameters rather than operating autonomously. They include such applications as
ATCOR (Richter, 2011, 2012), which is currently available in Intergraph's ERDAS
IMAGINE; FLAASH (Adler-Golden et al., 1999), available in ITT Exelis ENVI; and the
stand-alone ACORN (AIG, 2001) and ATREM (Gao and Goetz, 1990; Gao et al., 1993;
CSES, 1999) applications. They are largely modeled after the US Air Force-
developed MODTRAN application (Berk et al., 1999), with which pixels are transformed
to units of spectral radiance rather than reflectance. The transformation is based on user
estimates of key environmental and encounter-geometry parameters for the scene being
processed. The primary differences between them are the methods of parameter
estimation and the user interface.
One physics-constrained application that is automated and uses an image statistics-
based approach is CORENV, an atmospheric correction process included as an integral
component of the commercially available ERDAS IMAGINE Subpixel Classifier
application (Huguenin et al., 1997, 1998). While CORENV uses a physics-constrained,
scene-based algorithmic approach similar to that of AAIC, it differs in that it was
designed specifically to optimize image-to-image portability of scene-derived spectral
signatures rather than to perform an in-scene correction to reflectance. Toward this end
CORENV employs a normalization step, not included in AAIC, to optimize the statistical
spectral match between the signature-source scene and a second scene of interest.
An application that is automated and image statistics-based is QUAC (Quick
Atmospheric Correction), which is available as an optional module in ITT Exelis ENVI
(Bernstein et al., 2006, 2008). QUAC uses an approach that is not physics-constrained,
but rather emulates the Empirical Line Method (ELM) (Roberts et al., 1985; Conel et al.,
1987) with added normalization. Like ELM, QUAC derives an offset and a gain spectrum
to apply to the image pixels to transform them to spectral reflectance units. Unlike ELM,
however, it does not start with known materials. Instead, QUAC first derives a
background spectrum from the darkest pixels in each band (e.g., from cloud shadows,
dark water, or vegetation). It subtracts this spectrum from the pixel spectra, and then
derives a gain spectrum using a set of end-member spectra. The end-member spectra are
selected to maximize diversity within each spectral band. A normalization step is then
employed, so that a gain spectrum can be derived through regression with a reference
library of reflectance spectra, treated as an artificial reference scene. The resultant gain
spectrum is then applied to the image pixels (with the background spectrum subtracted)
to convert them to spectral reflectance units.
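The ELM-style flow just described can be sketched in a simplified form. The darkest-pixel offset and the library-regressed gain below are hypothetical stand-ins for QUAC's actual end-member selection and normalization steps, which are considerably more involved:

```python
import numpy as np

def quac_like_correction(cube, ref_library, dark_fraction=0.01):
    """Simplified sketch of an ELM-style offset/gain correction.

    1. Offset: per-band mean of the darkest non-zero DN values
       (the background-spectrum step the text attributes to QUAC).
    2. Gain: ratio of the reference library's mean reflectance to the
       mean of the brightest per-band centered values (a stand-in for
       end-member selection plus regression against the library)."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(float)

    # 1. Background (offset) spectrum from the darkest non-zero pixels.
    offset = np.empty(bands)
    for b in range(bands):
        vals = flat[:, b][flat[:, b] > 0]
        k = max(1, int(dark_fraction * vals.size))
        offset[b] = np.sort(vals)[:k].mean()

    centered = flat - offset

    # 2. Gain spectrum from the brightest 5% of per-band values.
    n_bright = max(1, flat.shape[0] // 20)
    bright = np.sort(centered, axis=0)[-n_bright:]
    gain = ref_library.mean(axis=0) / bright.mean(axis=0)

    return (centered * gain).reshape(rows, cols, bands), offset, gain
```

The sketch makes the sensitivities discussed below concrete: the offset is only as good as the darkest pixels are representative, and the gain is only as good as the match between the scene's diversity and the reference library's.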
Functionally, QUAC shares some similarity to the AAIC process described here.
Its image-statistics approach, however, is not physics-constrained. This difference
introduces sensitivities to scene conditions that can significantly impact the robustness and
uniformity of accuracy performance. One of these sensitivities is to land cover diversity.
In particular, to derive its gain spectrum QUAC selects end-member spectra from the
image and applies a normalization that assumes that the level of land cover diversity in
the image being processed is comparable to that of QUAC's master reference library.
Deviations from the assumed land cover characteristics can significantly impact the
normalization and the resultant accuracy of the derived gain factor (Bernstein et al., 2006,
2008). The impact is particularly evident, for example, in images dominated by desert
terrain, densely forested terrain, water, and other low-diversity terrains. This was pointed
out and discussed by Bitelli and Mandanici (2010), who processed a hyperspectral image
of a low-diversity conservation area located in the Fayyum Oasis, Egypt. They reported
that QUAC produced pixel reflectance spectra that were significantly suppressed at
visible wavelengths and contained anomalous spikes in the near infrared.
Another sensitivity of the QUAC approach is to the inclusion of physically
unrepresentative dark pixels in the calculation of its background (offset) spectrum. The
background spectrum is subtracted from the image under the assumption that it is
representative of an additive offset component that is common to all the pixels. As
discussed above, the common additive offset includes both the sensor spectral offset
function and the atmospherically scattered solar radiation contribution (see Equation 2).
QUAC's use of the darkest non-zero image pixels for derivation of the background
spectrum can be problematic, since they do not always provide a physically valid estimate
of the common offset. Most images contain artifacts, particularly near the image boundaries,
which can be among the darkest non-zero pixels in the image but which are anomalous
and physically unrepresentative of the image offset spectrum. Images also frequently
contain cloud shadows, which can similarly be among the darkest pixels in the image, but
which exclude much of the atmospheric scattering component due to direct shading of the
densest part of the atmospheric column by the cloud. Still other images can be dominated
by bright materials having weak or missing shadows, so that the darkest pixels in those
images can be directly illuminated high-reflectance materials that are substantially
brighter than the common offset spectrum component. Subtraction of a background
spectrum derived from the darkest non-zero image pixels can thus significantly under-
estimate or over-estimate the common offset contribution to pixel radiance, potentially
degrading the accuracy of QUAC's derived gain spectrum and the resultant reflectance
image. This was illustrated by Chakravortty and Chakrabarti (2011), who reported a
pattern of errors, similar to those reported by Bitelli and Mandanici (2010), in a
QUAC-processed hyperspectral image of the Sunderban Biosphere Reserve of West
Bengal containing numerous clouds and cloud shadows.
AAIC's physics-constrained approach largely avoids these sensitivities and the
resultant errors. First, the method minimizes the chance that ACF(n) will be either an
over- or under-estimate of the offset through use of the simultaneous solution approach.
The approach derives offset, ACF(n), and gain, SCF(n), spectra that are mutually
compatible and common to a majority of pixels across the image. This ensures that
ACF(n) is minimally influenced by anomalies like cloud shadows and image artifacts,
and is physically realistic as an offset spectrum common to the image pixels, containing
both the relevant sensor function and atmospheric radiation transfer components. Second,
since the simultaneous solution approach works equally well with only two classes of
materials as it does with a large diversity of materials, it minimizes sensitivity to both
land cover characteristics and diversity. As a result, AAIC would be expected to be more
consistently accurate over a broader range of scene characteristics.
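The claim that two material classes suffice follows from the linear offset/gain form of the DN model: in each band, DN = SCF(n) · reflectance + ACF(n) is a line, and two (DN, reflectance) pairs determine it. A minimal sketch of this point (illustrative only; AAIC's simultaneous solution operates on image statistics rather than on known classes):

```python
import numpy as np

def solve_offset_gain(dn_a, dn_b, refl_a, refl_b):
    """Per-band solution of DN = SCF(n) * reflectance + ACF(n) from two
    material classes: two (DN, reflectance) pairs determine the line in
    each band.  Inputs are per-band spectra for classes a and b."""
    dn_a, dn_b = np.asarray(dn_a, float), np.asarray(dn_b, float)
    refl_a, refl_b = np.asarray(refl_a, float), np.asarray(refl_b, float)
    scf = (dn_a - dn_b) / (refl_a - refl_b)   # per-band gain (slope)
    acf = dn_a - scf * refl_a                 # per-band offset (intercept)
    return acf, scf
```

Additional classes over-determine the line, so extra diversity can refine the fit but is not required, which is why low-diversity scenes pose less of a problem for this approach.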
Discussion.
The findings here for AAIC indicate that reflectance accuracies are generally comparable
to those of predictive physics-based models like FLAASH. Unlike the predictive
models, however, the results are not dependent on the analyst's choice of input parameter
values. The image characteristics alone control AAIC's outcome, effectively eliminating
differences arising from variations in analysts' experience levels and choices of input
parameters. The results are fully repeatable for an image, regardless of who does the
processing. As such, AAIC may provide an effective and reliable substitute for these
more complex interactive models. Shifting this time-consuming and often problematic
atmospheric-correction step to the background and eliminating it from the analyst's
workflow can free the analyst to focus directly on the spectral analysis task
at hand. It can also significantly decrease project turn-around time. Since there
is no required operator in the loop with AAIC, the time to calibrate an image to units of
material reflectance is controlled primarily by hardware throughput. In contrast, the
predictive models require dedicated time for operator interactions and iterations, which
can often be a dominant component of the time to complete a project.
This ability to autonomously transform images to reflectance, without the need for
field measurements or other user inputs, could be of particular value to multi-temporal
classification and monitoring studies. The comparison of blue-tarp detection results in
Figure 14 illustrates this potential. Using the tarps as indicators of structural (roof)
damage, and using the number from the earlier date as a baseline, the difference in the
number of detected pixels could potentially be used to quantitatively estimate the areal
extent of roof damage in a sector of interest in the city. In this hypothetical illustration,
we selected the Ward 2C sector, for which there were 3594 detected pixels (32,844 m²)
of blue tarp material on the later date compared to 265 detected pixels (2,763 m²) on the
earlier date. Assuming some of the latter tarps to have been in place prior to landfall, and
possibly unrelated to the storm, the difference in the number of detected pixels between
the two dates (3329 pixels, or 30,081 m²) provides a conservative estimate of the
extent of structural damage inflicted by the storm in this sector of the city. This kind of
information, acquired soon after the event, can be of potentially high value to the
insurance industry to support early assessment and planning for spreading claims risk and
mitigating cost, as well as to relevant governmental and non-governmental organizations
to support relief and recovery planning.
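The differencing in this illustration reduces to simple arithmetic on the detection counts and areas:

```python
# Change-detection arithmetic from the Ward 2C illustration.
later_pixels, later_area_m2 = 3594, 32844      # 12 October 2005
earlier_pixels, earlier_area_m2 = 265, 2763    # 6 September 2005

delta_pixels = later_pixels - earlier_pixels       # 3329 detected pixels
delta_area_m2 = later_area_m2 - earlier_area_m2    # 30,081 square meters
```

Treating the earlier count as a baseline makes the estimate conservative: any pre-storm tarps present on both dates cancel out of the difference.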
This capability could also be of potentially high value to multi-temporal
classification and monitoring studies such as the reported assessments of
changes in agricultural production (e.g., Gibson et al., 2012; Vintrou et al., 2012), water
pp. 63622P-63625P.
Bernstein, L.S., S.M. Adler-Golden, R.L. Sundberg, and A.J. Ratkowski, 2008. In-scene-based atmospheric correction of uncalibrated VISible-SWIR (VIS-SWIR) hyper- and multispectral imagery, Proc. SPIE 7107, Remote Sensing of Clouds and the Atmosphere XIII, pp. 710706-710709.
Center for the Study of Earth from Space (CSES), 1999. Atmosphere Removal Program (ATREM), Version 3.1, User's Guide, University of Colorado, Boulder, 12 pp.
Chakravortty, S. and S. Chakrabarti, 2011. Preprocessing of hyperspectral data: A case study of Henry and Lothian Islands in Sunderban Region, West Bengal, India, International Journal of Geomatics and Geosciences, 2(2): 490-501.
Clark, R.N., G.A. Swayze, R. Wise, E. Livo, T. Hoefen, R. Kokaly, and S.J. Sutley, 2007. USGS digital spectral library splib06a: U.S. Geological Survey, Digital Data Series 231, http://speclab.cr.usgs.gov/spectral-lib.html.
Conel, J.E., R.O. Green, G. Vane, C.J. Bruegge, R.E. Alley, and B.J. Curtiss, 1987. Airborne Imaging Spectrometer-2: Radiometric spectral characteristics and comparison of ways to compensate for the atmosphere, Proceedings of SPIE, 834: 140-157.
Evans, T.D., J.P. Henley, and D. Del Bosque, 1995. Ground-Truth Data Field Collection in Support of Spectral Remote Sensing, US Army Topographic Engineering Center, pp. 11-13.
Gao, B., and A.F.H. Goetz, 1990. Column atmospheric water vapor and vegetation liquid water retrievals from airborne imaging spectrometer data, Journal of Geophysical Research, 95(D4): 3549-3564.
Gao, B.C., K.B. Heidebrecht, and A.F.H. Goetz, 1993. Derivation of scaled surface reflectances from AVIRIS data, Remote Sens. Environ., 44: 165-178.
Gibson, G.R., J.D. Campbell, and R.H. Wynne, 2012. Three decades of war and food insecurity in Iraq, Photogramm. Eng. & Rem. Sensing, 78(8): 885-895.
Heller, E., J.M. Rhemtulla, S. Lele, M. Kalacska, S. Badiger, R. Sengupta, and N. Ramankutty, 2012. Mapping crop types, irrigated areas, and cropping intensities in heterogeneous landscapes of Southern India using multi-temporal medium-resolution imagery: Implications for assessing water use in agriculture, Photogramm. Eng. & Rem. Sensing, 78(8): 815-828.
Holekamp, K., 2004. NASA Radiometric Characterization, High Spatial Resolution Commercial Imagery Workshop, Reston, VA, 9 November, 46 pp.
Huguenin, R.L., M.A. Karaska, D. Van Blaricom, and J.R. Jensen, 1997. Subpixel classification of Bald Cypress and Tupelo Gum trees in Thematic Mapper imagery, Photogramm. Eng. & Rem. Sensing, 63: 717-725.
Huguenin, R.L., M. Hwa Wang, M.A. Karaska, and K.E. Roberts, 1998. Automated scene-derived normalization of spectral imagery for subpixel classification, Proc. SPIE 3438, Imaging Spectrometry IV, pp. 157-161.
Kaplan, S., and S.W. Myint, 2012. Estimating irrigated agricultural water use through Landsat TM and a simplified surface energy balance modeling in the semi-arid environments of Arizona, Photogramm. Eng. & Rem. Sensing, 78(8): 849-860.
Liu, J., W. Zhu, and X. Cui, 2012. A shape-matching Cropping Index (CI) mapping method to determine agricultural cropland intensities in China using MODIS time-series data, Photogramm. Eng. & Rem. Sensing, 78(8): 829-838.
Naughton, D., A. Brunn, J. Czapla-Myers, S. Douglas, M. Thiele, H. Weichelt, and M. Oxfort, 2011. Absolute radiometric calibration of the RapidEye multispectral imager using the reflectance-based vicarious calibration method, J. Applied Remote Sens., 5(1): 053544.
Richter, R., 2011. Atmospheric/topographic correction for satellite imagery, ATCOR-2/3 User Guide, DLR IB 565-01/11, Wessling, Germany.
Richter, R., 2012. Atmospheric/topographic correction for airborne imagery, ATCOR-4 User Guide, DLR IB 565-02/12, Wessling, Germany.
Roberts, D.A., Y. Yamaguchi, and R.J.P. Lyon, 1985. Calibration of Airborne Imaging Spectrometer data to percent reflectance using field spectral measurements, 19th International Symposium on Remote Sensing of Environment, Ann Arbor, MI, pp. 679-688.
Romaguera, M., M.S. Krol, M.S. Salama, A.Y. Hoekstra, and Z. Su, 2012. Determining irrigated areas and quantifying blue water use in Europe using remote sensing Meteosat Second Generation (MSG) products and Global Land Data Assimilation System (GLDAS) data, Photogramm. Eng. & Rem. Sensing, 78(8): 861-874.
Thenkabail, P.S., J.W. Knox, M. Ozdogan, M.K. Gumma, R.G. Congalton, Z. Wu, C. Milesi, A. Finkral, M. Marshall, I. Mariotto, S. You, C. Giri, and P. Nagler, 2012. Assessing future risks to agricultural productivity, water resources and food security: How can remote sensing help?, Photogramm. Eng. & Rem. Sensing, 78(8): 773-782.
Thome, K. J., 2001. Radiometric calibration of IKONOS by University of Arizona
Remote Sensing Group, Proc. High Spatial Resolution Commercial Imagery Workshop, Greenbelt, MD, 19-21 March 2001, 26 pp.
Vintrou, E., M. Soumare, S. Bernard, A. Begue, C. Baron, and D. Lo Seen, 2012. Mapping fragmented agricultural systems in the Sudano-Sahelian environments of Africa using random forest and ensemble metrics of coarse resolution MODIS imagery, Photogramm. Eng. & Rem. Sensing, 78(8): 839-848.
Zhong, L., P. Gong, and G.S. Biging, 2012. Phenology-based crop classification algorithm and its implications on agricultural water use assessments in California's Central Valley, Photogramm. Eng. & Rem. Sensing, 78(8): 799-814.