
Development of a Field-Portable Imaging System for Scene Classification Using Multispectral Data Fusion Algorithms

Evan Preston, Tom Bergman, Ron Gorenflo, Dave Hermann,

Ed Kopala, Tom Kuzma, Larry Lazofson & Randy Orkis,

Battelle

ABSTRACT

Battelle scientists have assembled a reconfigurable multispectral imaging and classification system which can be taken into the field to support automated real-time target/background discrimination. The system may be used for a variety of applications including environmental remote sensing, industrial inspection and medical imaging. This paper discusses hard tactical target and runway detection applications performed with the multispectral system.

The Battelle-developed system consists of a passive, multispectral imaging electro-optical (EO) sensor suite and a real-time digital data collection and data fusion image processor. The EO sensor suite, able to collect imagery in 12 distinct wavebands from the ultraviolet (UV) through the long wave infrared (LWIR), consists of five charge-coupled device (CCD) cameras and two thermal IR imagers integrated on a common portable platform. The data collection and processing system consists of video switchers, recorders and a real-time sensor fusion/classification hardware system which combines any three input wavebands to perform real-time data fusion by applying "look-up tables," derived from tailored neural network algorithms, to classify the imaged scene pixel by pixel. The result is then visualized in a video format on a full color, 9-inch, active matrix Liquid Crystal Display (LCD).

Authors' Current Addresses: E. Preston, T. Bergman, R. Gorenflo, D. Hermann, E. Kopala, T. Kuzma, L. Lazofson and R. Orkis, Battelle, 505 King Avenue, Columbus, OH 43201-2693.

Based on a paper presented at NAECON in May 1994.

A variety of classification algorithms including artificial neural networks and data clustering techniques were successfully optimized to perform pixel-level classification of imagery in complex scenes comprised of tactical targets, buildings, roads, aircraft runways, and vegetation. Algorithms implemented included unsupervised maximum likelihood, Linde-Buzo-Gray, and "fuzzy" clustering algorithms along with Multilayer Perceptron and Learning Vector Quantization (LVQ) neural networks. Supervised clustering of the data was also evaluated. To assess classification robustness, algorithms were tested on imagery recorded over broad periods of time throughout the day. Results were excellent, indicating that scene classification is achievable despite temporal signature variations. Waveband saliency analyses were performed to determine which spectral bands contained the bulk of the discriminating information for discerning objects in the scenes. Optimized classification algorithms are then used to populate the look-up tables in the sensor fusion board for real-time use in the field.

Fig. 1. Battelle's Multispectral Imaging Sensor Suite

INTRODUCTION

Battelle has pursued client-driven internal research and development (IR&D) to study multi- and hyperspectral remote sensing of various man-made as well as naturally occurring targets and backgrounds. Having procured and assembled a rapidly reconfigurable multispectral sensor suite, Battelle has investigated waveband saliency for a variety of target sources. In addition, broadband versus narrowband phenomenology studies have been performed in support of Battelle's Air Force clients. The Battelle sensor suite currently images in 12 distinct spectral bands, including UV, blue, green, red, near infrared, 3 middle wave infrared (1 wide and 2 narrow), and 4 long wave infrared (1 wide and 3 narrow) bands. Additional bands can be devised through the use of filtering. Digital signal processing techniques have been employed along with artificial neural networks to merge any three bands as inputs and produce, in real time, a scene classification as a result. The system can be utilized in two basic operating modes: (1) in a data collection mode, producing NTSC video output as well as digital thermal imagery; and (2) in a real-time sensor fusion/processor mode, producing RS-170 RGB output. The sensor fusion board operates using a database look-up table derived from previously collected imagery processed off-line by neural networks in the lab (Lazofson and Kuzma, 1993).

Several field tests have been performed, with applications ranging from vegetation discrimination, hard target versus natural background determination, runway identification, and flare/smoke characterization to natural gas leak detection. This IR&D program is continuing into 1994, with the addition and integration of a van-based LIDAR system for automated aerosol detection and identification.
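To make the look-up-table approach described above concrete, the following sketch (an illustration, not the Battelle implementation) shows how a previously trained per-pixel classifier can be evaluated off-line over every possible quantized three-band input to populate a LUT, which is then applied to registered imagery with a single indexing operation per pixel. The 5-bit quantization, class count, and the stand-in classifier are assumptions made for the example.

```python
import numpy as np

BITS = 5                      # bits retained per band when forming the LUT index (assumed)
LEVELS = 1 << BITS            # 32 quantization levels per band
N_CLASSES = 5                 # e.g., target, building, road, tree, grass

def toy_classifier(pixels):
    """Stand-in for a trained neural network: maps (N, 3) band vectors
    in [0, 1] to integer class labels (purely for demonstration)."""
    scores = pixels @ np.array([0.2, 0.5, 0.3])
    return np.minimum((scores * N_CLASSES).astype(np.uint8), N_CLASSES - 1)

def build_lut(classifier):
    """Off-line step: evaluate the classifier once for every possible
    quantized 3-band input and store the results in a LEVELS^3 table."""
    grid = np.stack(np.meshgrid(*[np.arange(LEVELS)] * 3, indexing="ij"), axis=-1)
    flat = grid.reshape(-1, 3) / (LEVELS - 1)          # back to [0, 1]
    return classifier(flat).reshape(LEVELS, LEVELS, LEVELS)

def classify_frame(band_a, band_b, band_c, lut):
    """Real-time step: classify every pixel of three spatially registered
    8-bit band images with a single LUT gather."""
    return lut[band_a >> (8 - BITS), band_b >> (8 - BITS), band_c >> (8 - BITS)]

if __name__ == "__main__":
    lut = build_lut(toy_classifier)
    rng = np.random.default_rng(0)
    bands = [rng.integers(0, 256, (480, 640), dtype=np.uint8) for _ in range(3)]
    labels = classify_frame(*bands, lut)
    print(labels.shape, np.unique(labels))
```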

IMAGING SENSOR SYSTEM

The imaging system consists of a series of Charge-Coupled Device (CCD) cameras and specialized thermal cameras mounted together on a single, tripod-supported platform (Figure 1). In addition, the system contains high-fidelity S-VHS tape recorders with timebase correction as well as controllers for the thermal cameras.

Imaging in the ultraviolet through near infrared regions is accomplished using a series of CCD cameras, each fitted with appropriate waveband filters. For the UV region, specially designed quartz lenses are used to focus the energy. Each camera is built around a single silicon frame transfer CCD with 754 (horizontal) by 488 (vertical) active picture elements and a cell size of 11.5 micrometers (horizontal) by 27.0 micrometers (vertical) in an RS-170, 2:1 interlaced scanning system. The resolution is 565 TV lines (RS-170) horizontal and greater than 350 TV lines vertical. The cameras have a C-mount, 16 millimeter format lens mount capable of handling a variety of lenses. The video output is 1.0 volts peak-to-peak into a 75 ohm unbalanced load.


Imaging in the thermal infrared spectral region is accomplished using two distinct thermal imaging camera systems, custom manufactured for Battelle by the Mikron Instrument Company. These camera systems are modified versions of the Model TH1102 (3.0 - 5.3 microns) and the Model TH1101 (8.0-13.0 microns).

The MWIR imager (TH1102) consists of a liquid nitrogen cooled Indium Antimonide (InSb) detector capable of distinguishing objects at a temperature resolution of 0.2°C (for a blackbody at 30°C). This sensitivity improves to 0.05°C when using the frame averaging mode. The field of view is 30 degrees (horizontal) by 28.5 degrees (vertical) with a focal range of 20 centimeters to infinity and includes a built-in optical zoom of up to 5X, in steps of 0.1X horizontal and vertical. The imager, as shown in Figure 1, is equipped with an externally mounted telephoto lens capable of an additional 3X zoom. The system provides a horizontal resolution of 2 milliradians, which is equivalent to 260 lines. The imager is a slow scan system, with frame times of 1 second, 1/2 second, or 1/4 second. The overall measurement accuracy of the system is plus/minus one percent of the range at full scale.

With the spectral filters, the MWIR imager is capable of scanning energy in three distinct spectral bands. The first band is broad, recording energy between 3.0 - 5.3 microns. Two additional narrow spectral bands include 3.5 - 4.1 microns and 4.5 - 5.0 microns. Images are displayed on the built-in four-inch LCD display in either color or black and white, with options for 256, 128, 64, 32, or 16 hues or shades of gray. Some of the other display options include changing of the level and sensitivity in either run or freeze mode, isotherm display, waveform display, cursor display, multiple image display, image annotation and reverse image display. Signal output is provided in either RGB analog video, NTSC color, monochromatic video or digitally through a GP-IB interface. Images can then be stored either digitally (8 bit) on the built-in 3.5 inch floppy disk drive or recorded on video cassette.

The LWIR imager (TH1101) consists of a liquid nitrogen cooled Mercury Cadmium Telluride (HgCdTe) detector capable of distinguishing objects at a temperature resolution of 0.1°C (for a blackbody at 30°C). This sensitivity improves to 0.025°C when using the frame averaging mode. The field of view is 30 degrees (horizontal) by 28.5 degrees (vertical) with a focal range of 20 centimeters to infinity and includes a built-in optical zoom of up to 5X, in steps of 0.1X horizontal and vertical. The system provides a horizontal resolution of 1.5 milliradians, which is equivalent to 344 lines. This imager is also a slow scan system, with frame times of 1 second, 1/2 second, or 1/4 second. The overall measurement accuracy of the system is plus/minus one half of one percent of the range at full scale.

With the spectral filters, the LWIR imager is capable of scanning energy in four distinct spectral bands. The first band is broad, recording energy between 8.0 - 13.0 microns. Three additional narrow spectral bands include 7.81 - 9.80 microns, 9.98 - 11.41 microns, and a very narrow band between 10.497 - 10.857 microns. The display options are similar to the MWIR system.

DATA CLUSTERING ALGORITHMS

As a baseline, the study began with an investigation using an unsupervised maximum likelihood algorithm for clustering the multispectral data of a scene containing a Midgetman mobile missile launcher parked on a grassy area in front of a grove of trees. Additional objects within the scene consisted of paved areas and buildings. Displaying the clustered pixel classes with artificial color indicated that fusion of data from all six wavebands successfully distinguished the classes of interest. Using only the visual and near-infrared bands, the camouflage-green mobile missile launcher was difficult to discriminate from background trees. Employing data from the two thermal bands, the clustering algorithm confused the mobile missile launcher with paved road, but successfully separated vegetation from man-made objects. Man-made objects indicated higher apparent temperatures in the LWIR band. Combining the data from all six wavebands successfully clustered the classes of interest, distinguishing target pixels from background pixels as well as differentiating between vegetation and man-made objects. Pruning combinations of sensor inputs indicated that the green-filtered visual band and the LWIR thermal band together contained most of the key information for distinguishing the classes.
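As one illustration of this kind of unsupervised maximum-likelihood clustering, the sketch below fits a Gaussian mixture by expectation-maximization to stacked waveband pixel vectors and assigns each pixel to its most likely cluster. It assumes scikit-learn is available; the six synthetic bands and five clusters are placeholders rather than the actual data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_bands(bands, n_clusters=5, sample=20000, seed=0):
    """bands: list of 2-D arrays (one per waveband), spatially registered.
    Returns a 2-D array of cluster labels, one per pixel."""
    cube = np.stack(bands, axis=-1).astype(np.float64)    # H x W x B
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    rng = np.random.default_rng(seed)
    # Subsample pixels for fitting speed, then label the full scene.
    train = pixels[rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)]
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                          random_state=seed).fit(train)
    return gmm.predict(pixels).reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bands = [rng.random((120, 160)) for _ in range(6)]    # six synthetic bands
    labels = cluster_bands(bands)
    print(labels.shape, np.bincount(labels.ravel()))
```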

A version of the Linde-Buzo-Gray clustering algorithm was also applied to the same multispectral scene imagery (Linde et al., 1980). The results of this clustering algorithm were similar to results obtained using the other data fusion techniques.
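For reference, a compact sketch of the Linde-Buzo-Gray splitting procedure (Linde et al., 1980) applied to multispectral pixel vectors is shown below; the codebook size, perturbation, and iteration counts are illustrative choices, not those used in the study.

```python
import numpy as np

def lbg(vectors, n_codes=8, eps=1e-3, iters=20):
    """Linde-Buzo-Gray codebook design. vectors: (N, D) pixel vectors.
    Returns (codebook, labels)."""
    codebook = vectors.mean(axis=0, keepdims=True)         # start with one codevector
    while len(codebook) < n_codes:
        # Split: perturb every codevector in two opposite directions.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each vector to its nearest codevector.
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            # Recompute each codevector as the centroid of its cell.
            for k in range(len(codebook)):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pix = rng.random((5000, 6))                            # 6-band pixel vectors
    codes, labels = lbg(pix, n_codes=8)
    print(codes.shape, np.bincount(labels))
```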

A fuzzy clustering algorithm was also successfully applied to classify the multispectral scene imagery containing the mobile missile launcher. Similar to other techniques, this algorithm "carved" different object regions within the multispectral feature space, except that it allowed for overlapping class possibilities among the data clusters. In other words, a specific point in the multidimensional pattern recognition feature space may have been simultaneously designated as belonging to more than one cluster or object class, with weighted possibility factors pertaining to the "degree of belonging" to each class. However, the researcher must ultimately select a defuzzifying threshold to apply when making a final classification decision.
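The sketch below illustrates a fuzzy c-means style clustering with a final defuzzifying threshold, mirroring the overlapping-membership behavior described above; the fuzzifier, cluster count, and threshold value are assumptions made for the example.

```python
import numpy as np

def fuzzy_cmeans(X, c=5, m=2.0, iters=50, seed=0):
    """X: (N, D) pixel vectors. Returns (centers, memberships U of shape (N, c))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                    # membership rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # fuzzily weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Standard fuzzy c-means membership update.
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((3000, 6))                            # 6-band pixel vectors
    centers, U = fuzzy_cmeans(X)
    threshold = 0.6                                      # defuzzifying threshold (assumed)
    hard = np.where(U.max(axis=1) >= threshold, U.argmax(axis=1), -1)
    print("unassigned pixels:", int((hard == -1).sum()))
```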

NEURAL NETWORK ARCHITECTURES AND TRAINING RESULTS

To assess classification robustness, tailored neural networks were trained and tested on near-simultaneous data from scenes imaged at different times of day. Learning coefficients, number of computational nodes, and node transfer functions were varied to optimize performance of a Multilayer Perceptron network and a modified Learning Vector Quantization (LVQ) network employing "conscience" and added training noise. Both architectures trained successfully, converging within several thousand training iterations to a 98% pixel classification accuracy on separate test data. Trained network outputs of classified pixels from a full scene were displayed with artificial color to pictorially convey the near-perfect classification of the five types of "objects" in the scene (mobile missile launcher, buildings, paved road, trees, and grass). Network architectures were developed and tailored with the NeuralWorks Professional II/PLUS software package by NeuralWare, Inc. Image pixel data were processed with the Geographic Resources Analysis Support System (GRASS), a Geographical Information System (GIS) with image processing capabilities.
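The sketch below gives a much-simplified LVQ1-style trainer with a "conscience" bias and added training noise, in the spirit of the modified LVQ network described above. It is not the authors' network; the prototype counts, learning rate, bias gain, and noise level are illustrative assumptions.

```python
import numpy as np

def train_lvq(X, y, n_classes, protos_per_class=3, lr=0.05, epochs=20,
              bias_gain=0.3, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise prototypes on randomly chosen samples of each class.
    protos, labels = [], []
    for c in range(n_classes):
        idx = rng.choice(np.where(y == c)[0], protos_per_class, replace=False)
        protos.append(X[idx]); labels += [c] * protos_per_class
    W, labels = np.vstack(protos), np.array(labels)
    win_freq = np.zeros(len(W))                           # "conscience" win counters
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x = X[i] + rng.normal(0, noise, X.shape[1])   # added training noise
            d = np.linalg.norm(W - x, axis=1)
            # Conscience: penalise prototypes that win too often.
            biased = d + bias_gain * (win_freq / (win_freq.sum() + 1) - 1 / len(W))
            k = biased.argmin()
            win_freq[k] += 1
            # LVQ1 update: attract on correct class, repel otherwise.
            W[k] += (lr if labels[k] == y[i] else -lr) * (x - W[k])
    return W, labels

def classify(X, W, labels):
    d = np.linalg.norm(X[:, None, :] - W[None], axis=2)
    return labels[d.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(c, 0.3, (150, 6)) for c in range(5)])  # 5 classes, 6 bands
    y = np.repeat(np.arange(5), 150)
    W, wl = train_lvq(X, y, n_classes=5)
    print("training accuracy:", (classify(X, W, wl) == y).mean())
```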

The initial training data set consisted of 750 vectors (pixels) comprised of 150 pixels for each of the five classes. The corresponding test data set contained 250 vectors including 50 pixels for each class. The trained neural networks were later used to classify all 233,000 pixels imaged in a full scene, of which approximately 5,600 were target pixels. With only limited training, approximately 90% of the on-target pixels were correctly classified and over 99% of the background pixels were correctly classified as not being on target. The contiguity of the on-target pixels offers an advantage in that a few misclassified target pixels will not detract from the target segmentation decision. Image processing techniques, such as window averaging, were implemented to post-process the pixel-by-pixel target/non-target classification output of the LVQ network. This post-processing served to "clean up" the few non-target pixels incorrectly classified as being on target.
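A minimal sketch of window-based post-processing of the target/non-target map, similar in spirit to the window averaging mentioned above, is shown below; the window size and majority fraction are assumptions made for the example.

```python
import numpy as np

def window_clean(target_mask, win=5, frac=0.5):
    """target_mask: 2-D boolean array (True = classified as target).
    A pixel keeps its target label only if at least `frac` of the
    surrounding win x win window is also labelled target."""
    h, w = target_mask.shape
    pad = win // 2
    padded = np.pad(target_mask.astype(np.float32), pad)
    out = np.zeros_like(target_mask)
    for r in range(h):
        for c in range(w):
            window = padded[r:r + win, c:c + win]
            out[r, c] = target_mask[r, c] and window.mean() >= frac
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = rng.random((100, 100)) < 0.02        # sparse isolated false alarms
    mask[40:60, 40:60] = True                   # contiguous "target" blob
    cleaned = window_clean(mask)
    print(mask.sum(), "->", cleaned.sum())
```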

SUPERVISED CLASSIFICATION

Another algorithm applied to the multispectral data set was a supervised classification algorithm. The supervised algorithm generated a pixel intensity histogram in each waveband for pixels sitting on a user-designated object (i.e., target) within an imaged scene. Thresholds were selected near the tails of the histograms. Pixels were then classified as target pixels if the intensity values in each waveband fell within the established thresholds on the histograms.
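The supervised histogram-threshold scheme can be sketched as follows: per-band thresholds are taken near the tails of histograms built from user-designated target pixels (percentiles are used here as a stand-in for the manual threshold selection), and a pixel is declared target only if it falls within the thresholds in every band.

```python
import numpy as np

def fit_thresholds(cube, target_mask, lo_pct=2.0, hi_pct=98.0):
    """cube: H x W x B multiband image; target_mask: boolean H x W mask of
    user-designated target pixels. Returns per-band (low, high) thresholds."""
    samples = cube[target_mask]                      # (N_target, B) training pixels
    return (np.percentile(samples, lo_pct, axis=0),
            np.percentile(samples, hi_pct, axis=0))

def classify_target(cube, thresholds):
    lo, hi = thresholds
    inside = (cube >= lo) & (cube <= hi)             # per-band interval tests
    return inside.all(axis=-1)                       # target iff inside in every band

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.random((120, 160, 6))
    truth = np.zeros((120, 160), bool)
    truth[50:70, 60:90] = True
    cube[truth] = rng.normal(0.8, 0.03, (truth.sum(), 6))   # give the target distinct statistics
    thr = fit_thresholds(cube, truth)
    detected = classify_target(cube, thr)
    print("hit rate:", (detected & truth).sum() / truth.sum())
```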

CLASSIFICATION ROBUSTNESS FOR TEMPORAL SIGNATURE VARIATIONS

To further verify classification robustness, algorithms were tested on imagery recorded over broad periods of time throughout the day. Results were excellent, indicating that scene classification is achievable despite temporal signature variations.

WAVEBAND SALIENCY ANALYSIS

Waveband saliency analyses were performed to determine which spectral bands contained the bulk of the discriminating information for discerning objects in the scenes. Equally important, these analyses may be used to determine the optimum subset of wavebands for discriminating the problem phenomenology. Histograms and scatter plots of the multispectral data were used. For a specific system implementation, waveband saliency analyses and judicious pruning of features yield a reduction in the number of system sensors, minimizing cost, weight, complexity, and processing requirements.
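One simple, illustrative way to quantify band saliency, complementing the histograms and scatter plots used in the study, is a per-band Fisher ratio computed from labeled pixel samples, as sketched below; this particular metric is an assumption, not necessarily the analysis the authors performed.

```python
import numpy as np

def fisher_ratio_per_band(X, y):
    """X: (N, B) pixel vectors, y: (N,) integer class labels.
    Returns a length-B array; larger values indicate more discriminating bands."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2   # between-class scatter
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)          # within-class scatter
    return between / (within + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 6-band data in which only bands 1 and 5 separate the two classes.
    X = rng.normal(0, 1, (4000, 6))
    y = (rng.random(4000) > 0.5).astype(int)
    X[y == 1, 1] += 3.0
    X[y == 1, 5] += 2.0
    scores = fisher_ratio_per_band(X, y)
    print("band ranking (best first):", np.argsort(scores)[::-1])
```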

RUNWAY DETECTION APPLICATION

In conjunction with the Federal Aviation Administration's Runway Detection Program, Battelle collected additional multispectral imagery using the sensor suite in a separate measurement episode. This data was processed using analysis and fusion techniques to detect a runway at Wright-Patterson AFB, discriminating the runway from other objects in the area using spectral characteristics. Figure 2 shows a 35mm photograph of a scene imaged in multiple wavebands consisting of a runway, roads, vegetation, and tactical targets at approximately 3 km. Figure 3 displays the apparent radiant temperatures measured in the scene for multiple wavebands within the MWIR and LWIR bands. Figure 4 displays, in two colors, the binarized result of a data fusion algorithm merging the multispectral data to segment, or detect, the runway.

Fig. 3. Runway Data Collection: Thermal IR Images - 1800 Hours

Fig. 4. Segmented Runway Using Unsupervised Classification - 1800 Hours, 3 km

REAL-TIME SENSOR FUSION HARDWARE

The real-time sensor fusion circuit board incorporates state-of-the-art high speed video, computer graphics, memory and programmable logic technologies into a hardware architecture that enables neural-network based sensor fusion algorithms to be executed at video pixel rates. The sensor fusion board is an I/O-addressable, standard full-length ISA bus compatible circuit card assembly comprised primarily of VLSI surface mount components. The host processor is used to configure and program the hardware functional blocks.

The sensor fusion board accepts three line-locked composite video inputs from three spatially registered cameras. These inputs are digitized to 8 bits each and processed to form a 24-bit per sample input to the video classifier and the video output select circuitry. The classifier uses 23 bits to produce an 8-bit data word that contains the 2 possible classifications, out of a possible 16, for each pixel. The video output select circuitry receives the 8-bit classifier data word and 24-bit pixel data and, based on these inputs and user configuration data, selects an 8-bit video output and the final synchronous 4-bit classifier data. The color space converter performs 3 x 3 matrix multiplication at pixel rates to perform 24-bit to 24-bit color space coordinate conversion. This device is used to create transparent pseudo color overlays on grey scale image data. The video RAMDAC converts the 24-bit color output of the color space converter into an RS-170 compatible RGB video output for display. The entire classification process is pipelined, with all stages operating simultaneously. The process does not require input video storage other than the registers required to temporarily store data at each stage of the pipeline. As a result, the overall input-to-output latency is just a few pixels, and the output is line-locked to the input. The sensor fusion board is designed to operate at a top pixel rate of 18.75 MHz. In most cases, the pixel rate used will be less than 15 MHz, since multisensor pixel registration becomes more of a challenge with high resolution cameras.
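As a rough software analogue (an assumption, not the board's firmware) of the overlay stage described above, the sketch below maps the per-pixel class word to a pseudo-color table and blends it as a transparent overlay on the grayscale image before display; the colors and blend weight are illustrative.

```python
import numpy as np

# One RGB triple per class code; class 0 is treated as "no overlay".
CLASS_COLORS = np.array([
    [0, 0, 0],        # 0: background, left untinted
    [255, 0, 0],      # 1: e.g., target
    [0, 255, 0],      # 2: e.g., vegetation
    [0, 0, 255],      # 3: e.g., paved surface
], dtype=np.float32)

def overlay(gray, class_map, alpha=0.4):
    """gray: H x W uint8 image; class_map: H x W uint8 class codes.
    Returns an H x W x 3 uint8 RGB frame with transparent pseudo-color."""
    rgb = np.repeat(gray[:, :, None], 3, axis=2).astype(np.float32)
    tint = CLASS_COLORS[class_map]                     # per-pixel overlay color
    mask = (class_map > 0)[:, :, None]                 # only tint classified pixels
    blended = np.where(mask, (1 - alpha) * rgb + alpha * tint, rgb)
    return blended.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gray = rng.integers(0, 256, (240, 320), dtype=np.uint8)
    classes = rng.integers(0, 4, (240, 320), dtype=np.uint8)
    frame = overlay(gray, classes)
    print(frame.shape, frame.dtype)
```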


The sensor fusion circuitry is ported at all of the major functional blocks to permit additional parallel processing and/or an extension to the pipeline for more complex implementations. The pipeline process does not involve the host computer beyond configuration and initial programming, which makes the circuitry easily adapted to form factors other than the PC/AT circuit card.

SUMMARY

The Battelle multispectral sensor suite, comprised of 12 distinct spectral bands, provides an excellent portable platform for collecting simultaneous spectral data of natural or man-made features in the field. Coupled with the state-of-the-art sensor fusion board developed by Battelle scientists, the overall system provides a unique multispectral collection and fusion system for use in virtually any environment. Additional data collection and testing efforts are planned for 1994. A field test at Dugway Proving Grounds to provide a multispectral analysis of biological warfare simulant clouds (including correlation to overhead satellite imagery) is planned for this summer. This test will integrate the passive collection system described in this paper with a van-based active LIDAR system. Analysis will include use of the passive system for detection of the clouds and the LIDAR system for positive identification. Additional smoke grenade tests are also being planned, as well as a validation and verification test of several aircraft hyperspectral scanning systems. Potential applications are wide-ranging, including medical imaging, industrial inspection, environmental monitoring and airborne/satellite sensor validation. Interested organizations or individuals wishing more information on the Battelle-owned system are encouraged to contact one of the authors.

REFERENCES

[1] Kuzma, Thomas J. and Laurence E. Lazofson. 1993. "Automatic Target Detection Using Multispectral Sensor Fusion Implemented with Neural Networks," 1993 Automated Mission Planning Society Symposium, San Antonio, TX.

[2] Kuzma, Thomas J. and Laurence E. Lazofson. 1993. "Scene Classification and Segmentation Using Multispectral Sensor Fusion," 1993 Meeting of the IRIS Specialty Group on Passive Sensors, Applied Physics Laboratory/Johns Hopkins University, Laurel, MD.

[3] Lazofson, Laurence E. and Thomas J. Kuzma. 1993. "Scene Classification and Segmentation Using Multispectral Sensor Fusion Implemented with Neural Networks," Sixth National Symposium on Sensor Fusion, Orlando, FL.

[4] Lazofson, Laurence E. and Thomas J. Kuzma. 1993. "Scene Classification and Segmentation Using Multispectral Sensor Fusion Implemented with Neural Networks," SPIE International Symposium on Optical Engineering and Photonics in Aerospace and Remote Sensing, Orlando, FL.

[5] Linde, Yoseph et al. 1980. "An Algorithm for Vector Quantizer Design," IEEE Transactions on Communications, Vol. COM-28, No. 1.

[6] Rogers, Steven K. et al. 1990. An Introduction to Biological and Artificial Neural Networks. Bellingham, Washington: SPIE.

[7] Seldin, J.H. and J.N. Cederquist. 1992. "Classification of Multispectral Data: A Comparison Between Neural Network and Classical Techniques," Government Neural Network Applications Workshop: 79-83.

Tom Bergman (AS, Laser and Electro-Optics Technology, Vincennes University and AS, Electronics Technology, Vincennes University) has formal training in the laser/electro-optics and electronics fields and nearly five years' experience in the operation, testing, and troubleshooting of electro-optical and electronic equipment. He is experienced in the use of various imaging and non-imaging IR systems, and with ORCAD schematic capture and AFGL's LOWTRAN VII atmospheric transmission software.

Ron Gorenflo (BSEE, University of Texas at Arlington) has 18 years' experience in design and development of complex sensor systems. He is currently the lead electronics engineer for several projects involving design of video and camera interface hardware.

Dave Hermann (M.S., Electrical Engineering, The Ohio State University; B.S., Electrical Engineering, The Ohio State University) was an hourly staff member from October 1991 to April 1993, while he was working toward his M.S. degree at OSU. During that time, he worked primarily in software development using C. His graduate studies focused on high-performance computer architectures, supercomputers, parallel processing, and neural networks. Since May 1993, he has been a regular staff member focusing mainly on computer engineering, hardware systems design, and software development. He is a former U.S. Army First Lieutenant and served three years active duty after graduation in 1987.


Ed Kopala (BSEE, Indiana Institute of Technology; completed all course requirements for an MSEE, The Ohio State University) has 25 years' experience in the system analysis, design, fabrication, and testing of military electro-optical sensor systems. These systems have included laser (Nd:YAG and CO2), imaging infrared (IRST/FLIR), millimeter-wave radar, and both active and passive day/night television. Testing experience has been at both the theoretical and practical (hands-on) application level. Testing has included the development and execution of both man-in-the-loop and hardware-in-the-loop simulation validation studies, in addition to laboratory and operational field evaluations. His background includes broad knowledge in the areas of air and ground target phenomenology, atmospheric effects, optical design, detector technology, digital image processing, laser beam energy measurements/diagnostics and laser/imaging infrared countermeasures. Mr. Kopala is also currently on the program committee for the IRIS Specialty Group on Passive Sensors. While at Rockwell International, Mr. Kopala was a member of the Corporate Optics Panel. Mr. Kopala is currently the Associate Manager for the Advanced Sensor Modeling and Remote Sensing Group within the Battelle Defense and Space Systems Analysis Department.

Tom Kuzma (B.S., Astronomy, Villanova University; Ph.D., Astronomy, The Ohio State University) has a broad background in observational astronomy and theoretical astrophysics, and strong experience in remote sensing and image processing in geographic information systems (GIS). He has many years' experience in the design, development, and use of large computer programs, and is involved in software development on a SUN workstation. In addition, he has been involved with space program planning, NASA mission planning, and the evaluation of flight hardware for the astrophysics division of NASA.

Larry Lazofson (M.S.E.E., Electro-Optics, Air Force Institute of Technology; B.S.E.E., University of New Mexico; B.S., Natural Science, Muhlenberg College), prior to joining Battelle in September 1992, accrued 9 years of Air Force experience in the research and development career field directing technology growth, planning, test, and development. His background in electro-optic/infrared avionics systems encompasses countermeasures, passive sensors, threat modeling, and low-observable technologies. Mr. Lazofson has published papers in the areas of artificial neural network applications for image processing/recognition and reliability in test equipment acquisition. He has extensive experience and training in the fields of metrology and calibration, logistics, statistics, computer systems and software, systems engineering, technology assessment, reliability and maintainability, quality, and leadership and management.

Randy Orkis (B.S., Electrical Engineering with Honors, Ohio University; M.S. (in progress), Electrical Engineering, Virginia Tech and The Ohio State University) has broad experience in several engineering areas as well as contract and project management. Engineering experience includes: spacecraft support and design, radar systems, antenna systems, digital communications, electronic design, and system integration. Management experience includes extensive interaction/coordination responsibilities throughout the Intelligence and Department of Defense communities at all levels as Chairman of an Intelligence panel under the Director of Central Intelligence. Contract and project management experience includes several years as a Contracting Officer's Technical Representative for the CIA.

Evan Preston's (B.S., Geology and Mineralogy, The Ohio State University; M.S., Geodetic Science, The Ohio State University) background in Geographic Information Systems (GIS), remote sensing and image processing includes a wide variety of applications for both Government and industry. This experience includes: development of GIS software on Sun, VAX, and Tektronix computer systems written in C, FORTRAN and DoD Standard 2167 Ada; remote sensing and image processing of Landsat TM and MSS, SPOT XS and PAN, AVHRR, DMSP and other remotely sensed information; cartographic data base design and development using both photogrammetric and computer-assisted cartographic techniques; and design and analysis of geopositioning systems for use in military navigation applications. Mr. Preston is familiar with all aspects of MC&G data and data standards and has experience with many GIS/image processing packages including GRASS, Arc/Info, PCI, TheCore and Khoros. In addition, Mr. Preston has successfully participated in the planning and execution of numerous field experiments involving remote sensing.