


Developments in Radar Imaging

DALE A. AUSHERMAN, Member, IEEE

ADAM KOZMA, Member, IEEE

JACK L. WALKER, Member, IEEE
Environmental Research Institute of Michigan

HARRISON M. JONES

ENRICO C. POGGIO
M.I.T. Lincoln Laboratory

Using range and Doppler information to produce radar images is a technique used in such diverse fields as air-to-ground imaging of objects, terrain, and oceans, and ground-to-air imaging of aircraft, space objects, and planets. A review of the range-Doppler technique is presented, along with a description of radar imaging forms including details of data acquisition and processing techniques.

Manuscript received March 13, 1984; revised June 26, 1984; released for publication July 9, 1984.

This work was supported by the U.S. Air Force, the U.S. Army, and the U.S. Navy. The U.S. Government assumes no responsibility for the information presented.

Authors' addresses: D.A. Ausherman, A. Kozma, J.L. Walker, Environmental Research Institute of Michigan, P.O. Box 8618, Ann Arbor, MI 48107; H.M. Jones and E.C. Poggio, Lincoln Laboratory, Massachusetts Institute of Technology, P.O. Box 73, Lexington, MA 02173.

0018-9251/84/0700-0363 $1.00 © 1984 IEEE

I. INTRODUCTION

The purpose of this paper is to discuss various types of imaging radars. These radars take a number of forms according to the intended application. The forms range from synthetic aperture radars (SARs) carried on moving platforms, which are intended to be used to image strips or patches of terrain, to stationary radars for imaging objects placed on rotating platforms, objects moving by the radar such as aircraft or orbiting objects, or celestial objects like the Moon and planets.

Although these radars take different forms and have various applications, all are coherent radars which utilize the range-Doppler principle to obtain the desired image. That is, the image is made using conventional techniques to obtain fine range resolution and using the Doppler frequency gradient generated by the rotation of the object field relative to the radar to obtain a cross-range resolution that is much finer than that obtainable by the radar's beamwidth.

In this tutorial paper, we give an introduction to range-Doppler radar imaging and briefly describe various forms this technique takes. A historical perspective of the development of the imaging technique, along with a number of examples, is given in Section II. In Section III, we develop the fundamentals of range-Doppler imaging in detail and discuss various processing approaches which deal with motion through resolution cells. We treat the general three-dimensional case, including the concept of three-dimensional processing. In Section IV, we include a detailed discussion of radar imaging techniques, including the data acquisition and details of the data-processing techniques.

A. Introduction to Radar Imaging Concepts

The Doppler frequency gradient required to obtain fine cross-range resolution is generated by the motion of the object relative to the radar; this motion is generated in a variety of ways which can be related to the simplified case of a stationary monostatic radar illuminating a rotating object. Fig. 1 portrays a three-dimensional object as projected into the x-y plane, with the object rotating with a uniform angular motion about the z axis. However, as discussed in the following paragraphs, such restrictive assumptions can be removed, and three-dimensional bodies rotating about an arbitrary axis with nonuniform rotation rates can be imaged. In addition, bistatic radar operation can also be accommodated.

Fig. 1. Range-Doppler imaging. The coherent radar, at range r_a from the rotation center A, sees isorange planes perpendicular to the line of sight and iso-Doppler planes parallel to the plane of the rotation axis and the line of sight.

If the object, contained within the beam of the radar, is rotating about the point A at ω radians per second and the coherent radar is located a distance r_a from the object, then the range to an object point with initial (t = 0) coordinates (r_0, θ_0, z_0) is given by

$r = [r_a^2 + r_0^2 + 2 r_a r_0 \sin(\theta_0 + \omega t) + z_0^2]^{1/2}.$  (1)

If the distance to the object is much larger than the size of the object (r_a >> r_0, z_0), a good approximation is


$r \approx r_a + x_0 \sin \omega t + y_0 \cos \omega t$  (2)

where x_0 = r_0 cos θ_0 and y_0 = r_0 sin θ_0. Similarly, the Doppler frequency f_d of the returned radar signal is

$f_d = \frac{2}{\lambda}\frac{dr}{dt} = \frac{2 x_0 \omega}{\lambda} \cos \omega t - \frac{2 y_0 \omega}{\lambda} \sin \omega t$  (3)

where λ is the radar wavelength.

If the radar data are processed over a very small time interval centered at t = 0, (2) and (3) can be approximated as

$r \approx r_a + y_0$  (4)

$f_d \approx 2 x_0 \omega / \lambda.$  (5)

Therefore, by analyzing the returned radar signal in terms of range delay and Doppler frequency, the (x_0, y_0) components of the position of the point scatterer can be estimated. The surfaces of constant range are parallel planes perpendicular to the radar line of sight (RLOS), and the surfaces of constant Doppler are parallel planes parallel to the plane formed by the rotation axis and the RLOS. This constitutes the usual range-Doppler imaging procedure. The presence of the object rotation rate ω in (5) implies that in order to obtain a properly scaled image of the object, the magnitude of ω must be known. Most techniques for estimating the rotation rate depend on a priori knowledge and/or analysis of periodicities in the radar signal level. Another implicit assumption is that the distance r_a from the radar antenna to the center of the object is constant and known. In applications where r_a is a function of time, the effects of time-varying range must be removed from the received signal in the radar receiver and/or processor.
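As a minimal sketch of this position estimation, the following Python fragment inverts (4) and (5) for one scatterer, given a known rotation rate and wavelength. Every numeric value is an illustrative assumption, not a value from the paper:

```python
# Invert the small-interval relations (4) and (5): recover (x0, y0) from a
# measured range and Doppler frequency. All numbers are assumed for illustration.

wavelength = 0.03      # lambda: radar wavelength, m (assumed X band)
omega = 0.05           # object rotation rate, rad/s (assumed known)
r_a = 10_000.0         # range to the rotation center A, m (assumed known)

r_measured = 10_004.0  # measured range to the scatterer, m
f_d_measured = 20.0    # measured Doppler frequency, Hz

y0 = r_measured - r_a                         # from (4): r ~ r_a + y0
x0 = f_d_measured * wavelength / (2 * omega)  # from (5): f_d = 2 x0 omega / lambda
print(f"x0 = {x0:.2f} m, y0 = {y0:.2f} m")    # -> x0 = 6.00 m, y0 = 4.00 m
```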

The resolution in range is achieved by conventional means using a train of short or long coded pulses which provide a range resolution ρ_r determined by the bandwidth B_w of the pulse. Hence

$\rho_r = c/2B_w$  (6)

where c is the velocity of propagation of the radar energy.

We see from (5) that we can achieve a cross-range resolution Δx = ρ_a if we can measure Doppler frequencies with a resolution of

$\Delta f_d = 2\omega \rho_a / \lambda.$  (7)

Since a frequency resolution Δf_d requires a coherent processing time interval of approximately ΔT ≈ 1/Δf_d, the cross-range resolution is given by

$\rho_a = \lambda / 2\omega \Delta T = \lambda / 2\Delta\theta$  (8)

where Δθ = ωΔT is the angle through which the object rotates during the coherent processing time.
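A brief numeric illustration of (6) and (8) follows; the parameter values are assumptions chosen for round numbers, not values from the paper:

```python
# Worked example of the range and cross-range resolution formulas (6) and (8).

c = 3e8             # propagation velocity, m/s
B_w = 150e6         # pulse bandwidth, Hz (assumed)
wavelength = 0.03   # m (assumed)
omega = 0.05        # rotation rate, rad/s (assumed)
delta_T = 3.0       # coherent processing time, s (assumed)

rho_r = c / (2 * B_w)                    # (6): range resolution -> 1.0 m
delta_theta = omega * delta_T            # rotation during the dwell, rad
rho_a = wavelength / (2 * delta_theta)   # (8): cross-range resolution -> 0.1 m
print(rho_r, delta_theta, rho_a)
```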

Fine cross-range resolution implies coherent processing over a large Δθ; however, (2) and (3) indicate that both the range and Doppler frequency of a particular point scatterer can vary greatly over a large processing interval. This means that during a processing time interval sufficiently long to give the desired cross-range resolution, points on the rotating object may move through several resolution cells. Therefore, the usual range-delay measurement and Doppler-frequency analysis implied by (4) and (5) will result in degraded imagery for the large processing interval case.

To avoid image degradation caused by motion through resolution cells while using the simple range-Doppler analysis described above, we must limit the size of the coherent processing time ΔT. In the special case described above (constant rotation rate and RLOS perpendicular to the axis of rotation), no motion through a range resolution cell and a Doppler resolution cell will occur if

$\Delta T < 2\rho_r / \omega D_a$  (9)

and

$\Delta T < (1/\omega)(\lambda / D_r)^{1/2}$  (10)

respectively, where D_r and D_a are the maximum range and cross-range dimensions, respectively, of the object. Consequently, one must limit the resolution of the imaging system such that

$\rho_a^2 > \lambda D_r / 4$  (11)

$\rho_a \rho_r > \lambda D_a / 4.$  (12)

In general, the image scene dimensions are not the only parameters regulating the extent of the coherent processing interval and hence the cross-range resolution of conventional range-Doppler images. When the angular rate is variable and/or the radar range directions are not coplanar in a coordinate system that rotates with the object, the constraint of no motion through a Doppler resolution cell (10) may have to be modified to a more stringent one, leading to even smaller values of ΔT.
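The dwell-time constraints (9) and (10) can be packaged as a small helper; this is a sketch for the special case stated above, with hypothetical inputs:

```python
def max_unmigrated_dwell(rho_r, wavelength, omega, D_r, D_a):
    """Largest coherent processing time satisfying both (9) and (10), i.e.,
    keeping every scatterer within one range cell and one Doppler cell, for
    the special case of constant rotation rate with the RLOS perpendicular
    to the rotation axis."""
    dT_range = 2 * rho_r / (omega * D_a)                  # (9)
    dT_doppler = (1 / omega) * (wavelength / D_r) ** 0.5  # (10)
    return min(dT_range, dT_doppler)

# Hypothetical example: 1-m range cells, 3-cm wavelength, a 10 m x 10 m object.
print(max_unmigrated_dwell(rho_r=1.0, wavelength=0.03, omega=0.05,
                           D_r=10.0, D_a=10.0))  # -> about 1.1 s, set by (10)
```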


Often, a finer cross-range resolution is desired, and hence points in the object move through range and/or Doppler resolution cells during the coherent integration time. In this case, simple frequency analysis will yield degraded imagery; the effect of motion through resolution cells must be compensated. Several techniques for compensating for this motion through cells have been developed over the years. These range from linear piecewise approximations to account for the motion, to sophisticated "extended" methods, to elegant methods of formatting the data prior to the image formation processing. Some of these methods are discussed in Sections III and IV.

B. Applications of Radar Imaging

Application of these principles has yielded various forms of range-Doppler radars as stated above. The well-known stripmap SAR technique [1] is a special case where the Doppler gradient is achieved by the relative rotation produced by scanning an antenna fixed to a moving platform over a strip of terrain. This is illustrated in Fig. 2. Here a coherent radar carried on a moving platform at a velocity v illuminates a stationary point O on the terrain at broadside range r_a from the flight line. The point O is first illuminated by the forward edge of the antenna beam and is last illuminated when the aft edge of the beam passes by the point. The apparent total rotation of the object in the neighborhood of point O is equal to the angle subtended by the radar's antenna beam, which is approximately Δθ = λ/L, where L is the length of the radar antenna in cross range. Using this relation in (8) yields the well-known formula for the cross-range resolution of an SAR, namely, ρ_a = L/2.

Fig. 2. Stripmap synthetic aperture radar geometry, showing the line of flight.

With the stripmap SAR, the apparent rotation rate of the object (i.e., the relative rotation between the object and the RLOS) is not constant, and hence the Doppler frequency produced by a scatterer is a function of time. To achieve a fine cross-range resolution, it is necessary to make a correction for the change in frequency. Such a correction is often called "focusing the synthetic aperture array" since it is equivalent to supplying an essentially quadratic phase shift to the synthetic array generated by the radar as it moves past the terrain. Motion through resolution cells does not occur if Δr, as shown in Fig. 2, is smaller than the desired range resolution. Under this condition, the quadratic phase correction described above is by itself sufficient to give the desired resolution. This condition holds for most SARs; however, for SARs [2] which operate at extreme ranges, such as the NASA satellite-borne SEASAT [3] radar, range cell migration correction is also required. Correction for both range cell migration and Doppler frequency change of scatterers can be accomplished by two-dimensional correlation of the received signals with a replica of the expected return from a fixed point in each resolution cell in the scene. This cross-correlation function's magnitude plotted as a function of position in the scene is the displayed image.

Rotating platform radars also fit the model shown in Fig. 1 (except that the RLOS is not always perpendicular to the rotation axis). These radars are often used to obtain radar cross-section measurements and to produce images to obtain radar signatures [4, 5]. Processing to eliminate the effects of migration through resolution cells is almost always required. A form of airborne terrain mapping radar, called the spotlight radar, also fits this simple model [6-10] (except that the relative rotation rate can vary). In this form, the radar is carried in a moving vehicle and the antenna illuminates a fixed spot on the terrain from a continuously changing look angle, as shown in Fig. 3. If the gross Doppler due to the change of distance from the aircraft to the center of the spot is compensated, it can be shown that the spot of terrain can be treated as a rotating object field illuminated by a distant stationary radar.

Fig. 3. Spotlight synthetic aperture radar.


Ground-based radars [11] which image moving vehicles such as aircraft or objects moving in orbit also fit the model, provided the radar tracks the object in its trajectory. The gross Doppler due to the trajectory is removed, and the Doppler gradient appears as if the motion were due only to the rotation of the object relative to the RLOS. This relative rotation is caused by the translational motion of the target along its trajectory and by the rotational motion of the target itself (both rotations must be described in the same coordinate system). The trajectory and rotational motions which provide the Doppler gradient are not always known a priori, and one of the main problems in the image formation is to correctly estimate these motions from the radar data.

An important ground-based application of range-Doppler radar is to image the Moon or planets in radar astronomy [12]. The technique is called delay-Doppler imaging in astronomy. The radar is located at a fixed site on the Earth and illuminates the Moon or a planet. Contours of constant delay appear as annuli on the planet, as shown in Fig. 4. Contours of constant Doppler appear as straight-line strips parallel to the rotation axis of the planet. The intersection of an annulus and a strip defines the delay-Doppler resolution patches, as shown in black in the figure. The size of the resolution patches is determined by the bandwidth of the pulse modulation and the Doppler-frequency resolution, or by the coherent integration time. The image obtained using this technique is ambiguous since the returns from A and B at the intersection of an annulus and a strip cannot be distinguished from each other. However, using an interferometer, one patch can be nulled with respect to the other and the reflectivity of a patch can be isolated.

Fig. 4. Delay-Doppler imaging in radar astronomy, after Green [32]. An annulus defined by time delay intersects a strip defined by Doppler frequency; the strip runs parallel to the axis of rotation, between the receding and approaching edges of the planet.

It is also possible to form images of objects which have no appreciable trajectory motion relative to the radar. Instead, the natural undulation of the object, such as the rocking motion of a ship due to wave action, is used to create the Doppler gradient. In this type of imaging, which has been called inverse synthetic aperture radar (ISAR) [13], it is also necessary to estimate the magnitude and direction of the undulation from the data since these parameters are unknown.

II. HISTORICAL PERSPECTIVE

A. Synthetic Aperture Radar

The earliest statement that Doppler-frequency analysis could be used to obtain fine cross-range resolution is attributed to Carl Wiley of the Goodyear Aircraft Corporation in June 1951. At the same time, a group at the University of Illinois [14] was conducting experimental studies which revealed that radar returns from certain terrain samples produced frequency spectra containing sharp lines. In a report dated March 1952, they noted these lines were due to strong fixed targets within the beam of the observing radar and concluded this effect could provide a radar system with greatly improved angular resolution. This group constructed an X-band radar and in early 1953 used it to produce a radar map using frequency analysis techniques to obtain high resolution in cross range. The radar that produced this map was an unfocused system; that is, there was no phase correction provided to compensate for the changing Doppler frequency [9].

In 1953, an Army summer study, called Project Wolverine, was convened at the University of Michigan for the purpose of recommending research and development programs leading to better battlefield surveillance techniques. As a part of this study, the Doppler-frequency technique was examined in more detail. Participants in this study included representatives from universities and industry, including the Universities of Illinois and Michigan, Goodyear, Philco, General Electric, and Varian. The result of this study was a development program which proceeded, under Army sponsorship, to further develop the range-Doppler radar principle.

A part of this program was to develop a practical data processor which could accept wideband signals and carry out the necessary Doppler-frequency analysis at each resolvable range interval so that a useful image could be produced. A group at the Willow Run Laboratories of the University of Michigan, under L.J. Cutrona, was assigned the problem of developing an optical computer for this purpose. Processing techniques considered by other groups included electronic processors, recirculating delay lines, and storage tubes.

In the ensuing years, the Willow Run group constructed an X-band radar and built an optical computer. The equipment was completed in the summer of 1957, and the first fully focused SAR map was produced in August 1957. Very soon after this, the Army requested that a demonstration system be constructed. This system, the AN/UPD-1, was produced by the Willow Run group in conjunction with Texas Instruments. Five radar systems were built and various demonstration flights were conducted in the spring of 1960 [1].

In subsequent years, the state of the art of SAR for the military was further developed by a number of organizations. Currently, a SAR is used as a standard reconnaissance tool by the Air Force. This radar system, called the UPD-4, was built by Goodyear Aerospace [15]. In late 1972, a three-wavelength SAR was included in the Apollo 17 lunar mission. The objectives of the Apollo 17 Lunar Sounder Experiment (ALSE) were to detect subsurface geologic structures, to generate a continuous lunar profile, and to image the Moon at radar wavelengths. A great deal of important data on the surface and subsurface features was gathered during this experiment [16]. During the last decade, SAR has also been applied to such diverse civilian applications as terrain mapping [17, 18], oceanography [19-21], and ice studies [22, 23]. In 1978, NASA launched the SEASAT satellite, which carried an L-band SAR. During its relatively short life, it imaged many parts of the world and provided a great deal of important data to oceanographers and other scientists [24-26]. An example of the type of image produced by this instrument is shown in Fig. 5. NASA is continuing to develop SAR for space applications with its shuttle imaging radar (SIR) series. SIR-A was carried aboard the shuttle flight in November 1981 [27], and plans for subsequent flights of SIR-B and SIR-C are being carried out. In addition, the European Space Agency [28], Canada [29], and Japan [30] have announced intentions to place SARs in orbit during the next decade.

Fig. 5. A 100-km by 100-km frame from the L-band SEASAT SAR collected on August 19, 1978. It shows the English Channel (Strait of Dover) between Rams Gate Head on the left and the French coast in the vicinity of Dunkerque and Calais on the right. The linear features in midchannel and the distinctive surface patterns around Rams Gate Head are both the result of tidal currents flowing over sand ridges at the bottom of the channel. The ground resolution of the image is 25 × 25 m (courtesy of NASA/JPL).

Recently, ERIM¹ has built a SAR designed to support engineering operations in the Arctic. This system, called the sea ice and terrain assessment radar (STAR), is currently being operated by Intera, Ltd., in support of two Canadian oil companies drilling in the Beaufort Sea. A block diagram of this radar and a picture of the equipment are shown in Figs. 6 and 7, respectively. The system is installed in a light twin-engine aircraft and flies mapping missions over the ice fields surrounding the drilling rigs. The data is processed in real time in the aircraft by an analog/digital processor, and the ice map is telemetered to a ground station where a mosaic of the area surrounding the drill rig is assembled. This map is used by ice experts aboard the rig to assess the ice conditions. A sample of the type of imagery produced by this system is shown in Fig. 8.

¹In 1973, the Willow Run Laboratories separated from the University of Michigan and became the Environmental Research Institute of Michigan (ERIM), a not-for-profit research organization.

B. Radar Astronomy

Independent of the work that was being done in SAR, Green formulated the concept of delay-Doppler imaging in the 1950s with the aim of improving the resolution of the radars being used for making measurements of the Moon and planets [31, 32]. In the late 1950s, Pettingill used the technique to produce radar images of the Moon [33]. He used the Millstone Hill radar, operating coherently at 440 MHz, to produce 26 range cells of 75-km resolution each across the Moon. In each range cell, Pettingill was able to resolve Doppler frequencies to ±1/10 Hz by processing a series of pulses existing over a 10-s duration. In 1961, several organizations obtained radar echoes from Venus [34-38]. In addition, radar contacts have been made with Mercury and Mars [39-42].

The planet Venus can be imaged with good sensitivity only near inferior conjunction, i.e., when Venus is approximately between the Earth and the Sun. At this distance, even the narrow beam of the National Astronomy and Ionosphere Center's Arecibo radar, produced by the 300-m dish, has about twice the diameter of the planet, so Doppler is needed to obtain good cross-range resolution. An image of Venus using Arecibo data taken in 1975, 1977, and 1980 is shown in Fig. 9.

C. Imaging of Orbiting Objects

In the early 1960s, it was recognized that the range-Doppler technique could be applied to imaging of orbiting objects. A radar for this purpose, called the synthetic spectrum radar, was built by Westinghouse under Defense Advanced Research Projects Agency (DARPA) sponsorship. This radar was an instantaneously narrowband radar which used frequency stepping techniques to achieve a wide bandwidth. In the late 1960s, Rome Air Development Center (RADC) developed the Floyd Site radar for imaging orbiting objects. This radar was built by General Electric, and the processing techniques were developed by Syracuse Research Corporation.

Fig. 6. Block diagram of the STAR system. The radar operates at X band and uses a swept YIG oscillator to generate a linear frequency-modulated pulse for transmission. The bandwidth of the pulse over 30 μs is 15 MHz and 30 MHz for 6- and 12-m resolution, respectively. The returned pulse is compressed by a separate SAW device for each resolution. The range swath covered is 22.4 km and 44.7 km at 6- and 12-m range resolution, respectively. The azimuth compression is performed by the digital real-time signal processor, which produces seven 6-m resolution images which are incoherently superimposed. These data are sent via a downlink to a ground station where the stripmap is recorded on film and on a tape recorder. The image data are also recorded aboard the aircraft.

A 94-GHz radar for space object identification (SOI) was constructed by Aerospace Corporation in the 1960s. This radar has a 1-GHz bandwidth and produces a time-bandwidth product of 10⁶ using a 1-ms pulse length [43].

The first high-quality images of near-Earth space objects were obtained in the early 1970s using data collected by the ARPA, Lincoln Laboratory, C-band, observables radar (ALCOR). These data were processed by Lincoln Laboratory and the Syracuse Research Corporation. Even though ALCOR was not designed for radar imaging, successful results were made possible by the 50-cm range resolution, by coherent data recording, and by sufficient sensitivity to image low-altitude satellites.

In the middle 1970s, the success of the early ALCOR results persuaded DARPA to sponsor an SOI program at Lincoln Laboratory. Included in this program were upgrades to the ALCOR radar, such as an increase in PRF to 200 Hz and the ability to record pulse-compressed data in up to three adjacent 30-m range windows. Data acquisition procedures and range-Doppler image processing efforts for many classes of near-Earth space objects were fully developed.

In the late 1970s, the results of the ALCOR SOI program led to the development of the long-range imaging radar (LRIR) [11] at Lincoln Laboratory. Once the LRIR became operational, significant image processing developments were achieved. The LRIR is an X-band radar with a bandwidth which is 10 percent of the center frequency. It was specifically designed to be able to image satellites at synchronous range. The wide bandwidth allows for 25-cm range resolution, and the maximum PRF of about 1000 Hz allows for imaging of rapidly rotating space objects and provides added imaging sensitivity.

Significant progress was made in the late 1970s and early 1980s in processing data from the LRIR. A technique called extended coherent processing (ECP) was developed. ECP is an efficient general imaging technique which speeds up processing of image data and allows carrying out new applications such as wide-angle imaging, stroboscopic imaging, and three-dimensional imaging.

Fig. 7. A view of the STAR system as installed in a Cessna 441 Conquest aircraft. The rack on the left contains the radar control computer, the VISOR hard copy recorder, and the antenna control. The lower part of the rack in the middle is the RF, and mounted atop is the controller for the real-time signal processor. The remainder of the processor, along with the buffer/presummer, is located in a rack further aft which is not shown in the picture. The rack to the right contains the downlink formatter, the downlink, and the tape recorder. The small rack forward contains the Litton LTN-76 inertial measurement unit. The slotted array antenna is located in a radome under the aircraft. Total weight of the system is 340 kg.

Fig. 8. STAR imagery of an area in western Pennsylvania, south of Altoona, shows the radar's 6- by 12-m resolution wide swath mode (44.7 km). The sensor was flown at a 26 000-ft altitude. Evitts Mountain and Dunning Mountain are the ridges running south to north on the left; to the right (east of these) is the Juniata River.

D. Rotating Platform Imaging

In the early 1960s, work began on the development of techniques for imaging rotating objects at the Willow Run Laboratories under Brown. This interest was stimulated by the results of a summer study on space object identification sponsored by the Electronics Systems Division of the Air Force. Brown recognized that such radar imaging is substantially equivalent to SAR, since SAR can be described in terms of a general pulse-Doppler radar for which the range-Doppler image corresponds to a geometric image of the scene [44].

A rotating platform and a coherent ground-based radar were built, and work was carried out in data gathering and the development of data-processing techniques under Army and Air Force sponsorship. The principal data-processing problem addressed was processing in the presence of motion through resolution cells. The processing technique devised consisted of taking the Fourier transform of the range data, followed by a gentle distortion of the range transform plane. After these steps, a two-dimensional Fourier transform was used to produce the image [45].

Walker began work with this rotating platform in 1970. His work resulted in a more general formulation of the range-Doppler imaging theory and the introduction of the polar-format storage technique, which solved the general problem of processing with motion through resolution cells. In addition, extensive experimental results were produced [5].

The rotating platform radar facility used for this and subsequent work is shown in Fig. 10. The facility currently has the capability of forming images using radar illumination at center frequencies of 10 GHz, 35 GHz, and 94 GHz. A radar image of a vehicle produced by the facility, along with an optical image, is shown in Fig. 11 [46].

In addition to the work just discussed, Mensa et al. [47, 48] at the Pacific Missile Test Center and a group under Wehner at the Naval Ocean Systems Center have worked on imaging of rotating objects [49], as have Chen and Andrews [50, 51]. Recently, a number of authors have studied the relation between techniques used in tomography and range-Doppler imaging [52-54]. Their conclusion is that range-Doppler imaging can be analyzed using the projection-slice theorem from computer-aided tomography (CAT). Conversely, it has been suggested that processing techniques borrowed from tomography may advance the state of radar processing techniques [55].

III. RANGE-DOPPLER IMAGING FUNDAMENTALS

In Section I, we introduced the basic concept of using range and Doppler (range-rate) time signals to provide two-dimensional images of a rigid object field. In this section, we develop in more detail the principles of range-Doppler imaging of rotating objects to serve as a background for subsequent discussions of general imaging radar configurations. The fundamentals presented here involve a three-dimensional imaging geometry with separate (bistatic) transmitting and receiving antennas moving along arbitrary trajectories. Important special cases such as stripmap SAR, spotlight SAR, and space-object imaging with a fixed radar are treated in Section IV.

Fig. 9. Radar imagery of the surface of Venus reveals the varied and complex nature of its surface terrain. This mosaic was obtained with the 12.6-cm radar interferometer of the National Astronomy and Ionosphere Center and covers the area from 30°N to 70°N latitude and from 100°W to 40°E longitude. The large radar-dark pear-shaped feature at top center is Planum Lakshmi, a broad flat plateau surrounded by steep scarps. The very bright feature to its right is Maxwell Montes, which measures 750 km north to south and includes the planet's highest elevation, 11 km above the planetary mean (courtesy D.B. Campbell, NAIC).

Fig. 10. Rotating platform radar facility uses separate transmitting and receiving horns shown located on the tower. The tower is located about 40 m from the platform, which is about 6 m in diameter and has a rotation period of 168 s. The radar transmitter and receiver are located inside the building.

A. General Three-Dimensional Radar Imaging

In this section, we consider a more general range-Doppler imaging situation involving a bistatic transmitter/receiver configuration and a three-dimensional rigid object, as shown in Fig. 12. Both the object and the antennas can have arbitrary motion, although only the relative motion of the scatterers with respect to the antennas is important for the radar imaging methods considered here. For vehicle-borne terrain imaging radars, this motion is often measured by means of inertial navigation-based systems and supplemented by data-derived motion estimates as required. For ground-based space-object imaging radars, the motion is usually derived by fitting radar data to obtain precise models which describe the object's orbital and rotational motion.

The fundamental task of a radar imaging system is to estimate the reflectivity σ of each element of the object as a function of the spatial coordinate r_0. That is, the reflectivity function is to be approximated by an image function G(r_0) which is calculated from the returned radar signals. Because of the limitations of the radar data, the function G(r_0) will be a blurred representation of σ(r_0). This blurring is characterized by the "point target response" function h(r_0), which is the image function G(r_0) calculated from the signal returned from an isolated point scatterer.

To achieve good image quality, it is important that |h| have its maximum value at the r_0 corresponding to the location of the point scatterer and have as sharp a peak as possible with low sidelobes. In general, objects of interest contain many elemental scatterers and, under the assumption of linearity, the image G can be represented as a superposition of point target response functions.

For a transmitted signal s(t), the signal received from a point scatterer is

$s_r(t) = \sigma\, s\!\left(t - \frac{R_1 + R_2}{c}\right)$  (13)

where R_1 + R_2 is the time-varying two-way range to the object point, and σ is the reflectivity associated with the point. An image of the object can be achieved if R_1 + R_2 is a different function of time for each point on the object. In principle, the total received signal from all scattering elements of the object can be cross correlated with a set of reference functions of the form given by (13) to produce such an image, G(r_0).

Fig. 11. Rotating platform radar image and optical image of a Volkswagen. The radar image is a superposition of data obtained over a 360° rotation of the table.

In practice, various approximations and limiting assumptions are often made which have led to a number of different methods for processing the received radar data to form an image of the scene. For example, in Section I, we discussed the conventional range-Doppler approximation for a two-dimensional rotating scene (where the RLOS was perpendicular to the rotation axis) and showed that y_0 and x_0 were directly related to range and range-rate measurements made over sufficiently small time intervals, which leads to a relatively simple range and Doppler-frequency analysis type of signal processor.

In this section, we consider larger coherent processing time intervals in order to achieve fine resolution over large scenes, and therefore more general image formation methods are required. All of the image formation methods described are based on the same fundamental process of measuring range and changes in range to produce image resolution. Some are distinguished from one another by virtue of the different approximations which are made to minimize hardware complexity and/or maximize processing speed. Others are merely different mathematical formulations of the same fundamental technique, such as time-domain (spatial-domain) versus frequency-domain analysis. We have not attempted herein to provide a complete taxonomy of radar image formation techniques but will review four representative methods to serve as a background for the more detailed description of radar imaging techniques in Section IV.

Fig. 12. Three-dimensional radar imaging geometry showing the bisector vector. (Symbols with overbars correspond to boldface symbols in the text.)

(1) Pulse-by-Pulse Correlation Imaging. The cross-correlation image function G(r_0), calculated over a set of discrete pulses of radar data using (13) as the reference signal, can be written as a sum of single-pulse cross-correlation functions. The cross-correlation function for the pth pulse can be expressed very simply as a phase-corrected pulse-compressed radar return sampled at the bistatic range R(r_0, p) = (R_1 + R_2)/2 to the point r_0 in the object (see Fig. 12). To permit this simple calculation, the pulse-compression system should be configured to give a response from a point scatterer located at r_0 which has the constant phase [4πR(r_0, p)/λ + constant] across the main peak of the response. The additive constant can be ignored. From such a pulse-compression system, the response from a point scatterer at range R_s would have the form S[R] = A(R − R_s) exp[j4πR_s/λ], where A(R − R_s) is a real function with its peak at R = R_s. The usual practical implementation of pulse compression for a chirp waveform results in a weak quadratic dependence of phase on the sampling range R. We assume such effects to be negligible. For convenience, the pulse-compressed signal is calibrated so that a point scatterer's radar cross section (RCS) is given by the peak value of A².

Under the above circumstances, the cross-correlation function over the set of pulses {p} is given by

$G(\mathbf{r}_0) = \sum_{\{p\}} W(p)\, S[R(\mathbf{r}_0, p)]\, \exp[-j 4\pi R(\mathbf{r}_0, p)/\lambda].$  (14)

For each point r_0, the range is calculated from the known motion of the object, the transmitter, and the receiver at the time on target t(p) of the pth pulse. The symbol S[R(r_0, p)] denotes the pulse-compressed return sampled at the calculated range R(r_0, p). This calculated range also determines the phase correction in (14). The real weights W(p) can be used in various ways to optimize the image quality. They can be used to suppress cross-range sidelobes. If data from multiple target rotations are used, they can, in some cases, be selected to suppress cross-range ambiguous images. The weights are normalized (Σ_p W(p) = 1) so that in the image of a point scatterer, the peak value of |G|² is the scatterer's RCS.

Except for the effects of sidelobe suppression weights used in pulse compression and the possibly nonuniform weights W(p) in (14), this function G(r_0) optimizes the signal-to-noise ratio for detecting a scatterer at r_0. It also does well in separating scatterers from each other if the point target response function h(r_0) has low sidelobes and a single sharp peak within the extent of the object. Since the function G(r_0) is linear in the received signal, the effects of the scatterers in the target are linearly superposed in the complex function G. If signal saturation is avoided and the signal quantization step is small compared with the noise, the only nonlinear effects to confuse the image are those that occur physically in the scattering of the signal from the object, such as shadowing and multiple scattering.
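The sum (14) translates almost line for line into code. The sketch below simulates a single point scatterer on a rotating object using the idealized compressed response S[R] = A(R − R_s) exp[j4πR_s/λ] assumed above (with a sinc envelope chosen for illustration), then evaluates G at two candidate points. Every numeric value is an assumption, and this is a sketch of the principle rather than the authors' implementation:

```python
import numpy as np

wavelength = 0.03                # m (assumed)
r_a = 1e4                        # range to the rotation center, m (assumed)
omega = 0.05                     # rotation rate, rad/s (assumed known)
rho_r = 1.0                      # range resolution, m (assumed)
t = np.linspace(-1.5, 1.5, 301)  # pulse times over the imaging interval, s

def point_range(x, y, tp):
    """Range R(r0, p) to a point starting at (x, y) that rotates about the
    origin, for a radar far away on the -y axis."""
    xr = x * np.cos(omega * tp) - y * np.sin(omega * tp)
    yr = x * np.sin(omega * tp) + y * np.cos(omega * tp)
    return np.hypot(xr, r_a + yr)

Rs = point_range(3.0, 2.0, t)    # track of the (assumed) true scatterer

def compressed_return(R, Rs_p):
    """Idealized point response: real envelope peaked at R = Rs, constant phase."""
    return np.sinc((R - Rs_p) / rho_r) * np.exp(1j * 4 * np.pi * Rs_p / wavelength)

def G(x, y):
    """Pulse-by-pulse correlation image value at candidate (x, y), per (14)."""
    R = point_range(x, y, t)     # ranges computed from the known motion
    W = 1.0 / len(t)             # uniform weights, sum W(p) = 1
    return np.sum(W * compressed_return(R, Rs) *
                  np.exp(-1j * 4 * np.pi * R / wavelength))

print(abs(G(3.0, 2.0)), abs(G(3.0, 4.0)))  # large at the true point, small away
```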

For image processing, it is convenient to rewrite (14) in terms of the relative range

$D(p) = D(\mathbf{r}_0, p) = R(\mathbf{r}_0, t) - R(\mathbf{0}, t)$  (15)

where 0 is the origin of the displacement vector r_0. In Fig. 12, 0 is the origin of the x, y, z coordinate system, and R(0, t) = (|r_1| + |r_2|)/2. This coordinate system origin can be any convenient point in the object. In terms of D(p), (14) becomes

$G(\mathbf{r}_0) = \sum_{\{p\}} W(p)\, \tilde{S}[D(p)]\, \exp[-j 4\pi D(p)/\lambda]$  (16)

where

$\tilde{S}[D(p)] = S[R(\mathbf{r}_0, p)]\, \exp[-j 4\pi R(\mathbf{0}, p)/\lambda].$  (17)

The formulation of G(r_0) in terms of the relative

range separates out the phase corrections depending on the range to the origin of the coordinate system (17) from those depending on aspect (16). The term aspect denotes an orientation of the RLOS relative to the target. If the radar is bistatic, the aspect depends on the orientations of both lines of sight relative to the target, i.e., the orientation of the bisector vector r_b shown in Fig. 12. The remainder of this paragraph discusses only the monostatic case, but similar conclusions can be reached in the bistatic case. That the phase corrections in (16) depend only on aspect can be understood by noting that, since imaged object sizes are usually much smaller than radar ranges, the far-field approximation of electromagnetic scattering theory is valid. In this case, the relative range D(r_0) is given, to an excellent approximation, by the scalar product between the vector r_0 and the unit vector along the RLOS direction. Consequently, for a given point r_0, D(r_0) depends only on aspect.

Furthermore, it is well known that radar returns at one far-field range and at a given target aspect can be predicted from returns measured at other far-field ranges, at the same aspect, by making a phase correction for the range difference. The calibration of |S(p)|² to give the RCS includes the usual range-squared amplitude correction. Thus the returns S̃(p) obtained from calculating (17) would be the same regardless of the ranges at which the returns S(p) were obtained. Consequently, one can conclude that the properties of G(r_0) depend mainly on the target aspects sampled by the data and (to a lesser extent) on the weights W(p) used in calculating the image.

It can be shown that the formulation of the image function in (14) is equivalent to the backprojection processing method [54], a common tool in the field of CAT. The backprojection algorithm applied to a coherent imaging system forms an image via a coherent summation (for each resolvable image element) of samples of multiple functions representing the total reflectivity of the scene as projected onto the line of sight to the scene. Thus the backprojection algorithm is equivalent to the operation implied by (16), where S̃(p) exp[−j4πD(p)/λ] is the projected reflectivity of the scene. The phase adjustment is required to account for the propagation effects associated with measuring projected reflectivity from a remote location.

(2) Multiple-Subaperture Processing. Equation (14) can be used in principle to calculate well-focused images of scenes or objects of arbitrary dimensions, using arbitrarily long coherent data intervals. In many practical applications, however, the pulse-by-pulse correlation imaging method, which is computationally inefficient, can be reliably replaced by a more efficient method known either as subaperture image processing in spotlight SAR applications or as extended coherent processing in rotating space-object applications.

In this method, the sum over pulses is replaced by a coherent sum of conventional range-Doppler images calculated over smaller subintervals of the total coherent processing data. The size of these subintervals (subapertures) is chosen to be sufficiently small that no motion through resolution cells occurs for their duration. With subintervals of such size, the range-Doppler images can be calculated by FFT processing, which is at least one order of magnitude faster than pulse-by-pulse processing.

The subimages are subsequently aligned in range and range rate to account for the relative motion of scatterers occurring between separate subintervals. The extended image is obtained by coherently summing all aligned subimages.

A more detailed description of the structure of such an algorithm is presented in Section IVB, dealing with imaging of rotating space objects.
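A structural sketch of this subaperture summation follows. It is schematic, under stated assumptions, and not the ECP implementation described later: the data are assumed already motion compensated, and the subimage registration between subintervals is left to a caller-supplied function:

```python
import numpy as np

def extended_coherent_image(phase_history, n_sub, align):
    """phase_history: 2D complex array (pulses x range bins), motion compensated.
    n_sub: number of subapertures, each short enough that no scatterer migrates
    through a resolution cell within it. align(subimage, k): registers subimage
    k onto a common range/range-rate grid."""
    image = None
    for k, sub in enumerate(np.array_split(phase_history, n_sub, axis=0)):
        # Conventional range-Doppler subimage: FFT across pulses per range bin.
        sub_image = np.fft.fftshift(np.fft.fft(sub, axis=0), axes=0)
        sub_image = align(sub_image, k)
        image = sub_image if image is None else image + sub_image
    return image  # coherent sum of aligned subimages

# Usage with an identity alignment (valid only if nothing migrates at all):
data = np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, (128, 64)))
img = extended_coherent_image(data, n_sub=4, align=lambda s, k: s)
```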

(3) Multiple-Subpatch Processing. As was described previously, the migration of points through resolution cells can be avoided if one chooses sufficiently small coherent processing time intervals and/or if the object size is sufficiently small. The previously described multiple-subaperture method relies on a sequence of conventional range-Doppler processing operations over short time intervals, followed by a coherent summation to form the final image. Similarly, one can achieve fine resolution over scenes larger than those permitted by the inequalities (11) and (12) if the large scene is divided into an array of smaller subpatches. We then compensate for the motion between the radar and the center of each subpatch, and the situation reduces to the case of an array of smaller rotating scenes.

The division of the large scene into smaller scenes involves dividing the range extent of the target field into a number of subswaths and partitioning the total Doppler spectrum into a number of frequency sub-bands, followed by the usual Doppler-frequency analysis of each sub-band to form the final set of subimages. One particular implementation of this method has been called a two-stage FFT [56] or, more generally, the multiple-subpatch approach. In any case, by choosing the diameter D of the subpatches to be

$D \le 4\rho^2 / \lambda$  (18)

an image of the entire scene with resolution ρ can be achieved by a final mosaicking operation.

In practice, the multiple-subpatch method is most applicable to vehicle-borne radar imaging of large scenes. An example of how this method can be implemented for processing spotlight mode radar data is described in Section IV.
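In code, the bound (18) gives the largest subpatch that still permits simple range-Doppler processing at a desired resolution; all numbers below are illustrative assumptions:

```python
import math

wavelength = 0.03     # m (assumed)
rho = 1.0             # desired resolution, m (assumed)
scene_width = 5000.0  # m (assumed square scene)

D_max = 4 * rho ** 2 / wavelength                # (18): about 133 m here
n_patches = math.ceil(scene_width / D_max) ** 2  # subpatches to mosaic
print(D_max, n_patches)
```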

(4) Polar Format Processing. Another method [5] for dealing with the problem of motion through resolution cells involves interpreting the radar data in an appropriate three-dimensional spatial frequency space. The radar pulses are first converted to a range-frequency form (the Fourier transform of compressed range data) which corresponds to polar line segments in the three-dimensional frequency space of the target. Each segment is oriented according to the angular coordinates of the radar at the time of transmission. Depending on the relative motion of the radar and target during the time that a sequence of pulses is transmitted, a portion of the three-dimensional frequency space is collected (usually a curved surface). An image of the target can then be formed by taking a three-dimensional Fourier transform of the collected data.

The fundamental features of this method can be derived by observing that for each compressed range pulse u(t), the complex signal received from a target field is given by

$s_r(t) = \int \sigma(\mathbf{r}_0)\, u\!\left(t - \frac{R_1 + R_2}{c}\right) d\mathbf{r}_0$  (19)

where R_1 + R_2 is the two-way range to the differential scattering volume element dr_0 located at r_0, as shown in Fig. 12, and where σ(r_0) is the reflectivity density and, for convenience, includes two-way propagation effects and various system gains. The integration is carried out over the volume of the target.

If we take the Fourier transform of this range data,

$S_r(f) = \int s_r(t)\, \exp[-j 2\pi f t]\, dt$  (20)

we obtain

$S_r(f) = U(f) \int \sigma(\mathbf{r}_0)\, \exp\!\left[-j\,\frac{2\pi f}{c}(R_1 + R_2)\right] d\mathbf{r}_0$  (21)

where U(f) represents the non-negative frequency response in range. Furthermore, we have assumed that R_1 + R_2 does not change significantly during a range pulse.

The time-varying effects of the two-way range (r_1 + r_2) to the origin can be removed by multiplying the received signal with a reference function proportional to

$M_{\mathrm{ref}} = \exp\!\left[+j\,\frac{2\pi f}{c}(r_1 + r_2)\right].$  (22)


This represents the fundamental motion compensation step of the radar imaging system and, as is discussed later, must be performed with great precision to produce high-quality imagery. If the ranges to the transmitter and receiver (r_1 and r_2) are large compared with the size of the object, we can let

$R_1 = |\mathbf{r}_1 - \mathbf{r}_0| \approx r_1 - \mathbf{r}_0 \cdot \hat{\mathbf{r}}_1$  (23)

$R_2 = |\mathbf{r}_2 - \mathbf{r}_0| \approx r_2 - \mathbf{r}_0 \cdot \hat{\mathbf{r}}_2$  (24)

where r̂_1 and r̂_2 are unit vectors from the object origin toward the transmitter and receiver, and the resulting range-frequency data can then be expressed as

$S_r(f)\, \exp\!\left[+j\,\frac{2\pi f}{c}(r_1 + r_2)\right] = U(f) \int \sigma(\mathbf{r}_0)\, \exp\!\left(+j\,\frac{4\pi f}{c}\, \mathbf{r}_b \cdot \mathbf{r}_0\right) d\mathbf{r}_0$  (25)

where r_b is the transmitter/receiver bisector vector as indicated in Fig. 12 and is given by

$\mathbf{r}_b = \tfrac{1}{2}(\hat{\mathbf{r}}_1 + \hat{\mathbf{r}}_2).$  (26)

We have assumed that the antennas are moving relative to the object and that therefore r_b varies slowly from pulse to pulse.

An examination of (25) indicates that each radar pulse produces a polar line segment of the three-dimensional Fourier transform of the target reflectivity function σ(r_0) by proper interpretation of frequency space. That is, we can define a three-dimensional spatial frequency variable f as

$\mathbf{f} = \frac{2f}{c}\, \mathbf{r}_b.$  (27)

This implies that the radar data for a sequence of pulses can be represented in three-dimensional frequency space as

$S(\mathbf{f}) = H(\mathbf{f}) \int \sigma(\mathbf{r}_0)\, \exp[+j 2\pi\, \mathbf{f} \cdot \mathbf{r}_0]\, d\mathbf{r}_0$  (28)

where H(f) is the three-dimensional aperture function. The effective length of each polar line segment of the aperture is determined by the bandwidth of the transmitted signal U(f). As the radar observes the target from different aspects (ψ_b, κ_b), indicated in Fig. 12, f maps out a surface in three-dimensional space which constitutes the complete three-dimensional aperture function of the imaging system.
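The mapping (27) from transmitted frequency and pulse aspect to a sample location in spatial frequency space can be sketched directly. The monostatic, planar collection and all numbers below are illustrative assumptions; a practical polar-format processor would additionally interpolate these polar samples onto a rectangular grid before the inverse FFT:

```python
import numpy as np

# Spatial-frequency sample coordinates per (27), fvec = (2 f / c) r_b, for a
# hypothetical monostatic collection confined to a plane (r_b is then the
# unit line-of-sight direction for each pulse).

c = 3e8
f_center, B = 10e9, 1e9                  # center frequency and bandwidth (assumed)
freqs = np.linspace(f_center - B / 2, f_center + B / 2, 64)
angles = np.radians(np.linspace(-2.0, 2.0, 64))  # 4 deg aspect change (assumed)

fx = (2 * freqs[None, :] / c) * np.sin(angles[:, None])  # cycles/m
fy = (2 * freqs[None, :] / c) * np.cos(angles[:, None])

# The extents of this annular patch of frequency space bound the resolution.
print("range resolution ~", 1 / (fy.max() - fy.min()), "m")  # ~ c / 2B
print("cross-range res. ~", 1 / (fx.max() - fx.min()), "m")  # ~ lambda / 2 d_theta
```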

The bistatic path shown in Fig. 13(a) is determined by the pointing direction of the bisector vector r_b as the transmitting and receiving antennas move along their trajectories. For the monostatic spotlight mode case, the bistatic path reduces to the path of the vehicle carrying the spotlight radar, and r_b corresponds to the RLOS.

Fig. 13. Signal surface in frequency space corresponding to changes in aspect angle during radar data collection. (a) Object space. (b) Frequency space; the length of each polar line segment is determined by the transmitter bandwidth.

An image of the target, i.e., an estimate of σ(r_0), is achieved by carrying out an inverse Fourier transform of S(f). The image G resulting from this operation was indicated previously as being characterized by the point target response function h, which is the three-dimensional Fourier transform of H(f). Ideally, h should have a very narrow extent in all three dimensions, i.e., a three-dimensional delta function, which would imply that the aperture function H should be unity over the entire frequency space. This occurs only in the limit where an infinite bandwidth signal is transmitted and returns are collected over all aspect angles (0 ≤ ψ_b ≤ 2π, 0 ≤ κ_b ≤ π).

In practical cases, only a small portion of the frequency space is observed, as is depicted in Fig. 13(b), with the attendant limitation on the point target response in each dimension. For example, straight-line flight paths produce planar data collection surfaces, and the general three-dimensional processing problem reduces to a two-dimensional Fourier transformation with a resulting two-dimensional image of the object, i.e., no resolution in the direction normal to the collection plane. A wide variety of radar configurations [57] can be envisioned for observing other portions of the three-dimensional frequency space; e.g., a stationary two-dimensional array of mutually coherent wideband radars would generate samples in a volume, and a single moving continuous wave (CW) radar would sample the frequency space only along a curve.


It can be shown mathematically that when the object is small compared with the radar ranges (as assumed above) and when the data aspects are closely spaced so that good imaging is possible, the image function, defined as the inverse three-dimensional Fourier transform of S(f), is essentially equivalent to the cross-correlation function given by (14).

The independent derivation of this image function given in this subsection describes an alternative way to form the image. This derivation also shows the resolution properties of all these equivalent methods of image formation. It provides a useful context in which to deal with optimization of resolution by adjusting the weighting function H within the boundaries in the three-dimensional frequency space set by the available data. It also permits quick iterative calculations relating resolution to the available data aspects and to the radar bandwidth. For example, to obtain approximately equal image resolutions in three orthogonal directions in object space, one needs an approximately cubic boundary in frequency space outside of which the aperture function H vanishes. If a monostatic radar has a bandwidth that is 10 percent of the center frequency, the radial extent of this cube is a tenth of its distance from the origin. The other two dimensions of the cube must correspond approximately to a solid angle of aspects measuring 0.1 rad by 0.1 rad.
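A quick numeric check of this cube example, coded under the paper's stated assumptions (monostatic, 10 percent bandwidth, 0.1 rad by 0.1 rad of aspects); the 10-GHz center frequency is an added assumption:

```python
c = 3e8
f0 = 10e9        # center frequency, Hz (assumed)
B = 0.1 * f0     # 10 percent bandwidth
d_aspect = 0.1   # rad of aspect change in each cross direction

radial_extent = 2 * B / c                   # aperture depth along the RLOS, cycles/m
cross_extent = (2 * f0 / c) * d_aspect      # aperture width per cross direction
print(1 / radial_extent, 1 / cross_extent)  # ~0.15 m resolution in each direction
```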

As an illustration of the three-dimensional processing concept for the rotating object case (mathematically equivalent to a fixed object and moving antenna), let us consider an object consisting of three point scatterers, as indicated in Fig. 14(a). The resulting "three-dimensional image" for the three object points is indicated in Fig. 14(c) as three cone-like distributions whose points of concentration correspond to the locations of the object points (two on the x,y plane and one above). If we project the data stored on the conical surface onto the f_x, f_y plane in frequency space, as shown in Fig. 14(d), and follow with a two-dimensional Fourier transform, we obtain the image shown in Fig. 14(e). By the projection-slice theorem [54], this is equivalent to an x,y plane slice in three-dimensional image space.

Although a three-dimensional data collection and processing approach can be used to obtain images of three-dimensional objects free from degradations caused by motion through resolution cells, even in very general radar configurations, two-dimensional processing approaches are often desirable for practical implementations. This stems in part from processing speed considerations and the operational difficulty of obtaining video signal samples over a large volume of processor space.

Two-dimensional processing is optimum when the relative radar/object motion is such that the bistatic vector r_b remains in a plane and/or if the object points to be imaged lie on a plane. In the latter case, the three-dimensional data are projected onto the plane containing the object points selected for optimum focus, as shown in Fig. 14(d). Scattering centers of the object which are located out of the selected compensation plane, sometimes called the focused target plane (FTP), will be degraded in the final image. This degradation is expected from projection-slice considerations or by observing that the relative spacing of the three-dimensional fringe structure in frequency space associated with each object point is preserved after projection only for points located on the compensation plane.

Fig. 14. Illustration of the three-dimensional processing concept. (a) Point targets in object space. (b) Data surface in frequency space. (c) Three-dimensional image. (d) Projected data in frequency space. (e) x, y plane slice of image.

Fig. 15. (a) Target aspects sampled by the RLOS when both κ and ψ change a few degrees. Δκ = κ_2 − κ_1, Δψ = ψ_2 − ψ_1, Δθ = [(Δκ)² + (Δψ sin κ)²]^{1/2}. (b) Detail enlargement from Fig. 15(a) at aspects sampled on the surface of the unit sphere. Dots represent pulse aspects.

B. Properties of Three-Dimensional Radar Images

In this subsection, we discuss in more detail some of the important properties of radar images calculated using the methods described above. Specifically, we emphasize the dependence of the point spread function on the target aspects which are sampled by the radar pulses. Important properties include cross-range resolution and the spacing of cross-range ambiguous images. The results are applicable to monostatic radars or to bistatic radars with small bistatic angles. For small bistatic angles, the equivalent monostatic RLOS bisects the bistatic angle. This subsection also assumes the object to be small compared with the radar range.

(1) Dependence on Observed Target Aspects. As emphasized previously, the properties of an image depend mainly on the aspects sampled by the pulses used in calculating the images. When the radar samples a planar angle of target aspects, the image will necessarily be two dimensional. When the radar densely samples a solid angle of target aspects, the image will be three dimensional. The aspects sampled depend on the rotational motion (if any) of the object as well as on the orientation of the RLOS and its variation with time. They also depend on the subsets of radar pulses chosen for imaging.

For a space object, it is generally convenient to deal with object rotations and RLOS rotations relative to the distant stars, i.e., relative to "inertial space." Artificial satellites as well as natural objects in the solar system generally rotate with a constant angular velocity vector in inertial space. For objects in the Earth's atmosphere (including stationary scenes, objects on rotating platforms, ground vehicles, boats, and aircraft), rotational motions are generally simplest to specify relative to the Earth. Thus RLOS orientations as well as object orientations are described in a coordinate system fixed in the Earth. The following discussion is valid whether the object's rotational motion is specified in inertial space or in a background coordinate system that rotates with the Earth, as long as the RLOS directions are specified in the same way.

The properties of the point spread function h are conveniently calculated in a coordinate system that rotates with the object, such as the (x, y, z) system of Fig. 15(a). The z axis is aligned with the object's angular velocity vector. (For a fixed scene, the angular velocity is zero and the z-axis direction is arbitrary.) Define the unit sphere to be fixed with respect to the object so it shares the object's rotational motion. If the RLOS is directed along the radius of the unit sphere, the azimuthal angle ψ and the polar angle κ, the angle between the RLOS and the angular velocity vector, called the aspect deviation angle, will define the aspects sampled by the pulses. Also, aspects can be represented graphically by drawing the points on the unit sphere where the RLOS punctures the spherical surface.

The simplest description of the resolution and ambiguity properties of h(r₀) occurs in a rectangular coordinate system aligned with the aspect sampling geometry, such as the (x', y', z') coordinate system of Fig. 15(b) or Fig. 16(b). This coordinate system also rotates with the target. The y' axis is chosen to point in the RLOS direction at the center time of the imaging interval. The x' axis is oriented along the direction of increasing values of the target aspect angle θ, the angle swept out by the RLOS in the target coordinate system, at the image center time. The rate of change of θ, θ̇, equals the magnitude of the RLOS angular velocity vector relative to the target

θ̇ = [κ̇² + (φ̇ sin κ)²]^1/2   (29)

where κ̇ and φ̇ are the rates of change of the angles κ and φ, respectively.

The (x', y') plane is thus tangent to the surface swept out in the target coordinate system by the RLOS. The second cross-range direction is chosen perpendicular to this plane so as to complete a right-hand coordinate system.

The RLOS directions are approximately coplanar with respect to the object if the changes in the angles φ and κ, Δφ ≈ φ̇ΔT and Δκ ≈ κ̇ΔT, respectively, are small. When the RLOS directions are coplanar with respect to the object, the images will necessarily be two dimensional in nature. The radar returns will not be affected by the z' coordinate of any scatterer, so the function G(r₀) cannot depend on z'. For the two-dimensional case, then, the cross-range axis x' will be oriented along the direction of increasing values of the angle θ, such that during the imaging interval ΔT, from (29),

Δθ = [(Δκ)² + (Δφ sin κ)²]^1/2.   (30)

This is the angle that determines the cross-range (x') resolution of the two-dimensional image.

If the object is rotating rapidly while the RLOS rotates slowly, data can become available over many rotation periods with φ̇ >> κ̇. This can occur with a rapidly rotating object either on the ground or in deep space. Under these circumstances, a solid angle of aspects can be densely sampled by the data, as illustrated in Fig. 16. This can permit three-dimensional imaging if the aspects are sampled densely enough. In such cases, θ̇ ≈ φ̇ sin κ and Δθ ≈ Δφ sin κ. The x' cross-range axis as defined above lies along the direction of increasing φ, while the z' cross-range axis is in the direction of decreasing κ, as shown in Figs. 16(a) and (b). These figures are drawn with a small positive value of κ̇. Fig. 16(b) is an enlargement of a portion of Fig. 16(a).

Fig. 16. (a) Target aspects sampled for a three-dimensional image (φ̇ >> κ̇). (b) Detail enlargement from Fig. 16(a) of aspects sampled on unit sphere. Dots represent pulse aspects.

In Fig. 16(b), target rotation causes the RLOS to rapidly sweep in the φ direction. Successive pulses during such a sweep sample the aspects shown by a row of dots. Pulses that do not fall within the Δφ interval are not used in the image. The slow change in κ due to RLOS rotation causes the sampled aspects to be displaced downward to the next row of dots on the next target rotation. Over many tens of rotations, this process densely samples a solid angle of aspects. The image will be three dimensional, with resolution in the x' direction determined by the aspect change Δθ = Δφ sin κ. Resolution in the z' direction is determined by the aspect change Δκ.

If κ̇ is too small to give a significant change Δκ over the available data, an image using data from the interval Δφ over many target rotations will be two dimensional since the RLOS are approximately coplanar. This class of images is known as "stroboscopic" and is discussed in Section IVB.

(2) Cross-Range Ambiguous Interval and Cross-Range Resolution. When an approximately coplanar set of aspects is sampled, as illustrated in Fig. 15, the cross-range ambiguous images are separated in the x' direction by amb(x') = λ/(2δθ), where δθ is the change in aspect between pulses. To calculate δθ, divide θ̇ given by (29) by the radar's PRF. If the radar's PRF is too low, cross-range ambiguous images will overlap the true image. The resolution in the x' direction is proportional to λ/(2Δθ), where Δθ is given by (30). Since the image function does not depend on z', one can say that the resolution in the z' direction is "infinite."

When a solid angle of aspects is densely sampled as in Fig. 16(b), the resolutions in the two cross-range directions x' and z' depend on the extent of aspect change, Δθ and Δκ, respectively. In addition, the discrete sampling of aspects with steps δθ (per pulse) and δκ (per rotation) causes cross-range ambiguous images in the x' and z' directions, respectively. If either δθ or δκ is too large (because of the values of PRF, φ̇, and κ̇), then the cross-range ambiguous images may overlap the true image of the target, making the image difficult or impossible to interpret.

The cross-range ambiguous interval in the x' direction, amb(x'), and that in the z' direction, amb(z'), are given by

amb(x') = λ/(2δθ)   (31)

and

amb(z') = λ/(2δκ)   (32)

respectively.

If the values of amb(x') and amb(z') are larger than the corresponding maximum cross-range extents of the target, the images will be unambiguous. For calculating amb(x'), one can use

δθ ≈ φ̇ sin κ/PRF = 2π sin κ/(T·PRF)   (33)

where T is the target's rotation period and PRF is the radar pulse repetition frequency. Similarly, to get amb(z'), one can use

δκ = Tκ̇ = 2πκ̇/φ̇.   (34)

When producing three-dimensional images, the impulse response (IPR) widths (sometimes loosely referred to as resolution) in the three dimensions follow from the principles given previously. That is, the range resolution is determined by the transmitted radio frequency (RF) bandwidth (BW),

ρ(y') = k c/(2 BW)   (35)

and the two cross-range dimensions have resolution given by

ρ(x') = k λ/(2Δθ)   (36)

and

ρ(z') = k λ/(2Δκ)   (37)

respectively. Here k is a parameter which encompasses both the definition of resolution in terms of IPR width, i.e., IPR width at 3 dB down versus 6 dB down, and the effect of IPR mainlobe broadening due to the aperture weighting function selected for IPR sidelobe control.
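To make (31)-(37) concrete, the short Python sketch below evaluates the ambiguity intervals and IPR widths for a hypothetical rotating target. Every numerical value (wavelength, rotation period, aspect deviation angle, PRF, RLOS rate, and total aspect changes) is an assumption chosen for illustration, not a parameter taken from the text.

```python
import numpy as np

lam     = 0.03                    # wavelength, m (X band, assumed)
T       = 10.0                    # target rotation period, s (assumed)
kappa   = np.deg2rad(20.0)        # aspect deviation angle (assumed)
prf     = 1000.0                  # pulses per second (assumed)
d_theta = np.deg2rad(3.0)         # total aspect change toward x' (assumed)
d_kappa = np.deg2rad(3.0)         # total aspect change toward z' (assumed)
k       = 1.0                     # IPR-width/weighting parameter of (35)-(37)

# Per-pulse and per-rotation aspect steps, from (33) and (34).
phi_dot   = 2.0 * np.pi / T
delta_th  = phi_dot * np.sin(kappa) / prf   # delta-theta per pulse, (33)
kappa_dot = 1.0e-4                          # RLOS rate, rad/s (assumed)
delta_ka  = kappa_dot * T                   # delta-kappa per rotation, (34)

# Cross-range ambiguous intervals, (31) and (32).
amb_x = lam / (2.0 * delta_th)
amb_z = lam / (2.0 * delta_ka)

# Cross-range IPR widths, (36) and (37).
rho_x = k * lam / (2.0 * d_theta)
rho_z = k * lam / (2.0 * d_kappa)

print(f"amb(x') = {amb_x:6.2f} m   rho(x') = {rho_x * 100:5.1f} cm")
print(f"amb(z') = {amb_z:6.2f} m   rho(z') = {rho_z * 100:5.1f} cm")
# The target is unambiguously imaged only if its x' and z' extents are
# smaller than amb(x') and amb(z'), respectively.
```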

C. Motion Measurement Requirement

These coherent radar imaging techniques all require precise knowledge of the time-varying position of the radar relative to the target scene in order to form good quality images. Ideally, the relative range to each image grid point must be known to some fraction of a wavelength over the integration period being used to obtain fine cross-range resolution. Since we are correlating range-derived phase information over some coherent aperture, any error in knowledge of relative position RE will give rise to a phase error given by

φE = 4πRE/λ   (38)

which will cause perturbations in the cross-range IPR of the radar in a manner analogous to antenna pattern perturbations caused by mechanically or electrically induced phase errors across a real antenna aperture.

The effects upon image quality of such phase errors depend upon the form of the errors, as is determined by standard antenna theory. For example, motion-measurement errors which give rise to phase errors which vary linearly across the aperture cause shifting of the position of the image response. Errors which vary quadratically across the aperture cause mainlobe broadening. Higher order errors cause perturbations further out on the impulse response sidelobes. For example, errors which vary sinusoidally cause discrete paired-echo sidelobes some distance from the mainlobe. Wideband random phase errors cause noiselike sidelobes distributed across the entire scene. Energy scattered into the sidelobes by any of these errors comes at the expense of mainlobe energy, and hence these errors all cause apparent loss of target RCS. Further, these effects can be scene position invariant, or position variant, depending upon whether the motion errors are applicable to the entire scene or target or are dependent upon the individual resolution cell under consideration.
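These behaviors can be reproduced with a few lines of code. The following sketch, under assumed error amplitudes, maps a residual range error across a synthetic aperture into phase via (38) and compares the resulting IPR peak (the FFT of a uniformly weighted aperture) against the error-free case.

```python
import numpy as np

N   = 256                            # pulses across the aperture (assumed)
u   = np.linspace(-0.5, 0.5, N)      # normalized aperture coordinate
lam = 0.03                           # wavelength, m (assumed)

def ipr_mag(range_err_m):
    """Magnitude pattern of the IPR for a given range-error profile (m)."""
    phase = 4 * np.pi * range_err_m / lam        # equation (38)
    aperture = np.exp(1j * phase)                # uniform amplitude weighting
    return np.abs(np.fft.fftshift(np.fft.fft(aperture, 8 * N)))

ref = ipr_mag(np.zeros(N)).max()                 # error-free mainlobe peak
cases = {
    "linear (shifts image)":      0.002 * u,
    "quadratic (broadens lobe)":  0.005 * (2 * u) ** 2,
    "sinusoidal (paired echoes)": 0.001 * np.sin(2 * np.pi * 8 * u),
}
for name, err in cases.items():
    loss_db = 20 * np.log10(ipr_mag(err).max() / ref)
    print(f"{name:28s} mainlobe peak change {loss_db:6.2f} dB")
# Energy lost from the mainlobe reappears as sidelobes, the apparent
# RCS loss described in the text.
```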


It is not possible to set a universal threshold on motion determination accuracy. Such a limit depends upon the quality required of the image, as well as upon the form, or frequency content, of the phase error function. In some cases, the effects of low frequency errors, which manifest themselves in the relatively high signal-to-noise mainlobe, can be extracted from the image data and used to derive a correction to the collected data. Often several wavelengths of quadratic error can be corrected in this manner. On the other hand, higher frequency errors are not only detrimental for a given amplitude but are also more difficult to measure from the image data. Thus, high frequency errors, and hence the position measurement errors which cause them, are often restricted to be less than some small fraction of a wavelength.

In the case of airborne systems observing stationary objects on the ground, the relative motion must be measured onboard the aircraft using some type of motion-sensing equipment such as an inertial measurement unit (IMU), perhaps augmented by ground-based aids to navigation such as beacons. In the case of Earth-fixed systems observing space objects, the relative motion is determined by appropriate modeling and tracking of the object's orbit, along with using radar-derived data regarding rotational motion of the object. Much of the technical challenge in implementing coherent imaging radars is in accomplishing these accurate determinations of relative position, and substantial effort has been directed toward this problem. An adequate treatment of these techniques is beyond the scope of this paper.

IV. RADAR IMAGING TECHNIQUES

The previous section described the fundamental processes required to form images from radar signals using knowledge of target and sensor vehicle motion. Various specific implementations of these principles vary significantly in detail depending upon the application, even though the underlying fundamentals are the same. This section reviews various generic implementations in order to highlight similarities between applications. Where possible, specific examples are provided.

Implementations involving imaging of fixed targets or scenes from moving sensor-bearing vehicles are considered first. Conventional wide-area stripmap mode SAR and the spotlight mode SAR are both described. The second part of the section provides a look at implementations which utilize the same principles in providing multidimensional images of moving or rotating objects from Earth-fixed coherent radar sensors.

A. Vehicle-Borne Imaging of Fixed Objects

Radar systems designed to provide images of the Earth's surface are generally airborne or spaceborne sensors. The motion of the sensor-bearing vehicle provides the relative motion between sensor and target required to perform imaging.

There are two generic types of fixed-target imaging systems. The conventional stripmap mode SAR provides for wide-area coverage by producing imagery of a strip of terrain illuminated by an antenna whose boresight angle is nominally fixed with respect to the vehicle velocity vector. For such a system, vehicle travel over time, in conjunction with antenna ground-range coverage, determines total image coverage. Cross-range resolution is determined by the effective scene rotation during illumination as determined by the antenna azimuth beamwidth. The alternative approach is to decouple the antenna boresight angle from the vehicle velocity vector in order to provide longer illumination dwell on the area of interest. This approach provides for finer cross-range resolution at the expense of total image coverage. This latter approach is commonly referred to as spotlight mode SAR.

(1) Conventional Stripmap Mode SAR. The fundamentals of conventional stripmap mode synthetic aperture radar have been extensively documented in the available literature [44, 59, 60, 61]. We provide a quick review here in order to note its relationship to other forms of range-Doppler imaging.

The data acquisition geometry associated with stripmap SAR is depicted in Fig. 17. In such a system,


Fig. 17. Schematic representation of stripmap mode imaging radar.

range resolution is achieved through accurate time-delay measurement obtained by transmitting dispersed pulses and applying pulse-compression techniques to the returned pulses. As indicated previously, azimuth or along-track resolution is obtained by recording the Doppler frequency (range-rate) as scattering elements migrate through the antenna beam. Knowledge of the Doppler frequency versus time relationship for a scatterer at a known range, which is computed based on measurements or a priori knowledge of vehicle motion, allows one to precisely locate the scatterer in a manner analogous to the pulse compression applied to the range direction.

Stripmap SAR Data Acquisition. The form of a typical stripmap mode SAR system is shown in Fig. 18. A coherent waveform generator (WFG) provides a wideband signal for periodic transmission, at the pulse repetition frequency (PRF), through a "fixed" antenna in order to illuminate the terrain strip of interest. The transmitted signal has the form

s(t) = a(t) exp{j2π[f₀t + φ(t)]}   (39)

where f₀ is the RF carrier frequency, a(t) is the amplitude weighting of the pulse, and φ(t) is the phase modulation used to obtain resolution in range.
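As a minimal illustration of (39), the sketch below generates one such pulse at complex baseband (carrier f₀ removed) using a linear FM phase modulation, φ(t) = γt²/2 with γ the chirp rate. The sample rate, pulse length, and bandwidth are assumed values chosen for the example.

```python
import numpy as np

fs, tau, bw = 200e6, 10e-6, 100e6          # sample rate, pulse length, BW (assumed)
gamma = bw / tau                           # chirp rate, Hz/s
t = np.arange(-tau / 2, tau / 2, 1 / fs)

a   = np.ones_like(t)                      # a(t): uniform amplitude weighting
phi = 0.5 * gamma * t ** 2                 # phi(t) in cycles (linear FM)
s   = a * np.exp(1j * 2 * np.pi * phi)     # s(t) of (39) with f0 removed

# Range resolution from (35) with k = 1 would be c/(2 BW) = 1.5 m here.
print("time-bandwidth product:", tau * bw)  # 1000 -> ~30 dB compression gain
```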

During the imaging time, the antenna pointing direction must be adjusted slightly to compensate for angular excursions made by the sensor platform. Steering commands are usually derived from information obtained from the aircraft inertial navigation system (INS) and from real-time analysis of the Doppler spectrum of the received signals. Unlike a real aperture side-looking radar system, antenna pointing for an SAR does not affect output image geometry. Rather, pointing impacts the SNR of the image by virtue of achieving adequate signal power from the image area during coherent integration.

Returned signals from the terrain strip are received via a coherent receiver and are frequency converted to baseband for analog-to-digital (A/D) conversion in preparation for digital processing. After baseband conversion, the signal received from a single point scatterer at along-track position x₀ and cross-track range (broadside) r₀ is given by

S(x, y, x₀, r₀) = α exp{j2π[(2f₀/c)R(x − x₀, r₀) + φ((2/c)(y − R(x − x₀, r₀)))]}   (40)

where the complex-valued weight α is determined by transmitted pulse weighting, the antenna pattern, attenuation with distance, propagation phase effects, and the scatterer's complex radar reflectivity, and R(x − x₀, r₀) is the one-way range to the scatterer at (x₀, r₀) for along-track position x. One-dimensional samples of this function are obtained at along-track positions x = nΔx = nv/PRF, where v is the platform velocity and n represents pulse number. The signal in the fast-time dimension is represented in cross-track spatial coordinates y, where y = ct/2. The arguments x₀ and r₀ in the form of S reflect the scatterer-position-variant nature of S.

As part of the process of baseband conversion, compensation for turbulence-induced antenna phase center motion away from the desired straight-line flight path must be applied. Such compensation takes the form of phase shifts, and in some cases time shifts, of the received signal. This is usually the case for airborne systems flying in a turbulent atmosphere, rather than for spaceborne systems such as SEASAT and SIR-A whose motion is generally accurately predictable using ephemeris information and spacecraft models. It is also possible to record the measured motion information and to take it into account as part of the correlation processing operation. However, the latter approach can prevent the use of more efficient means of implementing the image formation processing step.

Fig. 18. Typical stripmap mode SAR system.

In general, the motion compensation process adjusts the phase and time delay of the signals to remove the effects of aircraft displacement on a pulse-by-pulse basis. In cases where the depression angle change over the image swath is sufficiently small, a single correction applied to the returns from all ranges will be adequate. For wide-swath-width systems, range-dependent correction schemes are required to apply corrections which are dependent upon the depression angle to the range of interest.

To correct for variations in along-track position, the system PRF can be slaved to vehicle velocity. Alternatively, the knowledge of along-track motion can accompany the signal data and be accounted for as part of the image-formation process.

Following conversion to baseband, and assuming that digital processing techniques are to be used in forming the image, the received signals are converted to quantized discrete sampled data. For cases involving significantly less than unity duty cycle, a PRF buffer serves to spread the digital samples over the entire interpulse period in order to minimize the peak data rate.

An azimuth presummer is then usually employed to low-pass-filter and downsample the data in the azimuth dimension to the minimum Doppler bandwidth required to support the desired along-track resolution. This step is taken to minimize the amount of data to be digitally processed. The original azimuth sample rate (the PRF) must be high enough to unambiguously sample the Doppler spectrum associated with the antenna beamwidth. This beamwidth is often greater than the minimum required to achieve the desired azimuth resolution due to antenna size constraints associated with the sensor platform. Also, such excess beamwidth is often used to provide noncoherent averaging in order to reduce the effects of coherent microwave speckle in the final image. The usual presummer implementation consists of multiple overlapping, recursive digital filters. If the system PRF has not been slaved to along-track velocity, then along-track motion compensation can be accomplished in an equivalent manner by computing presummer outputs at equally spaced along-track positions.
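A presummer can be sketched as a unity-gain low-pass filter along slow time followed by decimation. The example below applies a simple symmetric FIR filter to simulated complex video; the array shapes, presum ratio, and filter taps are illustrative assumptions rather than any particular system's design (practical presummers, as noted above, often use overlapping recursive filters instead).

```python
import numpy as np

rng = np.random.default_rng(0)
n_pulses, n_range = 4096, 512
data = (rng.standard_normal((n_pulses, n_range))
        + 1j * rng.standard_normal((n_pulses, n_range)))  # raw video (simulated)

M = 4                                   # presum (downsample) ratio, assumed
taps = np.hanning(2 * M + 1)
taps /= taps.sum()                      # unity-gain low-pass filter

# Filter each range bin along slow time, then keep every Mth output pulse.
filtered = np.stack(
    [np.convolve(data[:, k], taps, mode="same") for k in range(n_range)],
    axis=1)
presummed = filtered[::M, :]
print(data.shape, "->", presummed.shape)   # (4096, 512) -> (1024, 512)
```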

Stripmap SAR Image Formation. The operation required of a digital stripmap SAR processor can be expressed as

O(ndx, mdy) = Σᵢ Σⱼ s(iΔx, jΔy) w(iΔx − ndx, jΔy − mdy) S*(iΔx − ndx, jΔy − mdy, ndx, mdy)   (41)

representing the two-dimensional correlation of the two-dimensional sampled signal s with a weighted complex conjugate of the sampled response of the system to an isolated point scatterer, as given by (40). Here, O(ndx, mdy) represents the complex-valued output image sampled at along-track positions ndx and cross-track positions (range bins) mdy. The image can exhibit different sampling frequencies than the prefiltered sampled signals, as indicated by the difference between Δx and dx, and Δy and dy. Also, w(x, y) is the weighting function applied to control the sidelobes of the system point-target response. The extent of summation over the range direction for each output sample is determined by the time duration of the transmitted pulse. The extent of summation in along track is determined by the illumination interval, which in general is a function of range as determined by the antenna cross-range beamwidth.

In simpler notation, this process can be denoted by

O(n, m) = Σᵢ Σⱼ s(i, j) w(i − n, j − m) S*(i − n, j − m, n, m).   (42)

The form of the reference function S denotes the range dependence of the system reference function. Theoretically, a different S, as indicated by the fourth argument m, must be used when correlating each range bin of interest. In practice, however, a single reference function will suffice over a considerable number of range bins.

In most systems, the signals are range-compressed prior to the azimuth correlation process. For example, in the STAR system described in Section II, the received signals are pulse-compressed prior to A/D conversion using a surface acoustic wave (SAW) device. In this case, the final azimuth correlation process is given by

O(n, m) = Σᵢ Σⱼ s'(i, j) w(i − n) S'*(i − n, j − m, n, m)   (43)

where s'(i, j) is the range-compressed signal, w(n) is the weighting applied in azimuth, and S' is the range-compressed system reference function, which in general has a sin x/x amplitude variation in the range dimension [62]. The extent of the range summation (in j) in (43) is equivalent to the amount of range migration of a point scatterer during the coherent integration time. For a system with the antenna boresight pointed broadside, this extent is generally significantly less than that implied by (42) for the uncompressed pulse. Thus range compression first results in significant computational savings. For a system with significant squint of the antenna away from broadside, this is not the case and other procedures must be applied. Methods developed for the spotlight case may be adapted for this purpose, as is described in a later section.

The similarity of (43) to (14), describing the general extended correlation processing for rotating objects, is apparent. The range-dimension correlation of (43), the summation over j, may be viewed as a finite impulse response interpolation process required to sample the range-compressed pulse at the precise range R(r₀, p) for (14).
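The following sketch evaluates (43) directly for a single output sample, with the range summation over j collapsed to a single range bin (i.e., assuming negligible range migration over the subaperture). The simulated data, aperture length, weighting, and the quadratic-phase stand-in for the reference function S' are all assumptions made for illustration, not the reference function of any actual system.

```python
import numpy as np

n_pulses, n_range = 512, 64
rng = np.random.default_rng(1)
s_rc = (rng.standard_normal((n_pulses, n_range))
        + 1j * rng.standard_normal((n_pulses, n_range)))  # range-compressed data

L = 128                                  # synthetic aperture length in pulses
i = np.arange(-L // 2, L // 2)           # relative pulse index within aperture
w = np.hamming(L)                        # azimuth weighting w of (43)

def reference(i, m):
    # Hypothetical quadratic azimuth phase history for range bin m; a
    # stand-in for the true range-dependent reference function S'.
    fr = 1.0e-4 * (1 + m / n_range)      # assumed Doppler-rate term
    return np.exp(1j * np.pi * fr * i ** 2)

def azimuth_correlate(n, m):
    """One image sample O(n, m) per (43), single-range-bin approximation."""
    idx = n + i
    valid = (idx >= 0) & (idx < n_pulses)
    return np.sum(s_rc[idx[valid], m] * w[valid]
                  * np.conj(reference(i[valid], m)))

print(abs(azimuth_correlate(256, 32)))
```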

For broadside systems where the range migration during the integration time (so-called range walk) is less than on the order of 1/2 of a range resolution cell, the image formation process simplifies further. This condition is achieved if

ρₐ²ρᵣ ≥ λ²R/16   (44)

where ρₐ and ρᵣ are the azimuth and range resolutions, respectively, and R is the operating range [63]. In such a case, the system reference function becomes separable in range and azimuth and the image formation process becomes a sequence of two one-dimensional processing steps [64].
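A quick numeric check of (44) for an assumed medium-resolution airborne geometry:

```python
# Check the separability condition (44); all values are assumptions.
lam, R = 0.03, 30e3            # wavelength (m) and operating range (m)
rho_a, rho_r = 3.0, 3.0        # azimuth and range resolutions (m)

lhs = rho_a ** 2 * rho_r       # 27.0
rhs = lam ** 2 * R / 16.0      # ~1.69
print("separable two-pass processing OK:", lhs >= rhs)
# A 3 m x 3 m X-band system at 30 km comfortably satisfies (44), so the
# two one-dimensional processing steps may be used.
```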

(2) Spotlight Mode SAR. Spotlight SAR [6-10, 54, 56, 65] has as its objective the production of imagery exhibiting resolution finer than that associated with the limits imposed by a fixed antenna, or the production of imagery with a great deal of angular diversity with which to understand the directional characteristics of the scene reflectivity of interest. Benefits which may accrue through use of the spotlight mode come at the expense of area coverage, since longer dwell times are required.

Spotlight Data Acquisition. The collection geometry for the spotlight mode SAR is shown in Fig. 19. As the vehicle carrying the SAR sensor moves past the area of interest, the antenna boresight is continually realigned so as to point at the center of the scene. The antenna beamwidth must be large enough to adequately illuminate the desired area to be imaged, and the duration of the illumination must be long enough to obtain sufficient effective rotation of the scene to obtain the desired cross-range resolution, as given by Δθ = λ/(2ρₐ). The scene may be three dimensional in nature and the sensor may not precisely follow a straight-line path. A motion-sensing system must be used to determine the required antenna pointing angles and to provide knowledge of relative motion between the vehicle and the scene, knowledge which must be used during the image formation processing step.

Fig. 20 shows a possible configuration for a spotlight mode SAR system. The diagram assumes the use of a linear frequency modulation (FM) waveform for use in obtaining fine range resolution, although such an assumption is not necessary in order to perform spotlight imaging. As before, an inertial measurement system can provide pointing angles for the antenna, although for the spotlight case, the illumination follows a fixed point on the ground rather than following a strip of terrain parallel to the flight track as was the case for stripmap.

The first step in preparing for range-Doppler imaging is to remove the effects of the gross range changes to scene center on a pulse-by-pulse basis over the coherent illumination time. In the system of Fig. 20, this is accomplished by multiplying the returned signals with a replica of the transmitted signal, delayed by precisely the round-trip delay to scene center. This delay is determined by real-time computation of the range to the scene center, rₐ, using INS-supplied information.

Fig. 19. Spotlight mode SAR collection geometry. (Symbol with overbar corresponds to boldface symbol in text.)

Fig. 20. Simplistic spotlight SAR system with polar-format processing.

The frequency-versus-time characteristics of the signals for a single radar pulse transmission are shown in Fig. 21. The figure depicts the generation and transmission of the linear FM waveform to begin at time zero with chirp rate γ. The total set of signals returned from the desired area, beginning with the near-edge return and ending with the far-edge return, are shown occurring with appropriate delay associated with round-trip propagation. Mixing with a replica of the original transmission delayed by the round-trip time to scene center produces a constant frequency signal for each return from a point scatterer. The frequency of this signal is proportional to the range of the scatterer.

The entire set of constant-frequency signals associated with the scene to be imaged is shown centered about zero frequency in Fig. 21. These video signals would thus be encoded as complex-valued (I and Q) data. The figure depicts a direct conversion of RF to I and Q data. With practical considerations given to filtering of signals from terrain which is illuminated but not desired in the final image, there might be several intermediate-frequency processes required to produce the desired result.

The total set of video signals has bandwidth BW, related to total range swath width (SW) by the scale factor 2γ/c. The duration of the signals is now proportional to the original sweep time and hence to the bandwidth to be used to obtain range resolution. Note that the signals from all ranges do not completely overlap in time, although chirp rates and pulse lengths may be chosen to minimize this effect. In such cases, only the central overlapped region is A/D converted and recorded to avoid the inefficiencies associated with storing and processing of digital data whose time-bandwidth product is not wholly occupied. The shaded area in Fig. 21 indicates the time-bandwidth product of the signal digitized and recorded over time period T. The effective RF bandwidth which determines range resolution, BWrf, is shown to be less than the full transmitted bandwidth in this case.
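The deramp operation of Figs. 20 and 21 can be simulated compactly: mixing the delayed chirp return against the scene-center reference leaves a tone whose frequency magnitude is 2γΔR/c, i.e., proportional to the scatterer's range relative to scene center. All parameters in this sketch are assumed values.

```python
import numpy as np

c, fs, tau, bw = 3e8, 50e6, 100e-6, 500e6   # assumed system parameters
gamma = bw / tau                            # chirp rate, Hz/s
t = np.arange(0, tau, 1 / fs)

dR = 150.0                                  # range relative to scene center, m
dt = 2 * dR / c                             # differential round-trip delay, s

chirp = lambda tt: np.exp(1j * np.pi * gamma * tt ** 2)
video = chirp(t - dt) * np.conj(chirp(t))   # return mixed with the reference

# The beat frequency should be -gamma * dt = -2 * gamma * dR / c.
f = np.fft.fftfreq(t.size, 1 / fs)
f_meas = f[np.argmax(np.abs(np.fft.fft(video)))]
print(f"expected {-gamma * dt / 1e6:.3f} MHz, measured {f_meas / 1e6:.3f} MHz")
```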

Fig. 21. Spotlight range tracking to produce video.

Use of the linear FM waveform in this deramping scheme on a pulse-by-pulse basis has resulted in signals with frequency and starting phase determined by the relative range of a scatterer to scene center, thus establishing the conditions required to perform frequency-domain range-Doppler imaging. In instances where other transmitted waveforms are desired, it is still possible to achieve this condition by first pulse-compressing the received signals to achieve fine range resolution (while retaining the phase information) and then Fourier-transforming the signal such that point scatterers give rise to signals with frequency and starting phase proportional to relative range to scene center.

Spotlight Image Formation. As described in Section III, there are at least four fundamental ways of accomplishing the image-formation process for such signals: (1) subaperture linear range-Doppler processing; (2) multiple-subpatch linear range-Doppler processing; (3) whole-scene polar-format processing; and (4) backprojection or general correlation processing. The relative advantages and disadvantages of these approaches are dependent upon the particular system parameters at hand and would have to be determined on an ad hoc basis.

The subaperture linear range-Doppler processing approach is analogous to the extended correlation processing (ECP) algorithm described in Section IVB for efficient processing of extended correlation data. The two-dimensional signals are arrayed in a simple linear fashion and multiple azimuth subapertures are two-dimensionally Fourier-transformed to form complex-valued, coarse azimuth-resolution images. The subapertures are chosen to be small enough such that there is insignificant range-cell migration during the subaperture interval. Each of the images formed within subapertures is then compensated in phase and spatial rotation to account for the scene rotation which occurs between subapertures, upsampled in azimuth to accommodate the eventual finer azimuth resolution, and coherently summed and detected to form the final image. Because of the analogy to the ECP algorithm, discussion of the approach is deferred to Section IVB.

The backprojection processing method is common to the field of computer-automated tomography (CAT). Various analogies have been drawn between CAT processing and spotlight SAR processing [54]. As was mentioned in Section III, the backprojection processing approach is essentially the same as the pulse-by-pulse correlation approach as it is applied to the imaging of rotating objects and thus will not be discussed in further detail here.

The multiple-subpatch processing method and the polar-format processing method are described in more detail below.

Multiple-Subpatch Processing. If the spotlight SAR signals are simply rectilinearly formatted and two-dimensionally Fourier-transformed over the entire collection duration, then the point scatterer migration effects described in Section III will limit the useful portion of the final scene to a region about the central compensation point with approximate diameter D = 4ρ²/λ. The multiple-subpatch image-formation approach accepts this limitation and in an efficient manner applies the same process multiple times at different locations to obtain full quality over the entire scene. The full scene is essentially divided into several smaller scenes of diameter less than the limit imposed by scatterer migration, each scene being compensated for motion relative to its center.

An efficient process for accomplishing this method is depicted in Fig. 22. The input to the process is the video signal which has been compensated to the center of the scene by mixing with the reference function R₀(t). This function, which changes on a pulse-by-pulse basis according to the pulse-by-pulse changes in range, has caused any signal received from the center of the scene to exhibit zero frequency and phase over the entire coherent integration period. The full scene signal is passed through a bank of bandpass filters which filter the data in the fast-time dimension into some number N of frequency sub-bands corresponding to range subswaths across the scene. The frequency content of each subswath has been reduced nominally by a factor of 1/N such that the N output channels can each be downsampled (reduced sampling frequency) by an equivalent amount. In practice, however, some excess subswath BW is required to prevent ambiguities due to nonideal bandpass filters.

Each of the signals corresponding to the range subswaths is then recompensated using reference function Rₙ(t) such that the nominal center of each range subswath exhibits zero frequency and phase over the integration time. The reference function is generated on the basis of differential range, on a pulse-by-pulse basis, between the center of the subswath and the center of the entire scene. This process "stabilizes" the azimuth Doppler frequency content within each subswath in preparation for filtering in the azimuth, or slow time, dimension to form image subpatches.

The azimuth bandpass filters for each subswath partition the Doppler spectrum into some number M of sub-bands corresponding to multiple subpatches in the cross-range direction within each subswath. The outputs of this process are downsampled in slow time to the minimum allowed to unambiguously represent the signals. The output of each of the N × M filters is additionally compensated to set the center of each subpatch to zero frequency and phase over the integration time by multiplying by reference functions Rₙₘ(t), which are formed on the basis of differential ranges between subpatch centers and range subswath centers on a pulse-by-pulse basis.

Fig. 22. Multipatch range-Doppler processing.

Each of the N × M channels which have been created by this process now provides signals for multiple subpatches covering the entire scene, with each channel compensated in frequency and range to the center of the associated subpatch. The data within each channel is then two-dimensionally Fourier-transformed to form the subpatch images, which may then be mosaicked to form the full scene image. The processing within each subpatch relies entirely on linear range-Doppler analysis, since each subpatch scene dimension was limited in extent to prevent relative range walk greater than some fraction of a resolution cell during the required integration time.
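A skeleton of the per-subpatch portion of this process is sketched below: a pulse-by-pulse phase reference built from the differential range to the subpatch center recompensates the channel, and a two-dimensional FFT then forms the subpatch image. The differential-range history and all inputs here are hypothetical placeholders, not outputs of a real geometry model.

```python
import numpy as np

lam = 0.03   # wavelength, m (assumed)

def subpatch_image(video, diff_range):
    """video: (pulses, fast-time) samples already compensated to scene
    center; diff_range: per-pulse differential range (m) between the
    subpatch center and the scene center (an R_nm(t) analogue)."""
    ref = np.exp(1j * 4 * np.pi * diff_range / lam)
    stabilized = video * np.conj(ref)[:, None]     # re-center the subpatch
    return np.fft.fftshift(np.abs(np.fft.fft2(stabilized)))

# Usage with simulated inputs:
rng = np.random.default_rng(2)
video = (rng.standard_normal((256, 128))
         + 1j * rng.standard_normal((256, 128)))
diff_range = 0.5 * np.linspace(-1, 1, 256) ** 2    # assumed quadratic history
img = subpatch_image(video, diff_range)
print(img.shape)
```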

Polar-Format Processing. The formulation of Section III provides a sound basis for application of the polar-format processing approach to spotlight SAR data. Section III determined that each individual radar transmission and reception which is appropriately compensated for range to scene center, and which is processed such that frequency and starting phase become proportional to relative range to a scatterer, can be thought of as viewing a linear one-dimensional segment of the three-dimensional Fourier transform of the (in general) three-dimensional scene. Taken as a whole, the total set of such observations over the flight path in Fig. 19 corresponding to the coherent illumination period observes a two-dimensional curved surface within the three-dimensional transform. This surface is known as the collection signal surface.

Although one can theoretically perform a three-dimensional transform of a volume containing this surface to form a three-dimensional image of the scene, the obtainable sensor vehicle excursion in the third dimension is usually not sufficient to provide meaningful resolution in the third dimension. Thus, two-dimensional approaches normally suffice. (The resultant two-dimensional output is also compatible with current two-dimensional display technology.)

As implied in Section III, when applying two-dimensional processing to form a two-dimensional image, one must account for the noncoplanar excursions of the sensor vehicle in order to obtain a correctly focused image. Even then, correct focus can be obtained only for collections of scatterers which lie in a common plane. This plane is called the focus target plane and can be arbitrarily chosen prior to processing, but would usually be made to correspond to the nominal ground plane within the scene of interest.

The method for accounting for noncoplanar motion of the collection vehicle is illustrated in Fig. 23. The signal values corresponding to the collection signal surface must be projected in a direction normal to the chosen focus plane until they intersect the desired processing plane. Projection of the signal values in this particular direction preserves the correct relative phase of the samples for signals which result from scatterers lying in the focus plane, as was described in Section III. The intersecting plane is known as the reference plane, or alternatively, the output image plane.

Fig. 23. Polar-format signal projections in frequency domain.

Selection of the reference plane determines the perspective, or point of view, associated with the final image. If one wishes the image to appear as if the viewer were looking straight down upon the scene, then the ground plane itself is selected as the reference plane (possibly synonymous with the focus plane in this case). Conventional SAR images for the stripmap case have the perspective normal to the "SAR plane," which is defined to be the plane formed by a central point within the scene and the velocity vector of the sensor vehicle. To effect a similar appearance for the spotlight case, the reference plane might be selected to nominally coincide with the curved collection signal surface. It must be pointed out that the concept of point of view is correct only for scene elements lying within the focus plane. Point scatterers out of this plane will image at positions which do not correctly follow from rules of perspective, an effect known as range layover.

Once the focus and reference planes have been selected, the polar-formatting operation is straightforward. Based upon the projection of collection surface signal sample positions in a direction normal to the focus plane, positions of the samples in the intersected reference plane are computed. Auxiliary data describing the pulse-to-pulse position of the antenna phase center relative to the scene center and knowledge of the frequency-versus-time relationship for the transmitted pulse are used in this geometrical computation. Once these positions have been determined, the collected signal data can be arrayed as depicted in Fig. 24.

Fig. 24 shows the relative positions of the data samples as they have been projected into the reference plane. The samples, represented as black dots, are shown arrayed along radial lines at angles θ corresponding to the line-of-sight angles to scene center for each radar transmission, as projected into the reference plane. The spacing between samples Tᵣ is determined by the original video signal sampling period as projected into the reference plane. The original video sampling frequency must be high enough to unambiguously sample the video spectrum, whose bandwidth is dependent upon the range extent of the illuminated scene.

Fig. 24. Digital polar-format geometry.

The number of samples along each radial line, K, is determined by the effective duration of the video signal, as given by T in Fig. 21. By virtue of the linear FM deramping operation, this duration is directly proportional to the effective RF bandwidth used to obtain range resolution.

To form an image, the geometric array of samples in Fig. 24 must be Fourier-transformed in two dimensions. In order to take advantage of the efficiency of a two-dimensional FFT and to produce an image which is sampled on a two-dimensional grid, one must resample the data of Fig. 24 to produce new samples occurring at the intersections of a two-dimensional rectilinear grid, as also depicted in Fig. 24. The range and azimuth sample spacings associated with this grid, Tx and Ty, determine the extent of the output image in the range and azimuth dimensions, respectively. The number of grid samples, and hence the extent of the grid, in the range and azimuth dimensions determines the output sample spacing in both dimensions of the image. This interpolated grid is often increased in size prior to the two-dimensional FFT by the padding of zeros in order to increase the image sampling rate on output, although the system resolution is dependent only on the portion of the grid filled with actual signal data. The grid values might also be weighted in range and azimuth prior to the FFT in order to lower the sidelobes of the Fourier transform process.

The original system concept depicted in Fig. 20 indicated that the formation of the new sample grid may be performed as two separate one-dimensional interpolation steps. The first step, illustrated in Fig. 25, is known as range interpolation. For this stage, each individual radar pulse is simply resampled such that the new samples fall on positions along the horizontal lines making up the interpolation grid. This operation may be viewed as a digital filtering technique where the input function is a discrete set of uniformly spaced samples and the output samples are computed at a lower sampling rate and with some specified delay with respect to the first sample within each radar pulse. Since, in general, the output samples from this operation occur with lower frequency than the input samples (due to the likelihood of overillumination of the desired scene), one must perform low-pass filtering of the original data to ensure that aliasing effects do not occur. For range interpolation, this low-pass digital filter may be thought of as a range prefilter which limits the video frequencies present to those associated with the final desired range extent of the imaged scene.

Fig. 25. Polar-format range interpolation. (Key: ● indicates raw data samples (input); ○ indicates range interpolated samples (equally spaced in y).)

The second stage of polar-format interpolation is shown in Fig. 26. The azimuth interpolation operates on the samples produced from the range interpolation, only in an orthogonal direction. The required interpolation can also be implemented as a digital filtering operation with output samples computed at appropriate times for each row of data. However, the input samples to this process cannot be considered equally spaced, and digital filtering techniques which take this into account must be employed. As was the case for range interpolation, this latter interpolation process is also a low-pass filtering operation. The azimuth signal bandwidth must be reduced to that associated with the desired cross-range scene size prior to resampling the data. The low-pass filtering in both dimensions is easily accomplished as part of the resampling process.

Fig. 26. Polar-format azimuth interpolation. (Key: ● indicates range interpolated samples (input); ○ indicates azimuth interpolated samples (equally spaced in x and y).)

After the two-dimensional interpolation to form the rectilinear signal grid, a complex-valued image is formed using a standard two-dimensional FFT algorithm. Detection of this array produces the desired image.
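The whole polar-format chain (two one-dimensional interpolation passes followed by a two-dimensional FFT) can be outlined as below. For brevity this sketch substitutes simple linear interpolation for the low-pass resampling filters described above, and the collection geometry, sample counts, and grid extents are all assumed values.

```python
import numpy as np

n_pulses, n_samps = 128, 256
thetas = np.deg2rad(np.linspace(-2, 2, n_pulses))   # per-pulse LOS angles
radii  = np.linspace(0.95, 1.05, n_samps)           # normalized radial freq.

rng  = np.random.default_rng(3)
data = (rng.standard_normal((n_pulses, n_samps))
        + 1j * rng.standard_normal((n_pulses, n_samps)))   # polar samples

# Pass 1 (range): resample each pulse along its radial line so samples
# land on uniform y (down-range frequency) grid lines.
y_grid = np.linspace(0.96, 1.04, 200)
pass1 = np.empty((n_pulses, y_grid.size), complex)
for p, th in enumerate(thetas):
    y_in = radii * np.cos(th)                 # y-coordinate of raw samples
    pass1[p] = (np.interp(y_grid, y_in, data[p].real)
                + 1j * np.interp(y_grid, y_in, data[p].imag))

# Pass 2 (azimuth): for each y row, resample across pulses onto uniform
# x; the input positions are NOT equally spaced, which np.interp tolerates.
x_grid = np.linspace(-0.03, 0.03, 160)
grid = np.empty((x_grid.size, y_grid.size), complex)
for r, y in enumerate(y_grid):
    x_in = y * np.tan(thetas)                 # x-coordinate along this row
    grid[:, r] = (np.interp(x_grid, x_in, pass1[:, r].real)
                  + 1j * np.interp(x_grid, x_in, pass1[:, r].imag))

image = np.fft.fftshift(np.abs(np.fft.fft2(grid)))   # final 2-D FFT + detect
print(image.shape)
```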

Extension of Spotlight Processing to Stripmap SAR. Practical implementations of the stripmap processing algorithm noted above are restricted to situations where the radar illumination is primarily in the broadside direction, and where the amount of range cell migration of scatterers is minimal (on the order of a few cells). However, the aforementioned spotlight SAR image formation processes may be beneficially applied to stripmap cases which do exhibit antenna squint, since the algorithms, in combination with the preprocessing compensation for gross pulse-to-pulse range changes, inherently compensate for the associated range walk effects.

The approach to accomplishing "spotlight" production of stripmap data is shown in Fig. 27. The desired stripmap scene is envisioned as being partitioned into many subpatches in a manner analogous to the spotlight subpatch processing approach mentioned earlier. As the fixed antenna beam migrates over each of these subpatches, the signal data for each subpatch is isolated by motion-compensating to the center of the cell (removing range changes to subpatch centers) and low-pass-filtering to remove signal data which does not contribute to the desired subpatch scene. The data within each cell is then processed by any of the given spotlight algorithms, and the final full-strip image is formed by mosaicking of the individual subpatches after resampling in the along-track and cross-track dimensions.

Fig. 27. Spotlight image formation applied to squinted stripmap SAR.

Although this approach is robust in terms of compensating for severe range walk, there are various factors which could limit the practicality of a given implementation. Since the antenna beam is not slewed to illuminate the individual subpatches, the subpatch must be limited in size such that the entire patch can be illuminated simultaneously over the coherent integration period required to achieve the desired cross-range resolution. To achieve reasonably sized patches, this requires excess antenna beamwidth over the minimum required to achieve the cross-range resolution. Also, a fair amount of processing overhead is entailed in filtering the data into subpatches and in resampling and mosaicking the results to form the full image. In the event that FFT methods are involved in forming subpatch images, the smaller patch sizes resulting from the antenna beamwidth limits also begin to limit the efficiency of the FFT algorithm itself.

B. Ground-Based Imaging of Moving Objects

In this section, we discuss useful image types and imaging applications for the case of a ground-based, fixed radar with moving object targets, such as the solar system's bodies and artificial Earth satellites. Because of their possible varied motion characteristics, these objects can provide a wide spectrum of illustrative and important imaging cases and applications.

Many of the imaging methods discussed here apply, or can be appropriately modified, for other moving targets such as rotating platforms, aircraft, ships, and ground vehicles.

(1) FFT Range-Doppler Images. In Section III, it was pointed out that if the coherent processing interval ΔT is sufficiently small, one can perform conventional range-Doppler processing to calculate images. Here, we demonstrate that G(r₀) can be efficiently calculated using the fast Fourier transform (FFT) method if ΔT strictly satisfies the constraint of no motion through resolution cells. In this case, G(r₀) will be known as an FFT range-Doppler image.

In many space object imaging applications, useful results often can be obtained by FFT range-Doppler imaging. This can be understood from (9) and (10), since artificial satellites typically have limited dimensions and planetary imaging requires resolutions on the order of 10 km.

The constraints on ΔT can be more precisely stated by requiring that during this interval

(i) the relative range to every scatterer not change by more than a fraction of the range resolution,

(ii) the relative range rate to every scatterer remain within one range-rate resolution cell.

It will be shown that condition (i) is necessary to use the full efficiency available from the FFT method. When condition (ii) is satisfied, it can be shown that the relative range to every scatterer varies linearly with time to a precision of a small fraction of the wavelength. Furthermore, in this case, it can also be shown that for three-dimensional objects, it is necessary for all RLOS during the imaging interval to be very close to coplanar.

Specific expressions for image intervals that meet the range-Doppler imaging conditions, analogous to those in (9) and (10), are stated for typical applications.

Range-Doppler Imaging Coordinate System. Range-Doppler images can be most conveniently calculated in the (x', y', z') "imaging" coordinate system of Section IIIB. In this coordinate system, the RLOS directions are specified by the angles θ and η shown in Fig. 28. Fig. 28 is a local view of the surface of the unit sphere including all RLOS positions during ΔT. The angle between the (x', y') plane and the RLOS is denoted by η. This is the complement of a conventional polar angle measured from the z' axis. The azimuthal angle of the RLOS about the z' axis is θ, with θ = 0 at t = 0, θ increasing with time.

Fig. 28. Aspects sampled for linear range-Doppler imaging on surface of unit sphere.

With this geometry, the relative range to the point (x', y', z'), defined by (16), can be written

D(t) = x' cos η sin θ + y' cos η cos θ + z' sin η.   (45)

From this one gets

D(0) ≡ D₀ = y'   (46a)

Ḋ(0) ≡ Ḋ₀ = x'θ̇   (46b)

D̈(0) ≡ D̈₀ = x'θ̈ − y'θ̇² + z'η̈.   (46c)

In this more general context, the relation between the relative Doppler frequency of the scatterer fD and the scatterer's cross-range displacement x' is given by

fD = 2Ḋ₀(x', y', z')/λ = 2x'θ̇/λ

as compared with (5).

Range-Doppler Image Function. If the radar's PRF is constant and if condition (ii) for FFT range-Doppler imaging is satisfied, one can approximate D(t) over the interval ΔT by a linear function of the pulse number p

D(x', y', z', t) ≈ D(p) = D₀ + Ḋ₀p/PRF   (47)

where p is defined to be zero at t = 0, the center of ΔT. The error in this linear approximation can be estimated by D̈₀t²/2.

With (46) and (47), equation (15) becomes

G(x', y', z') ≈ exp(−4πjy'/λ) P(x', y')   (48a)

where

P(x', y') = Σₚ W(p) S(p) exp(−4πjθ̇x'p/(λ PRF))   (48b)

is the range-Doppler image function. The summation extends over the pulses in the interval ΔT. Because of (46a) and (46b), the calculated image function P(x', y') is also available as a function of D₀ and Ḋ₀, P(D₀, Ḋ₀).

The function P(x', y'), as expressed in (48b), has the form of a discrete Fourier transform to be calculated at each value of y'. This form has some advantages in computational speed. However, without an additional approximation, it cannot be evaluated with the full efficiency available from the FFT. The quantity S(p) [(17) and the definition of S(p)] comes from the radar return sampled at the range R(0, t) + D(t), where D(t) depends on both x' and y' by (46) and (47). To evaluate (48b) efficiently with the FFT, S(p) cannot depend on x'. However, if condition (i) is satisfied, i.e., if the change in relative range is small compared with the range resolution, then S(p) sampled at

R(0, t) + y' + x'θ̇p/PRF

is approximately equal to S(p) sampled at

R(0, t) + y'.

With this approximation, S(p) becomes independent of x' and (48b) can be evaluated as an FFT at each value of relative range, y'.
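The following sketch implements the FFT evaluation of (48b) on simulated data: point scatterers are placed in range bins with the linear phase history of (47), and an FFT across pulses at each range bin forms the image. The radar and motion parameters are assumptions, and the echo phase is simulated as exp(−j4πD(p)/λ), which fixes the sign convention used for the cross-range axis.

```python
import numpy as np

lam, prf, theta_dot = 0.03, 2000.0, 0.2        # m, Hz, rad/s (assumed)
n_pulses, n_bins, dr = 256, 64, 0.5            # pulses, range bins, bin size (m)
scatterers = [(3.0, 5.0, 1.0), (-2.0, -4.0, 0.5)]   # (x', y', amplitude)

p = np.arange(n_pulses) - n_pulses // 2        # pulse index, zero at center
S = np.zeros((n_pulses, n_bins), complex)      # pulse-by-pulse range profiles
for x, y, a in scatterers:
    bin_idx = int(round(y / dr)) + n_bins // 2     # scatterer stays in this bin
    D = y + x * theta_dot * p / prf                # linear range history, (47)
    S[:, bin_idx] += a * np.exp(-1j * 4 * np.pi * D / lam)

W = np.hamming(n_pulses)                       # aperture weighting W(p)
image = np.fft.fftshift(np.fft.fft(S * W[:, None], axis=0), axes=0)

fd = np.fft.fftshift(np.fft.fftfreq(n_pulses, 1 / prf))   # Doppler axis, Hz
x_axis = -fd * lam / (2 * theta_dot)           # f_D = 2*theta_dot*x'/lam; the
                                               # sign follows the simulated phase
row, col = np.unravel_index(np.abs(image).argmax(), image.shape)
print(f"brightest return: x' = {x_axis[row]:.2f} m, range bin {col}")
```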

Determination of Range-Doppler Imaging Intervals. The first condition limiting the interval ΔT for range-Doppler imaging (that the relative range to a scatterer change by less than the range resolution) can be written

Δθ = θ̇ΔT < ρᵣ/|x'|max.   (49)

Here, |x'|max is the largest cross-range displacement of any scatterer in the target. This is a more general version of the expression in (9).

A similar limiting expression for ΔT from condition (ii) is given by

ΔT < C(λ/|D̈₀|max)^1/2   (50)

where |D̈₀|max is the maximum value of |D̈₀(x', y', z')| for any scatterer in the target and C is a dimensionless numerical constant.

In most cases where range-Doppler imaging is used, the (θ̇)² contribution to D̈₀ predominates. In such cases, the angular rate of the RLOS relative to the target is approximately constant, and the RLOS are approximately coplanar relative to the target. Then (50) can be written

Δθ = θ̇ΔT < C(λ/|y'|max)^1/2   (51)

where |y'|max is the maximum range displacement from the origin to a scatterer. This expression is a more general version of (10).

If the range-Doppler image is to have equal range and cross-range resolution, ρᵣ = ρₐ = ρ, then to satisfy (49) and (51), respectively, |x'|max and |y'|max must each be less than 4ρ²/λ. Outside these limits one can see a slight smearing of the scatterer's image. The smearing gradually becomes more pronounced the farther the scatterer is from the origin.

Fig. 29 shows this smearing as a function of the scatterer's location. This range-Doppler image was calculated from simulated radar data, assuming that λ = 3 cm. The true location of each scatterer is at the center of its image area. It should be noted that as a scatterer's image is smeared over a larger area, the peak RCS falls below the actual RCS of the scatterer. For a few of the scatterers, the loss in image RCS is shown in dB. The integral in square meters over the scatterer's image area remains constant. Total "power" is conserved between data input and range-Doppler image output.

Fig. 29. Range-Doppler image over 7.2° of aspect change with too large an interval (θ̇ = constant; simulated data, λ = 3 cm).

In some important cases, the contribution of θ̈ or η̈ to (46c) cannot be neglected. The first class of cases consists of stable or very slowly rotating targets in low Earth orbit near the beginning or end of a pass, when the satellite's velocity vector is directed almost toward or away from the radar. In these cases, θ̈ is important. In the second class of cases, η̈ is predominant. These involve very rapidly rotating targets where the angle κ between the rotation axis and the RLOS is small or close to 180°. Other cases where these second derivatives cannot be neglected must be expected, but the above two classes are known to occur frequently.

(2) Extended Images. An extended correlation image is obtained by evaluating G(r₀) over a set of coherent data that spans a time significantly greater than the interval ΔT which can be used in linear range-Doppler imaging. This set of data need not be continuous. It can, for example, be made up of many widely separated intervals. Two particular classes of data sets are discussed.

Fig. 30. Comparison of range-Doppler with correlation image over 7.2° of aspect change (θ̇ = constant; simulated data).

The first class of data sets, associated with wide-angle imaging, uses one continuous interval of data ΔT, where ΔT is larger than can be used in linear range-Doppler imaging but is a fraction of the target's rotation period. The second class, associated with multiple rotation imaging, uses many equal intervals selected synchronously from successive target rotation periods. This second class includes stroboscopic and three-dimensional imaging.

Wide-Angle Imaging. In wide-angle imaging, the continuous data interval ΔT used is significantly larger than the interval that can be used in range-Doppler imaging, i.e., ΔT severely violates (49) and (50), so that simple Fourier transform processing cannot be used. These larger values of ΔT correspond to larger aspect changes Δθ. Point scatterer-like features that give persistent returns during this interval image with a sharper cross-range resolution according to (8). Also, the boundaries of specular surfaces are more sharply defined. More specular surfaces are included in the image because of the wider range of aspects covered by the data. The image SNR will improve for small persistent point scatterers, thus yielding in some cases otherwise unobtainable information about some of the low RCS features of the target.

When the aspect rate θ̇ is constant, Fig. 29 shows the smearing of a range-Doppler image that results when both constraints on range-Doppler imaging are violated by an extraordinarily large target. (The radar data is simulated.) Fig. 30 shows the same range-Doppler image beside a correctly calculated correlation image. The correlation image correctly focuses, with no loss of RCS, all scatterers in this large target.

Near the end of Subsection IVB(1), a not unusual case was described where a linear range-Doppler image was limited in cross-range resolution by θ̈. This occurs because early (and late) in a near overhead pass of an Earth-stable target, the aspect rate θ̇ is small and rapidly varying. Fig. 31(a) shows a range-Doppler image calculated using a Δθ that violates (49) by a factor of 7. (Again, the radar data is simulated.) The actual scatterer locations can be seen in Fig. 31(b), which was calculated by correlation imaging using the same data interval. The nature and location of the smearing in the range-Doppler image confirms that the θ̈x' contribution to D̈₀ in (46c) is the dominant source of smearing. Again, in the correlation image, each scatterer is correctly focused. Similar image improvement in these cases has been obtained with real data.

Multiple Rotation Imaging. The data used inmultiple rotation images covers the same interval A1J(Fig. 16) on each of many successive rotations.According to the discussion in Subsection IIC, usefulmultiple rotation images require that the rotation rate ibe much larger than the rate of change K of the aspectdeviation angle. For satellites in orbit about the Earth, itcan be shown that the largest possible value of K when

data is taken at geosynchronous altitudes is about 1.3 × 10⁻⁴ rad/s. This limiting value is the fastest rate of the RLOS in inertial space when the satellite's speed is less than Earth escape velocity. All lower values of κ̇ can occur. Many geostationary satellites have κ̇ = 0. At lower altitudes, both larger and smaller values of κ̇ are possible, but the smaller values of κ̇ occur much less frequently and for shorter periods of time.

There are two useful classes of multiple rotation imaging: three-dimensional images and stroboscopic images. Since the boundary between three-dimensional and stroboscopic imaging is not sharp, an arbitrary boundary will be defined. An image will be called stroboscopic if ρ(z′) > z′ extent of target, where ρ(z′) is given by (37). It will be called three-dimensional if amb(z′) > z′ extent of target > ρ(z′), where amb(z′) is given by (32). This arbitrarily includes in the three-dimensional category images where the target's z′ extent is only a little greater than the resolution.

The aspect sampling that permits three-dimensional imaging is illustrated in Fig. 16. The first requirement in the selection of data for three-dimensional imaging is to ensure that amb(x′), given by (31), is greater than the x′ extent of the target and that amb(z′), given by (32), is greater than the z′ extent. At X band with typical satellite dimensions, this generally requires that both δθ and δκ be a small fraction of a degree. The second requirement is that enough data be used to give the desired resolution in both the x′ and the z′ directions. At X band, this generally requires Δθ and Δκ of a few degrees.

For stroboscopic imaging, the aspect deviation angle κ effectively remains constant over the entire data set

[Fig. 31. (a) Range-Doppler image of an Earth-stable target at low elevation from an overhead pass (θ̇ rapidly changing; Δθ ≈ 6°; simulated data; annotations indicate an altitude of 320 km and elevations of 7° to 17° with a maximum elevation of 90°). (b) Correlation image from the same data used to calculate (a). Both panels plot range (m) versus cross range (m).]

used in the image. The same arc of a κ = constant small circle on the unit sphere in the target coordinate system is sampled again and again on successive target rotations. This redundant sampling of the same arc of aspects is useful mainly because it permits coherent integration to suppress the noise. With uniform weighting, correlation imaging is the optimum process for this coherent integration over rotation periods. With each rotation weighted the same, the noise power level in the correlation image is suppressed by the factor 1/N, where N is the number of rotations used. Because the weights are normalized, the target RCS in the image remains the same.
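The 1/N noise suppression is easy to check numerically. The following sketch (NumPy; the array sizes, random seed, and unit noise power are arbitrary assumptions of ours) coherently averages N single-rotation image pixels whose true-image phasors are aligned while the noise is independent from rotation to rotation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 25                                   # number of rotations coherently combined
pixels = 4096

signal = np.ones(pixels, dtype=complex)  # true-image phasors, aligned on every rotation
noise = (rng.standard_normal((N, pixels))
         + 1j * rng.standard_normal((N, pixels))) / np.sqrt(2)  # unit power, independent per rotation

image = (signal + noise).mean(axis=0)    # normalized uniform weights over rotations

print("target power:", np.abs(image.mean())**2)   # ~1.0: RCS preserved
print("noise power :", np.var(image - signal))    # ~1/N = 0.04
```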

If the RLOS used are approximately coplanar, the stroboscopic image, like a single rotation image using the same Δψ interval, is two dimensional. The image function G(r₀) need only be calculated over the z′ = 0 plane.

An additional important property of stroboscopic imaging is that it can be used to suppress the cross-range ambiguous images that occur with PRF-limited data. With a large, rapidly rotating target, δθ, the change in aspect between the pulses used in an image calculated from a single rotation, may be so large that the x′ extent of the target exceeds the ambiguity interval amb(x′) given by (31). This would cause the ambiguous images on each side to overlap the true image. If the apparent rotation period of the satellite is not an integral multiple of the radar's interpulse period, the aspects sampled by the pulses on successive rotation periods are not precisely the same and the sampled aspects are interleaved. This interleaving of aspects, which usually occurs, causes the "ghost" (i.e., ambiguous) images in successive single rotation correlation images to be misaligned in phase, while the true images are correctly aligned in phase with each other. With coherent summing over rotation periods, the ghost images are partially suppressed. Optimal suppression of the ghost images can be achieved with proper selection of nonuniform weights between rotation periods. The nonuniform weights usually cause a modest reduction in the SNR gain for the true image.

Any rapidly rotating geostationary satellite whose rotation axis has a constant orientation relative to the Earth will have a constant κ, so stroboscopic imaging can be done. At geostationary ranges, the improved image SNR is needed. Other deep space targets may possibly be found with the rapid rotation and extremely slow κ̇ needed for stroboscopic imaging.

(3) Data Acquisition: Ground-Based Radars. Ground-based radars for imaging of man-made moving objects require, among other attributes, sufficient sensitivity for the objects' range, good frequency stability for phase measurements, a PRF capability greater than the greatest Doppler frequency spread of the objects to be imaged, and a PRF control system to insure that transmitted pulses will not interfere with the received pulses.

The long range imaging radar (LRIR) is an example [11]. It was designed to meet these requirements for

artificial satellites out to geosynchronous ranges. Some of its design parameters are listed in [11]. In deep space, coherent integration over a large number of pulses is generally required for adequate image signal-to-noise ratio. At a PRF of 1200, unambiguous images of objects with cross-range extents exceeding 4 m can be obtained with rotation periods as short as 2 s.
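As a rough consistency check of those numbers (a sketch only: the 3 cm wavelength and the ambiguity relation amb(x′) = λ·PRF/(2ω), with ω the rotation rate, are assumptions of ours standing in for (31), which is not reproduced here):

```python
import math

wavelength = 0.03          # m, assumed X-band value
prf = 1200.0               # pulses per second
period = 2.0               # s, target rotation period
omega = 2 * math.pi / period            # rotation rate (rad/s)

amb_x = wavelength * prf / (2 * omega)  # assumed form of the ambiguity interval (31)
print(f"amb(x') = {amb_x:.1f} m")       # ~5.7 m, so a 4 m target remains unambiguous
```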

These ground-based radars, like (spotlight) SAR radars, can use time-bandwidth exchange techniques to pulse-compress a wideband FM waveform. Fig. 32, which is similar to Fig. 20, shows a possible simplified radar system configuration. The most essential difference from Fig. 20 is that the real-time tracking system, with its input measurements of range, azimuth, and elevation, replaces the motion measurement system of the airborne SAR. The other major, but not essential, difference is that the steps that represent polar-format processing are omitted. Instead, in Fig. 32, the pulse-compressed radar data, along with auxiliary data, are recorded for separate image processing. Fig. 21, which shows the signals from the transmitter, from the receiver, and from the correlation mixer of a (spotlight) SAR, also applies to the ground-based radar. However, the small range extents of typical man-made moving targets cause the two parallelograms on the figure to become extremely slender. Essentially, the full length of the chirp pulse can be used.

The recorded pulse-compressed signals and auxiliary data are used to calculate FFT images (see Section IVB-1). Because of the small size of the objects, single FFT images with cross-range resolution equal to the range resolution often are well focused over the full extent of the target. If extended images are required, they also can be calculated. Since the extended images require better trajectory and rotational models of the objects, human intervention would normally be required.

The recorded auxiliary data contains metric information which can be used along with the pulse-compressed signal data to improve the trajectory estimate. The "predicted range" which is recorded gives the precise range to one of the range recording bins (i.e., to one of the range FFT outputs). This allows the range to every range bin to be accurately calculated at every pulse.

Dynamical Modeling for Space Object Imaging. Dynamical models describing the orbital motion of the target center of mass and the rotational motion of the target relative to the distant stars are

essential inputs in the calculation of the correlation image function of (16). The orbital model is needed to perform the overall center of mass Doppler component phase correction of (17). Both the orbital and the rotational motion models are needed to determine the extent of the integration time (number of pulses) required to sample the aspects necessary to provide a good resolution image and to determine the correct relative range rate (frequency to

cross-range conversion scales).

Precision Requirements for Dynamical Models. As previously indicated (Section IIIC), in order to obtain

[Fig. 32. Simplified ground-based radar system, including an FFT stage that provides the cross-range resolution.]

quality images from the calculation of (16), the combined dynamical model parameters should be determined to sufficient precision that they determine range variations to any point r₀ in the object to a precision of a small fraction of a wavelength over the data interval of the image calculation. Experience has shown that in many cases orbital and rotational motion parameters determined from radar data can give on the order of λ/30 range precision or better. In particular, one can demonstrate that when one is estimating orbital parameters, the calculated range variation error due to orbital parameter errors can often be estimated by

$R_E \approx \sigma(R_{\mathrm{obs}})\,\Delta T / DT$   (52)

where σ(R_obs) is the rms range observation error. Typically, σ(R_obs) is on the order of several centimeters to a meter. DT is the data time interval over which the trajectory fit is calculated. It is typically 10 min or longer. ΔT is the coherent imaging interval. For an Earth-stable target, ΔT may be typically one or two orders of magnitude smaller than DT (which is limited by the duration of the pass) and consequently the λ/30 precision requirement may or may not be achieved. For rapidly rotating targets, ΔT is usually sufficiently smaller than DT that the precision requirement is easily reached. Also for some of these rapidly rotating targets, techniques using phase-derived ranges have been developed for calculating orbit fits with rms range observation errors, σ(R_obs), on the order of 1 mm. With phase-derived ranges, the λ/30 precision is achievable for ΔT ≈ DT

and thus all the recorded data can be used in one coherent interval.
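Representative numbers (ours, chosen for illustration, not taken from the paper) make the tradeoff in (52) concrete:

```python
wavelength = 0.03                 # m, assumed X-band value
requirement = wavelength / 30     # ~1 mm range-precision goal

def range_error(sigma_obs, dT, DT):
    """Rule-of-thumb error of (52): R_E ~ sigma(R_obs) * dT / DT (all in SI units)."""
    return sigma_obs * dT / DT

print(range_error(0.10, 30.0, 600.0))    # conventional ranging: 5 mm, misses the goal
print(range_error(0.001, 600.0, 600.0))  # phase-derived ranges, dT = DT: 1 mm, meets it
```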

Rotational model parameters, such as the target's angular velocity and the orientation of its rotation axis, often cannot be determined as reliably as the orbital parameters. The development of techniques to determine rotational motion parameters is a crucial and still very active endeavor. The successful techniques depend critically on the nature of the data, such as the extent of change of the orientation of the target. Refinement of rotational motion parameters using phase-derived range measurements has been successfully achieved for a few selected targets. In every case, a preliminary approximate rotational model must be obtained before refinements with phase-derived ranges can be performed. Often, however, the preliminary model may be difficult to determine.

Except at low elevations or over very long time intervals, a propagation model that is a standard troposphere model for the radar site is adequate. The troposphere model errors and the ionospheric effects do not change rapidly enough to defocus the resulting images. In the exceptional cases, orbit fit range residuals from phase-derived ranges have been fitted to smooth functions of time. These smooth functions have been used for additional propagation corrections.

Extended Coherent Processing (ECP). Here we discuss an algorithm called ECP which has been developed for the purpose of efficiently calculating, analysing, and displaying the pulse-by-pulse correlation


image function defined by (14). The method used by ECP is basically of the multisubaperture processing type discussed in Section IIIB-2.

This program estimates G(r₀) for arbitrarily long sets of data by means of a coherent summation of linear images evaluated over shorter segments of the data intervals. Like all correlation imaging, ECP requires orbital and rotational models that give precise estimates of range variations at all the input data times. These precise models are used to correctly account for the nonlinear motion of scatterers in a wide-angle image, and/or the rotation-to-rotation relative motion in a multiple rotation image. The ECP image is an excellent approximation to G(r₀). Furthermore, its evaluation is also very efficient, being at least an order of magnitude faster than the pulse-by-pulse calculation of G(r₀).

The ECP Algorithm. To calculate a correlation image using (14) or (15)-(17), it is necessary to calculate the quantity W(p)S(p) exp[−4πjR(p)/λ] (or the equivalent quantity from (16)) once for every r₀ grid point in the image and to repeat these calculations for every pulse. These repetitive calculations, although straightforward, require about an order of magnitude more computer time than calculating a linear range-Doppler image using (48b) over the same set of pulses. This fact suggests that it would be worthwhile to reformulate extended correlation imaging so as to use range-Doppler image functions, P(D,Ḋ), calculated over subintervals of the total data set.

In order to do this reformulation, it is necessary to

replace (46a) and (46b) by a calculation of D₀ and Ḋ₀ as a function of (x,y,z) for a more general fixed orientation of the (x,y,z) coordinate system with respect to the target. The y′ axis can be aligned with the range direction at the center time of only one range-Doppler subinterval. Thus (46a) and (46b) cannot be used for any other subintervals. Fortunately, the calculation of D and Ḋ from (x,y,z) is straightforward given any rectangular coordinate system that rotates with the target.

Let the time-varying unit vector u(t) be aligned with the RLOS at all times. This unit vector and its time derivative u̇(t) can be calculated in the target's coordinate system r₀ = (x,y,z) using the target's orbital and rotational models. The equations needed are commonplace tools in applied satellite dynamics. They will not be given here. If the radar range is much larger than the target, the relative range to an r₀ grid point is given in terms of this unit vector by the dot product

$D(\mathbf{r}_0) = \mathbf{u} \cdot \mathbf{r}_0$   (53a)

and the relative range rate is given by

$\dot D(\mathbf{r}_0) = \dot{\mathbf{u}} \cdot \mathbf{r}_0.$   (53b)

Equation (53b) is obtained by differentiating (53a), holding r₀ constant. These relative ranges and relative range rates need only be calculated at the center times of the range-Doppler subintervals. The calculations are

further shortened by calculating u(t) and u̇(t) only once for each subinterval. The simple dot products, (53a) and (53b), are all that must be repeated for each r₀ grid point in the image.
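A minimal sketch of the far-field bookkeeping in (53) (NumPy; the function name, shapes, and example values are our own): given u and u̇ at a subinterval's center time, the per-grid-point work reduces to two dot products.

```python
import numpy as np

def far_field_D(u, u_dot, grid):
    """Relative range D = u . r0 (53a) and rate Ddot = u_dot . r0 (53b) for an
    (M, 3) array of grid points r0; u and u_dot are 3-vectors in target coordinates."""
    return grid @ u, grid @ u_dot

grid = np.array([[1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.5]])        # two example r0 points (m)
u = np.array([0.0, 1.0, 0.0])             # RLOS unit vector at the subinterval center
u_dot = np.array([0.02, 0.0, 0.0])        # its time derivative (1/s)
D, Ddot = far_field_D(u, u_dot, grid)     # D = [0.0, 2.0], Ddot = [0.02, 0.0]
```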

If the radar range is not much larger than the object, the dot product approximation of (53) cannot be used. Instead, the instantaneous position of the radar in the target's coordinate system is given by

$\mathbf{r}_r = -R(0,t)\,\mathbf{u}$   (54)

and the exact relative range is given by

$D(\mathbf{r}_0) = |\mathbf{r}_0 - \mathbf{r}_r| - |\mathbf{r}_r|.$   (55)

The relative range rate Ḋ is obtained by differentiating the expression for D, holding r₀ constant. These exact calculations require a modest increase of computer time over the time required by the dot product approximation. For a bistatic system, the relative range would be the average between relative ranges calculated for the transmitter and the receiver.
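The near-field case of (54)-(55) is just as short (again a sketch with names of our own choosing):

```python
import numpy as np

def exact_D(grid, R, u):
    """Exact relative range of (54)-(55): the radar sits at r_r = -R(0,t) u in
    target coordinates, and D(r0) = |r0 - r_r| - |r_r| for each row r0 of grid."""
    r_r = -R * u
    return np.linalg.norm(grid - r_r, axis=1) - np.linalg.norm(r_r)
```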

Let D(n) ≡ D(r₀,n) and Ḋ(n) ≡ Ḋ(r₀,n) denote the calculated values of D₀ and Ḋ₀ at the center time of the nth range-Doppler image subinterval to be used in calculating the correlation image. With this notation, the linear approximation equivalent to (47) gives the relative range at pulse p in the nth subinterval as

$D(\mathbf{r}_0,p) \approx D(\mathbf{r}_0,n) + \dot D(\mathbf{r}_0,n)\,p/\mathrm{PRF}$   (56)

where, as in (47), p = 0 at the center time of the nth range-Doppler subinterval. When (56) is used in (16), the contribution to G(r₀) from the nth subinterval becomes

$G(\mathbf{r}_0,n) \approx \exp[-4\pi j D(n)/\lambda]\,P[n, D(n), \dot D(n)]$   (57)

where

$P[n, D, \dot D] = \sum_{\{p\}_n} W(p)\,S(p)\,\exp[-4\pi j \dot D\,p/(\lambda\,\mathrm{PRF})].$   (58)

The summation in (58) is over the set of pulses {p}ₙ contained in the nth subinterval. In (58), as in (48b), the phase-corrected signal S(p) is sampled at the approximate range [R(0,t) + D], instead of at the more precise range [R(0,t) + D + Ḋp/PRF], in order to permit FFT evaluation of the Fourier transform.

The ECP approximation to the correlation image function G(r₀) is obtained by summing (57) over the N subintervals.

$G_{\mathrm{ECP}}(\mathbf{r}_0) = \sum_{n=1}^{N} w(n)\,\exp[-4\pi j D(n)/\lambda]\,P[n, D(n), \dot D(n)].$   (59)

For flexibility and convenience, a new set of weights w(n) has been introduced here, controlling the relative contribution from the various range-Doppler image functions. To closely approximate the original correlation


image, the original weights W(p) would be used in (58) and the new weights w(n) would be uniform.
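Putting (57)-(59) together, the outer loop of ECP might be organized as in the following skeleton. This is our own sketch, not the production algorithm: `D_of` and `Ddot_of` stand for the model-based calculations of (53) or (55), and `lookup` for the periodogram interpolation described next.

```python
import numpy as np

def ecp_image(grid, n_sub, w, wavelength, D_of, Ddot_of, lookup):
    """ECP sum (59): coherently combine the N subinterval periodograms.

    grid: (M, 3) array of image grid points r0.
    w: length-n_sub weights w(n).
    D_of(n, grid), Ddot_of(n, grid): model-based D(n), Ddot(n) at subinterval center times.
    lookup(n, D, Ddot): interpolated complex periodogram values P[n, D, Ddot].
    """
    G = np.zeros(len(grid), dtype=complex)
    for n in range(n_sub):
        D = D_of(n, grid)
        Ddot = Ddot_of(n, grid)
        phase = np.exp(-4j * np.pi * D / wavelength)   # phase correction of (57)
        G += w[n] * phase * lookup(n, D, Ddot)
    return G
```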

Effective calculation of (58) using FFT methods, as in Subsection IVB-1, will give the function P[n,D,Ḋ] only at a discrete set of (D,Ḋ) grid points. In general, the calculated points [D(r₀,n), Ḋ(r₀,n)], where the function P is needed for (59), will fall between the (D,Ḋ) grid points at which the function P has been calculated. The calculated function P(n,D,Ḋ) is stored in a two-dimensional array or table containing the real and imaginary parts of P. Bivariate linear interpolation is used to extract P[n,D(r₀,n),Ḋ(r₀,n)] from this table. The interpolation is, in effect, performed separately on the real and imaginary parts of P. The table or array in which P(n,D,Ḋ) is stored is called the nth "periodogram."
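One way to realize that table lookup (a sketch; the grid origin and spacing arguments are hypothetical parameters of ours) is ordinary bilinear interpolation applied directly to the complex-valued table, which interpolates the real and imaginary parts together:

```python
import numpy as np

def bilinear(table, D, Ddot, D0, dD, Ddot0, dDdot):
    """Bilinear interpolation of a complex periodogram table at (D, Ddot).
    table[i, j] holds P at range D0 + i*dD and range rate Ddot0 + j*dDdot.
    Real and imaginary parts are interpolated together; no bounds checking."""
    x = (np.asarray(D) - D0) / dD
    y = (np.asarray(Ddot) - Ddot0) / dDdot
    i = np.floor(x).astype(int)
    j = np.floor(y).astype(int)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * table[i, j]
            + fx * (1 - fy) * table[i + 1, j]
            + (1 - fx) * fy * table[i, j + 1]
            + fx * fy * table[i + 1, j + 1])
```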

When (59) is used to approximate the original correlation image, one is effectively modeling the smoothly varying actual relative range D(r₀,p) by a piecewise linear function of time where it is used to calculate the phase corrections in (58) and (59). When D(r₀,p) is used in extracting S(p) from the radar data for (58), it is approximated by a step function that takes a new constant value for each range-Doppler subinterval. This modeling is illustrated in Fig. 33 for one r₀ grid

[Fig. 33. Relative range to a scatterer versus time (one range-Doppler interval marked on the time axis). Piecewise linear approximation shown for phase (solid lines) and step function approximation for range sampling (dashed lines). The length of the FFT processing interval is exaggerated to make the errors visible.]

point in a rapidly rotating target. The sloping lines tangent to the smooth curve are the piecewise linear approximations used in phase corrections. The horizontal and vertical dashed lines are the step function approximation used for the relative range in sampling the radar data.

The ECP Algorithm: Numerical Considerations. To ensure that the preceding approximations do not cause significant errors, both constraints on range-Doppler imaging expressed by inequalities (49) and (50) must be satisfied by every range-Doppler subinterval used.

Low cross-range sidelobes in the periodogram images are desirable if they can be obtained without degrading the extended image. Using a sidelobe suppression set of tapered weights W(p) in calculating the periodograms accomplishes this.

The radar returns S(p) must be extracted from the recorded pulse-compressed radar signals at the desired range R_d = R(0,t) + D without introducing harmful errors. Interpolation is required. If the signals are recorded at a range spacing of c/(4BW), where c is the speed of light and BW is the bandwidth, then linear interpolation is adequate. This sample spacing is half the maximum spacing allowed by the sampling theorem. It is obtained by padding half the pulse compression FFT inputs with zeros.
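For example, with an assumed 1 GHz bandwidth, the prescribed spacing and the zero padding that produces it look like this (a sketch; the array length is arbitrary):

```python
import numpy as np

c, BW = 3.0e8, 1.0e9          # m/s; an assumed 1 GHz waveform bandwidth
print(c / (4 * BW))           # 0.075 m spacing, half the Nyquist limit of c/(2 BW)

spectrum = np.ones(1024, dtype=complex)                # stand-in for one pulse's spectrum
compressed = np.fft.ifft(np.pad(spectrum, (0, 1024)))  # pad half the FFT inputs with zeros:
                                                       # output bins then land every c/(4 BW)
```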

If the PRF is much greater than the Doppler bandwidth of the object, then a number of successive pulses can be presummed into each FFT input at a given value of D. Prior to this presumming, the signals must be phase-corrected (17) and interpolated to the range R_d. If the number of pulses presummed is less than the number of pulses between FFT inputs, it is best to select for presumming a cluster of adjacent pulses centered on the time of each FFT input.
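A sketch of that presumming step (our own indexing convention; `centers` are the pulse indices of the FFT input times):

```python
import numpy as np

def presum(pulses, centers, k):
    """Average a cluster of k adjacent (phase-corrected, range-interpolated)
    pulses centered on each FFT input time."""
    half = k // 2
    return np.array([pulses[c - half : c - half + k].mean() for c in centers])
```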

We call the Fourier transform in (58) "symmetric" because the reference phases in the exponential function are all zero at the center time of the range-Doppler interval, since p is zero at this time. The FFT calculation of P[n,D,Ḋ] must be arranged to calculate such a symmetric Fourier transform in order for (57) and (59) to be valid.
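In NumPy terms (our choice of illustration), such a symmetric transform is obtained by storing the pulses with p = 0 at the array center and applying ifftshift before the FFT:

```python
import numpy as np

# Pulses stored with p = 0 at the array center (index N // 2 for even N).
x = np.arange(8) + 0j                       # stand-in for S(p), p = -4 .. 3
P = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(x)))
# ifftshift rotates the p = 0 sample to index 0, so every reference phase is
# zero at the subinterval's center time; fftshift recenters zero Doppler.
```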

After the periodograms P[n,D,Ḋ] are correctly calculated and stored, a possible source of error in calculating (59) is the bivariate linear interpolation used to get P[n,D(n),Ḋ(n)] from the periodogram tables. Errors here are controlled by calculating and storing the periodograms over a sufficiently fine grid in both the range and range rate directions.

A grid spacing in relative range D of c/(4BW) is sufficiently small to give reasonably accurate range interpolation. The grid spacing required in relative range rate Ḋ depends on the periodogram sidelobe level. If the sidelobes in the periodograms are low, a grid spacing of λ/(4ΔT) in Ḋ gives reasonably accurate range-rate interpolation. This too is half the maximum grid spacing allowed by the sampling theorem. This requires padding about half the FFT input array with zeros for Doppler


imaging. If the sidelobes in the periodograms are large, then a finer Ḋ grid is required. The relative weights w(n) to use between periodograms depend on the type of imaging. For three-dimensional imaging, a sidelobe suppression taper as a function of κ suppresses sidelobes in the second cross-range direction z′. For stroboscopic imaging, uniform weights between periodograms give optimum SNR improvement, but the nonuniform weights discussed in Section IVB-2 may be needed to suppress ambiguous images in the original cross-range x′ direction.

For wide-angle imaging, it has been found best to use periodograms that overlap 50 percent, i.e., half the pulses used in the nth periodogram are reused in the (n + 1)th periodogram. A sidelobe suppression weighting W(p) is used in calculating these periodograms, and similar weighting w(n) is used between periodograms in calculating (59).
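A sketch of that 50 percent overlap bookkeeping (our own helper, with an assumed subinterval length):

```python
def subinterval_starts(num_pulses, length):
    """Start indices of periodogram subintervals of the given length that
    overlap 50 percent: half of each subinterval's pulses are reused in the next."""
    return list(range(0, num_pulses - length + 1, length // 2))

print(subinterval_starts(num_pulses=20, length=8))   # [0, 4, 8, 12]
```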

V. SUMMARY

In this paper we have presented a general treatment of range-Doppler radar imaging techniques and have given detailed discussions of some of the most prominent and illustrative applications, such as airborne SAR imaging and space object (planets, artificial satellites) imaging from ground-based wideband radars. We have also stated the general properties of, and the necessary requirements

for, useful radar images. The different image processing algorithms required to perform particular imaging tasks have been introduced and outlined.

We have stressed that all these imaging techniques are basically equivalent and can be developed from a common theoretical background, which is, in fact, also common with tomographic imaging applications.

These techniques have been conceived and developed to deal with the problem of scatterer motion through resolution cells, thus permitting a much wider spectrum of applications than allowed by obeying the stringent requirements of linear range-Doppler imaging. These techniques have also been developed to be computationally efficient, in order to handle the data-intensive applications.

The differences among the various computational algorithms are affected by the approximations that are valid in specific applications and also by tradeoffs between image quality and computational efficiency.

ACKNOWLEDGMENT

The developments reviewed in this paper are the result of significant contributions over nearly three decades by various researchers too numerous to list. The references provide partial documentation of these contributions.


REFERENCES

[1] Cutrona, L.J., Vivian, W.E., Leith, E.N., and Hall, G.O. (1961)
A high-resolution radar combat-surveillance system.
IRE Transactions on Military Electronics, MIL-5 (Apr. 1961), 127.

[2] Leith, E.N. (1977)
Complex spatial filters for image deconvolution.
Proceedings of the IEEE, 65, 1 (Jan. 1977), 18.

[3] Jordan, R.L. (1980)
The Seasat-A synthetic aperture radar system.
IEEE Journal of Oceanic Engineering, OE-5 (Apr. 1980), 154.

[4] Marlow, H.C., Watson, D.C., Van Hoozer, C.H., and Freeny, C.C. (1965)
The RAT SCAT cross-section facility.
Proceedings of the IEEE, 53, 8 (Aug. 1965), 946.

[5] Walker, J.L. (1980)
Range-Doppler imaging of rotating objects.
IEEE Transactions on Aerospace and Electronic Systems, AES-16, 1 (Jan. 1980), 23-52.

[6] Kirk, J.C., Jr. (1975)
A discussion of digital processing in synthetic aperture radar.
IEEE Transactions on Aerospace and Electronic Systems, AES-11, 3 (May 1975), 326-337.

[7] Kirk, J.C., Jr. (1975)
Digital synthetic aperture radar technology.
In IEEE 1975 International Radar Conference Record, p. 482.

[8] Kirk, J.C., Jr. (1975)
Motion compensation for synthetic aperture radar.
IEEE Transactions on Aerospace and Electronic Systems, AES-11, 3 (May 1975), 338-348.

[9] Kovaly, J.J. (1977)
High resolution radar fundamentals.
In Eli Brookner (Ed.), Radar Technology. Dedham, Mass.: Artech House, 1977.

[10] Brookner, E. (1977)
Synthetic aperture radar spotlight mapper.
In Eli Brookner (Ed.), Radar Technology. Dedham, Mass.: Artech House, 1977.

[11] Bromaghim, D.R., and Perry, J.P. (1980)
A wideband linear FM ramp generator for the long-range imaging radar.
IEEE Transactions on Microwave Theory and Techniques, MTT-26, 5 (May 1980), 322.

[12] Shapiro, I.I. (1968)
Planetary radar astronomy.
IEEE Spectrum, 5, 3 (Mar. 1968), 70.

[13] Prickett, M.J., and Chen, C.C. (1980)
Principles of inverse synthetic aperture radar (ISAR) imaging.
IEEE 1980 EASCON Record, p. 340.

[14] Sherwin, C.W., Ruina, J.P., and Rawcliff, R.D. (1962)
Some early developments in synthetic aperture radar systems.
IRE Transactions on Military Electronics, MIL-6 (Apr. 1962), 111.

[15] Jolley, J.H., and Dotson, C. (1981)
Synthetic aperture radar improves reconnaissance.
Defense Electronics, 13, 9 (Sept. 1981), 111.

[16] Porcello, L.J., et al. (1974)
The Apollo lunar sounder radar system.
Proceedings of the IEEE, (June 1974), 768-783.

[17] Daily, M., Elachi, C., Farr, T., and Schaber, G. (1978)
Discrimination of geologic units in Death Valley using dual frequency and polarization radar data.
Geophysical Research Letters, 5 (1978), 889.

[18] Shuchman, R.A., Davis, C.F., and Jackson, P.L. (1975)
Contour stripmine detection and identification with imaging radar.
Bulletin of the Association of Engineering Geology, XII (1975), 99.

[19] Brown, W.E., Jr., Elachi, C., and Thompson, T.W. (1976)
Radar imaging of ocean surface patterns.
Journal of Geophysical Research, 81 (1976), 2657.

[20] Shemdin, O.H., Brown, W.E., Jr., Staudhammer, F.G., Shuchman, R., Larson, R., Zelenka, J., Rose, D.B., McLeish, W., and Berles, R.A. (1978)
Comparison of in situ and remotely sensed ocean waves off Marineland, Florida.
Boundary Layer Meteorology, 13 (1978), 173.

[21] Shuchman, R.A. (1981)
Processing synthetic aperture radar data of ocean waves.
In J.F.R. Gower (Ed.), Oceanography from Space. New York: Plenum, 1981, p. 477.

[22] Gray, A.L., Hawkins, R.K., Livingstone, C.E., Arsenault, Drapier, and Johnstone, W.M. (1982)
Simultaneous scatterometer and radiometer measurements of sea-ice microwave signatures.
IEEE Journal of Oceanic Engineering, OE-7 (1982), 20.

[23] Luther, C.A., Lyden, J.D., Shuchman, R.A., Larson, R.W., Holmes, Q.A., Nuesch, D.R., Lowry, R.T., and Livingstone, C.E. (1982)
Synthetic aperture radar studies of sea ice.
In IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, (1982), pp. TA-8, 1.1-1.9.

[24] Beal, R.C., DeLeonibus, P., and Katz, I. (Eds.) (1981)
Spaceborne Synthetic Aperture Radar for Oceanography.
Baltimore, Md.: Johns Hopkins Press, 1981.

[25] Gonzalez, F.I., Beal, R.C., Brown, W.E., Jr., DeLeonibus, P.S., Gower, J.F.R., Lichy, D., Ross, D.B., Rufenach, C.L., Sherman, J.W., III, and Shuchman, R.A. (1979)
SEASAT synthetic aperture radar: Ocean wave detection capabilities.
Science, 204 (1979), 418.

[26] Gower, J.F.R. (Ed.) (1981)
Oceanography from Space.
New York: Plenum, 1981.

[27] Elachi, C., et al. (1982)
Shuttle imaging radar experiment.
Science, 218 (1982), 996.

[28] Duchossois, G., and Honvault, C. (1981)
The first ESA remote sensing satellite system ERS-1.
Presented at the 15th International Symposium on Remote Sensing of the Environment (Ann Arbor, Mich., May 1981).

[29] Raney, R.K. (1982)
The Canadian RADARSAT program.
In Proceedings of the 1982 International Geoscience and Remote Sensing Symposium (IGARSS '82), IEEE Catalog 82CH14723-6.

[30] Matsumoto, K., Kishida, H., Yamada, H., and Hisoda, Y. (1982)
Development of active microwave sensors in Japan.
In Proceedings of the 1982 International Geoscience and Remote Sensing Symposium (IGARSS '82), IEEE Catalog 82CH14723-6.

[31] Green, P.E., and Price, R. (1960)
Signal processing in radar astronomy.
Technical Report 234, Lincoln Laboratory, Massachusetts Institute of Technology, Cambridge, Oct. 1960.

[32] Green, P.E. (1968)
Radar measurements of target scattering properties.
In J.V. Evans and T. Hagfors (Eds.), Radar Astronomy. New York: McGraw-Hill, 1968, pp. 1-75.

[33] Pettengill, G.H. (1960)
Measurements of lunar reflectivity using the Millstone radar.
Proceedings of the IRE, 48 (1960), 933.

[34] Pettengill, G.H., et al. (1962)
A radar investigation of Venus.
Astronomical Journal, 67 (1962), 181.

[35] Smith, W.B. (1963)
Radar observations of Venus 1959 and 1961.
Astronomical Journal, 68 (1963), 15.

[36] Muhleman, D.O., Black, N., and Holdridge, D.B. (1962)
The astronomical unit determined by radar reflections from Venus.
Astronomical Journal, 67 (1962), 191.

[37] Thompson, J.H., et al. (1961)
A new determination of the solar parallax by means of radar echoes from Venus.
Nature, 190 (1961), 519.

[38] Kotelnikov, V.A., et al. (1962)
Radar system employment during radar contact with Venus.
Radiotekhnika i Elektronika, 7 (1962), 1715.

[39] Carpenter, R.L., and Goldstein, R.M. (1963)
Radar observations of Mercury.
Science, 142 (1963), 381.

[40] Pettengill, G.H. (1965)
Recent Arecibo observations of Mars and Jupiter.
Journal of Research of the National Bureau of Standards D, 69 (1965), 1627.

[41] Dyce, R.B. (1965)
Recent Arecibo observations of Mars and Jupiter.
Journal of Research of the National Bureau of Standards D, 69 (1965), 1628.

[42] Evans, J.V., et al. (1965)
Radio echo observations of Venus and Mercury at 23 cm wavelength.
Astronomical Journal, 70 (1965), 486.

[43] Hoffman, L.A., Hurlbut, K.H., Kind, D.E., and Wintroub, H.J. (1969)
A 94-GHz radar for space object identification.
IEEE Transactions on Microwave Theory and Techniques, MTT-17, 12 (Dec. 1969), 1145.

[44] Brown, W.M. (1967)
Synthetic aperture radar.
IEEE Transactions on Aerospace and Electronic Systems, AES-3 (1967), 217.

[45] Brown, W.M., and Fredricks, R.J. (1969)
Range-Doppler imaging with motion through resolution cells.
IEEE Transactions on Aerospace and Electronic Systems, AES-5 (Jan. 1969), 98.

[46] Walker, J.L., Carrara, W.G., and Cindrich, I. (1973)
Optical processing of rotating-object radar data using a polar recording format.
Technical Report RADC-TR-73-136, AD 526 738, Rome Air Development Center, Rome, N.Y., May 1973.

[47] Mensa, D., Heidbreder, G., and Wade, G. (1980)
Aperture synthesis by object rotation in coherent imaging.
IEEE Transactions on Nuclear Science, NS-27 (Apr. 1980), 989.

[48] Mensa, D. (1982)
High Resolution Imaging.
Dedham, Mass.: Artech House, 1982.

[49] Wehner, D.R., Prickett, M.J., Rock, R.G., and Chen, C.C. (1979)
Stepped frequency radar target imagery: Theoretical concept and preliminary results.
Technical Report 490, Naval Ocean Systems Center, San Diego, Calif., Nov. 1979.

[50] Chen, C.C., and Andrews, H.C. (1980)
Target-motion-induced radar imaging.
IEEE Transactions on Aerospace and Electronic Systems, AES-16 (Jan. 1980), 2-14.

[51] Chen, C.C., and Andrews, H.C. (1980)
Multifrequency imaging of radar turntable data.
IEEE Transactions on Aerospace and Electronic Systems, AES-16 (Jan. 1980), 15-22.

[52] Mensa, D.L., Halevy, S., and Wade, G. (1983)
Coherent Doppler tomography for microwave imaging.
Proceedings of the IEEE, 71 (Feb. 1983), 254.

[53] Munson, D.C., and Jenkins, W.K. (1981)
A common framework for spotlight mode synthetic aperture radar and computer-aided tomography.
In Proceedings of the 15th Asilomar Conference on Circuits, Systems, and Computers (Pacific Grove, Calif., Nov. 9-11, 1981), p. 217.

[54] Munson, D.C., O'Brien, J.D., and Jenkins, W.K. (1983)
A tomographic formulation of spotlight-mode synthetic aperture radar.
Proceedings of the IEEE, 71 (Aug. 1983), 917-925.

[55] Aleksoff, C.C., LaHaie, I.J., and Tai, A.M. (1983)
Optical-hybrid backprojection processing.
In Proceedings of the 10th International Optical Computing Conference (Apr. 6-8, 1983), IEEE Catalog 83CH1880-4.

[56] Mims, J., and Farrell, J.L. (1972)
Synthetic aperture imaging with maneuvers.
IEEE Transactions on Aerospace and Electronic Systems, AES-8 (July 1972), 410-418.

[57] Brown, W.M. (1980)
Walker model for radar sensing of rigid target fields.
IEEE Transactions on Aerospace and Electronic Systems, AES-16 (Jan. 1980), 104-107.

[58] Lewitt, R.M. (1983)
Reconstruction algorithms: Transform methods.
Proceedings of the IEEE, 71 (Mar. 1983), 390-408.

[59] Brown, W.M., and Porcello, L.J. (1969)
An introduction to synthetic aperture radar.
IEEE Spectrum, 6 (Sept. 1969), 52-62.

[60] Leith, E.N. (1971)
Quasi-holographic techniques in the microwave region.
Proceedings of the IEEE, 59 (Sept. 1971), 1305-1318.

[61] Kozma, A., et al. (1972)
Tilted-plane optical processor.
Applied Optics, 11 (Aug. 1972), 1766-1777.

[62] Wu, C. (1980)
A digital fast correlation approach to produce SEASAT SAR imagery.
In Proceedings of the IEEE 1980 International Radar Conference, pp. 153-160.

[63] Leith, E.N. (1973)
Range-azimuth-coupling aberrations in pulse-scanned imaging systems.
Journal of the Optical Society of America, 63 (Feb. 1973), 119-126.

[64] Ausherman, D.A. (1980)
Digital versus optical techniques in synthetic aperture radar (SAR) data processing.
Optical Engineering, 19 (Mar./Apr. 1980), 157-167.

[65] Skolnik, M.I. (1980)
Introduction to Radar Systems, 2nd ed.
New York: McGraw-Hill, 1980, pp. 34-44.


Dale A. Ausherman (S'66, M'72) was born in Maryville, Mo., on January 12, 1947. He received the B.S., M.S., and Ph.D. degrees in electrical engineering from the University of Missouri at Columbia in 1969, 1970, and 1973, respectively.

While attending graduate school, he was engaged in research on the application of digital image processing to automated diagnosis from medical radiographs. He joined the Environmental Research Institute of Michigan in 1973 and has done research on digital image formation processing techniques for SAR systems, including the methods required for fine resolution imaging of rotating objects. He has acted as a consultant to government and industry in putting such techniques into practice. He is currently Deputy Director of the Radar Division.

Dr. Ausherman is a member of Tau Beta Pi, Eta Kappa Nu, and Sigma Xi.

Adam Kozma (M'66) was born in Cleveland, Ohio, on February 2, 1928. He received the B.S.E. degree in mechanical engineering and the M.S.E. degree in instrumentation engineering from the University of Michigan in 1952 and 1964, respectively. He received the M.S. degree in engineering mechanics from Wayne State University in 1961 and the Ph.D. degree in electrical engineering from Imperial College, University of London, in 1968.

After a number of years with the automobile industry, he joined the Willow Run Laboratories of the University of Michigan in 1958, where he worked in the Radar and Optics Laboratory on synthetic aperture radar, holography, and coherent optics. In 1969, he joined Harris, Inc., where he directed the Ann Arbor Electro-Optics Center in research and development of coherent optical processing systems, holographic memories, and various electro-optical devices and systems. Since 1973, he has been with the Environmental Research Institute of Michigan and has been engaged in research on synthetic aperture radar techniques. He is currently Vice-President and Director of the Radar Division. His experience includes industrial and government consulting assignments, as well as lecturing at Imperial College and the University of Michigan.

Dr. Kozma is a member of Sigma Xi, the American Defense Preparedness Association, and the American Management Association, and a fellow of the Optical Society of America.

Jack L. Walker (S'61, M'64) was born in Mattawan, Mich., on May 6, 1940. He received the S.B. degree in electrical engineering from the Massachusetts Institute of Technology in 1962 and the M.S. and Ph.D. degrees in electrical engineering from the University of Michigan in 1967 and 1974, respectively.

He has worked for General Electric, Bendix, the Willow Run Laboratories of the University of Michigan and, since 1973, at the Environmental Research Institute of Michigan. His experience includes research on MTI radar systems, coherent optics, and synthetic aperture radar. He received the IEEE Aerospace and Electronic Systems Society M. Barry Carlton award in 1981 for his paper on range-Doppler imaging. He is presently Vice-President and Director of the Infrared and Optics Division.

Dr. Walker is a member of Eta Kappa Nu, Sigma Xi, and the Optical Society of America.


Harrison M. Jones was born in Ottumwa, Kans., on November 16, 1922. He received the B.S. degree in naval architecture and marine engineering in 1944 from Webb Institute of Naval Architecture, now located in Glen Cove, Long Island, N.Y., and the M.S. and Ph.D. degrees in physics from Yale University, New Haven, Conn., in 1948 and 1956, respectively.

He served in the U.S. Navy from 1943 to 1946. From 1948 to 1952, he was a Research Assistant in the Department of Physics at Yale, working in theoretical nuclear physics. During the academic year 1952-1953, he served as Assistant Professor in the Department of Physics at Vanderbilt University, Nashville, Tenn. He has been a staff member of the M.I.T. Lincoln Laboratory since 1953, doing research in radar detection theory, ionospheric physics, orbital mechanics, ballistic missile defense systems, and radar imaging.

Dr. Jones is a member of the American Physical Society, Sigma Xi, the American Institute of Aeronautics and Astronautics, the American Defense Preparedness Association, and the U.S. Naval Institute.

Enrico C. Poggio was born in Milan, Italy, on January 29, 1945. He received the B.S. degree in physics, the B.S. degree in applied mathematics in 1966, and the Ph.D. degree in theoretical physics in 1971, all from the Massachusetts Institute of Technology.

From 1971 to 1978 he held research positions at Columbia University, Harvard University, and Brandeis University, working in theoretical elementary particle physics and quantum field theory, and published over 20 papers. He joined the M.I.T. Lincoln Laboratory in 1978, where he has been a staff member in the Radar Imaging Techniques Group. He is presently on a leave of absence and is a candidate for the M.S. degree in the management of technology at the M.I.T. Sloan School of Management.
