A 3D Outdoor Scene Scanner Based on a Night-Vision Range-Gated Active Imaging System

David Monnin, Armin L. Schneider, Frank Christnacher and Yves Lutz
French-German Research Institute of Saint-Louis (ISL)
5, rue du Général Cassagnou - 68301 Saint-Louis - France
[email protected]

Abstract

We present a 3D outdoor scene scanner for the acquisition of kilometers-deep scenes in night conditions. Its imaging system is based on a compact, low-cost pulsed laser illuminator and a light-intensifier-equipped CCD camera. By precisely synchronizing the illuminator and the camera shutter, it is possible to acquire "slices" of the scene at specific known distances. We show that even with long laser pulses and without megahertz-capable electronics, the third dimension can be recovered over the whole range of the scene by processing only two images acquired under specific conditions. As the pixel intensities of the images produced by active imaging systems vary with the square of the range, and because of the limited dynamic range of image sensors, scanning long-range scenes with shorter "slices" allows the camera gain to be adjusted with respect to the range and the accuracy to be enhanced. The imaging system as well as the different image processing steps are detailed in this paper, and an example of typical results is given.

1. Introduction

Night-vision imaging has remarkably progressed in the past forty years. The two main technologies involved in this quest for vision in the dark are passive low-light-level intensification and infrared thermal imaging. Passive light intensification techniques are well suited to short-range applications in which the darkness is not complete, and they offer very good image resolution. However, difficult weather conditions like fog, rain or snow, as well as parasitic light in the surroundings of the observed scene, cause this technology serious difficulties. Thermal imaging systems based on infrared radiation can operate in total darkness, but need a good thermal contrast in the observed scene. They can be used for long-range applications, but they do not deliver very high-resolution images and are better suited to detection.

Active imaging systems are the heirs to passive low-light-level intensified systems, but they differ from them insofar as they bring their own laser light sources. Furthermore, controlling both the illuminator's pulsed laser beam and the camera shutter with the so-called "range gating" technique described hereafter makes it possible to overcome difficulties such as backscattering on particles between the camera and the scene under observation. Hence, range-gated active imaging systems like the one presented here can operate at long range in total darkness and bad weather conditions while delivering high-resolution images.

The acquisition of long-range outdoor scenes in all-weather night conditions is a challenge which has already been taken up successfully by our range-gated active imaging system [4], but using this system for a 2.5D reconstruction of the scene is quite a new issue. Methods successfully exploiting range-gated laser radars for three-dimensional imaging are often based on very short laser pulses, on the order of a few tens to a few hundreds of picoseconds. An interesting approach is then to use a megahertz-capable shutter and to modulate the camera gain in a specific way in order to produce two images under different known illumination conditions which can be processed to recover the third dimension [6]. An efficient alternative to the use of expensive electronics is to scan the scene under observation by moving a range gate in many steps, which have to be much shorter than the gate depth for good accuracy [3]. The method presented in this paper contrasts with others in that it is not limited by long laser pulses, it requires neither a megahertz camera shutter nor camera gain modulation, and it can recover the third dimension from only two images. However, for long-range scenes of several kilometers, scanning the scene with range gates shorter than the scene range both enhances the accuracy and allows the camera gain to be adapted with respect to the range. After a presentation of the imaging system, its three-dimensional imaging use is explained and typical results are given.


Figure 1. The night-vision imaging system: the laser illuminator is mounted on top of the camera.

Figure 2. Laser diode illumination profile on the object plane: (a) near-field diode intensity profile; (b) far-field illumination profile after collimation; (c) 3D view of the profile after collimation.

2. The night-vision range-gated active imaging system

The imaging system, shown in Figure 1, consists of a compact, low-cost laser illuminator and an intensified CCD camera. The whole system is mounted on a lightweight motorized tripod, and only one person is necessary to operate it. The power supply and the synchronization unit are still installed in a separate rack, but will soon be integrated into the system in a much more compact form.

2.1. The compact and low-cost laser illuminator

The laser illuminator is based on a high-power stacked laser diode which delivers a peak power of 1000 W in the 800 nm wavelength region. This device can be operated with a pulse duration ranging from 200 ns to 500 µs. The duty cycle is limited to 0.4%.
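As a quick consequence of these figures, the 0.4% duty cycle caps the pulse repetition rate at 0.004 / 200 ns = 20 kHz for the shortest pulses, and at 0.004 / 500 µs = 8 Hz for the longest.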

It is well known that the output beam of a laser stack is not directly usable for illumination purposes; a beam-shaping component is necessary. Although there are many efficient collimation techniques, such as micro-lenses or micro-fibers [5, 1], we have developed our own with simplicity and low cost as guidelines [7]. Our latest development makes it possible to obtain a very homogeneous illumination profile, as shown in Figure 2. Furthermore, the illumination spot obtained is rectangular and its aspect ratio perfectly fits the image sensor, which leads to a very efficient use of the whole light power, without losses. With this collimation technique, the laser beam divergence can be set anywhere from less than one degree to a few tens of degrees without any loss of homogeneity. The efficiency of the collimation obtained is 64%.


Figure 3. Principle of the range-gating operational mode

Figure 4. Raw images of a night scene for different integration times: (a) gating from 500 m to 2000 m; (b) gating from 1000 m to 1060 m.

2.2. The intensified camera

The recording system is an off-the-shelf low-light-level intensified camera equipped with an 18 mm diagonal light intensifier directly coupled to the 2/3” CCD chip of the camera. The catadioptric lens has a focal length of 250 mm and an aperture of f/5.6, offering a 3.25° × 2.44° field of view that perfectly matches the illuminator divergence. In this configuration, the imaging system is able to record images up to a range of about 3000 m.

2.3. The principle of range gating

The principle of range gating is based on the ability to control the laser beam and the camera shutter very accurately in order to limit the detection to a specific range of space. The different steps of this very particular operational mode of the active imaging system are described in Figure 3. First, a timer is started and a laser pulse is emitted during a time $\Delta t_{pls}$. The generated laser pulse moves toward the observed scene while the camera shutter is still closed. Thus, the backscattering due to particles in the atmosphere near the imaging system does not affect the acquisition process (see Figure 3, step 1). After a time $t$, the laser pulse reaches the part of the scene we intend to observe. The light encountering an obstacle is reflected according to the obstacle's reflection coefficient, or continues on its way if no obstacle is met (see Figure 3, step 2). At time $2t$, the light reflected by the observed scene reaches the camera, and the shutter is then opened (see Figure 3, step 3). The depth and the profile of the "slice of light" integrated by the camera depend on the time $\Delta t_{exp}$ during which the shutter stays open, as well as on the laser pulse duration $\Delta t_{pls}$. If the laser pulse duration $\Delta t_{pls}$ equals the exposure time $\Delta t_{exp}$, all the light reflected by an obstacle reached at a time $t$ is efficiently integrated by the image sensor. Of course, $\Delta t_{exp}$ and $\Delta t_{pls}$ can be adjusted differently to achieve different results. In Figure 4(a), where $\Delta t_{exp} > \Delta t_{pls}$, the observed scene is illuminated from 500 m to 2000 m, while in Figure 4(b),


where $\Delta t_{exp} = \Delta t_{pls}$, the observed scene is illuminated from 1000 m onwards, over a depth of only 60 m.

To be more precise, only the light intensifier of the camera can be triggered quickly enough to operate in the described way, while the CCD electronic shutter is kept open. Indeed, even though the CCD camera used is very sensitive, a single laser pulse is not sufficient to integrate enough light. That is why, in practice, sequences of pulses have to be integrated: the process described here is iterated until enough light has been integrated by the CCD sensor.
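To make the synchronization concrete, the following sketch (an illustration of ours, not code from the paper) computes the shutter delay and exposure time that confine detection to a given range interval. It anticipates the gate-limit relations derived in Section 3.1; the function name and interface are assumptions.

```python
C = 299_792_458.0  # speed of light (m/s)

def gate_timing(z_near, z_far, t_pls):
    """Shutter delay and exposure time confining detection to
    [z_near, z_far] for a laser pulse of duration t_pls, following
    the gate limits derived in Section 3.1 (z_near = z^-, z_far = z^+)."""
    dz_pls = C * t_pls                # spatial length of the laser pulse
    z0 = z_near + dz_pls / 2.0        # reference distance (eq. (17) inverted)
    t_open = 2.0 * z0 / C             # shutter opening delay (eq. (3))
    t_exp = 2.0 * (z_far - z0) / C    # exposure time (from eq. (12))
    return t_open, t_exp

# The 1000 m - 1060 m slice of Figure 4(b): t_exp comes out equal to t_pls.
print(gate_timing(1000.0, 1060.0, 200e-9))  # approx. (6.87e-06 s, 2.0e-07 s)
```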

3. 3D outdoor scene scanning

Beyond conventional single-shot two-dimensional image acquisitions, the active imaging system described here can be operated in a specific way in order to produce a set of tomographic-like images of the observed scene. From only two images it is already possible to recover the scene depth information, while the use of more images allows the quality of the results to be enhanced. The acquisition of tomographic-like images and the way of processing them for the reconstruction of a 2.5D model of the scene are described in this section.

3.1. Acquisition of tomographic-like images

In order to recover the depth information from two different images, it is important to understand which information is acquired by the camera in the case of an active vision system. As in [2], and because it is very close to the form of our laser pulse, we consider the ideal case where the laser pulse $L_{pls}(t)$ of duration $\Delta t_{pls}$ has a square form described by:

$$L_{pls}(t) = \begin{cases} 1, & \forall t \in [0; \Delta t_{pls}] \\ 0, & \text{otherwise} \end{cases} \quad (1)$$

while the camera shutter operation is described by its ideal gain function:

$$G(t) = \begin{cases} 1, & \forall t \in [t_{open}; t_{open} + \Delta t_{exp}] \\ 0, & \text{otherwise} \end{cases} \quad (2)$$

where $t_{open}$ is the shutter opening time and $\Delta t_{exp}$ the exposure time. The shutter opening time is defined as a function of a reference distance $z_\circ$ to the camera, as:

$$t_{open}(z_\circ) = \frac{2}{c} \cdot z_\circ, \quad (3)$$

where $c$ is the speed of light. This opening time corresponds to the time when the beginning of the laser pulse comes back to the camera after having been reflected at the reference distance $z_\circ$. Given the exposure time $\Delta t_{exp}$, the shutter closing time can then be defined as:

$$t_{close}(z_\circ, \Delta t_{exp}) = \frac{2}{c} \cdot z_\circ + \Delta t_{exp}, \quad (4)$$

or alternatively, using the distance $\Delta z_{exp}$ covered by the light during $\Delta t_{exp}$, it can also be written:

$$t_{close}(z_\circ, \Delta z_{exp}) = \frac{2 \cdot z_\circ + \Delta z_{exp}}{c}. \quad (5)$$

When the laser pulse duration $\Delta t_{pls}$ is equal to the exposure time $\Delta t_{exp}$, the closing time corresponds to the time when the end of the laser pulse comes back to the camera after having been reflected at the reference distance $z_\circ$.

If $\Delta t_{exp} < \Delta t_{pls}$, i.e. if the exposure time is shorter than the laser pulse duration, part of the laser pulse will not be integrated by the camera. As this is a waste of efficiency, the exposure time and the laser pulse duration are chosen such that $\Delta t_{exp} > \Delta t_{pls}$. It is then important to notice that, if $\Delta t_{exp} > \Delta t_{pls}$, the whole emitted laser pulse can come back to the camera after having been reflected at a distance $z^*$ greater than $z_\circ$. The time $t^*$ corresponding to the precise moment when the end of the laser pulse comes back to the camera after having been reflected at a distance $z^*$ is given by:

$$t^*(z^*, \Delta z_{pls}) = \frac{2 \cdot z^* + \Delta z_{pls}}{c}. \quad (6)$$

As the limit condition occurs when $t^*$ corresponds to the shutter closing time, it leads to:

$$t^*(z^*, \Delta z_{pls}) = t_{close}(z_\circ, \Delta z_{exp}). \quad (7)$$

Solving equation (7) for $z^*$ finally gives:

$$z^* = z_\circ + \frac{\Delta z_{exp} - \Delta z_{pls}}{2}. \quad (8)$$

Hence the range where the contribution of the laser pulse is maximum is defined as:

$$[z_\circ; z^*] = \left[ z_\circ;\; z_\circ + \frac{\Delta z_{exp} - \Delta z_{pls}}{2} \right]. \quad (9)$$

On the other hand, even if the camera cannot integrate the whole laser pulse duration for reflective elements located beyond $z^*$, a part of the laser pulse is integrated all the same. This partial contribution of the laser pulse linearly decreases as the distance increases, until it completely vanishes at a distance $z^+$. The time $t^+$ necessary for the beginning of the laser pulse to reach $z^+$ and come back to the camera is expressed by:

$$t^+(z^+) = \frac{2}{c} \cdot z^+. \quad (10)$$

As the limit condition occurs when $t^+$ corresponds to the shutter closing time, it yields:

$$t^+(z^+) = t_{close}(z_\circ, \Delta z_{exp}). \quad (11)$$


Figure 5. Laser pulse contribution measured at about 500 m for a pulse duration of 200 ns: (a) exposure time longer than the laser pulse duration; (b) exposure time equal to the laser pulse duration.

Solving equation (11) for $z^+$ finally gives:

$$z^+ = z_\circ + \frac{\Delta z_{exp}}{2}. \quad (12)$$

Hence the range where the contribution of the laser pulse linearly decreases until it vanishes is defined as:

$$[z^*; z^+] = \left[ z_\circ + \frac{\Delta z_{exp} - \Delta z_{pls}}{2};\; z_\circ + \frac{\Delta z_{exp}}{2} \right]. \quad (13)$$

At a distance $z$ in this range, the partial contribution of the laser pulse normalized with respect to its full contribution is written $P(z)$ and given by:

$$P(z) = \frac{2(z_\circ - z) + \Delta z_{exp}}{\Delta z_{pls}}, \quad \text{for } z \in [z^*; z^+]. \quad (14)$$

Similarly, the camera integrates a part of the laser pulse reflected on elements located before $z_\circ$. This partial contribution, corresponding to the last part of the laser pulse, linearly decreases as the distance decreases, until it completely vanishes at a distance $z^-$ located before $z_\circ$. The time $t^-$ necessary for the end of the laser pulse to reach $z^-$ and come back to the camera is expressed by:

$$t^-(z^-, \Delta z_{pls}) = \frac{2 \cdot z^- + \Delta z_{pls}}{c}. \quad (15)$$

As the limit condition occurs when $t^-$ corresponds to the shutter opening time, it leads to:

$$t^-(z^-, \Delta z_{pls}) = t_{open}(z_\circ). \quad (16)$$

Solving equation (16) for $z^-$ finally gives:

$$z^- = z_\circ - \frac{\Delta z_{pls}}{2}. \quad (17)$$

Hence the range where the contribution of the laser pulse linearly increases from 0 to its maximum is defined as:

$$[z^-; z_\circ] = \left[ z_\circ - \frac{\Delta z_{pls}}{2};\; z_\circ \right]. \quad (18)$$

At a distance $z$ in this range, the partial contribution of the laser pulse normalized with respect to its full contribution is again written $P(z)$ and given by:

$$P(z) = \frac{2(z - z_\circ) + \Delta z_{pls}}{\Delta z_{pls}}, \quad \text{for } z \in [z^-; z_\circ]. \quad (19)$$

In a more general way, $P(z)$ can be described by the following cross-correlation product:

$$P(z) = G(t) \otimes L_{pls}(t) = \int_0^{+\infty} G(\tau) \cdot L_{pls}(\tau - t)\, d\tau, \quad (20)$$

and as the distance $z$ is related to the round-trip time $t = \frac{2}{c} \cdot z$ necessary for the light to go and return, it finally yields:

$$P(z) = \int_0^{+\infty} G(\tau) \cdot L_{pls}\!\left(\tau - \frac{2}{c} \cdot z\right) d\tau. \quad (21)$$
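Assembled together, equations (9), (14), (18) and (19) give $P(z)$ a trapezoidal profile. The following sketch of ours evaluates it directly; the function name and arguments are illustrative, and it assumes $\Delta z_{exp} \geq \Delta z_{pls}$ as chosen above.

```python
def pulse_contribution(z, z0, dz_exp, dz_pls):
    """Normalized laser-pulse contribution P(z) for a reflector at range z,
    piecing together eqs. (9), (14), (18) and (19); requires dz_exp >= dz_pls."""
    z_minus = z0 - dz_pls / 2.0                  # eq. (17)
    z_star = z0 + (dz_exp - dz_pls) / 2.0        # eq. (8)
    z_plus = z0 + dz_exp / 2.0                   # eq. (12)
    if z_minus <= z < z0:                        # rising edge, eq. (19)
        return (2.0 * (z - z0) + dz_pls) / dz_pls
    if z0 <= z <= z_star:                        # full contribution, eq. (9)
        return 1.0
    if z_star < z <= z_plus:                     # falling edge, eq. (14)
        return (2.0 * (z0 - z) + dz_exp) / dz_pls
    return 0.0                                   # outside the slice
```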

In the present application, the laser pulse duration $\Delta t_{pls}$ has been set to 200 ns and the exposure time $\Delta t_{exp}$ to 400 ns, which leads to a depth of 90 m for the slice of light. As explained before, only the elements located in the first 30 m beyond the reference distance of observation $z_\circ$ benefit from the full laser pulse. Hence, in order to acquire each element at least once with its full reflection, the step chosen between two successive slices is also 30 m. This further means that, in this case, each pixel is also illuminated once in the $[z^-; z_\circ]$ range and another time in the $[z^*; z^+]$ range.

This specific timing is illustrated in Figure 7. Thus, scanning a scene from 500 m to 2000 m leads to a set of 50 images. The particular choice of successive "overlapping" slices of light with $\Delta t_{exp} > \Delta t_{pls}$ will be explained in § 3.3; the range of 90 m has been chosen arbitrarily.
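As a quick check of these figures: $\Delta z_{pls} = c \cdot 200\,\mathrm{ns} \approx 60\,\mathrm{m}$ and $\Delta z_{exp} = c \cdot 400\,\mathrm{ns} \approx 120\,\mathrm{m}$, so each slice spans $z^+ - z^- = (\Delta z_{exp} + \Delta z_{pls})/2 = 90\,\mathrm{m}$, the full-contribution zone $[z_\circ; z^*]$ is $(\Delta z_{exp} - \Delta z_{pls})/2 = 30\,\mathrm{m}$ deep, and scanning from 500 m to 2000 m in 30 m steps indeed yields $1500/30 = 50$ images.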


3.2. Image enhancement and segmentation

As will be highlighted in § 3.3, not only the final picture quality depends on the quality and precision of the pixel intensities, but also the precision of the localization of each pixel in the 2.5D scene model. Single-shot tomographic-like images are quite noisy, but using a few of them and working pixelwise in the time domain rather than in the spatial domain allows the acquired images to be denoised almost completely. As the noise is random, the averaging of a very large set of images obtained under the same illumination conditions would lead to a perfectly denoised image with an enhanced signal resolution. But the goal in the choice of an appropriate processing method is to get the best result with as few images as possible. The noise encountered in the images acquired by our system is Poisson-type and presents strong impulse components. Removing those impulse components by averaging requires the processing of a lot of images. A median filter deals more efficiently with impulse noise, as it sorts all the measurements made over time for the same pixel into an array and selects the median value in the middle of the array. Unfortunately, the median value is still a raw measured intensity with a basic, non-enhanced resolution. A good way of combining the advantages of both filters, i.e. a good resolution enhancement of the measured pixel intensities as well as an efficient removal of the impulse noise with only a few images, is the so-called kappa-median filter commonly used for processing astronomical images. For each pixel, it sorts the intensities measured over time, removes an amount kappa corresponding to the approximate amount of impulse noise in the signal, and averages the remaining intensity values in order to enhance the resolution of the signal.
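A minimal sketch of this filter follows. The paper does not spell out the exact rejection rule, so this version of ours assumes the discarded fraction is taken from the top of the sorted samples, since intensifier impulse noise shows up as bright spikes; the function name and NumPy formulation are assumptions.

```python
import numpy as np

def kappa_median_filter(stack, kappa=0.2):
    """Temporal kappa-median filter over co-registered frames of one slice.

    stack : float ndarray of shape (T, H, W), T acquisitions of the slice.
    kappa : fraction of samples per pixel treated as impulse noise.
    """
    t = stack.shape[0]
    k = int(np.ceil(kappa * t))        # number of samples to reject
    k = min(k, t - 1)                  # always keep at least one sample
    s = np.sort(stack, axis=0)         # per-pixel temporal sort
    return s[: t - k].mean(axis=0)     # average the surviving samples
```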

In order to put only significant information into the 2.5D scene model, the images have to be segmented so as to keep only the pixels whose values result from the active illumination. This also implies that parasitic lights have to be removed, as they do not result from the active illumination. This can easily be done by subtracting an image obtained without active illumination from all the tomographic-like images; pixels whose value after subtraction does not exceed a small threshold are then simply ignored.

As shown in Figure 4(a), the foreground is much more illuminated than the background. This is because the light power varies with the square of the observation distance, as shown in Figure 6. The illumination inhomogeneity between foreground and background, which is not directly compensated by the imaging system, can be corrected here by means of image processing. The principle is to compensate the irradiance curve of Figure 6 with respect to the distance of the range gate. However, when doing this, care must be taken to keep a copy of the original pixel intensities, which have to be used without modification in § 3.3. An alternative, untested so far but theoretically more exact, would be to compensate the luminosity of the pixels according to their depth obtained from the method presented in § 3.3.

Figure 6. Image irradiance as a function of the distance.
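A minimal sketch of this per-slice compensation, assuming a pure $1/z^2$ falloff (the measured curve of Figure 6 could be substituted) and an 8-bit display range; the names are ours:

```python
import numpy as np

def compensate_range_falloff(image, z_gate, z_ref):
    """Rescale one slice image so its illumination matches a slice at the
    reference range z_ref, assuming irradiance falls off as 1/z^2.
    Only the returned copy is used for display; the original intensities
    must be kept untouched for the depth computation of Section 3.3."""
    gain = (z_gate / z_ref) ** 2          # undo the square-law falloff
    return np.clip(image * gain, 0.0, 255.0)
```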

3.3. 2.5D model reconstruction of a 3D scene

Given a set of at least two tomographic-like images acquired under the conditions described in § 3.1 and processed as described in § 3.2, the depth of each pixel can be calculated by using two successive images obtained with two successive slices of light. Figure 7 represents the relative contribution $P(z)$ of the laser pulse to the image-forming process for a reflective element located at a distance $z$ from the camera. $P(z)$ is neither the expression of any pixel intensity nor an amount of reflected light, but a ratio which expresses, for a reflective element located at a distance $z$, how much light can be integrated under particular illumination conditions in comparison with the maximum amount of light which could be integrated if the illumination conditions were optimized for this purpose.

Let us consider a pixel of the first image with an intensity $I_{max}$, representing a reflective element physically located in the range $[z_{\circ,i}; z^*_i]$ of the first slice of light, where $P(z) = 1$. Here the intensity is written $I_{max}$ as the pixel benefits from the full duration of the laser pulse. Let us also consider its correspondent of intensity $I$ in the second image, located in the range $[z^-_{i+1}; z_{\circ,i+1}]$ of the second slice of light. Having two different measurements of the intensity for the same reflective element located at a distance $z$ from the camera, the partial contribution of the laser pulse to this pixel in the second image can be described by:


Figure 7. Laser pulse contribution as a function of the distance to the reflective element for two successive "slices of light".

$$P(z) = \frac{I}{I_{max}}. \quad (22)$$

Introducing equation (19) into equation (22) and solving it for the depth $z$ of a pixel yields:

$$z = z_{\circ,i+1} - \left(1 - \frac{I}{I_{max}}\right) \cdot \frac{\Delta z_{pls}}{2}. \quad (23)$$

In the same way, it is also possible to measure $I$ within the range $[z^*_{i-1}; z^+_{i-1}]$ of the preceding slice of light. In this case, introducing equation (14) into equation (22) similarly yields:

$$z = z^*_{i-1} + \left(1 - \frac{I}{I_{max}}\right) \cdot \frac{\Delta z_{pls}}{2}. \quad (24)$$

(Note that $z$ here increases beyond $z^*_{i-1}$ as the ratio $I/I_{max}$ decreases, consistently with the falling edge of $P(z)$.)
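As an illustration of eq. (23), a minimal per-pixel sketch of ours, assuming the two intensities have already been denoised as in § 3.2 and that the pixel is known to lie in the rising zone of slice $i+1$; the names are assumptions:

```python
import numpy as np

def depth_from_ratio(i_max, i_partial, z0_next, dz_pls):
    """Depth via eq. (23): i_max is a pixel's intensity in the
    full-contribution zone of slice i, i_partial its intensity in the
    rising zone [z^-, z_o] of slice i+1, whose reference distance is
    z0_next. Arrays are processed element-wise."""
    ratio = np.clip(i_partial / np.maximum(i_max, 1e-6), 0.0, 1.0)
    return z0_next - (1.0 - ratio) * dz_pls / 2.0
```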

It must also be noticed that in this paper the illumination conditions have been chosen so as to process the ratio $I/I_{max}$, which then leads to the distance $z$. But the ratio $I/I'$ of two pixel intensities measured either in $[z^-; z_\circ]$ or in $[z^*; z^+]$ can also lead to the expression of $z$. This allows us, in particular, to work with an exposure time equal to the laser pulse duration, i.e. $\Delta t_{exp} = \Delta t_{pls}$, as illustrated in Figure 5(b), and with scanning steps of $\Delta t_{pls}/2$. In this case every pixel is illuminated under only two different illumination conditions. The color-coded depth map obtained for an outdoor scene scanned from 500 m to 2000 m is given in Figure 8.

Each tomographic-like image is then shaped with respect to the calculated depth of its pixels and placed in a 2.5D isotropic scene model in order to compose an enhanced view of the scene under observation. Considering the same pixel position over the whole image set, only the pixel with the highest intensity $I_{max}$ is kept and put into the scene model at the depth computed by using equation (23) or (24). The result obtained is given in Figure 9. The real camera view angle is that of Figure 9(a), while a new virtual view angle was computed from the 2.5D model to obtain Figure 9(b). Using a collection of 2.5D objects corresponding to each of the tomographic-like images, rather than one single 2.5D object representing the whole scene, allows us to easily point out shadows in non-illuminated places.
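A minimal sketch of this composition step, assuming co-registered slice images and per-pixel depths already computed via eq. (23) or (24); the array layout and names are ours:

```python
import numpy as np

def compose_scene(slices, depths):
    """For each pixel position, keep across all slice images the sample
    with the highest intensity I_max and the depth computed for it.

    slices, depths : ndarrays of shape (N, H, W)."""
    best = np.argmax(slices, axis=0)          # winning slice per pixel
    rows, cols = np.indices(best.shape)
    intensity = slices[best, rows, cols]      # retained intensity image
    depth = depths[best, rows, cols]          # one depth per image pixel
    return intensity, depth                   # a simple 2.5D model
```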

4. Conclusion

Beyond traditional applications of night-vision range-gated active imaging, it has been shown that our system can be used to explore the third dimension, thus offering a 3D view of the observed scene. Operating this very compact system requires only one person, and it allows kilometer-deep outdoor scenes to be scanned in total darkness. The encouraging results obtained have shown that, with a set of at least two tomographic-like images produced by regularly scanning the observed range in depth, it is possible to be much more precise than the finite-range subdivision defined by the scanning steps. Despite the good visual results obtained, further studies have to be made in order to evaluate the real precision of the system. Furthermore, the best combination of laser pulse duration, exposure time and number of images per scan still has to be determined and related to the precision and the processing time.

Figure 8. Color-coded depth map.

Figure 9. Scanned outdoor scene: (a) reconstructed original view angle; (b) virtual view angle computed from the 2.5D scene model.

Acknowledgments

The authors would like to thank Etienne Bieber and Emmanuel Bacher from ISL for their great work on the automation of the night-vision active imaging system for in-depth scanning.

References

[1] BSSL Italy. Method and device for conditioning the light emission of a laser diode array. EP 1 059 713 A2.

[2] J. Busck. Underwater 3-D optical imaging with a gated viewing laser radar. Optical Engineering, 44(11):116001.1–116001.7, November 2005.

[3] J. Busck and H. Heiselberg. Gated viewing and high-accuracy three-dimensional laser radar. Applied Optics, 43(24):4705–4710, August 2004.

[4] F. Christnacher, Y. Lutz, and D. Monnin. Systèmes d'imagerie active portables et embarquables dans différents vecteurs d'observation. In Optro 2005, May 2005.

[5] INO Quebec. Multiple emitter laser diode assembly with graded-index fiber micro-lenses. US Patent 5,825,803.

[6] M. Kawakita. Gain-modulated Axi-Vision camera (high speed high-accuracy depth-mapping camera). Optics Express, 12(22):5336–5344, November 2004.

[7] Y. Lutz and F. Christnacher. Laser diode illuminator for night vision on board of a 155 mm artillery shell. In AeroSense and Defense Sensing, Florida, USA, April 2003.
