This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier’s archiving and manuscript policies are encouraged to visit: http://www.elsevier.com/copyright




Model-based light stripe detection for indoor navigation

Ho Gi Jung a,b,*, Jaihie Kim b

a MANDO Corporation, Republic of Korea
b Yonsei University, Republic of Korea

ARTICLE INFO

Article history:

Received 17 October 2007

Received in revised form 16 April 2008

Accepted 30 July 2008

Available online 10 September 2008

Keywords:

Light stripe projection

Light stripe feature detection

Light stripe feature width

Laplacian of Gaussian filtering

High dynamic range imaging

Indoor navigation

ABSTRACT

Light stripe projection (LSP) is one of the most robust 3D recognition methods, and the most common method of light stripe feature (LSF) detection is Laplacian of Gaussian (LOG) filtering. When the distances to objects vary widely, as in indoor navigation, the LSF width also varies with distance. Because the window size of spatial filtering significantly influences performance, varying LSF widths disturb LOG-based LSF detection with a constant base length, that is, a constant window size. In this work, the irradiance maps of LSFs were reconstructed by high dynamic range imaging (HDRi) while the distance was changed. By analyzing the irradiance maps, the LSF irradiance map was modeled as a 2D Gaussian function whose parameters were approximated as functions of distance. After the LSF width function of distance was derived, it was transformed into an LSF width function of pixel coordinates through the one-to-one relation between distance and y coordinate in LSP. The LSF width function can provide the proper base length for LOG filtering. A self-calibration procedure is proposed that can estimate the parameters of the LSF width function with only several images captured in a new environment. Experimental results show that the proposed model-based LSF detection outperforms normal LOG-based LSF detection.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Light stripe projection (LSP) is a method of acquiring 3D information with a light plane projector and a camera. The light plane generated by the projector produces a light stripe feature (LSF) on the object surface. By finding the common solution of the light plane equation and the line equation of a pixel on the LSF acquired by the camera, the 3D coordinates of the point on the object surface corresponding to that pixel can be measured [1]. Typical application fields of LSP are industrial inspection tasks such as tire inspection [2], semiconductor inspection [3], highway surface profiling [4], 3D log surface scanning [5] and workpiece structure verification [6].

To guarantee the accuracy of LSP, the LSF should be detected exactly, irrespective of the illumination condition. The most commonly used approach employs a single-wavelength light source and an optical band-pass filter tuned to that wavelength. This approach simplifies software-based post-processing and provides a reliable LSF detection result. However, it has the disadvantage that the captured image contains no information except the LSF. A pulsed light source that is turned on only while the imaging element captures a scene makes LSF detection easier and enables the system to satisfy the eye-safety condition. When background illumination contains considerable light power at the same wavelength as the light source, background subtraction is used: the LSF is extracted by subtracting an image captured with the light source off from an image captured with the light source on. This method cannot be used if the camera or the forward scene changes between the two images [7].

Various methods have been developed to make LSF detection more efficient by analyzing the effects of object surface shape and material. Zhang Guangjun et al. [8] developed an ellipse fitting that uses the already known radius as a constraint to detect the LSF on a cylindrical workpiece. Josep Forest et al. [9] developed a finite impulse response (FIR) low-pass filter-based preprocessing method for translucent material, assuming that the scattering of incident light in the material is noise. Baba et al. developed two methods for measuring the incident angle of light into the camera in order to apply LSP to specular surfaces: a field stop using multiple slits [10] and a beam splitter between two orthogonally arranged cameras [11]. Narasimhan et al. [12] derived physical models for the appearance of a surface immersed in a scattering medium, in order to apply LSP and stereo to underwater and aerial imaging.

If LSP is applied to a situation where the variation of depth to objects is large, as in environment and object recognition for intelligent vehicles [7,13] and mobile robots [14–16] in indoor navigation, two new problems arise. The first is the defocusing problem: the LSF detection system should be able to capture a focused LSF image on object surfaces located in a wide depth range. The second problem is LSF width variation. An example of LSF width variation when the depths to objects vary is shown in Fig. 1.

Li et al. [17] made the focused plane coincide with the light plane of LSP by inclining the image plane based on the Scheimpflug condition. The Scheimpflug condition, well known and long used in photography, provides considerable improvement in depth of field without loss of intensity (the lens aperture can be kept at maximum). Chang et al. [18] focused on the middle of the camera scene depth and applied a 2D wavelet transform; by increasing LSF-related components and decreasing other components in the frequency domain, they could enhance the defocused LSF. While detecting the LSF with the Canny operator, they applied a 2-channel wavelet transform to the 1D profile perpendicular to the LSF direction; by using the average of the low-frequency components as the high threshold value (HTV) and the average of the high-frequency components as the low threshold value (LTV) of the Canny operator, they could enhance LSF detection performance.

Contents lists available at ScienceDirect

Optics and Lasers in Engineering

journal homepage: www.elsevier.com/locate/optlaseng

0143-8166/$ - see front matter © 2008 Elsevier Ltd. All rights reserved.
doi:10.1016/j.optlaseng.2008.07.018

* Corresponding author at: 413-5, Gomae-Dong, Giheung-Gu, Yongin-Si, Kyonggi-Do 446-901, Republic of Korea. Tel.: +82 31300 5253; fax: +82 31300 5496.
E-mail address: [email protected] (H.G. Jung).

Optics and Lasers in Engineering 47 (2009) 62–74

The most general LSF detection method uses the Laplacian of Gaussian (LOG), assuming that the LSF has a Gaussian profile [2,19–20]. LOG is equivalent to removing noise by Gaussian filtering and then finding the position with the maximum second-derivative value. The most important parameter of spatial filtering is the window size: too small a window amplifies high-frequency noise, and too large a window ignores the detail of features [21]. Therefore, considering the LSF width, it is important to set the size parameter of the LOG operator to a proper value.

This article proposes a method to improve LOG-based LSF detection when the depth variation is large, as in indoor navigation. It addresses the situation in which an intelligent vehicle or mobile robot uses LSP for navigation in an indoor environment, such as an underground parking lot. Generally, an indoor navigation system uses a wide-angle lens, so it does not suffer from the defocusing problem. Furthermore, it is reasonable to assume that an indoor wall is a homogeneous Lambertian surface. However, the variation of depth to objects is very large, and the system cannot use background subtraction because it must detect the LSF while moving. Exceptionally, it is assumed that background subtraction can be used during the calibration procedure. This article explains the proposed method in five steps: (1) reconstructing the irradiance map of the LSF using high dynamic range imaging (HDRi) [22] and then modeling the LSF irradiance map as a 2D Gaussian function; (2) approximating the parameters of the 2D Gaussian function as functions of distance and then deriving the LSF width function of distance; (3) based on the one-to-one relation between y coordinates and distance in LSP, deriving the LSF width function of pixel coordinates; (4) deriving a novel LSF detection method that uses half of the LSF width at a specific pixel coordinate as the base length of LOG filtering, that is, LOG-based LSF detection sets its window size to the predicted LSF width; (5) a self-calibration procedure for the proposed LOG-based LSF detection method using background subtraction. The self-calibration procedure was executed using several image pairs captured in an underground parking lot. With the parameters obtained, the proposed LSF detection method was applied to various test images. When the result of the proposed method was compared with that of normal LOG-based LSF detection using a constant base length, the proposed method proved superior. In particular, it was confirmed that the optimal base length of the normal method for a specific scene depends on the distribution of scene depth, brightness, and object shape. Consequently, it was verified that the proposed method can provide additional and more accurate LSF information to an LSP-based indoor navigation system.

2. Irradiance map reconstruction-based 2D Gaussian model of LSF

An intensity image is not proper for LSF analysis because the camera setting affects it severely. Besides, it contains a lot of quantization noise caused by the limited bit length of each pixel. Therefore, the authors reconstructed and used the irradiance map, which is a physical measure of light power, instead of intensity to observe the change of the LSF with respect to distance.

2.1. HDRi and recovery of camera response curve [22]

Debevec's HDRi is based on exploiting a physical property of imaging systems, both photochemical and electronic, known as reciprocity. The response of a film to variations in exposure is summarized by the characteristic curve, or Hurter–Driffield curve. In the case of charge-coupled arrays, reciprocity holds under the assumption that each site measures the total number of photons it absorbs during the integration time. The exposure X is defined as the product of the irradiance E at the imager and the exposure time t. The intensity value Z can be represented by a nonlinear function of the exposure X, as shown in Eq. (1). The nonlinear function f is the composition of the characteristic curve of the imager and the nonlinearities introduced by the later processing steps. It is assumed that the function f increases monotonically, so its inverse f⁻¹ is well defined:

Z = f(X) = f(Et)    (1)

Eq. (2) is obtained by using the inverse function f⁻¹, replacing the exposure X with the product of irradiance E and exposure time t, and applying the log function to both sides. To simplify notation, defining g = log f⁻¹ converts Eq. (2) into Eq. (3), where i is a spatial index over pixels and j is an index over exposure times:

log f⁻¹(Z_ij) = log E_i + log t_j    (2)

g(Z_ij) = log E_i + log t_j    (3)

Fig. 1. LSF width variation in the case of indoor navigation.

The problem is to find E_i and g simultaneously in a least-squared-error sense. Because the intensity Z takes a finite number of values, g can also be represented by the same number of values. Therefore, the problem is solved by finding the E_i and g that minimize the quadratic objective function in Eq. (4), where N is the number of pixel locations, P is the number of photographs, and Z_min and Z_max represent the least and greatest pixel values, respectively:

O = Σ_{i=1}^{N} Σ_{j=1}^{P} [ g(Z_ij) − log E_i − log t_j ]² + λ Σ_{z=Z_min+1}^{Z_max−1} g″(z)²    (4)

By capturing a scene with multiple exposure times, the irradiance map and the camera response curve, that is g, can be estimated. As the response curve estimated from real images is noisy, a modeled response curve like the equation below is used:

log(A_rc·Z + B_rc) = log(X)    (5)
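The minimization of Eq. (4) is an ordinary linear least-squares problem in the sampled values of g and the per-pixel log irradiances. A minimal sketch of the idea in Python/NumPy (the smoothness weight, the mid-level anchor g(128) = 0 and the synthetic setup are illustrative assumptions, not values from the paper):

```python
import numpy as np

def solve_response(Z, log_t, lam=100.0, n_levels=256):
    """Minimize Eq. (4): data rows g(Z_ij) - log E_i = log t_j,
    smoothness rows lam * g''(z) = 0, plus one row anchoring g at mid-level
    to remove the common-offset ambiguity between g and log E."""
    N, P = Z.shape
    A = np.zeros((N * P + n_levels - 2 + 1, n_levels + N))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(N):
        for j in range(P):
            A[k, Z[i, j]] = 1.0        # g(Z_ij)
            A[k, n_levels + i] = -1.0  # -log E_i
            b[k] = log_t[j]
            k += 1
    for z in range(1, n_levels - 1):   # g''(z) ~ g(z-1) - 2 g(z) + g(z+1)
        A[k, z - 1:z + 2] = lam * np.array([1.0, -2.0, 1.0])
        k += 1
    A[k, n_levels // 2] = 1.0          # anchor g(128) = 0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n_levels], x[n_levels:]  # g over pixel values, log E per pixel
```

The recovered g can then be fitted with the smooth model of Eq. (5).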

2.2. LSF irradiance map

After capturing a scene with different exposure times, the irradiance value corresponding to each exposure time t can be calculated by applying the measured intensity Z and the used exposure time t to Eq. (3). The irradiance E at pixel (x, y), E(x, y), is calculated by averaging the calculated irradiance values. The LSF irradiance map can be estimated by calculating the difference between two irradiance maps: one with the light plane projector on and the other with it off. Generally, a camera for indoor navigation uses a wide-angle or fisheye lens to cover a wide angular range. Consequently, the acquired LSF irradiance map is distorted by the radial distortion of the fisheye lens. This distortion is removed by rectification using radial distortion parameters estimated during the lens calibration procedure [23].
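The two steps above, averaging per-exposure irradiance estimates via Eq. (3) and differencing the projector-on and projector-off maps, can be sketched as follows, assuming g is available as a 256-entry lookup table (function names are hypothetical):

```python
import numpy as np

def irradiance_map(images, exposure_times, g):
    """E(x, y) from Eq. (3): log E = g(Z) - log t, averaged over exposures."""
    log_E = np.mean([g[img] - np.log(t) for img, t in zip(images, exposure_times)],
                    axis=0)
    return np.exp(log_E)

def lsf_irradiance_map(E_on, E_off):
    """LSF irradiance map: difference of projector-on and projector-off maps."""
    return np.clip(E_on - E_off, 0.0, None)
```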

2.3. Estimation of y-axis parameters

If the LSF irradiance map is assumed to follow a 2D Gaussian function [19], it can be expressed mathematically as Eq. (6), where E(x, y) denotes the irradiance map value at image coordinates (x, y). Consequently, to describe the LSF irradiance map with a 2D Gaussian model, five Gaussian parameters should be estimated: amplitude K, mean (μ_x, μ_y) and standard deviation for each axis (σ_x, σ_y):

E(x, y) = ( K / (2π σ_x σ_y) ) exp( −(1/2) [ (x − μ_x)²/σ_x² + (y − μ_y)²/σ_y² ] )    (6)

It is hard to estimate all five Gaussian parameters simultaneously. As the 2D Gaussian function is separable, E(y) can be reconstructed up to scale as Eq. (7) by integrating Eq. (6) with respect to x. In other words, the irradiance function of y, E(y), acquired by integrating E(x, y) over a limited x range has the same mean and variance as the E(y) acquired by integrating E(x, y) over the whole x-axis. Owing to the wide field of view (FOV) of the fisheye lens, the irradiance function of x, E(x), contains only a limited portion of the Gaussian function in the x-axis direction. On the contrary, E(y) can contain the whole Gaussian function in the y-axis direction. Therefore, it is reasonable to estimate the parameters of the y-axis distribution first:

E(y) = ∫_{x_left}^{x_right} ( 1/(√(2π) σ_x) ) exp( −(1/2) (x − μ_x)²/σ_x² ) dx · ( K/(√(2π) σ_y) ) exp( −(1/2) (y − μ_y)²/σ_y² )
     = ( I_x K/(√(2π) σ_y) ) exp( −(1/2) (y − μ_y)²/σ_y² )    (7)

By applying the log function to both sides of Eq. (7) and arranging it into matrix form, a linear equation over all y coordinates and the corresponding E(y) values is acquired. By pseudo-inverse, the parameter matrix P_Y is estimated. From P_Y, the y-axis standard deviation σ_y and mean μ_y are estimated as shown in Eqs. (8) and (9), respectively. y_bottom and y_ceil denote the y coordinates belonging to the region of interest (ROI):

[ y_bottom² y_bottom 1 ; ⋮ ; y_ceil² y_ceil 1 ] · P_Y = [ log E(y_bottom) ; ⋮ ; log E(y_ceil) ],
where P_Y = [ −1/(2σ_y²) ; μ_y/σ_y² ; log(I_x K) − μ_y²/(2σ_y²) − log(√(2π) σ_y) ]

σ_y² = −1/(2 P_Y(1))    (8)

μ_y = P_Y(2) σ_y²    (9)
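The pseudo-inverse estimation of Eqs. (8)–(9) amounts to fitting the log of the integrated profile with a quadratic in the coordinate; the same sketch applies to the x-axis parameters of Eqs. (11)–(12). Function name and the synthetic profile below are illustrative:

```python
import numpy as np

def fit_axis_gaussian(coords, values):
    """Least-squares fit of log(values) to a quadratic in coords, then recover
    mu and sigma as in Eqs. (8)-(9): P(1) = -1/(2 sigma^2), mu = P(2) sigma^2."""
    A = np.stack([coords**2, coords, np.ones_like(coords)], axis=1)
    P, *_ = np.linalg.lstsq(A, np.log(values), rcond=None)
    sigma2 = -1.0 / (2.0 * P[0])
    mu = P[1] * sigma2
    return mu, np.sqrt(sigma2)
```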

2.4. Estimation of scale and x-axis parameters

E(x), the irradiance function of x, can be established by integrating E(x, y) in the y-axis direction, as shown in Eq. (10). The previously estimated y-axis mean μ_y and standard deviation σ_y determine the value of I_y:

E(x) = ∫_{y_bottom}^{y_ceil} ( 1/(√(2π) σ_y) ) exp( −(1/2) (y − μ_y)²/σ_y² ) dy · ( K/(√(2π) σ_x) ) exp( −(1/2) (x − μ_x)²/σ_x² )
     = ( I_y K/(√(2π) σ_x) ) exp( −(1/2) (x − μ_x)²/σ_x² )    (10)

By applying the log function to both sides of Eq. (10) and arranging it into matrix form, a linear equation over all x coordinates and the corresponding E(x) values is acquired. By pseudo-inverse, the parameter matrix P_X is estimated. From P_X, the x-axis standard deviation σ_x, mean μ_x and amplitude K are estimated as shown in Eqs. (11)–(13), respectively. x_left and x_right denote the x coordinates belonging to the ROI:

[ x_left² x_left 1 ; ⋮ ; x_right² x_right 1 ] · P_X = [ log E(x_left) ; ⋮ ; log E(x_right) ],
where P_X = [ −1/(2σ_x²) ; μ_x/σ_x² ; log(I_y K) − μ_x²/(2σ_x²) − log(√(2π) σ_x) ]

σ_x² = −1/(2 P_X(1))    (11)

μ_x = P_X(2) σ_x²    (12)

K = exp( P_X(3) + μ_x²/(2σ_x²) + log(√(2π) σ_x) ) / I_y    (13)


2.5. Parameter functions of distance

While changing the distance between the light plane projector and a painted wall, the 2D Gaussian function parameters were estimated. It is observed that the amplitude K is inversely proportional to the distance squared, and that the x-axis standard deviation σ_x is inversely proportional to the distance squared with a constant offset. For the y-axis, the standard deviation σ_y is inversely proportional to the distance with a constant offset. Assuming that the mean of the 2D Gaussian function is invariant with respect to distance, the LSF irradiance map can be modeled at any distance using the equations below:

K(d) = a_amp / d²    (14)

σ_x(d) = a_σx / d² + b_σx    (15)

σ_y(d) = a_σy / d + b_σy    (16)
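Evaluating the three parameter functions at a given distance is then straightforward (the numeric values in the check below are arbitrary):

```python
def gaussian_params_at(d, a_amp, a_sx, b_sx, a_sy, b_sy):
    """Evaluate Eqs. (14)-(16) at distance d."""
    K = a_amp / d**2            # amplitude falls off with squared distance
    sigma_x = a_sx / d**2 + b_sx
    sigma_y = a_sy / d + b_sy
    return K, sigma_x, sigma_y
```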

3. Width function-based LSF detection

3.1. LSF width function of distance

The LSF width is defined as the length of the region whose intensity exceeds that of neighboring pixels by θ_Z in the y-axis direction. With Eq. (3), the intensity threshold θ_Z can be converted into an irradiance threshold θ_E. Measuring the LSF width is then the same as finding the two y coordinates having irradiance value θ_E in a specific column of the LSF irradiance map. Here it is assumed that pixels outside the LSF have zero value in the LSF irradiance map. According to the 2D Gaussian model in Eq. (6), pixels whose irradiance value is θ_E satisfy the equation below:

(y − μ_y)² = 2σ_y² ( log[ K/(2π σ_x σ_y) ] − (1/2) (x − μ_x)²/σ_x² − log θ_E )    (17)

As twice the y-axis displacement between a pixel whose irradiance value is θ_E and the y-axis mean μ_y is the LSF width corresponding to θ_E, the LSF width function of x coordinate and distance d, w(x, d), is defined as in Eq. (18). Note that the amplitude, x-axis standard deviation, and y-axis standard deviation of the Gaussian function are all functions of the distance d:

w(x, d) = 2 √( 2σ_y(d)² ( log[ K(d)/(2π σ_x(d) σ_y(d)) ] − (1/2) (x − μ_x)²/σ_x(d)² − log θ_E ) )    (18)
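A sketch of Eq. (18) as a function, with the convention (an assumption here, not stated in the paper) that the width is zero wherever the model never reaches θ_E:

```python
import math

def lsf_width(x, K, sigma_x, sigma_y, mu_x, theta_E):
    """Eq. (18): predicted stripe width at column x; 0 where the 2D Gaussian
    model stays below the irradiance threshold theta_E."""
    arg = (math.log(K / (2.0 * math.pi * sigma_x * sigma_y))
           - 0.5 * (x - mu_x)**2 / sigma_x**2
           - math.log(theta_E))
    return 2.0 * math.sqrt(2.0 * sigma_y**2 * arg) if arg > 0.0 else 0.0
```

As a consistency check, the model irradiance evaluated half a width away from μ_y equals θ_E.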

3.2. LSF width function of pixel coordinates

LSP is an important tool for acquiring 3D information in various fields. It has the advantage that the y coordinates of the LSF generated by the projected light plane can be converted directly into 3D information. By making the light plane orthogonal to the camera y-axis, it can be assumed that there is only one pixel belonging to the LSF in each image column. Eq. (19) shows how the y coordinate in the image coordinate system is directly related to the Z coordinate in the world coordinate system [19]. In this application, the Z coordinate means the distance to the object, that is d; f means the focal length of the camera lens; and b means the baseline of LSP. α denotes the angle between the light plane and the camera y-axis:

d = f b tan α / (f − y tan α)  ⟺  y = f/tan α − f b/d    (19)

As in Eq. (19), the y coordinate of the LSF is related to the distance between camera and object by a one-to-one correspondence. Therefore, by replacing d in Eq. (18) with Eq. (19), one can derive the LSF width function of pixel coordinates (x, y), w(x, y).
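Eq. (19) and its inverse can be sketched as a pair of functions; the focal length, baseline and angle in the check below are arbitrary illustrative values:

```python
import math

def depth_from_row(y, f, b, alpha):
    """d = f b tan(alpha) / (f - y tan(alpha)), the left identity of Eq. (19)."""
    return f * b * math.tan(alpha) / (f - y * math.tan(alpha))

def row_from_depth(d, f, b, alpha):
    """Inverse mapping: y = f / tan(alpha) - f b / d."""
    return f / math.tan(alpha) - f * b / d
```

The two mappings are exact inverses wherever f − y tan α ≠ 0, which is what makes the substitution of Eq. (19) into Eq. (18) well defined.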

3.3. Model-based LSF detection

LOG, or the Mexican hat wavelet, is one of the most popular line detection methods in the computer vision field. In mathematics and numerical analysis, the Mexican hat wavelet is the normalized second derivative of a Gaussian function, as in Eq. (20) [24–25]. As LOG is the combination of peak enhancement and Gaussian low-pass filtering, the σ value influences its frequency response. As shown in Fig. 2, σ determines the locations of the two zero-crossing positions, whose displacement can be thought of as the line width. If the value is much smaller than the true line width, LOG will detect the boundary of the line region instead of its center. On the other hand, if the value is much larger than the true line width, LOG will ignore the line:

ψ(t) = ( 1/(√(2π) σ³) ) ( 1 − t²/σ² ) exp( −t²/(2σ²) )    (20)

When the surrounding environment consists of homogeneous Lambertian surfaces, the LSF width function of pixel coordinates (x, y), w(x, y), can provide LSF width information to the LOG-based LSF detector. As only one position is detected for each column in LSP, LSF detection amounts to finding the y coordinate showing the largest LOG filtering response. With w(x, y) defined by applying a specific LSF intensity threshold value to θ_Z of Eqs. (18) and (19), the LSF width at all pixels can be calculated before actual navigation starts. When applying LOG filtering at any pixel (x, y), one can set the base length of the LOG operator, that is σ in Eq. (20), to half of the calculated LSF width. This means that the LOG operator uses the optimal base length tuned for each pixel coordinate. As the proposed LSF detection method is based on the 2D Gaussian model of the LSF irradiance map, it is called model-based LSF detection to differentiate it from normal LOG-based LSF detection.
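A minimal 1D sketch of the model-based detector for a single image column: each candidate row is filtered with a LOG kernel whose σ is half the predicted width at that row, and the row of maximum response is returned. The ±3σ kernel support and the lower bound on σ are illustrative choices, not from the paper:

```python
import numpy as np

def mexican_hat(t, sigma):
    """Normalized second derivative of a Gaussian, Eq. (20)."""
    return ((1.0 / (np.sqrt(2.0 * np.pi) * sigma**3))
            * (1.0 - t**2 / sigma**2) * np.exp(-t**2 / (2.0 * sigma**2)))

def detect_lsf_row(column, predicted_widths):
    """Model-based detection in one column: per-row LOG base length set to
    half the predicted LSF width; returns the argmax row."""
    n = len(column)
    responses = np.empty(n)
    for y in range(n):
        sigma = max(predicted_widths[y] / 2.0, 0.5)  # floor avoids degenerate kernels
        half = int(3.0 * sigma) + 1
        t = np.arange(-half, half + 1)
        kernel = mexican_hat(t, sigma)
        lo, hi = max(0, y - half), min(n, y + half + 1)
        responses[y] = np.dot(column[lo:hi], kernel[half - (y - lo):half + (hi - y)])
    return int(np.argmax(responses))
```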

3.4. Self-calibration

Fig. 2. LOG, Mexican hat wavelet.

As the LSF width is considerably affected by the object material, when the proposed method is exposed to a new environment with an unknown wall material, it should adjust the five parameters of Eqs. (14)–(16). However, as the HDRi-based method used to reconstruct the LSF irradiance map needs to capture hundreds of images while moving slightly and perpendicularly to the wall surface, it is hard to execute whenever the system meets a new environment.

Therefore, a self-calibration procedure was developed for evaluating the parameter values of the LSF width function using only images captured in the new environment. It is assumed that the images contain scenes consisting of objects located at various distances from the camera. As mentioned in the introduction, background subtraction is used during self-calibration. The proposed self-calibration consists of three steps: (1) evaluation of the σ_y parameters, that is a_σy and b_σy, using LSFs detected by analyzing the histogram of the difference image; (2) evaluation of the remaining three parameters, that is a_amp, a_σx and b_σx, using LSFs located at the image horizontal centre; (3) optimization of the five parameters such that the difference between detected LSF width and predicted LSF width becomes minimal.

3.4.1. Calibration of σ_y parameters

It is assumed that the system captures several image pairs while changing its direction. Each image pair is captured without motion: one image with the light plane projector on and the other with the projector off. Let the difference image be defined as the image that results from subtracting the image with the projector off from the image with the projector on. As LSF pixels cover a small area of the whole difference image, and as an image captured by a camera generally contains Gaussian noise, the intensity histogram of the difference image is expected to have a single-sided Gaussian distribution centered at zero. It is reasonable to classify pixels whose difference value is greater than three times the Gaussian distribution's standard deviation as belonging to the LSF.
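A sketch of this classification step; estimating the noise standard deviation from the whole difference image, as done below, is a deliberate simplification of the single-sided-histogram fit described above:

```python
import numpy as np

def lsf_pixel_mask(img_on, img_off):
    """Classify difference-image pixels above 3x the noise standard deviation
    as LSF pixels (global std is a simplifying stand-in for fitting the
    single-sided Gaussian of the difference histogram)."""
    diff = img_on.astype(float) - img_off.astype(float)
    diff[diff < 0.0] = 0.0
    return diff > 3.0 * diff.std()
```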

A binary morphological open operation was applied to the pixels classified as belonging to the LSF to reduce noise, and then skeleton detection was applied to find the LSF center for each column. According to Eq. (19), the distance from camera to object d can be calculated from the y coordinate of the LSF center. Assuming that all LSFs are Gaussian in the vertical direction, w1 is defined as the LSF width at threshold θ1 (a little smaller than the intensity maximum) and w2 as the LSF width at threshold θ2 (a little larger than the intensity minimum). In this case, values at 80% and 20% of the maximum intensity value are used, respectively. If the irradiance value corresponding to θ1 is θ_E1 and that corresponding to θ2 is θ_E2, then w1 and w2 are expressed as in Eqs. (21) and (22), respectively, according to Eq. (18):

w1 = 2 √( 2σ_y(d)² ( log[ K(d)/(2π σ_x(d) σ_y(d)) ] − (1/2) (x − μ_x)²/σ_x(d)² − log θ_E1 ) )    (21)

w2 = 2 √( 2σ_y(d)² ( log[ K(d)/(2π σ_x(d) σ_y(d)) ] − (1/2) (x − μ_x)²/σ_x(d)² − log θ_E2 ) )    (22)

By subtracting the square of Eq. (22) from the square of Eq. (21), as below, the y-axis standard deviation at distance d, σ_y(d), can be estimated:

w1² − w2² = 8 σ_y(d)² log( θ_E2/θ_E1 )

σ_y(d) = √( (w1² − w2²) / (8 log(θ_E2/θ_E1)) )    (23)
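Eq. (23) in code form: the bracketed model terms shared by w1 and w2 cancel in the subtraction, so only the two irradiance thresholds remain. The width values in the check below are generated from the width model with σ_y = 3 and an arbitrary common bracket term:

```python
import math

def sigma_y_from_two_widths(w1, w2, theta_E1, theta_E2):
    """Eq. (23): recover sigma_y(d) from two widths measured at two
    irradiance thresholds of the same stripe cross-section."""
    return math.sqrt((w1**2 - w2**2) / (8.0 * math.log(theta_E2 / theta_E1)))
```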

If LSFs are detected in M columns, M values of σ_y(d) can be estimated from the M distance values in the same way. With the estimated values and Eq. (16), a matrix linear equation can be written as Eq. (24). The equation is then solved by least squares, and a_σy and b_σy are estimated:

[ 1/d_1 1 ; ⋮ ; 1/d_M 1 ] · [ a_σy ; b_σy ] = [ σ_1 ; ⋮ ; σ_M ]    (24)
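The least-squares solution of Eq. (24) is a two-column linear fit:

```python
import numpy as np

def fit_sigma_y_params(distances, sigmas):
    """Least-squares solution of Eq. (24) for (a_sigma_y, b_sigma_y)."""
    d = np.asarray(distances, dtype=float)
    A = np.stack([1.0 / d, np.ones_like(d)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(sigmas, dtype=float), rcond=None)
    return a, b
```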

3.4.2. Calibration of σ_x and amplitude parameters

The LSF width w(d) corresponding to the detection threshold θ_E was measured for LSFs located in the image central column. According to Eq. (19), the distance d is calculated from the LSF y coordinate, and σ_y(d) can be calculated from the parameter values estimated by Eq. (24). As the x coordinate of these LSFs is the image central column, Eq. (18) is simplified by removing the x-coordinate-related terms, as shown below:

w(d) = 2 √( 2σ_y(d)² ( log[ K(d)/(2π σ_x(d) σ_y(d)) ] − log θ_E ) )

After squaring both sides and collecting K(d) and σ_x(d) on the left-hand side (LHS), as in Eq. (25), the right-hand side (RHS) can be calculated, because it is composed of setting values, measured values and estimated values. Let the calculated RHS value be denoted as C(d):

K(d)/σ_x(d) = exp( w(d)²/(8σ_y(d)²) + log(2π σ_y(d)) + log θ_E ) ≡ C(d)    (25)

After replacing K(d) and σ_x(d) in Eq. (25) with Eqs. (14) and (15), Eq. (25) is arranged into a linear equation of the parameters, as shown below:

(a_amp/d²) / (a_σx/d² + b_σx) = C(d)  →  a_amp − C(d) a_σx − C(d) d² b_σx = 0

Fig. 3. Estimated response curve and its curve fitting result.

If the equation is applied to the LSFs located in the image central column, a matrix linear equation can be acquired as in Eq. (26), where L is the number of LSFs used:

[ 1 −C(d_1) −C(d_1) d_1² ; ⋮ ; 1 −C(d_L) −C(d_L) d_L² ] · P_C = 0,  where P_C = [ a_amp ; a_σx ; b_σx ]    (26)

As the parameter matrix P_C of Eq. (26) should not be a zero vector, it must be a null vector of the LHS matrix. Therefore, by singular value decomposition (SVD), the three parameters can be estimated up to scale.
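The up-to-scale solution of Eq. (26) is the right singular vector associated with the smallest singular value; a sketch with synthetic, arbitrarily chosen parameters:

```python
import numpy as np

def solve_up_to_scale(A):
    """Null vector of A via SVD: the right singular vector of the smallest
    singular value minimizes |A p| subject to |p| = 1."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```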

3.4.3. Genetic algorithm (GA)-based optimization

The parameters are optimized such that the differences between measured LSF widths and estimated LSF widths are minimized.

Fig. 4. LSF irradiance map reconstruction and rectification. (a) Irradiance map reconstruction with light plane projector on. (b) Irradiance map reconstruction with light plane projector off. (c) LSF irradiance map. (d) Rectified LSF irradiance map. (e) Rectified LSF irradiance map in gray image form.



First, to estimate the scale factor k of a_amp, a_σx, b_σx, a gene was composed as [k, a_amp, a_σx, b_σx], and GA-based optimization was conducted such that the fitness function (27) is minimized. The resultant parameter set was P = [k·a_amp, k·a_σx, k·b_σx, a_σy, b_σy]. Lastly, a gene was composed with all five parameters of the LSF width function, P = [a_amp, a_σx, b_σx, a_σy, b_σy], and GA-based optimization was conducted again to minimize the fitness function (27). As GA-based optimization rarely falls into local minima and can be applied to problems with non-continuous fitness functions, it is suitable for cases such as this one, where the fitness landscape cannot be predicted. In particular, because the initial parameter values can be set close to the correct values using the methods explained in Sections 3.4.1 and 3.4.2, the convergence probability is high and convergence is fast:

$$\epsilon(P)=\frac{1}{M}\sum_{m=1}^{M}\left(\frac{\left|w(m)-\hat{w}(m)\right|}{w(m)}\exp\!\left(-\frac{\left|x(m)-O_{x}\right|}{W}\right)\right)+\sum_{x=O_{x},\;y_{min}\le y\le y_{max}}\left[\hat{w}(x,y)<0\;\cup\;\hat{w}'(x,y)<0\right] \qquad (27)$$

The fitness function (27) reflects two factors: first, the differences between measured LSF widths and predicted LSF widths; second, the degree to which the predicted LSF width function disobeys shape constraints. The first term of fitness function (27) measures the error of the predicted LSF width: the difference between the measured LSF width w(m) and the LSF width ŵ(m) predicted from P, divided by w(m). The number of LSFs used is M. This error is weighted by the x-axis displacement from the image central column and then averaged. The first term is weighted by x-axis displacement because an LSF located close to the image central column provides higher LSF width resolution and is more important for precise parameter estimation. The x-axis displacement, forced to be negative, is divided by the image width W and passed through the exponential function, so that the resultant weight is nonlinear and limited to the 0–1 range. The second term of fitness function (27) counts how often the LSF width at the image central column, x = O_x, estimated with parameter set P, becomes less than zero, or its gradient becomes less than zero. If the range of LSF y coordinates is [y_min, y_max] while the LSP is operating, the LSF width estimate should be positive and monotonically increasing; in other words, the LSF width of a closer object is larger than that of a farther object.
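A minimal sketch of this fitness evaluation, assuming the measurements arrive as arrays and the width model is a caller-supplied callable (both interface assumptions, not the paper's code):

```python
import numpy as np

def fitness(P, x, y, w_meas, O_x, W, y_grid, width_model):
    """Sketch of the fitness of Eq. (27).

    P           : candidate parameter vector
    x, y, w_meas: pixel coordinates and measured widths of M LSF samples
    O_x, W      : image central column and image width
    y_grid      : y values in [y_min, y_max] checked at x = O_x
    width_model : width_model(P, x, y) -> predicted width (assumed API)
    """
    # First term: relative width error, weighted toward the central column.
    w_pred = width_model(P, x, y)
    weight = np.exp(-np.abs(x - O_x) / W)            # in (0, 1]
    term1 = np.mean(np.abs(w_meas - w_pred) / w_meas * weight)
    # Second term: shape-constraint violations at the central column,
    # where the width must be positive and monotonically increasing in y.
    w_c = width_model(P, np.full_like(y_grid, float(O_x)), y_grid)
    term2 = np.count_nonzero(w_c < 0) + np.count_nonzero(np.diff(w_c) < 0)
    return term1 + term2
```

A parameter set that reproduces the measurements exactly and satisfies both shape constraints scores zero.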

4. Experimental results

To verify the feasibility of the proposed model-based LSF detection method, the performance of LOG-based LSF detection using the LSF width function of pixel coordinates was compared with that of detection using a constant σ. In every test case, the proposed method showed better performance.

4.1. Experimental setup

A brief specification of the light plane projector, a laser diode module, is as follows: the fan angle is 120°, the beam line width is less than 0.1 mm at 300 mm, the wavelength is 808 nm, and the optical power is 70 mW. The camera is Point Grey Research's Scorpion [26]; the imaging element is a Kodak KAC-9628 CMOS image sensor, which has high sensitivity at the wavelength of the light plane projector. Image resolution is 640×480. The focal length of the fisheye lens is 2.1 mm and the FOV is 120°.

4.2. 2D Gaussian function parameter estimation

The camera response curve, estimated from 42 images with multiple exposure times (1.56–65.54 ms), is shown in Fig. 3. The modeled response curve was obtained by curve fitting to Eq. (5) and approximates the estimated response curve over the usual operating range.

The irradiance map reconstruction process from differently exposed images is illustrated in Fig. 4(a and b); Fig. 4(a) shows


Fig. 5. Initial E(y) and final E(y).

Fig. 6. Initial E(x) and final E(x).

Fig. 7. Measured LSF irradiance map and generated LSF irradiance map.



the resultant irradiance map when the light plane projector is on, and Fig. 4(b) the one when the projector is off. Subtracting the irradiance map of Fig. 4(b) from that of Fig. 4(a) yields the LSF irradiance map [see Fig. 4(c)]. The resultant rectified LSF irradiance map is given in Fig. 4(d), and the same irradiance map in gray image form in Fig. 4(e). The boundary drawn is that of the rectified irradiance map, which corresponds to the rectangular boundary of the input image.
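The subtraction step itself is elementary; as a sketch (array names are assumptions), with negatives clipped as sensor noise:

```python
import numpy as np

def lsf_irradiance(E_on, E_off):
    """Isolate the light-stripe irradiance by background subtraction.

    E_on, E_off : HDR irradiance maps reconstructed with the light
    plane projector on and off. Ambient illumination contributes
    equally to both, so the difference leaves only the stripe;
    small negative values are noise and are clipped to zero.
    """
    return np.clip(np.asarray(E_on, float) - np.asarray(E_off, float), 0.0, None)
```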

From the reconstructed LSF irradiance map, the 2D Gaussian parameters were estimated. The initial E(y), acquired by integrating E(x, y) with respect to the x-axis, and the final E(y), re-estimated with a modified scale after estimating the amplitude K, are shown in Fig. 5. Similarly, the initial E(x), acquired by integrating E(x, y) with respect to the y-axis, and the final E(x), re-estimated with a modified scale after estimating the amplitude K, are shown in Fig. 6. The LSF irradiance map acquired by the HDRi method and the one generated from the estimated 2D Gaussian parameter values are shown in Fig. 7. It can be observed that modeling the LSF irradiance map as a 2D Gaussian function, with the previously described parameter estimation method, Eqs. (8), (9) and (11)–(13), gives a feasible approximation of the measured LSF irradiance map.


Fig. 8. 2D Gaussian parameters are modeled as functions of distance d: (a) estimated K(d), (b) estimated σx(d), and (c) estimated σy(d).

Fig. 9. LSF width with respect to x and d: (a) measured w(x, d) and (b) estimated w(x, d).



The parameter functions of distance, Eqs. (14)–(16), were estimated. While changing the distance between the light plane projector and a painted wall, the 2D Gaussian function parameters were estimated. The values of K estimated at multiple distances are depicted by point markers in Fig. 8(a); the line graph depicts K(d), the amplitude function of distance d, estimated by least squares (LS). The values of σx estimated at multiple distances are depicted by point markers in Fig. 8(b); the line graph depicts σx(d), the x-axis standard deviation function of distance d, estimated by LS. The values of σy estimated at multiple distances are depicted by point markers in Fig. 8(c); the line graph depicts σy(d), the y-axis standard deviation function of distance d, estimated by LS. It can be observed that the proposed modeling methods, Eqs. (14)–(16), give a feasible approximation of the measured 2D Gaussian function parameters.
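Assuming the parameter functions take the inverse-square forms implied by the derivation of Eq. (25) (K(d) = a_amp/d² and σ(d) = a/d² + b), the LS step reduces to a small linear regression:

```python
import numpy as np

def fit_inverse_square(d, v, with_offset=False):
    """Least-squares fit of v(d) = a / d^2 (optionally + b).

    with_offset=False fits the amplitude model K(d) = a_amp / d^2;
    with_offset=True fits sigma(d) = a / d^2 + b. Returns [a] or [a, b].
    The forms are assumptions based on Eqs. (14)-(15) as used in
    Eq. (25); Eq. (16) is treated analogously.
    """
    d = np.asarray(d, float)
    cols = [1.0 / d**2]
    if with_offset:
        cols.append(np.ones_like(d))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(v, float), rcond=None)
    return coef
```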

4.3. LSF width function of distance

The measured LSF width w(x, d), recorded while changing the distance to the wall within the 0.63–6.32 m range, is shown in Fig. 9(a). The intensity threshold θ_Z used is 40. The LSF width w(x, d) calculated by Eq. (18) is shown in Fig. 9(b). It can be observed that the LSF width function of x coordinates and distance d, expressed in Eq. (18), properly predicts the LSF width. The measured LSF width at pixel coordinates (x, y) is shown in Fig. 10(a) and the LSF width calculated according to the derived w(x, y) in Fig. 10(b). It can be observed that the LSF width function w(x, y) properly predicts the LSF width at pixel coordinates (x, y). Consequently, the σ(x, y) table calculated for the painted wall of an underground parking space with intensity threshold θ_Z = 40 is illustrated, in gray image form, in Fig. 11.
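For a Gaussian cross-section of amplitude K and standard deviation σ, the width above an intensity threshold θ_Z is 2σ√(2 ln(K/θ_Z)). The following sketch shows this building block only; the paper's Eq. (18) additionally composes it with K(d) and σx(d):

```python
import numpy as np

def threshold_width(K, sigma, theta_z):
    """Width over which K * exp(-t^2 / (2 sigma^2)) exceeds theta_z.

    Solving K * exp(-t^2 / (2 sigma^2)) = theta_z gives
    |t| = sigma * sqrt(2 ln(K / theta_z)), hence the width below;
    zero when the peak never reaches the threshold.
    """
    K = np.asarray(K, dtype=float)
    ratio = np.maximum(K / theta_z, 1.0)   # clamp: no crossing -> width 0
    return 2.0 * sigma * np.sqrt(2.0 * np.log(ratio))
```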

4.4. Self-calibration and performance comparison

The histogram of difference-image pixels tends to form a single-sided Gaussian distribution centered at zero, as shown in Fig. 12. Note that the vertical axis of Fig. 12 is in logarithmic scale to provide a whole view of the histogram distribution. The dotted vertical line of Fig. 12, corresponding to three times the Gaussian distribution's standard deviation, shows the threshold value for LSF detection.
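A sketch of this threshold selection under the half-normal assumption (for |N(0, σ²)| the second moment equals σ², so σ can be estimated directly from the non-negative difference pixels); in practice the stripe pixels contaminate the tail, so a robust fit may be preferable:

```python
import numpy as np

def lsf_threshold(diff_pixels, k=3.0):
    """Detection threshold from the one-sided difference histogram.

    Models the non-stripe difference pixels as a half-normal
    distribution |N(0, sigma^2)|, for which E[x^2] = sigma^2, and
    returns k * sigma (k = 3, matching the dotted line of Fig. 12).
    """
    x = np.asarray(diff_pixels, dtype=float)
    sigma = np.sqrt(np.mean(x**2))
    return k * sigma
```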

Twenty image pairs were captured at an underground parking lot. Each image pair consisted of two images: one captured with the light plane projector on and the other with the projector off. The true LSF position, the criterion of the performance measure, was set by a combination of automatic and manual designation methods. For LSF columns classified as certainly LSF by background subtraction and histogram analysis, binary morphological opening and skeleton detection designated the center of the LSF. LSFs of objects close to the camera and LSFs close to the horizontal image center could be treated in this manner. For LSFs of objects far from the camera and LSFs far from the horizontal image center, the center of the LSF was designated manually.


Fig. 10. LSF width of pixel coordinates (x, y): (a) measured w(x, y) and (b) estimated w(x, y).

Fig. 11. Expected σ corresponding to (x, y).

Fig. 12. Histogram of difference image and threshold for LSF detection.



Eight of the twenty image pairs, captured while moving slightly as shown in Fig. 13(a), were used for self-calibration. It is noteworthy that the two images of each pair were captured without movement. The predicted LSF width of pixel coordinates calculated using parameters from self-calibration is shown in Fig. 13(b), and the one calculated from the HDRi-based modeling explained in Section 3.2 in Fig. 13(c). A comparison between the two shows that the self-calibrated LSF width prediction is very similar to the HDRi-based modeling result. Both GA-based optimizations, one for the scale factor and the other for the final refinement, were conducted over 100 generations with a population of 1000.

LSF detection was evaluated using the 20 images, with light stripe projection on, of the 20 image pairs. LOG-based LSF detection with constant σ was tested while changing σ from 1 to 19. Then, the proposed model-based LSF detection was tested with the same image set. The performance measure is the percentage of correctly detected LSFs, whose positions coincide with the true LSF center.

An example of the performance comparison between the proposed method and LOG filtering with constant σ, when the distances to the observed objects vary, is shown in Fig. 14. Fig. 14(a) shows an input image, Fig. 14(b) the ground truth designated by automatic and manual methods, and Fig. 14(c) the result of the proposed method. Pixel intensity represents the response of model-based LOG filtering, and the drawn line shows the recognized LSF positions. Fig. 14(d) is the result of LOG filtering with constant σ (= 1). It can be observed that thin LSFs fail to be detected because of noise on the ground surface, and thick LSFs are incorrectly detected at their boundaries instead of their centers. Fig. 14(e) is the result of LOG filtering with constant σ (= 5). Thin LSFs begin to be missed and objects on the ground plane begin to disturb LSF detection. As σ increases, computation time increases proportionally. Fig. 14(f) compares the recognition rates of the proposed method and LOG filtering with constant σ.

Although the optimal case of LOG filtering with constant σ can show almost the same performance as the proposed method, the optimal case depends on the distance distribution of the observed scene, the added noise, and the shape of the observed objects. It is noteworthy that the optimal σ and the whole shape of the performance graph of LOG filtering with constant σ will differ from case to case; therefore, the optimal σ cannot be determined beforehand. Several input images and their LSF detection performance comparisons are shown in Fig. 15. The optimal σ of LOG filtering with constant σ is observed to be 2, 3, 4, and 5 for the respective input images.
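The idea of feeding the expected width to the detector can be sketched as LoG filtering whose kernel σ varies per pixel; this is a simplified 1D illustration of the principle, not the paper's implementation:

```python
import numpy as np

def log_kernel(sigma):
    """1D Laplacian-of-Gaussian ('Mexican hat') kernel."""
    r = int(np.ceil(3.0 * sigma))
    t = np.arange(-r, r + 1, dtype=float)
    return (t**2 / sigma**4 - 1.0 / sigma**2) * np.exp(-t**2 / (2.0 * sigma**2))

def varying_sigma_log(row, sigmas):
    """Filter one image row with a per-pixel LoG base length.

    row    : 1D intensity profile across the stripe
    sigmas : expected stripe sigma at each pixel, e.g. looked up from
             a sigma(x, y) table. The response is negated so that a
             bright ridge yields a positive peak at its center.
    """
    row = np.asarray(row, float)
    out = np.zeros_like(row)
    n = len(row)
    for i, s in enumerate(np.maximum(sigmas, 0.5)):
        k = log_kernel(s)
        r = len(k) // 2
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out[i] = -np.dot(row[lo:hi], k[r - (i - lo): r + (hi - i)])
    return out
```

With σ matched to the stripe width, the response peaks at the stripe center rather than at its boundaries.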

5. Conclusion

This paper proposes a model-based LSF detection method to improve the performance of LOG-based line detection when LSP is


Fig. 13. Self-calibration results: (a) images, with light plane projector on, of the eight pairs used for self-calibration; (b) predicted LSF width w(x, y) using parameters estimated by self-calibration; (c) predicted LSF width w(x, y) using parameters estimated by HDRi-based modeling.



applied to indoor navigation over a wide depth and angular range. The LSF irradiance map was modeled as a 2D Gaussian function and the Gaussian parameters as functions of distance. Based on this modeling, the LSF width function of distance and x coordinates could be derived. Then, using the one-to-one relation between y coordinates and distance in the LSP method, the LSF width function of pixel coordinates (x, y) could be derived. The predicted LSF width can be used as the base length of LOG filtering, such that the filtering is focused on the probable LSF width at each pixel. Experimental results show that the proposed method is superior to LOG filtering with a constant base length.

The major contribution of this paper is providing the probable LSF width to a LOG-based line detector using 2D Gaussian modeling of the LSF irradiance map and the one-to-one correspondence between y coordinates and distance in LSP. LSF analysis using an irradiance map reconstructed by HDRi technology is considered viable for other research. HDRi provides an effective way to overcome the bit-depth limit and nonlinear response characteristics of digital cameras.

The major limitation of the proposed method is that every object is assumed to be a homogeneous Lambertian surface. Although such an assumption is reasonable in many indoor navigation situations, a navigation system does frequently encounter objects with different surfaces. Therefore, future work will focus on objects with different surfaces. A potential solution is acquiring parameter sets for different surfaces during self-calibration by clustering the estimated parameters; in the LSF detection phase, the detector then selects the most probable parameter set among the collected ones for each LSF.


Fig. 14. Performance comparison between the proposed method and constant-σ LOG: (a) input image, (b) true LSF center, (c) LSF detected by the proposed method, (d) LSF detected by LOG with constant σ (= 1), (e) LSF detected by LOG with constant σ (= 5), and (f) comparison of LSF detection performance.




Fig. 15. The optimal σ and the detection-ratio graph (true positive rate vs. σ of LOG) of LOG filtering with constant σ vary from case to case: (a) case when the optimal σ is 2, (b) case when the optimal σ is 3, (c) case when the optimal σ is 4, and (d) case when the optimal σ is 5.



References

[1] Klette R, Schlüns K, Koschan A. Computer vision: three-dimensional data from images. Berlin: Springer; 1998.

[2] Wei Z, Zhou F, Zhang G. 3D coordinates measurement based on structured light sensor. Sensors Actuators A 2005;120(2):527–35.

[3] Wilson A. Machine vision targets semiconductor inspection. Vision Systems Design, July 2007, available at http://vsd.pennnet.com, accessed 17 October 2007.

[4] Maas C. 3-D system profiles highway surfaces. Vision Systems Design, February 2007, available at http://vsd.pennnet.com, accessed 17 October 2007.

[5] Dashner B. Future of 3D log scanning technology. Mill Product News, March/April 2007. p. 10–11.

[6] Wei Z, Zhang G. Inspecting verticality of cylindrical workpieces via multi-vision sensors based on structured light. Opt Lasers Eng 2005;43(10):1167–78.

[7] Aufrère R, Gowdy J, Mertz C, Thorpe C, Wang C-C, Yata T. Perception for collision avoidance and autonomous driving. Mechatronics 2003;13(10):1149–61.

[8] Zhang G, Wei Z. Ellipse fitting of short light stripe for structured light based 3D vision inspection. Proc SPIE 2003;5253:46–51.

[9] Forest J, Salvi J, Cabruja E, Pous C. Laser stripe peak detector for 3D scanners: a FIR filter approach. In: 17th International Conference on Pattern Recognition, vol. 3, August 2004. p. 646–9.

[10] Baba M, Narita D, Ohtani K. 360° shape measurement system for objects having from Lambertian to specular reflectance properties utilizing a novel rangefinder. J Opt A: Pure Appl Opt 2002;4(6):S295–303.

[11] Baba M, Narita D, Ohtani K. An advanced rangefinder equipped with a new image sensor with the ability to detect the incident angle of a light stripe. J Opt A: Pure Appl Opt 2004;6(1):10–6.

[12] Narasimhan SG, Nayar SK, Sun B, Koppal SJ. Structured light in scattering media. In: Proceedings of the Tenth IEEE International Conference on Computer Vision 2005;1:420–7.

[13] Jung HG, Kim DS, Yoon PJ, Kim J. Light stripe projection based parking space detection for intelligent parking assist system. In: IEEE Intelligent Vehicles Symposium, June 2007. p. 962–8.

[14] Kim MY, Cho H. An active trinocular vision system of sensing indoor navigation environment for mobile robots. Sensors Actuators A 2006;125(2):192–209.

[15] Aldon MJ, Le Bris L. Mobile robot localization using a light-stripe sensor. In: IEEE Intelligent Vehicles Symposium, October 1994. p. 255–9.

[16] Haverinen J, Röning J. An obstacle detection system using a light stripe identification based method. In: IEEE International Joint Symposium on Intelligence and Systems, May 1998. p. 232–6.

[17] Li J, Guo Y, Zhu J, Lin X, Xin Y, Duan K, et al. Large depth-of-view portable three-dimensional laser scanner and its segmental calibration for robot vision. Opt Lasers Eng 2007;45(11):1077–87.

[18] Chang S, Deng H, Fuller J, Farsaie A, Elkins L. An adaptive edge detection filter for light stripe projection (LSP) in surface reconstruction. Proc SPIE 2004;5302:1–8.

[19] Izquierdo MAG, Sanchez MT, Ibanez A, Ullate LG. Sub-pixel measurement of 3D surfaces by laser scanner. Sensors Actuators A 1999;76(1–3):1–8.

[20] Fisher RB, Naidu DK. A comparison of algorithms for subpixel peak detection. In: Proceedings of the British Machine Vision Association Conference, 1991. p. 217–25.

[21] Welch SS. Effects of window size and shape on accuracy of subpixel centroid estimation of target images. NASA Technical Paper 3331, September 1993.

[22] Debevec PE, Malik J. Recovering high dynamic range radiance maps from photographs. In: International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 1997. p. 369–78.

[23] Jung HG, Lee YH, Yoon PJ, Kim J. Radial distortion refinement by inverse mapping-based extrapolation. In: 18th International Conference on Pattern Recognition, vol. 1, 2006. p. 675–8.

[24] Gonzalez RC, Woods RE. Digital image processing. Englewood Cliffs, NJ: Prentice-Hall; 2002.

[25] Wikipedia, Mexican hat wavelet, retrieved from http://en.wikipedia.org/wiki/Mexican_hat_wavelet, accessed 17 October 2007.

[26] Point Grey Research. Technical datasheet of Scorpion. Available at http://www.ptgrey.com, accessed 17 October 2007.

