
Simultaneous calibration of the intrinsic and extrinsic parameters of structured-light sensors


Zexiao Xie, Xiaomin Wang, Shukai Chi
Ocean University of China, Engineering College, 238 Songling Road, Qingdao 266100, China


Article history: Received 15 November 2013; Received in revised form 14 December 2013; Accepted 1 January 2014; Available online 7 February 2014.

Keywords: Structured-light sensor; Intrinsic and extrinsic calibration; Cross-ratio invariance; Planar grid target; Conjugate pair

Abstract

A novel approach for simultaneously calibrating the intrinsic and extrinsic parameters of structured-light sensors is presented in this paper. A planar target etched with grid lines is adopted to generate calibration points. By intersecting the laser plane with the grid lines on the target, co-linear points are formed, and their 3-D coordinates are computed according to the rule of cross-ratio invariance. To obtain non-colinear points in the laser plane, the sensor is located at several different positions by moving the coordinate measuring machine (CMM). However, the co-linear points corresponding to different CMM positions lie in different coordinate frames. In order to establish conjugate pairs to calibrate the sensor, all of the generated points in the laser plane are transformed into the CMM machine coordinate frame. Then, using the 3D points in this coordinate frame, a 2D coordinate frame is established with determined direction vectors, i.e., the extrinsic parameters are calibrated. By transforming the 3D points in the CMM machine coordinate frame into the 2D coordinate frame, conjugate pairs for calibrating the intrinsic parameters are established. Experimental studies show that the planar target is simple and can be manufactured with high accuracy, the calibration process is convenient, and the calibration result possesses high accuracy.

© 2014 Published by Elsevier Ltd.

1. Introduction

In recent years, structured light sensors have been widely used in quality control and reverse engineering [1–5]. This sensor has two types of working modes. One working mode is scanning measurement [6–8]: the sensor is usually integrated with coordinate measuring machines (CMMs), CNC machines or other scanning devices to collect a data cloud of a part for reverse engineering. Another mode is online inspection [9–11]: the sensor is fixed on a device, and the part is measured when it passes through the working range of the sensor. The dimensions to be inspected usually include diameter and thickness; for example, the diameter of a cable or a steel pipe, or the thickness of a steel or wood board.

A structured-light sensor is basically composed of a CCD camera and a line laser projector, working according to the principle of laser triangulation. When the laser plane is projected on a part, the CCD camera captures the image with the modulated light stripe. If the sensor is calibrated, the 2D data in the laser plane can be obtained from the image. This calibration procedure is called "intrinsic calibration". The aim of intrinsic calibration is to determine the mapping relationship between the 2D computer image plane and the laser plane, and meanwhile to establish a 2D coordinate frame in the laser plane.

When the sensor is mounted on a CMM to implement 3D scanning, each data point directly obtained possesses a 2D coordinate in the laser plane, which should be converted into 3D data in the CMM machine coordinate frame. The procedure for identifying the transformation from the 2D coordinate frame in the laser plane to the 3D CMM machine coordinate frame is "extrinsic calibration".

Studies on structured-light sensors mainly focus on intrinsic calibration to determine the relationship between the laser plane and the image plane [11–14]. By contrast, only a few researchers [15–17] have paid attention to extrinsic calibration, and successfully solved the extrinsic calibration problem when mounting the structured light sensor on a CMM or a CNC machine tool.

Traditionally, intrinsic and extrinsic calibrations are two separate steps. They are usually fulfilled under different conditions with different equipment. For some commercial sensors, the intrinsic calibration is carried out by the manufacturer, and only the extrinsic calibration needs to be done by the customer when the sensor is mounted on a 3D scanning device.

When calibrating the extrinsic parameters, the intrinsic parameters are usually considered as stable values. In practice, the intrinsic parameters are likely to vary after the sensor has been used for a long time, since they can be influenced by many factors, such as ambient temperature and vibration. In order to solve this problem, Santolaria [18] proposed a one-step intrinsic and extrinsic calibration method. He adopted a 3D target manufactured with some non-coplanar characteristic points, and the world coordinate frame is directly established on the 3D target. The laser plane intersects the 3D target, and some non-colinear calibration points are formed in the world coordinate frame. Thus the intrinsic calibration and extrinsic calibration can be done simultaneously using these points. Since the final measuring result is in the CMM machine frame, the transformation from the world coordinate frame to the CMM global frame is derived after the position and orientation of the target in the CMM machine frame have been measured using a trigger probe. In this method, the CCD camera and the sensor share the same coordinate frame, so the camera model and the sensor model are easy to establish; however, the manufacture of the 3D target is difficult, and it is hard to ensure high accuracy. Moreover, by adopting a contact probe to identify the target orientation in the CMM global frame, the calibration process is inconvenient.

In this study, a planar target is adopted to simultaneously calibrate the intrinsic and extrinsic parameters of structured light sensors. As shown in Fig. 1, the structure of the target is simple, and it is easy to manufacture with high accuracy. In the calibration process, the target is placed on the worktable of the CMM without any special adjustment. When the laser plane projects on the target, a laser line is formed. It intersects with the grid lines on the target, and several co-linear points are created. If the sensor is located at different positions, more non-colinear points can be obtained in the laser plane. They are taken as calibration points for simultaneously calibrating the intrinsic and extrinsic parameters of structured-light sensors. The specific description of this approach is given as follows.

In Section 2, the camera is modeled and calibrated to identify the transformation between the world coordinate frame and the camera coordinate frame. In Section 3 the model of the structured light sensor is created: the intrinsic model is the mapping relationship between the laser plane and the image plane, and the extrinsic model is the transformation from the 2-D coordinate frame in the laser plane to the CMM machine coordinate frame. In Section 4 the method for determining calibration points is given. When the laser plane projects on the planar target, it intersects with the grid lines on the target and some co-linear points are formed. The 3D coordinates of the intersected points in the world coordinate frame are computed according to the rule of cross-ratio invariance. The co-linear points obtained at different sensor positions are taken as calibration points, and all of them are transformed into the CMM machine coordinate frame. In Section 5, the extrinsic parameters are solved by establishing a 2D coordinate frame using the 3D data points in the laser plane, expressed in the CMM machine coordinate frame. The conjugate pairs for solving the intrinsic parameters are created by transforming the 3D points in the CMM machine coordinate frame into 2D data points in the established 2D coordinate frame. Experimental results are given in Section 6.

2. Camera modeling and calibrating

In this section, the camera model is established, the transformation from the camera coordinate frame to the world coordinate frame is determined, and the lens distortion is corrected. A planar target is adopted to calibrate the camera; it is also used to obtain calibration points for simultaneously calibrating the intrinsic and extrinsic parameters of the sensor in Section 4.

2.1. Camera modeling

The modeling of the camera is performed according to the perspective projection principle [16], as shown in Fig. 2. Because of lens distortion, the image is distorted, which leads to measurement errors. There are two types of lens distortion: radial distortion and tangential distortion. Since radial distortion is the main factor that affects the measurement accuracy, we only take it into consideration when establishing the camera model.

Note that $o_w x_w y_w z_w$ is the 3D world coordinate frame. $O' X Y$ is the CCD array plane coordinate frame, where $O'$ is the intersection of the optical axis and the CCD array plane. $o_c x_c y_c z_c$ is the 3D camera coordinate frame, where $o_c$ is the projection center of the camera, the $z_c$ axis is the optical axis of the camera lens, and $x_c$ and $y_c$ are parallel to $X$ and $Y$, respectively. $P$ is a point in $o_c x_c y_c z_c$ or $o_w x_w y_w z_w$. Its correspondence in $O' X Y$ should be $P_u(X, Y)$, but the actual corresponding point is $P_d(X_d, Y_d)$ due to the lens distortion. $f$ is the effective focal length. $o'' u v$ is the computer image coordinate frame, where $o''$ is the origin of the image, the $u, v$ axes are parallel to $X, Y$ respectively, and the unit of the $u$ and $v$ axes is the pixel. Let $(u_0, v_0)$ be the coordinate of $O'$ in $o'' u v$; $(u_0, v_0)$ is the principal point. The transformation from $o_w x_w y_w z_w$ to $o'' u v$ is derived through the following process.

According to perspective projection, the transformation from $o_c x_c y_c z_c$ to $O' X Y$ is

$$\rho \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} \qquad (1)$$

where $\rho$ is a scale factor.

Fig. 1. The planar target for intrinsic and extrinsic calibrating of the sensor.

Fig. 2. Principle of perspective projection and camera model.


The transformation from $o_w x_w y_w z_w$ to $o_c x_c y_c z_c$ is

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = H_c^w \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \qquad H_c^w = \begin{bmatrix} R_c^w & T_c^w \\ 0 & 1 \end{bmatrix} \qquad (2)$$

where $R_c^w$ is a $3 \times 3$ rotation matrix and $T_c^w$ is a translation vector:

$$R_c^w = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}, \qquad T_c^w = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$

The transformation from $O' X Y$ to $o'' u v$ is

$$\begin{cases} u = N_x X + u_0 \\ v = N_y Y + v_0 \end{cases} \qquad (3)$$

where $N_x$ and $N_y$ are the numbers of pixels on the computer image corresponding to unit length on the CCD array plane. They can be derived from the specifications given by the CCD manufacturer.

Taking the radial distortion into consideration, the distortion of the computer image is corrected as

$$\begin{cases} u = u_d (1 + k_1 q^2 + k_2 q^4) \\ v = v_d (1 + k_1 q^2 + k_2 q^4) \end{cases} \qquad (4)$$

where $q^2 = X_d^2 + Y_d^2 = ((u_d - u_0)/N_x)^2 + ((v_d - v_0)/N_y)^2$, $k_1$ is the first-order distortion coefficient, and $k_2$ is the second-order distortion coefficient.

Substituting Eqs. (2) and (3) into Eq. (1), we can get the camera model

$$\rho \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f N_x r_1 + r_7 u_0 & f N_x r_2 + r_8 u_0 & f N_x r_3 + r_9 u_0 & f N_x t_x + t_z u_0 \\ f N_y r_4 + r_7 v_0 & f N_y r_5 + r_8 v_0 & f N_y r_6 + r_9 v_0 & f N_y t_y + t_z v_0 \\ r_7 & r_8 & r_9 & t_z \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \qquad (5)$$

In the camera model, $r_1$–$r_9$, $t_x$, $t_y$, $t_z$, $f$, $u_0$, $v_0$, and $k_1$, $k_2$ in Eq. (4) are unknown parameters. They should be solved by camera calibration.
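To make the chain of Eqs. (1)–(4) concrete, the following Python sketch (not the authors' code) traces a world point through the model. The pose, principal point, and distortion values are placeholders; the 12 mm focal length and the $N_x$, $N_y$ values are taken from Section 6.

```python
import numpy as np

def project_ideal(Pw, R, T, f, Nx, Ny, u0, v0):
    """Eqs. (1)-(3): world point -> ideal (undistorted) pixel coordinates."""
    xc, yc, zc = R @ Pw + T            # Eq. (2): world frame -> camera frame
    X, Y = f * xc / zc, f * yc / zc    # Eq. (1): perspective projection (mm)
    return Nx * X + u0, Ny * Y + v0    # Eq. (3): mm on the CCD -> pixels

def undistort(ud, vd, u0, v0, Nx, Ny, k1, k2):
    """Eq. (4): correct an observed (distorted) pixel to its ideal position."""
    q2 = ((ud - u0) / Nx) ** 2 + ((vd - v0) / Ny) ** 2
    d = 1.0 + k1 * q2 + k2 * q2 ** 2
    return ud * d, vd * d

# Placeholder pose and intrinsics for illustration only.
R, T = np.eye(3), np.array([0.0, 0.0, 300.0])          # mm
u, v = project_ideal(np.array([10.0, 5.0, 0.0]), R, T,
                     f=12.0, Nx=118.753, Ny=119.240, u0=384.0, v0=288.0)
```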

2.2. Camera calibrating

When using the planar target to calibrate the CCD camera, the world coordinate frame $o_w x_w y_w z_w$ is established on the target. $o_w$ is the cross point of the two thick lines in Fig. 1, $x_w$ and $y_w$ are along the two thick lines respectively, and $z_w$ is perpendicular to $o_w x_w y_w$, meeting the requirement of the right-hand rule.

From the captured image, the cross points of the grid lines are extracted and taken as calibration points, with known coordinates $(u, v)$ in $o'' u v$ and correspondences $(x_w, y_w, 0)$ in $o_w x_w y_w z_w$. The unknown parameters in the camera model are worked out using the RAC (radial alignment constraint) two-step method [19].

3. Sensor modeling

3.1. Intrinsic model

As described above, the modeling of the camera is to establish the transformation from the 3D world coordinate frame to the 2D computer image. For a structured light sensor, its intrinsic model is the transformation between the 2D image plane and a 2D coordinate frame in the laser plane. As shown in Fig. 3, $o_s x_s y_s$ is a 2D coordinate frame established in the laser plane. Therefore, according to the camera model, the intrinsic model of the sensor can be obtained by removing the third column in Eq. (5), corresponding to the $z_w$ axis:

$$\rho \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f N_x r_1 + r_7 u_0 & f N_x r_2 + r_8 u_0 & f N_x t_x + t_z u_0 \\ f N_y r_4 + r_7 v_0 & f N_y r_5 + r_8 v_0 & f N_y t_y + t_z v_0 \\ r_7 & r_8 & t_z \end{bmatrix} \begin{bmatrix} x_s \\ y_s \\ 1 \end{bmatrix} \qquad (6)$$

Since $o_s x_s y_s$ and $o'' u v$ are both 2-D coordinate frames, the relationship between them is a one-to-one mapping; thus Eq. (6) can be simplified as

$$\rho \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ a_7 & a_8 & 1 \end{bmatrix} \begin{bmatrix} x_s \\ y_s \\ 1 \end{bmatrix} \qquad (7)$$

In this case, the intrinsic calibration is to calculate the eight parameters in Eq. (7) by solving a set of linear equations, instead of computing the parameters in Eq. (6).
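Because Eq. (7) is linear in $a_1, \ldots, a_8$ once both sides are cross-multiplied, the eight parameters can be recovered from four or more conjugate pairs by least squares. A minimal sketch of that step (the function name is ours, not the paper's):

```python
import numpy as np

def solve_intrinsic(laser_pts, image_pts):
    """Solve a1..a8 of Eq. (7) from conjugate pairs.
    laser_pts: (N,2) points (x_s, y_s) in the laser-plane frame.
    image_pts: (N,2) corresponding image points (u, v); N >= 4."""
    rows, rhs = [], []
    for (xs, ys), (u, v) in zip(laser_pts, image_pts):
        # u*(a7*xs + a8*ys + 1) = a1*xs + a2*ys + a3, and similarly for v
        rows.append([xs, ys, 1, 0, 0, 0, -u * xs, -u * ys]); rhs.append(u)
        rows.append([0, 0, 0, xs, ys, 1, -v * xs, -v * ys]); rhs.append(v)
    a, *_ = np.linalg.lstsq(np.asarray(rows, float),
                            np.asarray(rhs, float), rcond=None)
    return a  # a1..a8

# At measuring time the homography is inverted to map (u, v) -> (x_s, y_s).
```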

3.2. Extrinsic model

In Fig. 3, the 2D coordinate frame $o_s x_s y_s$ in the laser plane is established after the intrinsic model is created and calibrated. The data point directly measured by this sensor is in $o_s x_s y_s$, according to Eq. (7).

In order to implement 3D measurement, the data in $o_s x_s y_s$ should be transformed into the CMM machine coordinate frame $o_m x_m y_m z_m$, as shown in Fig. 3. The three axes of $o_m x_m y_m z_m$ are parallel to the axes of the CMM respectively, and $o_m$ is at the home position of the CMM. The transformation from $o_s x_s y_s$ to $o_m x_m y_m z_m$, i.e., the extrinsic model, is

$$P_m = H_m^s \cdot P_s = \begin{bmatrix} R_m^s & T_m^s \\ 0 & 1 \end{bmatrix} \cdot P_s \qquad (8)$$

where $P_m$ and $P_s$ are the coordinates of the same point, expressed in vector form, in $o_m x_m y_m z_m$ and $o_s x_s y_s$, respectively. $T_m^s$ is the translation vector of $o_s x_s y_s$ in $o_m x_m y_m z_m$, which can be regarded as the movement of the CMM and can be read directly from the CMM optical scales:

$$T_m^s = \begin{bmatrix} q_x & q_y & q_z \end{bmatrix}^T$$

$R_m^s$ is the rotation matrix from $o_s x_s y_s$ to $o_m x_m y_m z_m$, which needs to be calibrated:

$$R_m^s = \begin{bmatrix} l_x & l_y \\ m_x & m_y \\ n_x & n_y \end{bmatrix} \qquad (9)$$

where $[l_x \; m_x \; n_x]^T$ is the direction vector of the $x_s$ axis in $o_m x_m y_m z_m$, and $[l_y \; m_y \; n_y]^T$ is the direction vector of the $y_s$ axis in $o_m x_m y_m z_m$.

Fig. 3. The scheme for expressing the intrinsic and extrinsic model.
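In non-homogeneous form, Eq. (8) reads $P_m = R_m^s [x_s \; y_s]^T + T_m^s$. A small sketch of that mapping, reusing the $R_m^s$ reported later in Section 6; the $T_m^s$ reading is a hypothetical value, not from the paper:

```python
import numpy as np

def sensor_to_cmm(p_s, R_sm, T_sm):
    """Eq. (8): map a 2D sensor point (x_s, y_s) into the CMM frame."""
    return R_sm @ np.asarray(p_s, float) + T_sm

R_sm = np.array([[-0.023681,  0.248316],   # 3x2 rotation block of Eq. (9)
                 [-0.962239, -0.263786],
                 [ 0.271260, -0.931129]])
T_sm = np.array([236.0, -358.0, -293.0])   # hypothetical scale reading (mm)
P_m = sensor_to_cmm([1.5, 2.0], R_sm, T_sm)
```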

4. Identifying calibration points for intrinsic calibrating and extrinsic calibrating of the sensor

As described in Section 2, the planar target used to calibrate the CCD camera is also adopted to simultaneously calibrate the intrinsic and extrinsic parameters of the structured light sensor.

To begin with, an image is captured while the lamp on the sensor is turned on to illuminate the target, so that the grid lines on the target are visible in the image, as shown in Fig. 4(a). Subsequently, the lamp is turned off, the laser is turned on, and another image is captured, which only contains the laser stripe, as shown in Fig. 4(b). Using the gray-weight centroid method, the center positions of the grid lines in Fig. 4(a) are extracted as shown in Fig. 4(c), and the center position of the laser stripe in Fig. 4(b) is extracted as shown in Fig. 4(d). When capturing the images in Fig. 4(a) and (b), the sensor and the target keep fixed. Therefore, Fig. 4(c) and (d) can be overlapped, as shown in Fig. 4(e), and several points are formed, such as points 1–4. Their coordinates on the target are solved according to the rule of cross-ratio invariance.

Fig. 5 shows the intersected points on the image and the corresponding points on the target. $o_{wi} x_{wi} y_{wi} z_{wi}$ is a world coordinate frame located on the target. $Q$ is an intersection point on the target formed by the laser line $MN$ and the grid line $AC$; $q$ is the corresponding point on the image. The coordinate of $q$ can be directly obtained by image processing, while the coordinate of $Q$ cannot be directly achieved. As $A$, $B$, $C$ are points on the target with known coordinates in the world coordinate frame $o_{wi} x_{wi} y_{wi} z_{wi}$, and $a$, $b$ and $c$ are the corresponding points on the image, the coordinate of $Q$ on the target can be solved according to the rule of cross-ratio invariance [20]:

$$\frac{AB}{QB} : \frac{AC}{QC} = \frac{ab}{qb} : \frac{ac}{qc} \qquad (10)$$

Similarly, the coordinates of the other intersected points in $o_{wi} x_{wi} y_{wi} z_{wi}$ are gained. It is obvious that these points are co-linear points in the laser plane. In order to fulfill the intrinsic calibration, more non-colinear calibration points in the laser plane must be identified.
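Numerically, Eq. (10) fixes $Q$ as follows: the cross-ratio of the four collinear image points is computed, and imposing the same ratio on the target line gives an equation that is linear in the unknown position of $Q$. A sketch under the stated geometry (the function name and parameterization are ours):

```python
import numpy as np

def cross_ratio_point(A, B, C, a, b, c, q):
    """Recover target point Q on grid line A-B-C from its image q via
    Eq. (10). A, B, C: target coordinates (mm) of three grid crossings;
    a, b, c, q: the corresponding collinear image points (pixels)."""
    def pos(p, o, d):                        # signed position along a line
        return float(np.dot(np.asarray(p, float) - o, d))
    o_i = np.asarray(a, float)               # parameterize the image line
    d_i = np.asarray(c, float) - o_i
    d_i /= np.linalg.norm(d_i)
    ta, tb, tc, tq = (pos(p, o_i, d_i) for p in (a, b, c, q))
    cr = ((tb - ta) / (tb - tq)) / ((tc - ta) / (tc - tq))  # (ab/qb):(ac/qc)
    o_t = np.asarray(A, float)               # parameterize the target line
    d_t = np.asarray(C, float) - o_t
    L = np.linalg.norm(d_t)
    d_t /= L
    tA, tB, tC = 0.0, pos(B, o_t, d_t), L
    AB, AC = tB - tA, tC - tA
    # impose the same cross-ratio on the target line; linear in t_Q:
    tQ = (cr * AC * tB - AB * tC) / (cr * AC - AB)
    return o_t + tQ * d_t
```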

In Fig. 6(a), the sensor is located at three different positions; at each position five co-linear points are created. Thus three groups of points are picked up on three lines located in three world coordinate frames, but they share the same image coordinate frame $o'' u v$, as shown in Fig. 6(b).

Fig. 4. Generating calibration points by intersecting the laser plane with the planar target. (a) Image of the target, (b) image of the laser stripe on the target, (c) the extracted center of the grid lines, (d) the extracted center of the laser stripe, (e) overlap of the extracted centers of the grid lines and the laser stripe.

As described above, the world coordinate frame is located on the planar target, which keeps fixed while the sensor is moved by the CMM to three different positions. It can also be considered that the target is moved relative to the sensor; that is to say, the world coordinate frame is moved to three positions, while the camera coordinate frame $o_c x_c y_c z_c$ keeps fixed. In this case the world coordinate frame is defined as a local world coordinate frame. The points obtained at the three sensor positions are in three local world coordinate frames; moreover, all of them are also in the laser plane. They should be transformed into a reference coordinate frame to obtain calibration points.

First, four types of coordinate frames are defined.

(1) Local world coordinate frame $o_{wi} x_{wi} y_{wi} z_{wi}$ ($i = 1, 2, 3, \ldots$). It is established when the camera is calibrated at the different sensor positions. $o_{wi} x_{wi} y_{wi}$ is on the target plane, and $z_{wi}$ is perpendicular to the target plane.

(2) Camera coordinate frame $o_c x_c y_c z_c$. It is established when the camera is calibrated. As the CCD camera and the laser projector are both fixed on the sensor, the relationship between the laser plane and $o_c x_c y_c z_c$ is invariable.

(3) Global world coordinate frame $o_w x_w y_w z_w$. In this study, the first local world coordinate frame $o_{w1} x_{w1} y_{w1} z_{w1}$ is defined as the global world coordinate frame.

(4) CMM machine coordinate frame $o_m x_m y_m z_m$. The three axes are parallel to the axes of the CMM respectively; its origin is determined when the CMM is homed.

The intersected points of the laser plane and the grid lines on the target at different sensor positions are in local world coordinate frames; they should be transformed into $o_m x_m y_m z_m$ to create calibration points. The transforming procedure is described as follows.

1. Transformation from $o_{wi} x_{wi} y_{wi} z_{wi}$ to $o_c x_c y_c z_c$:

$$P_c = H_c^{w_i} P_{wi} \qquad (11)$$

where $H_c^{w_i} = \begin{bmatrix} R_c^{w_i} & T_c^{w_i} \\ 0 & 1 \end{bmatrix}$ is the transformation from the $i$th local world coordinate frame to $o_c x_c y_c z_c$. $H_c^{w_i}$ is equivalent to $H_c^w$ in Eq. (2); it is obtained after the camera is calibrated at the corresponding sensor position.

2. Transformation from $o_c x_c y_c z_c$ to $o_w x_w y_w z_w$. Since $o_w x_w y_w z_w$ is equivalent to $o_{w1} x_{w1} y_{w1} z_{w1}$, from Eq. (2) we can get

$$P_w = (H_c^w)^{-1} P_c = (H_c^{w_1})^{-1} P_c \qquad (12)$$

where $H_c^{w_1} = \begin{bmatrix} R_c^{w_1} & T_c^{w_1} \\ 0 & 1 \end{bmatrix}$ is determined when the camera is calibrated at the first sensor position.

3. Transformation from $o_w x_w y_w z_w$ to $o_m x_m y_m z_m$:

$$P_m = H_m^w P_w \qquad (13)$$

where $H_m^w = \begin{bmatrix} R_m^w & T_m^w \\ 0 & 1 \end{bmatrix}$. When the sensor is moved by the CMM, $o_w x_w y_w z_w$ and $o_m x_m y_m z_m$ keep fixed, while the origin of $o_c x_c y_c z_c$ moves with the sensor. If the coordinate of $o_c$ is obtained in both $o_w x_w y_w z_w$ and $o_m x_m y_m z_m$, $H_m^w$ can be worked out.

(1) Determining the coordinate of $o_c$ in $o_m x_m y_m z_m$. The position of the sensor is moved by the CMM, and at each position the three coordinates of the CMM can be directly read from the three optical scales. In practice, the current coordinate of the CMM is usually considered as the coordinate of the tip of the contact probe. Actually, the readings from the three optical scales can be regarded as the coordinate of any fixed point on the Z axis of the CMM. As the sensor is mounted on the Z axis, and the origin of the camera coordinate frame $o_{ci}$ is a fixed point on the Z axis, the readings from the three optical scales can be taken as the coordinate of $o_{ci}$, defined as $[x_{mi} \; y_{mi} \; z_{mi}]^T$, as shown in Fig. 7. Thus the coordinate of $o_{ci}$ corresponding to different sensor positions in $o_m x_m y_m z_m$ is determined.

(2) Computing the coordinate of $o_c$ in $o_w x_w y_w z_w$. As described above, the planar target keeps fixed on the worktable while the sensor is moved by the CMM. The transformation from the camera coordinate frame corresponding to different sensor positions into the global world coordinate frame is

$$P_w = (H_c^{w_i})^{-1} P_{ci} \qquad (14)$$

Fig. 5. Generating colinear points according to the rule of cross-ratio invariance ($s$: laser plane; $i$: target plane; $c$: CCD array plane).

Fig. 6. Generating non-colinear points at different sensor positions. (a) Three groups of co-linear points obtained at three sensor positions; (b) the three groups of co-linear points share the same image coordinate frame.


where $H_c^{w_i}$ is determined when the camera is calibrated at the $i$th sensor position, and $P_{ci}$ is a point in the $i$th camera coordinate frame. The origin of the $i$th camera coordinate frame is $o_{ci}$, whose coordinate in $o_c x_c y_c z_c$ is $[0 \; 0 \; 0]^T$. Substituting $[0 \; 0 \; 0]^T$ into Eq. (14), the coordinate of $o_{ci}$ in $o_w x_w y_w z_w$ can be worked out, defined as $[x_{wi} \; y_{wi} \; z_{wi}]^T$, as shown in Fig. 7.

So far, the coordinates of $o_{ci}$ corresponding to different sensor positions in $o_m x_m y_m z_m$ and $o_w x_w y_w z_w$ are obtained. Substituting $[x_{mi} \; y_{mi} \; z_{mi}]^T$ and $[x_{wi} \; y_{wi} \; z_{wi}]^T$ into Eq. (13), $H_m^w$ can be found by solving an over-determined set of linear equations.

According to the transforming process $o_{wi} x_{wi} y_{wi} z_{wi} \to o_c x_c y_c z_c \to o_w x_w y_w z_w \to o_m x_m y_m z_m$, the intersected points of the laser plane and the grid lines on the target in the different local world coordinate frames $o_{wi} x_{wi} y_{wi} z_{wi}$ are transformed into $o_m x_m y_m z_m$, as shown in Fig. 8. Thus the calibration points, also called "conjugate pairs", in the laser plane are created, with one coordinate in $o_m x_m y_m z_m$ and another coordinate in $o'' u v$.

5. Solving the intrinsic and extrinsic parameters of the sensor

5.1. Solving the extrinsic parameters

As described in Section 4, the calibration points are created from the intersected points of the laser plane and the planar target. They are coplanar points in the laser plane, with 3D coordinates in $o_m x_m y_m z_m$ and 2D coordinates in $o'' u v$.

The intrinsic calibration of the sensor is to determine the mapping relationship between the laser plane and the image plane. So a 2D coordinate frame should be established in the laser plane, and the 3D calibration points in $o_m x_m y_m z_m$ should first be transformed into 2D form.

As shown in Fig. 9, points 1–5 are five co-linear points in $o_m x_m y_m z_m$. They are applied to fit a line, and the direction of the line, from point 1 to point 5, can be regarded as the direction of an axis in the laser plane. Define this axis as $x_s$ and denote its direction vector as $[l_x \; m_x \; n_x]$ in Eq. (9). Subsequently, all of the calibration points in $o_m x_m y_m z_m$ are used to fit a plane, and the normal direction of the plane is computed as $n$.

From $[l_x \; m_x \; n_x]^T$ and $n$, and according to the right-hand rule, a third axis in the laser plane can be determined, defined as $y_s$, with direction vector $[l_y \; m_y \; n_y]^T$. If $y_s$ passes through point 3, the origin of the 2-D coordinate frame is located at point 3. Thus a 2-D coordinate frame $o_s x_s y_s$ is established in the laser plane. The coordinate of $o_s$ and the direction vectors of $x_s$ and $y_s$ in $o_m x_m y_m z_m$ are simultaneously worked out when establishing $o_s x_s y_s$, so the extrinsic parameters $R_m^s$ shown in Eq. (9) are determined.
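A sketch of Section 5.1 in code: the $x_s$ direction from a line fit to one group of co-linear points, the plane normal from all calibration points, and $y_s$ from the right-hand rule. The sign conventions (direction of $n$, direction of $x_s$, choice of origin) are choices the paper fixes by construction; this sketch leaves them to the caller:

```python
import numpy as np

def build_laser_frame(colinear_pts, all_pts, origin):
    """Establish o_s x_s y_s in the laser plane (Section 5.1).
    colinear_pts: (K,3) co-linear points defining the x_s direction;
    all_pts: (N,3) all calibration points, for the plane normal;
    origin: 3D point taken as o_s (point 3 in Fig. 9)."""
    d = colinear_pts - colinear_pts.mean(axis=0)
    x_s = np.linalg.svd(d)[2][0]       # line fit: principal direction
                                       # (may need a sign flip: point 1 -> 5)
    e = all_pts - all_pts.mean(axis=0)
    n = np.linalg.svd(e)[2][-1]        # plane fit: normal direction
    y_s = np.cross(n, x_s)             # right-handed in-plane axis
    y_s /= np.linalg.norm(y_s)
    R_sm = np.column_stack([x_s, y_s])  # the 3x2 matrix of Eq. (9)
    return R_sm, np.asarray(origin, float)  # origin supplies T_sm
```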

5.2. Solving the intrinsic parameters

After the 2-D coordinate frame $o_s x_s y_s$ is established, the 3D calibration points in $o_m x_m y_m z_m$ should be transformed into $o_s x_s y_s$. From the extrinsic model (Eq. (8)) we have

$$P_s = (H_m^s)^{-1} P_m \qquad (15)$$

where $P_m$ is the 3D form of a calibration point in $o_m x_m y_m z_m$ and $P_s$ is the 2D form of the calibration point in $o_s x_s y_s$. From Eq. (15), the calibration points in $o_s x_s y_s$ can be obtained; together with their corresponding points on the image, conjugate pairs are created. By substituting the conjugate pairs into Eq. (7), the intrinsic parameters $a_1, a_2, \ldots, a_8$ can be worked out.
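Since the columns of $R_m^s$ are orthonormal, the inverse mapping of Eq. (15) reduces to a transpose; a short sketch (again with our own helper name):

```python
import numpy as np

def cmm_to_sensor(P_m, R_sm, o_s):
    """Eq. (15): express a 3D calibration point in the 2D laser-plane frame.
    R_sm has orthonormal columns, so its pseudo-inverse is R_sm^T."""
    return R_sm.T @ (np.asarray(P_m, float) - o_s)   # (x_s, y_s)
```

Pairing each result with its image point $(u, v)$ yields the conjugate pairs that feed the linear solve of Eq. (7).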

6. Experiments and results

Fig. 7. The coordinate of the origin of the camera coordinate frame in the CMM machine coordinate frame and in the global world frame corresponding to different sensor positions.

Fig. 8. The procedure for transforming the local world coordinate frames into the CMM machine coordinate frame.

Fig. 9. Establishing the 2D coordinate frame using the 3D points in the laser plane.

The structured light sensor designed in this work consists of a WATEC (WAT-902B) CCD camera with a 12 mm lens, a laser plane projector (wavelength 650 nm, line width <0.4 mm) and an LED lamp, as shown in Fig. 10. The sensor is mounted on a CMM made by the Leader corporation, with a RENISHAW UCCLite2 controller, as shown in Fig. 11. The CCD array resolution is 752 (H) × 582 (V), and the dimension of a pixel on the CCD array is 8.6 μm (H) × 8.3 μm (V). The resolution of the computer image is 768 (H) × 576 (V). Thus $N_x$ and $N_y$ in Eq. (3) are derived as

$$N_x = \frac{768}{752 \times 8.6 \times 10^{-3}} = 118.75309, \qquad N_y = \frac{576}{582 \times 8.3 \times 10^{-3}} = 119.23984$$

The angle between the optical axis and the laser plane is about 35°. As shown in Fig. 12, the minimum laser line length is $L_1 = 60$ mm, the maximum laser line length is $L_2 = 80$ mm, and the working depth is 70 mm.

(1) Image processing. The video signal from the CCD camera is in the PAL standard. An image grab card (model Picolo, made in Belgium) is adopted to transform the video signal into a 768 × 576 digital image. The default setting of the image card is gain = 1000, offset = 0. A laser stripe captured at this setting is given in Fig. 13, in which edge diffraction can be found. In this study the gain and the offset are reduced to gain = 800, offset = −200, and a captured laser stripe is shown in Fig. 14. Compared to Fig. 13, the width of the laser line is smaller, and the edge diffraction is removed. The brightness of the laser line is sufficient to saturate the CCD, but due to the reduction of gain and offset, the maximum gray value of the stripe is 183, not 255. The image of the planar target captured at gain = 800, offset = −200 is given in Fig. 15. It can be seen that the gray contrast between the black grid lines and the white background is clear: the gray value of the white background is 95 to 171, and the gray value of the black grid lines is 0.

Fig. 11. The experimental system.

Fig. 10. The composition of the structured light sensor: laser projector, CCD camera, and lamp.

Fig. 12. The working range and the nine regions where the standard ball is scanned to test accuracy.

Fig. 13. The image of a laser stripe captured at the image card setting gain = 1000, offset = 0.

Fig. 14. The image of a laser stripe captured at the image card setting gain = 800, offset = −200.

Fig. 15. The image of the planar target captured at the image card setting gain = 800, offset = −200.


The method for obtaining the intersection point between the laser line and a grid line is given as follows. In Fig. 14 the centroid position of the laser line is extracted using the gray-weight centroid method; it is in the form of pixels, up to 768 of them. These pixels are applied to fit a line $L_{Laser}$, as shown in Fig. 16. The centroid position of the grid line is also extracted in pixel form using the gray-weight centroid method along the row direction, ignoring the black grid intersection areas, and a line $L_{Grid}$ is fitted using the extracted pixels, as shown in Fig. 16. The intersection point between the laser line and the grid line is then formed by intersecting $L_{Laser}$ with $L_{Grid}$; in Fig. 16, $P$ is such an intersection point. This method achieves high precision for the intersection points.
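A sketch of the gray-weight centroid extraction and line intersection described above, assuming an 8-bit grayscale image array; the threshold value is our assumption, not a parameter from the paper:

```python
import numpy as np

def stripe_centroids(img, threshold=30):
    """Per-column gray-weight centroid of a (roughly horizontal) stripe."""
    rows = np.arange(img.shape[0], dtype=float)
    pts = []
    for u in range(img.shape[1]):
        col = img[:, u].astype(float)
        col[col < threshold] = 0.0              # suppress background
        if col.sum() > 0:
            pts.append((u, (rows * col).sum() / col.sum()))
    return np.array(pts)                        # sub-pixel (u, v) centers

def intersect(line1, line2):
    """Intersect two fitted lines v = m*u + b, given as (m, b) pairs."""
    (m1, b1), (m2, b2) = line1, line2
    u = (b2 - b1) / (m1 - m2)
    return u, m1 * u + b1

# L_Laser and L_Grid are fitted to the extracted centroids, e.g.:
# pts = stripe_centroids(img)
# laser_line = tuple(np.polyfit(pts[:, 0], pts[:, 1], 1))
# P = intersect(laser_line, grid_line)
```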

(2) Calibration test. The size of the planar target is 130 mm × 130 mm, with 11 lines in the row direction and 11 lines in the column direction. It was placed on the working table of the CMM and roughly adjusted to make the laser line approximately parallel to the grid lines on the target, as shown in Fig. 1. The sensor was moved to 20 positions, and 85 calibration points were created. They are approximately uniformly distributed in the laser plane. The extrinsic parameters and the intrinsic parameters are computed as follows:

$$R_m^s = \begin{bmatrix} -0.023681 & 0.248316 \\ -0.962239 & -0.263786 \\ 0.271260 & -0.931129 \end{bmatrix}$$

$a_1 = 0.103429$, $a_2 = 10.67165$, $a_3 = 1.249122$, $a_4 = -16.7319$, $a_5 = -0.305935$, $a_6 = -8.813869$, $a_7 = -0.032231$, $a_8 = 0.7830325$.

(3) Accuracy test. The centroid position of the laser line is extracted using the gray-weight centroid method along the column direction of an image. An extracted value is obtained with the coordinate $(u, v)$ on the image, where $u$ is the column number and $v$ is the extracted value along this column. Up to 768 sets of $(u, v)$ can be obtained. In Fig. 17, the extracted laser centroid positions are discrete points, while the laser line projected on the object is a continuous curve. If a point with the coordinate $(u, v)$ on the image is picked out, its corresponding point on the laser line on the object cannot be identified. Thus we cannot evaluate the accuracy of a single point on the laser line.

In this study, a standard ball, as shown in Fig. 18, is measured, and the collected scatter points are used to analyse and evaluate the accuracy of the sensor. To prevent specular reflection, the ball is uniformly painted with white matt paint. After this surface treatment the accuracy of the ball was tested with a CMM trigger probe: the radius is 20.0175 mm and the sphericity error is 0.0056 mm.

The ball is measured nine times; the laser plane intersects the ball at nine different regions, as shown in Fig. 12. In this way, the accuracy over the entire working range of the sensor can be tested. Fig. 19 shows the collected scatter points on the standard ball at region 5. All of these scatter points are applied to fit a sphere. Since the fitted sphere center and radius are generated from many points, they are not easily influenced by the accuracy of a single point. The error between the standard radius and the fitted radius is the shape error of the ball. The fitted sphere centers and the fitted radii of the nine spheres are listed in Table 1.
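The sphere fit used in the accuracy test can be posed as a linear least-squares problem by expanding $|p - c|^2 = r^2$; a minimal sketch of that formulation (ours, consistent with fitting center and radius from all scatter points):

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit: solve |p|^2 = 2 p.c + (r^2 - |c|^2).
    pts: (N,3) scatter points; returns center, radius, signed residuals."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = x[:3]
    r = np.sqrt(x[3] + c @ c)
    residuals = np.linalg.norm(pts - c, axis=1) - r  # + outside, - inside
    return c, r, residuals
```

The radius error of Fig. 20 is then the difference between the fitted radius and the standard 20.0175 mm, and the extremes of the signed residuals correspond to the outside/inside distances plotted in Fig. 22.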

Fig. 20 shows the errors between the standard radius and the fitted radii corresponding to the nine regions in the laser plane; they are distributed from −0.014 mm to +0.013 mm without a regular trend.

As the ball is fixed on the worktable, the sphere center positions of the nine measurements should be the same value. In Table 1 the fitted sphere center positions are distributed within a 0.012 mm × 0.018 mm × 0.025 mm range. It is obvious that the distribution along the Z direction is significantly bigger than along the X and Y directions, because the Z direction is along the working depth of the sensor.

Fig. 16. The image of the planar target and a laser stripe captured at the image card setting gain = 800, offset = −200, with the fitted lines $L_{Laser}$ and $L_{Grid}$ and their intersection point $P$.

Fig. 17. The discrete points on the image and the continuous curve on the object.

Fig. 18. The standard ball for accuracy test.

Fig. 19. The collected scatter points on the standard ball.

Since both the radius error and the sphere center error are generated from many scatter points, they can be considered as the systematic error of the sensor. Calibration error is the main error source that causes this systematic error.

The error from each point to the fitted sphere surface is also computed. Fig. 21 shows the error distribution between all of the scatter points and the fitted sphere surface measured at region 5; the maximum distances from the points outside and inside the sphere to the fitted surface are 0.043 mm and 0.046 mm, respectively. This kind of error comes from a single point. It is mainly caused by the image quality, including image resolution and clarity, and can be considered as the random error of the sensor.

The maximum distances from the points outside and inside the sphere to the fitted surface corresponding to the nine regions are respectively computed and shown in Fig. 22. It can be seen that the maximum errors at regions 7–9 are significantly bigger than those at regions 4–6, because the sensor resolution and clarity at regions 7–9 are lower than those at regions 4–6. The maximum errors at regions 1–3 are slightly smaller than those at regions 4–6, because the resolution at regions 1–3 is higher than at regions 4–6, while the clarity at regions 1–3 is lower than at regions 4–6.

In summary, by measuring a standard ball at nine different regions within the sensor working range, the accuracy of the sensor is thoroughly tested and analysed. The analysis results demonstrate that the sensor possesses satisfying accuracy after being calibrated using the method proposed in this study.

7. Conclusion

(1) A novel approach for simultaneously calibrating the intrinsic and extrinsic parameters of structured-light sensors is presented. A planar target etched with grid lines is adopted for generating calibration points. The manufacture of the target is simple, and the calibration process is convenient. The problem that intrinsic calibration errors introduce errors into the extrinsic calibration is completely resolved.

(2) The accuracy test results show: (1) After the sensor is calibrated using the presented one-step method, it possesses satisfying accuracy with 768 × 576 resolution, 70 mm working depth, and 60 mm to 80 mm laser line length. (2) The calibration error cannot be determined separately; it can be influenced by the image resolution, the image clarity, and the image processing method. (3) Calibration error causes systematic error of the sensor: the larger the calibration error, the larger the systematic error. (4) The random error is mainly caused by the image resolution and image clarity (image blur). The biggest random error is found at the far edge of the working depth, where the image resolution and clarity are lower than in other regions.


Table 1
The nine fitted sphere centers and radii (mm).

Region no.   x         y          z          Fitted radius
1            236.633   −358.100   −293.387   20.023
2            236.638   −358.106   −293.378   20.030
3            236.634   −358.118   −293.370   20.013
4            236.640   −358.104   −293.393   20.011
5            236.632   −358.115   −293.387   20.028
6            236.633   −358.107   −293.381   20.015
7            236.642   −358.113   −293.388   20.019
8            236.635   −358.102   −293.392   20.003
9            236.630   −358.111   −293.381   20.014
Max value    236.642   −358.100   −293.370   20.030
Min value    236.630   −358.118   −293.393   20.003

Fig. 20. The errors between the fitted radii and the standard radius.

Fig. 21. The error distribution between the fitted sphere and the scatter points measured at region 5.

Fig. 22. The maximum distance from the points outside and inside the sphere to the fitted surface.


Acknowledgements

This project is financially supported by the National Natural Science Foundation of China (Project number: 61171162) and the Ph.D. Programs Foundation of the Ministry of Education of China (Project number: 20110132110010). The authors would like to express their sincere thanks to them.

References

[1] Vosselman George. Automated planimetric quality control in high accuracy airborne laser scanning surveys. J Photogramm Remote Sens 2012;74:90–100.

[2] Hsieh Tung Hsien, Jywe Wen Yuh, Huang Hsueh Liang, Chen Shang Liang. Development of a laser-based measurement system for evaluation of the scraping workpiece quality. Opt Lasers Eng 2011;49:1045–53.

[3] Park Sang C, Chang Minho. Reverse engineering with a structured light system. Comput Ind Eng 2009;42:1377–84.

[4] Niola Vincenzo, Rossi Cesare, Savino Sergio. A new real-time shape acquisition with a laser scanner: first test results. Rob Comput Integr Manuf 2010;26:543–50.

[5] Korosec Marjan, Duhovnik Joze, Vukasinovic Nikola. Identification and optimization of key process parameters in noncontact laser scanning for reverse engineering. Comput Aided Des 2010;42:744–8.

[6] Mahmud Mussa, Joannic David, Roy Michaël, Isheil Ahmed, Fontaine Jean-François. 3D part inspection path planning of a laser scanner with control on the uncertainty. Comput Aided Des 2011;43:345–55.

[7] Van GN, Cuypers S, Bleys P, Kruth JP. A performance evaluation test for laser line scanners on CMMs. Opt Lasers Eng 2009;47:336–42.

[8] Bešić Igor, Van Gestel Nick, Kruth Jean-Pierre, Bleys Philip, Hodolič Janko. Accuracy improvement of laser line scanning for feature measurements on CMM. Opt Lasers Eng 2011;49:1274–80.

[9] Liu Zhen, Sun Junhua, Wang Heng, Zhang Guangjun. Simple and fast rail wear measurement method based on structured light. Opt Lasers Eng 2011;49:1343–51.

[10] Park Jiyoung, Kim Cheolhwon, Na Jaekeun. Using structured light for efficient depth edge detection. Image Vision Comput 2008;26:1449–65.

[11] Zexiao Xie, Weitong Zhu, Zhang Zhiwei, Ming Jin. A novel approach for the field calibration of line structured-light sensors. Measurement 2010;43:190–6.

[12] Vincenzo Niola, Rossi Cesare, Savino Sergio, Strano Salvatore. A method for the calibration of a 3-D laser scanner. Rob Comput Integr Manuf 2011;27:479–84.

[13] Zhenzhong Wei, Cao Lijun, Zhang Guangjun. A novel 1D target-based calibration method with unknown orientation for structured light vision sensor. Opt Laser Technol 2010;42:570–4.

[14] Jing Xu, Douet Jules, Zhao Jianguo, Song Libin, Chen Ken. A simple calibration method for structured light-based 3D profile measurement. Opt Laser Technol 2013;48:187–93.

[15] Che C G, Ni J. A ball-target-based extrinsic calibration technique for high-accuracy 3-D metrology using off-the-shelf laser stripe sensors. Precis Eng 2002;24:210–9.

[16] Xie Z, Zhang C, Zhang Q. A simplified method for the extrinsic calibration of structured light sensors using a single-ball target. Int J Mach Tools Manuf 2004;44:1197–203.

[17] Xie Z, Zhang Q, Zhang G. Modeling and calibration of a structured-light-sensor-based five-axis scanning system. Measurement 2004;36:185–94.

[18] Santolaria J, Pastor JJ, Brosed FJ, Aguilar JJ. A one-step intrinsic and extrinsic calibration method for laser line scanner operation in coordinate measuring machines. Meas Sci Technol 2009;20:1–12.

[19] Tsai Roger Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Rob Autom 1987;3:323–44.

[20] Zhou Fuqiang, Cui Yi, Liu Liu, Gao He. Distortion correction using a single image based on projective invariability and separate model. Int J Light Electron Opt 2013;124:3125–30.
