<ul><li><p>J Intell Robot Syst, DOI 10.1007/s10846-014-0029-6</p><p>6-DoF Low Dimensionality SLAM (L-SLAM)</p><p>Nikos Zikos, Vassilios Petridis</p><p>Received: 7 May 2013 / Accepted: 22 January 2014. © Springer Science+Business Media Dordrecht 2014</p><p>Abstract In this paper, a new Simultaneous Localization and Mapping (SLAM) method is proposed, called L-SLAM, which is a Low dimension version of the FastSLAM family of algorithms. L-SLAM uses a particle filter of lower dimensionality than FastSLAM and achieves better accuracy than FastSLAM 1.0 and 2.0 for a small number of particles. L-SLAM is suitable for high dimensionality problems which exhibit high computational complexity, like 6-DoF 3D SLAM. Unlike the FastSLAM algorithms, which use Extended Kalman Filters (EKF), the L-SLAM algorithm updates the particles using linear Kalman filters. A planar SLAM problem of a rear-drive car-like robot as well as a three-dimensional SLAM problem with 6 DoF of an airplane robot is presented. Experimental results based on real case scenarios using the Car Park and Victoria Park datasets are presented for the planar SLAM. Also, results based on a simulated environment and a real case scenario from the Koblenz datasets are presented and compared with the three-dimensional version of FastSLAM 1.0 and 2.0. The experimental results demonstrate the superiority of the proposed method over FastSLAM 1.0 and 2.0.</p><p>N. Zikos · V. Petridis, Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece. e-mail: nzikos@auth.gr</p><p>V. Petridis, e-mail: petridis@eng.auth.gr</p><p>Keywords: SLAM · L-SLAM · FastSLAM · Dimensionality reduction</p><p>1 Introduction</p><p>The problem of Simultaneous Localization And Mapping (SLAM) is vital in the case of autonomous robots and vehicles navigating in unknown environments [18, 20]. Usually the map consists of a sequence of features (or landmarks), each one of which represents the position of an obstacle or a part of an obstacle (i.e. 
a big obstacle can be represented by many features).</p><p>Although the Extended Kalman Filter (EKF) was widely used for many years in the SLAM problem [19], it has the disadvantage that the computational cost increases significantly with the number of features. In early 2002 Montemerlo et al. [69] presented a new solution to stochastic SLAM, called FastSLAM, that approaches the problem from a Bayesian point of view, while many more approaches [3, 10, 11] have been proposed since then. FastSLAM separates the state variables into two groups. The first one is estimated using particle filtering while the second one is estimated using the Extended Kalman Filter. The resulting algorithm is an instance of the Rao-Blackwellized [4] particle filtering method. This method significantly reduces the complexity of the problem [18] in environments with many features.</p><p>In this paper we present the L-SLAM (Low dimensional SLAM) algorithm [14] with unknown data association [22]. The key idea is to sample only the robot's orientation on each particle, in contrast to the FastSLAM algorithms, which sample the orientation along with the position of the robot. As a result we have a reduced dimensionality particle filter. In most kinematic models the orientation introduces nonlinearity into the system, thus in L-SLAM the particle filtering technique is used only for the orientation. On the other hand, the robot's translation can be described in a linear form, thus it can be estimated, along with each feature's position, by linear Kalman filters as shown below. In addition to our previous work [14, 22], the L-SLAM method is enhanced with the Kalman smoothing technique instead of the distribution merging technique, which was a good approximation. 
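Concretely, the bookkeeping each particle has to carry under this scheme can be sketched as follows. This is a minimal illustration in our own naming, not the paper's implementation: each particle stores only a sampled heading, plus one joint Gaussian per feature over the robot's and that feature's positions.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class LandmarkFilter:
    # Joint Gaussian over [p_x, p_y, theta_x, theta_y]: the robot's
    # position stacked with one feature's position (see Section 3.2).
    mean: np.ndarray  # shape (4,)
    cov: np.ndarray   # shape (4, 4)

@dataclass
class Particle:
    # Only the orientation is sampled; translation and map stay linear-Gaussian.
    heading: float
    weight: float = 1.0
    landmarks: dict = field(default_factory=dict)  # feature index -> LandmarkFilter
```

The contrast with FastSLAM is that the robot's position does not appear in the sampled part of the state at all; it lives inside the linear filters.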
With the Kalman smoothing technique it is proved (see Appendix) that the L-SLAM method is mathematically consistent and exactly describes the SLAM problem.</p><p>Particle filter dimensionality reduction gives the opportunity to use fewer particles while attaining the same accuracy. This reduction is all the more welcome when dealing with problems of high dimensionality. SLAM algorithms in the three-dimensional space are computationally costly [13], because six dimensions (three for the robot's translation and three for its orientation) are required. In contrast to FastSLAM, L-SLAM reduces the dimensionality by three, resulting in a reduction of the complexity and the computational cost.</p><p>In this paper we present the planar version of this method, in order to compare it with the planar FastSLAM 1.0 and 2.0 algorithms. We also present the extension of the L-SLAM algorithm to the three-dimensional space: the 3-D version of this method is compared with the 3-D version of FastSLAM 1.0 [1] and 2.0.</p><p>2 System Description</p><p>When mobile robots move in an unknown terrain with obstacles, the perception of the robot's environment is crucial in order to interact with it. Thus, a variety of techniques has been proposed. Such techniques are the SLAM methods, which try to recover the terrain's map and the robot's pose using sensor data from measurements and control inputs. In these methods the map is considered as a set of features \(\Theta = [\theta_1, \theta_2, \ldots, \theta_N]\), each one corresponding to an obstacle or a specific point of the environment (Fig. 2). The robot's path is represented by a time-series of its translation \(p^t = [p_1, p_2, \ldots, p_t]\) and a time-series of its orientation \(\theta^t = [\theta_1, \theta_2, \ldots, \theta_t]\), or heading direction, with respect to a global axis system. In a planar problem each feature \(\theta_i\) (e.g. \(\theta_i = [\theta_{x,i}\;\theta_{y,i}]^T\) in planar SLAM and \(\theta_i = [\theta_{x,i}\;\theta_{y,i}\;\theta_{z,i}]^T\) in 3D SLAM) is a vector whose entries are the Cartesian coordinates of the obstacle. The robot's translation \(p_t\) (e.g. 
\(p_t = [p_{x,t}\;p_{y,t}]^T\) in planar SLAM and \(p_t = [p_{x,t}\;p_{y,t}\;p_{z,t}]^T\) in 3D SLAM) is a vector whose entries are the position at time \(t\), while \(\theta_t\) represents the angle of the robot's orientation at time \(t\) with respect to a global axis system.</p><p>2.1 Planar Model</p><p>In this paper a rear-wheel-drive model of a real case scenario is presented. The mechanical characteristics of the robotic car are derived from the CarPark - Victoria Park experiment [12]. In Fig. 1 the mechanical characteristics of the car and the sensor are presented.</p><p>The kinematic model of the rear-drive car-like robot is given in Eq. 1:</p><p>\[
\begin{bmatrix} p_{x,t}\\ p_{y,t}\\ \theta_t \end{bmatrix} =
\begin{bmatrix}
p_{x,t-1} + v_t\,\Delta t\left[\cos(\theta_{t-1}) - \big(a\sin(\theta_{t-1}) + b\cos(\theta_{t-1})\big)\tfrac{\tan(s_t)}{L}\right]\tfrac{1}{1-\tan(s_t)\tfrac{H}{L}}\\
p_{y,t-1} + v_t\,\Delta t\left[\sin(\theta_{t-1}) + \big(a\cos(\theta_{t-1}) - b\sin(\theta_{t-1})\big)\tfrac{\tan(s_t)}{L}\right]\tfrac{1}{1-\tan(s_t)\tfrac{H}{L}}\\
\theta_{t-1} + v_t\,\Delta t\,\tfrac{\tan(s_t)}{L - H\tan(s_t)}
\end{bmatrix}
\tag{1}
\]</p><p>where \(v_t\) is the robot's velocity at time \(t\), \(s_t\) is the steering angle at time \(t\), \(L\) is the distance between the robot's axles, \(H\) is the distance of the rear wheel from the center of the rear axle, \(a\) is the displacement of the sensor from the rear axle and \(b\) is the displacement of the sensor from the robot's longitudinal axis (Fig. 1).</p><p>Fig. 1 Robot's schematic with the dimensions used in the kinematic model. The center of the robot's coordinate system is placed at the tip of the laser sensor</p></li><li><p>The measurement model of a distance and bearing sensor is given by the nonlinear equations (Fig. 2):</p><p>\[
z_t = g(p_t, \theta_t, \theta_n) = \begin{bmatrix} d_t \\ \varphi_t \end{bmatrix} =
\begin{bmatrix}
\sqrt{(p_{x,t} - \theta_{x,n})^2 + (p_{y,t} - \theta_{y,n})^2}\\
\arctan\!\left(\tfrac{p_{y,t} - \theta_{y,n}}{p_{x,t} - \theta_{x,n}}\right) - \theta_t
\end{bmatrix}
\tag{2}
\]</p><p>2.2 6-DoF Airplane-Like Model</p><p>In this paper we use a kinematic model of a virtual vehicle that moves in 3D space and is driven by three control inputs. By definition the vehicle's heading direction matches the X axis of its own axis system. 
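As a concrete illustration, the planar motion model of Eq. 1 can be written directly as code. The sketch below is our own (function and argument names are not from the paper); symbols follow Eq. 1 and Fig. 1: v is the velocity, s the steering angle, dt the time step, and L, H, a, b the vehicle geometry.

```python
import numpy as np

def planar_motion(px, py, theta, v, s, dt, L, H, a, b):
    """One step of the rear-drive car-like kinematic model (Eq. 1)."""
    ts = np.tan(s)
    scale = 1.0 / (1.0 - ts * H / L)  # common factor 1 / (1 - tan(s) H/L)
    px_new = px + v * dt * (np.cos(theta)
                            - (a * np.sin(theta) + b * np.cos(theta)) * ts / L) * scale
    py_new = py + v * dt * (np.sin(theta)
                            + (a * np.cos(theta) - b * np.sin(theta)) * ts / L) * scale
    theta_new = theta + v * dt * ts / (L - H * ts)
    return px_new, py_new, theta_new
```

With zero steering (s = 0) all the tan(s) terms vanish and the robot translates straight along its heading, which gives a quick sanity check of the implementation.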
The three control inputs consist of the linear velocity (along the local X axis), the rotational rate around the local X axis (roll) and the rotational rate around the local Y axis (pitch) (Fig. 3). It is worth noting that this kinematic model is an approximation of an airplane's kinematic model.</p><p>The measurement model corresponds to a three-dimensional laser scanning sensor. An observed feature is represented in a spherical coordinate system. Thus a transformation is necessary in order to obtain the feature's position in Cartesian form and backwards.</p><p>Fig. 2 The robot coordinate system and the position of the \(i\)-th landmark with respect to the robot coordinate system \(O_R\). \(\varphi\) is the angle between the vector \(X_R\) and the vector from \(O_R\) to the landmark</p><p>Fig. 3 The robot's coordinate system in the three-dimensional space. The three control inputs consist of the linear velocity (along the local X axis), the rotational rate around the local X axis (roll) and the rotational rate around the local Y axis (pitch)</p><p>2.2.1 Kinematic Model</p><p>The kinematic model in the three-dimensional space is much more complicated than the one in the two-dimensional space. In this multidimensional space the most difficult problem is the calculation of the orientation of the robot. The kinematic model, which is described below, is based on the concept of translation-rotation, due to its mathematical simplicity.</p><p>The kinematic model is given by:</p><p>\[
\begin{bmatrix} p_{x,t} \\ p_{y,t} \\ p_{z,t} \\ R_t \end{bmatrix} =
\begin{bmatrix}
p_{x,t-1} + v_t \cos(b_{t-1})\cos(c_{t-1})\,\Delta t \\
p_{y,t-1} + v_t \cos(b_{t-1})\sin(c_{t-1})\,\Delta t \\
p_{z,t-1} - v_t \sin(b_{t-1})\,\Delta t \\
R_{t-1}\,RotY(\omega_{y,t}\Delta t)\,RotX(\omega_{x,t}\Delta t)
\end{bmatrix}
\tag{3}
\]</p><p>where \(R_t\) is the robot's rotation matrix, \(v_t\) is the linear velocity, \(\omega_{x,t}, \omega_{y,t}\) are the rotational rates around the X and Y axes respectively and \(a, b, c\) are the XYZ Euler angles of the vehicle's orientation with respect to the global axis system. \(RotY(\cdot)\) and \(RotX(\cdot)\) refer to the rotation matrices rotating a vector around the Y and X axes respectively. The Euler angles are given by Eq. 
4:</p><p>\[
a_t = \mathrm{atan2}\big(R_{t,(3,2)},\, R_{t,(3,3)}\big),\qquad
c_t = \mathrm{atan2}\big(R_{t,(2,1)},\, R_{t,(1,1)}\big),\qquad
b_t = \mathrm{atan2}\big({-R_{t,(3,1)}},\, \cos(c_t)R_{t,(1,1)} + \sin(c_t)R_{t,(2,1)}\big)
\tag{4}
\]</p><p>where \(R_{t,(i,j)}\) is the element in the \(i\)-th row and \(j\)-th column of the rotation matrix \(R_t\).</p></li><li><p>2.2.2 Measurement Model</p><p>The measurement model corresponds to a three-dimensional laser scanning sensor, which measures the distance and the direction of an observed feature. The spherical coordinate system is used (Fig. 4).</p><p>The spherical-to-Cartesian coordinate transformation is described by the equation below:</p><p>\[
\theta_i^R = \begin{bmatrix} \theta_{x,i}^R \\ \theta_{y,i}^R \\ \theta_{z,i}^R \end{bmatrix} =
\begin{bmatrix} d_i\cos(\omega_{2,i})\cos(\omega_{1,i}) \\ d_i\cos(\omega_{2,i})\sin(\omega_{1,i}) \\ d_i\sin(\omega_{2,i}) \end{bmatrix}
\tag{5}
\]</p><p>where \(d_i\) is the Euclidean distance between the sensor and the observed feature \(i\), and \(\omega_{1,i}\) and \(\omega_{2,i}\) are the angular displacements from the positive x-axis and from the x-y plane, respectively.</p><p>The resulting position refers to the robot's local axis system, thus a transformation (6) is necessary in order to obtain its global position:</p><p>\[
\theta_i = R_t\,\theta_i^R + P_t
\tag{6}
\]</p><p>where \(\theta_i = [\theta_{x,i}\;\theta_{y,i}\;\theta_{z,i}]^T\) is the feature's position in the global axis system, \(\theta_i^R\) is the feature's position in the robot's axis system, and \(P_t = [p_{x,t}\;p_{y,t}\;p_{z,t}]^T\) and \(R_t\) are the robot's position vector and rotation matrix respectively.</p><p>The inverse measurement model is given by the equations:</p><p>\[
\omega_1 = \mathrm{atan2}\big(\theta_{y,i}^R,\, \theta_{x,i}^R\big),\qquad
\omega_2 = \mathrm{atan2}\Big(\theta_{z,i}^R,\, \sqrt{(\theta_{x,i}^R)^2 + (\theta_{y,i}^R)^2}\Big),\qquad
d = \sqrt{(\theta_{x,i}^R)^2 + (\theta_{y,i}^R)^2 + (\theta_{z,i}^R)^2}
\tag{7}
\]</p><p>Fig. 
4 The measurement model of a three-dimensional laser scanner sensor</p><p>where the feature's coordinates relative to the robot's frame are obtained from the equation:</p><p>\[
\theta_i^R = R_t^{-1}(\theta_i - P_t)
\tag{8}
\]</p><p>3 Planar L-SLAM with Unknown Data Association</p><p>Most SLAM algorithms calculate variants of the following posterior probability distribution:</p><p>\[
f(p^t, \theta^t, \Theta \mid z^t, u^t, n^t)
\tag{9}
\]</p><p>where \(z^t = [z_1, z_2, \ldots, z_t]\) is a time-series of measurements produced by measuring instruments (e.g. laser sensors produce a sequence of distance and bearing measurements of the visible obstacles). Vector \(n^t = [n_1, n_2, \ldots, n_t]\) is a time-series of indices such that index \(n_j\), \(j = 1..t\), matches the observation \(z_j\) to the corresponding feature \(\theta_{n_j}\). Vector \(u^t = [u_1, u_2, \ldots, u_t]\) is the time-series of the robot's control vectors (e.g. linear velocity and steering angle).</p><p>In the L-SLAM algorithm a different factorization of the posterior (9) is proposed:</p><p>\[
f(p^t, \theta^t, \Theta \mid z^t, u^t, n^t) =
f(\theta^t \mid z^t, u^t, n^t)\,
f(p_t, \theta_{n_t} \mid \theta^t, z^t, u^t, n^t)\,
f(p^{t-1}, \theta_{i \neq n_t} \mid p_t, \theta_{n_t}, \theta^t, z^t, u^t, n^t)
\tag{10}
\]</p><p>This factorization states that the current feature's position and the robot's position can be calculated independently as long as the robot's orientation is known. The factorization consists of three factors corresponding to the particle filter (Section 3.1), the Kalman filter (Section 3.2) and the Kalman smoothing (Section 3.3), respectively. It is noteworthy that this factorization is exact; its derivation can be found in the Appendix.</p><p>3.1 Particle Filter</p><p>The first step of the L-SLAM algorithm is the robot's orientation drawing process for every particle. 
Each particle at time \(t-1\) and the control input at time \(t\) are used to generate a probabilistic guess of the new angle (11), using the robot's kinematic model (1) for the orientation (12):</p><p>\[
\theta_t^i \sim f(\theta \mid \theta_{t-1}^i, u_t)
\tag{11}
\]</p><p>\[
\theta_t^i = \theta_{t-1}^i + v_t^i\,\Delta t\,\frac{\tan(s_t^i)}{L - H\tan(s_t^i)}
\tag{12}
\]</p><p>where \(v_t^i, s_t^i\) are drawn for each particle \(i\) using the measured values \(v_t, s_t\) and their noise characteristics. It is noteworthy that the distribution of the angles depends only on the control noise and not on the measurement noise.</p><p>3.2 Kalman Filter</p><p>In contrast to the FastSLAM algorithms, in which the EKFs estimate a two-dimensional state vector (the two dimensions of the feature's position), we propose a state vector of four states that includes the robot's and the feature's positions:</p><p>\[
\xi_t^{i,j} = \begin{bmatrix} p_{x,t}^{i,j} \\ p_{y,t}^{i,j} \\ \theta_{x,t}^{i,j} \\ \theta_{y,t}^{i,j} \end{bmatrix} =
\begin{bmatrix}
p_{x,t-1}^{i,j} + v_t^i\,\Delta t\left[\cos(\theta_{t-1}^i) - \big(a\sin(\theta_{t-1}^i) + b\cos(\theta_{t-1}^i)\big)\tfrac{\tan(s_t^i)}{L}\right]\tfrac{1}{1-\tan(s_t^i)\tfrac{H}{L}}\\
p_{y,t-1}^{i,j} + v_t^i\,\Delta t\left[\sin(\theta_{t-1}^i) + \big(a\cos(\theta_{t-1}^i) - b\sin(\theta_{t-1}^i)\big)\tfrac{\tan(s_t^i)}{L}\right]\tfrac{1}{1-\tan(s_t^i)\tfrac{H}{L}}\\
\theta_{x,t-1}^{i,j}\\
\theta_{y,t-1}^{i,j}
\end{bmatrix}
\tag{13}
\]</p><p>where \(p\) is the robot's position, \(v\) is the robot's velocity, \(\theta\) is the robot's heading angle (orientation) and \(\theta_j\) is the feature's position. The superscript \(i\) corresponds to the \(i\)-th particle while the superscript \(j\) corresponds to the \(j\)-th feature. The subscript \(t\) refers to the time instance \(t\).</p><p>Using Eq. 13 we can write the state transition model in a linear form:</p><p>\[
\xi_t^{i,j} = F\,\xi_{t-1}^{i,j} + B\,u_t^i + \varepsilon_t^{i,j}
\tag{14}
\]</p><p>where</p><p>\[
F = I,\qquad B = \begin{bmatrix} \Delta t & 0 \\ 0 & \Delta t \\ 0 & 0 \\ 0 & 0 \end{bmatrix}
\]</p><p>and</p><p>\[
u_t^i = \begin{bmatrix}
v_t^i\left[\cos(\theta_{t-1}^i) - \big(a\sin(\theta_{t-1}^i)+b\cos(\theta_{t-1}^i)\big)\tfrac{\tan(s_t^i)}{L}\right]\tfrac{1}{1-\tan(s_t^i)\tfrac{H}{L}}\\
v_t^i\left[\sin(\theta_{t-1}^i) + \big(a\cos(\theta_{t-1}^i)-b\sin(\theta_{t-1}^i)\big)\tfrac{\tan(s_t^i)}{L}\right]\tfrac{1}{1-\tan(s_t^i)\tfrac{H}{L}}
\end{bmatrix}
\]</p><p>where \(\varepsilon_t^{i,j} \sim N\big(0, Q_{\varepsilon,t}^{i,j}\big)\) is the state noise vector. The robot's control input vector involves noise: \(\tilde{v} = v + \varepsilon_1\), where \(v\) is the velocity command and \(\tilde{v}\) is the actual velocity. 
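Because Eq. 14 is linear in the state, the time update is an ordinary Kalman prediction. A minimal sketch in our own naming (F = I and B as defined above; u is the 2-vector that already absorbs the sampled heading, which is what keeps the model linear):

```python
import numpy as np

def kf_predict(mean, cov, u, Q, dt):
    """Linear time update for the four-state vector of Eq. 13.

    mean, cov -- prior over [p_x, p_y, theta_x, theta_y]
    u         -- control vector of Eq. 14 (2-vector); it already
                 contains the sampled heading, so the model stays linear
    Q         -- covariance of the state noise
    """
    F = np.eye(4)
    B = np.array([[dt, 0.0],
                  [0.0, dt],
                  [0.0, 0.0],
                  [0.0, 0.0]])
    mean_new = F @ mean + B @ u
    cov_new = F @ cov @ F.T + Q
    return mean_new, cov_new
```

Note that the feature components are unaffected by the control, exactly as the zero rows of B encode.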
Then the noise vector of the state transition model (14) is a Gaussian noise vector that depends on the robot's heading angle, as shown in Eq. 15:</p><p>\[
\varepsilon_t^i = \begin{bmatrix}
\varepsilon_1\left[\cos(\theta_{t-1}^i) - \big(a\sin(\theta_{t-1}^i)+b\cos(\theta_{t-1}^i)\big)\tfrac{\tan(s_t^i)}{L}\right]\tfrac{1}{1-\tan(s_t^i)\tfrac{H}{L}}\\
\varepsilon_1\left[\sin(\theta_{t-1}^i) + \big(a\cos(\theta_{t-1}^i)-b\sin(\theta_{t-1}^i)\big)\tfrac{\tan(s_t^i)}{L}\right]\tfrac{1}{1-\tan(s_t^i)\tfrac{H}{L}}\\
0\\
0
\end{bmatrix}
\tag{15}
\]</p><p>As measured quantities we use the relative position of each feature with respect to the robot's position. As shown in Eq. 16, for the calculation of this relative position we use the distance and bearing measurements combined with the robot's orientation. The angle of the robot's orientation is generated by the drawing method described above (11, 12):</p><p>\[
z_t^{i,j} = \begin{bmatrix} \Delta x_t^{i,j} \\ \Delta y_t^{i,j} \end{bmatrix}
= \begin{bmatrix} \theta_{x,t}^{i,j} - p_{x,t}^{i,j} \\ \theta_{y,t}^{i,j} - p_{y,t}^{i,j} \end{bmatrix}
= \begin{bmatrix} d_t^j \cos(\theta_t^i + \varphi_t^j) \\ d_t^j \sin(\theta_t^i + \varphi_t^j) \end{bmatrix}
\tag{16}
\]</p><p>where \(d\) is the feature's actual distance from the sensor and \(\varphi\) is the actual bearing of the feature.</p><p>Using Eq. 16 we can write the linear measurement model:</p><p>\[
z_t^{i,j} = H\,\xi_t^{i,j} + \delta_t^{i,j}
= \begin{bmatrix} -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \end{bmatrix} \xi_t^{i,j} + \delta_t^{i,j}
\tag{17}
\]</p><p>where \(\delta_t^{i,j} \sim N\big(0, Q_{\delta,t}^{i,j}\big)\) is the linearized noise vector corresponding to the instrument's noise vector, which is described below.</p><p>The instrument's measurements are noisy: \(\tilde{d} = d + \varepsilon_3\), \(\tilde{\varphi} = \varphi + \varepsilon_4\), where \(d, \varphi\) are t...</p></li></ul>