
A Flexible and Scalable SLAM System with Full 3D Motion Estimation

Stefan Kohlbrecher and Oskar von Stryk
Technische Universität Darmstadt
Hochschulstraße 10, Darmstadt, Germany
{kohlbrecher, stryk}@sim.tu-darmstadt.de

Johannes Meyer and Uwe Klingauf
Technische Universität Darmstadt
Petersenstraße 30, Darmstadt, Germany
{meyer, klingauf}@fsr.tu-darmstadt.de

Abstract: For many applications in Urban Search and Rescue (USAR) scenarios robots need to learn a map of unknown environments. We present a system for fast online learning of occupancy grid maps that requires low computational resources. It combines a robust scan matching approach using a LIDAR system with a 3D attitude estimation system based on inertial sensing. By using a fast approximation of map gradients and a multi-resolution grid, reliable localization and mapping capabilities in a variety of challenging environments are realized. Multiple datasets showing the applicability in an embedded hand-held mapping system are provided. We show that the system is sufficiently accurate as to not require explicit loop closing techniques in the considered scenarios. The software is available as an open source package for ROS.

Keywords: Simultaneous Localization and Mapping, Inertial Navigation, Robust and Fast Localization

I. INTRODUCTION

The ability to learn a model of the environment and to localize itself is one of the most important abilities of truly autonomous robots able to operate within real world environments. In this paper, we present a flexible and scalable system for solving the SLAM (Simultaneous Localization and Mapping) problem that has successfully been used on unmanned ground vehicles (UGV), unmanned surface vehicles (USV) and a small indoor navigation system. The approach consumes low computational resources and thus can be used on low-weight, low-power and low-cost processors such as those commonly used on small-scale autonomous systems. Our approach uses the ROS meta operating system [1] as middleware and is available as open source software. It honors the API of the ROS navigation stack and thus can easily be interchanged with other SLAM approaches available in the ROS ecosystem.

The system introduced in this paper aims at enabling sufficiently accurate environment perception and self-localization while keeping computational requirements low. It can be used for SLAM in small scale scenarios where large loops do not have to be closed and where leveraging the high update rate of modern LIDAR systems is beneficial. Such scenarios include the RoboCup Rescue competition, where robots have to find victims in simulated earthquake scenarios which feature rough terrain and thus require full 6DOF motion estimation of vehicles, or the indoor navigation of agile aerial vehicles that move fast compared to ground robots.

[Fig. 1. Overview of the mapping and navigation system (dashed lines depict optional information): the 2D SLAM subsystem (LIDAR, preprocessing, scan matching, mapping) exchanges a 2D pose estimate and attitude/initial pose with the 3D navigation subsystem (IMU, GPS, compass, altimeter, navigation filter, controller), which also handles stabilization and joint values.]
Previous results where the system has been used in the context of building semantic world models in USAR environments are available in [2].

Our approach combines a 2D SLAM system based on the integration of laser scans (LIDAR) into a planar map and an integrated 3D navigation system based on an inertial measurement unit (IMU), which incorporates the 2D information from the SLAM subsystem as one possible source of aiding information (Fig. 1). While SLAM usually runs in soft real-time, triggered by the updates of the laser scanner device, the full 3D navigation solution is calculated in hard real-time and usually forms a part of the vehicle control system.

II. RELATED WORK

There has been a wealth of research into the SLAM problem in recent years, with reliably working solutions for typical office-like indoor scenarios using Rao-Blackwellized particle filters like Gmapping [3] being available as open source software. However, these solutions work best in planar environments, rely on available, sufficiently accurate odometry and do not leverage the high update rate provided by modern LIDAR systems. For unstructured environments that lead to significant roll and pitch motion of the carrier, or for implementation on aerial platforms, such systems are not applicable or have to be modified significantly.

A distinction between SLAM frontend and backend systems has to be made. While SLAM frontends are used to estimate robot movement online in real-time, the backend is used to perform optimization of the pose graph given constraints between poses that have been generated beforehand using the frontend. The approach presented in this paper serves as a SLAM frontend and does not provide pose graph optimization like the solutions presented in [4] and [5]. However, we show that in many scenarios such optimizations are not needed under real world conditions, as the approach is sufficiently accurate for robots to perform their mission.

Multiple indoor navigation systems based on laser scan matching have been presented for use on quadrotor UAVs [6], [7], [8]. Here, a two-stage approach is employed, with a frontend fast scan alignment step for pose estimation and a slower backend mapping step running in the background or on a remote computer. The pose estimates from scan alignment are not directly incorporated in the vehicles' control loop and thus they move at a low speed only.

In [9] and [10] other frontend systems used on mobile robots are described. In contrast to the system presented in this paper, they do not provide a full 6DOF pose estimate and are not available as open source software.

Work on localization using scan matching started with the Iterative Closest Point (ICP) algorithm [11], which originated as a general approach for registering 3D point clouds. The main drawback of many ICP-based methods is the expensive search for point correspondences, which has to be done in every iteration. Polar Scan Matching (PSM) [12] avoids the correspondence search by taking advantage of the natural polar coordinate system of the laser scans to estimate a match between them. Scans have to be preprocessed to be used in the polar scan matcher. The real-time correlative scan matching approach [13] uses an exhaustive sampling based approach for scan matching. Using several optimizations, this approach is capable of real-time application.
Normal Distribution Transform (NDT) [14] based scan matching aligns scans to a mixture of normal distributions representing preceding scans.

For littoral water scenarios there has been research into using expensive multi-sensor scanners [15], but to the authors' knowledge there is no single emitter LIDAR-based SLAM approach available that was tested under real world conditions.

III. SYSTEM OVERVIEW

As the presented system has to be usable on platforms that exhibit full 6DOF motion, as opposed to the 3DOF motion assumed in many other 2D grid-based SLAM algorithms, our system has to estimate the full 6DOF state consisting of translation and rotation of the platform. To achieve this, the system consists of two major components. A navigation filter fuses information from the inertial measurement unit and other available sensors to form a consistent 3D solution, while a 2D SLAM system is used to provide position and heading information within the ground plane. Both estimates are updated individually and are only loosely coupled so that they remain synchronized over time.

We define the navigation coordinate system as a right-handed system having the origin at the starting point of the platform, with the z axis pointing upwards and the x axis pointing into the yaw direction of the platform at startup. The full 3D state is represented by $\mathbf{x} = (\boldsymbol{\Omega}^T\ \mathbf{p}^T\ \mathbf{v}^T)^T$, where $\boldsymbol{\Omega} = (\phi, \theta, \psi)^T$ are the roll, pitch and yaw Euler angles, and $\mathbf{p} = (p_x, p_y, p_z)^T$ and $\mathbf{v} = (v_x, v_y, v_z)^T$ are the position and velocity of the platform expressed in the navigation frame.

The inertial measurements constitute the input vector $\mathbf{u} = (\boldsymbol{\omega}^T\ \mathbf{a}^T)^T$ with the body-fixed angular rates $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z)^T$ and accelerations $\mathbf{a} = (a_x, a_y, a_z)^T$. The motion of an arbitrary rigid body is described by the nonlinear differential equation system

$$\dot{\boldsymbol{\Omega}} = \mathbf{E}_{\boldsymbol{\Omega}} \, \boldsymbol{\omega} \qquad (1)$$

$$\dot{\mathbf{p}} = \mathbf{v} \qquad (2)$$

$$\dot{\mathbf{v}} = \mathbf{R}_{\boldsymbol{\Omega}} \, \mathbf{a} + \mathbf{g} \qquad (3)$$

where $\mathbf{R}_{\boldsymbol{\Omega}}$ is the direction cosine matrix that maps a vector in the body frame to the navigation frame, $\mathbf{E}_{\boldsymbol{\Omega}}$ maps the body-fixed angular rates to the derivatives of the Euler angles, and $\mathbf{g}$ is the constant gravity vector [16]. The effects of pseudo forces due to the earth's rotation can usually be neglected when low-cost sensors are used.

Due to sensor noise, the integrated velocity and position exhibit significant drift. Therefore, further sensor information has to be integrated. In this paper, this information is provided by the scan matcher, which is well suited for use in indoor scenarios. Other possible sources are magnetic field sensors for heading information or a barometric pressure sensor for altitude estimation. If available, wheel odometry can be used to provide measurements of velocity. In outdoor scenarios, satellite navigation systems are commonly used as an aiding system to prevent the inertial navigation solution from drifting [17].

Depending on the target platform, further constraints can be introduced in the system equation, resulting in a reduction of the state space.
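The following minimal sketch illustrates one explicit Euler propagation step of the rigid-body model (1)-(3). It assumes a ZYX (roll-pitch-yaw) Euler angle convention and a fixed gravity magnitude; the struct and function names are illustrative only and are not taken from the released software.

```cpp
// One explicit Euler step of equations (1)-(3):
// Phi_dot = E_Phi * omega,  p_dot = v,  v_dot = R_Phi * a + g.
// z points up, so gravity enters as (0, 0, -9.81).
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

struct State {
  Vec3 euler;  // roll (phi), pitch (theta), yaw (psi)
  Vec3 p;      // position in the navigation frame
  Vec3 v;      // velocity in the navigation frame
};

State propagate(State s, Vec3 w, Vec3 a, double dt) {
  const double phi = s.euler.x, th = s.euler.y, psi = s.euler.z;
  const double cphi = std::cos(phi), sphi = std::sin(phi);
  const double cth  = std::cos(th),  sth  = std::sin(th);
  const double cpsi = std::cos(psi), spsi = std::sin(psi);

  // Phi_dot = E_Phi * omega (ZYX Euler angle kinematics).
  Vec3 euler_dot{
      w.x + sphi * (sth / cth) * w.y + cphi * (sth / cth) * w.z,
      cphi * w.y - sphi * w.z,
      (sphi / cth) * w.y + (cphi / cth) * w.z};

  // v_dot = R_Phi * a + g with R_Phi = Rz(psi) * Ry(theta) * Rx(phi).
  Vec3 a_nav{
      cpsi * cth * a.x + (cpsi * sth * sphi - spsi * cphi) * a.y +
          (cpsi * sth * cphi + spsi * sphi) * a.z,
      spsi * cth * a.x + (spsi * sth * sphi + cpsi * cphi) * a.y +
          (spsi * sth * cphi - cpsi * sphi) * a.z,
      -sth * a.x + cth * sphi * a.y + cth * cphi * a.z};
  const Vec3 g{0.0, 0.0, -9.81};

  s.euler = {phi + dt * euler_dot.x, th + dt * euler_dot.y, psi + dt * euler_dot.z};
  s.p     = {s.p.x + dt * s.v.x, s.p.y + dt * s.v.y, s.p.z + dt * s.v.z};
  s.v     = {s.v.x + dt * (a_nav.x + g.x), s.v.y + dt * (a_nav.y + g.y),
             s.v.z + dt * (a_nav.z + g.z)};
  return s;
}

int main() {
  State s{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}};
  // A resting IMU measures +1 g along its z axis; the position should stay put.
  s = propagate(s, {0, 0, 0}, {0, 0, 9.81}, 0.01);
  std::printf("p = (%.4f, %.4f, %.4f)\n", s.p.x, s.p.y, s.p.z);
  return 0;
}
```

In an actual navigation filter this prediction step would be complemented by measurement updates from the aiding sources listed above, but the integration scheme itself is no more involved than this.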
IV. 2D SLAM

To be able to represent arbitrary environments, an occupancy grid map is used, which is a proven approach for mobile robot localization using LIDARs in real-world environments [18]. As the LIDAR platform might exhibit 6DOF motion, the scan has to be transformed into a local stabilized coordinate frame using the estimated attitude of the LIDAR system. Using the estimated platform orientation and joint values, the scan is converted into a point cloud of scan endpoints. Depending on the scenario, this point cloud can be preprocessed, for example by downsampling the number of points or by removal of outliers. For the presented approach, only filtering based on the endpoint z coordinate is used, so that only endpoints within a threshold of the intended scan plane are used in the scan matching process.

[Fig. 2. (a) Bilinear filtering of the occupancy grid map. Point $P_m$ is the point whose value shall be interpolated. (b) Occupancy grid map and spatial derivatives.]

A. Map Access

The discrete nature of occupancy grid maps limits the precision that can be achieved and also does not allow the direct computation of interpolated values or derivatives. For this reason, an interpolation scheme allowing sub-grid cell accuracy through bilinear filtering is employed for both estimating occupancy probabilities and derivatives. Intuitively, the grid map cell values can be viewed as samples of an underlying continuous probability distribution.

Given a continuous map coordinate $P_m$, the occupancy value $M(P_m)$ as well as the gradient $\nabla M(P_m) = \left( \frac{\partial M}{\partial x}(P_m), \frac{\partial M}{\partial y}(P_m) \right)$ can be approximated by using the four closest integer coordinates $P_{00}, \ldots, P_{11}$ as depicted in Fig. 2(a). Linear interpolation along the x- and y-axis then yields

$$M(P_m) \approx \frac{y - y_0}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{11}) + \frac{x_1 - x}{x_1 - x_0} M(P_{01})\right) + \frac{y_1 - y}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{10}) + \frac{x_1 - x}{x_1 - x_0} M(P_{00})\right) \qquad (4)$$

The derivatives can be approximated by:

$$\frac{\partial M}{\partial x}(P_m) \approx \frac{y - y_0}{y_1 - y_0}\left(M(P_{11}) - M(P_{01})\right) + \frac{y_1 - y}{y_1 - y_0}\left(M(P_{10}) - M(P_{00})\right) \qquad (5)$$

$$\frac{\partial M}{\partial y}(P_m) \approx \frac{x - x_0}{x_1 - x_0}\left(M(P_{11}) - M(P_{10})\right) + \frac{x_1 - x}{x_1 - x_0}\left(M(P_{01}) - M(P_{00})\right) \qquad (6)$$

It should be noted that the sample points/grid cells of the map are situated on a regular grid with distance 1 (in map coordinates) from each other, which simplifies the presented equations for the gradient approximation.
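A minimal sketch of this bilinear map access is shown below, assuming a row-major grid of float occupancy values with unit cell spacing and omitting bounds checks; GridMap and its members are illustrative names rather than the interface of the open source package.

```cpp
// Bilinear interpolation of the map value M(Pm) and its gradient, following
// equations (4)-(6). Cell spacing is 1 in map coordinates, so the denominators
// of (4)-(6) drop out. Bounds checking is intentionally omitted.
#include <cmath>
#include <cstdio>
#include <vector>

struct GridMap {
  int width, height;
  std::vector<float> cells;  // occupancy values in [0, 1], row-major

  float at(int x, int y) const { return cells[y * width + x]; }

  void interpolate(float px, float py, float& value, float& dx, float& dy) const {
    const int x0 = static_cast<int>(std::floor(px));
    const int y0 = static_cast<int>(std::floor(py));
    const float fx = px - x0;  // (x - x0) / (x1 - x0)
    const float fy = py - y0;  // (y - y0) / (y1 - y0)

    const float m00 = at(x0, y0),     m10 = at(x0 + 1, y0);      // P00, P10
    const float m01 = at(x0, y0 + 1), m11 = at(x0 + 1, y0 + 1);  // P01, P11

    // Equation (4): interpolate along x, then along y.
    value = fy * (fx * m11 + (1.0f - fx) * m01) +
            (1.0f - fy) * (fx * m10 + (1.0f - fx) * m00);
    // Equations (5) and (6): gradient of the bilinear surface.
    dx = fy * (m11 - m01) + (1.0f - fy) * (m10 - m00);
    dy = fx * (m11 - m10) + (1.0f - fx) * (m01 - m00);
  }
};

int main() {
  GridMap map{2, 2, {0.0f, 0.0f, 1.0f, 1.0f}};  // cells at y = 1 occupied, y = 0 free
  float v, dx, dy;
  map.interpolate(0.5f, 0.5f, v, dx, dy);
  std::printf("M = %.2f, dM/dx = %.2f, dM/dy = %.2f\n", v, dx, dy);  // 0.50, 0.00, 1.00
  return 0;
}
```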
B. Scan Matching

Scan matching is the process of aligning laser scans with each other or with an existing map. Modern laser scanners have low distance measurement noise and high scan rates, so a method for registering scans can yield very accurate results. For many robot systems, the accuracy and precision of the laser scanner is much higher than that of odometry data, if the latter is available at all. The evaluation scenarios presented in section VI show examples where odometry data is not available.

Our approach is based on optimization of the alignment of beam endpoints with the map learnt so far. The basic idea of using a Gauss-Newton approach is inspired by work in computer vision [19]. Using this approach, there is no need for a data association search between beam endpoints or an exhaustive pose search. As scans get aligned with the existing map, the matching is implicitly performed with all preceding scans.

We seek to find the rigid transformation $\xi = (p_x, p_y, \psi)^T$ that minimizes

$$\xi^* = \underset{\xi}{\operatorname{argmin}} \sum_{i=1}^{n} \left[1 - M(S_i(\xi))\right]^2 \qquad (7)$$

that is, we want to find the transformation that gives the best alignment of the laser scan with the map. Here, $S_i(\xi)$ are the world coordinates of scan endpoint $s_i = (s_{i,x}, s_{i,y})^T$. They are a function of $\xi$, the pose of the robot in world coordinates:

$$S_i(\xi) = \begin{pmatrix} \cos(\psi) & -\sin(\psi) \\ \sin(\psi) & \cos(\psi) \end{pmatrix} \begin{pmatrix} s_{i,x} \\ s_{i,y} \end{pmatrix} + \begin{pmatrix} p_x \\ p_y \end{pmatrix} \qquad (8)$$

The function $M(S_i(\xi))$ returns the map value at the coordinates given by $S_i(\xi)$. Given some starting estimate of $\xi$, we want to estimate $\Delta\xi$ which optimizes the error measure according to

$$\sum_{i=1}^{n} \left[1 - M(S_i(\xi + \Delta\xi))\right]^2 \rightarrow 0 \,. \qquad (9)$$

By first order Taylor expansion of $M(S_i(\xi + \Delta\xi))$ we get:

$$\sum_{i=1}^{n} \left[1 - M(S_i(\xi)) - \nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi} \Delta\xi\right]^2 \rightarrow 0 \,. \qquad (10)$$

This equation is minimized by setting the partial derivative with respect to $\Delta\xi$ to zero:

$$2 \sum_{i=1}^{n} \left[\nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi}\right]^T \left[1 - M(S_i(\xi)) - \nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi} \Delta\xi\right] = 0 \qquad (11)$$

Solving for $\Delta\xi$ yields the Gauss-Newton equation for the minimization problem:

$$\Delta\xi = H^{-1} \sum_{i=1}^{n} \left[\nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi}\right]^T \left[1 - M(S_i(\xi))\right] \qquad (12)$$

with

$$H = \sum_{i=1}^{n} \left[\nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi}\right]^T \left[\nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi}\right] \qquad (13)$$

An approximation for the map gradient $\nabla M(S_i(\xi))$ is provided in section IV-A. With equation (8) we get

$$\frac{\partial S_i(\xi)}{\partial \xi} = \begin{pmatrix} 1 & 0 & -\sin(\psi) s_{i,x} - \cos(\psi) s_{i,y} \\ 0 & 1 & \cos(\psi) s_{i,x} - \sin(\psi) s_{i,y} \end{pmatrix} \qquad (14)$$

Using $\nabla M(S_i(\xi))$ and $\frac{\partial S_i(\xi)}{\partial \xi}$, the Gauss-Newton equation (12) can now be evaluated, yielding a step $\Delta\xi$ towards the minimum (a code sketch of this update step is given below). It is important to note that the algorithm works on non-smooth linear approximations of the map gradient $\nabla M(S_i(\xi))$, meaning that local quadratic convergence towards a minimum cannot be guaranteed. Nevertheless, the algorithm works with sufficient accuracy in practice.

For many applications a Gaussian approximation of the match uncertainty is desirable. Examples are updates of parametric filters as well as pose constraints for use with graph optimization SLAM backends. One approach is to use a sampling based covariance estimate, sampling different pose estimates close to the scan matching pose and constructing a covariance from those. This is similar to the idea of the Unscented Kalman Filter [20]. A second method is the use of the approximate Hessian matrix to arrive at a covariance estimate. Here, the covariance matrix is approximated by

$$R = \operatorname{Var}\{\xi\} = \sigma^2 \cdot H^{-1} \qquad (15)$$

where $\sigma$ is a scaling factor dependent on the properties of the laser scanner device. A comprehensive derivation is available in [21].

C. Multi-Resolution Map Representation

Any hill climbing/gradient based approach has the inherent risk of getting stuck in local minima. As the presented approach is based on gradient ascent, it also is potentially prone to getting stuck in local minima. The problem is mitigated by using a multi-resolution map representation.
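As a companion to equations (12)-(14), the sketch below accumulates H and the right-hand side of (12) for a single scan and solves the resulting 3x3 system for $\Delta\xi$. It is an illustration under stated assumptions rather than the released implementation: Endpoint, gaussNewtonStep and the MapAccess callback (standing in for the bilinear map access of Section IV-A) are hypothetical names, and the scaled inverse of H noted in the comments corresponds to the covariance approximation of equation (15).

```cpp
// One Gauss-Newton update of the scan matcher (equations 12-14).
#include <array>
#include <cmath>
#include <vector>

struct Endpoint { double x, y; };  // beam endpoint in the stabilized scan frame

// Callback returning the interpolated map value and gradient at world coordinates,
// e.g. the bilinear map access sketched in Section IV-A.
using MapAccess = void (*)(double wx, double wy, double& m, double& dmx, double& dmy);

// Returns delta-xi = H^{-1} * sum_i grad_i^T * (1 - M(S_i(xi)))  (equation 12).
std::array<double, 3> gaussNewtonStep(const std::array<double, 3>& xi,
                                      const std::vector<Endpoint>& scan,
                                      MapAccess interpolate) {
  const double c = std::cos(xi[2]), s = std::sin(xi[2]);
  double H[3][3] = {{0}};
  double b[3] = {0, 0, 0};

  for (const Endpoint& e : scan) {
    // S_i(xi): endpoint transformed into world coordinates (equation 8).
    const double wx = c * e.x - s * e.y + xi[0];
    const double wy = s * e.x + c * e.y + xi[1];

    double m, dmx, dmy;
    interpolate(wx, wy, m, dmx, dmy);

    // Row vector grad_i = grad M(S_i) * dS_i/dxi, using the Jacobian of equation (14).
    const double dpsi = dmx * (-s * e.x - c * e.y) + dmy * (c * e.x - s * e.y);
    const double J[3] = {dmx, dmy, dpsi};
    const double r = 1.0 - m;  // residual [1 - M(S_i(xi))]

    for (int k = 0; k < 3; ++k) {
      b[k] += J[k] * r;
      for (int l = 0; l < 3; ++l) H[k][l] += J[k] * J[l];  // equation (13)
    }
  }

  // Solve H * delta = b via the adjugate of the symmetric 3x3 matrix H.
  // (The same inverse, scaled by sigma^2, gives the covariance of equation (15).)
  const double a00 = H[1][1] * H[2][2] - H[1][2] * H[2][1];
  const double a01 = H[0][2] * H[2][1] - H[0][1] * H[2][2];
  const double a02 = H[0][1] * H[1][2] - H[0][2] * H[1][1];
  const double det = H[0][0] * a00 + H[1][0] * a01 + H[2][0] * a02;
  const double a10 = H[1][2] * H[2][0] - H[1][0] * H[2][2];
  const double a11 = H[0][0] * H[2][2] - H[0][2] * H[2][0];
  const double a12 = H[0][2] * H[1][0] - H[0][0] * H[1][2];
  const double a20 = H[1][0] * H[2][1] - H[1][1] * H[2][0];
  const double a21 = H[0][1] * H[2][0] - H[0][0] * H[2][1];
  const double a22 = H[0][0] * H[1][1] - H[0][1] * H[1][0];
  return {(a00 * b[0] + a01 * b[1] + a02 * b[2]) / det,
          (a10 * b[0] + a11 * b[1] + a12 * b[2]) / det,
          (a20 * b[0] + a21 * b[1] + a22 * b[2]) / det};
}
```

In practice such a step would be iterated a few times per scan and, as noted in Section IV-C, embedded in a multi-resolution scheme to reduce the risk of converging to a local minimum.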