



STAR CENTER
ShanghaiTech Automation and Robotics Center • 上海科技大学自动化与机器人中心
MARS LAB

Towards Generation and Evaluation of Comprehensive Mapping Robot Datasets

Hongyu Chen1, Xiting Zhao, Jianwen Luo, Zhijie Yang, Zehao Zhao, Haochuan Wan, Xiaoya Ye, Guangyuan Weng, Zhenpeng He, Tian Dong, and Sören Schwertfeger1

Accepted for:

IEEE Conference on Robotics and Automation (ICRA) 2019 workshop

Citation:

Hongyu Chen1, Xiting Zhao, Jianwen Luo, Zhijie Yang, Zehao Zhao, Haochuan Wan, Xiaoya Ye, Guangyuan Weng, Zhenpeng He, Tian Dong, and Sören Schwertfeger1, "Towards Generation and Evaluation of Comprehensive Mapping Robot Datasets", IEEE Conference on Robotics and Automation (ICRA) 2019 workshop: IEEE Press, 2019.

This is a publication from the Mobile Autonomous Robotic Systems Lab (MARS Lab), School of Information Science and Technology (SIST) of ShanghaiTech University. For this and other publications from the MARS Lab please visit: https://robotics.shanghaitech.edu.cn/publications

© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

arXiv:1905.09483v2 [cs.RO] 24 Aug 2019


Towards Generation and Evaluation of Comprehensive Mapping Robot Datasets

Hongyu Chen1, Xiting Zhao, Jianwen Luo, Zhijie Yang, Zehao Zhao, Haochuan Wan, Xiaoya Ye, Guangyuan Weng, Zhenpeng He, Tian Dong, and Sören Schwertfeger1

Abstract— This paper presents a fully hardware-synchronized mapping robot with support for a hardware-synchronized external tracking system, for super-precise timing and localization. We also employ a professional, static 3D scanner for ground truth map collection. Three datasets are generated to evaluate the performance of mapping algorithms within a room and between rooms. Based on these datasets we generate maps and trajectory data, which are then fed into evaluation algorithms. The mapping and evaluation procedures are made easily reproducible for maximum comparability. In the end we draw a couple of conclusions about the tested SLAM algorithms.

I. INTRODUCTION

Localization and mapping are essential robotic tasks and are often solved together in a Simultaneous Localization and Mapping (SLAM) system [1]. SLAM systems have to be evaluated regarding their performance, but this is not a trivial task.

Often, ground truth robot paths are used to measure the quality of the SLAM system, since it is assumed that a good localization will result in good maps. In [2] and [3] the ground truth paths are compared with the paths estimated by the SLAM algorithms. A metric for measuring the error of the manually corrected trajectory of datasets is available to the public in [4]. Recently, Zhang and Scaramuzza have provided a tutorial and software for quantitative trajectory evaluation [5].

If ground truth paths are not available, the maps can be used to evaluate the quality of the mapping system. For that, image similarity methods [6] and pixel-level feature detectors [7], [8] can be applied to the maps, but these have their limitations, because maps often contain errors such as structures appearing more than once due to localization errors. Higher-level features like barrels are used for evaluation in [9], [10] and also in 3D maps in [11].

Another approach is to capture the topology of the maps and use the matches for comparison [12]. There are also evaluation methods that do not rely on ground truth data. In [13], suspicious and plausible arrangements of planes in 3D scans are detected and the map is evaluated accordingly.

Often, simulations are used for the evaluation of SLAM systems [14]. A major source of error in SLAM systems is the noise of the sensors. It is thus essential to accurately simulate the noise and errors in the simulation, a task which is not simple. Datasets of 3D sensor measurements and ground truth poses for benchmarking the performance of SLAM algorithms are provided in [15] and [16]. The ground truth information has been obtained using a tracking system and by creating the data in a simulation, respectively.

1 All authors are with ShanghaiTech University, <chenhy3, soerensch>@shanghaitech.edu.cn

Fig. 1. The MARS Mapper robot with its sensors, as used in this paper.


In this paper we propose to use an advanced mapping system with sensors that are hardware-synchronized to an external tracking system to collect data for benchmarking datasets. We believe this approach is a valuable supplement to SLAM evaluation using simulations, because it ensures real sensor noise and real locomotion, vibrations and other factors that are difficult to accurately simulate. Using the tracking system we gather ground truth localization information. But we believe that it is also important to evaluate the mapping performance, especially for visual SLAM. So we also collect ground truth map information using a professional, static 3D scanner (Faro Focus 3D).

We provide three datasets. One is purely within the tracking system, especially also for evaluating mapping performance. Additionally, we have two longer, very similar datasets that start and end in the tracking system. One allows loop closing at the end, the other does not (no overlapping sensor data). Using these datasets we aim to evaluate the loop-closing and scan-matching performance.


Fig. 2. Top view of the ground truth Faro point cloud (480 million points) with the MARS-8 path (green), the MARS-Loop and MARS-NoLoop paths (red), as well as the location of the curtain (blue).

We provide ROS launch files to automatically generate the maps from the datasets and to evaluate them by comparison to ground truth. Our results are thus fully reproducible.

The key contributions of this paper are thus:
• Presentation of a hardware-synchronized advanced mapping robot with external tracking.
• Generation of three datasets for mapping and SLAM evaluation.
• Providing reproducible evaluations of standard mapping software based on the datasets.

This work is work in progress; we will provide more mapping solutions and evaluation systems based on the datasets with the final version of this paper, if it gets accepted.

The remainder of the paper is organized as follows: Section II describes the mapping robot and the other hardware and procedures for dataset generation. In Section III we describe the three datasets we provide. The reproducible evaluation of those datasets is explained in Section IV, followed by the conclusions in Section V.

II. MAPPING SYSTEM

Our mapping system consists of a Clearpath Jackal robot with an upgraded power supply and computer (Intel Core i7-6770k CPU, RAID 0: 3x Samsung 850 EVO 500 GB). The mainboard supports eight independent USB 3.1 ports, which are mainly used for the cameras. The robot collects data from the following sensors:

• Nine 5 MP color cameras (FLIR Grasshopper3 GS3-U3-51S5C-C) with wide-angle lenses (82° × 61°), 10 Hz (four stereo pairs: front, left, right, up; one back-looking camera)

• Two Velodyne HDL-32E 3D laser scanners, 10 Hz (one horizontal, one vertical; both in dual-return mode)

• Xsens MTi-300 IMU, 200 Hz
• Robot odometry
• OptiTrack tracking system (21 Prime 13 cameras, 30 Hz)

We compress the camera images with JPEG quality 90. Due to CPU speed limitations we cannot store much more than 10 Hz for the nine cameras, so we chose to collect the images at the same frequency as the Velodynes. This results in a total storage bandwidth of about 170 MB/s.
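A quick back-of-envelope check makes the 170 MB/s figure plausible. The per-stream sizes below are assumptions for illustration (a 5 MP JPEG at quality 90 is typically around 1.5-2 MB; HDL-32E dual-return output is on the order of 1-2 MB/s), not measured values from the paper:

```python
# Rough sanity check of the reported ~170 MB/s storage bandwidth.
# All per-stream sizes here are assumptions, not values from the paper.
n_cameras, cam_hz = 9, 10
jpeg_mb = 1.8                      # assumed mean 5 MP JPEG (quality 90) in MB
camera_mb_s = n_cameras * cam_hz * jpeg_mb

n_velodynes, velo_mb_s = 2, 1.4    # assumed HDL-32E dual-return data rate
lidar_mb_s = n_velodynes * velo_mb_s

print(f"cameras: {camera_mb_s:.0f} MB/s, lidar: {lidar_mb_s:.1f} MB/s, "
      f"total: {camera_mb_s + lidar_mb_s:.0f} MB/s")   # roughly 165 MB/s
```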

A. Synchronization

Especially with respect to quality evaluation, but also for mapping systems in general, it is important to synchronize all the sensor data. We use an Asus Tinker Board with a quad-core 1.8 GHz ARM Cortex-A17 processor to provide hardware synchronization. The Tinker Board serves as our reference time. It triggers the cameras and the tracking system at 10 Hz and the Xsens IMU at 200 Hz. But hardware triggering of sensors is only half of the job: afterwards, the data from the cameras arrives at the PC in the Jackal robot at different times, due to USB and CPU scheduling issues. We thus need to associate the hardware triggers with the actual data and make use of this information in the software. Thus, for every trigger it generates, the Tinker Board also sends a ROS timestamp to the PC, which is collected in the bagfile. In a post-processing step we then match the sensor data with this timestamp and correct the timestamps of the data. Afterwards, all data (e.g. all images, the tracking data and the IMU information) that was triggered together has exactly the same timestamp.
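To illustrate this post-processing step, here is a minimal Python sketch that rewrites camera timestamps from recorded trigger stamps. It assumes the trigger times arrive as std_msgs/Time messages and uses made-up topic and file names; the authors' actual pipeline may differ:

```python
# Hedged sketch of the timestamp post-processing. The topic names
# (/trigger/stamp, /camera/front_left/image) and bag names are assumptions.
import rosbag

IN_BAG, OUT_BAG = "mars8_raw.bag", "mars8_synced.bag"
TRIGGER_TOPIC = "/trigger/stamp"
CAMERA_TOPIC = "/camera/front_left/image"

# 1) Collect the 10 Hz trigger timestamps sent by the Tinker Board.
with rosbag.Bag(IN_BAG) as bag:
    triggers = [m.data for _, m, _ in bag.read_messages(topics=[TRIGGER_TOPIC])]

# 2) Stamp every camera message with the latest trigger that precedes its
#    arrival time; messages triggered together get identical stamps.
with rosbag.Bag(IN_BAG) as inbag, rosbag.Bag(OUT_BAG, "w") as outbag:
    i = 0
    for topic, msg, t in inbag.read_messages():
        if topic == CAMERA_TOPIC:
            while i + 1 < len(triggers) and triggers[i + 1] <= t:
                i += 1
            msg.header.stamp = triggers[i]
        outbag.write(topic, msg, t)
```

Matching each message to the latest preceding trigger works because the trigger always fires before its data reaches the PC.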

The tracking system is also triggered by the Tinker Board, but it then increases the frequency from 10 Hz to 30 Hz. For that, the robot is physically connected to the tracking system via an Ethernet cable when inside the system's camera view. Before leaving or entering the tracking system we manually (un)plugged this cable, briefly stopping the data collection. To avoid having a second Ethernet cable from the tracking system to the robot, we collect the data of the tracking system (which runs on Windows) on a separate Ubuntu PC and merge the bagfiles later. Before each run, the time of that PC, the Jackal PC and the Tinker Board are updated via NTP from the router of our lab.
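The later merging of the two bagfiles can be done with standard ROS tooling; a minimal sketch, assuming placeholder file names, is a two-way merge of the time-sorted message streams:

```python
# Hedged sketch: merge the robot bag and the tracking-system bag into one,
# ordered by recording time. File names are placeholders.
import heapq
import rosbag

with rosbag.Bag("jackal.bag") as a, rosbag.Bag("tracking.bag") as b, \
     rosbag.Bag("merged.bag", "w") as out:
    # read_messages() yields (topic, msg, t) sorted by t within each bag,
    # so a two-way heap merge keeps the combined stream sorted.
    for topic, msg, t in heapq.merge(a.read_messages(), b.read_messages(),
                                     key=lambda rec: rec[2]):
        out.write(topic, msg, t)
```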

Since the Velodyne is a rotating sensor, it cannot be triggered. Instead, it timestamps its messages using GPS PPS and NMEA data. The Tinker Board provides fake GPS data with its own time to the Velodynes, such that their data arrives at the Jackal PC already with the correct timestamp.
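The "fake GPS" consists of a pulse-per-second signal plus NMEA sentences carrying the board's own clock. As a hedged illustration (the serial and PPS wiring is omitted, and the dummy position is arbitrary), a valid $GPRMC sentence can be generated like this:

```python
# Hedged sketch: emit an NMEA GPRMC sentence carrying the local clock, as a
# board could feed to an HDL-32E in place of a real GPS receiver. The fixed
# position is a dummy value; PPS and serial output are omitted here.
from datetime import datetime, timezone

def nmea_checksum(body: str) -> str:
    """XOR of all characters between '$' and '*'."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return f"{cs:02X}"

def fake_gprmc(now: datetime) -> str:
    """GPRMC sentence with a dummy position and the given UTC time."""
    body = (f"GPRMC,{now:%H%M%S}.00,A,3110.00,N,12130.00,E,"
            f"0.0,0.0,{now:%d%m%y},,,A")
    return f"${body}*{nmea_checksum(body)}"

print(fake_gprmc(datetime.now(timezone.utc)))
```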

B. Calibration

Our MARS Mapper robot (MARS is the acronym of our Mobile Autonomous Robotic Systems Lab) is fully calibrated. Intrinsic calibrations are acquired using well-known methods. The extrinsic calibration of the sensors (i.e. their poses) is obtained by pair-wise calibration of various, also heterogeneous, sensor pairs (4x stereo cameras, 32x non-overlapping cameras, 13x lidar to camera, 1x lidar to lidar, 9x tracking system to camera) and then minimizing the error using g2o [17]. Figure 3 shows a Velodyne scan where the points that are within the field of view of one of the 7 horizontal cameras are colored.
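One way to picture what the graph optimization buys is that the pairwise extrinsics form loops that should compose to the identity; the residual of each loop is the error that g2o distributes over the edges. A small numpy sketch with made-up transforms:

```python
# Hedged sketch: consistency of three pairwise extrinsics around a loop
# (camera -> lidar -> tracking -> camera). The transforms are made-up
# placeholders; the paper minimizes such residuals over all pairs with g2o.
import numpy as np

def transform(yaw_deg, t):
    """Homogeneous 4x4 transform with a yaw rotation and translation t."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

T_cam_lidar = transform(90.0, [0.10, 0.00, 0.30])     # placeholder values
T_lidar_track = transform(-90.5, [0.02, -0.11, 0.40])
# Third edge deliberately perturbed so the loop does not close exactly.
T_track_cam = (np.linalg.inv(T_cam_lidar @ T_lidar_track)
               @ transform(0.7, [0.01, 0, 0]))

# Composing around the loop should give identity; the deviation is the
# error a graph optimizer distributes over the edges.
loop = T_cam_lidar @ T_lidar_track @ T_track_cam
trans_err = np.linalg.norm(loop[:3, 3])
rot_err = np.degrees(np.arccos(np.clip((np.trace(loop[:3, :3]) - 1) / 2, -1, 1)))
print(f"loop residual: {trans_err * 100:.1f} cm, {rot_err:.2f} deg")
```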

Because we calibrated the sensors with the tracking system, we can directly use the tracking system results as the pose of the robot. The tracking system reports an error of the collected poses of less than 1.5 mm.

III. DATASETS

The datasets were collected in the Mobile Autonomous Robotic Systems Lab (MARS Lab) of ShanghaiTech University, and thus have MARS in their name:

• MARS-8: A short (23 m) figure eight driven by the mapping robot with continuous tracking information. For basic SLAM evaluation and evaluation of mapping performance.

Fig. 3. A 3D lidar scan colored by all 7 horizontal cameras. The transformations between each camera and the 3D lidar are acquired from the global optimization result. The green points represent areas where no camera overlaps with the point cloud.


• MARS-Loop: A medium-length (77 m) mapping run, starting in the tracking system in the MARS Lab, leaving the lab and re-entering it through a different door, finally entering the tracking system again and finishing at the start pose. For evaluation of basic loop-closing performance.

• MARS-NoLoop: The MARS Lab is divided into two parts by two curtains (10 cm apart; along the center of the tracking system). The robot follows the same path as MARS-Loop, except that it stops a little earlier (because the curtains are in the way). The robot starts and ends in the same tracking system. No loop closing is possible between the start and end of the dataset, because there is almost no overlap between the areas.

Figure 2 shows the paths of the robot in the different datasets: green for MARS-8 and red for MARS-Loop and MARS-NoLoop. The approximate paths we followed are also marked on the ground and are thus visible in some of the collected camera images (black for MARS-8, and for MARS-Loop where they overlap; white for MARS-Loop). MARS-NoLoop follows MARS-Loop, except for stopping earlier. Almost nothing in the environment was changed between the robot data collections or for the Faro scans.

Figure 2 actually shows the complete point cloud (480 million points) from the 18 FARO scans we collected (each about 27 million points). We used the FARO Scene software to register the scans. It reported an average error between the scan points of 1.2 mm. This is an excellent value and much smaller than the expected sensor noise. The FARO data can thus serve as ground truth for map comparison. The approximate positions of the Faro scans are marked with red crosses on the ground. Most of the scans were taken at a height similar to the horizontal Velodyne (61 cm). Figure 4 shows the MARS Lab with the markings on the floor, the Faro scanner and the robot. It also shows the curtain for MARS-NoLoop.

We also placed several checkerboards in the lab.


Fig. 4. The robot and the FARO scanner in the MARS Lab on top. Below, the same area with the curtain (made with tarp) for map MARS-NoLoop.

Additionally, we have many AprilTags distributed on the ceiling and, in the MARS Lab, also on the walls. In the future we plan to evaluate how well those can be used for localization evaluation of SLAM algorithms. For good measure we also placed other cool robots of the MARS Lab as well as a small living-room arrangement with sofa, plants and TV in the scene.

Dataset statistics:
• MARS-8: 16.4 GB; 99 seconds
• MARS-Loop: 50.7 GB; 290 seconds
• MARS-NoLoop: 54.8 GB; 315 seconds
• MARS-8-Sample: 500 MB; 3 seconds

The datasets are available online 1. We also provide a very short and small sample dataset from within MARS-8.

IV. EVALUATION

Scientific results should be reproducible. Since we provide the dataset, we also want to give the reader the possibility to re-create the exact same map (barring differences caused by randomized SLAM algorithms). We are thus providing ROS launch files (start scripts) that generate the maps and other needed information (e.g. the path estimated by the SLAM algorithm). Furthermore, we also want to make it as easy as possible for the user to then evaluate the result, so we provide the corresponding tools on the dataset website.

1 https://robotics.shanghaitech.edu.cn/datasets/MARS-Dataset

We apply the following mapping methods to our dataset:
• 2D grid mapping (converting the horizontal Velodyne scan into a 2D LRF message, as sketched below; 5 cm resolution):
  – Hector Mapping [18]
  – Cartographer [19]
  – GMapping [20]
• 3D point cloud mapping (with the horizontal Velodyne):
  – BLAM 2
  – BLAM with both Velodynes
  – Cartographer (for final version)
• Visual SLAM:
  – ORB2 [21]
  – RTAB-Map mono [22] (for final version)
  – RTAB-Map stereo (for final version)
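The 2D conversion mentioned above flattens a horizontal slice of the 3D scan into range bins by azimuth. The following sketch shows the idea on a plain (N, 3) point array; the slice height and angular resolution are assumptions, and the actual pipeline uses a ROS converter node on PointCloud2 messages:

```python
# Hedged sketch of converting a 3D scan into 2D laser-scan ranges.
# Slice height and bin count are assumptions, not the pipeline's settings.
import numpy as np

def cloud_to_scan(points, z_min=-0.1, z_max=0.1, n_bins=720):
    """points: (N, 3) array in the sensor frame -> (n_bins,) range array."""
    pts = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    angles = np.arctan2(pts[:, 1], pts[:, 0])                  # [-pi, pi]
    ranges = np.hypot(pts[:, 0], pts[:, 1])
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    scan = np.full(n_bins, np.inf)
    # Keep the closest return per angular bin, like a 2D laser range finder.
    np.minimum.at(scan, bins, ranges)
    return scan

scan = cloud_to_scan(np.random.uniform(-5, 5, (10000, 3)))
print(np.isfinite(scan).sum(), "of", scan.size, "bins filled")
```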

If needed, we modify those algorithms to output the time-stamped path data as a text file. The trajectory estimated by the SLAM algorithms is then compared to the trajectory of the tracking system. Of course, this only works for the parts of the datasets that were collected inside the tracking system; other parts are omitted (and "jumped over"). We use the software provided by [5] for the evaluation.
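Conceptually, the comparison associates each estimated pose with the tracker pose of the nearest timestamp, drops samples without a nearby ground truth stamp (the parts outside the tracking system) and computes the RMS translation error. A simplified stand-in for the toolbox of [5], without the rigid-body alignment a full evaluation would also perform:

```python
# Hedged sketch of the trajectory comparison: nearest-timestamp association
# plus RMS translation error. Inputs are (N, 4) arrays of [t, x, y, z];
# the max_dt gate skips segments without ground truth coverage.
import numpy as np

def ate_rmse(est, gt, max_dt=0.05):
    idx = np.searchsorted(gt[:, 0], est[:, 0])
    idx = np.clip(idx, 1, len(gt) - 1)
    # pick the closer of the two neighbouring ground-truth stamps
    left_closer = (est[:, 0] - gt[idx - 1, 0]) < (gt[idx, 0] - est[:, 0])
    idx = idx - left_closer
    ok = np.abs(gt[idx, 0] - est[:, 0]) <= max_dt  # drop untracked samples
    err = np.linalg.norm(est[ok, 1:] - gt[idx[ok], 1:], axis=1)
    return np.sqrt(np.mean(err ** 2))

t = np.linspace(0, 99, 1000)
gt = np.c_[t, np.cos(t), np.sin(t), np.zeros_like(t)]
est = gt + np.r_[0, 0.05, 0.05, 0][None, :]        # constant ~7 cm offset
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")      # ~0.071
```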

Figure 5 shows the 2D grid maps from Hector Mapping and Cartographer. We see that both algorithms have problems when coming back into the MARS Lab and no loop closing is possible. Cartographer even has a broken map in MARS-Loop. Figure 6 shows the trajectory errors for Hector (top) and Cartographer (bottom). Again, note the jump from the bottom right corner to the top left in MARS-Loop and MARS-NoLoop: this is where the robot left the tracking system and later re-entered it. We can see that the shown error correlates nicely with the perceived map quality of Figure 5. Figure 10 quantifies the error of Figure 6 in a diagram. It shows absolute errors of 10 cm for Hector on MARS-8 and MARS-Loop, but values around 1 m for the broken MARS-NoLoop.

Figure 7 shows the 3D maps generated with BLAM and the horizontal Velodyne. For comparison we include Figure 8, which is generated using the localization estimate from the horizontal BLAM, but only rendering the vertical Velodyne. In the future we will color all the points using the cameras and then do colored point cloud mapping.

We can make use of the trajectory evaluation shown in Figure 9. We see that the error is low, but looking at Figure 10 we see that it is double the value of the good Hector maps. The point clouds in Figure 7 are good and nicely show the double curtain in MARS-NoLoop.

We have also employed CloudCompare 3 for quality measurement. We register the Faro point cloud with the robot point cloud and then calculate the RMS error. The result is an RMS of 0.084 with a theoretical overlap of 90%. Figure 12 shows the two point clouds overlaid.
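The cloud-to-cloud RMS reported by CloudCompare is essentially the root mean square of nearest-neighbor distances from the map points to the ground truth cloud. A hedged sketch of the same measure with a KD-tree, using random stand-in data instead of the registered FARO and BLAM clouds:

```python
# Hedged sketch of a cloud-to-cloud RMS: for each point of the robot map,
# take the distance to its nearest ground-truth point. Real use would load
# the registered FARO and BLAM point clouds instead of this noise.
import numpy as np
from scipy.spatial import cKDTree

gt_cloud = np.random.uniform(0, 10, (100000, 3))    # stand-in for FARO scan
map_cloud = gt_cloud[::10] + np.random.normal(0, 0.08, (10000, 3))

dists, _ = cKDTree(gt_cloud).query(map_cloud)       # nearest-neighbour dists
rms = np.sqrt(np.mean(dists ** 2))
print(f"cloud-to-cloud RMS: {rms:.3f} m")
```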

Finally, we see the results of visual SLAM using just one camera (forward-looking, on the left side) in Figure 11. It also shows the feature map with camera poses and pose graph. The error shown is the biggest compared to the laser-based SLAM algorithms. We do not show results for MARS-NoLoop, because ORB2 already lost track in the other lab.

2 https://github.com/erik-nelson/blam
3 http://cloudcompare.org/


Fig. 5. The 2D grid maps created by Hector Mapping (top) and Cartographer (bottom), on MARS-8, MARS-Loop and MARS-NoLoop, respectively.

Fig. 6. The trajectories of the robot compared to the tracking system trajectories. On top, Hector Mapping on MARS-8, MARS-Loop and MARS-NoLoop, respectively; Cartographer below.


Fig. 7. 3D maps generated by BLAM using the horizontal Velodyne for MARS-8 (left), MARS-Loop (middle) and MARS-NoLoop (right).

Fig. 8. 3D maps generated by BLAM, rendering only the vertical Velodyne for MARS-8 (left), MARS-Loop (middle) and MARS-NoLoop (right).

Fig. 9. Comparison of ground truth trajectory and BLAM for MARS-8, MARS-Loop and MARS-NoLoop, respectively.


We will use other visual SLAM algorithms on the dataset in the future, in the hope that they will perform better. Especially when using all 9 cameras we hope to see much improved results. Another option is to explicitly make use of the four stereo camera systems.

V. CONCLUSIONS

In this paper we have presented our contributions in three areas: Firstly, we have described a fully hardware-synchronized advanced mapping robot for research on laser-based and visual SLAM. Secondly, we have collected three datasets for SLAM evaluation at short ranges. Thirdly, we provided repeatable evaluation procedures and compared a number of 2D, 3D and visual Simultaneous Localization and Mapping algorithms with each other, using our datasets. The results confirm the intuition that loop closures reduce map error. We are also able to compare laser-based SLAM algorithms with visual SLAM algorithms and conclude that, at least for our selection of algorithms, the laser-based 3D solution outperforms visual SLAM.

This project is still ongoing. In the final version of this paper we hope to include a few more mapping algorithms (Cartographer 3D, 3D SLAM including the vertical Velodyne, other visual SLAM algorithms). We will also improve the evaluation by employing additional algorithms.


Fig. 10. The translation error in all three axes for the part of the trajectories covered by the tracking system. On top, Hector Mapping with MARS-8 (left), MARS-Loop (middle) and MARS-NoLoop (right); Cartographer below; BLAM at the bottom.

Fig. 11. Results for ORB2 visual SLAM on MARS-8 (left) and MARS-Loop (right). On the very right is the ORB2 feature cloud and trajectory for MARS-8.

REFERENCES

[1] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016.

[2] O. Wulf, A. Nüchter, J. Hertzberg, and B. Wagner, "Ground truth evaluation of large urban 6D SLAM," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2007, pp. 650–657.

[3] R. Kümmerle, B. Steder, C. Dornhege, M. Ruhnke, G. Grisetti, C. Stachniss, and A. Kleiner, "On measuring the accuracy of SLAM algorithms," Autonomous Robots, vol. 27, no. 4, pp. 387–407, 2009.

[4] W. Burgard, C. Stachniss, G. Grisetti, B. Steder, R. Kümmerle, C. Dornhege, M. Ruhnke, A. Kleiner, and J. Tardós, "A comparison of SLAM algorithms based on a graph of relations," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009, pp. 2089–2095.

[5] Z. Zhang and D. Scaramuzza, "A tutorial on quantitative trajectory evaluation for visual(-inertial) odometry," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 7244–7251.

[6] I. Varsadan, A. Birk, and M. Pfingsthorn, "Determining map quality through an image similarity metric," in RoboCup 2008: Robot World Cup XII, Lecture Notes in Artificial Intelligence (LNAI), L. Iocchi, H. Matsubara, A. Weitzenfeld, and C. Zhou, Eds. Springer, 2009, pp. 355–365.

[7] J. Pellenz and D. Paulus, "Mapping and map scoring at the RoboCupRescue competition," Quantitative Performance Evaluation of Navigation Solutions for Mobile Robots (RSS 2008, Workshop CD), 2008.

[8] R. Lakaemper and N. Adluru, "Using virtual scans for improved mapping and evaluation," Autonomous Robots, vol. 27, no. 4, pp. 431–448, 2009.

[9] S. Schwertfeger, A. Jacoff, C. Scrapper, J. Pellenz, and A. Kleiner, "Evaluation of maps using fixed shapes: The fiducial map metric," in Proceedings of PerMIS, 2010.

[10] S. Schwertfeger, A. Jacoff, J. Pellenz, and A. Birk, "Using a fiducial map metric for assessing map quality in the context of RoboCup Rescue," in International Workshop on Safety, Security, and Rescue Robotics (SSRR). IEEE Press, 2011.

[11] S. Schwertfeger and A. Birk, "Using fiducials in 3D map evaluation," in IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE Press, 2015.

[12] ——, "Map evaluation using matched topology graphs," Autonomous Robots, pp. 1–27, 2015. [Online]. Available: http://dx.doi.org/10.1007/s10514-015-9493-5

[13] M. Chandran-Ramesh and P. Newman, "Assessing map quality using conditional random fields," in Field and Service Robotics, Springer Tracts in Advanced Robotics, C. Laugier and R. Siegwart, Eds. Springer, 2008.

[14] C. Scrapper, R. Madhavan, and S. Balakirsky, "Stable navigation solutions for robots in complex environments," in IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR), 2007, pp. 1–6.


Fig. 12. Faro ground truth point cloud overlaid with BLAM MARS-Loop in CloudCompare.


[15] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of RGB-D SLAM systems," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2012, pp. 573–580.

[16] A. Handa, T. Whelan, J. McDonald, and A. Davison, "A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM," in IEEE International Conference on Robotics and Automation (ICRA), May 2014, pp. 1524–1531.

[17] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, "g2o: A general framework for graph optimization," in 2011 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011, pp. 3607–3613.

[18] S. Kohlbrecher, J. Meyer, O. von Stryk, and U. Klingauf, "A flexible and scalable SLAM system with full 3D motion estimation," in Proc. IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). IEEE, November 2011.

[19] W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-time loop closure in 2D lidar SLAM," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 1271–1278.

[20] G. Grisetti, C. Stachniss, W. Burgard et al., "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34–46, 2007.

[21] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[22] M. Labbé and F. Michaud, "RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation," Journal of Field Robotics, 2018.