


Mechatronics 23 (2013) 727–737


Technical note

Self-localisation of indoor mobile robots using multi-hypotheses and a matching algorithm

0957-4158/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.mechatronics.2013.07.006

* Corresponding author. Tel.: +351 916871891.
E-mail addresses: [email protected] (M. Pinto), [email protected] (H. Sobreira), [email protected] (A. Paulo Moreira), [email protected] (H. Mendonça), [email protected] (A. Matos).

Miguel Pinto*, Héber Sobreira, A. Paulo Moreira, Hélio Mendonça, Aníbal Matos
INESC TEC – INESC Technology and Science (formerly INESC Porto), Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal

Article info

Article history:
Received 1 August 2012
Accepted 16 July 2013
Available online 17 August 2013

Keywords:
Robot localisation
Matching
Multi-hypotheses
Artificial vision
Tilting Laser Range Finder

Abstract

This paper proposes a new, fast and computationally lightweight methodology to pinpoint a robot in a structured scenario.

The localisation algorithm performs a tracking routine to pinpoint the robot's pose as it moves in a known map, without the need to prepare the environment with artificial landmarks or beacons. To perform such a tracking routine, it is necessary to know the initial position of the vehicle. This paper describes the tracking routine and presents a solution to pinpoint that initial position in an autonomous way, using a multi-hypotheses strategy.

This paper presents experimental results on the performance of the proposed method applied in two different scenarios: (1) in the Middle Size Soccer Robotic League (MSL), using artificial vision data from an omnidirectional robot; and (2) in indoor environments, using 3D data from a tilting Laser Range Finder on a differential drive robot (called RobVigil).

This paper presents results comparing the proposed methodology with an Industrial Positioning System (the Sick NAV350), commonly used to locate Autonomous Guided Vehicles (AGVs) with a high degree of accuracy in industrial environments.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

In the majority of mobile robotic applications, the robots need to perform continuous, uninterrupted tasks, roles or actions, which require them to navigate autonomously. Therefore, to know where it is supposed to go when performing an autonomous routine, the robot must know its own location. In this paper, the problem of estimating the robot's location consists of computing the robot's pose: the 2D position (x and y coordinates) and the yaw orientation (θ angle) relative to the world referential.

Some of the challenges that appear when estimating the vehicle's pose are: (1) performing the localisation without the need to prepare the environment with artificial landmarks or beacons; (2) operating in dynamic scenarios, where people and objects cross the robot's navigation area; and (3) taking advantage of dead-reckoning sensors (their high frequency rate), particularly odometry, despite the fact that they lead to an error which increases over time. This paper presents a robust and fast localisation algorithm which takes these issues into account.

Most localisation algorithms need an initial estimate of the vehicle's pose, close to the actual pose, and only then are they capable of performing their function in an autonomous way, as exemplified by the Particle Filters [1] or by the works previously published by the authors of this paper [2,3].

In the case of the Global Positioning System (GPS), the initial position of the vehicle is obtained by triangulation, but it is only available outdoors. In indoor cases, as described here, using artificial landmarks or beacons to compute the initial pose of the vehicle, by triangulation or trilateration, makes it necessary to prepare the environment. Sometimes it is not possible or convenient to equip the indoor environment with artificial landmarks or beacons, since in some situations this becomes impractical, both aesthetically and functionally.

Therefore, the fundamental motivation for the work presented in this paper was to develop a fast localisation algorithm to find the initial position of the vehicle. The algorithm can be used online and in short time cycles, and is capable of operating in a robust way in dynamic and diverse indoor scenarios, without having to use artificial landmarks or beacons.

The self-localisation method described in this paper is composed of two steps: the Initial Position Computation and the Position Tracking. The Initial Position Computation is a global localisation step, which autonomously computes the initial estimate of the vehicle's pose in the world referential, with no knowledge about the vehicle's previous state. This initial estimate is used to start the next step: the Position Tracking. In this phase, the vehicle's pose is tracked and computed, also in the world referential, using the vehicle's odometry and the acquired data on the environment. These two steps are explained in detail in Section 5.

The localisation method was applied in two different scenarios with different data: (1) the RoboCup Middle Size Soccer Robotic League (MSL), in the robots of the 5DPO team, of which the authors are members, using 2D data from artificial vision; and (2) indoor environments, with the RobVigil surveillance robot, using 3D data provided by a tilting Laser Range Finder.

Therefore, the fundamental contributions of this paper are the initial computation of the vehicle's pose and the adaptation of the computationally light matching algorithm used in the robots of the 5DPO team with two-dimensional data, so that it can be used in the RobVigil with three-dimensional data acquired by a tilting Laser Range Finder (LRF), maintaining the low computational requirements.

The document is organised as follows: the literature review is presented in Section 2; the map matching strategy is described in Section 3; the Perfect Match algorithm is explained in Section 4; the localisation algorithm that uses it is described in Section 5; and experimental results are presented in Section 6. The last section includes a discussion and conclusions about this work.

2. Literature review

Within civil, service and industrial contexts, the increasing autonomy in robotics has made it possible to create robots which are applicable in our everyday lives. The PR2 robot [4], which is equipped with a planar LRF at the base, uses a previously built 2D map and a Particle Filter during the localisation process. The Roomba is a robot that cleans house floors autonomously, while the Rovio is a small omnidirectional robot which supervises house interiors and whose localisation relies on beacons. Examples of companies that successfully develop Autonomous Guided Vehicles (AGVs) applicable in industry are AGV Electronics and Kiva Systems.

Sensors and techniques used in the localisation and navigation of robots are described in [5]. They are divided into absolute and relative localisation. Dead-reckoning sensors are relative localisation sensors, whose error increases over time. Examples are: odometry (the most common), accelerometers, gyroscopes, inertial navigation systems (INS), inertial measurement units (IMU) and Doppler-effect sensors (DVL).

The dead-reckoning sensors are commonly fused with more complex techniques or sensors, using the Kalman or Particle Filters [6]. Examples of more complex techniques or sensors, which provide information of higher quality and quantity, include infrared sensors, ultrasound sensors, Laser Range Finders, artificial vision and techniques based on beacons.

The absolute localisation sensors and techniques provide information on the robot's position in the world frame. Examples of absolute localisation sensors include attitude sensors, digital compasses, GPS and passive or active beacons, such as acoustic beacons [5]. The two essential localisation techniques based on active or passive beacons are triangulation and trilateration [7]. However, these methods require previous environment preparation.

The algorithms used for locating mobile robots can be divided into two large areas: matching algorithms and Simultaneous Localisation and Mapping (SLAM) algorithms.

There are matching algorithms that require prior knowledge of the navigation area (examples are [3,8]). Another example is the Perfect Match described by Lauer et al. [9], which is used in RoboCup's Robotic Soccer Middle Size League (MSL). There are other types of matching algorithms which compute the overlapping zone between consecutive observations to obtain the vehicle's displacement. Examples are the Iterative Closest Point (ICP) family of algorithms [10]. In addition to the computational time spent, the ICP approach has another problem: sometimes there is insufficient overlap between two consecutive laser scans, and therefore it is hard to find a correct solution.

In [11,12] it is possible to find solutions capable of computing the displacement between consecutively acquired images, using stereo-vision, by matching those images. The algorithms described in those papers represent successful results of visual odometry, which proved to be more accurate than the vehicle's encoder odometry in outdoor scenarios. The survey by Matthies et al. [13] is an important study describing the evolution, improvements and opportunities of computer vision over the last four decades. Chen et al. [14] present a survey describing the most recent developments in artificial vision, taking into account, among other applications, localisation and mapping.

Despite the encouraging results obtained in the works referred to in the previous paragraph, the work presented in this paper does not follow the same approach, since the processing of artificial images has heavy computational requirements.

The most common solutions for the SLAM problem are the Extended Kalman Filter (EKF-SLAM), exemplified in [15–17], and the FastSLAM or Rao-Blackwellized Particle Filters, exemplified in [18–20]. The EKF-SLAM uses a single state matrix representing the vehicle state and the feature map, which grows whenever a new feature is found. Contrarily, the FastSLAM can be seen as a robot and a collection of N landmark estimation problems. The computational complexity of EKF-SLAM is O(N²), while that of FastSLAM is O(M log N), with M particles and N landmarks.

3. Map matching

The 5DPO robots participate in the games of RoboCup's Robotic Soccer Middle Size League (MSL). They use a camera with a 360° field of view (Fig. 1) as the sensing method to see the surroundings. An initial manual calibration of the colours is performed on the camera's image: the HSV colours of the field, the lines, the ball and the obstacles are defined manually and kept in a look-up table. After that, using that look-up table, at each acquired image it is possible to apply the segmentation algorithm and make the correspondence between each pixel and the respective colour (Fig. 2).

In the segmented image, it is possible to detect green to white transitions along the camera's 360° field of view. The resulting detected transitions (shown in Fig. 1) are transformed into 2D coordinate points. These points are helpful to locate the robot and make it possible to apply the Perfect Match algorithm using 2D data, as described in [9].
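
Where the class labels and hue ranges below are illustrative assumptions (the team's actual calibration lives in the manually built look-up table described above), the following Python sketch shows LUT-based segmentation and the green-to-white transition detection along one radial scan line:

```python
# Hedged sketch of LUT segmentation; hue-only for brevity, whereas the
# real look-up table indexes all three HSV channels.
import numpy as np

FIELD, LINE, BALL, OBSTACLE, UNKNOWN = range(5)

def build_lut(hue_ranges):
    """hue_ranges: {label: (h_min, h_max)} - assumed calibration values."""
    lut = np.full(256, UNKNOWN, dtype=np.uint8)
    for label, (h_min, h_max) in hue_ranges.items():
        lut[h_min:h_max + 1] = label
    return lut

def segment(hue_image, lut):
    # One indexed lookup classifies every pixel at once.
    return lut[hue_image]

def green_to_white_transitions(labels_along_ray):
    # Indices where a field (green) pixel is followed by a line (white) pixel.
    ray = np.asarray(labels_along_ray)
    return np.where((ray[:-1] == FIELD) & (ray[1:] == LINE))[0] + 1
```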

In the MSL, the dimensions of the map (the soccer field) are known. This knowledge makes it possible to calculate three fundamental matrices used in the Perfect Match algorithm, as explained in Section 4: the distance matrix and the gradient matrices along the x and y directions. These matrices are stored in memory to be used in the localisation process. The use of these pre-computed matrices makes the localisation algorithm extremely fast, and therefore it can be used online.

Fig. 3 shows a robot of the 5DPO team and the soccer field in the 5DPO lab, while Fig. 4 shows the distance matrix of the MSL field map. In this first scenario, the 360° vision data are compared with the known map (distance and gradient matrices) to perform the matching used in the localisation, as described in Section 4.
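
As a minimal sketch of this pre-computation, assuming the field map is stored as a boolean occupancy grid of the line cells, the distance matrix and the two gradient matrices can be built with a distance transform followed by a Sobel filter (scipy stands in here for whatever implementation the authors used):

```python
import numpy as np
from scipy import ndimage

def precompute_matrices(occupancy):
    """occupancy: boolean grid, True where a field line (obstacle) cell lies."""
    # Distance from every free cell to the closest occupied cell.
    dmap = ndimage.distance_transform_edt(~occupancy)
    # Unnormalised gradients of the distance matrix along x and y.
    grad_x = ndimage.sobel(dmap, axis=0)
    grad_y = ndimage.sobel(dmap, axis=1)
    return dmap, grad_x, grad_y
```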

The localisation methodology was also applied in a second robot called RobVigil. The RobVigil is a differential drive robot, which navigates in indoor buildings using three-dimensional data acquired by a tilting LRF.



Fig. 1. 360° vision. Green to white transitions are detected and used in the localisation process. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 2. Segmented image: field (green), field lines (white), obstacles (black) and the ball (orange/circular form).

Fig. 3. Robotic soccer field/robot from the 5DPO team.

Fig. 4. Distance matrix of the Robotic Soccer Field.

Fig. 5. RobVigil robot equipped with the observation module.


In Fig. 5 it is possible to see the RobVigil. The tilting LRF is shown in Fig. 6.

The LRF (Hokuyo URG-04LX-UG01) was used to perceive the environment. In order to obtain a three-dimensional sensor, a rotating platform was created, based on the DC servo motor AX-12 Dynamixel Bioloid. The knowledge of the relation between the laser and vehicle frames (including the servo motor's rotation angle and speed, acquired at each cycle) makes it possible to apply a set of homogeneous transformations (rotations and translations), which transform the 2D points in the laser frame into 3D points in the vehicle frame, as sketched below.
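
The sketch below illustrates that chain of transformations; the tilt axis and the mounting offset are illustrative assumptions, not the RobVigil calibration:

```python
import numpy as np

# Laser origin expressed in the vehicle frame (placeholder values).
LASER_OFFSET = np.array([0.20, 0.0, 1.35])

def scan_to_vehicle_frame(ranges, beam_angles, tilt):
    """ranges, beam_angles: arrays for one 2D scan; tilt: servo angle (rad)."""
    # 2D points in the laser plane.
    x = ranges * np.cos(beam_angles)
    y = ranges * np.sin(beam_angles)
    pts = np.stack([x, y, np.zeros_like(x)], axis=1)
    # Rotate the laser plane about the (assumed) y axis by the tilt angle...
    ct, st = np.cos(tilt), np.sin(tilt)
    R = np.array([[ct, 0.0, st],
                  [0.0, 1.0, 0.0],
                  [-st, 0.0, ct]])
    # ...then translate: together, one homogeneous transformation per scan.
    return pts @ R.T + LASER_OFFSET
```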

The upper side of a building, including vertical walls and the ceiling above the normal height of a person (the headroom above 1.8 m), can be seen as a static scenario without moving obstacles, which remains unaltered for long periods of time. If the tilting LRF scans this static scenario, the data (in the 3D space) can be used to locate the vehicle (in the 2D space – x, y and θ), even in dynamic scenarios where plenty of people move in the navigation area.

If a three-dimensional map of the static scenario has been previously created, the localisation algorithm used by the 5DPO robots can be adapted to use the 3D points of the tilting LRF.

As the map of the navigation area is not known in advance in this situation, contrary to the MSL case, the Extended Kalman Filter is used to solve the Simultaneous Localisation and Mapping (SLAM) problem offline and only once. The aim is to obtain the occupancy of the building in the three-dimensional space, as shown in Fig. 7. The SLAM solution is explained in the work published by the authors in [2].

The distance transform is applied to the 3D occupancy grid(Fig. 7) of the building in order to create the distance matrix.


Fig. 6. Observation Module developed (rotating 2D LRF).

Fig. 7. Occupancy grid.

Fig. 8. Graphical representation of a slice of the distance matrix.


Furthermore, the Sobel filter, again in the 3D space, is applied toobtain the gradient matrices, in both the x and y directions.

In Fig. 7, the red points represent the upper side of the building (with z coordinate above 1.8 m), which is considered a more static scenario to be used during the vehicle's localisation. Fig. 8 shows a slice of the 3D distance matrix at a z coordinate of 1.8 m.

As previously mentioned, these matrices are in the three-dimensional space and stored in memory to be used during the localisation methodology, in the vehicle's normal operation. Therefore, in this second scenario, the data from the tilting LRF are compared, at each time cycle, with the 3D map (distance and gradient matrices), in order to perform self-localisation.

4. The Perfect Match algorithm

The Perfect Match algorithm used during the Initial Position Computation and Position Tracking steps is applicable not only in the robots of the 5DPO team, but also in the RobVigil. In this section, the Perfect Match algorithm is described for the case of the RobVigil, with 3D data acquired by the tilting LRF.

The distance matrix contains, at each coordinate point, the distance to the closest occupied cell in the 3D occupancy grid – which can be, for instance, an obstacle, a wall, a door or the ceiling. The gradient matrices represent the variation of the distance matrix in the x and y directions.

The developed Perfect Match takes the previous vehicle state and runs the following steps: (1) error gradient calculation; (2) the optimisation routine Resilient Back-Propagation (RPROP); and (3) second derivative computation for error estimation. The first two steps are iterated until the maximum number of iterations is reached. The last step is performed only once, after the result of the RPROP is obtained. The maximum number of iterations was obtained empirically, as described in [2].

Let us consider PList, the list of points of a tilting Laser Range Finder scan in the world referential, composed of points PList(i) = (xi, yi, zi). The distance matrix dmap, stored in memory, is used to compute the matching error. The matching error is a measure of how far the current estimate of the vehicle's pose is from the true pose; that is, it measures the distance from the list of points PList to the obstacles in the 3D map.

Therefore, using the RPROP optimisation algorithm, the Perfect Match tries to minimise this matching error. The matching error is computed using the cost value of the list of points PList:

E = \sum_{i=1}^{N} E_i, \qquad E_i = 1 - \frac{L_c^2}{L_c^2 + d_i^2} \qquad (1)

in which di is the value of the distance matrix corresponding to the point PList(i) with coordinates (xi, yi, zi), di = dmap(xi, yi, zi). The value Ei is the cost for the point PList(i), and N is the number of points in PList.

The cost function Ei was designed to behave like the quadratic error function for small values of di, while neglecting points with large values of di (outliers). The parameter Lc is an adjustable parameter used to discard outlier points from PList, that is, points with a high distance matrix value di. In order to have a cost function which meets the condition 0.5 ≤ Ei ≤ 1 when di ≥ 1 m, the value of Lc used in this work was 1 m.
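
As a sketch, the cost of Eq. (1) can be evaluated with a direct lookup into the stored distance matrix; the nearest-cell indexing below is a simplification (no interpolation or bounds checking):

```python
import numpy as np

L_C = 1.0  # Lc = 1 m, the value used in this work

def matching_error(dmap, pts, cell_size):
    """dmap: pre-computed 3D distance matrix; pts: (N, 3) scan points (world frame)."""
    idx = np.round(pts / cell_size).astype(int)   # world coordinates -> grid indices
    d = dmap[idx[:, 0], idx[:, 1], idx[:, 2]]     # di for every point of PList
    e = 1.0 - L_C**2 / (L_C**2 + d**2)            # Ei per point, Eq. (1)
    return e.sum()                                # E
```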

The gradient of the cost function at iteration n of the RPROP algorithm, for the state X_{Match}, is given by the following expression:

\frac{\partial E}{\partial X_{Match}}(n) = \sum_{i=1}^{N} \frac{2 L_c^2 d_i}{(L_c^2 + d_i^2)^2} \left[ \frac{\partial d_i}{\partial x_{Match}} \;\; \frac{\partial d_i}{\partial y_{Match}} \;\; \frac{\partial d_i}{\partial \theta_{Match}} \right]^T

with \partial d_i / \partial x_{Match} = \nabla_x(x_i, y_i, z_i), \partial d_i / \partial y_{Match} = \nabla_y(x_i, y_i, z_i), and

\frac{\partial d_i}{\partial \theta_{Match}} = \begin{bmatrix} \nabla_x(x_i, y_i, z_i) \\ \nabla_y(x_i, y_i, z_i) \end{bmatrix}^T \begin{bmatrix} -\sin\theta_{Match} & -\cos\theta_{Match} \\ \cos\theta_{Match} & -\sin\theta_{Match} \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} \qquad (2)

in which ∇x(xi, yi, zi) and ∇y(xi, yi, zi) are the gradient values, at the position (xi, yi, zi), of the pre-computed gradient matrices.

After computing the gradient matrix, it is possible to apply theRPROP algorithm to each vehicle state variable. The algorithmtakes the previous state of the vehicle and estimates a new positionfor the vehicle, which is used in the next iteration.

The RPROP algorithm uses the current gradient, \partial E / \partial X_{Match}(n), and the sign of the product \partial E / \partial X_{Match}(n) \cdot \partial E / \partial X_{Match}(n-1) to test the convergence direction of the cost function E. This convergence direction is used to find the vehicle state (x_{Match}, y_{Match} and \theta_{Match}) whose cost function E corresponds to a local minimum. For more details on the RPROP algorithm, see [2].
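
The text defers the RPROP details to [2]; a minimal sketch of the standard RPROP update rule for the three pose variables follows, with the usual default acceleration factors and step bounds (assumed here, not taken from [2]):

```python
import numpy as np

def rprop_step(x, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-4, step_max=0.1):
    """x, grad, prev_grad, step: shape-(3,) arrays for (x, y, theta)."""
    sign_change = grad * prev_grad
    # Same gradient sign: the minimum is still ahead, so accelerate.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    # Sign flip: the step jumped over the minimum, so back off.
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # skip one update after a flip
    x = x - np.sign(grad) * step
    return x, grad, step
```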

The Perfect Match algorithm is very fast due to the use of pre-computed matrices about the navigation environment (the gradient matrices ∇x and ∇y and the distance matrix dmap), which are stored in memory.

To complete the Perfect Match stage, it is necessary to calculate the second derivative. The analysis of the second derivative makes it possible to obtain an error classification for the Perfect Match's solution.

For a given set of 3D LRF data, if the cost function E has a distinctive minimum, the second derivative \partial^2 E / \partial X_{Match}^2(n) is high, representing situations where the confidence in the obtained solution is higher. Conversely, when the cost function E for those points is flat and there is no distinctive minimum, the second derivative is low, representing cases where the confidence in the solution is lower. The covariance matrix of the Perfect Match is given as follows:

P_{Match} = \mathrm{diag}\!\left( \frac{K_{XY}}{\partial^2 E / \partial x_{Match}^2}, \; \frac{K_{XY}}{\partial^2 E / \partial y_{Match}^2}, \; \frac{K_\theta}{\partial^2 E / \partial \theta_{Match}^2} \right) \qquad (3)

in which diag(·,·,·) denotes a 3 × 3 diagonal matrix. The parameters in (3), K_{XY} and K_\theta, are normalisation values obtained empirically. The algorithm was tested with values of K_{XY} and K_\theta in the interval [10^{-6}, 10^{-1}]. The best performance (smooth localisation, without visible vibration) was achieved with K_{XY} = 10^{-3} and K_\theta = 10^{-3}.

5. Localisation algorithm

As previously mentioned, the localisation algorithm described in this paper is composed of two phases: the Initial Position Computation and the Position Tracking.

At the beginning, the robot has no clue of its location. Commonly, as explained in [2,3], the initial pose of the vehicle can be given as a parameter; otherwise, the vehicle must always start a routine from the same initial, known pose. In this paper, an initial step is proposed, the Initial Position Computation, which pinpoints the initial position of the vehicle, allowing it to start from any location. After that, the Position Tracking routine is initialised and performs, in an autonomous way, the estimation of the vehicle's pose using odometry and the Perfect Match algorithm (described in the previous section).

5.1. Initial Position Computation

The Initial Position Computation procedure is the process of finding the local minimum of the cost function (1) which corresponds to the vehicle's true location.

Let us consider the following assumptions: (1) initially, the vehicle's pose is not known and, therefore, it is necessary to test multiple hypotheses for the pose (these hypotheses represent possible initial locations of the vehicle); (2) due to the noise of the acquired data, the global minimum of the cost function (1) is not necessarily the true position of the vehicle; moreover, there are positions where the cost functions are similar; and (3) the true position of the vehicle corresponds to a local minimum of the cost function.

During the Initial Position Computation step, the robot does not know its position and needs to move in order to obtain new and helpful information. The robot moves forward until an obstacle appears, which causes the robot to rotate until it finds a free space that allows it to move forward again. In the 5DPO robots' case, an obstacle is another robot, the ball or the field lines (supposing that the robot always remains inside the field). These obstacles are seen through the artificial vision, using the segmented image described previously. In the RobVigil case, the robot has ultrasound obstacle sensors. After the Initial Position Computation step, the vehicle enters the Position Tracking phase and can, therefore, follow its predefined trajectory.

Each robot has a compass (see Table 1), whose output value is used during the Initial Position Computation step. The solution used in this step, which was developed and tested using real data, can be described as the following algorithm (a compact sketch is given after the list):

(1) Along the map, multiple hypotheses for the vehicle's location are placed evenly, with a fixed spacing between them. The initial angle of each hypothesis is the value given by the compass.

(2) After that, the multiple hypotheses are continuously iterated as follows:

(2.1) The poorly located hypotheses are "killed". A hypothesis is considered poorly located if, for at least tlost seconds, the filtered absolute difference between the estimated angle and the compass angle is higher than αlost degrees.

(2.2) The state of each hypothesis is propagated using the vehicle's kinematic model, shown in Eq. (4), and odometry data. The result of the propagation of the hypothesis state, represented as X_Odo^Hyp, is used as the initial state for the Perfect Match algorithm, which obtains the new state X_Match^Hyp of each hypothesis.

(2.3) The repeated hypotheses are merged. Repeated hypotheses are robot locations whose Euclidean distance is lower than dmerge metres and whose angle difference is lower than αmerge degrees.

(2.4) The algorithm reaches the end of the convergence process when, for tconv seconds, the number of "killed" and merged hypotheses is no more than Δh. It is considered that, after the algorithm converges, each "survivor" hypothesis has found a local minimum.

(2.5) After the end of the convergence process, the first hypothesis to reach a score of hscore is considered the vehicle's true location. The localisation algorithm then changes its state, and the Position Tracking step is performed from here on. The score of a hypothesis increases if its cost function, defined in (1), is at least two times lower than that of the other hypotheses.
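
A compact sketch of the kill-and-merge bookkeeping of steps (2.1) and (2.3) is given below; the Hypothesis fields and helper structure are assumptions standing in for the propagation of Eq. (4) and the Perfect Match refinement of step (2.2):

```python
import numpy as np
from dataclasses import dataclass

T_LOST = 4.0                  # tlost, seconds
D_MERGE = 0.7                 # dmerge, metres
A_MERGE = np.deg2rad(45.0)    # amerge, radians

@dataclass
class Hypothesis:
    pose: np.ndarray      # (x, y, theta) after the Perfect Match of step (2.2)
    cost: float           # E of Eq. (1) at this pose
    score: int
    lost_timer: float     # seconds the compass disagreement exceeded alost

def prune_and_merge(hyps):
    # (2.1) kill hypotheses whose compass disagreement lasted too long.
    hyps = [h for h in hyps if h.lost_timer < T_LOST]
    # (2.3) merge hypotheses that converged to the same local minimum,
    # keeping the one with the lowest cost.
    merged = []
    for h in sorted(hyps, key=lambda h: h.cost):
        duplicate = any(
            np.hypot(*(h.pose[:2] - m.pose[:2])) < D_MERGE
            and abs(h.pose[2] - m.pose[2]) < A_MERGE
            for m in merged)
        if not duplicate:
            merged.append(h)
    return merged
```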

The parameters αlost and αmerge are 45°. In indoor environments, the scenario is structured with elements, such as walls, doors and furniture, which are positioned orthogonally. In the MSL field, all lines are orthogonal. Therefore, hypotheses with an angle difference (between their orientation and the compass value) above 45° will be lost, since they will be matching a zone of the map which is orthogonal relative to the correct position. On the contrary, hypotheses with an angle difference and distance smaller than αmerge and dmerge, respectively, should be merged, since they represent the same local minimum of the cost function. The RPROP algorithm described in the previous section is capable of converging to a local minimum at a maximum distance of 35 cm. Therefore, all the hypotheses within two times the range of the RPROP algorithm (2 × 35 cm) represent, or will converge to, the same local minimum. Therefore, dmerge should be 0.7 m.

The difference between the hypothesis' orientation and the compass angle is filtered with a low-pass filter. The parameters tlost and tconv (equal to 4 s) are related to the time this difference needs to remain higher than αlost for a hypothesis to be considered lost. To obtain the values of tlost and tconv, a log of compass measurements, with the vehicle navigating in an indoor scenario, was used. tlost and tconv are values that guarantee that, in this log, the following conditions are met: true positive situations (poorly located hypotheses with an angle difference higher than αlost) are correctly detected, and false positive situations (well located hypotheses with an angle difference higher than αlost), introduced by bad compass measurements caused by perturbations from power lines and iron beams in the environment, are discarded.


Table 1
5DPO robot and RobVigil: dimensions, main sensors and actuators.

                                 5DPO Robot                                      RobVigil
Dimensions                       40 × 40 × 80 cm³                                50 × 50 × 153 cm³
Locomotion                       Omnidirectional                                 Differential
Environment perception sensor    Omnidirectional vision: Merlin F033C,           Tilting LRF placed on the top: Hokuyo URG-04LX-UG01
                                 res. 656 × 494                                  plus AX-12 Dynamixel Bioloid servo motor; 765 points
PC                               Laptop Intel Dual Core 2.0 GHz                  Onboard Mini ITX 1.0 GHz
Compass                          HMC5843                                         HMC5843



The parameters hscore and Δh were obtained empirically, for correct operation simultaneously in both scenarios (MSL robotic soccer and RobVigil). These values have a high safety margin. The values achieved were Δh = 10 and hscore = 9.

5.2. Position Tracking

The Position Tracking step tracks the local minimum found by the Initial Position Computation procedure. It consists of three crucial stages: (1) the Kalman Filter Prediction; (2) the Perfect Match procedure described in Section 4; and (3) the Kalman Filter Update. The Kalman Filter equations can be seen in more detail in [6].

The Kalman Filter Prediction stage takes the previously estimated vehicle state (Xv(k)) and, using odometry, estimates the next vehicle state (XOdo(k+1)). The previous vehicle state Xv(k) is composed of the state variables xv(k), yv(k) and θv(k), which are the 2D coordinates and the orientation relative to the x direction, respectively.

The following transition function f_v(X_v, d, \Delta\theta, q_v) was used to model the vehicle's kinematic movement:

f_v(X_v, d, \Delta\theta, q_v) = \begin{bmatrix} x_v(k) + d \cos(\theta_v(k) + \Delta\theta/2) + \varepsilon_{qx} \\ y_v(k) + d \sin(\theta_v(k) + \Delta\theta/2) + \varepsilon_{qy} \\ \theta_v(k) + \Delta\theta + \varepsilon_{q\theta} \end{bmatrix} \qquad (4)

in which d and \Delta\theta respectively represent the linear and angular displacements (estimated with odometry) between two consecutive time steps, (k+1) and (k). The kinematic model's error q_v was modelled as additive Gaussian noise with zero mean (\bar{q}_v = 0) and covariance Q_v; the vector q_v is composed of the variables \varepsilon_{qx}, \varepsilon_{qy} and \varepsilon_{q\theta}. Therefore, the new vehicle state is given by the following equation:

X_{Odo}(k+1) = f_v(X_v, d, \Delta\theta, \bar{q}_v) \qquad (5)

The Kalman Filter Prediction stage also uses the previously estimated covariance matrix (P_v(k)) and computes the next one (P_{Odo}(k+1)). The estimated covariance after the prediction stage is given by the equation:

P_{Odo}(k+1) = \frac{\partial f_v}{\partial X_v} P_v(k) \frac{\partial f_v}{\partial X_v}^T + \frac{\partial f_v}{\partial q_v} Q_v \frac{\partial f_v}{\partial q_v}^T \qquad (6)

in which the gradient of the transition function f_v is equal to:

\frac{\partial f_v}{\partial X_v} = \begin{bmatrix} 1 & 0 & -d \sin(\theta_v(k) + \Delta\theta/2) \\ 0 & 1 & d \cos(\theta_v(k) + \Delta\theta/2) \\ 0 & 0 & 1 \end{bmatrix} \qquad (7)

The gradient of the transition function f_v relative to the noise q_v is the identity matrix. The covariance Q_v depends on (increases with) the vehicle's translation and rotation estimated using odometry (d and \Delta\theta):

Q_v = \begin{bmatrix} (d \cos(\theta_v(k) + \Delta\theta/2) \, \sigma_d)^2 & 0 & 0 \\ 0 & (d \sin(\theta_v(k) + \Delta\theta/2) \, \sigma_d)^2 & 0 \\ 0 & 0 & (d \, \sigma_{d\theta})^2 + (\Delta\theta \, \sigma_\theta)^2 \end{bmatrix} \qquad (8)

When the vehicle moves forward 1 m, the odometry accumulates translational and rotational errors \sigma_d and \sigma_{d\theta}, respectively. When the vehicle rotates 1 radian, the odometry error of the vehicle's rotation is \sigma_\theta. These parameters were obtained by measuring the difference between the real pose and the pose estimated using the vehicle's odometry.

The Kalman Filter Update stage combines the state estimated using odometry, X_{Odo}(k+1), with the Perfect Match result, X_{Match}(k+1), described in Section 4. The Kalman Filter equations can be seen in [6]. The observation model h_v(X_v, r) is equal to the vehicle's state X_v:

h_v(X_v, r) = \begin{bmatrix} x_v + \varepsilon_{rx} \\ y_v + \varepsilon_{ry} \\ \theta_v + \varepsilon_{r\theta} \end{bmatrix} \qquad (9)

in which r is modelled as white noise with a Gaussian distribution with zero mean (\bar{r} = 0) and covariance matrix R. The error vector r is composed of the variables \varepsilon_{rx}, \varepsilon_{ry} and \varepsilon_{r\theta}.

In the update stage, the observation is equal to the state obtained after applying the Perfect Match:

Z_v = X_{Match}(k+1) \qquad (10)

The estimated observation is equal to the present estimate of the vehicle state, propagated during the Kalman Filter Prediction stage:

h_v(X_v, \bar{r}) = X_{Odo}(k+1) \qquad (11)

Therefore, the innovation of the Kalman Filter (V(k+1)) is equal to:

V(k+1) = Z_v - h_v(X_v, \bar{r}) \qquad (12)

At this stage, the covariance matrix propagated using odometry (P_{Odo}(k+1)) and the covariance matrix computed in the Perfect Match (P_{Match}(k+1)) are used to determine the Kalman Filter gain (W(k+1)):

W(k+1) = P_{Odo}(k+1) \frac{\partial h_v}{\partial X_v}^T \left[ \frac{\partial h_v}{\partial X_v} P_{Odo}(k+1) \frac{\partial h_v}{\partial X_v}^T + \frac{\partial h_v}{\partial r} P_{Match}(k+1) \frac{\partial h_v}{\partial r}^T \right]^{-1} \qquad (13)

The gradients of the observation model relative to the vehicle state and to the observation noise, \partial h_v / \partial X_v and \partial h_v / \partial r respectively, are identity matrices. Therefore, the previous equation can be rewritten as follows:

W(k+1) = P_{Odo}(k+1) \left[ P_{Odo}(k+1) + P_{Match}(k+1) \right]^{-1} \qquad (14)



Therefore, after the update stage, the new estimated state is given by the expression:

X_v(k+1) = X_{Odo}(k+1) + W(k+1) \cdot V(k+1) \qquad (15)

The new covariance matrix decreases according to the following equation:

P_v(k+1) = [I_{3\times3} - W(k+1)] \cdot P_{Odo}(k+1) \qquad (16)

where I_{3\times3} is the 3 × 3 identity matrix.
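
Putting Eqs. (4)–(16) together, the sketch below shows one Position Tracking cycle: an EKF prediction from odometry followed by the update with the Perfect Match pose and covariance. The odometry noise constants are illustrative, not the calibrated values; angle wrapping is omitted:

```python
import numpy as np

SIGMA_D, SIGMA_DTH, SIGMA_TH = 0.05, 0.02, 0.05  # assumed odometry noise values

def predict(x, P, d, dth):
    """One Kalman Filter Prediction step from odometry (d, dth)."""
    th = x[2] + dth / 2.0
    x_odo = x + np.array([d * np.cos(th), d * np.sin(th), dth])  # Eq. (4)
    F = np.array([[1.0, 0.0, -d * np.sin(th)],
                  [0.0, 1.0,  d * np.cos(th)],
                  [0.0, 0.0,  1.0]])                             # Eq. (7)
    Q = np.diag([(d * np.cos(th) * SIGMA_D)**2,
                 (d * np.sin(th) * SIGMA_D)**2,
                 (d * SIGMA_DTH)**2 + (dth * SIGMA_TH)**2])      # Eq. (8)
    return x_odo, F @ P @ F.T + Q                                # Eq. (6)

def update(x_odo, P_odo, x_match, P_match):
    """Kalman Filter Update with the Perfect Match observation."""
    W = P_odo @ np.linalg.inv(P_odo + P_match)                   # Eq. (14)
    x_new = x_odo + W @ (x_match - x_odo)                        # Eqs. (12), (15)
    P_new = (np.eye(3) - W) @ P_odo                              # Eq. (16)
    return x_new, P_new
```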

6. Experimental results

The robots of the 5DPO team compete in the Middle Size League games, in the RoboCup competition and in the Portuguese Robotics Festival, which take place every year.

The main application of the RobVigil is the surveillance of public facilities, such as department stores, hospitals or other service scenarios. Using a robot of this type makes it possible to perform inspection routines systematically and uninterruptedly, with minimum human intervention.

These robots (the 5DPO robots and the RobVigil) have the characteristics, sensors and actuators presented in Table 1.

Furthermore, the RobVigil is equipped with sensors to detect hazardous situations, such as fires, floods, gas leaks or movement. It is also equipped with three surveillance cameras: an omnidirectional camera, a high-resolution camera and a thermal camera. This equipment is required to perform the surveillance task. The robot also has ultrasound and infrared sensors, allowing it to navigate safely, avoiding obstacles. Furthermore, the robot includes a communication interface and a GUI, which allow the supervision of information, such as alarms, and of the different cameras in real time. Fig. 9 shows the on-board sensors and characteristics of the 5DPO robots and the RobVigil.

The industrial sensor Sick NAV350 was used as ground truth to assess the robustness and accuracy of the localisation method proposed in this paper (the Initial Position Computation and Position Tracking steps) in both scenarios. The Sick NAV350 is an expensive sensor, commonly applied in autonomous guided vehicles (AGVs). This sensor has its own positioning algorithm, which uses artificial landmarks (reflectors).

Software to map the reflectors, called "SOPAS", is available together with the NAV350 and was used to calibrate the sensor and its reflectors. After this calibration, the absolute average error between the vehicle's real orientation and the orientation acquired with the Sick sensor is 0.003 radians, and the average Euclidean distance between the vehicle's real 2D position and the one obtained using the Sick sensor is 0.011 m. Here, the vehicle's real orientation and 2D position were carefully measured manually. Considering its high accuracy, the NAV350 sensor was used as ground truth.

6.1. MSL league scenario results

All experiments in this section were performed by tele-operating a robot from the 5DPO team. However, this localisation algorithm is applied during the MSL games for the robot to perform autonomous actions, such as scoring goals (see the video in [21]).

The first figure, Fig. 10, shows the beginning of the Initial Position Computation process, where the multiple hypotheses are distributed evenly along the map, with a fixed distance of 2 m. The second figure, Fig. 11, shows the Initial Position Computation process immediately before the choice of the correct initial pose of the vehicle. In this figure, some poorly located hypotheses have already been "killed" and some repeated hypotheses have been

merged. Only five hypotheses were left at the end of the multi-hypotheses convergence process.

Graphs pertaining to the situation represented in Fig. 11 are shown in the two following figures. The cost function and score values of each hypothesis are presented in the graphs in Figs. 12 and 13, respectively. Hypothesis no. 2 has a cost function value obeying condition (2.4) in Section 5.1. The score of hypothesis no. 2 is nine. Therefore, among these five remaining hypotheses, the correct hypothesis (no. 2) was chosen as the initial location of the vehicle, using the criterion defined in (2.5) of Section 5.1.

For a better understanding, the vehicle's trajectory was divided in two parts, displayed in Figs. 14 and 15. The vehicle started moving at the "Start" position and stopped at the "End" position. The blue dotted line shows the vehicle's position, acquired by the NAV350 during the Initial Position Computation step. The Position Tracking step is performed after the Initial Position Computation step. In the figures, the black dashed line shows the vehicle's position given by the NAV350 during the Position Tracking step. The red line shows the vehicle's location provided by the localisation methodology proposed in this paper during the Position Tracking step, immediately after the Initial Position Computation step.

As shown in this last figure, the proposed methodology operates correctly compared to the ground truth (NAV350). The graphs in Figs. 16 and 17 help justify this comparison. The first graph shows the Euclidean distance error between the vehicle's 2D position obtained during the Position Tracking phase and the NAV350 position, while the second shows the error of the estimated angle. The two errors are shown as functions of the distance travelled by the vehicle. As the graphs show, the distance and angle errors along the vehicle's movement are small. The average error of the estimated angle is 1.822° with a standard deviation of 1.8105°, while the average Euclidean distance is 0.077 m with a standard deviation of 0.089 m. These errors can be ignored, since the pose estimation is sufficiently accurate to be used in the decision-making process during the MSL games, such as for scoring goals, as shown in the video in [21].

In the robots of the 5DPO team, the average execution time of the Position Tracking is 2 ms to compute 50 vision points, on an Intel Dual Core 2.0 GHz laptop. The artificial vision segmentation, with images of 656 × 494 resolution, is a simple colour segmentation algorithm, but still requires an average time of 20 ms to execute.

6.2. RobVigil scenario results

All the experiments in this section were conducted with the RobVigil in tele-operation mode. However, the vehicle is capable of navigating autonomously, performing uninterrupted surveillance routines, as shown in the video in [21]. Fig. 18 shows the initialisation of the multiple hypotheses during the Initial Position Computation step, while Fig. 19 shows the end of the convergence phase.

The cost function value and score of each hypothesis are shown in the graphs in Figs. 20 and 21, respectively. From the remaining 11 hypotheses, the algorithm was capable of selecting the correct one. Hypothesis no. 1 was chosen as the vehicle's initial pose, as it has a cost function value that obeys condition (2.4) in Section 5.1. The score of this hypothesis is nine, as imposed by condition (2.5) in Section 5.1.

For a better understanding, the vehicle's trajectory was divided in two parts, displayed in Figs. 22 and 23. The trajectory begins at the "Start" position in Fig. 22 and stops at the position marked "End" in Fig. 23.

Again, as shown in these last figures, it is possible to conclude that the proposed methodology operates correctly when compared to the ground truth (NAV350).


Fig. 9. (a) RobVigil robot: its features and sensors. (b) MSL 5DPO robot: its features and sensors.

Fig. 10. Initial Position Computation. Multiple hypotheses initialisation.

Fig. 11. End of the convergence of the multi-hypotheses phase with only five remaining hypotheses. The chosen hypothesis was no. 2.

Fig. 12. The graph is related to the situation of Fig. 11 and shows the cost value of each hypothesis.

Fig. 13. The graph is related to the situation of Fig. 11 and shows the score value of each hypothesis.


The graph in Fig. 24 presents, as a function of the distance travelled by the vehicle, the Euclidean distance between the estimated position of the vehicle and the position obtained using the NAV350. The second graph, in Fig. 25, shows the error of the angle.

As demonstrated in the graphs, the difference between the proposed methodology and the ground truth sensor is not significant. The average Euclidean distance between the estimated position and the position acquired using the NAV350 is 0.075 m, with a standard deviation of 0.084 m, while the average angle error is 1.0256°, with a standard deviation of 1.1459°. When using the RobVigil, these errors are negligible. With this level of accuracy, the vehicle is capable of performing autonomous inspection routines, even in dynamic scenarios, as shown in the video in [21].

In the RobVigil scenario, the Position Tracking step is executed swiftly and online, with an average execution time of 12 ms, using 765 points of the tilting LRF at each sample period of 100 ms. The operation is performed on a Mini ITX EPIA M10000G, 1.0 GHz.

7. Discussion and conclusions

The localisation method is applied in the omnidirectional robots of the 5DPO team, in the robotic soccer Middle Size League, using artificial vision and odometry.


Fig. 14. Trajectory of the 5DPO team's robot from "Start" to "P1". Field dimensions: 9 × 6 m.

Fig. 15. Trajectory of the 5DPO team's robot from "P1" to "End". Field dimensions: 9 × 6 m.

Fig. 16. Euclidean distance error between the vehicle's 2D position given by the Position Tracking step and the position acquired by the NAV350.

Fig. 17. Angle error between the estimation given by the Position Tracking and the position acquired with the ground truth (NAV350).

Fig. 18. Initial Position Computation step. Multi-hypotheses initialisation.

Fig. 19. End of the convergence of the multi-hypotheses phase, with 12 hypotheses remaining.

Fig. 20. The graph is related to the situation in Fig. 19. Cost function value of each hypothesis.

Fig. 21. The graph is related to the situation in Fig. 19. Score value of each hypothesis.


The localisation method was also applied in the surveillance robot, the RobVigil, using the 3D occupancy grid of indoor environments. The available data are 3D points acquired by the tilting LRF.

The experiments conducted made it possible to confirm the advantages of the proposed localisation algorithm: (1) the observation modules used (both in the MSL and RobVigil scenarios) are affordable; (2) the localisation algorithm works in well-structured, mapped environments, without the need for artificial landmarks or beacons.


Fig. 22. Trajectory of the RobVigil from "Start" to "P1". Corridor of 12 × 6 m.

Fig. 23. Trajectory of the RobVigil from "P1" to "End". Corridor of 12 × 6 m.

Fig. 24. Euclidean distance error between the vehicle's 2D position given by the Position Tracking step and the position acquired by the NAV350.

Fig. 25. Angle error between the estimation given by the Position Tracking and the position acquired with the ground truth (NAV350).


Compared to the work published previously by the authors in [2,3], the work described in this paper presents a new algorithm to pinpoint the initial pose of the vehicle, based on a multi-hypotheses approach: the Initial Position Computation. Until now, either in the papers previously published by the authors or in localisation algorithms in the literature, as is the case of the Particle Filters [1], the initial pose of the vehicle was a configuration parameter, or the robot started every time from the same known location.

The entire algorithm proved to work correctly in both scenarios. The experimental tests with real data helped confirm that the entire localisation methodology operates properly and that the position computation during the Position Tracking is sufficiently accurate for decision-making and surveillance routines, both in the MSL League and in the RobVigil scenarios, as confirmed by the video in [21].

The fundamental basis for the localisation methodology is the paper by Lauer et al. [9], which proves the importance of the Middle Size competition in the annual RoboCup festival. Compared to the localisation method described by Lauer et al. [9], the Position Tracking was improved: the Extended Kalman Filter was applied as a multi-sensor fusion system in order to combine the odometry information and the result of the Perfect Match. Furthermore, with regard to the Perfect Match applied in the RobVigil, the algorithm used in the 5DPO team's robots was adapted to use three-dimensional data from a tilting LRF, maintaining the low computational requirements even when operating in the 3D space. Since the computation time is not compromised in either application – the 5DPO team's robots and the RobVigil – the experiments conducted by Lauer et al. [9], which elect the Perfect Match as faster than the Particle Filter algorithm, remain valid. In those experiments, while the Perfect Match takes 4.2 ms, the Particle Filter, using 200 and 500 particles, takes 17.9 ms (four times longer) and 48.3 ms (ten times longer), respectively.

In the robots in which the localisation methodology described here is used, low-power processors are employed in order to increase battery autonomy. Particularly in the MSL league robots, the time cycle is quite short. However, the localisation methodology proposed here proved to be fast enough to be used online in systems with short time cycles (40 ms in the MSL robots of the 5DPO team and 100 ms in the RobVigil), even when low-power processors are used (the 2.0 GHz Intel Dual Core and the 1.0 GHz Mini ITX EPIA M10000G, respectively, in the 5DPO team's robots and the RobVigil).

Compared to the ICP algorithms, the localisation algorithm proposed and described in this paper is faster and can be applied online with smaller cycle times, even when using a larger quantity of Laser Range Finder points. The MbICP algorithm described in [10], which already shows improvements over the standard ICP, takes an average time of 76 ms to find the optimal solution, using only 361 Laser Range Finder points, at a sample rate of 200 ms, on a Pentium IV 1.8 GHz.

Furthermore, the 3D matching algorithm proposed in this work has a bounded execution time, depending on the number of points used. In the ICP algorithms, a point-to-point matching step is necessary to obtain the correspondence with the previous scan. Such a step does not have a bounded execution time and depends heavily on the overlapping zone between consecutive scans. This overlapping zone also influences the quality of the solution achieved.

Acknowledgments

This work is funded (or part-funded) by the ERDF – European Regional Development Fund through the COMPETE Programme (Operational Programme for Competitiveness) and by national funds through the FCT – Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0202-FEDER-7905. The author was supported by the Portuguese Foundation for Science and Technology through the PhD grant SFRH/BD/60630/2009.

References

[1] ROS gmapping package; November 2012. <http://www.ros.org/wiki/gmapping>.

[2] Pinto M, Moreira AP, Matos A, Sobreira H. Novel 3D matching self-localisation algorithm. Int J Adv Eng Technol 2012;5(1).

[3] Pinto M, Moreira AP, Matos A. Localization of mobile robots using an extended Kalman filter in a LEGO NXT. IEEE Trans Educ 2012;55(1):135–44.

[4] Marder-Eppstein E, Berger E, Foote T, Gerkey B, Konolige K. The office marathon: robust navigation in an indoor office environment. In: Robotics and automation (ICRA), IEEE international conference; May 2010. p. 300–7.



[5] Borenstein J, Everett HR, Feng L, Wehe D. Mobile robot positioning: sensors and techniques. J Robot Syst, Spec Issue Mobile Robots 1997;14(4):231–49.

[6] Thrun S, Burgard W, Fox D. Probabilistic robotics. Cambridge, Massachusetts: The MIT Press; 2005.

[7] Sobreira H, Moreira AP, Sena Esteves J. Characterization of position and orientation measurement uncertainties in a low-cost mobile platform. In: 9th Portuguese conference on automatic control, Controlo 2010, Coimbra, Portugal; 8–10 September 2010. p. 635–40.

[8] Sousa A, Moreira AP, Costa P. Multi-hypotheses navigation for indoor cleaning robots. In: 3rd International workshop on intelligent robotics (IRobot 2008), Lisbon, Portugal; October 2008. p. 71–82.

[9] Lauer M, Lange S, Riedmiller M. Calculating the perfect match: an efficient and accurate approach for robot self-localization. In: RoboCup symposium, Osaka, Japan; 13–19 July 2005. p. 142–53.

[10] Minguez J, Lamiraux F, Montesano L. Metric-based scan matching algorithms for mobile robot displacement estimation. IEEE Trans Robot 2006;22(5):1047–54.

[11] Gonzalez R, Rodriguez F, Guzman JL, Pradalier C, Siegwart R. Combined visual odometry and visual compass for off-road mobile robots localization. Robotica 2012;30(6):865–78.

[12] Nister D, Naroditsky O, Bergen J. Visual odometry for ground vehicle applications. J Field Robot 2006;23(1):3–20.

[13] Matthies L, Maimone M, Johnson A, Cheng Y, Willson R, Villalpando C, et al. Computer vision on Mars. Int J Comput Vis 2007;75(1):67–92.

[14] Chen S, Li Y, Kwok NM. Active vision in robotic systems: a survey of recent developments. Int J Robot Res 2011;30(11):1343–77.

[15] Csorba M. Simultaneous localisation and map building. PhD thesis, Robotics Research Group, Department of Engineering Science, University of Oxford, Oxford, England; 1997.

[16] Garulli A, Giannitrapani A, Rossi A, Vicino A. Mobile robot SLAM for line-based environment representation. In: 44th IEEE conference on decision and control and 2005 European control conference (CDC-ECC '05), Seville, Spain; 12–15 December 2005. p. 2041–6.

[17] Teslic L, Škrjanc I, Klancar G. Using a LRF sensor in the Kalman-filtering-based localization of a mobile robot. ISA Trans 2010;49(1):145–53.

[18] Thrun S, Fox D. A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping. In: IEEE international conference on robotics and automation (best conference paper award), San Francisco, vol. 1; April 2000. p. 321–8.

[19] Hähnel D, Burgard W, Thrun S. Learning compact 3D models of indoor and outdoor environments with a mobile robot. Robot Auton Syst 2003;44(1):15–27.

[20] Grisetti G, Stachniss C, Burgard W. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Trans Robot 2007;23(1):34–46.

[21] Video of global localisation with multi-hypotheses using a matching algorithm; December 2012. <http://paginas.fe.up.pt/~dee09013/index_ficheiros/videos/videoGlobal.mp4>.