

Robotics and Autonomous Systems 60 (2012) 1267–1278


Cooperative SLAM using M-Space representation of linear features

Daniele Benedettelli, Andrea Garulli, Antonio Giannitrapani ∗
Dipartimento di Ingegneria dell'Informazione, Università di Siena, 53100 Siena, Italy

Article info

Article history: Received 26 January 2012; Received in revised form 20 June 2012; Accepted 2 July 2012; Available online 7 July 2012.

Keywords: SLAM; Mapping; Multi-robot; M-Space

Abstract

This paper presents a multi-robot simultaneous localization and map building (SLAM) algorithm, suitable for environments which can be represented in terms of lines and segments. Linear features are described by adopting the recently introduced M-Space representation, which provides a unified framework for the parameterization of different kinds of features. The proposed solution to the cooperative SLAM problem is split into three phases. Initially, each robot solves the SLAM problem independently. When two robots meet, their local maps are merged together using robot-to-robot relative range and bearing measurements. Then, each robot starts over with the single-robot SLAM algorithm, by exploiting the merged map. The proposed map fusion technique is specifically tailored to the adopted feature representation, and explicitly takes into account the uncertainty affecting both the maps and the robot mutual measurements. Numerical simulations and experiments with a team composed of two robots performing SLAM in a real-world scenario are presented to evaluate the effectiveness of the proposed approach.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

The self-localization of a mobile robot is widely recognized as one of the most basic problems of autonomous navigation. While such a task can be performed quite well when the environment is a priori known, robot localization becomes much harder when a map of the environment is not available beforehand. This may be due to a lack of information on the environment the robot moves in, or to the excessive cost of manually building a map on purpose. In these circumstances, the robot must at the same time build a map of the environment and localize itself within it. This problem, known as Simultaneous Localization and Map building (SLAM), has been extensively studied over the last two decades (see [1–6] and references therein for a thorough review). The solutions to the SLAM problem presented so far differ mainly in the environment description adopted and in the estimation technique employed. In many applications of interest, a mobile robot has to move in indoor environments, which can be well described in terms of linear features. Typical examples are houses or offices, where there is plenty of walls and furniture [7,8]. When it comes to representing lines and segments, several possibilities are available, each with its pros and cons. Probably the most intuitive way to represent a segment is to specify the coordinates of its endpoints.

∗ Corresponding author. Tel.: +39 0577 234792; fax: +39 0577 233602. E-mail addresses: [email protected] (D. Benedettelli), [email protected] (A. Garulli), [email protected] (A. Giannitrapani).
doi:10.1016/j.robot.2012.07.001

One drawback of such a parameterization is that, more often than not, the actual extrema of the segment are not observed in a single measurement, due to the sensor's limited field of view or to occlusions (e.g., only a portion of a long wall is measured by a laser scan). In these circumstances, what can still be measured are the distance ρ and the angle α of the supporting line of the segment. The ρ–α parameterization is indeed another possible representation of linear features for mapping purposes. The biggest drawback of such a choice is the so-called lever-arm effect: the parameter uncertainty increases with the distance from the origin of the reference frame. A possible solution is to resort to special line representations, like the SP-Model [1] or the recently proposed M-Space representation [9], which express the feature coordinates in different local reference frames. The M-Space representation provides a unified framework for describing different kinds of 2D and 3D geometric features, thus being versatile enough to be adopted for SLAM purposes in a wide range of environments and in the presence of heterogeneous sensors. One key feature of the M-Space representation is its ability to use only the partial information contained in a measurement of a given element of the map. This characteristic is especially useful for the early initialization of the estimate of a feature which has been only partially observed.

When exploring large areas, the map building procedure can be more effective if tackled by a team of robots. Besides being able to cover a given area faster than a single agent, a multi-robot system is more robust to failures and often also less expensive when compared to a single complex vehicle. Another crucial issue for a SLAM algorithm is being able to recognize places that have already been visited (see, e.g., [10–12]).


Since the difficulty of loop closure increases with the size of the environment, using a team of robots allows one to break down a large area into smaller regions to be explored by each agent. In light of these considerations, several multi-robot SLAM algorithms have been proposed in recent years, adopting different estimation techniques, like Extended Kalman Filters (EKF) [13,14], information filters [15], particle filters [16], set-membership estimators [17], or sparse optimization techniques [18].

Besides the aforementioned advantages, cooperative localization and mapping solutions present an additional challenge too. In fact, a key issue for the effectiveness of multi-robot SLAM algorithms is the ability to merge into a common reference frame maps built by different robots in different frames. Such a task, which is particularly difficult if no information is available on the initial robot poses, is crucial to preserve the quality of the map and fully exploit the benefits of the multi-agent architecture. Wrong or inaccurate map merging can completely destroy the map consistency and eventually jeopardize the correct behavior of the system. Hence, despite the heterogeneity of techniques employed, most multi-robot SLAM algorithms have tackled this point. A fusion algorithm for maps made up of point-wise landmarks has been presented in [19]. Under the assumption that the agents can perform range and bearing mutual measurements during a rendezvous, the frame transformation is derived from geometrical arguments and a two-step merging procedure has been proposed. First, the map built by one robot is incorporated into the map of the other one, according to the coordinate transformation relating the reference frames of the two robots. Then, correspondences between landmarks present in both maps are sought, in order to improve the accuracy of the coordinate transformation. An algorithm for merging two occupancy grid maps has been proposed in [20]. In this case a set of possible reference frame transformations is computed by analyzing the cross-correlation of suitable spectra of the two maps. Each transformation is then assigned a weight representing the confidence in the corresponding merged map. In this way it is also possible to track multiple hypotheses in case of ambiguous associations. While this approach does not require robot rendezvous and mutual measurements, it can be applied only if the two maps have significant overlap. Recently, a probabilistic map merging procedure has been proposed in [21]. Analogously to what is done in [19], map fusion is carried out in two steps. An initial map alignment is performed based on range and bearing robot-to-robot measurements. Then the merged map is updated according to duplicate features present in both original maps. The novelty of this approach lies in the probabilistic method adopted for the merging procedure, which is suitable for particle filter based SLAM algorithms. A map merging algorithm for mixed topological/metric maps has been presented in [22]. A graph-like topological map is built, with vertices representing local occupancy grid maps and edges describing relative positions of adjacent local maps. In this framework, the map fusion between two robots boils down to adding an edge that connects the two topological maps, and associating to it the estimate of the relative robot pose. A similar approach is adopted in [23], where each robot builds landmark-based local maps topologically connected through an adjacency graph. In this framework, rendezvous between robots, feature correspondences in different robot maps and absolute localization measurements give rise to a cycle in the global graph, which translates into constraints that allow the system to refine the estimates of the transformation between the local reference frames. This work has been extended in [24] for dealing with heterogeneous teams of aerial and ground robots equipped with monocular cameras. Several local 3D maps containing visual landmarks and line segments are built, whereas a global connectivity graph captures their relative relationships. Another visual SLAM algorithm for a team of robots has been recently presented in [25]. In this case, a Rao–Blackwellized particle filter is proposed to collaboratively build a global map made of 3D visual landmarks detected by stereo cameras.

In this paper, a new multi-robot SLAM algorithm is presented for line-based environment descriptions. The proposed approach builds upon the M-Space representation to describe linear features and adopts a map fusion scheme inspired by the technique proposed in [19]. The multi-robot algorithm goes through three stages. Initially, each robot runs an EKF-based single-robot SLAM algorithm for independently building local maps until a rendezvous occurs. When two robots meet, the local maps are merged by using the information coming from mutual robot measurements and feature matching. Afterward, the agents start over with the single-robot SLAM algorithm, taking advantage of the merged map. The main contribution of the paper is to present a novel map fusion algorithm tailored to environments described in terms of lines and segments. The uncertainty of the resulting map is properly updated, and the covariance matrices necessary to resume the single-robot SLAM when the M-Space representation is adopted are analytically computed. Results from numerical simulations and experimental tests involving real robots are reported, to assess the viability of the proposed approach in a real-world scenario. A preliminary version of this work has been presented in [26].

The paper is organized as follows. In Section 2, the M-Space representation of linear features is revisited. The single-robot SLAM algorithm, based on the EKF and the M-Space representation, is outlined in Section 3. The main contribution of the paper is presented in Section 4, where the map fusion technique, as well as the update of the resulting map uncertainty, is illustrated. In Section 5 the results of simulations and experimental tests with real robots are reported. Finally, in Section 6 some conclusions are drawn and lines of future research are outlined.

Notation. The symbol I_q denotes the identity matrix of order q. The matrix diag(a_1, . . . , a_n) is the diagonal matrix having the scalars a_1, . . . , a_n on its diagonal. Similarly, the matrix blkdiag(A_1, . . . , A_n) is the block diagonal matrix having the matrices A_1, . . . , A_n on its diagonal. Boldface symbols denote vectors. The symbol x̂ denotes the estimate of the quantity x, and x̃ = x − x̂ is the corresponding estimation error. In the notation ^f x_r, the left superscript f means that the quantity x is expressed in the reference frame ⟨R_f⟩, whereas the right subscript r indicates which robot the quantity x refers to. Whenever present, the subscript s denotes a quantity expressed in the M-Space.

2. M-Space representation of segments

Among the several possible representations, in this paper a line segment in the plane is described in a reference frame ⟨G⟩ either by its endpoint coordinates

xf = [xA yA xB yB]T , (1)

or, alternatively, by the parameters

xp = [α ρ dA dB]T . (2)

The parameter α ∈ (−π, π] is the angle between the x-axis of ⟨G⟩ and the normal n to the segment passing through the origin, whereas ρ ≥ 0 is the distance of the origin of ⟨G⟩ from the line. To define d_A and d_B, let us introduce a second reference frame ⟨S⟩, whose origin is at the intersection between the segment and the normal n, and whose x-axis lies on n, pointing away from the origin of ⟨G⟩. Then, d_A and d_B are the coordinates (with proper sign) of the segment endpoints A and B along the y-axis of ⟨S⟩ (see Fig. 1).

As pointed out in the previous section, both parameterizations suffer from some drawbacks.


Fig. 1. Two different segment parameterizations. In this case dA < 0 and dB > 0.

The segment endpoints are not always observable from a measurement, whereas the ρ–α representation is prone to the lever-arm effect. A possible solution, overcoming both problems, is provided by the measurement subspace representation (briefly, the M-Space representation). In this framework, a local reference frame is attached to each feature, with respect to which the feature parameters (or coordinates) are expressed. The M-Space is general enough to allow for a uniform treatment of many different kinds of features, like 2D or 3D point-wise landmarks, segments, lines, and planes. In the following, we will focus on maps made up of 2D segments, referring the reader interested in the M-Space representation of generic features to [9].

The M-Space coordinates of a segment are the parameters x_p in (2), expressed in the reference frame of the corresponding feature. As a consequence, the coordinates x_p relative to different features live in different reference frames. An immediate benefit of the M-Space representation is that, since each feature is described in a local reference frame, the effects of the lever-arm phenomenon are mitigated. Intuitively, this happens because each segment is close to the origin of the corresponding local reference frame. Another nice property of the M-Space representation is that the features are parameterized in a way that fully specifies both their location and extent, as occurs when the segment endpoints are used. However, as long as a segment has not been completely observed, the M-Space representation allows one to initialize the feature in a subspace corresponding to the partial information provided by the robot sensors. As a consequence, the dimension q of the M-Space coordinates x_p can grow from a minimum of 2 (when the robot first measures the line position and orientation) up to 4 (when it has observed both segment endpoints). Similarly, whenever a segment already included in the map is not entirely observed, i.e. only ρ and α are measured, this information can still be exploited to correct the current estimate of the feature.

The M-Space coordinates and the feature space coordinates of a feature are related by two projection matrices, B(x_f) and B̃(x_f). Let δx_f denote a small change in the feature coordinates corresponding to a small change in the M-Space coordinates δx_p. The relationships between δx_p and δx_f are

δx_p = B(x_f) δx_f,
δx_f = B̃(x_f) δx_p.  (3)

Eqs. (3), relating small changes in the M-Space to small changes in the feature space, can be applied to project estimate corrections from one space to the other, as will be shown in the next section. Notice that the projection matrices B(x_f) and B̃(x_f) are functions of the value of the feature space coordinates x_f, and their dimensions vary with the dimension q of x_p. In the following treatment, for ease of notation we will always assume q = 4 (the case q < 4 requiring straightforward modifications). In this case, B(x_f) and B̃(x_f) take on the form

B(x_f) = \begin{bmatrix}
\dfrac{\cos\alpha}{L\sqrt{2}} & \dfrac{\sin\alpha}{L\sqrt{2}} & -\dfrac{\cos\alpha}{L\sqrt{2}} & -\dfrac{\sin\alpha}{L\sqrt{2}} \\[4pt]
\dfrac{\cos\alpha}{\sqrt{2}} & \dfrac{\sin\alpha}{\sqrt{2}} & \dfrac{\cos\alpha}{\sqrt{2}} & \dfrac{\sin\alpha}{\sqrt{2}} \\[4pt]
-\sin\alpha & \cos\alpha & 0 & 0 \\
0 & 0 & -\sin\alpha & \cos\alpha
\end{bmatrix},

\tilde{B}(x_f) = \begin{bmatrix}
\dfrac{L\cos\alpha}{\sqrt{2}} & \dfrac{\cos\alpha}{\sqrt{2}} & -\sin\alpha & 0 \\[4pt]
\dfrac{L\sin\alpha}{\sqrt{2}} & \dfrac{\sin\alpha}{\sqrt{2}} & \cos\alpha & 0 \\[4pt]
-\dfrac{L\cos\alpha}{\sqrt{2}} & \dfrac{\cos\alpha}{\sqrt{2}} & 0 & -\sin\alpha \\[4pt]
-\dfrac{L\sin\alpha}{\sqrt{2}} & \dfrac{\sin\alpha}{\sqrt{2}} & 0 & \cos\alpha
\end{bmatrix},

where L denotes the length of the segment.
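As a concrete illustration, the following Python sketch builds B(x_f) and B̃(x_f) from the segment endpoints. The extraction of α, ρ and L from the endpoints follows the conventions of Fig. 1 as we read them, and the function names are ours, not the paper's. A quick numerical check confirms that B(x_f) B̃(x_f) = I_4, as expected for a projection/back-projection pair.

```python
import numpy as np

def segment_params(xA, yA, xB, yB):
    """Return (alpha, rho, L) of the supporting line, per the conventions of Fig. 1."""
    L = np.hypot(xB - xA, yB - yA)
    tx, ty = (xB - xA) / L, (yB - yA) / L          # unit tangent
    nx, ny = -ty, tx                               # unit normal
    rho = xA * nx + yA * ny                        # signed distance of the line from the origin
    if rho < 0:                                    # enforce rho >= 0 by flipping the normal
        nx, ny, rho = -nx, -ny, -rho
    return np.arctan2(ny, nx), rho, L

def projection_matrices(xf):
    """Projection matrices B(xf) and B~(xf) for a fully observed segment (q = 4)."""
    alpha, _, L = segment_params(*xf)
    c, s, r2 = np.cos(alpha), np.sin(alpha), np.sqrt(2.0)
    B = np.array([[ c/(L*r2),  s/(L*r2), -c/(L*r2), -s/(L*r2)],
                  [ c/r2,      s/r2,      c/r2,      s/r2    ],
                  [-s,         c,         0.0,       0.0     ],
                  [ 0.0,       0.0,      -s,         c       ]])
    Bt = np.array([[ L*c/r2,  c/r2, -s,   0.0],
                   [ L*s/r2,  s/r2,  c,   0.0],
                   [-L*c/r2,  c/r2,  0.0, -s ],
                   [-L*s/r2,  s/r2,  0.0,  c ]])
    return B, Bt

if __name__ == "__main__":
    xf = np.array([1.0, 2.0, 4.0, 3.0])            # endpoints A = (1, 2), B = (4, 3)
    B, Bt = projection_matrices(xf)
    assert np.allclose(B @ Bt, np.eye(4))          # B projects back what B~ lifts
```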

3. Single-robot SLAM

In this section, the single-robot SLAM problem is first cast as a state estimation problem for an uncertain dynamic system. Then, an EKF-based solution, suitable when an M-Space representation of linear features is adopted, is briefly reviewed.

Let us consider an autonomous robot navigating in a 2D environment, and let x_R = [x_R y_R θ_R]^T be its pose, where [x_R y_R]^T is the position and θ_R is the orientation with respect to a global reference frame. The generic robot motion model based on linear and angular velocity commands u(k) = [v(k) ω(k)]^T is denoted by

x_R(k + 1) = f(x_R(k), u(k), ε_u(k)),  (4)

where ε_u is a white noise affecting the velocities, with E[ε_u(k)] = 0 and E[ε_u(k) ε_u^T(k)] = Q(k). Assume that the surrounding environment can be described in terms of linear features, as usually happens in indoor environments presenting walls, doors, or furniture. Then, a map of the environment can be given in terms of n line segments, identified by the coordinates of their endpoints x_{f_i}, i = 1, . . . , n, defined in (1) and expressed in the global reference frame. Since static features are considered, their location does not change with time, i.e.

xfi(k + 1) = xfi(k), i = 1, . . . , n. (5)

By stacking into the vector x both the robot pose and the segment endpoints

x(k) = [x_R^T(k) x_{f_1}^T(k) · · · x_{f_n}^T(k)]^T,  (6)

Eqs. (4) and (5) can be rewritten in compact form as

x(k + 1) = F(x(k), u(k), ε_u(k)).  (7)

Notice that features are incrementally detected as the robot explores new regions of the environment, and hence the dimension of the vector x ∈ R^{3+4n} grows with time.

The robot is equipped with sensors able to take measurements of linear features detected during the navigation. Let

m_i(k) = [α_i(k) ρ_i(k) d_{A_i}(k) d_{B_i}(k)]^T + ε_{m_i}(k),  i = 1, . . . , n  (8)

denote the measurement of the i-th line segment at time k, affected by noise ε_{m_i}(k) (see Fig. 2).


Fig. 2. Measurements m_i.

It is assumed that the measurement noise can be modeled as white noise, with E[ε_{m_i}(k)] = 0 and E[ε_{m_i}(k) ε_{m_i}^T(k)] = R_{m_i}(k). Measurements like those in (8) can be easily obtained in practice if the vehicle is equipped with a laser rangefinder, returning pairs of range and bearing measurements taken from planar scans of the environment. From the raw laser readings, the measurements m can be extracted through segmentation and line fitting algorithms. As a by-product of such an extraction phase, also the covariance matrix R_{m_i}(k) of the measurement noise can be estimated, given the accuracy of the laser readings [8]. Notice that d_{A_i}(k) and d_{B_i}(k) are present in the measurement vector m_i(k) only if the endpoints of the i-th identified segment are detected (e.g., due to a corner). From geometrical considerations, the measurement m_i can be expressed as a function of the robot pose and the segment endpoints

mi(k) = h(xR(k), xfi(k)) + εmi(k), i = 1, . . . , n. (9)
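The paper does not spell out h(·); for a line feature, a common choice is to express the supporting line of the global segment in the robot frame. The sketch below shows one such plausible form for the (α, ρ) part of the measurement only, assuming the conventions of Fig. 1; the function and variable names are ours.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def h_line(x_R, alpha, rho):
    """Predicted (alpha, rho) of a global line as seen from robot pose x_R = [x, y, theta].

    One plausible form of h(.): the line normal rotates by -theta and the distance
    is reduced by the projection of the robot position onto the normal.
    """
    x, y, theta = x_R
    alpha_r = wrap(alpha - theta)
    rho_r = rho - (x * np.cos(alpha) + y * np.sin(alpha))
    if rho_r < 0:                      # keep rho >= 0 by flipping the normal direction
        rho_r, alpha_r = -rho_r, wrap(alpha_r + np.pi)
    return np.array([alpha_r, rho_r])
```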

By grouping all the measurements into a single vector

m(k) = [m_1^T(k) · · · m_n^T(k)]^T,

the measurement Eqs. (9) can be rewritten as

m(k) = H(x(k)) + ε_m(k)  (10)

where ε_m = [ε_{m_1}^T · · · ε_{m_n}^T]^T.

Within this framework, the SLAM problem boils down to estimating the state vector x given the measurements m. More precisely, let x̂(0|0) be an estimate of the initial robot pose and feature coordinates. Given the dynamic model (7) and the measurement Eq. (10), find an estimate x̂(k|k) of the robot pose and feature coordinates x(k) based on the measurements m collected up to time k.

When adopting the M-Space representation, the SLAM problem can be tackled by introducing an auxiliary state vector x_s, which includes the robot pose x_R in the global frame and the M-Space coordinates x_{p_i} of each feature, defined in (2):

x_s(k) = [x_R^T(k) x_{p_1}^T(k) · · · x_{p_n}^T(k)]^T.  (11)

A standard EKF is run to compute at each time k an estimate x̂_s(k|k) of (11), and the covariance matrix

P_{x_s}(k|k) = E[x̃_s(k|k) x̃_s^T(k|k)]

of the corresponding estimation error x̃_s. Then, the state correction based only on the measurement taken at time k

δx_s(k) = x̂_s(k|k) − x̂_s(k|k − 1)  (12)

is used to update the estimate of the original state vector x̂(k|k) in the following way. First, the previous estimate x̂(k − 1|k − 1) is propagated according to the motion model (7), obtaining the predicted state

x̂(k|k − 1) = F(x̂(k − 1|k − 1), u(k − 1), 0).

Then, x̂(k|k − 1) is corrected by projecting δx_s(k) from the M-Space to the feature space according to the relationship (3). Thus, from (3), (6) and (11) one has

x̂(k|k) = x̂(k|k − 1) + Π δx_s(k),

where Π = blkdiag(I_3, B̃(x_{f_1}), . . . , B̃(x_{f_n})) is built by using the B̃(x_{f_i}) matrices in (3), evaluated at the current feature estimates x̂_{f_i}(k|k − 1).
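A minimal sketch of this correction step is given below: the M-Space correction δx_s produced by the EKF is lifted to the feature space through Π = blkdiag(I_3, B̃(x_{f_1}), . . . , B̃(x_{f_n})). The EKF internals (gain computation, covariance update) are omitted, and the names are ours.

```python
import numpy as np
from scipy.linalg import block_diag

def apply_mspace_correction(x_pred, features_xf, delta_xs, proj_Bt):
    """Lift the M-Space correction delta_xs to the feature-space state.

    x_pred      : predicted state [robot pose (3), endpoints of n segments (4 each)]
    features_xf : list of n arrays with the predicted endpoints of each segment
    delta_xs    : EKF correction in the M-Space (3 + 4n here, assuming q = 4 for all)
    proj_Bt     : function returning B~(xf) for a segment (e.g. from the sketch above)
    """
    blocks = [np.eye(3)] + [proj_Bt(xf) for xf in features_xf]
    Pi = block_diag(*blocks)              # projection from M-Space to feature space
    return x_pred + Pi @ delta_xs
```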

Remark 1. The estimation scheme described so far can be efficiently implemented without explicitly computing the nominal estimates of the auxiliary vector x_s. It can be shown that the correction term δx_s in (12) can be expressed as a function of the current estimates in the feature space x̂, the current measurements m and the covariance matrix P_{x_s} of the estimation error in the M-Space. A detailed description of this procedure can be found in [9].

Summarizing, the single-robot SLAM algorithm produces an estimate x̂(k) of the robot pose and the segment endpoints in the global frame, as well as the covariance matrix P_{x_s}, which expresses the robot uncertainty in the global frame and the feature uncertainty in the M-Space. Notice that if one is interested in expressing the uncertainty of the map in the feature space, the matrix P_{x_s} can be projected to the feature space by resorting to the relationship (3), as will be shown in Section 4.2.

3.1. Matching

A key issue for SLAM algorithms to be successful is the data association mechanism. When a robot measures a feature, it must first decide whether the measurements originate from a newly discovered item or refer to a feature already present in the map. In the latter case, a method for selecting which feature matches the measurements taken from the sensors is needed. Several matching techniques have been proposed in the literature, with different levels of trade-off between effectiveness and complexity [27]. One of the most popular approaches is the Nearest Neighbor (NN) algorithm, which associates to the measured feature the map feature with the smallest Mahalanobis distance. In this work, a modified NN algorithm, specifically tailored to linear features, has been adopted [8]. First, three validation gates on the distance, orientation and overlapping of the features are employed to determine beforehand the candidate pairings. Then, the NN algorithm is run among all the feature associations that have passed the validation gates.

Let m_i be the i-th measurement taken by an agent (expressed in the robot local reference frame), defined as in (8). Consider the estimate x̂_{f_j} of the j-th feature present in the map, containing the endpoints of the estimated segment in feature space. In order to associate a measurement to a feature, the Mahalanobis distance M_ij between the actual measurement m_i and the predicted measurement m̂_j from the j-th feature estimate x̂_{f_j} is evaluated. The measurement prediction can be easily computed from the current robot estimate x̂_R as

m̂_j = h(x̂_R, x̂_{f_j}).  (13)

If e_ij = m_i − m̂_j is the prediction error, then M_ij takes on the form M_ij = e_ij^T Λ^T (Λ S_ij Λ^T)^{-1} Λ e_ij, where S_ij = E[e_ij e_ij^T] can be computed from the sensor noise covariance R_{m_i} and the estimation error covariance P_{x_s}, by linearizing the relationship (13). The matrix Λ = [Λ_α Λ_ρ]^T, where Λ_α = [1 0 0 0]^T and Λ_ρ = [0 1 0 0]^T, removes from the computation of the error e_ij the segment endpoints d_A and d_B possibly present in the measurement m_i. Hence, the Mahalanobis distance M_ij takes care only of the ρ and α parameters of the line.


On the contrary, the segment endpoints are used to compute the overlapping rate τ_ij between the extracted segment i and the segment associated to the j-th feature, normalized between 0 and 1. The latter quantity allows the matching module to discriminate between different features lying on the same line (e.g., two walls separated by a door). The features in the map which are candidates for being associated to the given measurement are selected by evaluating the following indicators:

• line orientation error: M_{α,ij} = (Λ_α^T e_ij)^2 / (Λ_α^T S_ij Λ_α);
• line distance error: M_{ρ,ij} = (Λ_ρ^T e_ij)^2 / (Λ_ρ^T S_ij Λ_ρ);
• overlapping rate: τ_ij.

At this point, the data association mechanism can be summarized as follows. Given a measurement m_i, a feature x_{f_j} present in the map is a candidate for the matching if and only if M_{α,ij} < T_α, M_{ρ,ij} < T_ρ and τ_ij > T_τ, where the thresholds T_α, T_ρ and T_τ are tuning knobs of the algorithm. Then, among all candidate features, the measurement is associated to the one with the smallest Mahalanobis distance M_ij. A measurement with zero candidate features is considered as taken with respect to a new feature. Hence, a new item is inserted into a tentative list of newly discovered features, waiting to be promoted into the state of the filter when deemed reliable enough.
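The gating-plus-nearest-neighbour logic described above can be summarized as follows. This is a sketch only; the threshold names T_alpha, T_rho, T_tau and the helper routine are placeholders, not the paper's code.

```python
def associate(measurement, map_features, gates, T_alpha, T_rho, T_tau):
    """Return the index of the matched map feature, or None for a new feature.

    gates(measurement, feature) must return (M_alpha, M_rho, tau, M_ij), i.e. the
    orientation error, distance error, overlapping rate and Mahalanobis distance.
    """
    best_idx, best_M = None, float("inf")
    for j, feature in enumerate(map_features):
        M_alpha, M_rho, tau, M_ij = gates(measurement, feature)
        # validation gates on orientation, distance and overlap
        if M_alpha < T_alpha and M_rho < T_rho and tau > T_tau:
            if M_ij < best_M:                 # nearest neighbour among the candidates
                best_idx, best_M = j, M_ij
    return best_idx                           # None -> tentative list of new features
```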

4. Multi-robot SLAM

Suppose that there are two robots, R_1 and R_2, exploring an unknown area. Initially, each agent runs a single-robot SLAM algorithm like the one sketched in Section 3. This way, the robots have maps of the environment in different reference frames. When the robots meet, they exchange their local maps with each other to produce a single global map. In order to fuse maps created by different robots, whose initial poses are unknown, the transformation between their reference frames needs to be determined. This can be efficiently done if robot-to-robot mutual measurements are available.

The overall map fusion procedure can be summarized in three steps. First, the two maps are aligned according to the estimated roto-translation relating the robot reference frames. Then, the covariance matrices of the estimation error of the maps are updated according to the alignment transformation. Finally, duplicate features, due to partial overlapping between the local maps, are sought. In such a case, this information is used to impose constraints that improve the accuracy of the resulting map. The map fusion procedure is described for a team composed of two robots, but can be applied to larger teams by repeating the procedure for each pair of robots.

4.1. Map alignment

At the rendezvous, let ^1x̂_1 ∈ R^{m_1} and ^2x̂_2 ∈ R^{m_2}, where m_1 = 3 + 4n_1 and m_2 = 3 + 4n_2, be the estimates of the state vector x built by robots R_1 and R_2, respectively. The integers n_1 and n_2 are the numbers of features present in each map.

The map alignment problem consists in finding the roto-translation between the reference frames ⟨R_1⟩ and ⟨R_2⟩, in order to express the map estimated by a robot in the frame of the other one. Without loss of generality, suppose we are interested in estimating the vector ^1x_2, i.e. the map of robot R_2 expressed in the frame ⟨R_1⟩. This problem can be tackled by processing robot-to-robot measurements. When the agents are within sensing distance, robot i measures the range and the bearing to vehicle j (see Fig. 3):

{}^i z_j = \begin{bmatrix} \eta \\ {}^i\phi_j \end{bmatrix} + \begin{bmatrix} \varepsilon^i_{\eta} \\ \varepsilon_{{}^i\phi_j} \end{bmatrix}, \qquad i, j = 1, 2,\; i \neq j,

Fig. 3. Robot-to-robot measurements.

where η is the distance between the two robots and ^iφ_j is the angle under which robot R_i sees robot R_j. The measurement errors ε^i_η and ε_{^iφ_j} are modeled as zero-mean white noise. A more accurate estimate of the distance between the two robots can be computed as the weighted average of the two distance measurements, thus obtaining the combined measurement vector

z = \begin{bmatrix} \eta \\ {}^1\phi_2 \\ {}^2\phi_1 \end{bmatrix} + \begin{bmatrix} \varepsilon_{\eta} \\ \varepsilon_{{}^1\phi_2} \\ \varepsilon_{{}^2\phi_1} \end{bmatrix} = \bar{z} + \varepsilon_z,

where z̄ denotes the actual distance and relative bearings, and ε_z is a white measurement noise with covariance matrix

R_z = E[ε_z ε_z^T] = diag(σ_η^2, σ_{^1φ_2}^2, σ_{^2φ_1}^2).  (14)

Geometrical considerations on the distance and angles η, ^1φ_2, ^2φ_1 allow one to compute the exact roto-translation t between the reference frames ⟨R_1⟩ and ⟨R_2⟩ (see [19] for details):

^1x_2 = t(^1x_1, ^2x_2, z̄).  (15)

Since the arguments of the function t(·) are not known exactly, the estimate ^1x̂_2 is computed by replacing them with the corresponding estimates and with the measurement z:

^1x̂_2 = t(^1x̂_1, ^2x̂_2, z).  (16)

Eq. (16) is a compact notation describing the overall map alignment procedure. Basically, it means that the map of robot R_2 in the reference frame ⟨R_1⟩ is obtained by roto-translating all the features x_{f_i} present in the map according to t(·).
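A sketch of the alignment t(·), following the geometric construction of [19] as we understand it: the mutual range/bearing measurements give the pose of R_2 in ⟨R_1⟩, from which a rigid transformation for every feature of the imported map follows. The angle conventions and names below are ours, not the paper's.

```python
import numpy as np

def frame_transform(pose1_in_R1, pose2_in_R2, eta, phi_12, phi_21):
    """Rotation and translation mapping frame <R2> into <R1>.

    pose1_in_R1 : pose of robot 1 in its own frame [x, y, theta]
    pose2_in_R2 : pose of robot 2 in its own frame [x, y, theta]
    eta, phi_12, phi_21 : mutual distance and the two relative bearings
    """
    x1, y1, th1 = pose1_in_R1
    x2, y2, th2 = pose2_in_R2
    # position and orientation of robot 2 expressed in <R1>
    p2_R1 = np.array([x1 + eta * np.cos(th1 + phi_12),
                      y1 + eta * np.sin(th1 + phi_12)])
    th2_R1 = th1 + phi_12 + np.pi - phi_21
    dtheta = th2_R1 - th2
    R = np.array([[np.cos(dtheta), -np.sin(dtheta)],
                  [np.sin(dtheta),  np.cos(dtheta)]])
    t = p2_R1 - R @ np.array([x2, y2])
    return R, t

def align_point(p_in_R2, R, t):
    """Express a 2D point of robot 2's map (e.g. a segment endpoint) in <R1>."""
    return R @ np.asarray(p_in_R2) + t
```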

4.2. Updating map uncertainty

At the end of the map alignment procedure, each robot has a map of the overall environment explored by the two robots, in its own reference frame. However, in order for each robot to start over the navigation by running the single-robot SLAM algorithm outlined in Section 3, it is necessary to update the uncertainty associated to the aligned map. This must be done by taking into account the fact that the feature uncertainties computed so far are related to different reference frames. Hence, the covariance matrix of the aligned state vector must be modified according to the transformation performed in the map alignment stage. In the following, the map uncertainty update is described in detail.

Let us stack together the two M-Space state vectors as X_s = [^1x_{s_1}^T  ^2x_{s_2}^T]^T and let

P_s = E[X̃_s X̃_s^T] = blkdiag(P_{x_{s_1}}, P_{x_{s_2}})  (17)

be the covariance matrix of the estimation error (see Table 1 for a summary of the symbols used in this section). Notice that P_s refers to the estimation errors of the two maps before the fusion, hence expressed in two different reference frames. In order to properly update the uncertainty of the merged map, the covariance matrix of the filter state after the map alignment has to be computed. Again, without loss of generality, let us consider the map merged by robot R_1. Define the M-Space aligned state vector X_s^a = [^1x_{s_1}^T  ^1x_{s_2}^T]^T,


Table 1
Correspondence of symbols in M-Space and in feature space.

Quantity                    | M-Space                                 | Feature space
Segment coordinates         | x_p                                     | x_f
Single-robot state vector   | x_s = [x_R^T x_{p_1}^T · · · x_{p_n}^T]^T | x = [x_R^T x_{f_1}^T · · · x_{f_n}^T]^T
Multi-robot state vector    | X_s = [^1x_{s_1}^T  ^2x_{s_2}^T]^T      | X = [^1x_1^T  ^2x_2^T]^T
Estimation error covariance | P_s = E[X̃_s X̃_s^T]                     | P = E[X̃ X̃^T]
Aligned state vector        | X_s^a = [^1x_{s_1}^T  ^1x_{s_2}^T]^T    | X^a = [^1x_1^T  ^1x_2^T]^T
Estimation error covariance | P_s^a = E[X̃_s^a (X̃_s^a)^T]             | P^a = E[X̃^a (X̃^a)^T]

and let

P_s^a = E[X̃_s^a (X̃_s^a)^T]  (18)

be the covariance matrix of the corresponding estimation error X̃_s^a. The objective is to show how the matrix P_s^a can be computed from the map transformation t(·) in (15) and the covariance matrix P_s in Eq. (17). Since the roto-translation t(·) relates quantities expressed in the feature space, the map uncertainty update will be performed in the same space. To this purpose, let us introduce two auxiliary covariance matrices

P = E[X̃ X̃^T],  (19)
P^a = E[X̃^a (X̃^a)^T],  (20)

where X̃ is the estimation error of the vector X = [^1x_1^T  ^2x_2^T]^T, and X̃^a is the estimation error of the vector X^a = [^1x_1^T  ^1x_2^T]^T, both expressed in feature space. Matrices P and P^a represent the feature space counterparts of matrices (17) and (18), respectively. The covariance matrix update can be broken down into three steps:
1. project the covariance matrix P_s in the M-Space to the covariance matrix P in the feature space;
2. update the covariance matrix P in the feature space according to the map alignment, thus obtaining P^a;
3. project the covariance matrix P^a back to its counterpart P_s^a in the M-Space.

The first task can be accomplished by resorting to the projection Eqs. (3), which relate the estimation error in the M-Space X̃_s and the estimation error in the feature space X̃. In fact, let ^1x_{f_i,1} be a feature in the map of robot R_1 expressed in the frame ⟨R_1⟩, and let ^2x_{f_j,2} be a feature in the map of R_2 expressed in ⟨R_2⟩. The estimation errors can be projected from the M-Space to the feature space according to (3) as

^1x̃_{f_i,1} = B̃(^1x_{f_i,1}) ^1x̃_{p_i,1},  (21)
^2x̃_{f_j,2} = B̃(^2x_{f_j,2}) ^2x̃_{p_j,2}.  (22)

By stacking all the features, Eqs. (21)–(22) can be rewritten as

X̃ = D̃ X̃_s,  (23)

where D̃ is a block diagonal matrix, defined as

D̃ = blkdiag(^1D̃_1, ^2D̃_2),
^1D̃_1 = blkdiag(I_3, B̃(^1x_{f_1,1}), . . . , B̃(^1x_{f_{n_1},1})),
^2D̃_2 = blkdiag(I_3, B̃(^2x_{f_1,2}), . . . , B̃(^2x_{f_{n_2},2})).

From (23) and the definitions (17) and (19), it follows that the feature space covariance matrix P corresponding to the M-Space covariance matrix P_s is given by

P = D̃ P_s D̃^T.

The second step consists in computing the covariance matrix P^a of the estimation error of the aligned maps, in the feature space. By recalling the definition of the aligned state vector

X^a = \begin{bmatrix} {}^1x_1 \\ {}^1x_2 \end{bmatrix} = \begin{bmatrix} {}^1x_1 \\ t({}^1x_1, {}^2x_2, \bar{z}) \end{bmatrix},

the estimation error X̃^a can be derived as a function of the estimation error X̃. By linearizing Eq. (15), one gets

X̃^a = \begin{bmatrix} I_{m_1} & 0_{m_1 \times m_2} \\ T_1 & T_2 \end{bmatrix} X̃ + \begin{bmatrix} 0_{m_1 \times 3} \\ \Gamma_2 \end{bmatrix} \varepsilon_z,  (24)

where the matrices T_1, T_2 and Γ_2 are the Jacobians of the transformation t(·) with respect to ^1x_1, ^2x_2 and z, respectively, computed at the estimates ^1x̂_1, ^2x̂_2 and at the measurement z. The interested reader is referred to [28] for the analytical expressions of the matrices T_1, T_2 and Γ_2. From (19), (20) and (24), the covariance of the aligned augmented vector is given by

P^a = T P T^T + Γ R_z Γ^T,  (25)

where

T = \begin{bmatrix} I_{m_1} & 0_{m_1 \times m_2} \\ T_1 & T_2 \end{bmatrix}, \qquad \Gamma = \begin{bmatrix} 0_{m_1 \times 3} \\ \Gamma_2 \end{bmatrix},

and the matrix R_z is defined in (14).

The third step can be tackled similarly to what has been done in the first step. By exploiting again Eq. (3), the estimation error of the aligned vector in the M-Space can be written as

X̃_s^a = D X̃^a,  (26)

where

D = blkdiag(^1D_1, ^1D_2),
^1D_1 = blkdiag(I_3, B(^1x_{f_1,1}), . . . , B(^1x_{f_{n_1},1})),
^1D_2 = blkdiag(I_3, B(^1x_{f_1,2}), . . . , B(^1x_{f_{n_2},2}))

and the ^1x_{f_i,2} have been obtained via (16). Finally, from (18) and (26), the covariance matrix P_s^a can be computed as

P_s^a = D P^a D^T,

where P^a is given by (25). The overall covariance matrix update procedure is summarized in Fig. 4.
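The three-step covariance update can be condensed into a few matrix products. The sketch below assumes the Jacobians T_1, T_2, Γ_2 of t(·) are available (their analytical expressions are in [28]) and, for simplicity, that q = 4 for every feature; all names and helpers are ours.

```python
import numpy as np
from scipy.linalg import block_diag

def fuse_covariance(Ps1, Ps2, D1t, D2t, D1, D2, T1, T2, Gamma2, Rz):
    """Compute P^a_s from the two single-robot M-Space covariances (Eqs. (17)-(26)).

    Ps1, Ps2   : M-Space covariances of the two local maps (Eq. (17))
    D1t, D2t   : blkdiag(I3, B~(.), ...) projection blocks of the two maps (step 1)
    D1, D2     : blkdiag(I3, B(.), ...) blocks of the aligned map (step 3)
    T1, T2     : Jacobians of t(.) w.r.t. the two state vectors (shapes (m2, m1), (m2, m2))
    Gamma2, Rz : Jacobian of t(.) w.r.t. z (shape (m2, 3)) and mutual-measurement covariance
    """
    m1 = D1t.shape[0]                     # feature-space dimension of map 1
    m2 = D2t.shape[0]                     # feature-space dimension of map 2
    # step 1: project to the feature space, P = D~ Ps D~^T
    Dt = block_diag(D1t, D2t)
    P = Dt @ block_diag(Ps1, Ps2) @ Dt.T
    # step 2: align, P^a = T P T^T + Gamma Rz Gamma^T  (Eq. (25))
    T = np.block([[np.eye(m1), np.zeros((m1, m2))],
                  [T1,         T2                ]])
    Gamma = np.vstack([np.zeros((m1, 3)), Gamma2])
    Pa = T @ P @ T.T + Gamma @ Rz @ Gamma.T
    # step 3: project back to the M-Space, P^a_s = D P^a D^T
    D = block_diag(D1, D2)
    return D @ Pa @ D.T
```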

4.3. Map fusion

The matching algorithm described in Section 3.1 comes in handy for the map fusion too. As a matter of fact, it is very likely that the areas covered by the two robots before the rendezvous share common regions (e.g., the neighborhood of the meeting point). This implies that a number of features will appear as duplicates in the new vector X^a after the map alignment. In this case, the redundant features must be removed from the map, and the state of the filter, as well as the corresponding covariance matrix, properly reduced. In order to detect duplicate features, for each x_{f_i,1} coming from the map of robot R_1, the feature x_{f_j,2} coming from the map of R_2 with the smallest Mahalanobis distance is searched for. If the two segments are close enough, and overlap significantly, then they are considered the same feature.

It is important to notice that removing duplicate features can actually improve the alignment and the accuracy of the final map. To illustrate how this can be done, suppose that in the map merged by robot R_1 the features x_{f_i,1} and x_{f_j,2} match. The one imported into the map during the fusion, i.e. x_{f_j,2}, is used as a pseudo-measurement of the corresponding feature x_{f_i,1}. The estimate x̂_{f_j,2} is treated like a measurement of the state component x_{f_i,1}. In order to fuse the information contained in both estimates, a correction step of the EKF is performed by processing the pseudo-measurement x̂_{f_j,2}. Finally, the feature x_{f_j,2} is removed from the vector X^a. This procedure is repeated for all matching features between the two aligned maps. It is interesting to note that processing the pseudo-measurement has a twofold effect.


Fig. 4. Basic steps of the overall map uncertainty update procedure.

Clearly, the uncertainty of the estimate resulting from the fusion of x_{f_i,1} and x_{f_j,2} is smaller than the uncertainty of both original estimates. Less obvious is the improvement of the map alignment that is observed after the EKF correction step. Even just a few features present in both maps before the rendezvous can significantly enhance the map registration and reduce the overall uncertainty of the final map. This is due to the correlation existing among the feature estimates in both maps before the fusion, and it is a well-known phenomenon occurring in the SLAM problem [29]. As a result, updating the estimate of a duplicate feature affects the estimates of the entire imported map, which undergoes a sort of rigid roto-translation. Analogously, the uncertainty reduction of the fused feature estimates results in an improvement of the accuracy of the overall map. An example of such a behavior is illustrated in the next section (see Fig. 6).
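A sketch of the pseudo-measurement idea: the imported duplicate is processed as an observation that the two twin features coincide, with a standard Kalman correction, and the duplicate is then dropped from the state. This formulation (a zero-residual constraint on the difference of the two features, in feature space) is our rendering of the step, not the paper's implementation, which runs in the M-Space; the names are ours.

```python
import numpy as np

def fuse_duplicate(X, P, idx_keep, idx_dup, R_dup):
    """Fuse a duplicate feature (slice idx_dup) into its twin (slice idx_keep).

    X, P     : aligned state vector and covariance (feature space, for simplicity)
    idx_keep : indices of x_{fi,1}, the feature kept in the map
    idx_dup  : indices of x_{fj,2}, the imported duplicate used as pseudo-measurement
    R_dup    : extra noise on the pseudo-measurement (can be a zero matrix)
    """
    n = len(X)
    H = np.zeros((len(idx_keep), n))
    H[np.arange(len(idx_keep)), idx_keep] = 1.0    # "measurement" of the kept feature
    H[np.arange(len(idx_dup)), idx_dup] -= 1.0     # minus the duplicate: residual -> 0
    S = H @ P @ H.T + R_dup
    K = P @ H.T @ np.linalg.inv(S)
    X = X + K @ (-(H @ X))                         # drive x_{fi,1} - x_{fj,2} towards 0
    P = (np.eye(n) - K @ H) @ P
    keep = [k for k in range(n) if k not in set(idx_dup)]
    return X[keep], P[np.ix_(keep, keep)]
```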

5. Results

The multi-robot SLAM algorithm has been extensively tested both in simulations and in real-world experiments. The former allow one to quantify the performance of the estimation algorithm, in terms of map accuracy, robot localization and consistency of the estimates. The latter are used to assess the viability of the proposed approach in the presence of a number of uncertainty sources and unmodeled phenomena.

5.1. Simulations

Initially, the proposed technique has been tested in a simulated scenario, with the aid of a custom MATLAB simulator developed on purpose. Robots are modeled as unicycles, and they are assumed to be equipped with a laser rangefinder. In this case, the robot motion model (4) takes on the form

x_R(k + 1) = x_R(k) + ∆T (v(k) + ϵ_v(k)) cos(θ_R(k)),  (27)
y_R(k + 1) = y_R(k) + ∆T (v(k) + ϵ_v(k)) sin(θ_R(k)),  (28)
θ_R(k + 1) = θ_R(k) + ∆T (ω(k) + ϵ_ω(k)),  (29)

where ∆T is the sampling time and ϵ_v(k) and ϵ_ω(k) model the noise affecting the linear and angular speed. Raw laser data, consisting of range and bearing measurements taken from planar scans of the robot surroundings, are synthetically generated by a ray tracing algorithm applied to a CAD map of the environment. The laser field of view is limited to 180°, with an angular resolution of 1° and a maximum measurable distance of 8 m. The simulations are carried out in a simplified map resembling the ''S. Niccolò'' building (see thin gray line in Fig. 5), which hosts the Department of Information Engineering of the University of Siena (about 3000 m²). All the noise covariances have been tuned according to the mobile robot Pioneer 3AT and its sensory equipment.
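A direct transcription of (27)–(29), as one might use to propagate the simulated robots (noise samples drawn at each step; names ours).

```python
import numpy as np

def unicycle_step(pose, v, w, dT, rng, sigma_v, sigma_w):
    """One Euler step of the noisy unicycle model (27)-(29)."""
    x, y, theta = pose
    v_n = v + rng.normal(0.0, sigma_v)     # epsilon_v(k)
    w_n = w + rng.normal(0.0, sigma_w)     # epsilon_omega(k)
    return np.array([x + dT * v_n * np.cos(theta),
                     y + dT * v_n * np.sin(theta),
                     theta + dT * w_n])
```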


Fig. 5. Simulation of the multi-robot SLAM algorithm: CAD map of the environment (thin gray line), path traveled by robot R_1 (true: dash–dotted; estimated: solid), final map built by R_1 at the end of the simulation (blue segments), and 99% confidence ellipses of the identified segment endpoints (pink ellipses). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

The covariance matrix Q of the robot motion noise is diagonal, and the standard deviations of the velocity errors are set to be proportional to the absolute value of the speed, i.e. Q(k) = diag(σ_v^2(k), σ_ω^2(k)), where

σ_v(k) = 0.03 |v(k)| (m/s),
σ_ω(k) = 0.05 |ω(k)| + 0.0017 (rad/s).

The covariance matrix of the measurement noise associated to each raw range and bearing laser reading is R_l = diag(σ_r^2, σ_b^2), where σ_r = 0.003 (m) and σ_b = 0.003 (rad). The measurements m(k) in (8) are extracted from the raw laser data, together with the associated error covariance matrix R_m(k), through a segmentation procedure [8]. In order to determine whether a measurement m(k) refers to a line already present in the map or it is a newly detected feature, the data-association algorithm described in Section 3.1 is employed. Moreover, to avoid including spurious features in the map, new lines are first inserted in a tentative list until they are deemed reliable enough. The entries of the robot-to-robot measurement noise covariance matrix R_z in (14) are set to σ_η = 0.02 (m), σ_{^1φ_2} = 0.07 (rad) and σ_{^2φ_1} = 0.07 (rad). Notice that the robot-to-robot observations are much less accurate than the raw laser readings, as actually occurs in real-world experiments.
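The noise settings listed above, collected in one place (the values are from the text; the dictionary layout and names are ours).

```python
# Simulation noise parameters (Section 5.1)
SIM_PARAMS = {
    "laser": {"fov_deg": 180, "resolution_deg": 1, "max_range_m": 8.0,
              "sigma_r_m": 0.003, "sigma_b_rad": 0.003},
    "odometry": {"sigma_v": lambda v: 0.03 * abs(v),                 # m/s
                 "sigma_w": lambda w: 0.05 * abs(w) + 0.0017},       # rad/s
    "robot_to_robot": {"sigma_eta_m": 0.02,
                       "sigma_phi12_rad": 0.07, "sigma_phi21_rad": 0.07},
}
```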



Fig. 6. Incorporating the map built by robot R_2 (red) into the map of robot R_1 (blue): (a) the resulting map after the alignment procedure (Sections 4.1 and 4.2) and (b) the merged map after removing duplicate features (Section 4.3). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Figs. 5–7 report the results of a typical run. The robots first explore the area around two different courtyards (named A and B in Fig. 5), and then meet to fuse the two maps somewhere in the middle between the two courtyards. Then, the experiment goes on with each robot exploring the area previously covered by the other robot. The information gained during the map fusion allows each robot to stay well localized along the entire path.

The overall map fusion procedure is summarized in Fig. 6. Fig. 6(a) shows the map of robot R_1 (bottom, blue) and that of robot R_2 transformed into the frame ⟨R_1⟩ (top, red) at the rendezvous. Notice that the imported map is misaligned due to the uncertainty affecting the robot-to-robot measurements. This is reflected in the large confidence ellipses of the segment endpoints present in the map of R_2, extracted from the matrix P^a in (20). Common features present in both maps before the fusion, corresponding to areas explored by both agents, may result in duplicate features in the merged map (see the segments between the two courtyards in Fig. 6). As described in Section 4.3, possible duplicate features detected in the merged map after the alignment can be used as constraints to improve the map fusion. The resulting map after removing duplicate features is shown in Fig. 6(b). Notice how the two courtyards now appear more aligned (as they really are), and how the uncertainty of the whole map is significantly reduced, as shown by the smaller size of the confidence ellipses. In this regard, it is worth remarking that the estimates of the features present only in the imported map (in this case, the features of the map of robot R_2) are still affected by larger uncertainty, due to the noisy robot mutual measurements. As a consequence, the merged map will be made up of estimates whose average uncertainty can be larger than that of the original single-robot map. In a sense, this is the price to pay for acquiring information about unvisited regions of the environment, and hence for shortening the exploration completion time. The x, y and θ localization errors of robot R_1 are shown in Fig. 7, along with the 3σ confidence intervals. It can be observed that most of the time the estimates are consistent.

A campaign of simulations has been carried out to evaluate the performance of the proposed multi-robot SLAM algorithm.

Fig. 7. The robot's x_R, y_R and θ_R localization errors, along with the corresponding 3σ confidence intervals (robot R_1).

The environment is the same as that depicted in Fig. 5. In Tables 2 and 3 the results of 100 simulation runs are reported. Columns S1 and S2 refer to single-robot SLAM performed by robots R_1 and R_2, respectively. Columns M1 and M2 correspond to multi-robot SLAM by R_1 and R_2, respectively. Columns MH1 and MH2 in Table 3 refer to the map available to robots R_1 and R_2 at half time of the experiment (after the rendezvous and map fusion). The tables report the average absolute errors of the robot and feature estimates.

It can be observed that the estimation errors in the single- and multi-robot cases are comparable, the robot pose estimates being slightly better in the single-robot case.



Fig. 8. Effect of incorrect feature matching: (a) single-robot map, (b) multi-robot map.

Table 2
Robot pose estimation error: average value over 100 simulation runs.

                         S1     S2     M1     M2
Position error (m)       0.24   0.19   0.24   0.23
Orientation error (°)    0.50   0.37   0.50   0.45

Table 3
Map estimation error and number of features: average value over 100 simulation runs.

                               S1     S2     M1     M2     MH1    MH2
Line parameter α error (°)     0.48   0.36   0.45   0.43   0.42   0.38
Line parameter ρ error (m)     0.09   0.07   0.11   0.09   0.10   0.08
Endpoint error (m)             0.25   0.17   0.25   0.22   0.25   0.19
Number of features             182    183    205    201    178    179
Number of endpoints            84     92     131    122    121    121

In particular, the map cooperatively built by the multi-robot SLAM approach shows, at half time of the experiments, the same level of accuracy as the map provided by the single-robot SLAM at the end of the experiment. This confirms the viability of the proposed multi-robot approach. Notice that the absolute values of the errors are quite small if compared to the size and the complexity of the environment. Table 3 also shows that the multi-robot approach is able to detect a larger number of features (and in particular a larger number of segment endpoints), which results in a more detailed map, without worsening the average accuracy of the map itself.

The results shown in the previous tables refer to experiments in which the number of duplicate features is usually very small and the estimated map is in good agreement with the true map of the environment. This is due to the fact that no incorrect line feature matching has occurred, either in the single-robot or in the multi-robot case. However, in the simulation campaign several odd runs have been experienced, showing a much larger map estimation error in the single-robot SLAM. It has been checked that this behavior is due to incorrect feature associations, which eventually lead to large map errors. An example is shown in Fig. 8(a), where the remarkable misalignment in the map produced by robot R_1 is due to the fact that a newly detected feature has been wrongly associated to a different feature already present in the map. This does not occur in the multi-robot case (see Fig. 8(b)), because the same feature had already been detected by robot R_2 before the rendezvous and hence has been inherited by robot R_1 during the map fusion. It is well known that correct data association is crucial for a successful map construction in feature-based SLAM. In this respect, one may claim that using multiple robots helps the matching mechanism avoid incorrect associations, thus preserving a good quality of the overall map.

5.2. Experiments

A set of experiments with real robots has also been performed, to validate the multi-robot SLAM algorithm in a real-world setup. To this purpose, two Pioneer 3AT mobile robots equipped with a laser rangefinder have been used [30]. All the experiments have been carried out on the second floor of the S. Niccolò building, whose simplified CAD map is shown in Fig. 5. The robot motion model adopted in the EKF is given by Eqs. (27)–(29), and all the parameters of the filter are set to the same values used in the simulations described before. Also the path followed by each vehicle is similar to that chosen for the simulations, i.e. each robot explores a different courtyard and then merges its map with that of the other agent near the middle of the building. Fig. 9 shows a comparison of the multi-robot algorithm with respect to the single-robot case, in terms of the quality of the final map built. Specifically, the map depicted in Fig. 9(a) results from the single-robot SLAM performed by robot R_1, whereas in Fig. 9(b) the final map yielded by the multi-robot SLAM algorithm is shown. Although a detailed quantitative comparison is not possible, since the ground truth of the robots and an accurate map of the building are not available, a look at Fig. 9 suggests that the multi-robot algorithm is actually able to provide an improvement to the built map, also in real environments and with real robots.



Fig. 9. Final map at the end of an experiment in the S. Niccolò building: (a) single-robot SLAM, (b) multi-robot SLAM.

In fact, in spite of a number of uncertainty sources present in reality and often neglected in simulations, the employment of two coordinated robots results in a more reliable map, as testified by the better alignment of the two courtyards. The experimental results obtained are in good agreement with the simulations previously performed. In the multi-vehicle case, the average area of the 99% confidence ellipses of the estimates of the segment endpoints is smaller than 5 cm². At the end of the experiment, the uncertainty affecting the estimate of the robot pose is smaller than 10 cm and 0.5° for the x, y and θ coordinates, respectively.

Besides the advantage provided by the multi-robot SLAM algorithm of being able to build a map of the environment much faster than its single-robot counterpart, an additional benefit has been observed during the tests. As is well known, closing large loops is a challenging task when it comes to SLAM. As a matter of fact, in several experiments the single-robot SLAM algorithm proved unable to close the largest loop around the two courtyards, a rectangular path about 200 m long. In this respect, the multi-robot SLAM technique can benefit from the information on unexplored areas shared during the map fusion stage. When the two robots start exploring different courtyards and then merge their maps at the rendezvous, the map fusion has the effect of ''shortening the length of the loop'', since the imported map conveys information about the remaining part of the loop. This behavior observed in the experiments is actually in agreement with what has been experienced in simulation runs like the one depicted in Fig. 8.

5.3. Statistics of repeated experiments

A multi-robot experiment like the one described before has been repeated 10 times, to extract some statistics on the performance in a real-world setup. Both the robot localization capability and the map accuracy have been taken into account. Concerning the robot localization, the standard deviations of the robot pose estimates have been computed, averaged over the whole path and over all the 10 runs. Concerning the map, the following quantities have been computed: the standard deviations of the estimates of the line parameters α and ρ (see Fig. 1); the area of the 3σ confidence ellipses of the estimates of the segment endpoints; the number of features present in the map. All the quantities have been averaged over all the features present in the final map and over all the 10 runs. The results are summarized in Tables 4 and 5.

Table 4
Robot pose uncertainty: sample statistics from real experiments (10 runs).

Robot pose uncertainty

                               S1      S2      M1      M2      S       M
xR standard deviation (m)      0.20    0.13    0.22    0.18    0.17    0.20
yR standard deviation (m)      0.18    0.16    0.20    0.18    0.17    0.19
θR standard deviation (°)      0.55    0.51    0.70    0.64    0.53    0.67

There, for each parameter two different scenarios are considered: single-robot SLAM (columns S1 and S2, corresponding to robots R1 and R2, respectively), and multi-robot SLAM (columns M1 and M2, corresponding to robots R1 and R2, respectively). Columns S and M report the average values for the single-robot case and the multi-robot case, respectively.

By inspecting the table entries, a rough comparison between the single-robot and the multi-robot SLAM algorithms can be made. As far as the robot localization is concerned (Table 4), the two algorithms perform essentially the same, with the single-robot case being slightly better. Again, this is in good agreement with the simulation campaign reported in Section 5.1. More interesting is the comparison on the map building process. The expected quality of the map in both cases is very similar (compare the last two columns of Table 5), with a small increase of the confidence ellipses in the multi-robot case. Nonetheless, in light of the considerations made in Sections 5.1 and 5.2, this means that the many advantages brought in by the multi-robot framework come at little or almost no cost. Despite the absence of information on the relative initial robot poses, the map fusion technique based on robot mutual measurements is able to effectively share the whole information available at the time of rendezvous between the two robots. The effectiveness of the map fusion is also confirmed by the number of features present in the final map (fourth row of Table 5). The increase in the multi-robot scenario (174 vs. 136) is due to the fact that, in general, at the end of an experiment the total area covered with two robots is larger than that explored with a single one. At the same time, the number of features in the multi-robot case is small enough to conclude that the map imported at the rendezvous is correctly exploited when the robots reach the corresponding region of the environment, i.e. the region is recognized as already visited and no redundant features are inserted in the map.



Table 5
Map uncertainty: sample statistics from real experiments (10 runs).

Map uncertainty

                                            S1      S2      M1      M2      S       M
Line parameter α STD: σα (°)                0.54    0.52    0.53    0.51    0.53    0.52
Line parameter ρ STD: σρ (m)                0.29    0.24    0.28    0.25    0.27    0.27
Endpoint 3σ confidence ellipses (cm²)       4.31    2.84    3.32    5.28    3.58    4.30
Average number of features                  139     132     171     178     136     174

6. Conclusions and future work

A multi-robot SLAM algorithm for linear features has been presented. The proposed solution takes advantage of the M-Space framework for efficiently parameterizing lines and segments, and exploits mutual robot measurements to merge local maps during a rendezvous. Among the benefits of the adopted representation, there is the possibility of initializing feature estimates early and of exploiting measurements of partially observed features. The map merging technique, which does not require any a priori knowledge of the initial relative robot pose, has been suitably arranged to fit the specific needs of the chosen feature representation. In particular, the uncertainty of the merged map has been analytically computed in order to be used by the single-robot SLAM algorithm later on. Simulations and experimental tests have shown that the proposed approach enjoys the speed and robustness characteristics typical of multi-robot architectures, while at the same time preserving the quality of the final map.

Several aspects are currently under investigation. To fully exploit the versatility of the M-Space representation paradigm, the map description is going to be enriched by including new kinds of features, like corners or poles. Simulations and experimental tests involving larger teams of robots are planned to evaluate the behavior of the map merging scheme after multiple map fusions. In this respect, an open issue is how to avoid multiple processing of the same information, due to repeated map fusions between the same robots, which clearly affects the consistency of the estimates. Finally, more sophisticated data association algorithms, like the joint compatibility test, could bring a significant improvement of the merged map quality by identifying a larger number of duplicate features.

References

[1] J.A. Castellanos, J.D. Tardós, Mobile Robot Localization and Map Building: A Multisensor Fusion Approach, Kluwer Academic Publishers, Boston, 1999.

[2] S. Thrun, Robotic mapping: a survey, in: G. Lakemeyer, B. Nebel (Eds.), Exploring Artificial Intelligence in the New Millennium, Morgan Kaufmann, 2002, pp. 1–35.

[3] S. Thrun, W. Burgard, D. Fox, Probabilistic Robotics, MIT Press, 2005.

[4] H. Durrant-Whyte, T. Bailey, Simultaneous localization and mapping: part I, IEEE Robotics & Automation Magazine 13 (2) (2006) 99–110.

[5] T. Bailey, H. Durrant-Whyte, Simultaneous localization and mapping (SLAM): part II, IEEE Robotics & Automation Magazine 13 (3) (2006) 108–117.

[6] U. Frese, A discussion of simultaneous localization and mapping, Autonomous Robots 20 (1) (2006) 25–42.

[7] A.T. Pfister, S.I. Roumeliotis, J.W. Burdick, Weighted line fitting algorithms for mobile robot map building and efficient data representation, in: Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taiwan, 2003, pp. 1304–1311.

[8] A. Garulli, A. Giannitrapani, A. Rossi, A. Vicino, Mobile robot SLAM for line-based environment representation, in: Proceedings of the 44th IEEE Conference on Decision and Control and 2005 European Control Conference, 2005, pp. 2041–2046.

[9] J. Folkesson, P. Jensfelt, H. Christensen, The M-space feature representation for SLAM, IEEE Transactions on Robotics 23 (5) (2007) 1024–1035.

[10] M. Bosse, P. Newman, J. Leonard, S. Teller, Simultaneous localization and map building in large-scale cyclic environments using the Atlas framework, International Journal of Robotics Research 23 (12) (2004) 1113.

[11] C. Estrada, J. Neira, J. Tardós, Hierarchical SLAM: real-time accurate mapping of large environments, IEEE Transactions on Robotics 21 (4) (2005) 588–596.

[12] B. Williams, M. Cummins, J. Neira, P. Newman, I. Reid, J. Tardós, A comparison of loop closing techniques in monocular SLAM, Robotics and Autonomous Systems 57 (12) (2009) 1188–1197.

[13] J. Fenwick, P. Newman, J. Leonard, Cooperative concurrent mapping and localization, in: Proceedings of the IEEE International Conference on Robotics and Automation, 2002, pp. 1810–1817.

[14] S. Williams, G. Dissanayake, H. Durrant-Whyte, Towards multi-vehicle simultaneous localisation and mapping, in: Proceedings of the IEEE International Conference on Robotics and Automation, 2002, pp. 2743–2748.

[15] S. Thrun, Y. Liu, Multi-robot SLAM with sparse extended information filters, in: P. Dario, R. Chatila (Eds.), Robotics Research, Springer Tracts in Advanced Robotics, vol. 15, Springer, Berlin, Heidelberg, 2005, pp. 254–266.

[16] A. Howard, Multi-robot simultaneous localization and mapping using particle filters, International Journal of Robotics Research 25 (12) (2006) 1243–1256.

[17] M. Di Marco, A. Garulli, A. Giannitrapani, A. Vicino, Simultaneous localization and map building for a team of cooperating robots: a set membership approach, IEEE Transactions on Robotics and Automation 19 (2) (2003) 238–249.

[18] R. Reid, T. Braunl, Large-scale multi-robot mapping in MAGIC 2010, in: Proceedings of the 2011 IEEE Conference on Robotics, Automation and Mechatronics, RAM, 2011, pp. 239–244.

[19] X. Zhou, S. Roumeliotis, Multi-robot SLAM with unknown initial correspondence: the robot rendezvous case, in: Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 1785–1792.

[20] S. Carpin, Fast and accurate map merging for multi-robot systems, Autonomous Robots 25 (3) (2008) 305–316.

[21] H. Lee, S. Lee, M. Choi, B. Lee, Probabilistic map merging for multi-robot RBPF-SLAM with unknown initial poses, Robotica, available on CJO 2011, http://dx.doi.org/10.1017/S026357471100049X.

[22] H. Chang, C. Lee, Y. Hu, Y. Lu, Multi-robot SLAM with topological/metric maps, in: Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 1467–1472.

[23] T. Vidal-Calleja, C. Berger, S. Lacroix, Event-driven loop closure in multi-robot mapping, in: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009, pp. 1535–1540.

[24] T. Vidal-Calleja, C. Berger, J. Solà, S. Lacroix, Large scale multiple robot visual mapping with heterogeneous landmarks in semi-structured terrain, Robotics and Autonomous Systems 59 (9) (2011) 654–674.

[25] A. Gil, Ó. Reinoso, M. Ballesta, M. Juliá, Multi-robot visual SLAM using a Rao–Blackwellized particle filter, Robotics and Autonomous Systems 58 (1) (2010) 68–80.

[26] D. Benedettelli, A. Garulli, A. Giannitrapani, Multi-robot SLAM using M-space feature representation, in: Proceedings of the 49th IEEE Conference on Decision and Control, 2010, pp. 3826–3831.

[27] Y. Bar-Shalom, T. Fortmann, Tracking and Data Association, Academic Press, 1988.

[28] D. Benedettelli, Multi-robot SLAM using M-space feature representation, Master's Thesis, Faculty of Engineering, University of Siena, 2009. Available at: http://robotics.benedettelli.com/robots/publications/BenedettelliMSc.pdf.

[29] M. Dissanayake, P. Newman, S. Clark, H. Durrant-Whyte, M. Csorba, A solution to the simultaneous localization and map building (SLAM) problem, IEEE Transactions on Robotics and Automation 17 (3) (2001) 229–241.

[30] Adept MobileRobots, Pioneer 3-AT, 2011. http://www.mobilerobots.com.

Daniele Benedettelli was born in Grosseto, Italy, in 1984. He obtained a B.Sc. degree in Computer Engineering in 2006 from the University of Siena. He received a M.Sc. degree in Robotics and Automation from the University of Siena. In 2006, he was selected by The LEGO Company as a member of the MINDSTORMS Developer Program, and in 2007 as one of MINDSTORMS' Community Partners.



Andrea Garulli was born in Bologna, Italy, in 1968. He received the Laurea in Electronic Engineering from the Università di Firenze in 1993, and the Ph.D. in System Engineering from the Università di Bologna in 1997. In 1996 he joined the Dipartimento di Ingegneria dell'Informazione of the Università di Siena, where he is currently Professor of Control Systems. He has been a member of the Conference Editorial Board of the IEEE Control Systems Society and Associate Editor of the IEEE Transactions on Automatic Control. He currently serves as Associate Editor for the Journal of Control Science and Engineering. He is the author of more than 140 technical publications and co-editor of the books ‘‘Robustness in Identification and Control’’, Springer 1999, and ‘‘Positive Polynomials in Control’’, Springer 2005. His present research interests include system identification, robust estimation and filtering, robust control, mobile robotics and autonomous navigation.

Antonio Giannitrapani was born in Salerno, Italy, in 1975. He received the Laurea degree in Computer Engineering in 2000, and the Ph.D. in Control Systems Engineering in 2004, both from the University of Siena. In 2005 he joined the Dipartimento di Ingegneria dell'Informazione of the same university, where he is currently Assistant Professor of Robotics. His research interests include localization and map building for mobile robots, collective motion for teams of autonomous agents, nonlinear estimation techniques for autonomous navigation and mobile haptic interfaces.