Transcript
Page 1: Extending the Limits of Feature-Based SLAM With B-Splines

IEEE TRANSACTIONS ON ROBOTICS, VOL. 25, NO. 2, APRIL 2009 353

Extending the Limits of Feature-Based SLAM With B-Splines

Luis Pedraza, Member, IEEE, Diego Rodriguez-Losada, Member, IEEE, Fernando Matía, Member, IEEE, Gamini Dissanayake, Member, IEEE, and Jaime Valls Miro

Abstract—This paper describes a simultaneous localization and mapping (SLAM) algorithm for use in unstructured environments that is effective regardless of the geometric complexity of the environment. Features are described using B-splines as the modeling tool, and the set of control points defining their shape is used to form a complete and compact description of the environment, thus making it feasible to use an extended Kalman filter (EKF) based SLAM algorithm. This method is the first known EKF-SLAM implementation capable of describing general free-form features in a parametric manner. Efficient strategies for computing the relevant Jacobians, performing data association, initialization, and map enlargement are presented. The algorithms are evaluated for accuracy and consistency using computer simulations, and for effectiveness using experimental data gathered from different real environments.

Index Terms—Kalman filtering, mobile robots, simultaneous localization and mapping (SLAM), spline functions.

I. INTRODUCTION

One of the current key challenges of the simultaneous localization and mapping (SLAM) problem is the development of appropriate parameterizations to represent environments of increasing complexity. While a substantial body of literature exists on methods for representing unstructured environments, most of them are not suitable for use in one of the common frameworks developed during the past decades, such as the solution based on an extended Kalman filter (EKF). For example, the use of occupancy grids [1], based on dividing the environment into small cells of predefined size and classifying them as occupied or not, and its many variants, would result in an impracticably large state vector. The importance and effectiveness of these techniques are undeniable, but we propose a different approach to the problem.

Much of the early SLAM work relied on simple point features for describing the environment [2]–[4]. While this approach

Manuscript received December 17, 2007. First published February 18, 2009; current version published April 3, 2009. This paper was recommended for publication by Associate Editor G. Dudek and Editor L. Parker upon evaluation of the reviewers' comments. This work was supported in part by the Dirección General de Investigación of Spain under Project Robonauta DPI2007-66846-C02-01. The work of L. Pedraza was supported by the "Consejería de Educación de la Comunidad de Madrid" through a Ph.D. scholarship.

L. Pedraza is with the Intelligent Control Group, Universidad Politécnica de Madrid, E-28006 Madrid, Spain (e-mail: [email protected]).

D. Rodriguez-Losada and F. Matía are with the Intelligent Control Group, Universidad Politécnica de Madrid, E-28006 Madrid, Spain.

G. Dissanayake and J. V. Miro are with the School of Electrical, Mechanical, and Mechatronic Systems, University of Technology, Sydney, N.S.W., Australia.

The paper has supplementary downloadable material available at http://ieeexplore.ieee.org, provided by the authors. These videos show two experiments of SLAM with B-splines with data gathered from real environments. This material is 9.5 MB and 112 MB in size.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TRO.2009.2013496

simplifies the formulation of the SLAM estimator, two main disadvantages arise when relying solely on this representation. The first and obvious problem emerges when the environment does not have sufficient structure to robustly extract point features; for example, in an underground mine [5]. The second and more significant issue is the fact that only a small fraction of the information available from popular sensors, such as laser range finders, is exploited. Much of the data that do not correspond to the expected features are discarded.

A number of SLAM algorithms that do not use a specific geometric model have emerged recently. Here, the complete robot trajectory is present in the state vector, and the raw sensor information from different robot poses is processed to obtain an accurate relationship between these poses [6]. Equivalent versions of FastSLAM that exploit complete laser scans have also been presented [7], [8]. While these strategies have been successfully used to generate accurate visualizations of complex structures and detailed maps of the environments, they cannot exploit the inherent information gain available in feature-based SLAM, where map quality is increased thanks to the precise mathematical description of features, stochastic information relating all of them is available, and the interpretability of the map is increased.

Several strategies for incorporating a larger fraction of the information gathered by the sensors, using more complex geometric primitives, have recently been reported. The use of line segments [9]–[11] and polylines [12], while promising, raises some issues related to the consistency of the solution and requires further work. While segment-based solutions have been successful in typical indoor environments, the increasing presence of curved geometries, more popular every day in modern constructions, can create significant problems. In particular, attempting to interpret information using an incorrect geometric model is one of the major causes of failure of many estimation algorithms. Some efforts have been made when circle features are available [13], but that is still a major simplification. Thus, more generic representations of the environment can potentially improve the robustness, effectiveness, and reliability of SLAM implementations.

A hybrid strategy consists of modeling the region surrounding a point feature using a shape model in a coordinate frame attached to the feature [14], allowing the point feature map to be updated while the shape models are only used to increase the information content of the observation. A full parameterization that captures all the information available in the observations, able to be updated in a statistically consistent manner, remains an interesting challenge.

Recently, we presented BS-SLAM, a novel feature-based solution to the SLAM problem, based on the utilization of


Page 2: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 1. Examples of cubic splines (κ = 4) and their corresponding basis functions using (a) a clamped knot vector (Ξ_c: ξ_0 = · · · = ξ_3 ≤ · · · ≤ ξ_6 = · · · = ξ_9) and (b) an unclamped knot vector (Ξ_u: ξ_0 ≤ · · · ≤ ξ_9). Knot locations are represented as circles and control points as squares. The dashed blue line is the spline control polygon, which joins all the control points.

B-spline curves to represent the boundary between occupied and unoccupied regions in complex environments [15]. B-splines provide naturally compact descriptions for both straight and curved geometries, consisting of a set of control points defining their shape. These points, grouped in a state vector, which conveys no angular information for the static elements of the map, fully describe the environment. Computationally efficient strategies for: 1) initializing and extending the state vector; 2) formulating a suitable observation equation; and 3) evaluating appropriate Jacobians for easy implementation of the EKF equations, were introduced in [15].

This paper extends those results by presenting additional experimental evidence, using both real and simulated data, with emphasis on some critical aspects: 1) a study of the impact of the defining elements of B-spline curves on the expected performance of the algorithm; 2) some general guidelines for adequately choosing the parameters involved in the segmentation process; and 3) an in-depth consistency analysis, showing the limitations of the algorithm, as occurs with any other EKF-SLAM implementation [16].

The paper is organized as follows. Section II introduces fundamental concepts regarding the extensive theory of B-spline curves. Section III shows how these powerful tools naturally fit into the EKF-SLAM framework. Finally, extensive experimental results and conclusions are presented in Sections IV and V.

II. BASIC THEORY OF SPLINES

In this section, some fundamental concepts of B-spline theory are presented. The term spline refers to a wide class of functions that are used in a range of technical and scientific applications where interpolation or smoothing of noisy data in a flexible yet computationally efficient way is required. A spline of order κ (degree κ − 1) is a piecewise polynomial curve, i.e., a curve composed of several polynomial pieces of degree κ − 1. Their most common representation is based on the linear combination of a special type of basis functions, known as B-spline basis functions.

A. B-Splines Definition

Letting s(t) be the position vector along an m-dimensional curve as a function of the parameter t ∈ ℝ, a spline curve of order κ, with control points x_i ∈ ℝ^m (i = 0, . . . , n) and knot vector Ξ = {ξ_0, . . . , ξ_{n+κ}}, can be expressed as

$$ \mathbf{s}(t) = \sum_{i=0}^{n} \mathbf{x}_i\, \beta_{i,\kappa}(t) \qquad (1) $$

where β_{i,κ}(t) are the normalized B-spline basis functions of order κ. These functions are defined by the Cox-de Boor recursion formulas [17], [18]

$$ \beta_{i,1}(t) = \begin{cases} 1, & \text{if } \xi_i \le t < \xi_{i+1} \\ 0, & \text{otherwise} \end{cases} \qquad (2) $$

and for all κ > 1

$$ \beta_{i,\kappa}(t) = \frac{t - \xi_i}{\xi_{i+\kappa-1} - \xi_i}\, \beta_{i,\kappa-1}(t) + \frac{\xi_{i+\kappa} - t}{\xi_{i+\kappa} - \xi_{i+1}}\, \beta_{i+1,\kappa-1}(t). \qquad (3) $$

The knot vector Ξ is any nondecreasing sequence of real numbers (ξ_i ≤ ξ_{i+1} for i = 0, . . . , n + κ − 1), and its structure can fall into two different categories: clamped, when the multiplicity of the extreme knot values is equal to the order κ of the curve, and unclamped when this does not occur [17], [19]. When clamped knot vectors are used, the first and last control points are coincident, respectively, with the beginning and end of the spline curve, as Fig. 1 illustrates.
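A minimal sketch of the recursion (2), (3) and of the curve evaluation (1) in Python (with NumPy) is given below. The function names and the convention of treating a vanishing denominator as a zero term (the usual rule for repeated knots) are our own choices, not taken from the paper.

import numpy as np

def bspline_basis(i, order, t, knots):
    """Normalized B-spline basis function beta_{i,order}(t), Cox-de Boor
    recursion (2), (3). Terms with a zero denominator are taken as zero."""
    if order == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left_den = knots[i + order - 1] - knots[i]
    right_den = knots[i + order] - knots[i + 1]
    left = 0.0 if left_den == 0.0 else \
        (t - knots[i]) / left_den * bspline_basis(i, order - 1, t, knots)
    right = 0.0 if right_den == 0.0 else \
        (knots[i + order] - t) / right_den * bspline_basis(i + 1, order - 1, t, knots)
    return left + right

def spline_point(ctrl, order, t, knots):
    """Evaluate s(t) = sum_i x_i beta_{i,order}(t), eq. (1).
    ctrl is an (n+1, m) array of control points."""
    ctrl = np.asarray(ctrl, dtype=float)
    weights = np.array([bspline_basis(i, order, t, knots) for i in range(ctrl.shape[0])])
    return weights @ ctrl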

B. Properties of Spline Curves

B-splines exhibit many interesting mathematical and geometrical properties. Because of their special significance in the following discussion, some of them are enumerated here.

1) The maximum order of the curve equals the number of control points. Equivalently, the minimum number of control points equals the order of the curve. This means, for example, that a cubic B-spline requires at least four control points to be defined.

Page 3: Extending the Limits of Feature-Based SLAM With B-Splines


2) The curve generally follows the shape of the control polygon.

3) Any affine transformation is applied to the curve by applying it to the control polygon.

4) Each basis function β_{i,κ}(t) is a piecewise polynomial of order κ with breaks ξ_i, . . . , ξ_{i+κ} that vanishes outside the interval [ξ_i, ξ_{i+κ}) and is positive on the interior of that interval

$$ \beta_{i,\kappa}(t) > 0 \iff \xi_i \le t < \xi_{i+\kappa}. \qquad (4) $$

5) As a consequence of property 4), the value of s(t) at a given parameter location ξ_j ≤ t ≤ ξ_{j+1}, for some j ∈ {κ − 1, . . . , n}, depends only on κ of the coefficients

$$ \mathbf{s}(t) = \sum_{i=j-\kappa+1}^{j} \mathbf{x}_i\, \beta_{i,\kappa}(t). \qquad (5) $$

6) When clamped knot vectors are used, the sum of all the B-spline basis functions for any value of the parameter t is 1, i.e.,

$$ \sum_{i=0}^{n} \beta_{i,\kappa}(t) = 1. \qquad (6) $$

7) The derivative of a spline of order κ is a spline of order κ − 1. The control points of the derived function can be obtained by differencing the original ones [18]

$$ \frac{d\mathbf{s}(t)}{dt} = \mathbf{s}'(t) = (\kappa - 1) \sum_{i=0}^{n} \frac{\mathbf{x}_i - \mathbf{x}_{i-1}}{\xi_{i+\kappa-1} - \xi_i}\, \beta_{i,\kappa-1}(t). \qquad (7) $$

For further information and justification of these properties, see [17]–[20].

C. Curve Fitting

One of the greatest appeals of spline curves is their ability to approximate noise-contaminated data. In this section, we consider the problem of obtaining a spline curve that fits a set of data points d_j, j = 0, . . . , m. If a data point lies on the spline curve, then (1) must be satisfied

$$ \mathbf{d}_j = \beta_{0,\kappa}(t_j)\,\mathbf{x}_0 + \cdots + \beta_{n,\kappa}(t_j)\,\mathbf{x}_n, \qquad j = 0, \ldots, m. $$

This system of equations can be more compactly written as

$$ \mathbf{d} = \mathbf{B}\mathbf{x} $$
$$ \mathbf{d} = [\mathbf{d}_0\ \mathbf{d}_1\ \cdots\ \mathbf{d}_m]^T, \qquad \mathbf{x} = [\mathbf{x}_0\ \mathbf{x}_1\ \cdots\ \mathbf{x}_n]^T $$
$$ \mathbf{B} = \begin{bmatrix} \beta_{0,\kappa}(t_0) & \cdots & \beta_{n,\kappa}(t_0) \\ \vdots & \ddots & \vdots \\ \beta_{0,\kappa}(t_m) & \cdots & \beta_{n,\kappa}(t_m) \end{bmatrix}. \qquad (8) $$

Matrix B is usually referred to as the collocation matrix. Each of its rows has at most κ nonzero values [recall property 4)]. The parameter value t_j defines the position of each data point d_j along the curve, and can be approximated by the cumulative chord length between consecutive data points

$$ t_0 = 0, \qquad t_j = \sum_{s=1}^{j} \|\mathbf{d}_s - \mathbf{d}_{s-1}\|, \quad j \ge 1 \qquad (9) $$

where ‖·‖ is the Euclidean norm. The total length of the curve is

$$ \ell = \sum_{s=1}^{m} \|\mathbf{d}_s - \mathbf{d}_{s-1}\| \qquad (10) $$

which is taken as the maximum value of the knot vector.

When fitting noisy data acquired by a laser range finder, the most general case occurs for 2 ≤ κ ≤ n + 1 < m + 1; the problem is overspecified, and a least-squares solution can be obtained using the pseudoinverse of B

$$ \mathbf{x} = \left[\mathbf{B}^T\mathbf{B}\right]^{-1}\mathbf{B}^T\mathbf{d} = \boldsymbol{\Phi}\,\mathbf{d}. \qquad (11) $$

If the order of the spline curve is predefined, and the number of control points n + 1 (or, equivalently, the number of knots n + κ + 1) and the parameter values along the curve are known as indicated in (9), then the basis functions β_{i,κ}(t_j), and hence the matrix B, can be obtained.

In BS-SLAM, clamped knot vectors are generated by taking the total length of the curve and defining a knot spacing ρ, which depends on the complexity of the environment. Recall that knots are the joints of the individual polynomial pieces that a spline is composed of, so complex environments containing objects with high curvatures need a small spacing (high knot density), while straight features can be described properly using longer polynomial pieces (lower knot density).
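The fitting pipeline of this section — chord-length parameterization (9), (10), a clamped knot vector built from the curve length and the knot spacing ρ, the collocation matrix (8), and the least-squares solution (11) — can be sketched as follows. The helper names and the uniform placement of interior knots are illustrative assumptions, not the authors' exact implementation.

def chord_length_params(data):
    """Cumulative chord-length parameterization, eqs. (9) and (10)."""
    data = np.asarray(data, dtype=float)                  # (m+1, 2) data points
    seg = np.linalg.norm(np.diff(data, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(seg)))
    return t, t[-1]                                       # parameters and total length

def clamped_knots(length, order, rho):
    """Clamped knot vector over [0, length] with a knot spacing of about rho."""
    n_spans = max(1, int(round(length / rho)))
    interior = np.linspace(0.0, length, n_spans + 1)[1:-1]
    return np.concatenate((np.zeros(order), interior, np.full(order, length)))

def fit_spline(data, order=4, rho=2.0):
    """Least-squares control points, eq. (11): x = (B^T B)^{-1} B^T d."""
    t, length = chord_length_params(data)
    knots = clamped_knots(length, order, rho)
    t = np.minimum(t, knots[-1] - 1e-9)                   # keep the last sample inside the half-open support
    n_ctrl = len(knots) - order                           # n + 1 control points
    B = np.array([[bspline_basis(i, order, tj, knots) for i in range(n_ctrl)] for tj in t])
    ctrl, *_ = np.linalg.lstsq(B, np.asarray(data, dtype=float), rcond=None)
    return ctrl, knots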

To show the effects of both the curve order and the knot spacing on the quality of the fitting, we performed experiments with a simulated static robot in front of a curved wall. This single feature was described by the polar equation

z = 5 + 0.5 sin (5ϕ) (12)

where z is the distance in meters from the curve to the origin and ϕ is the polar angle. The simulated range-bearing sensor had a forward-facing 180° field of view and a maximum range of 8 m. For each measurement across the angular range of the sensor, with a resolution of 1°, synthetic noise with normal distribution N(0, σ_L = 5 mm) was added to the ideal measurement computed using (12).

Fig. 2 shows the effect of varying the knot spacing used in the data fitting with a cubic spline. For each experiment, the average value of the MSE over 50 Monte Carlo experiments was computed. The residuals are the differences between the real distance from the sensor to the wall and the distance from the sensor to the obtained spline for the same laser beam. It is clear that, as the number of polynomial pieces increases, the approximation improves.

Fig. 3 shows similar experiments for different orders of the spline used in the fitting process. Here, the knot spacing is constant (ρ = 2.92 m). The average MSE over 50 runs of a Monte Carlo simulation is clearly smaller for the cubic spline given a fixed knot density. Better results could be obtained for the quadratic spline by decreasing the knot spacing. However, for each additional knot, a new control point must be introduced, increasing the number of elements necessary for describing this shape as compared to the cubic spline.

Page 4: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 2. Fitting of noisy data with cubic splines using different spacings for the knot vector generation. The total length of the curved wall is 20.42 m. In each case, the B-spline is composed of (a) 9 (ρ_a = 2.27 m), (b) 8 (ρ_b = 2.55 m), and (c) 5 (ρ_c = 4.1 m) polynomial pieces.

Fig. 3. Fitting of noisy data with: (a) Cubic, (b) quadratic, and (c) linear B-splines.

In general, cubic splines offer a fairly good compromise between mathematical complexity and geometric flexibility, and are the most widely used in technical and scientific applications. Most of the experimental results presented in this paper will use cubic splines for modeling the environment. However, it will be shown how easily linear splines (order two) can be used for constructing maps of environments with flat features. The interested reader is referred to [19]–[21], where more information about curve fitting methods can be found.

III. SOLVING THE SLAM PROBLEM WITH B-SPLINES

In this section, all the procedures and formulas necessary to make spline theory fit into the EKF-SLAM framework are described in detail. First, a simple though effective segmentation mechanism is presented, followed by the description of a robust method to perform the always delicate process of data association. Finally, suitable state and observation models are developed, and the necessary Jacobians for applying an EKF-based SLAM algorithm, and for building the map as new areas of the environment are explored, are presented.

A. Laser Scan Segmentation

The most commonly used exteroceptive sensor in mobile robotics, owing to its accuracy, speed, and resolution, is the laser range finder. This sensor provides for each observation a set of m data points d_i ∈ ℝ² (bounding the problem to a 2-D scenario). In this section, the necessary procedures for extracting a set of splines representing the detected physical objects are presented.

The main difference between the SLAM methodology presented in this paper and traditional feature-based algorithms is that we are not relying on a specific geometry to be detected. We attempt to describe the environment as accurately as possible, making no assumptions or dangerous simplifications. This segmentation methodology is based on the analysis of the relative positions of two consecutive laser data points. We define a set of m − 1 vectors connecting the raw data (see Fig. 4)

pi = di − di−1 . (13)

Then, the following comparisons are performed:

$$ |\alpha_i| \le \alpha_{\max} \iff \cos(\alpha_i) \ge \cos(\alpha_{\max}) \qquad (14) $$
$$ \max(\|\mathbf{p}_i\|, \|\mathbf{p}_{i+1}\|) \le \eta\, \min(\|\mathbf{p}_i\|, \|\mathbf{p}_{i+1}\|). \qquad (15) $$

Typically, α_max ∈ [0, π/4] and η ∈ [1.5, 2] are fairly good values. When a set of m_F consecutive data points satisfies both of the previous relationships, they are assumed to belong to the same feature, and a fitting process is performed as described in Section II-C. Additional restrictions can also be imposed, such as requiring a minimum number of data points to be fitted or a minimum total length for the obtained curve. These last two restrictions usually become indispensable for rejecting dynamic objects, such as people, preventing their inclusion in the map.

Fig. 4(a) illustrates the process with a laser sample obtained from a real dataset. In this case, a total of four features have been detected. Note that F2 and F4 would certainly be difficult to describe with traditional feature-based map representations, such as segments or even circles. Fig. 4(b) and (c) shows the effects of choosing incorrect values for α_max. High values of this angular threshold lead to the incorrect identification of important locations, such as corners, whereas values that are too small produce excessive segmentation and waste many data points that are not assigned to any feature.
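A sketch of the segmentation test (13)-(15) on an ordered scan follows. The loop structure, the minimum cluster size, and the reading of α_i as the angle between the consecutive difference vectors p_i and p_{i+1} are our own assumptions for illustration.

def segment_scan(points, alpha_max=np.pi / 4, eta=1.8, min_points=10):
    """Split an ordered laser scan into clusters of consecutive points that
    satisfy the angle test (14) and the length-ratio test (15)."""
    points = np.asarray(points, dtype=float)
    p = np.diff(points, axis=0)                      # p_i = d_i - d_{i-1}, eq. (13)
    clusters, start = [], 0
    for i in range(len(p) - 1):                      # compare p_i with p_{i+1}
        a, b = np.linalg.norm(p[i]), np.linalg.norm(p[i + 1])
        cos_alpha = p[i] @ p[i + 1] / (a * b + 1e-12)
        ok = cos_alpha >= np.cos(alpha_max) and max(a, b) <= eta * min(a, b)
        if not ok:                                   # break the cluster at point i+1
            if i + 2 - start >= min_points:
                clusters.append(points[start:i + 2])
            start = i + 1                            # the break point starts the next cluster
    if len(points) - start >= min_points:
        clusters.append(points[start:])
    return clusters                                  # each cluster is then fitted (Section II-C)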

B. Data Association With Splines

At each sampling time, a new set of splines is obtained as described before, and it is necessary to establish a correspondence

Page 5: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 4. Segmentation of raw laser scan data. (a) Most of the information can be used in the estimation process (α_max = π/4). (b) and (c) Segmentation errors due to excessively small or large values of α_max.

between the N features contained in the map (s_{m,1}, . . . , s_{m,N}) and the N_o features detected by the robot (s_{o,1}, . . . , s_{o,N_o}).

Possibly, the main drawback of using splines as a modeling tool is that control points are not observable. For any given geometry, there are infinitely many ways of describing its shape, depending on the chosen knot vector and the particular area of the feature detected by the sensors.

The proposed matching process is described with the help of Fig. 5 and the pseudocode in Fig. 6. First, the distances from the control points of each of the detected splines to the control points of the splines contained in the map are calculated (the map can be reduced by choosing only features that are close to the robot position). In this way, pairs of matching candidates are identified by simple calculation of the Euclidean distance between their control points. A distance threshold of half the knot spacing (d_cp^min = ρ/2) is enough for this preliminary association.

Fig. 5. Example of data association process. (a) Comparison of control pointpositions. (b) Parameter correspondence.

Fig. 6. Pseudocode for data association.

If a spline is close enough to a map feature, then it is necessary to obtain a matching between their points, as depicted in Fig. 5(b). The process is as follows.

1) One of the end points of the observed spline is considered (point a).

2) The closest point on the map spline to point a is calculated (point b).

3) If b is one of the end points of the map spline, then the closest point to b on the observed spline is calculated (point c).

4) The process is repeated using the other end point of the observed spline as the starting point (point d in the figure, which is associated with point e).

Once the final conditions in Fig. 6 are checked (avoiding, for example, matching an observation with the hidden side of a wall), the association is established. At the end of this process, not only corresponding pairs of points (c ≡ s_{o,1}(u_ini), b ≡ s_{m,1}(t_ini)) and (d ≡ s_{o,1}(u_fin), e ≡ s_{m,1}(t_fin)) are obtained, but also a correspondence between the map spline parameter t and the observed spline parameter u. This information is very useful when a spline extension is required and for the observation model that we propose.

The described data association process, though quite simple and based only upon a Euclidean distance metric, has performed very robustly in our experiments.
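A compact sketch of the two stages above — the control-point gate with threshold d_cp^min = ρ/2 and the endpoint/parameter correspondence of Fig. 5(b) — is given below. The brute-force closest-point search, the omission of the re-projection step 3) and of the final checks of Fig. 6, and all function names are simplifications of ours.

def closest_param(ctrl, order, knots, point, samples=200):
    """Parameter of the point on a spline closest to 'point' (brute-force search)."""
    ts = np.linspace(knots[0], knots[-1] - 1e-9, samples)
    pts = np.array([spline_point(ctrl, order, t, knots) for t in ts])
    return ts[np.argmin(np.linalg.norm(pts - point, axis=1))]

def associate(obs, map_feat, order, rho):
    """Return (u_ini, t_ini, u_fin, t_fin) if the observed spline matches the
    map spline, else None. Each argument is a (control points, knots) pair."""
    ctrl_o, knots_o = obs
    ctrl_m, knots_m = map_feat
    dists = np.linalg.norm(ctrl_o[:, None, :] - ctrl_m[None, :, :], axis=2)
    if dists.min() > rho / 2.0:                       # control-point gate, d_cp_min = rho / 2
        return None
    u_ini, u_fin = knots_o[0], knots_o[-1] - 1e-9
    a = spline_point(ctrl_o, order, u_ini, knots_o)   # end points of the observation
    d = spline_point(ctrl_o, order, u_fin, knots_o)
    t_ini = closest_param(ctrl_m, order, knots_m, a)  # their counterparts on the map spline
    t_fin = closest_param(ctrl_m, order, knots_m, d)
    return u_ini, t_ini, u_fin, t_fin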

Page 6: Extending the Limits of Feature-Based SLAM With B-Splines


C. State Model

The state of the system is composed of the robot pose (the only nonstatic element) and all the map features, which are modeled as B-spline curves. When a spline curve is expressed as a linear combination of B-splines, its state can be described by the positions of its control points. Remember from Section II that a spline is defined by both the control points and the knot vector, but this last element is considered fixed, defined during the feature initialization stage, and only modified when the element is enlarged, as will be described in Section III-F.

Referring all the positions and orientations to a global reference frame {u_W, v_W}, and letting the robot be the first feature in the map (F_0), the following expressions fully describe the state of the system at a certain time k

$$ \mathbf{x}_{F_0} = \mathbf{x}_r = [x_r, y_r, \phi_r]^T \qquad (16) $$
$$ \mathbf{x}_{F_i} = \mathbf{x}_{s_i} = [x_{i,0}, \ldots, x_{i,n_i}, y_{i,0}, \ldots, y_{i,n_i}]^T, \qquad i = 1, \ldots, N \qquad (17) $$

and finally

$$ \mathbf{x} = \left[\mathbf{x}_r^T, \mathbf{x}_{s_1}^T, \ldots, \mathbf{x}_{s_N}^T\right]^T. \qquad (18) $$

In the previous equations, N is the number of static map elements, each of them defined by n_i + 1 control points. Note that the number of control points for each of the splines contained in the map is not required to be fixed. Splines can be prolonged, their knot vectors can be extended, and, therefore, new control points can eventually be inserted in the map when new areas of the environment are explored, as we will see later. The state of the system is assumed to follow a normal random distribution with mean value

$$ \hat{\mathbf{x}}(k|k) = \left[\, \hat{\mathbf{x}}_r(k|k)\ \ \hat{\mathbf{x}}_{s_1}(k|k)\ \cdots\ \hat{\mathbf{x}}_{s_N}(k|k) \,\right] \qquad (19) $$

and covariance matrix

$$ \mathbf{P}(k|k) = \begin{bmatrix} \mathbf{P}_{rr}(k|k) & \mathbf{P}_{rs_1}(k|k) & \cdots & \mathbf{P}_{rs_N}(k|k) \\ \mathbf{P}_{s_1 r}(k|k) & \mathbf{P}_{s_1 s_1}(k|k) & \cdots & \mathbf{P}_{s_1 s_N}(k|k) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{P}_{s_N r}(k|k) & \mathbf{P}_{s_N s_1}(k|k) & \cdots & \mathbf{P}_{s_N s_N}(k|k) \end{bmatrix}. \qquad (20) $$

D. Observation Model

Using an EKF for solving the SLAM problem requires an observation model, i.e., some expression that allows us to predict the measurements that are likely to be obtained by the robot sensors given the robot pose and the current knowledge of the environment. As mentioned before, control points of map splines are not observable.

However, it is still possible to predict every single laser measurement for each position of the laser beam across its angular range. In this way, the problem is reduced to the calculation of the intersection of the straight line defined by a laser beam (for each angular position p) with the splines contained in the map.

Unfortunately, calculating the intersection of a straight line with a parametric curve in the form s(t) = [s_x(t), s_y(t)]^T does not admit an explicit mathematical formulation. This

Fig. 7. Observation model. The expected intersection of each laser beam across the angular range of the sensor with the map spline is computed by expressing the map spline (a) in the {u_p, v_p} reference frame (b).

problem is known in the literature as ray tracing [22], and is schematically depicted in Fig. 7.

In BS-SLAM, the predicted measurement is calculated making use of the following two elements.

1) Property 3 in Section II-B, which states that any affine transformation can be applied to a spline curve by applying it to its control points.

2) The Newton–Raphson method for calculating the roots of a function.

The first step is to define an orthonormal reference frame {u_p, v_p} centered in the robot reference frame {u_r, v_r}, and with u_p defined by the laser beam direction and orientation (see Fig. 7, depicting an arbitrary laser beam and its intersection with a spline curve that needs to be computed). Let s(x̄_i(x_i, x_r), t) be the position vector along the spline curve expressed in such a reference system (here we make explicit the functional dependency between a spline and its control points). The relationship between the control points x_i = [x_i, y_i]^T in the global frame and their beam-frame counterparts x̄_i = [x̄_i, ȳ_i]^T, i = 0, . . . , n, is given by

$$ \begin{bmatrix} \bar{x}_i \\ \bar{y}_i \end{bmatrix} = \begin{bmatrix} \cos\mu_p & \sin\mu_p \\ -\sin\mu_p & \cos\mu_p \end{bmatrix} \begin{bmatrix} x_i - x_r \\ y_i - y_r \end{bmatrix} \qquad (21) $$

where μ_p is the angle of the considered laser beam in the global reference frame.

In this context, the measurement prediction z_p = h(x_i, x_r) is given by s_x(x̄_i(x_i, x_r), t*), where t* is the value of the parameter t that makes s_y(ȳ_i(x_i, x_r), t*) = 0.

Despite the lack of an explicit observation model, it is possible to compute its derivatives with respect to the state in an approximate way. Once the value t* that makes s_y(t*) = 0 is calculated, the expected measurement in the vicinity of this parameter location, assuming small perturbations in the state vector, can be approximated by (the subindex p is omitted in the following equations)

$$ h(\mathbf{x}_i, \mathbf{x}_r) = s_x\!\left(\bar{\mathbf{x}}_i(\mathbf{x}_i, \mathbf{x}_r),\; t^* - \frac{s_y(\bar{\mathbf{y}}_i(\mathbf{x}_i, \mathbf{x}_r), t^*)}{s'_y(\bar{\mathbf{y}}_i(\mathbf{x}_i, \mathbf{x}_r), t^*)}\right). \qquad (22) $$
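A sketch of the per-beam prediction follows: the map control points are moved into the beam frame with (21), the root of s_y(t) = 0 is refined by Newton-Raphson, and the predicted range is s_x(t*), as in (22). The initial guess, the numerical derivative (property 7 gives it in closed form), the iteration limits, and the miss test are illustrative choices of ours.

def predict_range(ctrl_map, order, knots, robot_pose, mu):
    """Predicted range h(x_i, x_r) along a beam of global angle mu, eqs. (21), (22)."""
    xr, yr, _ = robot_pose
    c, s = np.cos(mu), np.sin(mu)
    R = np.array([[c, s], [-s, c]])
    ctrl_beam = (np.asarray(ctrl_map, dtype=float) - np.array([xr, yr])) @ R.T  # eq. (21)
    def sx(t): return spline_point(ctrl_beam, order, t, knots)[0]
    def sy(t): return spline_point(ctrl_beam, order, t, knots)[1]
    t = 0.5 * (knots[0] + knots[-1])                 # illustrative initial guess
    for _ in range(25):                              # Newton-Raphson on s_y(t) = 0
        h = 1e-6
        dsy = (sy(t + h) - sy(t - h)) / (2.0 * h)    # numerical stand-in for property 7
        if abs(dsy) < 1e-12:
            break
        step = sy(t) / dsy
        t = np.clip(t - step, knots[0], knots[-1] - 1e-9)
        if abs(step) < 1e-9:
            break
    return (sx(t), t) if abs(sy(t)) < 1e-3 else (None, None)   # None when the beam misses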

Page 7: Extending the Limits of Feature-Based SLAM With B-Splines


Differentiating with respect to the control point positions and making use of (1) and (21), we can write [15]

$$ \frac{\partial h}{\partial x_i} = \beta_{i,\kappa}(t^*)\left[\cos\mu + \frac{\sin\mu}{\tan(\eta - \mu)}\right] \qquad (23) $$
$$ \frac{\partial h}{\partial y_i} = \beta_{i,\kappa}(t^*)\left[\sin\mu - \frac{\cos\mu}{\tan(\eta - \mu)}\right]. \qquad (24) $$

Similarly, making use of property 6, (1), and (21), we obtain

$$ \frac{\partial h}{\partial x_r} = -\cos\mu - \frac{\sin\mu}{\tan(\eta - \mu)} \qquad (25) $$
$$ \frac{\partial h}{\partial y_r} = -\sin\mu + \frac{\cos\mu}{\tan(\eta - \mu)} \qquad (26) $$
$$ \frac{\partial h}{\partial \phi_r} = \frac{z}{\tan(\eta - \mu)}. \qquad (27) $$

These formulas allow the efficient calculation of the relevant Jacobians in the following sections.
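Given t*, the predicted range z, the beam angle μ, and the angle η of the spline tangent at the intersection point, the nonzero entries of the observation Jacobian follow directly from (23) to (27). The sketch below assumes the caller supplies η (obtainable from the spline derivative, property 7); the function name and output layout are ours.

def observation_jacobian(order, knots, n_ctrl, t_star, z, mu, eta):
    """Entries of dh/dx for one beam, eqs. (23)-(27).
    Returns (dh_dxr, dh_dctrl), with dh_dctrl of shape (n_ctrl, 2)."""
    tan_em = np.tan(eta - mu)
    dh_dxr = np.array([-np.cos(mu) - np.sin(mu) / tan_em,        # eq. (25)
                       -np.sin(mu) + np.cos(mu) / tan_em,        # eq. (26)
                       z / tan_em])                              # eq. (27)
    dh_dctrl = np.zeros((n_ctrl, 2))
    for i in range(n_ctrl):
        b = bspline_basis(i, order, t_star, knots)               # beta_{i,kappa}(t*)
        dh_dctrl[i, 0] = b * (np.cos(mu) + np.sin(mu) / tan_em)  # eq. (23)
        dh_dctrl[i, 1] = b * (np.sin(mu) - np.cos(mu) / tan_em)  # eq. (24)
    return dh_dxr, dh_dctrl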

E. Applying the EKF

In this section, all previously obtained results are combined in the working frame of the EKF [2], allowing the incremental building of the map of an environment where all features are modeled using cubic splines.

1) Kalman Filter Prediction: Between times k and k + 1, the robot makes a relative movement, given by the stochastic variable

$$ \mathbf{u}(k+1) \sim N(\hat{\mathbf{u}}(k+1), \mathbf{Q}(k+1)). \qquad (28) $$

Under the hypothesis that the only moving object in the map is the robot, the a priori estimation of the state at time k + 1 is given by

$$ \hat{\mathbf{x}}_r(k+1|k) = \mathbf{f}_r(\hat{\mathbf{x}}_r(k|k), \hat{\mathbf{u}}(k+1)) \qquad (29) $$
$$ \hat{\mathbf{x}}_{s_i}(k+1|k) = \hat{\mathbf{x}}_{s_i}(k|k) \qquad (30) $$

where f_r is the motion model, which depends on the mobile platform being used, and its covariance is

$$ \mathbf{P}(k+1|k) = \mathbf{F}_x(k+1)\,\mathbf{P}(k|k)\,\mathbf{F}_x^T(k+1) + \mathbf{F}_u(k+1)\,\mathbf{Q}(k+1)\,\mathbf{F}_u^T(k+1). \qquad (31) $$

The Jacobian matrices are

$$ \mathbf{F}_x(k+1) = \begin{bmatrix} \left.\dfrac{\partial \mathbf{f}_r}{\partial \mathbf{x}_r}\right|_{\hat{\mathbf{x}}_r(k|k),\, \hat{\mathbf{u}}(k+1)} & 0 & \cdots & 0 \\ 0 & \mathbf{I}_{n_1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{I}_{n_N} \end{bmatrix} \qquad (32) $$

$$ \mathbf{F}_u(k+1) = \begin{bmatrix} \left.\dfrac{\partial \mathbf{f}_r}{\partial \mathbf{u}}\right|_{\hat{\mathbf{x}}_r(k|k),\, \hat{\mathbf{u}}(k+1)} \\ 0 \\ \vdots \\ 0 \end{bmatrix}. \qquad (33) $$

2) Kalman Filter Update: Once the expected measurements for each of the laser beam positions of an observation associated with a map spline are obtained, the innovation covariance matrix is given by [2]

$$ \mathbf{S}(k+1) = \mathbf{H}_x(k+1)\,\mathbf{P}(k+1|k)\,\mathbf{H}_x^T(k+1) + \mathbf{R}(k+1) \qquad (34) $$

where R is the sensor covariance matrix and the Jacobian is

$$ \mathbf{H}_x(k+1) = \begin{bmatrix} \dfrac{\partial h}{\partial \mathbf{x}_r} & 0 & \cdots & 0 & \dfrac{\partial h}{\partial \mathbf{x}_{s_i}} & 0 & \cdots & 0 \end{bmatrix}. \qquad (35) $$

In the previous equation, the term ∂h/∂x_r is calculated making use of (25)–(27), and ∂h/∂x_{s_i} is calculated from (23) and (24). The gain matrix is calculated as follows:

$$ \mathbf{W}(k+1) = \mathbf{P}(k+1|k)\,\mathbf{H}_x^T(k+1)\,\mathbf{S}^{-1}(k+1). \qquad (36) $$

Finally, the state estimation and its covariance are updated according to

$$ \hat{\mathbf{x}}(k+1|k+1) = \hat{\mathbf{x}}(k+1|k) + \mathbf{W}(k+1)\,\boldsymbol{\nu}(k+1) \qquad (37) $$
$$ \mathbf{P}(k+1|k+1) = \left[\mathbf{I} - \mathbf{W}(k+1)\,\mathbf{H}_x(k+1)\right]\mathbf{P}(k+1|k) \qquad (38) $$

with the innovation calculated as

$$ \boldsymbol{\nu}(k+1) = \mathbf{z}(k+1) - \hat{\mathbf{z}}(k+1|k). \qquad (39) $$
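The update equations (34)-(39) translate almost verbatim into code once the Jacobian rows have been placed at the proper state indices; a minimal NumPy sketch, assuming the stacked measurement vector, its prediction, and H are already available, is given below.

def ekf_update(x, P, z, z_pred, H, R):
    """One EKF update, eqs. (34)-(39). z and z_pred are the stacked
    measurements and predictions; H is the corresponding Jacobian."""
    z, z_pred = np.atleast_1d(z), np.atleast_1d(z_pred)
    H, R = np.atleast_2d(H), np.atleast_2d(R)
    S = H @ P @ H.T + R                      # innovation covariance, eq. (34)
    W = P @ H.T @ np.linalg.inv(S)           # Kalman gain, eq. (36)
    nu = z - z_pred                          # innovation, eq. (39)
    x_new = x + W @ nu                       # state update, eq. (37)
    P_new = (np.eye(len(x)) - W @ H) @ P     # covariance update, eq. (38)
    return x_new, P_new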

F. Map Enlargement

The stochastic map is incrementally built in two different ways: adding new objects and extending objects already contained in the map. In this section, algorithms both for initializing new objects and for prolonging existing features as new areas are explored are described in detail.

1) Adding New Objects to the Map: Observations that do not satisfy the association procedure described in Section III-B are considered as newly discovered features, and the splines defining their shapes must be added to the map. Given a map containing N static features and a set of measurements z = {z_i}, i = p, . . . , p + q, obtained for the laser angular positions in the robot reference frame (see Fig. 7) corresponding to a new feature F_{N+1}, the augmented state vector can be computed as

$$ \mathbf{x}^a = \mathbf{g}(\mathbf{x}, \mathbf{z}) \iff \begin{cases} \mathbf{x}^a_r = \mathbf{x}_r \\ \mathbf{x}^a_{s_i} = \mathbf{x}_{s_i}, & i = 1, \ldots, N \\ \mathbf{x}^a_{s_{N+1}} = \mathbf{g}_{s_{N+1}}(\mathbf{x}_r, \mathbf{z}). \end{cases} \qquad (40) $$

This means that adding a new object does not change the current map structure (the robot pose and the control points of the N existing features). The function g_{s_{N+1}}(x_r, z) is the fitting function of the q + 1 new data points, as described in Section II-C. In this way, we can obtain the control points of the new feature as a linear function of the data points and the robot pose

$$ \begin{bmatrix} x_{N+1,0} \\ \vdots \\ x_{N+1,n_{N+1}} \end{bmatrix} = \boldsymbol{\Phi} \begin{bmatrix} x_r + z_p \cos(\phi_r + \tau_p) \\ \vdots \\ x_r + z_{p+q} \cos(\phi_r + \tau_{p+q}) \end{bmatrix} \qquad (41) $$

Page 8: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 8. Map spline extension with new data after the Kalman filter update. (a) The original spline s_j(t) needs to be unclamped, and the last three control points are displaced, allowing the insertion of the new data in the extended knot vector. (b) A new knot ξ_11 is inserted at the end of the extended curve s^e_j(t), which is defined using a clamped knot vector.

$$ \begin{bmatrix} y_{N+1,0} \\ \vdots \\ y_{N+1,n_{N+1}} \end{bmatrix} = \boldsymbol{\Phi} \begin{bmatrix} y_r + z_p \sin(\phi_r + \tau_p) \\ \vdots \\ y_r + z_{p+q} \sin(\phi_r + \tau_{p+q}) \end{bmatrix}. \qquad (42) $$

The new covariance matrix for the augmented state vector is

$$ \mathbf{P}^a = \mathbf{G}_x \mathbf{P} \mathbf{G}_x^T + \mathbf{G}_z \mathbf{R} \mathbf{G}_z^T \qquad (43) $$

and the Jacobians $\mathbf{G}_x = \partial \mathbf{g}/\partial \mathbf{x}$ and $\mathbf{G}_z = \partial \mathbf{g}/\partial \mathbf{z}$ are

$$ \mathbf{G}_x = \begin{bmatrix} \mathbf{I}_r & 0 & \cdots & 0 \\ 0 & \mathbf{I}_{n_1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{I}_{n_N} \\[1mm] \dfrac{\partial \mathbf{g}_{s_{N+1}}}{\partial \mathbf{x}_r} & 0 & \cdots & 0 \end{bmatrix}, \qquad \mathbf{G}_z = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\[1mm] \dfrac{\partial \mathbf{g}_{s_{N+1}}}{\partial \mathbf{z}} \end{bmatrix} \qquad (44) $$

with

$$ \frac{\partial \mathbf{g}_{s_{N+1}}}{\partial \mathbf{x}_r} = \begin{bmatrix} \boldsymbol{\Phi} \begin{bmatrix} 1 & 0 & -z_p \sin\mu_p \\ \vdots & \vdots & \vdots \\ 1 & 0 & -z_{p+q} \sin\mu_{p+q} \end{bmatrix} \\[3mm] \boldsymbol{\Phi} \begin{bmatrix} 0 & 1 & z_p \cos\mu_p \\ \vdots & \vdots & \vdots \\ 0 & 1 & z_{p+q} \cos\mu_{p+q} \end{bmatrix} \end{bmatrix} \qquad (45) $$

$$ \frac{\partial \mathbf{g}_{s_{N+1}}}{\partial \mathbf{z}} = \begin{bmatrix} \boldsymbol{\Phi} \begin{bmatrix} \cos\mu_p & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \cos\mu_{p+q} \end{bmatrix} \\[3mm] \boldsymbol{\Phi} \begin{bmatrix} \sin\mu_p & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sin\mu_{p+q} \end{bmatrix} \end{bmatrix}. \qquad (46) $$
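Initializing a new feature amounts to mapping the unassociated measurements to the global frame with the current robot pose and fitting them as in Section II-C, which is exactly what (41) and (42) express through Φ. The sketch below reuses fit_spline from the earlier example; that reuse and the argument layout are our assumptions.

def init_feature(robot_pose, ranges, bearings, order=4, rho=2.0):
    """Control points and knots of a new map feature, following eqs. (41), (42):
    d_i = (x_r + z_i cos(phi_r + tau_i), y_r + z_i sin(phi_r + tau_i)),
    followed by a least-squares fit (Section II-C)."""
    xr, yr, phir = robot_pose
    z = np.asarray(ranges, dtype=float)
    tau = np.asarray(bearings, dtype=float)        # beam angles in the robot frame
    pts = np.stack([xr + z * np.cos(phir + tau),
                    yr + z * np.sin(phir + tau)], axis=1)
    return fit_spline(pts, order=order, rho=rho)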

2) Extending Map Objects: Frequently, observations are only partially associated with a map feature [as in Fig. 5(b)]. This means that a new, unexplored part of a map object is being detected, and consequently, this spline must be extended. Take, for instance, the situation displayed in Fig. 8, where the jth map spline has been partially associated with an observation, and the information contained in a new set of m + 1 data points

$$ \mathbf{d}_i = \begin{bmatrix} d^x_i \\ d^y_i \end{bmatrix} = \begin{bmatrix} x_r + z_i \cos\mu_i \\ y_r + z_i \sin\mu_i \end{bmatrix}, \qquad i = q, \ldots, q+m \qquad (47) $$

must be integrated into the map feature. The extended state vector will be

$$ \mathbf{x}^e = \mathbf{g}^e(\mathbf{x}, \mathbf{z}) \iff \begin{cases} \mathbf{x}^e_r = \mathbf{x}_r \\ \mathbf{x}^e_{s_i} = \mathbf{x}_{s_i}, & i \ne j \\ \mathbf{x}^e_{s_j} = \mathbf{g}^e_{s_j}(\mathbf{x}_r, \mathbf{x}_j, \mathbf{z}). \end{cases} \qquad (48) $$

The function g^e_{s_j}(x_r, x_j, z) is constructed following a scheme similar to the one used in the data fitting process.

An iterative unclamping algorithm is proposed in [19], and it is successfully applied in [23] with the goal of extending a B-spline curve to a single target point, inserting one additional control point and one additional knot for each extension. Our problem is to extend a map spline given a new set of measured data points, while keeping the number of new control points bounded to the strictly necessary. The combination of the unclamping algorithm with the approximation scheme previously proposed makes it possible to extend the map features as new measurements are obtained. Given a spline curve defined by a set of control points x_i and a clamped knot vector of the form

$$ \Xi:\ \underbrace{\xi_0 = \cdots = \xi_{\kappa-1}}_{\kappa} \le \xi_\kappa \le \cdots \le \xi_n \le \underbrace{\xi_{n+1} = \cdots = \xi_{\kappa+n}}_{\kappa} $$

the unclamping algorithm proposed in [19] calculates the new control points corresponding to the unclamped knot vectors

$$ \Xi_r:\ \underbrace{\xi_0 = \cdots = \xi_{\kappa-1}}_{\kappa} \le \xi_\kappa \le \cdots \le \xi_n \le \xi_{n+1} \le \cdots \le \xi_{\kappa+n} $$
$$ \Xi_l:\ \xi_0 \le \cdots \le \xi_{\kappa-1} \le \xi_\kappa \le \cdots \le \xi_n \le \underbrace{\xi_{n+1} = \cdots = \xi_{\kappa+n}}_{\kappa}. $$

The particularization of the iterative algorithm for a given spline order permits us to obtain linear relationships between the old and new control point values. The trivial case occurs for linear splines (order κ = 2); here, the control points of the unclamped curve remain unchanged.

Here, we show the adaptation of the algorithm for cubic splines, but similar expressions can be obtained for other orders, if desired. The right-unclamping algorithm (on the side corresponding to higher values of the parameter t) of a cubic spline (order κ = 4) converts the knot vector Ξ into Ξ_r and obtains the new control points

$$ \begin{aligned} \mathbf{x}^r_i &= \mathbf{x}_i, \qquad i = 0, \ldots, n-2 \\ \mathbf{x}^r_{n-1} &= -\Gamma^2_{n-1}\,\mathbf{x}_{n-2} + \frac{1}{\gamma^2_{n-1}}\,\mathbf{x}_{n-1} \\ \mathbf{x}^r_{n} &= \Gamma^2_{n}\,\Gamma^2_{n-1}\,\mathbf{x}_{n-2} - \left(\frac{\Gamma^2_{n}}{\gamma^2_{n-1}} + \frac{\Gamma^1_{n}}{\gamma^2_{n}}\right)\mathbf{x}_{n-1} + \frac{1}{\gamma^1_{n}\,\gamma^2_{n}}\,\mathbf{x}_{n} \end{aligned} \qquad (49) $$

where

$$ \gamma^j_i = \frac{\xi_{n+1} - \xi_i}{\xi_{i+j+1} - \xi_i} \qquad \text{and} \qquad \Gamma^j_i = \frac{1 - \gamma^j_i}{\gamma^j_i}. \qquad (50) $$

Page 9: Extending the Limits of Feature-Based SLAM With B-Splines


Similar results can be obtained when converting a clamped knot vector Ξ into a left-unclamped one Ξ_l

$$ \begin{aligned} \mathbf{x}^l_0 &= \frac{1}{\omega^1_0\,\omega^2_0}\,\mathbf{x}_0 - \left(\frac{\Omega^2_0}{\omega^2_1} + \frac{\Omega^1_0}{\omega^2_0}\right)\mathbf{x}_1 + \Omega^2_0\,\Omega^1_0\,\mathbf{x}_2 \\ \mathbf{x}^l_1 &= \frac{1}{\omega^2_1}\,\mathbf{x}_1 - \Omega^2_1\,\mathbf{x}_2 \\ \mathbf{x}^l_i &= \mathbf{x}_i, \qquad i = 2, \ldots, n \end{aligned} \qquad (51) $$

with

$$ \omega^j_i = \frac{\xi_{\kappa-1} - \xi_{i+\kappa}}{\xi_{i+\kappa-j-1} - \xi_{i+\kappa}} \qquad \text{and} \qquad \Omega^j_i = \frac{1 - \omega^j_i}{\omega^j_i}. \qquad (52) $$
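A direct transcription of the cubic right-unclamping step (49), (50); the left-unclamped case (51), (52) is symmetric. The function assumes the knot entries beyond ξ_n have already been replaced by the new, unclamped knot values, and its name is ours.

def right_unclamp_cubic(ctrl, knots):
    """New control points after right-unclamping a cubic spline (order 4),
    eqs. (49) and (50). 'knots' already holds the unclamped right knots."""
    ctrl = np.asarray(ctrl, dtype=float)
    n = ctrl.shape[0] - 1
    def gamma(j, i):
        return (knots[n + 1] - knots[i]) / (knots[i + j + 1] - knots[i])
    def Gamma(j, i):
        g = gamma(j, i)
        return (1.0 - g) / g
    out = ctrl.copy()
    out[n - 1] = -Gamma(2, n - 1) * ctrl[n - 2] + ctrl[n - 1] / gamma(2, n - 1)
    out[n] = (Gamma(2, n) * Gamma(2, n - 1) * ctrl[n - 2]
              - (Gamma(2, n) / gamma(2, n - 1) + Gamma(1, n) / gamma(2, n)) * ctrl[n - 1]
              + ctrl[n] / (gamma(1, n) * gamma(2, n)))
    return out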

These results can be combined with the methodology proposed in Section II-C for obtaining new splines as new data are acquired, taking into account the following considerations.

1) New data points must be associated with the existing parameterization of the map spline. This relation can be obtained from the data association stage.

2) The knot vector needs to be unclamped and might need to be extended with additional knots in order to make room for the new span being added. The number of new knots is chosen taking into account the specified knot spacing and the final length of the curve.

In this way, system (8) is written for the new data points, extended with the unclamping relations (49), (50) and/or (51), (52), and its least-squares solution provides a matrix-form linear relationship between the old control points x_j and the sampled data d_i, and the new control points x^e_j. For example, for a right extension, we can obtain

$$ \begin{bmatrix} x^e_{j,0} \\ \vdots \\ x^e_{j,n_j+p} \end{bmatrix} = \boldsymbol{\Phi}^e \begin{bmatrix} d^x_q \\ \vdots \\ d^x_{q+m} \\ x_{j,0} \\ \vdots \\ x_{j,n_j} \end{bmatrix}, \qquad \begin{bmatrix} y^e_{j,0} \\ \vdots \\ y^e_{j,n_j+p} \end{bmatrix} = \boldsymbol{\Phi}^e \begin{bmatrix} d^y_q \\ \vdots \\ d^y_{q+m} \\ y_{j,0} \\ \vdots \\ y_{j,n_j} \end{bmatrix} \qquad (53) $$

where Φ^e is a constant matrix, which depends only on the configuration of the new knot vector. The new covariance matrix after extending the jth spline is

$$ \mathbf{P}^e = \mathbf{G}^e_x \mathbf{P} {\mathbf{G}^e_x}^T + \mathbf{G}^e_z \mathbf{R} {\mathbf{G}^e_z}^T \qquad (54) $$

where the involved Jacobians $\mathbf{G}^e_x = \partial \mathbf{g}^e / \partial \mathbf{x}$ and $\mathbf{G}^e_z = \partial \mathbf{g}^e / \partial \mathbf{z}$ have the following form:

$$ \mathbf{G}^e_x = \begin{bmatrix} \mathbf{I}_r & 0 & \cdots & 0 & \cdots & 0 \\ 0 & \mathbf{I}_{n_1} & \cdots & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ \dfrac{\partial \mathbf{g}^e_{s_j}}{\partial \mathbf{x}_r} & 0 & \cdots & \dfrac{\partial \mathbf{g}^e_{s_j}}{\partial \mathbf{x}_{s_j}} & \cdots & 0 \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & \cdots & \mathbf{I}_{n_N} \end{bmatrix}, \qquad \mathbf{G}^e_z = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ \dfrac{\partial \mathbf{g}^e_{s_j}}{\partial \mathbf{z}} \\ \vdots \\ 0 \end{bmatrix} \qquad (55) $$

Fig. 9. Synthetic environments for consistency experiments.

where

$$ \frac{\partial \mathbf{g}^e_{s_j}}{\partial \mathbf{x}_r} = \begin{bmatrix} \boldsymbol{\Phi}^e \begin{bmatrix} 1 & 0 & -z_q \sin\mu_q \\ \vdots & \vdots & \vdots \\ 1 & 0 & -z_{q+m} \sin\mu_{q+m} \\ 0 & 0 & 0 \end{bmatrix} \\[3mm] \boldsymbol{\Phi}^e \begin{bmatrix} 0 & 1 & z_q \cos\mu_q \\ \vdots & \vdots & \vdots \\ 0 & 1 & z_{q+m} \cos\mu_{q+m} \\ 0 & 0 & 0 \end{bmatrix} \end{bmatrix}, \qquad \frac{\partial \mathbf{g}^e_{s_j}}{\partial \mathbf{x}_{s_j}} = \begin{bmatrix} \boldsymbol{\Phi}^e \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{I}_{n_j} & \mathbf{0} \end{bmatrix} \\[3mm] \boldsymbol{\Phi}^e \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_{n_j} \end{bmatrix} \end{bmatrix} $$

and

$$ \frac{\partial \mathbf{g}^e_{s_j}}{\partial \mathbf{z}} = \begin{bmatrix} \boldsymbol{\Phi}^e \begin{bmatrix} \cos\mu_q & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \cos\mu_{q+m} \\ 0 & \cdots & 0 \end{bmatrix} \\[3mm] \boldsymbol{\Phi}^e \begin{bmatrix} \sin\mu_q & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sin\mu_{q+m} \\ 0 & \cdots & 0 \end{bmatrix} \end{bmatrix}. $$

IV. EXPERIMENTAL RESULTS

Several experiments have been performed with both real and simulated data in order to validate the methodologies and algorithms presented in this paper. In all the experiments, the following motion model has been used:

$$ \begin{aligned} x_{r,k+1|k} &= x_{r,k|k} + \Delta x_r \cos(\phi_{r,k|k}) - \Delta y_r \sin(\phi_{r,k|k}) \\ y_{r,k+1|k} &= y_{r,k|k} + \Delta x_r \sin(\phi_{r,k|k}) + \Delta y_r \cos(\phi_{r,k|k}) \\ \phi_{r,k+1|k} &= \phi_{r,k|k} + \Delta\phi_r. \end{aligned} $$
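A direct transcription of the motion model above, where (Δx_r, Δy_r, Δφ_r) is the odometry increment expressed in the robot frame; the function name is ours.

def motion_model(pose, delta):
    """Compound the robot pose with a relative odometry increment
    (dx, dy, dphi) given in the robot frame, as in the equations above."""
    x, y, phi = pose
    dx, dy, dphi = delta
    return np.array([x + dx * np.cos(phi) - dy * np.sin(phi),
                     y + dx * np.sin(phi) + dy * np.cos(phi),
                     phi + dphi])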

Page 10: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 10. Consistency experimental results. The left column shows the simulated environments (black line) and the obtained maps (magenta line), along with the odometry, real, and filtered robot trajectories. For each experiment, the central graphics show consistency results for robot position and orientation with ±2σ bounding limits. All linear dimensions are in meters and angular dimensions in radians.

A. Simulation Experiments

It is well known that one of the main limitations of the EKF solution to the SLAM problem is the inconsistency of the algorithm due to linearization errors [24], which can lead to catastrophic failure when the true uncertainty of the robot's orientation exceeds a limit [16]. The sources and factors of inconsistency when landmarks convey angular information have also been thoroughly studied, as is the case when indoor walls are modeled as segments [25].

In this section, several experiments are performed showing the limitations of BS-SLAM from the point of view of filter consistency.

1) Consistency of BS-SLAM: For the consistency experiments, three synthetic environments have been generated combining both straight and curved features (Fig. 9). A simulated mobile robot performs a double loop in each of these environments, and odometry and laser measurements are contaminated with Gaussian noise with zero mean and the covariances detailed in Fig. 9. In all cases, the laser sensor is simulated with a maximum range of 8 m.

Results of these simulations are displayed in Fig. 10. In experiments E1.A, E2.A, and E3.A, map splines are progressively extended as new areas of the environment are explored, while in experiments E1.B, E2.B, and E3.B, splines are not extended and new features are added to the stochastic map when no matching is successful. It can be clearly seen how the extra simplifications

Fig. 11. Information gain in heading for a stationary vehicle. The estimated standard deviation σ_φ decreases after several updates. The results show the average values of 50 Monte Carlo simulations for three different initial values.

introduced during the extension process produce a much earlier appearance of inconsistency, since the filter is optimistic, i.e., the real location uncertainty is greater than the estimated value.

2) One Symptom of Inconsistency: Excessive Information Gain: As Bailey et al. pointed out in [16], one symptom of inconsistency is excessive information gain, which makes

Page 11: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 12. Monte Carlo experiments for consistency (50 runs). The first column displays the mean localization error versus 2σ_x, 2σ_y, and 2σ_φ (linear dimensions are in meters and angular dimensions in radians). The second column is the NEES and the third column the MSE. (a) No extensions are performed. (b) No extensions are performed and the sensor noise is inflated by a factor of 3 in the filter. (c) Extensions are performed.

any EKF-SLAM implementation become optimistic after a certain period of time (the estimated covariance is less than the true covariance). To analyze this effect in BS-SLAM, a static robot has been placed in the curved environment defined by (12) that can be seen in Figs. 2 and 3. The observation noise is σ_L = 5 mm for the laser distance readings. With a static vehicle making successive observations of this environment, its true uncertainty may never decrease.

Fig. 11 shows the results of this experiment using different initial values for the heading standard deviation of the robot (σ_φ(0|0) = 0.01 rad, σ_φ(0|0) = 0.05 rad, and σ_φ(0|0) = 0.2 rad). The graphic shows the mean result of 50 runs of a Monte Carlo simulation for each initial value. In every case, the heading standard deviation shows an immediate, abrupt decrease, and the larger initial uncertainties tend to produce the smallest final estimates.

3) Monte Carlo Tests of Filter Consistency: The experimental results in Fig. 10 suggest that the spline extension has some impact on the consistency properties of the algorithm.

This section analyzes this effect using an environment of the shape depicted in Fig. 9(b) for a = 14 m, b = 10 m, r_1 = 4 m, and r_2 = 2 m. The process noise standard deviations are σ_x = 5 + 0.1Δx mm, σ_y = 5 + 0.1Δy mm, and σ_φ = 0.02 + 0.1Δφ rad, and the standard deviation of a laser range finder with a forward 180° field of view and maximum range of 8 m is σ_L = 5 mm.

When the true state of the vehicle x_r(k) is known, the normalized estimation error squared (NEES) can be used to characterize the filter performance [16]

$$ \varepsilon(k) = \left(\mathbf{x}_r(k) - \hat{\mathbf{x}}_r(k|k)\right)^T \mathbf{P}^{-1}_r(k|k)\left(\mathbf{x}_r(k) - \hat{\mathbf{x}}_r(k|k)\right). \qquad (56) $$

The average NEES over N Monte Carlo simulations, under the hypotheses of a consistent and approximately linear-Gaussian filter, follows a χ² distribution with dim(x_r(k)) degrees of freedom. Hence, the average value of ε(k) tends toward the dimension of the state (3, considering the robot pose). Fig. 12 shows the average results for these experiments after
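The NEES of (56), averaged over Monte Carlo runs, and the two-sided 95% acceptance region can be sketched as follows; with 50 runs and 3 degrees of freedom the bounds evaluate to the interval [2.36, 3.72] quoted below. SciPy is assumed to be available, and the heading-error wrapping is our own precaution.

from scipy.stats import chi2

def average_nees(true_poses, est_poses, est_covs):
    """Average NEES over Monte Carlo runs for one time step, eq. (56)."""
    eps = []
    for xt, xe, P in zip(true_poses, est_poses, est_covs):
        e = np.asarray(xt, dtype=float) - np.asarray(xe, dtype=float)
        e[2] = (e[2] + np.pi) % (2.0 * np.pi) - np.pi     # wrap the heading error
        eps.append(e @ np.linalg.solve(P, e))
    return np.mean(eps)

def nees_bounds(n_runs=50, dof=3, alpha=0.95):
    """Two-sided 95% bounds on the average NEES of a consistent filter."""
    lo = chi2.ppf((1.0 - alpha) / 2.0, dof * n_runs) / n_runs
    hi = chi2.ppf(1.0 - (1.0 - alpha) / 2.0, dof * n_runs) / n_runs
    return lo, hi          # approximately (2.36, 3.72) for 50 runs and dof = 3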

Page 12: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 13. Map of a fair held at the School of Industrial Engineering, Universidad Politécnica de Madrid.

50 Monte Carlo runs. The 95% probability concentration region is bounded by the interval [2.36, 3.72].

Fig. 12(a) shows the results when splines are not extended. Fig. 12(b) shows the results when splines are not extended and an inflated sensor covariance is used, multiplying matrix R in (34) by a factor of 3, and Fig. 12(c) shows the results when splines are extended and the true matrix R is used. In all the cases, the filter becomes optimistic after 50 samples have been processed, and raising the covariance of the observation model does not ameliorate the result. The onset of inconsistency is even more abrupt when splines are extended.

B. Experiments With Real Data

Several experiments have been carried out with data obtained from real environments. Fig. 13 shows a map representing an environment with predominantly flat features (segments), which are modeled using linear splines (order κ = 2). The map contains 83 linear splines (166 control points). An alternative map, using cubic splines, can be found in [15] (in that case, 81 splines defined by 332 control points were necessary to describe the environment).

Fig. 14 depicts the map of a bigger and more complex environment with a mixture of both straight and curved features.1

In this case, 461 control points defining a total of 96 cubic splines are necessary. In the experiments of Figs. 13 and 14, a B21r robot with a SICK laser was used for the data acquisition, with a sampling frequency of 5 Hz. A knot spacing of ρ = 2 m was used for knot vector generation. Please note that no geometric constraints have been used in the map building process.

1This paper has supplementary downloadable material available at http://ieeexplore.ieee.org, provided by the authors. This includes a video showing the map building process of Figs. 14 and 15, and the dataset of the map displayed in Fig. 14.

Fig. 14. (Left) Map of the Museum of Science "Príncipe Felipe" (Valencia, Spain). (Right) Two detail views and a picture of the museum.

When building large maps, computational cost (quadratic with the size of the environment) arises as an unavoidable problem. With the aim of demonstrating the accuracy of the proposed methodology, which allows the exploitation of the majority of the rich information provided by a laser range finder, the map in Fig. 15 is provided. This map shows the interior of the Intel Research Lab, Seattle. The dataset was obtained from the Robotics Data Set Repository [26] (thanks go to Dieter Fox for providing these data). It is built using only the localization method, i.e., the only feature in the stochastic map is the robot, and features are added to the map with zero uncertainty. The low accumulated localization error accounts for the good results obtained. An alternative representation using occupancy grid maps can be found in [7].

We want to draw the reader's attention to the detail view in Fig. 15. Using a fixed degree for the spline curves reduces the performance of the method, increasing the density of control points strictly necessary for describing simple geometries such as segments. The minimum number of control points for a spline of order κ is precisely κ. In this map we use only cubic splines, but it is clear that for the correct description of flat features, a lower order (κ = 2) should suffice, as occurs in the map of Fig. 13. Videos of the experiments can be downloaded from [27].

Page 13: Extending the Limits of Feature-Based SLAM With B-Splines


Fig. 15. Intel Laboratories map and detail view showing control points.

V. CONCLUSION

A new methodology for simultaneous localization and mapping in complex environments with a mixture of flat and curved geometries has been described and experimentally tested. The power and computational efficiency of spline curves have been used in an EKF-SLAM framework, thus allowing the representation of complex structures in a parametric way. BS-SLAM provides a set of simple and easily programmable matrix-form expressions, allowing a successful symbiosis between EKF-SLAM and B-spline theory. When simple descriptions of the environment are insufficient or unfeasible, any other SLAM algorithm could benefit from this new representation.

Control points contained in the stochastic map incrementally encapsulate all the uncertainties conveyed in the odometry and laser sensor readings. Finally, this new representation constitutes a significant step forward compared to current SLAM techniques based on geometric maps. In particular, no other feature-based SLAM algorithm could adequately describe some of the environments modeled in this paper. Now, mathematical interpretation of and reasoning over the map features, regardless of their shape, is possible given that they are described using parametric functions.

Our current research involves further extension of the ideas presented in this paper on the following topics.

1) Improvement of segmentation strategies, developing more sophisticated clustering techniques, and enhancing the detection and isolation of features prior to the curve fitting process.

2) Intelligent control point selection, making the most of spline management techniques: degree elevation, degree reduction, knot insertion, and knot deletion. The aim is to use only the strictly necessary degree for correct feature description, deleting unnecessary knots, and introducing new ones when a shape refinement is necessary.

3) Exploitation of the parametric feature representation in the data association process.

4) Finally, we believe that the techniques and concepts presented here could find a natural extension to a 3-D scenario, given the similarities in the formulations for spline curves and spline surfaces.

ACKNOWLEDGMENT

The first author would like to thank the ARC Centre of Excellence for Autonomous Systems, UTS node, for hosting him as a visiting postgraduate research student for several months in 2006.

REFERENCES

[1] H. Moravec and A. Elfes, "High resolution maps from wide angle sonar," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Mar. 1985, vol. 2, pp. 116–121.

[2] G. Dissanayake, P. M. Newman, H. F. Durrant-Whyte, S. Clark, and M. Csorba, "A solution to the simultaneous localization and map building (SLAM) problem," IEEE Trans. Robot. Autom., vol. 17, no. 3, pp. 229–241, Jun. 2001.

[3] J. Guivant, E. Nebot, and H. Durrant-Whyte, "Simultaneous localization and map building using natural features in outdoor environments," in Proc. Intell. Auton. Syst. 6, Venice, Italy, Jul. 2000, vol. 1, pp. 581–588.

[4] J. J. Leonard, H. F. Durrant-Whyte, and I. J. Cox, "Dynamic map building for an autonomous mobile robot," Int. J. Robot. Res., vol. 11, no. 4, pp. 286–298, Aug. 1992.

[5] R. Madhavan, G. Dissanayake, and H. F. Durrant-Whyte, "Autonomous underground navigation of an LHD using a combined ICP–EKF approach," in Proc. IEEE Int. Conf. Robot. Autom., 1998, vol. 4, pp. 3703–3708.

[6] R. Eustice, H. Singh, and J. Leonard, "Exactly sparse delayed-state filters for view-based SLAM," IEEE Trans. Robot., vol. 22, no. 6, pp. 1100–1114, Dec. 2006.

[7] D. Hahnel, W. Burgard, D. Fox, and S. Thrun, "An efficient FastSLAM algorithm for generating maps of large-scale cyclic environments from raw laser range measurements," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Las Vegas, NV, Oct. 2003, vol. 1, pp. 206–211.

[8] G. Grisetti, C. Stachniss, and W. Burgard, "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2005, pp. 2443–2448.

[9] J. A. Castellanos, J. M. M. Montiel, J. Neira, and J. D. Tardos, "The SPmap: A probabilistic framework for simultaneous localization and map building," IEEE Trans. Robot. Autom., vol. 15, no. 5, pp. 948–952, Oct. 1999.

[10] D. Rodriguez-Losada, F. Matía, and R. Galan, "Building geometric feature based maps for indoor service robots," Robot. Auton. Syst., vol. 54, no. 7, pp. 546–558, Jul. 2006.

[11] P. MacKenzie and G. Dudek, "Precise positioning using model-based maps," in Proc. IEEE Int. Conf. Robot. Autom., 1994, pp. 1615–1621.

[12] M. Veeck and W. Burgard, "Learning polyline maps from range scan data acquired with mobile robots," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Sendai, Japan, 2004, vol. 2, pp. 1065–1070.

[13] S. Zhang, L. Xie, M. Adams, and F. Tang, "Geometrical feature extraction using 2D range scanner," in Proc. 4th Int. Conf. Control Autom., Montreal, Canada, Jun. 2003, pp. 901–905.

[14] J. Nieto, T. Bailey, and E. Nebot, "Scan-SLAM: Combining EKF-SLAM and scan correlation," in Proc. Int. Conf. Field Serv. Robot., 2005, pp. 129–140.

[15] L. Pedraza, G. Dissanayake, J. Valls Miro, D. Rodriguez-Losada, and F. Matía, "BS-SLAM: Shaping the world," presented at the Robotics: Science and Systems Conf., Atlanta, GA, Jun. 2007.

[16] T. Bailey, J. Nieto, J. Guivant, M. Stevens, and E. Nebot, "Consistency of the EKF-SLAM algorithm," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2006, pp. 3562–3568.

[17] D. F. Rogers, An Introduction to NURBS: With Historical Perspective. San Francisco, CA: Morgan Kaufmann, 2001.

[18] C. de Boor, A Practical Guide to Splines, J. Marsden and L. Sirovich, Eds. New York: Springer-Verlag, 1978.

[19] L. Piegl and W. Tiller, The NURBS Book, 2nd ed. New York: Springer-Verlag, 1997.

[20] J. H. Ahlberg, E. N. Nilson, and J. L. Walsh, The Theory of Splines and Their Applications. New York: Academic, 1967.

[21] A. Atieg and G. A. Watson, "A class of methods for fitting a curve or surface to data by minimizing the sum of squares of orthogonal distances," J. Comput. Appl. Math., vol. 158, no. 2, pp. 277–296, 2003.

Page 14: Extending the Limits of Feature-Based SLAM With B-Splines


[22] A. J. Sweeney and R. H. Bartels, "Ray tracing free-form B-spline surfaces," IEEE Comput. Graph. Appl., vol. 6, no. 2, pp. 41–49, Feb. 1986.

[23] S.-M. Hu, C.-L. Tai, and S.-H. Zhang, "An extension algorithm for B-splines by curve unclamping," Comput.-Aided Des., vol. 34, no. 5, pp. 415–419, Apr. 2002.

[24] J. A. Castellanos, J. Neira, and J. D. Tardos, "Limits to the consistency of EKF-based SLAM," presented at the 5th IFAC Symp. Intell. Auton. Veh. (IAV 2004), Lisbon, Portugal, Jul. 2004.

[25] D. Rodriguez-Losada, F. Matía, L. Pedraza, A. Jimenez, and R. Galan, "Consistency of SLAM-EKF algorithms for indoor environments," J. Intell. Robot. Syst., vol. 50, no. 4, pp. 375–397, Dec. 2007.

[26] A. Howard and N. Roy. (2003). The robotics data set repository (Radish) [Online]. Available: http://radish.sourceforge.net

[27] L. Pedraza. (2008). Experiments of SLAM with B-splines [Online]. Available: http://www.luispedraza.es/research.php

Luis Pedraza (M'06) received the B.Eng. degree in industrial engineering and the M.Eng. degree in control engineering, in 1998 and 2002, respectively, from the Universidad Politécnica de Madrid, Madrid, Spain, where he is currently working toward the Ph.D. degree in robotics.

He was a predoctoral student at the Industrial Automation Institute, Spanish National Research Council (CSIC), where he was involved in several projects related to industrial automation and legged robots. Between August and November 2006, he was a Visiting Scholar at the University of Technology, Sydney (UTS) node of the ARC Centre of Excellence for Autonomous Systems. His current research interests include simultaneous localization and mapping and service robots, focused on new modeling techniques for complex environments and interactive mobile robots.

Diego Rodriguez-Losada (M'05) received the M.Sc. degree in mechanics (industrial engineering) and the Ph.D. degree in robotics from the Universidad Politécnica de Madrid, Madrid, Spain, in 1999 and 2004, respectively.

He is currently an Assistant Professor at the Universidad Politécnica de Madrid. He has been involved in many national and European R&D projects in robotics, as well as many industrial projects as a Software Engineer and Developer. His current research interests include simultaneous localization and mapping and service robots, mainly focused on interactive mobile robots as tour guides.

Fernando Matía (M'93) received the Ph.D. degree from the Universidad Politécnica de Madrid, Madrid, Spain, in 1994.

Since 1997, he has been an Associate Professor at the Universidad Politécnica de Madrid, where he has been engaged in control engineering since 1988 and in robotics since 1994. His main R&D activities are fuzzy control and mobile robotics. He has experience in eight European projects and 13 national research and industrial projects related to the previous topics. He is the author or coauthor of two books, 13 book chapters, 19 journal articles, and 65 conference papers. He is a reviewer for 12 international journals.

Gamini Dissanayake (M'06) received the Graduate degree in mechanical/production engineering from the University of Peradeniya, Peradeniya, Sri Lanka, in 1977, and the M.Sc. degree in machine tool technology and the Ph.D. degree in mechanical engineering (robotics) from the University of Birmingham, Birmingham, U.K., in 1981 and 1985, respectively.

He is currently the James N. Kirby Professor of Mechanical and Mechatronic Engineering at the University of Technology, Sydney (UTS), N.S.W., Australia. He has expertise in a broad range of topics in robotics, including robot localization, mapping, and simultaneous localization and mapping (SLAM) using sensors such as laser, radar, vision, and inertial measurement units, terrain mapping, multirobot coordination for SLAM, target tracking and probabilistic search, motion planning for single and multiple robot manipulators, legged robots and cranes, and the application of robotic systems in urban search and rescue. He leads the UTS node of the ARC Centre of Excellence for Autonomous Systems.

Jaime Valls Miro received the B.Eng. and M.Eng. degrees in computer science systems from Valencia Polytechnic University, Valencia, Spain, in 1990 and 1993, respectively, and the Ph.D. degree in robotics from Middlesex University, London, U.K., in 1998.

He was in the underwater robot industry for five years. He is currently a Senior Research Fellow at the ARC Centre of Excellence for Autonomous Systems, University of Technology, Sydney (UTS node), N.S.W., Australia. His current research interests include autonomous navigation and mapping of unstructured scenarios (urban search and rescue in particular), visual simultaneous localization and mapping (SLAM), underwater and health care robotics, kino-dynamic manipulator control, and human–robot interaction.