

IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 18, NO. 1, FEBRUARY 2002 87

Let g₁, g₂ be the columns of the 3 × 2 matrix on the right-hand side of (45); then

rank[ g₁  g₂  [g₁, g₂] ] = 3.    (46)

Hence the posture variables are controllable.

When b ≠ 0, let g₁ and g₂ be the columns of the 4 × 2 matrix on the right-hand side of (43); then

rank[ g₁  g₂  [g₁, g₂]  [g₂, [g₁, g₂]] ] = 4.    (47)

Thus, R_pE is controllable. The control laws under actuator failure are proposed and demonstrated in [6].

VI. CONCLUSION

In this paper, the mobility augmentation method was proposed to produce omnidirectional motion with conventional wheeled bases with the degree of mobility less than three. Combined with a revolute/prismatic joint, some conventional wheeled bases with the degree of mobility less than three could produce omnidirectional motion if some conditions were satisfied. The structural conditions for mobility augmentation were derived and several examples were provided. Such a nonconventional omnidirectional mechanism did not suffer from the typical problems that conventional omnidirectional mechanisms have.

One of the illustrated structures was further improved to develop the OmniKity robot series. A novel gear train was devised so that, in addition to the omnidirectional property, the mechanism provides a fault-tolerance property in case of the failure of any one of the three motors.

REFERENCES

[1] G. Campion, G. Bastin, and B. D’Andrea-Novel, “Structural properties and classification of kinematic and dynamic models of wheeled mobile robots,” IEEE Trans. Robot. Automat., vol. 12, pp. 47–62, 1996.

[2] S. L. Dickerson, “Control of an omni-directional robotic vehicle with mecanum wheels,” in National Telesystems Conf., 1991, pp. 323–328.

[3] L. Ferriere and B. Raucent, “ROLLMOBS, a new universal wheel concept,” in IEEE Int. Conf. on Robotics and Automation, 1998, pp. 1877–1882.

[4] R. Holmberg and O. Khatib, “Development and control of a holonomic mobile robot for mobile manipulation tasks,” Int. J. Robot. Res., vol. 19, no. 11, pp. 1066–1074, 2000.

[5] M.-J. Jung, H.-S. Shim, H.-S. Kim, and J.-H. Kim, “The miniature omnidirectional mobile robot OmniKity-I (OK-I),” in IEEE Int. Conf. on Robotics and Automation, 1999, pp. 2686–2691.

[6] M.-J. Jung and J.-H. Kim, “Fault tolerant control strategy of OmniKity-III,” in IEEE Int. Conf. on Robotics and Automation, 2001, pp. 3370–3375.

[7] S. M. Killough and F. G. Pin, “Design of an omnidirectional and holonomic wheeled platform prototype,” in IEEE Int. Conf. on Robotics and Automation, 1992, pp. 84–90.

[8] P. F. Muir and C. P. Neuman, “Kinematic modeling for feedback control of an omnidirectional wheeled mobile robot,” in IEEE Int. Conf. on Robotics and Automation, 1987, pp. 1772–1778.

[9] M. Wada and S. Mori, “Holonomic and omnidirectional vehicle with conventional tires,” in IEEE Int. Conf. on Robotics and Automation, 1996, pp. 3671–3676.

[10] M. West and H. H. Asada, “Design and control of ball wheel omnidirectional vehicles,” in IEEE Int. Conf. on Robotics and Automation, 1995, pp. 1931–1938.

[11] B. Yi and W. K. Kim, “The kinematics for redundantly actuated omni-directional mobile robots,” in IEEE Int. Conf. on Robotics and Automation, 2000, pp. 2485–2492.

Optimal Mobile Robot Pose Estimation Using Geometrical Maps

Geovany Araujo Borges and Marie-José Aldon

Abstract—We propose a weighted least-squares (WLS) algorithm for optimal pose estimation of mobile robots using geometrical maps as environment models. Pose estimation is achieved from feature correspondences in a nonlinear framework without linearization. The proposed WLS approach yields optimal estimates in the least-squares sense, is applicable to heterogeneous geometrical features decomposed in points and lines, and has an O(N) computation time.

Index Terms—Heterogeneous features, nonlinear optimization, optimal 2-D pose estimation, weighted least-squares.

I. INTRODUCTION

In mobile robot absolute localization, depending on the employed approach, we encounter the problem of estimating the 2-D transformation parameters (rotation and translation) that relate two sets of geometrical features extracted from two different maps. In the context of absolute localization, we have a global map M, which may be continuously updated, and a local map M′, which is composed of geometrical features extracted from exteroceptive sensor readings such as a video camera [1], a laser rangefinder [2], ultrasonic ranging [3], [4], or a combination of them [5]. Usually, maps are composed of heterogeneous geometrical features such as points, which correspond to corners and vertical edges in the environment, and lines, which represent walls and sides of polygonal obstacles. Map building using different types of features allows a greater reliability for the pose estimation.

Most of the proposed methods which use geometrical features for map-based pose estimation are divided into three phases: pose prediction, feature matching between maps M and M′, and pose estimation from the matched features. Pose prediction provides an initial guess of the vehicle’s pose and an associated uncertainty measure, which are used as prior information for feature matching between M and M′. From the feature correspondences, the estimation phase provides a pose estimate and an associated uncertainty measure which optimize some criterion. The work presented in this paper only concerns the pose estimation phase.

II. BACKGROUND

The most used approach for map-based mobile robot pose estimation using geometrical features is the extended Kalman filter (EKF) [6], [7]. The formulation of the mobile robot localization problem using the EKF is elegant and allows one to fuse multisensory data. The problem is generally written as that of estimating the state of a discrete stochastic nonlinear dynamic system which evolves using proprioceptive sensor readings u as

z(k) = g(z(k − 1), u(k)) + v(k)    (1)

with v modeling a Gaussian noise with distribution N(0, P_v(k)) (multivariate normal distribution with zero mean and covariance P_v(k)).

Manuscript received February 14, 2001; revised November 7, 2001. This paper was recommended for publication by Associate Editor M. Buehler and Editor S. Hutchinson upon evaluation of the reviewers’ comments. This work was supported in part by Capes, Brasília, Brazil, under Grant BEX2280/97-3.

The authors are with the Robotics Department, Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier, UMR CNRS/UM2, C55060 Montpellier, France (e-mail: [email protected]; [email protected]).

Publisher Item Identifier S 1042-296X(02)01775-5.

1042–296X/02$17.00 © 2002 IEEE


The system state represents the vehicle’s pose z = (x, y, θ)ᵀ, and z(k), k = 0, 1, …, is assumed to be a Gaussian random sequence. Equation (1) is usually derived from the kinematics model of the robot, where u is provided by motion encoders coupled to the robot axes. At some moment, exteroceptive measurements y′ are taken following

y′(k) = h(z(k), y(k)) + w(k).    (2)

In the case of map-based positioning, y is a vector containing the feature parameters of the global map M and y′ is its local observation in M′. Obviously, such an observation depends on z as explicitly shown in (2). The measurement is perturbed by a Gaussian noise w, following N(0, P_w(k)). In the above models, g and h are intrinsically nonlinear functions of the robot pose z. The problem here becomes obtaining estimates of the first and second moments of z(k), ẑ(k) and P_z(k), respectively, which are sufficient to compute the distribution of z(k).

In the EKF framework, system state estimation encompasses two phases: prediction and estimation. During the former, a prediction of the robot pose and of its associated covariance matrix is obtained by linearizing (1) as

ẑ(k|k − 1) = g(ẑ(k − 1), u(k))    (3)
P_z(k|k − 1) = ∇g · P_z(k − 1) · ∇gᵀ + P_v(k)    (4)

with ∇g being the derivative of g with respect to z evaluated at ẑ(k − 1). When the measurements arrive, the EKF estimation phase is performed following the innovation form [8]

ν = y′(k) − h(ẑ(k|k − 1), y(k))    (5)
S(k) = ∇h · P_z(k|k − 1) · ∇hᵀ + P_w(k)    (6)
K(k) = P_z(k|k − 1) · ∇hᵀ · S⁻¹(k)    (7)
ẑ(k) = ẑ(k|k − 1) + K(k) · ν    (8)
P_z(k) = P_z(k|k − 1) − K(k) · S(k) · Kᵀ(k).    (9)

In (5)–(9), ν is the innovation and S(k) is its estimated covariance, K(k) is the Kalman filter gain, and ∇h is the derivative of h with respect to z evaluated at ẑ(k|k − 1). P_w incorporates the measurement model uncertainties, as well as the uncertainties of y and y′. These equations are obtained by applying the original linear Kalman filter formulation [9] to the measurement model linearized around the predicted pose ẑ(k|k − 1).
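One prediction–estimation cycle of (3)–(9) can be sketched as follows. This is a generic illustration, not the implementation evaluated in this paper: the models g and h, their Jacobians, and the noise covariances are supplied by the caller, and all names are ours.

```python
import numpy as np

def ekf_step(z, Pz, u, y_obs, y_map, g, h, grad_g, grad_h, Pv, Pw):
    """One EKF prediction/estimation cycle following (3)-(9).

    z, Pz  : previous pose estimate and its covariance
    u      : proprioceptive reading (odometry)
    y_obs  : exteroceptive measurement y' of the map feature y_map
    g, h   : process and measurement models; grad_g, grad_h are their
             Jacobians with respect to z (illustrative signatures)
    Pv, Pw : process and measurement noise covariances
    """
    # Prediction, (3)-(4): propagate the pose through the kinematics model
    z_pred = g(z, u)
    G = grad_g(z, u)
    Pz_pred = G @ Pz @ G.T + Pv
    # Estimation in innovation form, (5)-(9)
    nu = y_obs - h(z_pred, y_map)           # innovation, (5)
    H = grad_h(z_pred, y_map)
    S = H @ Pz_pred @ H.T + Pw              # innovation covariance, (6)
    K = Pz_pred @ H.T @ np.linalg.inv(S)    # Kalman gain, (7)
    z_est = z_pred + K @ nu                 # (8)
    Pz_est = Pz_pred - K @ S @ K.T          # (9)
    return z_est, Pz_est
```

Note that the inversion of S in (7) is the step whose cost grows with the number of corresponding features, which is at the root of the complexity issue discussed further on.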

With a fast and recurrent form, this filter has been successfully used for real-time mobile robot localization. However, the user should be aware of some limitations of the EKF and of the known proposed solutions in order to achieve a good design.

• Due to the linearization of the measurement model (2), this filter is very sensitive to nonlinear aspects of h and gives suboptimal estimates. With such an approximation, neither does P_z correspond to the actual covariance, nor is the EKF a minimum variance estimator [6]. Growing the P_v(k) and P_w(k) matrices is a proposed solution to minimize the effects of the linearization on the nonconvergence of P_z, resulting in suboptimal but consistent estimates (P̂_z(k) > P_z(k)). This solution requires an offline tuning using experimental data.

• If the prediction ẑ(k|k − 1) is far enough from the actual value of z, nonconvergence problems may arise due to truncation errors. The iterated EKF [7] is a proposed solution.

• The errors induced by the linearization of (2) on the pose estimate, as well as on the associated covariance matrix, are not known in advance. An alternative approach consists of estimating P_w(k) and the noise bias in the same way as the adaptive EKF [10]. Julier et al. [11] have proposed a more accurate filter which is able to achieve consistent covariance propagation through the estimation process.

• Since the filter assumes a Gaussian distribution of the estimate ẑ, multiple-mode distributions cannot be directly handled. Such cases can arise from multiple correspondences in real cluttered environments as well as from the nonlinear aspect of the problem. One solution is to apply the multiple hypothesis Kalman filter formulation as shown in [7]. Monte Carlo localization is another recently developed approach, which approximates the pose distribution by a large set of samples. It allows us to represent non-Gaussian multimodal distributions. In [12], there is an application using grid maps.

• The computing time of the EKF rapidly grows in O(N³), with N being the number of corresponding features. This is the consequence of the inversion of the matrix S(k) in (7). A proposed solution is the use of the EKF sequential formulation [7], where measurements are sequentially presented to the filter at time k. However, the filter may not converge if the presentation sequence is not well chosen. In [13], the sequential formulation is successfully used, on which a special treatment for multiple correspondences has been applied. The sequential filter has an O(N) computation time.

All the above issues contribute to the difficulty of fine tuning the EKF, as cited by Julier et al. in [11]. Some proposed alternatives deal with specific issues and give suboptimal estimates by using the linearized measurement model. When all operating conditions assure good predictions ẑ(k|k − 1) and almost perfect feature extraction for local map building, satisfactory results can be obtained with the EKF. However, such operating conditions cannot be guaranteed when navigating in highly cluttered and noncontrolled environments. We mean that in such environments false feature correspondences may occur, as well as wheel slippage. As a consequence, the Gaussian assumption is no longer valid, and in such a case the filter presents no robustness. Furthermore, it is very difficult to evaluate which part of the estimation errors is due to the linearization procedure.

In this paper, we propose a new algorithm for two-dimensional (2-D) pose estimation that provides an optimal pose estimate given two sets of corresponding heterogeneous features. Using a unified representation for these features, the problem is formulated as a nonlinear weighted least-squares (WLS) optimization. This results in a batch, nonapproximated optimal pose estimator with O(N) computing time growth. Previous versions of the proposed algorithm were presented in [14] and [15].

The problem discussed in this paper is formulated in Section III. Section IV introduces a unified feature representation for 2-D lines and points. The pose and uncertainty estimation procedures are derived in Section V and Section VI, respectively. A complexity analysis of the proposed solution is described in Section VII. Section VIII reports a simulation on which we discuss the limitations of the proposed approach. Experimental results in a cluttered real environment are presented in Section IX.

III. PROBLEM STATEMENT

It is established that the robot pose z = (tᵀ, θ)ᵀ, with t = (t_x, t_y)ᵀ, represents a 2-D coordinate transformation such that t ∈ ℝ² and −π < θ ≤ π. These parameters relate a global 2-D coordinate system XY to another local coordinate system X′Y′. In the context of mobile robot localization, the maps M and M′ are respectively attached to XY and X′Y′. Also, the vector t corresponds to the 2-D robot position (x, y)ᵀ relative to the frame of the global map M, and the orientation of the mobile robot with respect to the X axis is given by the angle θ.


Fig. 1. Features description in the 2-D Euclidean space.

Thus, (p_x, p_y)ᵀ being the coordinates of a static point in XY and (p′_x, p′_y)ᵀ its coordinates in X′Y′, we have

(p′_x, p′_y)ᵀ = R(θ) · {(p_x, p_y)ᵀ − t}    (10)

with

R(θ) = [  cos(θ)  sin(θ)
         −sin(θ)  cos(θ) ].    (11)

From M and M′, we have two sets of N corresponding geometrical features named, respectively, F = {f₁, …, f_N} and F′ = {f′₁, …, f′_N}. These features may be lines and/or points. We suppose that the correspondence problem between features of the same type has been solved, i.e., features f_n and f′_n are corresponding. The correspondences are weighted by positive scalar factors ω_n. Each feature vector f_n (f′_n) is associated with its uncertainty, represented by a 2 × 2 covariance matrix P_{f_n} (P_{f′_n}).

IV. THE UNIFIED 2-D FEATURE REPRESENTATION

We propose a new unified 2-D feature representation for points and lines in the Euclidean space (Fig. 1). It allows us to embed lines and points in a unique WLS framework. Features are represented by a vector f = (x, y)ᵀ in the global coordinate system XY (f′ in the X′Y′ local coordinate frame), with:

• (x, y)ᵀ = (x_p, y_p)ᵀ for point features;
• (x, y)ᵀ = (x_l, y_l)ᵀ for line features [(x_l, y_l)ᵀ are the coordinates of the point L on the line, such that x_l = ρ · cos(α) and y_l = ρ · sin(α)] (see Fig. 1).

For a more reliable pose estimation, we consider only the infinite line parameters. The end-points may be used as point features if they are sufficiently reliable.

For a given static feature in the environment, the relation between the vectors f and f′ in the two coordinate frames XY and X′Y′, respectively, is derived from (10). For points, these vectors are directly related by (10), resulting in

f′ = R(θ) · {f − t}.    (12)

In order to develop the motion transformation between two lines, we use a parametric form which relates any point p belonging to a line with parameters f by means of a parameter k. In this way, it suffices to find the parameters f′ of the transformed line knowing that the belonging points p′ follow (12). Thus, we introduce the parametric line form

p(k) = f + k · l.    (13)

In (13), k is a scalar and l = (sin(α), −cos(α))ᵀ is the unitary vector that defines the line direction. We can verify that f and l are orthogonal: lᵀ · f = 0. The corresponding points p′(k) in X′Y′ are obtained by applying the coordinate frame transformation to p as

p′(k) = R(θ) · {p(k) − t} = R(θ) · {f − t} + k · R(θ) · l.    (14)

From (14), we can see that the line passes through the point R(θ) · {f − t} and has as unitary direction vector l′ = R(θ) · l in the X′Y′ coordinate frame. Thus, α′ = α − θ in X′Y′. From the definition of the line parameter vector, f′ is given by the coordinates of the point on the transformed line that is closest to the origin of X′Y′. Thus, this point is parameterized by k₀ such that

k₀ = argmin_k ||p′(k)||²
   = argmin_k ||R(θ) · {f + k · l − t}||²
   = argmin_k ||f + k · l − t||².    (15)

We can verify that k₀ = lᵀ · (t − f) = lᵀ · t, since f and l are orthogonal. Thus, by applying k₀ to (14), the f′ = p′(k₀) parameter vector is

f′ = R(θ) · {f − (I₂ − l · lᵀ) · t}    (16)

with I₂ being the 2-D identity matrix. The explicit form of I₂ − l · lᵀ is

I₂ − l · lᵀ = [ cos²(α)         sin(α)·cos(α)
                sin(α)·cos(α)   sin²(α)       ].

The similarity between (12) and (16) allows us to define a unified frame transformation as

f′ = R(θ) · {f − E · t}    (17)

with

E = [ a²  c
      c   b² ].

The entries of the E matrix depend on the feature type represented by f: a = b = 1 and c = 0 for a point feature, and a = cos(α), b = sin(α), and c = a · b for a line feature.
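The unified transformation (17) is easy to exercise numerically. The sketch below (illustrative names, not the paper's code) builds R(θ) from (11) and E from the feature type, and applies f′ = R(θ)·{f − E·t}:

```python
import numpy as np

def rot(theta):
    """Rotation matrix R(theta) of (11)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def unified_E(feature_type, alpha=0.0):
    """E matrix of (17): a = b = 1, c = 0 for a point feature;
    a = cos(alpha), b = sin(alpha), c = a*b for a line feature."""
    if feature_type == "point":
        a, b, c = 1.0, 1.0, 0.0
    else:
        a, b = np.cos(alpha), np.sin(alpha)
        c = a * b
    return np.array([[a * a, c], [c, b * b]])

def transform(f, theta, t, feature_type, alpha=0.0):
    """Unified frame transformation (17): f' = R(theta) . {f - E . t}."""
    return rot(theta) @ (f - unified_E(feature_type, alpha) @ t)
```

For a point, E is the identity and (17) reduces to (12); for a line, E = I₂ − l·lᵀ removes from t its component along the line direction, which cannot affect the line parameters.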

V. OPTIMAL POSE ESTIMATION

In this section, we present the mathematical development that results in optimal estimates t̂ = (t̂_x, t̂_y)ᵀ and θ̂ given the feature correspondences F and F′. Thus, we propose to find t̂ and θ̂ as the global minimum of the weighted cost function

J(z) = Σ_{n=1}^{N} ω_n · d²(f_n, f′_n, t, θ)    (18)

where the distance d²(f_n, f′_n, t, θ) is given by

d²(f_n, f′_n, t, θ) = ||r_n||²₂.    (19)

In (19),

r_n = f′_n − R(θ) · {f_n − E_n · t}    (20)

is the nth residual, r_n being a nonconvex function of the transformation parameters, and J(z) may have multiple minima. The procedure adopted here consists in finding all local minima z* and using as pose estimate ẑ the z* satisfying a selection criterion. In doing so, as a first

90 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 18, NO. 1, FEBRUARY 2002

condition, all local minima of J(z) satisfy the following set of equations:

∂J(z)/∂t_x = ∂J(z)/∂t_y = ∂J(z)/∂θ = 0.    (21)

Further development of (21) results in

M · t − N · κ = q    (22)
κᵀ · {r + S · Nᵀ · t} = 0    (23)

with κᵀ = ( cos(θ)  sin(θ) ), and

M = Σ_{n=1}^{N} ω_n [ c_n² + a_n⁴         c_n·(a_n² + b_n²)
                      c_n·(a_n² + b_n²)   c_n² + b_n⁴       ]    (24)

N = Σ_{n=1}^{N} ω_n [ −a_n²·x′_n − c_n·y′_n   a_n²·y′_n − c_n·x′_n
                      −c_n·x′_n − b_n²·y′_n   c_n·y′_n − b_n²·x′_n ]    (25)

q = Σ_{n=1}^{N} ω_n ( c_n·y_n + a_n²·x_n
                      c_n·x_n + b_n²·y_n )    (26)

r = Σ_{n=1}^{N} ω_n ( −x_n·y′_n + y_n·x′_n
                      −y_n·y′_n − x_n·x′_n )    (27)

S = [  0  1
      −1  0 ].    (28)

From (22) and (23), we obtain

κᵀ · Λ · κ + βᵀ · κ = 0    (29)

with

Λ = S · Nᵀ · M⁻¹ · N    (30)
β = r + S · Nᵀ · M⁻¹ · q.    (31)

We can verify that Nᵀ · M⁻¹ · N is a symmetric matrix, since M is itself symmetric. Let the entries of Λ and β be referred to as

Λ = [ λ₁₁  λ₁₂
      λ₂₁  λ₂₂ ],    β = (β₁, β₂)ᵀ.

Thus, Nᵀ · M⁻¹ · N being symmetric and given the form of the matrix S, we have λ₁₁ = −λ₂₂. With this simple relation, further development of (29) results in

λ₁₁·cos(2θ) + ((λ₁₂ + λ₂₁)/2)·sin(2θ) + β₁·cos(θ) + β₂·sin(θ) = 0.    (32)

As presented in Appendix A, the trigonometric equation (32) has at most four solutions for θ, named θ*_i, i = 1, …, 4. Since these solutions satisfy (21), they are the rotation components of the local minima and maxima of J. The translation t*_i associated with each θ*_i is obtained from (22) and given by

t*_i = M⁻¹ · (N · κ*_i + q)    (33)

with κ*_i = ( cos(θ*_i)  sin(θ*_i) )ᵀ. These solutions respect the first condition given by (21), which is also satisfied by the local maxima. Thus, as a restrictive condition, we retain as candidate solutions for θ̂ only the minima, which satisfy ∂²J(z)/∂θ*ᵢ² > 0. Necessarily, at most two of them can be real-valued minima. If we have only one real minimum, it is used as the pose estimate. On the other hand, we have verified two-local-minima cases in symmetric map configurations. Thus, in such cases, the final decision may be based on the side of view of line features or on the minimum distance with respect to the predicted pose. The pose estimation can fail if the feature configuration does not allow a complete, unambiguous solution (see the simulation in Section VIII). This can be verified in real time by a test on the conditioning of the M matrix.
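The estimator above can be sketched end to end. For clarity, this illustration concentrates the cost: for each candidate θ the optimal translation follows from (22), and θ itself is found by a grid search over (−π, π] instead of extracting the four roots of the trigonometric equation (32) via Appendix A. All function and variable names are ours, not the paper's.

```python
import numpy as np

def wls_pose(F, Fp, E_list, w=None, grid=3600):
    """Sketch of the WLS pose estimate from N corresponding unified features.

    F, Fp  : (N, 2) arrays of feature vectors f_n (global) and f'_n (local)
    E_list : list of the 2x2 E_n matrices of (17)
    w      : positive correspondence weights (default: all ones)
    """
    F, Fp = np.asarray(F, float), np.asarray(Fp, float)
    N = len(F)
    w = np.ones(N) if w is None else np.asarray(w, float)
    # M and q of (24) and (26): M = sum w_n E_n^2, q = sum w_n E_n f_n
    M = sum(w[n] * E_list[n] @ E_list[n] for n in range(N))
    q = sum(w[n] * E_list[n] @ F[n] for n in range(N))

    def cost_and_t(theta):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, s], [-s, c]])
        # Optimal translation for this theta, from (22):
        # M t = q - sum_n w_n E_n R^T f'_n  (equivalently t = M^-1 (N kappa + q))
        rhs = q - sum(w[n] * E_list[n] @ R.T @ Fp[n] for n in range(N))
        t = np.linalg.solve(M, rhs)
        res = [Fp[n] - R @ (F[n] - E_list[n] @ t) for n in range(N)]  # (20)
        return sum(w[n] * res[n] @ res[n] for n in range(N)), t

    thetas = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    costs = [cost_and_t(th)[0] for th in thetas]
    theta = thetas[int(np.argmin(costs))]
    return theta, cost_and_t(theta)[1]
```

On the two lines and one point simulation of Section VIII, this recovers t = (2, 3) and θ = 2π/3. A production implementation would instead keep all real minima of (32) and apply the selection criterion described above.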

VI. POSE UNCERTAINTY ESTIMATION

The uncertainty measure associated with ẑ can be used as a validation gate for the pose estimate, or for global map updating. In order to estimate this uncertainty, we assume that all residuals r_n follow a multivariate normal distribution with zero mean. So, we consider that they are independently and identically distributed (i.i.d.) as N(0, P_r). We also take into account the features’ uncertainty given by the covariance matrices P_{f_n} and P_{f′_n}.

In order to estimate the covariance matrix P_z, we propagate the uncertainties associated with r_n, f_n, and f′_n to the pose estimate ẑ, according to the methodology described in [16]:

P_z = [∂g/∂z]⁻¹ · {Γ_r + Γ_f + Γ_f′} · [∂g/∂z]⁻¹    (34)

with g = ∂J/∂z and

Γ_r = Σ_{n=1}^{N} (∂g/∂r_n) · P_r · (∂g/∂r_n)ᵀ    (35)
Γ_f = Σ_{n=1}^{N} (∂g/∂f_n) · P_{f_n} · (∂g/∂f_n)ᵀ    (36)
Γ_f′ = Σ_{n=1}^{N} (∂g/∂f′_n) · P_{f′_n} · (∂g/∂f′_n)ᵀ.    (37)

P_{f_n} and P_{f′_n} are provided as inputs, and P̂_r is the estimated residual covariance matrix. We use the weighted sample covariance as an estimate of P_r, given by

P̂_r = ( Σ_{n=1}^{N} ω_n · (r_n · r_nᵀ) ) / ( Σ_{n=1}^{N} ω_n ).    (38)

∂g/∂z being the Hessian of J with respect to z, this matrix is nonsingular at the local minima ẑ of the cost function. Thus, its reciprocal always exists and is used to compute P_z. Equation (34) has been obtained by assuming independence between r_n, f_n, and f′_n. However, we can see clearly from (20) that the residuals and the features f_n and f′_n are correlated. In the above development, we have assumed independence only to easily compute P_z. Furthermore, P_z may take into account only the feature uncertainties by setting P_r = 0. Instead, we prefer to consider the residuals because they embody spurious effects such as bias on feature parameters and matching errors.

VII. ALGORITHM COMPLEXITY

The WLS algorithm presents an O(N) polynomial complexity. With respect to the batch version of the EKF, which is O(N³), the WLS is far less complex. On the other hand, an O(N) polynomial complexity is also achieved by the sequential version of the EKF, which uses the recursive form of the EKF to treat the measurements sequentially, by assuming that all measurements are independent.

In Fig. 2, the computing time of the C language implementations of these algorithms was precisely measured in a real-time kernel, the RTX produced by Venturcom Inc., running on a PIII 450-MHz CPU. The corresponding features were randomly generated. For medium-size maps, the WLS and the sequential EKF present almost the same computing time, but they are three orders of


Fig. 2. WLS and EKF computing times as a function of the number of corresponding features.

Fig. 3. The two lines and one point problem.

magnitude faster than the batch EKF. This results in a faster pose estimation, even with large environment maps. It must be pointed out that in a real-time localization system, the localization task usually has a lower priority than control and data acquisition tasks. This means that the execution time of the EKF may be greatly increased when higher priority tasks get CPU control during its execution. Thus, the use of the batch EKF becomes prohibitive for a large number of feature correspondences in real-time mobile robot localization.

VIII. SIMULATION

In Fig. 3, we present the two lines and one point problem. The coordinate systems XY and X′Y′ are related by t = (2 m, 3 m) and θ = 2π/3 rad. This scenario can be found in real indoor mobile robot localization when a corner composed of the intersection of two walls (features 1 and 2) is detected, and only one of the wall extremities (feature 3) is reliably observed. This problem is not separable, i.e., it is not possible to estimate the frame transformation parameters from only one type of feature. An intuitive hybrid procedure may be first to reason only with the lines and find the two possible solutions, which differ by π rad in rotation, and second to test each solution against the point and choose the one with the lowest residual. Nevertheless, it is not possible to extend this procedure to any greater number of features, and its estimates are suboptimal if the features do not match exactly.

Fig. 4. The Omni mobile robot.

The unified feature vectors for the problem in Fig. 3 are

f₁ = (0, 4)ᵀ,  f₂ = (4, 0)ᵀ,  f₃ = (1, 4)ᵀ
f′₁ = (√3/2, −1/2)ᵀ,  f′₂ = (−1, −√3)ᵀ,  f′₃ = ((√3 + 1)/2, (√3 − 1)/2)ᵀ.

The WLS method solves this problem, yielding t̂ = (2 m, 3 m) and θ̂ = 2π/3 rad. In this simulation, we have employed equal correspondence weights ω₁ = ω₂ = ω₃ = 1. This example illustrates the ability of our algorithm to solve the pose estimation problem with a minimum number of features. However, the problem must be well conditioned. This would not be the case if the point feature were not detected and the only used features were the lines. This problem can be avoided by using the lines’ intersection point as a feature. In this case, we have multiple minima of the cost function, with angles −π/3 rad and 2π/3 rad. As discussed in Section V, the final decision would be taken based on the distance of the estimates with respect to a predicted pose, or by considering the lines’ observation side. It is clear that a solution could not be given for two parallel lines.
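The listed unified feature vectors can be checked directly against the frame transformation (17); the short script below (our notation) reproduces them from t = (2, 3) and θ = 2π/3:

```python
import numpy as np

theta, t = 2 * np.pi / 3, np.array([2.0, 3.0])
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, s], [-s, c]])          # R(theta), (11)

def E_line(alpha):
    # E of (17) for a line whose closest point to the origin
    # lies at angle alpha
    a, b = np.cos(alpha), np.sin(alpha)
    return np.array([[a * a, a * b], [a * b, b * b]])

f1, f2, f3 = np.array([0.0, 4.0]), np.array([4.0, 0.0]), np.array([1.0, 4.0])
f1p = R @ (f1 - E_line(np.pi / 2) @ t)   # line 1 (alpha = pi/2)
f2p = R @ (f2 - E_line(0.0) @ t)         # line 2 (alpha = 0)
f3p = R @ (f3 - t)                       # point feature (E = I2)
print(f1p, f2p, f3p)
```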

IX. EXPERIMENTAL EVALUATION

This experimental evaluation allowed us to compare the performance of the WLS and EKF algorithms in a real cluttered environment. It was carried out with our omnidirectional mobile robot Omni shown in Fig. 4. This platform is equipped with a laser rangefinder, a video camera, a high-precision gyrometer, and optical encoders associated with the motion axes. We have a PIII 450-MHz CPU-based computer installed on the robot, running Windows NT 4.0 with the real-time extension RTX. The global map used for this experimentation, shown in Fig. 5, was hand built from a set of superposed range images. The initial robot pose was at the origin of the global frame. Ninety-five range images were acquired with the robot in motion during 95 s. The reference poses, shown as small squares (with some pose indexes), are provided by a combined gyrometer-encoders dead-reckoning technique. The high accuracy of this method allows us to use it as a reference


Fig. 5. The global map of the experimental environment. The superposed scans are shown to illustrate how cluttered the environment is.

for the limited time duration of this experiment. As the range images were acquired in motion, the deformation of these images was corrected in real time using the encoder data [14]. The line features of the local maps were extracted using a split-and-merge algorithm derived from the fuzzy C-means [17]. Point features are reliable breakpoints located at the extremities of the lines, and some ellipsoidal clusters corresponding to small obstacles. The unified feature representation was used for both algorithms. In these experiments, video images were not used. In the sequel, z_k(t|t-1) and z_k(t) are the predicted and estimated poses corresponding to the kth pose index.

In order to compare the EKF and the WLS under the same conditions, this evaluation was carried out off-line by using real data, i.e., laser range images, encoder readings, and reference poses at each pose index k, k = 1, ..., 95.

We have performed three experiments, denoted A, B, and C, as follows.

• In experiment A: z_k(t|t-1) is computed from real encoder measurements, a method far less precise than the reference pose. This experiment allows us to evaluate the algorithms' performance when the prediction phase uses only odometry;

• In experiment B: z_k(t|t-1) is the reference pose contaminated by a simulated additive Gaussian noise with covariance matrix P_z(t|t-1) = diag((0.25 m)^2, (0.25 m)^2, (3°)^2). Here, the prediction uncertainty is larger than in experiment A;

• In experiment C: z_k(t|t-1) is the reference pose perturbed by two random noises: a Gaussian noise with covariance matrix P_w = diag((0.15 m)^2, (0.15 m)^2, (2°)^2), and another noise with a uniform distribution in the range [-0.3 m, 0.3 m] for x_k(t|t-1) and y_k(t|t-1), and in the range [-3°, 3°] for θ_k(t|t-1). The Gaussian noise simulates the uncertainty propagated by odometry, and the uniform noise represents wheel slippage or other non-Gaussian error sources. The map matching and pose estimation procedures have knowledge of the Gaussian effect only. This experiment evaluates the robustness of the different approaches when errors on z_k(t|t-1) do not follow a Gaussian distribution.
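The experiment-C perturbation of the reference pose can be sketched as follows; the function name and the use of NumPy's default random generator are our choices, and angles are handled in radians.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def perturb_pose_exp_c(pose_ref):
    """Experiment-C-style perturbation of a reference pose (x, y, theta):
    zero-mean Gaussian noise with standard deviations (0.15 m, 0.15 m, 2 deg)
    plus uniform noise in [-0.3, 0.3] m for x, y and [-3, 3] deg for theta."""
    std = np.array([0.15, 0.15, np.deg2rad(2.0)])
    half_width = np.array([0.3, 0.3, np.deg2rad(3.0)])
    gaussian = rng.normal(0.0, std)
    uniform = rng.uniform(-half_width, half_width)
    return np.asarray(pose_ref, dtype=float) + gaussian + uniform
```

Since the estimator is told only about the Gaussian part, the uniform component acts as unmodeled error, which is exactly what experiment C is designed to probe.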

For the kth pose index of the above experiments, the set of corresponding features is found by applying the classical Mahalanobis test to all possible feature correspondences between M and M', given the predicted pose z_k(t|t-1).
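The classical Mahalanobis validation gate can be sketched as below. The paper does not state which gate probability it uses, so the 99% chi-square threshold for two degrees of freedom (9.21) is an illustrative choice.

```python
import numpy as np

def mahalanobis_gate(innovation, S, threshold=9.21):
    """Classical Mahalanobis test: accept a candidate correspondence if
    d^2 = nu^T S^{-1} nu <= threshold, where nu is the innovation and S its
    covariance. 9.21 is the 99% chi-square gate for 2 dof (illustrative)."""
    nu = np.atleast_1d(np.asarray(innovation, dtype=float))
    d2 = float(nu @ np.linalg.solve(S, nu))  # avoids an explicit inverse
    return d2 <= threshold
```

Applying the gate to every pair in M × M' yields the candidate correspondence set handed to the pose estimator.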

The conditions of experiments B and C require robustness of the pose estimation approaches. A fast, robust, iterated version of the WLS is obtained by using the following weighting function, which is updated at each iteration:

λ_n = exp(-||r_n|| / σ)    (39)

with σ = -2.6 · median(||r_1||, ..., ||r_N||) / ln(0.1), where r_n = f'_n - h(z, f_n) is the nth residual and σ determines the acceptance window width.

For the first iteration, all weights are equal to 1. In this way, the robust WLS first computes a pose estimate which may be better than the z_k(t|t-1) provided by odometry. In our implementation, the maximum number of iterations is fixed at 5. During the iterations, only the pose estimate is computed. Its associated covariance matrix is computed using the weights of the last iteration. This allows us to obtain a faster algorithm. If we use a robust function such as Huber's weighting function [18], we obtain an iterated M-estimator. However, in order to converge, the M-estimator may require a large number of iterations. The proposed iterated estimator has presented satisfactory results in our experiments.
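The per-iteration weight update of (39) can be sketched directly; the array shapes (one residual vector per correspondence) are an assumption of ours.

```python
import numpy as np

def robust_weights(residuals):
    """Correspondence weights lambda_n = exp(-||r_n|| / sigma) from (39),
    with sigma = -2.6 * median(||r_1||, ..., ||r_N||) / ln(0.1).
    residuals: (N, d) array, one residual vector r_n per correspondence."""
    norms = np.linalg.norm(residuals, axis=1)
    # ln(0.1) < 0, so sigma > 0; the median makes the scale outlier-resistant.
    sigma = -2.6 * np.median(norms) / np.log(0.1)
    return np.exp(-norms / sigma)
```

Note that with this choice of σ, a residual at the median norm always receives the same weight, exp(ln(0.1)/2.6) ≈ 0.41, so the window adapts to the overall residual scale at each iteration.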

Table I presents the mean absolute errors (MAEs) of the algorithms for the three experiments (best results are in bold font). The MAE of the rotation estimate is almost the same for the two algorithms. However, the robust WLS gave better results for t_x and t_y estimation in all experiments. The results of the robust WLS are even better for experiments B and C. In these experiments, the theoretical foundations on which the EKF is based are not verified. In experiment B, z_k(t|t-1) is far from the true pose, and truncation errors caused by the linearization of the measurement model jeopardize the EKF performance. In experiment C, the uncertainties on z_k(t|t-1) are not Gaussian. Even if the differences do not seem very significant, it should be pointed out that they correspond to mean values. Moreover, in a system based on feature tracking, an increase of the estimation errors at time step k is propagated to the next estimation cycle at k+1 by odometry and may result in the failure of the pose tracking.

In these experiments, we have used a hand-built global map. Thus, it is likely that feature correspondences are not perfect, and this may inject extra non-Gaussian noise into the pose estimation system. In order to obtain lower estimation errors with the EKF, extra artificial noise has been added to the feature correspondences. This results in final estimates which are closer to the prediction, so that observations have less influence on pose updating. Without adding such artificial noise, our first evaluations led the filter to divergence.

X. CONCLUDING REMARKS

We have presented in this paper an absolute pose estimation algorithm for map-based mobile robot localization. The absolute pose estimate is optimal in the weighted least-squares sense. This method does not suffer from the nonconvergence problems associated with linearization as in the classical EKF approach. The computational time of the proposed solution is O(N), N being the number of feature correspondences. The batch EKF has an O(N^3) computational cost. Hence, the algorithm presented in this paper is an alternative to achieve more accurate mobile robot localization, which is a necessary but not a sufficient condition for autonomous navigation. An experimental comparison of the WLS with the EKF algorithm has been reported, with a robust approach to compute the correspondence weights in the WLS. Since the algorithm only needs as input a set of corresponding features, its use for mobile robot relocalization is straightforward by using a global


TABLE I
EXPERIMENTAL EVALUATION: MAEs FOR ROBUST WLS AND EKF ALGORITHMS.

matching approach to find such correspondences. Furthermore, an iterated reweighted M-estimator can be implemented by using robust weighting functions to compute the feature correspondence weights. Since such estimators are well known to be robust against outliers, which may result from false matches, we would expect more accurate pose estimation even in the occurrence of wheel slippage or nonsystematic odometric errors. In Section VIII, we have shown a simulation which illustrates the limitation of the proposed approach to compute an estimate in a noncomplete case. In such cases, failure can be detected and an alternative recursive and fast approach such as the sequential EKF can be used.

Finally, some additional considerations should be noted. This is a batch approach, not a recursive one. As a consequence, a pose estimate is only obtained after processing all current measurements. Thus, one could claim that the use of a batch approach instead of the sequential EKF filter is not very attractive when measurements are sequential (for example, with a laser rangefinder). In such a case, we agree that raw data are acquired sequentially. However, geometrical feature extraction is in most cases iterative, and local map building using heterogeneous features is essentially a batch process. Therefore, even when raw data are acquired sequentially, the local map features are only available after the complete sensor data processing. A recent example is the work of Arras et al. [13]. In that paper, as mentioned in Section II, the EKF formulation was successfully used with a multisensory system composed of a video camera and a laser rangefinder. The processing of measurements is done sequentially, but only after feature extraction. So, in this case, the choice of the sequential filter is based only on the real-time requirements of the system, not on the manner in which data are acquired.

Recently, we have developed a multisensory localization system similar to that in [13], but our method explicitly uses local environment maps and our pose estimation is provided by the robust WLS version. Both implementations are more immune to robot motion when performing dynamic localization [15] (also referred to as on-the-fly localization in [13]). In our case, multiple feature correspondences are accepted (they are very common), and it is the role of the pose estimator to give a larger weight to the most likely correspondences as the number of iterations increases. We have obtained promising results in long corridors at reasonable robot speeds, even in the presence of moving obstacles.

APPENDIX

SOLVING THE ROTATION TRIGONOMETRIC EQUATION

The roots θ of (32) are the rotation components θ*_i of all local minima of the cost function J [see (18)]. By using Maple, a commercial symbolic mathematics package produced by Waterloo Maple, Inc., (32) is found to have four roots θ*_i, with i = 1, ..., 4, calculated as

θ*_i = arctan( (2Σ_11 x_i^2 - Σ_11 + ν_1 x_i) / ((Σ_12 + Σ_21) x_i + ν_2), x_i )    (40)

where arctan(y, x) denotes the two-argument (quadrant-resolving) arctangent, and the x_i's are the roots of the following fourth-order polynomial:

α_4 x^4 + α_3 x^3 + α_2 x^2 + α_1 x + α_0 = 0.    (41)

The polynomial (41) has real coefficients computed from the entries of Σ and ν as

α_4 = 4Σ_11^2 + (Σ_12 + Σ_21)^2
α_3 = 4ν_1 Σ_11 + 2ν_2 (Σ_12 + Σ_21)
α_2 = -4Σ_11^2 + ν_2^2 + ν_1^2 - (Σ_12 + Σ_21)^2
α_1 = -2(ν_1 Σ_11 + ν_2 (Σ_12 + Σ_21))
α_0 = Σ_11^2 - ν_2^2.

In our implementation, the roots of (41) are computed by using an eigenvalue approach presented in [19].
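The eigenvalue approach of [19] finds polynomial roots as the eigenvalues of the companion matrix of the monic polynomial. A minimal sketch for the quartic (41), with our own function name:

```python
import numpy as np

def quartic_roots(a4, a3, a2, a1, a0):
    """Roots of a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 = 0, computed as the
    eigenvalues of the companion matrix of the monic polynomial."""
    c3, c2, c1, c0 = np.array([a3, a2, a1, a0]) / a4  # normalize to monic
    C = np.array([[-c3, -c2, -c1, -c0],
                  [1.0,  0.0,  0.0,  0.0],
                  [0.0,  1.0,  0.0,  0.0],
                  [0.0,  0.0,  1.0,  0.0]])
    return np.linalg.eigvals(C)  # four roots, possibly complex
```

In the pose estimation context, only the real roots x_i would be kept and substituted into (40); the final θ* is then the candidate minimizing the cost J.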

REFERENCES

[1] A. Ohya, A. Kosaka, and A. Kak, "Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing," IEEE Trans. Robot. Automat., vol. 14, pp. 969–978, Dec. 1998.

[2] G. Borghi and V. Caglioti, "Minimum uncertainty explorations in the self-localization of mobile robots," IEEE Trans. Robot. Automat., vol. 14, pp. 902–911, Dec. 1998.

[3] J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Trans. Robot. Automat., vol. 7, pp. 376–382, June 1991.

[4] J. L. Crowley, "World modeling and position estimation for a mobile robot using ultrasonic ranging," in IEEE Int. Conf. on Robotics and Automation, 1989, pp. 674–680.

[5] J. Neira, J. D. Tardós, J. Horn, and G. Schmidt, "Fusing range and intensity images for mobile robot localization," IEEE Trans. Robot. Automat., vol. 15, pp. 76–84, Feb. 1999.

[6] A. H. Jazwinski, Stochastic Processes and Filtering Theory. New York: Academic, 1970.

[7] Y. Bar-Shalom and X.-R. Li, Estimation and Tracking: Principles, Techniques and Software. Norwood, MA: Artech House, 1987.

[8] M. A. Abidi and R. C. Gonzalez, Eds., Data Fusion in Robotics and Machine Intelligence. New York: Academic, 1992.

[9] R. E. Kalman, "A new approach to linear filtering and prediction problems," Trans. ASME, Ser. D: J. Basic Eng., vol. 82, pp. 35–45, 1960.

[10] L. Jetto, S. Longhi, and G. Venturini, "Development and experimental validation of an adaptive extended Kalman filter for the localization of mobile robots," IEEE Trans. Robot. Automat., vol. 15, pp. 219–229, Apr. 1999.

[11] S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte, "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Trans. Automat. Contr., vol. 45, pp. 477–482, Mar. 2000.

[12] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, "Robust Monte Carlo localization for mobile robots," Artif. Intell. J., vol. 128, no. 1–2, pp. 99–141, May 2001.

[13] K. O. Arras, N. Tomatis, B. T. Jensen, and R. Siegwart, "Multisensor on-the-fly localization: Precision and reliability for applications," Robot. Autonomous Syst., vol. 34, pp. 131–143, 2001.

[14] G. A. Borges, M.-J. Aldon, and T. Gil, "An optimal pose estimator for map-based mobile robot dynamic localization: Experimental comparison with the EKF," in IEEE Int. Conf. Robot. Automation, 2001, pp. 1585–1590.

[15] G. A. Borges and M.-J. Aldon, "Design of a robust real-time dynamic localization system for mobile robots," in Proc. 9th Int. Symp. on Intelligent Robotic Systems, 2001, pp. 425–434.

[16] R. M. Haralick, "Propagating covariances in computer vision," in Int. Conf. Pattern Recognition, 1994, pp. 493–498.

[17] G. A. Borges and M.-J. Aldon, "A split-and-merge segmentation algorithm for line extraction in 2-D range images," in 15th Int. Conf. on Pattern Recognition, Sept. 2000, pp. 441–444.

[18] P. J. Huber, Robust Statistics, 1st ed. New York: Wiley, 1981.

[19] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 1992.

Control of Cooperating Mobile Manipulators

Thomas G. Sugar and Vijay Kumar

Abstract—We describe a framework and control algorithms for coordinating multiple mobile robots with manipulators, focusing on tasks that require grasping, manipulating, and transporting large and possibly flexible objects without special-purpose fixtures. Because each robot has an independent controller and is autonomous, the coordination and synergy are realized through sensing and communication. The robots can cooperatively transport objects and march in a tightly controlled formation, while also having the capability to navigate autonomously. We describe the key aspects of the overall hierarchy and the basic algorithms, with specific applications to our experimental testbed consisting of three robots. We describe results from many experiments that demonstrate the ability of the system to carry flexible boards and large boxes as well as the system's robustness to alignment and odometry errors.

Index Terms—Compliance, locomotion and grasping, mobile robot cooperation.

I. INTRODUCTION

We address the coordination of a small team of mobile manipulators that cooperatively perform manipulation tasks such as grasping a large, flexible object and transporting it in a two-dimensional environment with obstacles. Such a system of robots is useful in material handling applications where there are no special-purpose material handling devices (for example, conveyors, fixtures, or pallets are not present). In particular, when the team is remotely controlled or supervised, the system can be used for the clean-up of hazardous waste material [1].

We focus on tasks that simply cannot be performed by a single mobile robot. As examples, consider the transportation of the large box or the flexible board shown in Fig. 1. The object, a box or a flexible board in our experiments, is large enough that one platform cannot carry it by itself without expensive, special-purpose tooling that is specific to the object. In our approach, two or more robots are coordinated to accomplish the task of carrying an object. We demonstrate four key features of our system: (a) coordination and cooperation between tightly coupled mobile robots; (b) palm-like grasping of an object; (c) decentralized control of cooperating robots; and (d) robust grasping and transportation.

The physical interaction between the robots and the tight coupling required of autonomous controllers pose many challenging engineering problems, including:

1) the manipulators must be capable of controlling the grasp forces in a robust fashion;

Manuscript received September 25, 2000; revised August 23, 2001. This paper was recommended for publication by Associate Editor L. Kavraki and Editor I. Walker upon evaluation of the reviewers' comments.

T. G. Sugar is with the Department of Mechanical and Aerospace Engineering, Arizona State University, Tempe, AZ 85287 USA (e-mail: [email protected]).

V. Kumar is with the GRASP Laboratory, University of Pennsylvania, Philadelphia, PA 19104-6228 USA.

Publisher Item Identifier S 1042-296X(02)01375-7.

2) there needs to be an efficient way of communicating and sharing information in real time;

3) it should be possible to organize the robots differently for different tasks, forcing the controllers to be independent and yet able to function in a tightly coupled architecture when carrying objects;

4) the robots must coordinate their trajectories in order to maintain a desired formation while maintaining the grasp; unlike the task of pushing a box, the robots must maintain a formation while grasping and carrying the object;

5) the team must be robust with respect to errors that include robot positioning errors and modeling errors.

There is extensive literature on the subject of coordinating a group of robots. The behavior-based control paradigm of [2]–[4] has been shown to be successful in controlling a large team of loosely coupled robots. It is possible to synthesize an impressive array of group behaviors [5] and coordinate robots for tasks like cooperative pushing [6]–[8], clustering in formations [9], and exploration [10] using variations of this basic approach. However, it is not clear that this approach can be used in its current form to maintain a tight formation and control grasp forces for holding and transporting objects.

When two or three robots are tightly coupled together in a specified formation for a specified manipulation task, the control problem is well-defined, and the feedback control laws for coordinating a small team of robots are reasonably well understood [11]–[13]. The feedback laws for coordinated control of manipulation and locomotion are discussed in [14], [15]. Our work differs from such approaches in three respects.

In contrast to approaches in which the planning and control problems for the multirobot team are solved centrally [16], our goal is to decentralize the control and planning to the extent possible. Our approach to decentralized control relies on the decoupling of the subproblems of controlling locomotion and manipulation [11], [17]. Second, our formulation allows for changes in formation, as well as changes in the composition of the team. Third, our formulation allows for a small number of robots, although there are many unresolved questions about the optimal organization of large teams of robots.

Finally, it is important to note that our method for decoupling the control issues in manipulation and locomotion relies on a novel manipulator design. Ideally, we want each mobile manipulator to be equipped with an end effector that allows it to exert controlled forces and moments on the object and can accommodate small position and orientation errors resulting from the use of nonholonomic robots. However, the control of contact forces and moments is notoriously difficult [18]. We propose a parallel manipulator design in which the control of contact interactions is accomplished via a set of inexpensive position-controlled motors. Further, as shown in the paper, we will only require one of the two or three team members to be equipped with an actively controlled end effector. While the details of the design of the manipulator are available elsewhere [19], we will review the basic principles insofar as they relate to the performance of the system.

An overall framework for the system is described. The experimental system and control architecture are given next. Finally, multiple experiments are presented that demonstrate the ability of our system to transport different objects.

II. THE TEAM OF ROBOTS

A. Organization

In this section, we briefly discuss the organization of the team of mobile manipulators and outline the basic assumptions for this work. We
