
IEEE TRANSACTIONS ON ROBOTICS, VOL. 22, NO. 3, JUNE 2006

Omnidirectional Vision Scan Matching for Robot Localization in Dynamic Environments

Emanuele Menegatti, Member, IEEE, Alberto Pretto, Alberto Scarpa, and Enrico Pagello, Member, IEEE

Abstract—The localization problem for an autonomous robot moving in a known environment is a well-studied problem which has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor with classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real-time on low-cost hardware.

Index Terms—Mobile robot localization, Monte Carlo localization, omnidirectional vision, scan matching.

I. INTRODUCTION

LOCALIZATION is the fundamental problem of estimating the pose of the robot inside the environment. Some of the most successful implementations of robust localization systems are based on the Monte Carlo localization approach [5], [23]. The Monte Carlo localization approach has been implemented on robots fitted either with rangefinder sensors or with vision sensors. Lately, vision sensors have been preferred over rangefinders, because they are cheaper and provide richer information about the environment. Moreover, they are passive sensors, so they do not interfere with other sensors and do not pose safety concerns in populated environments.

In this paper, we consider the problem of Monte Carlo localization using an omnidirectional camera. The vision system has been designed to extract the distances of the closest color transitions of interest existing in the environment.

Manuscript received March 11, 2005; revised November 11, 2005. This paper was recommended for publication by Associate Editor J. Kosecka and Editor F. Park upon evaluation of the reviewers' comments. This paper was presented in part at the RoboCup Symposium, Lisbon, Portugal, July 2004, and in part at the International Conference on Intelligent Robots and Systems, Sendai, Japan, October 2004.

E. Menegatti, A. Pretto, and A. Scarpa are with the Intelligent Autonomous Systems Laboratory, Department of Information Engineering, University of Padua, I-35131 Padua, Italy (e-mail: [email protected]).

E. Pagello is with the Intelligent Autonomous Systems Laboratory, Department of Information Engineering, University of Padua, I-35131 Padua, Italy, and also with the Institute ISIB, CNR, Padua, Italy.

Digital Object Identifier 10.1109/TRO.2006.875495

Our system uses an omnidirectional camera to emulate and enhance the behavior of rangefinder sensors. This results in a scan of the current location similar to the one obtained with a laser rangefinder, enabling the use of Monte Carlo algorithms only slightly modified to account for this type of sensor. The most significant advantages with respect to classical rangefinders are: 1) a conventional rangefinder device senses the vertical obstacles in the environment, while our system is sensitive to the chromatic transitions in the environment, thus gathering richer information; and 2) our system can reject measurements if an occlusion is detected. Combining the omnidirectional vision scans with the Monte Carlo algorithms provides a localization system robust to occlusions and to localization failures, capable of exploiting as localization clues the natural color transitions existing in the environment. The Middle-Size RoboCup field is a highly suitable testbed for studying the localization problem in a highly dynamic and densely populated environment. In a dynamic multiagent world, precise localization is necessary to effectively perform high-level coordination behaviors. At the same time, the presence of other robots makes localization harder to perform. In fact, if the density of moving obstacles in the environment is high, occlusion of the robot's sensors is very frequent. Moreover, if, like in RoboCup Middle-Size, collisions among robots are frequent, the localization system must be able to recover from errors after collisions.

In this paper, we explicitly discuss the robustness of our system with respect to the classical problems of global localization, position tracking, and robot kidnapping [17]. We also provide a detailed discussion of the robustness against sensor occlusion when the robot moves in a densely populated environment [18]. In addition, we present experimental evidence that the system developed is not limited to the RoboCup domain, but works in a generic unmodified office-like environment. The only requirements for our system are: 1) an environment with meaningful color transitions; 2) a geometric map of that environment; and 3) inclusion of the color transitions in the map.

The following section discusses previous research related to this topic. Section III describes how we process the omnidirectional image to obtain range information. Section IV summarizes the well-known Monte Carlo localization algorithm and discusses the motion model and sensor model used in the experiments, as well as the modifications to the classical Monte Carlo localization to adapt it to our sensor. In Section V, we present the experiments performed with the robot in a RoboCup Middle-Size field and in the corridors of our department. A detailed analysis of the performance and robustness of the localization system is presented, paying particular attention to occlusions caused by other robots.


Fig. 1. Metric maps used for the computation of the expected scans. A represents the static obstacles (they are too sparse for an effective localization). B represents all the chromatic transitions of interest in the environment. (Color version available online at http://ieeexplore.ieee.org.)

Finally, in Section VI, conclusions are drawn.

II. RELATED WORK

The seminal works on Monte Carlo localization for mobile robots used rangefinders as the main sensors. The rangefinders were used to perform scans of static obstacles around the robot, and the localization was calculated by matching those scans with a metric map of the environment [5], [24]. However, in dynamic environments, the detectable static features are often not enough for a robust localization (as illustrated in Fig. 1), or they may be occluded by moving obstacles. One possibility is to design algorithms able to filter out the moving obstacles in the range scans, leaving only the static obstacles that can be used as landmarks. This is exemplified by distance filters [9], but it should be noted that these algorithms are usually computationally intensive. Another possibility is to use a sensor, such as a color camera, that provides richer information and, consequently, a more reliable "scan matching." In this paper, we adopted the latter approach.

Usually, Monte Carlo localization systems with vision sensors use cameras to recognize characteristic landmarks subsequently matched within a map [7], [21], or to find the reference image most similar to the image currently grabbed by the robot [11], [19], [20], [28], [29]. However, when the robot has to match the current reading with a previous reading, moving obstacles like people or other robots can impair the localization process. Several solutions have been adopted. One possibility is to look at features that cannot be occluded by moving obstacles, like the ceiling in a museum hall [4]. However, these features are not always available or they do not carry enough information.

Our sensor is able to detect occlusions as nonexpected color transitions, so it can use only a subset of reliable distances in the scan, obtaining a more precise localization. Color transitions are usually available and yield rich information about the environment structure (e.g., a change of carpet color, doors with a color different from the walls, etc.).

In the RoboCup Middle-Size competitions, an approach based only on laser rangefinders was used, very effectively, by the CS Freiburg Team [27]. They extracted the lines of the walls from the laser scans and matched them against a model of the field of play. However, when in 2002 the walls surrounding the field were removed, the reliability of this approach was impaired by the lack of static features detectable by a rangefinder sensor.

The static objects detectable by a rangefinder in the Middle-Size field (with the layout used since 2003) are presented in Fig. 1(a). The only detectable objects are the two goals and the four corner posts. With a vision system sensitive to color transitions, one can detect not only these static objects, but also all existing color transitions [Fig. 1(b)]. Schulenburg et al. combined a laser rangefinder and an omnidirectional camera to extend CS Freiburg's approach by detecting lines using both sensors [22]. However, the integration of laser data with vision data does not significantly improve the localization with respect to vision data alone (due to the shortage of laser-detectable features). Moreover, the image-processing algorithms used to extract the field lines are computationally demanding. Most of the approaches that use a camera as the only sensor extract geometric features (like lines and corners) from the images, performing template matching against a model of the environment [2]. Geometric features have also been determined by detecting color transitions in visual receptors placed along radial lines (as previously proposed [3]), with the environment represented by a 3-D computer-aided drawing (CAD) model [12], [25]. Roefer et al. located geometric features and bearings of landmarks by detecting color transitions along vertical lines [21]. In our approach, the chromatic transitions are not used to extract geometric features, but to perform scan matching against a 2-D image file that is a geometric map of the environment. In fact, we use the raw range scans, and this paper shows that a robust localization is achievable. Lenser and Veloso have developed a system which mimics the working of a sonar sensor using a monocular camera that detects obstacles (as color transitions) along previously determined lines in the image [14]. However, although the basic idea is similar, our aim is much broader: we want to mimic the working of a laser rangefinder with an omnidirectional camera [therefore, with a 360° field of view (FOV)] in order to be able not only to avoid the obstacles, as Lenser and Veloso do, but also to localize the robot with a Monte Carlo localization software almost unaltered from the one proposed by Thrun et al. [24], in which they used laser rangefinders. Another difference with the work of Lenser and Veloso is that we do not color-segment the whole image, but just look for color transitions along the radial lines of Fig. 2, saving a considerable amount of computation.

III. OMNIDIRECTIONAL CAMERA AS A RANGEFINDER

The main sensor of our robot is an omnidirectional camera. The camera is calibrated in order to be able to relate the distances measured in the image with the distances in the real world. Our "rangefinder" has a 360° FOV, much larger than that of conventional rangefinders. The omnidirectional camera is composed of a perspective camera pointed upward at a multipart mirror with a custom profile, depicted in Fig. 3 [16]. This profile was designed to have good accuracy both for short- and long-range measurements. In fact, conic omnidirectional mirrors fail to obtain good accuracy for short distance measurements (because the area close to the robot is mapped into a very small image area), while hyperbolic mirrors fail to obtain good accuracy for long distance measurements (because of the low radial resolution far away from the sensor). With our mirror, the area surrounding the robot is imaged in the wide external ring of the mirror, and the area far away from the robot is imaged in the inner part of the mirror [16].


Fig. 2. Omnidirectional image with the detected chromatic transitions. Green–white chromatic transitions are highlighted with light gray crosses, green–yellow transitions with dark gray crosses, and black pixels represent the receptor pixels used for the scan, which is performed at a discrete set of distances. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 3. Plot of the profile of the omnidirectional mirror mounted on the robot. (Color version available online at http://ieeexplore.ieee.org.)

The inner part of the mirror is used to measure objects farther than 1 m away from the robot, while the outer part is used to measure objects closer than 1 m from the robot; see Fig. 2.

The omnidirectional image is scanned for what we called chromatic transitions of interest. In the RoboCup domain, we are interested in green–white, green–blue, and green–yellow transitions. These transitions are related to the structure of the RoboCup field, where the playground is green, lines are white, and goals and corner posts are blue or yellow. In the office scenario, we are interested in the red–white and red–gray transitions, due to the colors available in the environment. The image is scanned along radial lines 6° apart, with a sampling step corresponding to 4 cm in the world coordinate system, as shown in Fig. 2. We first scan for chromatic transitions of interest close to the robot's body (i.e., in the outer mirror part); we then scan the inner part of the image for transitions up to 4 m away from the robot's body.
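As an illustration, the following Python sketch enumerates the receptor pixels along the radial scan lines (60 lines, 6° apart). The function world_to_pixel_radius stands in for the mirror calibration that maps ground distances to image radii; its linear form, and all names used here, are illustrative assumptions rather than the calibration actually used on the robot.

```python
import math

N_RAYS = 60          # radial scan lines, 6 degrees apart
STEP_CM = 4          # sampling step in world coordinates (cm)
MAX_DIST_CM = 400    # the inner mirror part is scanned up to 4 m from the robot

def world_to_pixel_radius(dist_cm):
    # Placeholder for the mirror calibration that maps a ground distance (cm)
    # to a radial pixel offset from the image center; a linear ramp is used
    # here only to make the sketch runnable.
    return 20.0 + 0.5 * dist_cm

def receptor_pixels():
    """Yield (ray index, world distance in cm, u, v) for every receptor."""
    for i in range(N_RAYS):
        theta = math.radians(6 * i)
        for dist in range(STEP_CM, MAX_DIST_CM + 1, STEP_CM):
            r = world_to_pixel_radius(dist)
            u = int(round(r * math.cos(theta)))   # offset from the image center
            v = int(round(r * math.sin(theta)))
            yield i, dist, u, v
```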

In RoboCup, a color quantization is usually performed on the image before any further image processing. Our system looks for the chromatic transitions of interest only along the receptors of the 60 rays depicted in Fig. 2. Therefore, we do not need to color-quantize the whole image; only some of the pixels lying along the 60 rays need to be classified into one of the eight RoboCup colors,1 plus a further class that includes all colors not included in the former classes (called unknown color). At the setup stage, the red–green–blue (RGB) color space is quantized into nine color classes. To achieve a real-time color quantization, a look-up table is stored in the main memory of the robot. The look-up table associates every possible RGB triple to one of the nine color classes.
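The following sketch shows one way such a look-up table can be built and queried; the color centroids, the nearest-centroid rule, and the rejection threshold are illustrative assumptions, not the exact quantization procedure used on the robot.

```python
import numpy as np

# Eight RoboCup color classes plus "unknown"; centroids and threshold are
# illustrative values, not the calibrated ones.
CLASSES = ["green", "white", "blue", "yellow", "red", "black", "cyan", "magenta", "unknown"]
CENTROIDS = np.array([[0, 130, 0], [255, 255, 255], [0, 0, 200], [220, 220, 0],
                      [200, 0, 0], [20, 20, 20], [0, 200, 200], [200, 0, 200]],
                     dtype=np.float32)
UNKNOWN = 8
MAX_DIST = 90.0  # RGB distance beyond which a pixel is classified as "unknown"

def build_lut():
    """Precompute a 256x256x256 table (about 16 MB) mapping every RGB triple
    to one of the nine color classes, kept in RAM for constant-time lookups."""
    lut = np.empty((256, 256, 256), dtype=np.uint8)
    g, b = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
    gb = np.stack([g, b], axis=-1).astype(np.float32)                # (256, 256, 2)
    for r in range(256):                                             # one red slice at a time
        rgb = np.concatenate([np.full((256, 256, 1), r, np.float32), gb], axis=-1)
        d = np.linalg.norm(rgb[:, :, None, :] - CENTROIDS, axis=-1)  # (256, 256, 8)
        lut[r] = d.argmin(axis=-1)
        lut[r][d.min(axis=-1) > MAX_DIST] = UNKNOWN
    return lut

def quantize(lut, r, g, b):
    """Classify a single pixel with one table lookup."""
    return CLASSES[lut[r, g, b]]
```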

The distances to the nearest chromatic transition of interest are stored in three vectors,2 one for each color transition of interest. During the radial scan, we can distinguish three situations:

1) a chromatic transition of interest is found, then the real distance of that point is stored in the corresponding vector;

2) no transition of interest is detected, then a characteristic value called INFINITY is stored in the vector (this means no transition can be found along this ray);

3) a nonexpected transition is found, then a characteristic value called FAKE_RAY is stored in the vector (this means something is occluding the vision sensor).

Moreover, we use the information about the static obstacles extracted from the map of Fig. 1(a) to improve the scanning process (e.g., if we find a yellow pixel, this is a goal or a corner post, so it is not worth looking further for a white line, and we stop the scanning process along this ray).

Algorithm 1 Omnidirectional vision as an enhanced rangefinder.
Function QUANT(p) returns the quantized color of pixel p.
Function REAL_DIST(p) returns the distance in the real world of pixel p.
Pixel (0, 0) is located in the image center.

Ensure: scan_white[N_RAYS], scan_blue[N_RAYS], scan_yellow[N_RAYS]

for i = 0 to N_RAYS - 1 do
    scan_white[i], scan_blue[i], scan_yellow[i] ← INFINITY
    c_prev ← QUANT(0, 0)
    for j = 1 to MAX_RAY do
        p ← pixel of ray i at sampling step j
        c ← QUANT(p)
        if c isn't unknown or green then
            if c is blue and c_prev is green then
                scan_blue[i] ← REAL_DIST(p)
                break
            else if c is yellow and c_prev is green then
                scan_yellow[i] ← REAL_DIST(p)
                break
            else if c is white then
                if scan_white[i] = INFINITY then
                    if c_prev is green then
                        scan_white[i] ← REAL_DIST(p)
                    else
                        scan_white[i] ← FAKE_RAY
                    end if
                end if
            else
                scan_blue[i], scan_yellow[i] ← FAKE_RAY
                if scan_white[i] = INFINITY then
                    scan_white[i] ← FAKE_RAY
                end if
                break
            end if
        end if
        c_prev ← c
    end for
end for

1In the RoboCup environment, the ball is red, the lines are white, one goal is blue and the other is yellow, the robots are black, and the robots' markers are cyan and magenta.

2The three vectors are called "scans" in the remainder of the paper.

The algorithm to find the nearest chromatic transitions of interest is presented in pseudocode in Algorithm 1 (to simplify comprehension, only the scan of the inner section of the multipart omnidirectional mirror is presented).

The scan obtained from the image is compared with the scans extracted from the chromatic map of the environment, called expected scans. The map in Fig. 1(b) shows the chromatic characteristics of the environment. We use this map to compute the expected scan by ray tracing, as will be explained in Section IV-B.

In summary, the advantages with respect to conventional rangefinders are that we have three scans for every pose of the robot (one for every chromatic transition of interest: green–white, green–blue, and green–yellow) and that we immediately know which rays of the scan should be discarded because of occluding objects (detected by nonexpected chromatic transitions). The limitations of our sensor are a lower accuracy than laser rangefinders and a sensitivity to changes in the illumination strong enough to alter the appearance of the colors in the environment.

To manage the uncertainty in the measurements, we slightly modified the classical Monte Carlo localization algorithm.

IV. MONTE CARLO LOCALIZATION

Monte Carlo localization is a well-known probabilistic method, in which the current pose of the robot is modeled as a posterior distribution conditioned on the sensors' data (1). The posterior probability distribution of the robot pose is also called the robot's belief. The belief about the robot's position is represented with a set of discrete points in the configuration space of the robot. These points are called particles. To update the belief over time, the particles are updated. Each particle is a hypothesis of the robot's pose, and it is weighted according to the posteriors. The belief about the robot's position is updated every time the robot makes a new measurement (i.e., it grabs a new image or a new odometry measure is available). This belief can be described by

Bel(l_t) = η · p(o_t | l_t) ∫ p(l_t | l_{t-1}, a_{t-1}) Bel(l_{t-1}) dl_{t-1}    (1)

where l_t is the robot pose at time t, o_t and a_{t-1} are, respectively, the sensor and the odometry readings, and η is a normalization constant. To calculate (1), two conditional densities, called the motion model and the sensor model, are needed. The motion model expresses the probability that the robot moved to a certain position given the odometry measures (kinematics); see Section IV-A. The sensor model describes the probability of obtaining a certain sensor measurement in a certain pose; see Section IV-B. The motion model and the sensor model depend, respectively, on the particular robot platform and on the particular sensor. The localization algorithm is composed of three steps:

1) all particles are moved according to the motion model of the last kinematics measure;

2) the weights of the particles are determined according to the sensor model for the current sensor reading;

3) a resampling step is performed: high-probability particles are replicated, low-probability ones are discarded. The process repeats from the beginning.

The resampling step is performed with the sampling importance resampling (SIR) algorithm [10] with the resampling technique of [13]. The final estimate of the robot pose is obtained by simply averaging the poses of all particles. For more details, refer to [5] and [24].
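A minimal sketch of one iteration of this three-step loop is given below; the motion_model and sensor_model arguments stand for the models of Sections IV-A and IV-B, and the simple averaging of headings is kept only to mirror the description above.

```python
import random

class Particle:
    def __init__(self, x, y, theta, w=1.0):
        self.x, self.y, self.theta, self.w = x, y, theta, w

def mcl_step(particles, odometry, scans, motion_model, sensor_model):
    """One iteration of the three-step loop: motion update, weighting, SIR resampling."""
    # 1) move every particle according to the motion model (odometry + noise)
    particles = [motion_model(p, odometry) for p in particles]
    # 2) weight every particle with the sensor model for the current scans
    for p in particles:
        p.w = sensor_model(scans, p)
    total = sum(p.w for p in particles) or 1e-300
    for p in particles:
        p.w /= total
    # 3) SIR resampling: replicate high-weight particles, drop low-weight ones
    chosen = random.choices(particles, weights=[p.w for p in particles], k=len(particles))
    particles = [Particle(p.x, p.y, p.theta) for p in chosen]
    # Pose estimate: plain average of the particle poses (a circular mean would
    # be more appropriate for the heading).
    n = len(particles)
    estimate = (sum(p.x for p in particles) / n,
                sum(p.y for p in particles) / n,
                sum(p.theta for p in particles) / n)
    return particles, estimate
```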

A. Motion Model

The motion model p(l_t | l_{t-1}, a_{t-1}) is a probabilistic representation of the robot kinematics, which describes a posterior density over possible successive robot poses. We implemented the Monte Carlo localization system on a holonomic robot, called Barney. The peculiarity of this robot is that it can move in any direction without the need of a previous rotation. A movement between two consecutive poses can thus be described by the triple (α, T, φ), where α is the difference of heading between the two poses, T is the translation, and φ is the motion direction. Updating the robot position according only to the kinematics does not take into account errors given by odometry inaccuracy and possible collisions of the robot with other obstacles. Therefore, a random noise term is added to the values given by the last odometry reading. Noise is modeled with zero-centered Gaussian random variables whose standard deviations depend on both the amount of translation and of rotation; the motion model is obtained by adding these noise terms to (α, T, φ) and applying the resulting noisy movement to the previous pose. For our holonomic platform, we experimentally determined standard deviations for the noise terms that overestimate the actual odometric errors and so provide good performance.
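The sketch below propagates a single particle according to this motion model; the (α, T, φ) parametrization follows the description above, while the noise magnitudes and the way they scale with the motion are placeholder assumptions (the calibrated values are not reproduced here).

```python
import math
import random

def sample_motion_model(pose, odom,
                        sigma_alpha=math.radians(2.0),   # placeholder noise levels;
                        sigma_t_rel=0.02,                # the calibrated standard
                        sigma_phi=math.radians(2.0)):    # deviations are not reproduced
    """Propagate one particle pose (x, y, theta) [m, m, rad] with an odometry
    reading odom = (alpha, t, phi): heading change, translation, and motion
    direction of the holonomic robot, plus zero-mean Gaussian noise that grows
    with the amount of motion."""
    x, y, theta = pose
    alpha, t, phi = odom
    alpha += random.gauss(0.0, sigma_alpha * (1.0 + abs(alpha)))
    t     += random.gauss(0.0, sigma_t_rel * abs(t))
    phi   += random.gauss(0.0, sigma_phi * (1.0 + abs(t)))
    x += t * math.cos(theta + phi)   # holonomic platform: the motion direction phi
    y += t * math.sin(theta + phi)   # is independent of the heading change alpha
    theta += alpha
    return (x, y, theta)
```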

B. Sensor Model

The sensor model p(o|l) describes the likelihood of obtaining a certain sensor reading o given a robot pose l. The sensor model is used to compute the weights of the particles. For each particle i, located in the pose l^(i), the associated weight w^(i) is proportional to p(o|l^(i)) (i.e., to the likelihood of obtaining the sensor reading o when the robot has pose l^(i)). To calculate p(o|l), we need to know the "expected scan" ô(l). The expected scan is the scan that an ideal noise-free sensor would measure in that pose if there were no obstacles in the environment [Fig. 4(a)]. Given the robot pose l, the expected scan for one of the three chromatic transitions of interest is composed of a set of expected distances, one for each of the rays of the scan (the black radial lines in Fig. 2): ô(l) = {ô_i, i = 1, ..., N_RAYS}. We can compute the expected distances ô_i for an ideal noise-free sensor in an empty environment using a ray-tracing technique. The basic idea is: 1) to reproduce the pose of the robot in the metric maps of Fig. 1; 2) to trace the rays exiting from the robot until they encounter the first chromatic transition of interest; and 3) to store the lengths of these rays in the expected scan. The likelihood p(o|l) can be calculated as p(o|l) = p(o|ô(l)). In other words, the probability p(o|ô) models the noise in the scan o with respect to the expected scan ô [5], [24].
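The following sketch illustrates the ray-tracing computation of an expected scan on a grid version of the chromatic map; the map representation and parameter names are assumptions made for the example.

```python
import math

INFINITY = float("inf")

def expected_scan(color_map, cell_cm, pose, transition,
                  n_rays=60, max_cm=400, step_cm=4):
    """Ray-trace the expected scan for one chromatic transition on a grid map.
    color_map[row][col] holds a color label per cell of size cell_cm; pose is
    (x, y, theta) in cm, cm, rad; transition is a pair such as ("green", "white")."""
    x, y, theta = pose
    scan = []
    for i in range(n_rays):
        angle = theta + math.radians(360.0 / n_rays * i)
        dist, prev = INFINITY, None
        for d in range(step_cm, max_cm + 1, step_cm):
            col = int((x + d * math.cos(angle)) / cell_cm)
            row = int((y + d * math.sin(angle)) / cell_cm)
            if not (0 <= row < len(color_map) and 0 <= col < len(color_map[0])):
                break                      # left the map without finding a transition
            color = color_map[row][col]
            if prev == transition[0] and color == transition[1]:
                dist = d                   # first chromatic transition of interest
                break
            prev = color
        scan.append(dist)
    return scan
```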

Fig. 4 compares the expected scan A and the real sensor scan C; B is the image grabbed by the robot. The scan is looking for the green–yellow chromatic transition of interest. As such, only rays with a correct value in the distance vector are depicted in Fig. 4(a) (i.e., the rays intercepting the yellow goal and the corner posts). Due to the noise in the image, it might happen that a color transition is not detected. Examples are the rays striking the lower part of the yellow goal and the ray striking the lower corner post [compare Fig. 4(a) and (c)]; a color transition can also be detected at the wrong distance [like the fourth ray starting from the top in Fig. 4(c)], or be falsely detected [like the second ray in Fig. 4(c)]. It might also happen that a color transition is not detected because of occlusion [e.g., in Fig. 4(b), the goalkeeper occludes part of the yellow goal], but we will discuss this in detail in Section V-B. As such, we need to create a model of the sensor's noise.

Fig. 4. Example of expected and measured scans for the green–yellow transition. Given a pose, A represents the expected scan for an ideal noise-free sensor in a free environment. B shows the frame grabbed by the robot in that pose. C represents the corresponding measured scan. Only the rays with a correct measurement are shown. Rays with INFINITY or FAKE_RAY values are not displayed. (a) Expected scan. (b) Real image. (c) Measured scan. (Color version available online at http://ieeexplore.ieee.org.)

1) Sensor Noise: The probability p(o|ô) models the noise in the measured scan conditioned on the expected scan. For every frame grabbed by the sensor, we obtain three scans (one for each chromatic transition of interest), so we have to calculate three probability values. Since every scan is composed of a set of distances, one for each ray, we first model the probability that a single ray correctly detects the chromatic transition, and then we combine the measurements of all rays. Ultimately, we need to combine the three probability values given by the three chromatic transitions of interest.

The scan o performed by the sensor is composed of a set of distances, one for each ray: o = {o_i, i = 1, ..., N_RAYS}. To compute p(o_i|l) (i.e., the probability of obtaining for a single ray a distance o_i given the pose l), we can consider directly the single expected distance ô_i, so we can write p(o_i|l) = p(o_i|ô_i). To create a statistical model of the distance measurement along a single ray of the scan, we collected a large number of omnidirectional images (about 2000) at different known poses in the field of play. For every image, we calculated the estimated distance of the chromatic transition of interest. The resulting measures are distributed along a different probability density, one for each chromatic transition of interest. As an example, the probability density of the measured distance p(o_i|ô_i) for the green–white color transition is plotted in Fig. 5(a). We described this density with the mixture of three probability densities of (2). The three terms in (2) are, respectively, an Erlang probability density, a Gaussian probability density, and a discrete density. The numerical values of the parameters in (2) are calculated with a modified expectation-maximization (EM) algorithm iteratively run on the 2000 images [6]. The resulting mixture, for the green–white transition, is plotted in Fig. 5(b). The Erlang variable models wrong readings in the scan caused by image noise and nonperfect color segmentation. Its index n depends on the profile of the omnidirectional mirror used in the sensor. Our mirror (Section III) maps the area around the robot into the outer image ring, where we have good accuracy and almost no noise, while in the inner part a certain amount of noise is present. We set the value of n, the Erlang index, equal to the index of the first pixel scanned in the inner part of the image. As such, the Erlang density will have a peak at the distance corresponding to the transition between the two mirror parts. The Gaussian density models the density around the maximum-likelihood region (i.e., the region around the true value of the expected distance). The discrete density represents the probability of missing the detection of the chromatic transition, obtaining an INFINITY value in the scan vector, as described in Section III

p(o_i | ô_i) = α_e · E_n(o_i) + α_g · N(o_i; ô_i, σ) + α_d · δ(o_i, INFINITY)    (2)

The mixture coefficients are α_e, α_g, and α_d, and normalization implies α_e + α_g + α_d = 1. Here, E_n denotes the Erlang density with index n, N(·; ô_i, σ) the Gaussian centered at the expected distance ô_i, and δ(o_i, INFINITY) the discrete term accounting for missed detections. A different density mixture was computed for each one of the three chromatic transitions.
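A sketch of the resulting per-ray model is given below; the Erlang, Gaussian, and discrete terms follow the description above, while all numeric parameters are placeholders for the values fitted with EM.

```python
import math

INFINITY = float("inf")

def p_single_ray(o, o_exp, n=25, lam=0.05, sigma=8.0,
                 a_erl=0.15, a_gauss=0.75, a_inf=0.10):
    """Per-ray mixture: an Erlang term for early false transitions (peaked near
    the junction between the two mirror parts), a Gaussian term around the
    expected distance o_exp, and a discrete term for missed detections
    (o == INFINITY). All numeric parameters are placeholders for the EM fit."""
    if o == INFINITY:                      # missed detection
        return a_inf
    erlang = (lam ** n) * (o ** (n - 1)) * math.exp(-lam * o) / math.factorial(n - 1)
    gauss = math.exp(-0.5 * ((o - o_exp) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return a_erl * erlang + a_gauss * gauss
```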

Once p(o_i|l) is computed, it is possible to compute the probability of the whole scan given a pose l by multiplying over all the rays

p(o | l) = ∏_{i=1}^{N_RAYS} p(o_i | ô_i(l))    (3)

Fig. 5. A: the experimental distribution of measured distances for an expected known distance. The peak is at the expected distance. The measures before the expected one are due to the image noise. The last peak on the right of the plot means that, due to image noise, the chromatic transition has several times not been detected. B: the density p(o|l) that represents our sensor model, computed using the EM algorithm. The curve is the result of three contributions: 1) an Erlang variable with index n, which depends on the geometry of the mirror; 2) a Gaussian distribution centered at the expected distance; and 3) a discrete distribution representing the measurements resulting in the INFINITY value.

2) Sensor Occlusion: To cope with unexpected measures due to occlusion of the sensor by moving objects in the environment (i.e., the other robots in the field or the ball), we filtered out all the rays in which the distance equals the FAKE_RAY value (see Section III); in (4), the product is restricted to the rays whose value differs from FAKE_RAY. We called this process ray discrimination. The detection of occluding obstacles along the rays of a scan is very frequent in a densely crowded environment like the Middle-Size RoboCup field. In conventional rangefinders, there is no ray-discrimination system, so all measured distances contribute to the computation of p(o|l). If a large number of distances are affected by the presence of other agents around the robot, the localization process might fail. Our ray-discrimination technique enables computing the sensor model only with a subset of reliable distances, obtaining a faster and more reliable localization

p(o | l) = ∏_{i : o_i ≠ FAKE_RAY} p(o_i | ô_i(l))    (4)


Fig. 6. Probability distributions p(o|l) for all possible poses l = (x, y, θ) of the robot in the field, given the scans of a single image (the heading is not represented). Darker points correspond to a higher likelihood. The arrowhead represents the actual robot pose. A, B, and C represent the probabilities given the scan for the green–white, green–blue, and green–yellow transition, respectively. D shows the combination of the three. (Color version available online at http://ieeexplore.ieee.org.)

From this equation, it follows that if the occlusion of the sensor increases, more and more rays will be discriminated and less information will be available for localization. Nevertheless, in our system, all reliable information is exploited. As will be shown in Section V, the ray-discrimination technique enables correctly localizing the robot even in situations of severe occlusion.
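The ray discrimination of (4) amounts to skipping the occluded rays when multiplying the per-ray likelihoods, as in the following sketch (the FAKE_RAY sentinel value is an assumption of the example).

```python
FAKE_RAY = -1.0   # sentinel stored for rays in which an occlusion was detected

def p_scan(scan, expected, p_single):
    """Likelihood of a whole scan given the expected scan, skipping the rays
    flagged as occluded, as in the ray discrimination of (4). p_single(o, o_exp)
    is the per-ray model of Section IV-B.1."""
    prob = 1.0
    for o, o_exp in zip(scan, expected):
        if o == FAKE_RAY:        # occluded ray: it carries no localization information
            continue
        prob *= p_single(o, o_exp)
    return prob
```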

C. Weights Calculation

Returning to the Monte Carlo localization, we can now compute the weight w^(i) associated with each particle l^(i). We first calculate the quantity p(o|l^(i)) using (3); then all weights are normalized such that

∑_{i} w^(i) = 1    (5)

Since our system scans the acquired image for the three chromatic transitions of interest, this provides three scans for every frame, so three weight values are associated with every particle. To obtain a single weight value, we compute the product of the three weights (6), and renormalize all weights with (5) again

w^(i) = w^(i)_green–white · w^(i)_green–blue · w^(i)_green–yellow    (6)

In Fig. 6, we give a pictorial visualization of the weights calculated from the three different scans of the three chromatic transitions of interest. The real pose of the robot is marked by the arrowhead. Higher weight values are depicted as darker points, lower weight values as lighter points. The weight contributions calculated from the scan looking for the green–white transition are represented in Fig. 6(a); due to the symmetry of the white lines in the field, two symmetric positions resulted in high likelihood. The weight contributions calculated from the scan looking for the green–blue transition are depicted in Fig. 6(b); all positions far away from the blue goal have a high likelihood, because no green–blue transition was found in the image scan. The weight contributions calculated from the scan looking for the green–yellow transition are represented in Fig. 6(c); there is an approximate symmetry around the yellow goal. All these contributions are combined with (6) to calculate the overall weights, as depicted in Fig. 6(d), where indeed the weights with higher values are clustered only around the actual position of the robot.
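A compact sketch of (5) and (6) for a whole particle set is shown below; the three input lists are assumed to hold the already-normalized per-transition weights.

```python
def combine_weights(w_white, w_blue, w_yellow):
    """Multiply the per-particle weights obtained from the three chromatic
    transitions, as in (6), and renormalize them so that they sum to one, as in
    (5). Each argument is a list with one entry per particle."""
    combined = [ww * wb * wy for ww, wb, wy in zip(w_white, w_blue, w_yellow)]
    total = sum(combined) or 1e-300
    return [w / total for w in combined]
```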

V. EXPERIMENTS

The robot we used in the experiments is a holonomic custom-built platform, equipped with the omnidirectional sensor described in Section III. This section is divided into three parts. In the first, we evaluate the performance of the localization system depending on the number of particles used. In the second, the robustness of the system to sensor occlusion is evaluated. In the third, we present experiments in the corridors of our department to show that the proposed system can be applied in any environment in which stable color transitions can be identified.

In order to improve the time performance of the system, the distances in the RoboCup and in the office environment are divided into a grid of 5 × 5 cm cells; similar approaches have been successfully used previously [9]. The expected distances ô_i for all poses and the probabilities p(o_i|ô_i) for all pairs (o_i, ô_i) can be precomputed and stored in two look-up tables for every chromatic transition. Each look-up table takes about 13 MB. In the RoboCup field, we have six look-up tables (three chromatic transitions of interest), and in the office environment, we have four look-up tables (two chromatic transitions of interest). In this way, the probability p(o|l) can be quickly computed with two look-up operations, which enables our system to work in real-time at 10 Hz on a PC-104 Pentium III 700 MHz fitted with 128 MB of RAM using 1000 particles.
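The run-time evaluation then reduces to table lookups, roughly as in the following sketch; the table layouts, the bin size, and the handling of missed detections are illustrative assumptions.

```python
INFINITY = float("inf")
FAKE_RAY = -1.0

def p_scan_lookup(pose_cell, scan, expected_table, likelihood_table,
                  bin_cm=4, p_missed=0.1):
    """Evaluate p(o|l) with two table lookups per ray: the expected distance for
    this grid cell and ray (precomputed off-line by ray tracing on the 5x5 cm
    grid), then the per-ray likelihood for the (measured, expected) pair."""
    row, col = pose_cell
    prob = 1.0
    for i, o in enumerate(scan):
        if o == FAKE_RAY:                      # occluded ray, skip (Section IV-B.2)
            continue
        o_exp = expected_table[row][col][i]    # lookup 1: expected distance
        if o == INFINITY or o_exp == INFINITY:
            prob *= p_missed                   # missed or absent transition
            continue
        prob *= likelihood_table[int(o) // bin_cm][int(o_exp) // bin_cm]  # lookup 2
    return prob
```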

A. Localization in the RoboCup Field of Play

We tested the system on five different paths (an example path is shown in Fig. 7). For each path, we collected a sequence of omnidirectional images with the ground-truth positions where those images were grabbed and with the odometry readings between two consecutive positions. During the experiments, in order to take into account the odometric errors, the robot was moved using its own motors between the reference locations. This was done by sending position commands to the robot controller and not by moving the robot by hand. We tested our algorithms using different amounts of particles, calculating the mean localization error for the three fundamental localization problems: 1) the global localization problem (the robot must be localized without any a priori knowledge of its actual position); 2) the position tracking problem (a well-localized robot must maintain the localization); and 3) the kidnapped robot problem (a well-localized robot is moved to a different pose without any odometry information).

In our experiments, the kidnap was done by lifting the robot by hand and moving it about 3 m away. In every trial, the robot was moved to a different location; this is intended to simulate situations in which, for whatever reason, the robot is lost and must be able to recalculate its correct localization starting from a wrong belief. Moving the robot by 3 m also takes into account possible situations of incorrect localization generated by collisions with other robots (a problem that can frequently occur in a highly populated environment like the RoboCup games, where robots often push each other while trying to win the ball). We performed specific experiments about collisions, in which we pushed or blocked a moving robot, but the results are very similar to those of the kidnapping experiment; the only difference is that the localization error is usually smaller for a collision than for a kidnapping situation. To address the kidnapped robot problem, we adopted the classical technique of reserving a certain percentage of the particles for this scope, randomly scattering them in the environment to act as seeds for a relocalization process in case of localization failure [8].

Fig. 7. Sequence of global localization and position tracking. The gray spot represents the actual robot pose, the red line represents the ground-truth path, the black line represents the estimated path of the robot, and the black dots represent the particles (1000 particles are used). Note that the heading of the particles is not displayed. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 8. Average error in the global localization problem for a specific path with different amounts of particles.

One of the five test paths is shown in Fig. 7. Initially, the particles are uniformly distributed (no knowledge is available on the robot position), Fig. 7(a). After the robot has moved 2 m, having grabbed four images and obtained four odometry readings, the particles are condensed around three possible poses, Fig. 7(b). After 4 m, six images, and six odometry readings, the uncertainty is solved and the particles are condensed around the actual pose of the robot, Fig. 7(c). After 14 steps, one can see that the position of the robot is well-tracked along the ground-truth path (position tracking), as shown in Fig. 7(d). The particles that are still dispersed in the environment are the particles scattered to solve the kidnapped robot problem.

Both the reactivity and the accuracy of the localization system increase with the number of particles, but the computational load is also increased. We tested the performance of the system with different numbers of particles. In Fig. 8, we show the average localization error for global localization using 100, 500, 1000, and 10 000 particles when the same path is repeated 100 times. A thousand particles is compatible with real-time requirements and assures a robust and accurate localization. This number is also a good value for the position tracking problem. In Fig. 9, we show the average and the maximum localization error in the position tracking phase for all test paths using different amounts of particles. With 1000 particles, it is already possible to achieve a good accuracy, an acceptable average error (about 10 cm), and an acceptable maximum error (about 30 cm), without burdening the CPU of the robot.

Fig. 9. Average error and maximum error in the position tracking problem over the five reference paths, calculated with different amounts of particles.

Fig. 10. Average error in the relocalization phase after kidnapping the robot, using 1000 particles and varying the rate of uniformly distributed particles.

In Fig. 10, we show the error for a kidnapped robot episode using 1000 particles and different rates of particles uniformly distributed in the environment. With a higher rate of particles scattered in the environment, the relocalization is faster (there is a higher likelihood that a particle is close to the actual position of the robot), but the average error is higher due to the lower number of particles clustered around the correct robot pose. Notice that with 20% the relocalization is faster, but once the correct localization is recovered, the average position error during position tracking is higher. This is because the number of randomly distributed particles is so high that their contribution to the calculation of the center of gravity of the particles spoils the correct estimation of the robot pose. We therefore chose to distribute uniformly 10% of the 1000 particles. This ensures a low contribution to the calculation of the center of gravity and an acceptably fast recovery from the kidnap situation.
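A sketch of resampling with a fixed fraction of uniformly scattered particles is given below; the particle representation and parameter names are assumptions of the example.

```python
import math
import random

def resample_with_injection(particles, weights, field_x, field_y, uniform_rate=0.10):
    """SIR resampling that replaces a fixed fraction of the particle set with
    poses drawn uniformly over the environment, to seed relocalization after a
    kidnapping or a collision. A particle is a simple (x, y, theta) tuple here."""
    n = len(particles)
    n_uniform = int(uniform_rate * n)
    kept = random.choices(particles, weights=weights, k=n - n_uniform)
    scattered = [(random.uniform(0.0, field_x), random.uniform(0.0, field_y),
                  random.uniform(-math.pi, math.pi)) for _ in range(n_uniform)]
    return list(kept) + scattered
```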

Fig. 11. Sequence of global localization and position tracking in the presence of 12.5% sensor occlusion. Note that, with respect to Fig. 7, the particles are more scattered around the true position of the robot. (Color version available online at http://ieeexplore.ieee.org.)

B. Robustness to Sensor Occlusion

In order to show the robustness of our approach in crowded environments, we tested the system on six different paths; an example path is shown in Fig. 11. To understand how occlusion of the omnidirectional camera affects the localization process, consider the images in Fig. 12. The corresponding plots (to the right of each image) show the probability distributions of the robot's pose. As occlusion increases (0%, 25%, and 50%, respectively), the particles become more dispersed around the true position of the robot. Uncertainty increases, but most of the probability is still condensed around the correct position. This is the result of the ray-discrimination technique presented in Section IV-B.2. To obtain a measurable amount of occlusion, the sensor was covered with black strips; every strip covers 12.5% of the sensor and adequately simulates the presence of one robot close to the sensor. In real situations, like the one depicted in Fig. 13, it is extremely hard to have more than two robots close to the sensor, while other robots are usually quite far and occlude only a small fraction of the sensor. The actual amount of occlusion during a real game strongly depends on the shape of the opponent robots. There are robots, like the ones of the Philips team [1] or the IsocRob team [15], that are quite large and tall, while other robots, like the ones of the FU-Fighters team [26], are rather small and very short. Our experiments show that the performance of our system degrades slowly when occlusion increases. We estimated that a 50% continuous occlusion is well above the maximum occlusion that one robot can experience, but even in this case, our system functions correctly.

For each path, we collected five sequences of omnidirectional images with 0%, 12.5%, 25%, 37.5%, and 50% occlusion, respectively. For every image, we recorded the ground-truth pose of the robot and the odometric readings between two consecutive positions. In order to take into account the odometry errors, robot movements were performed by sending position commands to the robot. We tested our algorithms on the three fundamental localization problems: global localization [Fig. 11(a) and (b)]; position tracking [Fig. 11(c) and (d)]; and kidnapped robot (not shown).

Fig. 12. (On the left) Occlusion of the sensor is obtained with black strips simulating the presence of other robots close to the sensor. This was done in order to have a measurable amount of sensor occlusion (A for 0% occlusion, C for 25%, and E for 50%). (On the right) Probability distributions calculated for the corresponding amount of sensor occlusion. Notice that in the situations of higher occlusion, D and F, the particles are more scattered than in B, but most of the probability is still condensed around the correct position. (Color version available online at http://ieeexplore.ieee.org.)

In Fig. 14, we show the average error for a global localization experiment along the same reference path for three different amounts of sensor occlusion. Obviously, without occlusion, localization is fast and accurate. Also in a "densely crowded" environment (sensor always 50% covered), the robot is able to localize itself and to maintain localization with good accuracy. We obtained very good results also in the kidnapped robot problem. Recovery from a localization failure is obtained thanks to a small number of particles (10% of the total number of particles) uniformly distributed in the environment. A few steps after a kidnapping episode, most of the particles are again concentrated around the correct position, and the situation is the same as that of the global localization experiment.

Finally, we performed a statistical evaluation of our approach in the conventional situation of position tracking, repeating all reference paths 100 times with different amounts of occlusion. In Fig. 15, we report the average error and the maximum error over all reference paths. Notice that both remain small even in a densely and constantly crowded environment.

C. Localization in an Office Environment

Even though our system was developed for the RoboCup domain, it was designed from the beginning with its applicability to everyday environments in mind. As a result, the localization system is not dependent on specific chromatic transitions. Chromatic transitions of interest can be of any number and color combination. As stated before, the only requirements for our system are: 1) an environment with meaningful color transitions; 2) a geometric map of that environment; and 3) inclusion of the color transitions in the map.

Fig. 13. Example of expected and measured scans for the green–white color transition in the presence of occlusion. The robot's pose is represented by the black spot. A represents the expected scan for an ideal noise-free sensor in an empty environment. B shows the frame grabbed by the robot in that pose. C represents the corresponding measured scan. In C, the solid lines represent the measured distances, while the dotted lines represent the rays in which an unexpected transition was detected (FAKE_RAY). This can be caused by image noise or other robots (represented with gray spots). As can be seen in B, there is a robot (the goalkeeper) at the yellow goal; three rays of the scan detected it, as shown in C. Along these rays, a black, unexpected color was detected and FAKE_RAY values were stored instead of the proper distances. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 14. Plots comparing the global localization errors for a specific path with different amounts of sensor occlusion.

Fig. 15. Statistical evaluation of our system in the position tracking problem for all our reference paths. Accuracy (average error and maximum error) is represented for different amounts of sensor occlusion (0%, 12.5%, 25%, 37.5%, 50%).

The environment in which we tested the generality of our system is the corridors of our department, shown in Fig. 16. The floor of the corridor is composed of red tiles, the walls are painted white, while doors and furniture are gray. The corridor is 26 m long, and its width ranges from 2 to 4 m. The trapezoidal room is about 4 × 5 m. Along the corridors, there are two gray lockers 2 m wide.

This environment is much more challenging than the RoboCup environment, due to uneven illumination and to the low contrast between the existing colors. Nevertheless, as our experiments demonstrate, the system is able to successfully locate the robot, even though the vision system sometimes mistakes the type of the chromatic transitions or misses them. A typical input image for the robot is the image shown in Fig. 17. The chromatic transitions of interest we selected in this environment are red–white and red–gray. The dark gray and light gray crosses mark red–white and red–gray transitions, respectively. In Fig. 18, we show a comparison between the expected and the real scans of the office-like environment for red–white transitions.

Fig. 16. Office-like environment in which our localization system was tested to prove its portability to real-world environments. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 17. Scanning algorithm at work on an image grabbed in a corridor of an office-like environment. The colored crosses highlight the color transitions of interest of the environment detected along the dotted radial lines in the omnidirectional image. (Color version available online at http://ieeexplore.ieee.org.)

To test the robustness of the system in a general indoor environment without any lighting control, we performed the test on an overcast day. Due to the low ambient light, the noise in the image is high and the contrast between white and gray is low. In this situation, some chromatic transitions of interest are not detected [e.g., the one pointed to by the arrow in Fig. 18(a)] or are erroneously detected [e.g., the one pointed to by the arrow in Fig. 18(c)]. The probability distribution calculated from the red–white transitions in the image shown in Fig. 18 is depicted in Fig. 19. Dark and light regions represent, respectively, high and low probabilities of representing the correct robot pose. As can be seen, the probability distribution is quite sparse. Nevertheless, by combining the information from the second chromatic transition of interest, as well as the information coming from different measurements using the Monte Carlo localization algorithm, a robust localization can be achieved, as shown in Fig. 20. Starting without any knowledge about the robot's position [Fig. 20(a)], a few steps later most of the particles condense around the true position [Fig. 20(d)]. As before, we randomly spread 10% of the particles to address the kidnapped robot problem.

Fig. 18. Detection of red–white transitions in an office environment. The floor is represented in light gray, the gray objects are drawn in black, and the rays of the scan are painted in dark gray. The black spot represents the pose of the robot. A depicts a detail of the map with the expected scan for the red–white transitions. B shows an acquired image, while C shows the extracted scan. Note that C contains several wrong detections due to noise in the image. The arrow in A represents a transition not detected in C; the arrow in C represents an erroneously detected transition. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 19. Probability distribution calculated from the red–white transition. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 20. Example of global localization in the office-like environment in our department. 10% of the particles are randomly distributed in the environment to recover in case of wrong localization (kidnapped robot). (Color version available online at http://ieeexplore.ieee.org.)

VI. CONCLUSIONS

In this paper, we propose a vision-based Monte Carlo localization system particularly suitable for densely populated environments, using a ray-discrimination technique. The omnidirectional vision system emulates the behavior of rangefinder devices and, due to its ability to distinguish different color transitions, it can detect and reject wrong measurements caused by occlusions. We developed our system in the Middle-Size RoboCup domain, but we showed that it can be used to localize the robot in any environment in which meaningful chromatic transitions exist. Our system requires only a map with the metric and chromatic characteristics of the environment. This map must contain the static obstacles and the chromatic transitions of interest, and it can be as simple as a drawing stored in an image file (representing a floor plan plus the information on color transitions). From such a map, the system automatically recalculates all the look-up tables used in the localization process.
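As an example of how such look-up tables can be derived from the map, the sketch below ray-casts from one map cell to the nearest annotated transition along each radial direction; the predicate is_transition and the discretization parameters are assumptions of this sketch, not our exact precomputation.

```python
import math

def expected_scan(x0, y0, is_transition, n_rays=60, step=0.02, max_range=8.0):
    """Ray-cast on the annotated map to compute, for one map cell, the
    expected distance (meters) to the nearest chromatic transition of
    interest along each of n_rays directions. 'is_transition(x, y)' is an
    assumed predicate that queries the map annotation at a metric
    coordinate; repeating this for every cell and every transition type
    yields the look-up tables used at run time."""
    distances = []
    for k in range(n_rays):
        theta = 2.0 * math.pi * k / n_rays
        d = step
        while d < max_range and not is_transition(x0 + d * math.cos(theta),
                                                  y0 + d * math.sin(theta)):
            d += step
        distances.append(d if d < max_range else None)
    return distances
```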

The reliability of the localization system could be further improved by using a more robust color-transition detection algorithm (such as those proposed in [12] and [21]), but such a study is beyond the scope of this paper. We are currently implementing a color-transition detector robust to illumination changes, in order to test our system in outdoor environments.

This paper presented successful experiments on global localization, position tracking, and recovery from robot kidnapping, both in the RoboCup environment and in the corridors of our department. We experimentally showed the robustness of the localization system to sensor occlusion and to chromatic transitions with poor contrast. The proposed system has characteristics that enable its use in a variety of applications, including navigation in populated environments, outdoor localization, and integration with other localization systems.

REFERENCES

[1] Philips team description [Online]. Available: http://www.apptech.philips.com/robocup/

[2] G. Adorni, S. Cagnoni, S. Enderle, G. Kraetzschmar, M. Mordonini, M. Plagge, M. Ritter, S. Sablatnög, and A. Zell, “Vision-based localization for mobile robots,” Robot. Auton. Syst., vol. 36, pp. 103–119, 2001.

[3] A. Bonarini, “The body, the mind or the eye, first?,” in Proc. 3rd Int. Workshop RoboCup, 1999, pp. 40–50.


[4] F. Dellaert, W. Burgard, D. Fox, and S. Thrun, “Using the CONDENSATION algorithm for robust, vision-based mobile robot localization,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recog., Jun. 1999, vol. 2, pp. 588–594.

[5] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte Carlo localization for mobile robots,” in Proc. IEEE Int. Conf. Robot. Autom., May 1999, vol. 2, pp. 1322–1328.

[6] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Roy. Statist. Soc., vol. 39, no. 1, pp. 1–38, 1977.

[7] S. Enderle, M. Ritter, D. Fox, S. Sablatnög, G. Kraetzschmar, and G. Palm, “Soccer-robot localization using sporadic visual features,” in Proc. 6th Int. Conf. Intell. Auton. Syst., E. Pagello, F. Groen, T. Arai, R. Dillman, and A. Stentz, Eds., 2000, pp. 959–966.

[8] D. Fox, W. Burgard, F. Dellaert, and S. Thrun, “Monte Carlo localization: Efficient position estimation for mobile robots,” in Proc. Nat. Conf. Artif. Intell., Jul. 1999, pp. 343–349.

[9] D. Fox, W. Burgard, and S. Thrun, “Markov localization for mobile robots in dynamic environments,” J. Artif. Intell. Res., vol. 11, pp. 391–427, 1999.

[10] N. Gordon, D. Salmond, and A. F. M. Smith, “Novel approach to nonlinear and non-Gaussian Bayesian state estimation,” IEE Proc. F, vol. 140, no. 2, pp. 107–113, Apr. 1993.

[11] H.-M. Gross, A. Koenig, C. Schroeter, and H.-J. Boehme, “Omnivision-based probabilistic self-localization for a mobile shopping assistant continued,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Las Vegas, NV, Oct. 2003, pp. 1505–1511.

[12] F. von Hundelshausen, S. Behnke, and R. Rojas, “An omnidirectional vision system that finds and tracks color edges and blobs,” in Proc. RoboCup-2001: Robot Soccer World Cup V, A. Birk, S. Coradeschi, and S. Tadokoro, Eds., pp. 374–379.

[13] G. Kitagawa, “Monte Carlo filter and smoother for non-Gaussian nonlinear state space models,” J. Comput. Graphical Statist., vol. 5, no. 1, pp. 1–25, 1996.

[14] S. Lenser and M. Veloso, “Visual sonar: Fast obstacle avoidance using monocular vision,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Las Vegas, NV, Oct. 2003, pp. 886–891.

[15] P. Lima, L. Custódio, P. Marcelino, H. Costelha, G. Neto, V. Pires, M. Arroz, and B. Vecht, “ISOCROB 2004 team description paper,” in Proc. RoboCup 2004 Int. Symp., 2004.

[16] E. Menegatti, F. Nori, E. Pagello, C. Pellizzari, and D. Spagnoli, “Designing an omnidirectional vision system for a goalkeeper robot,” in Proc. RoboCup-2001: Robot Soccer World Cup V, A. Birk, S. Coradeschi, and S. Tadokoro, Eds., pp. 78–87.

[17] E. Menegatti, A. Pretto, and E. Pagello, “A new omnidirectional vision sensor for Monte Carlo localisation,” in Proc. RoboCup Symp.: RoboCup 2004: Robot Soccer World Cup VIII, 2004, pp. 97–109 (Berlin, Germany: Springer-Verlag, 2005).

[18] ——, “Testing omnidirectional vision-based Monte Carlo localization under occlusion,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Sep. 2004, pp. 2487–2494.

[19] E. Menegatti, M. Zoccarato, E. Pagello, and H. Ishiguro, “Hierarchical image-based localisation for mobile robots with Monte Carlo localisation,” in Proc. Eur. Conf. Mobile Robots, Sep. 2003, pp. 13–20.

[20] ——, “Image-based Monte Carlo localisation with omnidirectional images,” Robot. Auton. Syst., vol. 48, no. 1, pp. 17–30, Aug. 2004.

[21] T. Röfer and M. Jüngel, “Vision-based fast and reactive Monte Carlo localization,” in Proc. Int. Conf. Robot. Autom., Sep. 2003, vol. 1, pp. 856–861.

[22] E. Schulenburg, T. Weigel, and A. Kleiner, “Self-localization in dynamic environments based on laser and vision data,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Las Vegas, NV, Oct. 2003, vol. 1, pp. 998–1004.

[23] S. Thrun, A. Bücken, W. Burgard, D. Fox, T. Fröhlinghaus, D. Henning, T. Hofmann, M. Krell, and T. Schmidt, “Map learning and high-speed navigation in RHINO,” in AI-based Mobile Robots: Case Studies of Successful Robot Systems, D. Kortenkamp, R. Bonasso, and R. Murphy, Eds. Cambridge, MA: MIT Press, 1998.

[24] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, “Robust Monte Carlo localization for mobile robots,” Artif. Intell., vol. 128, no. 1–2, pp. 99–141, 2000.

[25] F. von Hundelshausen, “An omnidirectional vision system for soccer robots,” Master’s thesis, Inst. Informatik, Freie Univ. Berlin, Berlin, Germany, Apr. 2001.

[26] F. von Hundelshausen, R. Rojas, F. Wiesel, E. Cuevas, D. Zaldivar, and K. Gunarsson, “FU-Fighters team description 2003,” in Proc. RoboCup 2003 Int. Symp., D. Polani, B. Browning, A. Bonarini, and K. Y., Eds., 2003, CD-ROM.

[27] T. Weigel, J.-S. Gutmann, M. Dietl, A. Kleiner, and B. Nebel, “CS Freiburg: Coordinating robots for successful soccer playing,” IEEE Trans. Robot. Autom., vol. 18, no. 5, pp. 685–699, Oct. 2002.

[28] T. Wilhelm, H.-J. Böhme, and H.-M. Gross, “A multi-modal system for tracking and analyzing faces on a mobile robot,” Robot. Auton. Syst., vol. 48, pp. 31–40, Aug. 2004.

[29] J. Wolf, W. Burgard, and H. Burkhardt, “Robust vision-based localization by combining an image-retrieval system with Monte Carlo localization,” IEEE Trans. Robot., vol. 21, no. 2, pp. 208–216, Apr. 2005.

Emanuele Menegatti (M’00) received the “Laurea” in physics from the University of Padua, Padua, Italy, in 1998, the M.Sc. degree in artificial intelligence from the University of Edinburgh, Edinburgh, U.K., in 2000, and the Ph.D. degree in computer science from the University of Padua in 2002.

From 2003 to 2004, he was a Postdoctoral Researcher with the University of Padua. Since 2005, he has been an Assistant Professor of Computer Science with the University of Padua. In 2004, he was a Visiting Researcher for some months at both

Osaka University, Osaka, Japan, and the Georgia Institute of Technology, Atlanta. His research interests are in the field of robot vision and, in particular, in omnidirectional vision.

Alberto Pretto received the Laurea degree in electronics engineering in 2003 from the University of Padua, Padua, Italy, where he is currently working toward the Ph.D. degree.

After his graduation in 2003, he was a Scientific Collaborator with the Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padua. In 2004 and 2005, he was a Software Engineer with Trastec Scpa, Padua, Italy. His main research interests are in the field of robot vision and robot navigation.

Alberto Scarpa has been a student member of the Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padua, Padua, Italy, since 2001. He has worked on several projects on mobile robot localization and on robot vision, both for the Artisti Veneti (the RoboCup team of the University of Padua) and for other IAS-Lab projects.

Enrico Pagello (M’90) received the Laurea degree in electronics engineering from the University of Padua, Padua, Italy, in 1972.

From 1976 until 1983, he was a Research Associate at the Institute on System Science and Biomedical Engineering, National Research Council of Italy, where he is now a part-time Collaborator. Since 1983, he has been a Professor of Computer Science in the Department of Information Engineering, University of Padua. During 1977–1978, he was a Visiting Scholar at the Laboratory of Artificial

Intelligence of Stanford University, Stanford, CA. Since 1994, he has regularly visited the Department of Precision Engineering of the University of Tokyo, Tokyo, Japan, in the framework of a joint scientific agreement between Padua and Tokyo Universities. His current research interests are in applying artificial intelligence to robotics, with particular emphasis on the multirobot systems area.

Dr. Pagello has been a member of the Editorial Board of the IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, and is currently a member of the Editorial Board of Robotics and Autonomous Systems. He was the General Chair of the Sixth International Conference on Intelligent Autonomous Systems in July 2000. He was a Vice-President of the RoboCup International Federation, and General Chairman of RoboCup 2003, July 2003. He is currently President of the Intelligent Autonomous Systems International Society.