Omnidirectional vision scan matching for robot localization in dynamic environments


Omnidirectional Vision Scan Matching for Robot Localization in Dynamic Environments

    Emanuele Menegatti, Member, IEEE, Alberto Pretto, Alberto Scarpa, and Enrico Pagello, Member, IEEE

Abstract—The localization problem for an autonomous robot moving in a known environment is a well-studied problem which has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor with classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real time on low-cost hardware.

Index Terms—Mobile robot localization, Monte Carlo localization, omnidirectional vision, scan matching.

    I. INTRODUCTION

LOCALIZATION is the fundamental problem of estimating the pose of the robot inside the environment. Some of the most successful implementations of robust localization systems are based on the Monte Carlo localization approach [5], [23]. The Monte Carlo localization approach has been implemented on robots fitted either with rangefinder sensors or with vision sensors. Lately, vision sensors have been preferred over rangefinders, because they are cheaper and provide richer information about the environment. Moreover, they are passive sensors, so they do not interfere with other sensors and do not pose safety concerns in populated environments.
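For readers unfamiliar with the approach, the sketch below outlines one iteration of the generic Monte Carlo localization (particle filter) loop in Python. It is a minimal illustration of the general algorithm, not the authors' implementation; `motion_model` and `measurement_likelihood` are assumed, hypothetical callables supplied by the caller.

```python
import random

def mcl_step(particles, odometry, scan, motion_model, measurement_likelihood):
    """One iteration of the generic Monte Carlo localization loop.

    particles: list of (pose, weight) tuples; a pose is, e.g., (x, y, theta).
    odometry:  relative motion measured since the last iteration.
    scan:      current range scan (here, distances to color transitions).
    """
    # 1. Prediction: propagate each particle through a noisy motion model.
    predicted = [(motion_model(pose, odometry), w) for pose, w in particles]

    # 2. Correction: reweight each particle by the likelihood of the scan.
    weighted = [(pose, w * measurement_likelihood(scan, pose))
                for pose, w in predicted]
    total = sum(w for _, w in weighted) or 1e-12  # avoid division by zero
    weighted = [(pose, w / total) for pose, w in weighted]

    # 3. Resampling: draw a new particle set proportionally to the weights.
    n = len(weighted)
    poses = random.choices([p for p, _ in weighted],
                           weights=[w for _, w in weighted], k=n)
    return [(pose, 1.0 / n) for pose in poses]
```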

In this paper, we consider the problem of Monte Carlo localization using an omnidirectional camera. The vision system has been designed to extract the distances of the closest color transitions of interest existing in the environment.

Manuscript received March 11, 2005; revised November 11, 2005. This paper was recommended for publication by Associate Editor J. Kosecka and Editor F. Park upon evaluation of the reviewers' comments. This paper was presented in part at the RoboCup Symposium, Lisbon, Portugal, July 2004, and in part at the International Conference on Intelligent Robots and Systems, Sendai, Japan, October 2004.

E. Menegatti, A. Pretto, and A. Scarpa are with the Intelligent Autonomous Systems Laboratory, Department of Information Engineering, University of Padua, I-35131 Padua, Italy (e-mail: emg@dei.unipd.it).

E. Pagello is with the Intelligent Autonomous Systems Laboratory, Department of Information Engineering, University of Padua, I-35131 Padua, Italy, and also with Institute ISIB, CNR Padua, Italy.

    Digital Object Identifier 10.1109/TRO.2006.875495

Our system uses an omnidirectional camera to emulate and enhance the behavior of rangefinder sensors. This results in a scan of the current location similar to the one obtained with a laser rangefinder, enabling the use of Monte Carlo algorithms only slightly modified to account for this type of sensor. The most significant advantages with respect to classical rangefinders are: 1) a conventional rangefinder device senses the vertical obstacles in the environment, while our system is sensitive to the chromatic transitions in the environment, thus gathering richer information; and 2) our system can reject measurements if an occlusion is detected. Combining the omnidirectional vision scans with the Monte Carlo algorithms provides a localization system robust to occlusions and to localization failures, capable of exploiting as localization clues the natural color transitions existing in the environment. The Middle-Size RoboCup field is a highly suitable testbed for studying the localization problem in a highly dynamic and densely populated environment. In a dynamic multiagent world, precise localization is necessary to effectively perform high-level coordination behaviors. At the same time, the presence of other robots makes localization harder to perform. In fact, if the density of moving obstacles in the environment is high, occlusion of the robot's sensors is very frequent. Moreover, if, like in RoboCup Middle-Size, collisions among robots are frequent, the localization system must be able to recover from errors after collisions.
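A convenient way to picture the sensor output is as a conventional range scan augmented with the detected transition type and an occlusion flag. The following is a minimal sketch of such a per-ray record; the field names are our illustrative choices, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class ScanReading:
    """One ray of the vision-based scan (illustrative field names)."""
    bearing: float          # ray direction relative to the robot heading (rad)
    distance: float         # distance to the closest transition of interest (m)
    transition: str         # e.g., "green-white": richer than a bare range
    occluded: bool = False  # True when an unexpected transition was detected

def reliable(scan):
    """Keep only the rays not flagged as occluded by moving obstacles."""
    return [r for r in scan if not r.occluded]
```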

In this paper, we explicitly discuss the robustness of our system with respect to the classical problems of global localization, position tracking, and robot kidnapping [17]. We also provide a detailed discussion of the robustness against sensor occlusion when the robot moves in a densely populated environment [18]. In addition, we present experimental evidence that the system developed is not limited to the RoboCup domain, but works in a generic unmodified office-like environment. The only requirements for our system are: 1) an environment with meaningful color transitions; 2) a geometric map of that environment; and 3) inclusion of the color transitions in the map.

The following section discusses previous research related to this topic. Section III describes how we process the omnidirectional image to obtain range information. Section IV summarizes the well-known Monte Carlo localization algorithm and discusses the motion model and sensor model used in the experiments, as well as the modifications to the classical Monte Carlo localization to adapt it to our sensor. In Section V, we present the experiments performed with the robot in a RoboCup Middle-Size field and in the corridors of our department. A detailed analysis of the performance and robustness of the localization system is presented, paying particular attention to occlusions caused by other robots. Finally, in Section VI, conclusions are drawn.



Fig. 1. Metric maps used for the computation of the expected scans. (a) represents the static obstacles (they are too sparse for an effective localization). (b) represents all the chromatic transitions of interest in the environment. (Color version available online at http://ieeexplore.ieee.org.)


    II. RELATED WORK

The seminal works on Monte Carlo localization for mobile robots used rangefinders as the main sensors. The rangefinders were used to perform scans of the static obstacles around the robot, and the localization was calculated by matching those scans with a metric map of the environment [5], [24]. However, in dynamic environments, the detectable static features are often not enough for a robust localization (as illustrated in Fig. 1), or they may be occluded by moving obstacles. One possibility is to design algorithms able to filter out the moving obstacles in the range scans, leaving only the static obstacles that can be used as landmarks. This is exemplified by distance filters [9], but it should be noted that these algorithms are usually computationally intensive. Another possibility is to use a sensor, such as a color camera, that provides richer information and, consequently, a more reliable scan matching. In this paper, we adopted the latter approach.

Usually, Monte Carlo localization systems with vision sensors use cameras to recognize characteristic landmarks subsequently matched within a map [7], [21], or to find the reference image most similar to the image currently grabbed by the robot [11], [19], [20], [28], [29]. However, when the robot has to match the current reading with a previous reading, moving obstacles like people or other robots can impair the localization process. Several solutions have been adopted. One possibility is to look at features that cannot be occluded by moving obstacles, like the ceiling in a museum hall [4]. However, these features are not always available, or they do not carry enough information.

Our sensor is able to detect occlusions as nonexpected color transitions, so it can use only a subset of reliable distances in the scan, obtaining a more precise localization. Color transitions are usually available and yield rich information about the environment structure (e.g., a change of carpet color, doors with a color different from the walls, etc.).
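As a concrete illustration of how flagged rays can be dropped from the scan matching, here is a minimal Gaussian range likelihood computed over the non-occluded readings only. It is a sketch under our own assumptions (an independent-ray model with an illustrative noise value, reusing the `ScanReading` record sketched above), not the sensor model of Section IV.

```python
import math

def scan_likelihood(scan, expected_range, sigma=0.1):
    """Gaussian range likelihood computed over non-occluded rays only.

    scan:           list of ScanReading records (see the earlier sketch)
    expected_range: callable (bearing, transition) -> expected distance (m)
                    from the map, for the pose being evaluated; None if no
                    such transition is expected along that ray
    sigma:          assumed range noise in meters (illustrative value)
    """
    likelihood = 1.0
    for ray in scan:
        if ray.occluded:
            continue  # unexpected transition: drop the ray from the matching
        d_exp = expected_range(ray.bearing, ray.transition)
        if d_exp is None:
            continue  # nothing to match along this ray
        error = ray.distance - d_exp
        likelihood *= math.exp(-(error ** 2) / (2.0 * sigma ** 2))
    return likelihood
```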

In the RoboCup Middle-Size competitions, an approach based only on laser rangefinders was used, very effectively, by the CS Freiburg Team [27]. They extracted the lines of the walls from the laser scans and matched them against a model of the field of play. However, when the walls surrounding the field were removed in 2002, the reliability of this approach was impaired by the lack of static features detectable by a rangefinder sensor.

The static objects detectable by a rangefinder in the Middle-Size field (with the layout used since 2003) are presented in Fig. 1(a). The only detectable objects are the two goals and the four corner posts. With a vision system sensitive to color transitions, one can detect not only these static objects, but also all existing color transitions [Fig. 1(b)]. Schulenburg et al. combined a laser rangefinder and an omnidirectional camera to extend CS Freiburg's approach by detecting lines using both sensors [22]. However, the integration of laser data with vision data does not significantly improve the localization with respect to vision data alone (due to the shortage of laser-detectable features). Moreover, the image-processing algorithms used to extract the field lines are computationally demanding.

Most of the approaches that use a camera as the only sensor extract geometric features (like lines and corners) from the images, performing template matching against a model of the environment [2]. Geometric features have also been determined by detecting color transitions in visual receptors placed along radial lines (as previously proposed [3]), with the environment represented by a 3-D computer-aided drawing (CAD) model [12], [25]. Roefer et al. located geometric features and bearings of landmarks by detecting color transitions along vertical lines [21]. In our approach, the chromatic transitions are not used to extract geometric features, but to perform scan matching against a 2-D image file that is a geometric map of the environment. In fact, we use the raw range scans, and this paper shows that a robust localization is achievable. Lenser and Veloso developed a system which mimics the working of a sonar sensor using a monocular camera that detects obstacles (as color transitions) along previously determined lines in the image [14]. However, although the basic idea is similar, our aim is much broader: we want to mimic the working of a laser rangefinder with an omnidirectional camera [therefore, with a 360° field of view (FOV)] in order to be able not only to avoid obstacles, as Lenser and Veloso do, but also to localize the robot with a Monte Carlo localization software almost unaltered from the one proposed by Thrun et al. [24], in which they used laser rangefinders. Another difference with the work of Lenser and Veloso is that we do not color-segment the whole image; we just look for color transitions along the radial lines of Fig. 2, saving a considerable amount of computation.

    III. OMNIDIRECTIONAL CAMERA AS A RANGEFINDER

The main sensor of our robot is an omnidirectional camera. The camera is calibrated in order to relate the distances measured in the image to the distances in the real world. Our rangefinder has a 360° FOV, much larger than that of conventional rangefinders. The omnidirectional camera is composed of a perspective camera pointed upward at a multipart mirror with a custom profile, depicted in Fig. 3 [16]. This profile was designed to have good accuracy both for short- and long-range measurements. In fact, conic omnidirectional mirrors fail to obtain good accuracy for short-distance measurements (because the area close to the robot is mapped into a very small image area), while hyperbolic mirrors fail to obtain good accuracy for long-distance measurements (because of the low radial resolution far away from the sensor). With our mirror, the area surrounding the robot is imaged in the wide external ring of the mirror, and the area far away from the robot is imaged in the inner part of the mirror [16].


Fig. 2. Omnidirectional image with the detected chromatic transitions. Green–white chromatic transitions are highlighted with light gray crosses, green–yellow transitions with dark gray crosses, and black pixels represent the receptor pixels used for the scan, which is performed at a discrete set of distances. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 3. Plot of the profile of the omnidirectional mirror mounted on the robot. (Color version available online at http://ieeexplore.ieee.org.)

The inner part of the mirror is used to measure objects farther than 1 m from the robot, while the outer part is used to measure objects closer than 1 m (see Fig. 2).
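The calibration just described amounts to a mapping from image radius to metric ground distance. A minimal sketch of such a mapping is shown below; the table values are invented placeholders (the real numbers depend on the actual mirror profile and camera), not the authors' calibration.

```python
import bisect

# Hypothetical calibration table relating image radius (pixels) to ground
# distance (m). Distance decreases as the radius grows because the nearby
# floor is imaged on the wide outer ring of the mirror; these numbers are
# invented placeholders for illustration only.
RADIUS_PX = [40, 80, 120, 160, 200, 240]
DIST_M = [4.0, 2.0, 1.0, 0.6, 0.3, 0.1]

def pixel_radius_to_distance(r_px):
    """Piecewise-linear interpolation of the calibration table."""
    i = bisect.bisect_left(RADIUS_PX, r_px)
    if i == 0:
        return DIST_M[0]
    if i == len(RADIUS_PX):
        return DIST_M[-1]
    r0, r1 = RADIUS_PX[i - 1], RADIUS_PX[i]
    d0, d1 = DIST_M[i - 1], DIST_M[i]
    t = (r_px - r0) / (r1 - r0)
    return d0 + t * (d1 - d0)
```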

The omnidirectional image is scanned for what we called chromatic transitions of interest. In the RoboCup domain, we are interested in green–white, green–blue, and green–yellow transitions. These transitions are related to the structure of the RoboCup field, where the playground is green, lines are white, and goals and corner posts are blue or yellow. In the office scenario, we are interested in the red–white and red–gray transitions, due to the colors available in the environment. The image is scanned along radial lines 6° apart, with a sampling step corresponding to 4 cm in the world coordinate system, as shown in Fig. 2. We first scan for chromatic transitions of interest close to the robot's body (i.e., in the outer mirror part); we then scan the inner part of the image for transitions up to 4 m away from the robot's body.
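To make the scanning procedure concrete, here is a minimal sketch of walking one radial line with the parameters just stated (60 rays 6° apart, 4 cm steps, 4 m maximum range). The helpers `classify` and `world_to_pixel` are assumed placeholders standing in for the color classifier and the calibrated image-to-world mapping.

```python
import math

NUM_RAYS = 60        # one ray every 6 degrees (360 / 60)
STEP_M = 0.04        # sampling step: 4 cm in world coordinates
MAX_RANGE_M = 4.0    # scan out to 4 m from the robot body

# Transition pairs of interest in the RoboCup setting.
TRANSITIONS = {("green", "white"), ("green", "blue"), ("green", "yellow")}

def scan_ray(image, bearing, classify, world_to_pixel):
    """Walk one radial line outward, returning the first transition of interest.

    classify(pixel) -> color class name; world_to_pixel(d, bearing) -> (u, v)
    image coordinates of the point d meters away along the ray.
    """
    previous = None
    for step in range(1, int(MAX_RANGE_M / STEP_M) + 1):
        d = step * STEP_M
        u, v = world_to_pixel(d, bearing)
        current = classify(image[v][u])
        if previous is not None and (previous, current) in TRANSITIONS:
            return d, (previous, current)
        previous = current
    return None  # no transition of interest found along this ray

def full_scan(image, classify, world_to_pixel):
    """One omnidirectional scan: 60 rays covering the full 360 degrees."""
    return [scan_ray(image, 2.0 * math.pi * k / NUM_RAYS, classify, world_to_pixel)
            for k in range(NUM_RAYS)]
```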

In RoboCup, a color quantization is usually performed on the image before any further image processing. Our system looks for the chromatic transitions of interest only along the receptors of the 60 rays depicted in Fig. 2. Therefore, we do not need to color-quantize the whole image; only some of the pixels lying along the 60 rays need to be classified into one of the eight RoboCup colors, plus a further class that includes all colors not included in the former classes (called unknown color). At the setup stage, the red–green–blue (RGB) color space is quantized into nine color classes. To achieve a real-time color quantization, a look-up table is stored in the main memory of...
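The paper does not detail the table layout here, but the idea admits a simple sketch: subsample the RGB cube, precompute the class of every cell once at setup, and reduce run-time classification to a single memory access. The 64-levels-per-channel resolution and the `nearest_class` helper below are our assumptions, not the authors' code.

```python
LEVELS = 64  # subsampled levels per RGB channel: 64^3 = 262,144 table entries
SHIFT = 2    # 8-bit channel value -> 6-bit table index (drop the 2 low bits)

def build_lut(nearest_class):
    """Fill the table once at setup; nearest_class(r, g, b) -> class id 0..8.

    nearest_class is a hypothetical stand-in for whatever nine-class color
    model is fitted at the setup stage.
    """
    lut = bytearray(LEVELS ** 3)
    for r in range(LEVELS):
        for g in range(LEVELS):
            for b in range(LEVELS):
                idx = (r * LEVELS + g) * LEVELS + b
                lut[idx] = nearest_class(r << SHIFT, g << SHIFT, b << SHIFT)
    return lut

def classify_pixel(lut, r, g, b):
    """Run-time quantization: a single memory access per sampled pixel."""
    return lut[((r >> SHIFT) * LEVELS + (g >> SHIFT)) * LEVELS + (b >> SHIFT)]
```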
