
  • Performance Evaluation and Benchmarking for Intelligent Robots and Systems

    Characterizing Mobile Robot Localization and Mapping

    Raj Madhavan and Chris Scrapper
    Intelligent Systems Division
    National Institute of Standards and Technology (NIST)
    Gaithersburg, MD, U.S.A.

    Presented by: Raj Madhavan, Ph.D. ([email protected])
    September 26th, 2008

  • Motivation & Background

    • No accepted standard exists for quantitatively measuring the performance of robotic systems against user-defined requirements.
    • There is no consensus on what objective evaluation procedures should be followed to deduce performance.
    • The lack of reproducible and repeatable test methods has precluded researchers working toward a common goal from:
      – exchanging and communicating results,
      – inter-comparing robot performance, and
      – leveraging previous work to avoid duplication and expedite technology transfer.
    • Replication of algorithms is not straightforward.
    • Lack of cohesion in the community hinders progress in many domains (e.g., manufacturing, service, healthcare, and security).

  • Related Efforts

    • Robot Standards and Reference Architectures (RoSta)
      http://www.robot-standards.eu/ & http://www.robot-standards.org/
      – Coordination Action funded under the European Union’s Sixth Framework Programme (FP6)
      – Robot standards and reference architectures in service robotics
    • RAS Standing Committee on Standards Activities (SCSA)
      – Official sponsor of IEEE Standards
      – Aims to “promote common measures and definitions, measurability and comparability, and integratability, portability and reusability of robotics and automation technology”
    • RAWSEEDS http://rawseeds.elet.polimi.it/home/
      – Benchmarking toolkit for SLAM that includes high-quality multisensorial data sets, benchmark problems based on them, and benchmark solutions to these problems
    • EUropean RObotics research Network (EURON) http://www.euron.org/
      – Benchmarking Initiative http://www.euron.org/activities/benchmarks/
      – Research Roadmap http://www.euron.org/activities/roadmap.html/
    • OpenSLAM http://www.openslam.org/
      – Shares SLAM algorithms (source code)
    • The Robotics Data Set Repository (RADISH) http://radish.sourceforge.net/
      – Standard data sets for the robotics community

  • Mobile Robot Navigation & Problems in Different Domains

    • AGVs in Manufacturing
      – Bottlenecks associated with centralized control
      – Highly structured indoor environments
    • Urban Search and Rescue (US&R)
      – Mobile robots can only provide responders with ‘remote situation awareness’
      – Terrain conditions, unstructured environments, sensors, …
      – Lack of autonomy and onboard sensing limits flexibility and adaptability
    • Integrating autonomous capabilities
      – Minimizes dependencies on infrastructure
      – More flexible and technically capable
      – Facilitates safe integration into the existing workforce

  • Core Competencies for the Next Generation of Mobile Robots

    Navigation Solutions: a system’s ability to sense the environment, create internal representations of its environment, and estimate its pose with respect to a fixed coordinate frame (Localization and Mapping).

    Core Competencies:
    • Cope with unknown environments
    • Localize itself within the environment
    • Create maps of the environment through exploration
    • Intelligently adapt to changes in the environment (dynamic/kinematic constraints)

  • Case in Point: Two Mapping Examples

    • Maps produced by various teams at the RoboCupRescue Virtual League Competition (GeoTIFF format)
    • Comparison of a generated map with the ground-truth map (RoboCupRescue Physical League Competition)

  • Performance Metrics & Objective Evaluation for Localization and Mapping

    • Qualitative comparison of resulting maps (e.g., visual inspection) is commonly used to assess performance.
      – This does not allow for a better understanding of what errors specific systems are prone to and which systems meet the needs.
    • It is common practice in the literature to compare newly developed mapping algorithms with earlier methods by presenting images of the generated maps.
      – This is suboptimal, particularly when applied to large-scale maps, and clearly not a good choice of evaluation.
    • This is a prevalent problem spanning multiple domains: US&R, service robotics, …
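A quantitative alternative to visual inspection can be as simple as cell-wise agreement between a robot-generated occupancy grid and an aligned ground-truth grid. The sketch below is illustrative only (it is not part of the NIST toolset); the binarized-grid representation and prior alignment of the two maps are assumptions:

```python
import numpy as np

def map_agreement(robot_map, truth_map):
    """Cell-wise agreement between a binarized robot-generated occupancy
    grid and an aligned ground-truth grid of the same shape
    (1 = occupied, 0 = free). Returns precision, recall, and F1 over
    occupied cells -- one quantitative alternative to eyeballing maps."""
    r = np.asarray(robot_map, dtype=bool)
    t = np.asarray(truth_map, dtype=bool)
    tp = np.logical_and(r, t).sum()    # occupied in both maps
    fp = np.logical_and(r, ~t).sum()   # spurious occupied cells
    fn = np.logical_and(~r, t).sum()   # missed occupied cells
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return float(precision), float(recall), float(f1)
```

Scores like these make map errors comparable across systems and map scales, which visual inspection cannot do.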

  • Characterizing Navigation Solutions

    Characterize the performance of a navigation solution with respect to constraints imposed by:
    • Requirements of end-users
    • Capabilities and limitations (sensors and platform)
    • Operational domains
    • Budget

    A cost-benefit (utility-benefit) analysis enables end-users to select appropriate navigation solutions that satisfy their constraints.

  • Expected Outcomes

    Bring together an amorphous research community to define standardized test methods for the quantitative evaluation of navigation solutions in dynamic and unstructured environments by:
    • Implementing standardized evaluation procedures
    • Developing reproducible/repeatable test scenarios
    • Disseminating reference data sets
    • Compiling an open-source library of algorithms

    This will expedite technology transfer and foster R&D through:
    • Rapid exchange of results
    • Inter-comparison of performance
    • Leveraging previous work
    • Avoiding duplication

  • Work @ NIST

    • Focus has been on the physical attributes of a robot
    • Reference Test Arenas for US&R Robots
      – Consist of real sensor datasets and simulated environments, in addition to physical versions propagated internationally
      – Provide an efficient way to test algorithms without having to
        • incur the costs associated with maintaining functional robots, and
        • travel to one of the permanent arena sites for validation and practice
    • RoboCup Rescue Competitions (Virtual and Physical Leagues)

  • Example Test Methods in the Standards Process (E54.08)

    • Cache Packaging
    • Human Factors
    • Visual Acuity
    • Radio Comms
    • Confined Space
    • Stairs
    • Mobility/Endurance
    • Situational Awareness
    • Directed Perception
    • Manipulator Dexterity

  • RoboCup Rescue Competitions: Tying the Real World to Research

    Standard test methods are embedded in the arenas:
    • Yellow section: random maze; pitch & roll ramp flooring (10°); directional victim boxes (for autonomous robots)
    • Orange section: pitch & roll ramp flooring (10°, 15°); half cubic stepfields; confined spaces (under elevated floors); victim boxes with holes
    • Red section: full cubic stepfields; stairs (40°, 20 cm risers); ramp (45°, with carpet); pipe steps (20 cm); directional victim boxes

  • Performance Singularity Identification and Testing

    Performance Singularity: the point at which a system fails to be “well-behaved”, due to systematic and non-systematic errors.

    • Performance Evaluation
      – System level
      – Compares system output (e.g., pose estimates, maps)
      – Uses artifacts and inconsistencies to identify divergent behavior
    • Performance Analysis
      – Algorithmic level
      – Gives insight into performance singularities
      – Discovers the cause of errors a system is prone to

  • Quantitative Assessment of Navigation Solutions

    • Decouple pose estimates from the robot-generated map
      – For assessment purposes only: spatial relationships between pose estimates and features are highly correlated
      – Use ground truth as the baseline for comparison
    • Measure the accuracy and stability of pose estimates
    • Measure the metric quality and utility of robot-generated maps

    (Figure: robot-generated map compared with the map produced by ground truth)
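As a sketch of how decoupled pose estimates can be scored against a ground-truth baseline (the trajectory format and the particular accuracy/stability statistics below are illustrative assumptions, not the NIST tools), per-step translational and heading deviations might be computed as:

```python
import numpy as np

def pose_deviation(estimated, ground_truth):
    """Deviation of time-aligned (x, y, heading) pose estimates from
    ground truth. Returns per-step translational error and heading
    error wrapped to [-pi, pi]."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    trans_err = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1)
    # wrap the angular difference so estimates near +/- pi are not
    # penalized for crossing the branch cut
    head_err = np.angle(np.exp(1j * (est[:, 2] - gt[:, 2])))
    return trans_err, head_err

def accuracy_and_stability(trans_err):
    """Accuracy as RMS error; stability as the standard deviation of the
    error (a steady bias is stable, a jumpy estimate is not)."""
    e = np.asarray(trans_err, dtype=float)
    return float(np.sqrt(np.mean(e ** 2))), float(np.std(e))
```

Separating the bias-like component (accuracy) from the fluctuation (stability) matches the slide's distinction between how close and how consistent the pose estimates are.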

  • Scan-Matching

    • What is scan-matching?
      – A 3D range-image registration technique
      – Based on Iterative Closest Point (ICP)
      – Used as an observation model in SLAM and for visual odometry based on range images
    • Scan-matching is increasing in popularity
      – Works with raw data
      – Does not require the presence of features or shape primitives
    • Shortcomings of scan-matching
      – Represents surfaces as sets of discrete points
      – Prone to correspondence errors
      – Converges to local minima

    (Figure: model data and observed data — two consecutive scans used by the scan-matching algorithm)
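Since ICP-based scan-matching is central to the evaluation, a minimal 2-D point-to-point ICP sketch may help make the correspondence and local-minimum issues concrete. This is a generic textbook formulation under stated assumptions (brute-force nearest-neighbour matching, SVD/Kabsch alignment), not the specific matcher evaluated in this work:

```python
import numpy as np

def icp_2d(model, observed, max_iters=50, tol=1e-9):
    """Minimal 2-D point-to-point ICP: iteratively match each observed
    point to its nearest model point, then solve the best rigid
    alignment in closed form (SVD/Kabsch). Returns R, t such that
    observed @ R.T + t approximates model, plus the final mean residual."""
    R, t = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(max_iters):
        moved = observed @ R.T + t
        # nearest-neighbour correspondences -- the error-prone step that
        # the slide's "correspondence errors" and "local minima" refer to
        d = np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(moved)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        matched = model[nn]
        # closed-form rigid alignment of moved -> matched
        mu_s, mu_t = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_t - dR @ mu_s
        R, t = dR @ R, dR @ t + dt
    return R, t, err
```

Because the nearest-neighbour step is purely local, a poor initial pose or occluded features can lock the iteration into a wrong set of correspondences — exactly the failure mode the testing scenarios later in the deck target.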

  • Performance Evaluation

    • The deviation of pose estimates from ground truth enables the inter-comparison of pose estimates.
    • It identifies situations where performance singularities might have occurred.
      – This provides insight into the development of testing scenarios.
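One way such situations might be flagged automatically is with a simple threshold rule: contiguous segments where the deviation from ground truth exceeds a tolerance are reported for inspection. The rule below is an illustrative assumption, not the evaluation procedure used at NIST:

```python
import numpy as np

def flag_divergent_segments(deviation, threshold):
    """Return (start, end) index pairs (end exclusive) of contiguous runs
    where the per-step deviation from ground truth exceeds `threshold` --
    candidate performance singularities worth turning into test scenarios."""
    above = np.asarray(deviation, dtype=float) > threshold
    # pad with zeros so runs touching either end still produce both edges,
    # then locate the transitions into and out of the above-threshold state
    padded = np.concatenate(([0], above.astype(int), [0]))
    edges = np.flatnonzero(np.diff(padded))
    return [(int(s), int(e)) for s, e in zip(edges[::2], edges[1::2])]
```

Each flagged segment can then be cross-referenced against the environment (e.g., occluded features) to understand what triggered the divergence.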

  • Testing Scenarios

    • Elemental tests target a specific shortcoming of a navigation solution.
      – For example, environments with occluded features test a scan-matching algorithm’s ability to determine valid correspondences.

    (Figure: a testing-scenario environment with occluded features; individual scans in the scenario show the occlusions)

  • Performance Analysis

    • Convergence profiles
      – Show how well the scan-matching algorithms converge
      – Provide insight into the stability of the pose estimate
      – Provide meta-level knowledge about the performance of the algorithm
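A convergence profile can be summarized from the residual recorded after each matcher iteration. The particular summary statistics below (iterations to settle, monotonicity, final residual) are assumptions chosen for illustration:

```python
import numpy as np

def convergence_profile(residuals, tol=1e-6):
    """Summarize a scan-matcher's per-iteration mean residuals: how many
    iterations until the error stops changing by more than `tol`, whether
    the profile decreased monotonically (a marker of stable convergence),
    and the final residual reached."""
    r = np.asarray(residuals, dtype=float)
    drops = np.diff(r)
    settled = np.flatnonzero(np.abs(drops) < tol)
    iterations = int(settled[0]) + 1 if settled.size else len(r)
    return {
        "iterations": iterations,
        "monotone": bool(np.all(drops <= 0)),
        "final_residual": float(r[-1]),
    }
```

A non-monotone profile (the error rises before falling, or oscillates) is meta-level evidence that the matcher is hopping between correspondence sets rather than converging stably.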

  • Developing Evaluation Tools

    • Performance evaluation tools
      – Graphs depict errors in the pose estimate
      – Geo-referenced images highlight divergent behaviors
      – A client-server architecture enables easy integration with systems
    • Data capture and visualization tools
      – An automated tool captures referenced data sets using a high-fidelity simulator
      – Helps visualize data collection

  • Website: http://www.isd.mel.nist.gov/navigation_solutions/

    Document lessons learned, inform the community, and share data sets.

  • Publications

    • Journal of Field Robotics Special Issue
      – R. Madhavan, A. Jacoff and E. Messina (eds.), Quantitative Performance Evaluation of Robotic and Intelligent Systems, John Wiley & Sons, Inc., Vol. 24, Issue 8-9, pp. 623-799, August-September 2007.
    • Performance Evaluation
      – C. Scrapper, R. Madhavan, S. Balakirsky, “Stable Navigation Solutions for Robots in Complex Environments,” IEEE International Workshop on Safety, Security, and Rescue Robotics (SSRR 2007).
    • Performance Singularity Identification and Testing
      – C. Scrapper, R. Madhavan, S. Balakirsky, “Using a High-Fidelity Simulation Framework for Performance Singularity Identification and Testing,” IEEE Applied Imagery Pattern Recognition (AIPR 2007).
    • Performance Analysis
      – C. Scrapper, R. Madhavan, S. Balakirsky, “Performance Analysis for Stable Mobile Robot Navigation Solutions,” SPIE 2008.

  • Workshops

    PerMIS’08 Special Session: http://www.isd.mel.nist.gov/PerMIS_2008/
    The emphasis of this special session will be on how to assess the quality of robot-generated maps and on the development of standardized test methods that target specific aspects of navigation solutions.

  • Robotics Science and Systems (RSS’08) Workshop

    Raj Madhavan, Chris Scrapper, Alex Kleiner (Organizers), Quantitative Performance Evaluation of Navigation Solutions for Mobile Robots, June 2008, Zurich, Switzerland.

  • Upcoming Publications

    • Autonomous Robots Journal Special Issue
      – Characterizing Mobile Robot Localization and Mapping (Editors: Raj Madhavan, Chris Scrapper, and Alex Kleiner)
      – CfP being circulated
      – Deadline: Feb. 1, 2009
      – Expected publication date: late 2009
    • IEEE Technical Committee http://tab.ieee-ras.org/
      – Proposal submitted
      – TC on Performance Evaluation and Benchmarking of Intelligent Systems (Chairs: Raj Madhavan, Angel del Pobil, and Elena Messina)
    • Springer Edited Book Volume
      – Performance Evaluation and Benchmarking of Intelligent Systems (Editors: Raj Madhavan, Eddie Tunstel (JHU-APL), and Elena Messina)
      – Expanded versions of 15 papers selected from PerMIS’08

  • Upcoming Events: Mapping Camp

    • Where: co-located with the Disaster City Response Robot Evaluation in College Station, Texas
    • When: November 17th-21st, 2008
    • Why:
      – To develop proposed standard test methods for robotic mapping in interior/exterior environments
      – To develop standard representation(s) and tools for evaluating map quality and accuracy
      – To develop ground-truth-referenced data sets with key sensors

  • Little Red Riding Hood: “Over hills and through the woods…”

  • Future Work

    • Continue efforts and strengthen ties with the research community
    • Continue to develop and formalize performance evaluation procedures, and work on defining a de facto standard for characterizing the performance of navigation solutions
    • Make the evaluation tools, testing scenarios, and algorithms accessible to the public by hosting a website that will serve as a focal point for the community and provide a data repository
    • Organize workshops and events
    • Publish journal/book special issues

  • We welcome & value your ideas, suggestions, and criticisms!

    Consider contributing to this technical initiative by collaborating, sharing, and participating at any level.
