An Effective User-Guided
Interface For Multi-Robot Search
Shahar Kosti1, Gal A. Kaminka1,2, David Sarne1
1. Computer Science Department
2. Gonda Brain Science Center
Bar-Ilan University, Israel
Robotic Search (Exploration)
Gain knowledge of an unknown area
Human in the loop
Variety of applications
Urban Search and Rescue
– Locate and rescue in closed spaces
– Mapping, locating victims
Source: Chuck Simmins
Single operator, single robot
Camera-based teleoperation
– The robot carries camera and sensors
– Controlled by an input device (joystick)
Operator tasks
– Steering and navigation
– Watch video
Requires
– High-bandwidth, low-latency communication
– Operator attention
Source: Foster-Miller
Multiple robots
Navigating multiple robots can be automated
How to display imagery from multiple robots?
Operator can watch multiple video feeds
– Hard to scale with the number of robots
Asynchronous video
Robots explore autonomously
Operator watches recorded imagery
– Instead of live video
How to select the most relevant image?
How to display it?
Image database
Input: recorded images, sensor data
For each image we keep:
– Robot location and heading
– Field-of-view (FOV) polygon
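A minimal sketch of one database entry, assuming a simple in-memory record; the field names (`image_path`, `fov_polygon`, etc.) are illustrative, since the slides only specify that robot pose and a field-of-view polygon are stored per image:

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    """One entry in the image database (hypothetical schema)."""
    image_path: str      # recorded camera frame
    x: float             # robot location in the map frame
    y: float
    heading: float       # robot heading, radians
    fov_polygon: list    # field-of-view polygon, list of (x, y) vertices

# Example: a robot at (2.0, 3.5) facing east
rec = ImageRecord("frames/000042.png", 2.0, 3.5, 0.0,
                  [(2.0, 3.5), (6.0, 2.0), (6.0, 5.0)])
```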
Screen layout
2D Map
Selected image
Relevant images (thumbnails)
Image navigation – user action
Click on POI (point of interest)
Image navigation - automatic
Rank and present images that cover POI
Ranking process (point p)
Find all images that cover p
Group the images in sectors
For each sector
– Compute the utility value u of all images
Output:
– Best: highest-ranked images from each sector
– Other: all other images
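The ranking steps above can be sketched as follows; `point_in_polygon`, `rank_images`, and the 8-sector default are illustrative assumptions, not the authors' implementation:

```python
import math

def point_in_polygon(p, poly):
    """Ray-casting test: is point p inside polygon poly?"""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def rank_images(images, p, utility, n_sectors=8):
    """Find images covering p, group them into angular sectors,
    and return (best, other): the top image per sector, and the rest.

    images: records with .x, .y and .fov_polygon attributes
    utility: callable scoring an image for p (higher is better)
    """
    covering = [im for im in images if point_in_polygon(p, im.fov_polygon)]
    width = 2 * math.pi / n_sectors
    sectors = {}
    for im in covering:
        angle = math.atan2(p[1] - im.y, p[0] - im.x) % (2 * math.pi)
        sectors.setdefault(int(angle / width), []).append(im)
    best = [max(group, key=lambda im: utility(im, p))
            for group in sectors.values()]
    other = [im for im in covering if im not in best]
    return best, other
```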
Ranking process output
Best images
Other images
Highest rankedBest image
Grouping Images
Divide into sectors by angular resolution r
Group by the angle between robot location and the POI
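The grouping step in isolation might look like this (a hypothetical `sector_index` helper; the slides do not give the exact formula):

```python
import math

def sector_index(robot_xy, poi_xy, r):
    """Sector of an image: the angle from the robot's location to the
    POI, discretized at angular resolution r (radians)."""
    dx = poi_xy[0] - robot_xy[0]
    dy = poi_xy[1] - robot_xy[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle // r)
```

Images taken from roughly the same direction share a sector, so the "best" list offers one view per direction.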
Image utility computation
Goals:
– Maximize image area
– Minimize distance from POI
– The POI should be as centered as possible in the image
Linear combination of the above
– Weights determined in a pilot session
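A hedged sketch of the linear combination, using the FOV polygon area for the image-area term and the POI's angular offset from the robot's heading for the centering term; the weights and helper names are illustrative (the actual weights were tuned in a pilot session):

```python
import math

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

def image_utility(im, poi, w_area=1.0, w_dist=1.0, w_center=1.0):
    """Linear combination: reward FOV area, penalize distance from the
    POI and the POI's angular offset from the image center."""
    area = polygon_area(im.fov_polygon)
    dist = math.hypot(poi[0] - im.x, poi[1] - im.y)
    bearing = math.atan2(poi[1] - im.y, poi[0] - im.x)
    offset = abs((bearing - im.heading + math.pi) % (2 * math.pi) - math.pi)
    return w_area * area - w_dist * dist - w_center * offset
```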
Interface evaluation
Two asynchronous interfaces
Our interface – POI
State of the art – Best-First
– By Wang et al.
Both assume fully-autonomous robots
Best-first interface
Images are ordered by utility (unseen area)
Image navigation – “Next” button
– Displays the highest-ranked image
Thumbnails are ordered chronologically
Map is used only to locate victims
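One way the Best-First "Next" action could work, assuming a grid-cell model of seen area; `next_best_image` and the cell representation are assumptions, not Wang et al.'s implementation:

```python
def next_best_image(images, seen_cells):
    """Return the id of the image whose FOV covers the most unseen
    map cells, then mark those cells as seen.

    images: iterable of (image_id, fov_cells), fov_cells a set of
    grid cells covered; seen_cells: set of cells already viewed.
    """
    best = max(images, key=lambda im: len(im[1] - seen_cells))
    seen_cells |= best[1]   # viewing the image marks its cells seen
    return best[0]
```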
Best-first example
Simulated environment
Two environments for USAR
– Office environments, generated with USARSim
– 20 human characters (“victims”)
– One larger than the other
Pre-recorded data (not “live”)
Simulated robot steered manually
Experiment design and participants
Experiment
– Mission: locate and mark victims
– Training session
– Two 10-minute sessions
32 paid participants – adult students
– Balanced for gender and other conditions
– Fixed show-up fee, variable bonus for success
Between-subjects design
Evaluating marks success
Set of marks for each participant
Assign marks to nearest victim
– 1m accuracy
Choose a category:
– Found (distance ≤ 1m)
– Duplicate (distance ≤ 1m)
– False positive (distance > 1m)
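The scoring rule above can be sketched as follows (illustrative names; the slides specify only the nearest-victim assignment and the 1m threshold):

```python
import math

def evaluate_marks(marks, victims, threshold=1.0):
    """Assign each mark to the nearest victim: within the threshold the
    first mark counts as Found and later ones as Duplicate; beyond it,
    False positive."""
    found = set()
    counts = {"found": 0, "duplicate": 0, "false_positive": 0}
    for mx, my in marks:
        nearest = min(victims, key=lambda v: math.hypot(mx - v[0], my - v[1]))
        dist = math.hypot(mx - nearest[0], my - nearest[1])
        if dist > threshold:
            counts["false_positive"] += 1
        elif nearest in found:
            counts["duplicate"] += 1
        else:
            found.add(nearest)
            counts["found"] += 1
    return counts
```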
Mark results - found
Correct marks / Time (Map 1)
Correct marks / Time (Map 2)
Conclusions
User-guided UI for robotic search missions
Improved performance under certain conditions
Allows flexible image selection
Can apply to other domains
Future Work
Improve image storage and retrieval
Improve the ranking and grouping process
Support 3D mapping
Evaluate Best-First and POI together
Evaluate our interface with real robots
Thank You!