Range-Only SLAM for Robots Operating Cooperatively with Sensor Networks
Authors: Joseph Djugash, Sanjiv Singh, George Kantor and Wei Zhang

Reading Assignment, 16-735: Robot Motion Planning
Fall 2007

Presented by: Michael Dille, Umashankar Nagarajan
Introduction to Paper
• A mobile robot operating among beacons (an ad hoc sensor network)
  – Robot has a beacon too
Pioneer I
• Everything communicates using radio
• Measure range to other beacons using sonar
– No practical implementations of radio-based ranging available
• The beacons can also move
Extended Kalman Filter (Review)

• Noise-corrupted process:       q_k = f(q_{k-1}, u_{k-1}, w_{k-1})
• Noise-corrupted measurements:  y_k = h(q_k, v_k)

• Predict (project state forward)
  – q̂_k⁻ = f(q̂_{k-1}, u_k, 0)
  – P_k⁻ = A_k P_{k-1} A_k^T + W_k Q_{k-1} W_k^T

• Correct (incorporate measurements)
  – K_k = P_k⁻ H_k^T (H_k P_k⁻ H_k^T + V_k R_k V_k^T)^{-1}
  – q̂_k = q̂_k⁻ + K_k (y_k − h(q̂_k⁻, 0))
  – P_k = (I − K_k H_k) P_k⁻

• Jacobians (A, W evaluated at (q̂_{k-1}, u_k, 0); H, V at (q̂_k⁻, 0)):
  A_[i,j] = ∂f_[i]/∂q_[j]    W_[i,j] = ∂f_[i]/∂w_[j]
  H_[i,j] = ∂h_[i]/∂q_[j]    V_[i,j] = ∂h_[i]/∂v_[j]
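The predict/correct cycle above translates directly into NumPy; the motion model `f`, measurement model `h`, and the Jacobians A, W, H, V are passed in to match the slide's notation (a minimal generic sketch, not the paper's implementation):

```python
import numpy as np

def ekf_predict(q, P, u, f, A, W, Q):
    """Project state and covariance forward through the motion model."""
    q_pred = f(q, u)                     # q̂_k⁻ = f(q̂_{k-1}, u_k, 0)
    P_pred = A @ P @ A.T + W @ Q @ W.T   # P_k⁻ = A P A^T + W Q W^T
    return q_pred, P_pred

def ekf_correct(q_pred, P_pred, y, h, H, V, R):
    """Fold a measurement into the predicted state."""
    S = H @ P_pred @ H.T + V @ R @ V.T       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    q = q_pred + K @ (y - h(q_pred))         # state update
    P = (np.eye(len(q)) - K @ H) @ P_pred    # covariance update
    return q, P
```

With an identity motion model and a direct state measurement this reduces to the familiar scalar Kalman filter, which is a quick sanity check.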
Introduction to SLAM
• Scenario
  – Landmarks & a sensor to detect them
• Localization
  – Landmark locations known
  – Use successive measurements to derive robot position
• Mapping
  – Robot position known
  – Use successive measurements to determine landmark locations
Introduction to SLAM
• Simultaneous Localization & Mapping
  – Know neither robot nor landmark locations
  – Use successive measurements to derive positions of both robot and landmarks
Range-only SLAM
• In “typical SLAM,” each measurement gives range & bearing (a displacement vector)
• May only get range readings
  – Often, no line of sight:
    • Underwater
    • Inside buildings
    • Emergency response: smoky, flame-filled areas
  – Sensors: radio, sonar, …
    • Range from signal strength, time of flight, etc.
Related: Bearing-only SLAM
• Monocular structure from motion in computer vision
  – Each point in the camera image corresponds to a ray in space
EKF Localization (assuming landmark locations known)

• Robot state: q_k
  – Position & orientation
• Motion model: q_{k+1} = F(q_k) + G(u_k) + v(q_k)
  – System dynamics, control input, noise
• Measurement model: y_k = H(q_k) + w(q_k)
  – Sensing of relative landmark locations, plus noise
  – In 2-D, for landmark i and robot position (x_r, y_r):

      H(q) = [ sqrt((x_i − x_r)² + (y_i − y_r)²)
               atan2(y_i − y_r, x_i − x_r) ]
Extending EKF for SLAM

• Localize with the Kalman filter as before
• Discover landmarks while moving
• Add landmark locations to the state estimate
  – In 2-D: q_k = [x_k  y_k  θ_k  x_{l1}  y_{l1}  …  x_{ln}  y_{ln}]^T
• Need some initial estimate of landmark locations
• Dreaded data association problem
  – Which landmark produced a given measurement?
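Growing the EKF state when a new landmark is discovered can be sketched as follows; initializing the cross-covariance block to zero is a simplifying assumption, not the paper's method:

```python
import numpy as np

def augment_state(q, P, landmark_xy, P_init):
    """Append a newly discovered landmark's (x, y) to the EKF state and grow
    the covariance with an initial 2x2 uncertainty block P_init (assumed);
    cross-covariance terms start at zero."""
    q_new = np.concatenate([q, landmark_xy])
    n = len(q)
    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P          # keep existing covariance
    P_new[n:, n:] = P_init     # initial landmark uncertainty
    return q_new, P_new
```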
Paper Contribution
Comparative study of five different algorithms:
– Pure range-only localization
  • EKF on robot measuring distances to known beacon locations
– Basic range-only SLAM
  • Same EKF, beacon locations unknown
– Three extensions
  • Add beacon-to-beacon measurements to the EKF
  • Offline optimization of robot path & beacon locations
  • Robot as virtual node; solve submaps with MDS, patch together
Range-only Localization
EKF model used in paper:

Robot state (position and orientation): q_k = [x_k  y_k  θ_k]^T

Dynamics of the wheeled robot:

    q_{k+1} = [ x_k + D_k cos(θ_k)
                y_k + D_k sin(θ_k)
                θ_k + Δθ_k ] + ε_k

where ε_k is a noise vector, D_k is the odometric distance traveled, and Δθ_k is the orientation change.

Control input vector: u_k = [D_k  Δθ_k]^T
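The noise-free part of these dynamics translates directly into code (a sketch using the slide's [D_k, Δθ_k] control input):

```python
import numpy as np

def propagate(q, u):
    """One step of the odometry motion model above:
    q = [x, y, theta], u = [D, dtheta] (distance traveled, heading change)."""
    x, y, theta = q
    D, dtheta = u
    return np.array([x + D * np.cos(theta),
                     y + D * np.sin(theta),
                     theta + dtheta])
```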
Range-only Localization
EKF measurement model in paper:
Range measurement: r_k = r̂_k + ν_k, with r̂_k = sqrt((x_b − x_k)² + (y_b − y_k)²)

where r̂_k is the estimated range from the beacon to the current state, (x_b, y_b) is the location of the beacon from which the measurement was received, and ν_k is zero-mean Gaussian noise.
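The predicted range and the Jacobian H needed for the EKF correction step can be sketched as follows (function name is illustrative; note the measurement is independent of the robot's heading):

```python
import numpy as np

def range_measurement(q, beacon):
    """Predicted range r̂ from robot pose q = [x, y, theta] to a beacon at
    (xb, yb), and the 1x3 Jacobian H = ∂r̂/∂q used in the EKF correction."""
    x, y, _ = q
    xb, yb = beacon
    r = np.hypot(xb - x, yb - y)
    H = np.array([[(x - xb) / r, (y - yb) / r, 0.0]])  # ∂r/∂theta = 0
    return r, H
```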
Range-only SLAM
• Same as the general SLAM formalism, just no bearings
  – Use the same EKF as localization
  – Add beacon locations to the state
• State vector now: q_k = [x_k  y_k  θ_k  x_{b1}  y_{b1}  …  x_{bn}  y_{bn}]^T
• Still need decent initial estimates!
Range-only SLAM
To find initial beacon location estimates…
Drive around a bit and build a 2-D probability grid:
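One way to build such a grid (a sketch only; the grid extent, resolution, and range-noise width `sigma` are assumed tuning values, not the paper's): each range measurement taken from a known robot pose rewards grid cells whose distance to that pose matches the measurement, and the grid peak gives the initial beacon estimate.

```python
import numpy as np

def beacon_grid_estimate(poses, ranges, extent=10.0, res=0.1, sigma=0.3):
    """Accumulate a 2-D log-likelihood grid over candidate beacon positions.
    Each (pose, range) pair contributes a Gaussian annulus centered on the
    pose; the grid peak is returned as the initial beacon estimate."""
    xs = np.arange(-extent, extent, res)
    X, Y = np.meshgrid(xs, xs)
    loglik = np.zeros_like(X)
    for (px, py), r in zip(poses, ranges):
        d = np.hypot(X - px, Y - py)                 # distance of each cell to pose
        loglik += -((d - r) ** 2) / (2 * sigma ** 2) # Gaussian annulus in log space
    i, j = np.unravel_index(np.argmax(loglik), loglik.shape)
    return X[i, j], Y[i, j]
```

Three ranges from non-collinear poses are enough to make the peak unique up to grid resolution.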
Adding Beacon-to-Beacon Measurements
• Extending SLAM
– Beacon-to-beacon measurements are used to update the states of the beacons involved in the measurement
– The measurement model is modified to include beacon-to-beacon as well as robot-to-beacon measurements
– The estimation process then proceeds via a standard application of the EKF
Adding Beacon-to-Beacon Measurements
• Improving SLAM Map
– Uses a post-processing step to improve the map that results from a SLAM run
– Finds the state that minimizes a cost function: the Mahalanobis distance from the SLAM map plus the sum of squares of the beacon-to-beacon range innovations
Adding Beacon-to-Beacon Measurements
• Improving SLAM Map: Cost Function (Mahalanobis distance + range innovations)

    C(q^M) = (q^M − q̂^M)^T (P^M)^{-1} (q^M − q̂^M) + (1/σ_b²) Σ_{i<j} (r_ij − ||q_i^M − q_j^M||)²

where
  (q̂, P) – Kalman Filter SLAM estimate at the end of a run; P is the Kalman covariance matrix for the state q̂
  (q̂^M, P^M) – portions of q̂ and P that correspond to the beacon locations
  q_i^M – part of the state that corresponds to the (x, y) location of the i-th beacon
  r_ij – range measurement between beacons i and j
  σ_b² – covariance of the error in range measurements
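Evaluating such a cost can be sketched as below; the function name and the dict-of-index-pairs interface for inter-beacon ranges are illustrative assumptions, not the paper's API:

```python
import numpy as np

def map_cost(qM, qM_hat, PM, inter_ranges, sigma_b):
    """Post-processing cost: Mahalanobis distance of a candidate beacon map qM
    from the SLAM estimate (qM_hat, PM), plus squared beacon-to-beacon range
    innovations. Beacon i occupies qM[2i:2i+2]; inter_ranges maps index pairs
    (i, j) to measured ranges."""
    dq = qM - qM_hat
    cost = dq @ np.linalg.solve(PM, dq)          # Mahalanobis term
    for (i, j), r_ij in inter_ranges.items():
        d = np.linalg.norm(qM[2*i:2*i+2] - qM[2*j:2*j+2])
        cost += (r_ij - d) ** 2 / sigma_b ** 2   # range innovation term
    return cost
```

Minimizing this over qM (e.g. with a generic nonlinear optimizer, starting from the SLAM map) is the offline refinement step.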
Adding Beacon-to-Beacon Measurements
• Self-Localization with Virtual Nodes
  – Adds virtual nodes (different robot locations) as part of the beacon network
  – Distances between virtual nodes are calculated using robot odometry and form edges in the network
Self-Localization
• If each beacon can obtain measurements to enough of its neighbors, then it is theoretically possible to build a map using only beacon-to-beacon measurements
• Self-Localization techniques available in literature require cliques (fully interconnected subgraphs) of degree four or higher
• Multidimensional scaling (MDS) is used to determine the relative locations of the clique nodes
Multidimensional Scaling (MDS)
• Set of related statistical techniques often used in data visualization for exploring similarities or dissimilarities in data
• An MDS algorithm starts with a matrix of item-item similarities, then assigns a location to each item in a low-dimensional space, suitable for graphing or 3-D visualization
• Applications include scientific visualization and data mining in fields such as cognitive science, information science, psychophysics, psychometrics, marketing and ecology
MDS (of our interest)
• Given
  – n points in m-D space (usually n > 3 for m = 2)
  – distances between these points
• Returns the “best” estimate of the locations of the points
  – (up to a rigid transformation)
• Even if some distances are unknown, MDS can still be applied by optimizing over them
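For a complete distance matrix, classical (metric) MDS can be sketched via double centering and an eigendecomposition; as the slide notes, coordinates are recovered only up to rotation, translation, and reflection:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: recover point coordinates (up to a rigid transform and
    reflection) from a complete n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]       # keep top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

Handling missing distances, as the slide mentions, requires an iterative optimization variant rather than this closed-form version.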
Adding Beacon-to-Beacon Measurements
• Self-Localization with Virtual Nodes
  – The robot follows a path, and the corners of the path (places where it stops and turns) are taken as virtual nodes
  – The first clique appears at the third robot corner
  – At the seventh robot corner, there is enough information in the graph to uniquely localize every beacon
Adding Beacon-to-Beacon Measurements
• Self-Localization with Virtual Nodes
Results and Summary
where (all in meters):
  μ : mean absolute Cross Track Error (XTE)
  σ : standard deviation
  RMS : root mean square error

Cross Track Error: distance off track
Results and Summary
• Localization using the Kalman Filter (KF) and the reference ground truth (GT)
• Locations of the beacons are known a priori
Results and Summary
• Kalman Filter (KF) SLAM and the reference ground truth (GT)
• Location of the beacons is unknown at the start of SLAM
• The path and beacon estimates shown include an affine transform that re-aligned the local solution into a global coordinate frame
Results and Summary
• Kalman Filter (KF) SLAM with inter-beacon measurements and the reference ground truth (GT)
• Location of the beacons is unknown at the start of SLAM
• The path and beacon estimates shown include an affine transform that re-aligned the local solution into a global coordinate frame
Results and Summary
• Localization on a map from Kalman Filter (KF) SLAM after offline optimization step
• The map from Kalman Filter SLAM is used as initial conditions for the optimization process
Results and Summary
• Kalman Filter (KF) SLAM with beacons initialized through self-localization method with virtual nodes
• The path and beacon estimates shown include an affine transform that re-aligned the local solution into a global coordinate frame
• The sudden jumps seen in the Kalman Filter result are an artifact that can be observed when the filter switches from using only odometric information to also using any range measurements it receives from initialized beacons
Questions?