Dynamic Obstacle Detection using Visually Informed Scan Matching
Tae-Seok Leea, Heon-Cheol Lee, Won-Sok Yoo, Doojin Kim and Beom-Hee Lee
ASRI, Seoul National University, Korea
Keywords: Dynamic obstacle detection, Polar scan matching (PSM), Mobile robot
Abstract. This paper presents a real-time dynamic obstacle detection algorithm based on scan matching that incorporates image information from a mobile robot equipped with a camera and a laser scanner. By combining image and laser scan data, we extract the scan segment corresponding to the dynamic obstacle. Because scan matching performs poorly in dynamic environments, the extracted scan segment is temporarily removed before matching. Once an accurate robot position is obtained, the position of the dynamic obstacle is calculated relative to it. The performance of the proposed algorithm is demonstrated through two experimental scenarios.
Introduction
As mobile robots are deployed in more real applications, research on operating robots in complex real-world environments has grown. To carry out missions in such environments, a robot must understand surroundings that change over time. In particular, when robots share a space with humans, they must be able to perceive moving objects, so recognizing and identifying moving objects is an important capability. This paper develops a moving obstacle detection technique for a single mobile robot in a dynamic environment.
Moving object detection from image information is a classic problem in computer vision. Numerous algorithms exist for tasks such as pedestrian and vehicle tracking and object recognition, but they usually require a motion pattern or a basic image model of the moving object [1-3]. On the other hand, algorithms that require no a priori information, such as optical flow, are often too inaccurate for a mobile robot in real applications. Regardless of which detection algorithm is used, obtaining the position of a moving object from image data requires exact camera calibration for the particular hardware setup, or disparity information from a stereo camera.
The accuracy of environmental information obtained by a mobile robot depends on an exact robot location, so it is closely tied to Simultaneous Localization and Mapping (SLAM). Visual SLAM has been studied actively in recent years, but its large computational load makes real-time implementation difficult. For this reason, scan matching with a laser scanner is popular, since a laser scanner offers fast data acquisition, fine resolution, and high accuracy [4-6]. Many previous studies assume a dynamic model and track the dynamic obstacle using the robot position corrected by scan matching. However, occlusions and moving scan data can degrade performance, so filtering out the dynamic scan data produced by a moving obstacle before scan matching is required to improve the matching result.
In this paper, a visually informed scan matching method that fuses image information and laser scan data is used to estimate the position of the dynamic obstacle. We predict the location of the dynamic obstacle from the image input and use that prediction to adjust the laser scan data before scan matching is conducted. The position of the dynamic obstacle is then updated based on the robot position obtained from the scan matching result. We assume no a priori information about the environment or the dynamic obstacle. For applicability in real environments, we adopt Polar Scan Matching (PSM) because of its fast computation time [7].
The following section describes how the estimated dynamic obstacle direction is extracted from image data. The third section introduces the visually informed scan matching method that uses this estimated direction. The proposed algorithm is then validated by experiments, and conclusions are drawn in the final section.
Applied Mechanics and Materials Vols. 313-314 (2013), pp. 1192-1196. Online since 2013/Mar/25 at www.scientific.net. © (2013) Trans Tech Publications, Switzerland. doi:10.4028/www.scientific.net/AMM.313-314.1192
Estimated Dynamic Obstacle Direction
Optical flow is a general method for finding the movement of an obstacle without any prior knowledge of it. It numerically expresses the relative motion between the camera and the scene and is used for motion detection, object segmentation, and other applications. In this research, optical flow is computed on consecutive input images, and only the pixels whose image flow vector exceeds a threshold value α are kept. α is proportional to the speed of the robot |v_r| and the speed of the dynamic obstacle |v_o|, and inversely proportional to the distance d_ro between the robot and the obstacle, as follows:
α ∝ |v_r|·|v_o| / d_ro ,   α ∝ |v_r| (initial state)    (1)
However, there is no dynamic obstacle information at the initial state, so the threshold value is then set based on the speed of the robot alone.
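The thresholding step above can be sketched as follows; the proportionality constant k, the fallback at the initial state, and the dense-flow array layout are assumptions, not details from the paper:

```python
import numpy as np

def flow_threshold(v_r, v_o=None, d_ro=None, k=1.0):
    """Adaptive threshold alpha from Eq. (1): proportional to |v_r|*|v_o|
    and inverse to d_ro. With no obstacle information yet (initial state),
    fall back to the robot speed alone. k is an assumed gain."""
    if v_o is None or d_ro is None:
        return k * abs(v_r)
    return k * abs(v_r) * abs(v_o) / d_ro

def moving_pixels(flow, alpha):
    """(row, col) indices of pixels whose optical-flow magnitude exceeds
    alpha. flow: H x W x 2 array of per-pixel (u, v) flow vectors."""
    mag = np.hypot(flow[..., 0], flow[..., 1])
    return np.argwhere(mag > alpha)
```

In practice the flow field would come from a dense optical-flow routine on consecutive camera frames; only the surviving pixels are passed on to the direction estimate.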
Fig. 1. Pixels detected by optical flow and their image flow vectors (red dots and lines).
Fig. 2. Relation between the object and the robot.
When image flow vectors are generated as shown in Fig. 1, the pixel coordinates with vector magnitudes larger than α are extracted. Assuming the center of the extracted pixel coordinates is the center point of the obstacle, we can calculate the relative angle between the robot and the dynamic obstacle. As shown in Fig. 2, assuming the surface of the obstacle is a plane whose normal vector points toward the robot, the value θ in (2) is the relative angle between the robot and the dynamic obstacle.
A = H_λ R_1(ψ) T_t R_2(φ) = λ [ cos ψ  −sin ψ ] [ t  0 ] [ cos φ  −sin φ ]
                              [ sin ψ   cos ψ ] [ 0  1 ] [ sin φ   cos φ ]    (2)
A is the homography matrix of the plane, λ is the zoom parameter, φ is the longitude, ψ is the rotation parameter, t is the tilt, and θ = arccos(1/t) is the latitude. The details of this decomposition are beyond the scope of this paper.
Because we have no knowledge of the obstacle model, we cannot guarantee that the calculated relative angle is exact. The acquired relative angle is therefore called the estimated dynamic obstacle direction.
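As a lightweight illustration of the idea, the direction can also be approximated directly from the centroid of the thresholded pixels under a simple pinhole model; this is a sketch, not the homography decomposition of Eq. (2), and the horizontal field of view is an assumed camera parameter:

```python
import numpy as np

def obstacle_bearing(pixels, image_width, hfov_deg=60.0):
    """Estimated dynamic obstacle direction from the detected pixels.
    The centroid column of the pixels surviving the flow threshold is
    taken as the obstacle centre; hfov_deg (horizontal field of view)
    is an assumed value. Returns degrees: 0 straight ahead, positive
    toward the right edge of the image."""
    cols = np.asarray(pixels)[:, 1].astype(float)
    u = cols.mean()                        # centroid column
    return (u / (image_width - 1) - 0.5) * hfov_deg
```

This coarse bearing is sufficient to pick out which laser scan segment belongs to the dynamic obstacle, which is all the next section requires.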
Visually Informed Scan Matching
Linear scan restoration using the estimated dynamic obstacle direction. Diosi and Kleeman [7] developed PSM, a point-to-point scan matching technique. PSM not only takes advantage of the structure of the laser measurements but also, unlike standard ICP, eliminates an expensive search for corresponding points. Its outstanding feature is low computation time even though it requires an iteration process. The basic step of PSM is scan pre-processing to eliminate outliers from a reference scan and a current scan. The current scan is then projected into the reference scan coordinates, and translations in the x and y directions are estimated. Here, PSM is generally aided by the robot odometer to obtain a possible angle φ_p. For the points P = {p_i} projected by the possible angle, the translations are estimated as follows:
[T_X, T_Y] = argmin_{(x_c, y_c)} Σ_i w_i [ r_i − p_i(x_c, y_c) ]²    (3)
where p_i(x_c, y_c) is the i-th projected current scan point translated by x_c and y_c. Finally, the rotation angle is refined by quadratic interpolation. If the change from the initially obtained possible angle is ∆φ, the final rotation angle is computed by
φ_O = φ_p + b·∆φ    (4)
where b is a heuristic constant. The detailed description of PSM is stated in the literature [7].
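The cost being minimized in Eq. (3) can be made concrete with a brute-force sketch. PSM itself solves this step with a linearized weighted least-squares update rather than a grid search; the search span, step, and the assumption that both scans are already projected to common bearings are ours:

```python
import numpy as np

def estimate_translation(ref_ranges, bearings, cur_ranges, weights,
                         span=0.5, step=0.05):
    """Grid-search sketch of Eq. (3): choose the shift (x_c, y_c) of the
    current scan that minimizes the weighted sum of squared range
    residuals against the reference scan."""
    cx = np.cos(bearings) * cur_ranges
    cy = np.sin(bearings) * cur_ranges
    best, best_cost = (0.0, 0.0), np.inf
    for xc in np.arange(-span, span + step, step):
        for yc in np.arange(-span, span + step, step):
            r = np.hypot(cx + xc, cy + yc)      # ranges after shifting
            cost = np.sum(weights * (ref_ranges - r) ** 2)
            if cost < best_cost:
                best, best_cost = (xc, yc), cost
    return best
```

The grid search is far too slow for a real 0.1 s cycle, which is exactly why PSM's linearized solver matters; the sketch only illustrates the objective.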
Fig. 3. (a) Mismatched scan; (b) large error in rotation angle.
Fig. 4. (a) Laser scan data with a dynamic obstacle (thick line); (b) restored laser scan data (thick line) at the same time.
However, as the proportion of the dynamic obstacle in the scan data grows, the portion available for PSM's point-to-point matching shrinks. In this case, as shown in Fig. 3, it is difficult to obtain accurate matching results. On the other hand, more correct scan matching results can be obtained by removing and restoring the laser scan data of the dynamic obstacle.
First, the laser scan data are segmented according to their continuity. The laser scan segment whose direction matches the estimated dynamic obstacle direction from the previous section is then selected for restoration. The selected segment is replaced by linear interpolation between the values on each side of the segment. In this manner, the raw laser scan in Fig. 4(a) changes to the restored laser scan in Fig. 4(b).
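The segmentation and restoration steps can be sketched as follows; the continuity gap of 0.3 m is an assumed value, and the segment indices are assumed to have valid static neighbours on both sides:

```python
import numpy as np

def segment_scan(ranges, gap=0.3):
    """Split a scan into segments wherever consecutive ranges jump by
    more than `gap` (the continuity test). Returns (start, end) index
    pairs, inclusive."""
    breaks = np.where(np.abs(np.diff(ranges)) > gap)[0]
    starts = np.concatenate(([0], breaks + 1))
    ends = np.concatenate((breaks, [len(ranges) - 1]))
    return list(zip(starts, ends))

def restore_segment(ranges, i0, i1):
    """Replace the dynamic segment ranges[i0..i1] by linear interpolation
    between the static readings just outside it, as in Fig. 4(b)."""
    out = np.asarray(ranges, dtype=float).copy()
    left, right = out[i0 - 1], out[i1 + 1]
    n = i1 - i0 + 2              # steps from left neighbour to right
    for k in range(i0, i1 + 1):
        t = (k - i0 + 1) / n
        out[k] = (1.0 - t) * left + t * right
    return out
```

Among the segments returned, the one whose bearing matches the estimated dynamic obstacle direction is the one passed to restore_segment before PSM runs.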
Visually informed PSM and dynamic obstacle detection. In this research, instead of running the general PSM on raw scan data, we run a visually informed PSM that adjusts the laser scan data according to image information about the dynamic obstacle. The environmental information is updated based on the robot position acquired from the proposed PSM. The original readings of the linearly interpolated scan segment can be represented in global coordinates, which gives the position of the dynamic obstacle. By comparison with the previous frame, the velocity of the dynamic obstacle can also be measured. Dynamic obstacle detection using visually informed PSM follows the flowchart in Fig. 5; its performance is presented in the next section.
Fig. 5. Flowchart of the visually informed PSM and dynamic obstacle detection.
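The update step above, which maps the removed segment's original readings into global coordinates with the corrected robot pose and then differences consecutive frames, can be sketched as follows. The pose convention (x, y, heading) and taking the segment centroid as the obstacle position are our assumptions; dt = 0.1 s matches the cycle used in the experiments:

```python
import numpy as np

def obstacle_global_position(robot_pose, seg_bearings, seg_ranges):
    """Map the dynamic segment's original laser readings into global
    coordinates using the corrected robot pose (x, y, heading) from
    scan matching; the obstacle position is the segment centroid."""
    x, y, th = robot_pose
    gx = x + seg_ranges * np.cos(th + seg_bearings)
    gy = y + seg_ranges * np.sin(th + seg_bearings)
    return np.array([gx.mean(), gy.mean()])

def obstacle_velocity(p_prev, p_cur, dt=0.1):
    """Finite-difference velocity between consecutive frames."""
    return (p_cur - p_prev) / dt
```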
Experimental Results
The experiments were performed in the two scenarios of Fig. 6. As shown in Fig. 7, the space is 490 cm × 800 cm and surrounded by walls on three sides. The mobile robot, a Pioneer 3-DX equipped with a Sony SNC-RZ50 camera and a Hokuyo UTM-30LX laser scanner, was driven straight forward at 20 cm/s. The dynamic obstacle moved (1) at 45° to and (2) perpendicular to the robot's path. The obstacle has an 80 cm diameter, kept a speed of 15 cm/s, and started moving 5 seconds after the robot. Visually informed PSM was conducted every 0.1 seconds. For each scenario, the proposed method is compared with the general PSM without restoration.
Fig. 6. Experimental environments: (a) obstacle approaches at 45° to the robot's path; (b) obstacle crosses the robot's path perpendicularly.
Fig. 7. Snapshots of scenario 1.
Fig. 8. Environmental map over time (t = 6, 8, 10, 12 s; thick line: restored laser scan): (a) visually informed PSM, scenario 1; (b) general PSM, scenario 1; (c) visually informed PSM, scenario 2; (d) general PSM, scenario 2.
Fig. 9. Transition of the dynamic obstacle and estimated values: (a) scenario 1; (b) scenario 2.
As shown in Fig. 8, visually informed PSM gives more accurate results in both scenarios because the scan segment of the dynamic obstacle was removed. As the obstacle approaches the robot, a large portion of the scan data becomes the dynamic segment, and the rotation error of the general PSM increases significantly. The transitions of the dynamic obstacle obtained from the two PSM methods are shown in Fig. 9. The position error at the end point is small in the general PSM case. However, the dynamic obstacle cannot be tracked continuously with the general PSM because its deviation is very large, as stated in Table 1. In contrast, the dynamic obstacle can be estimated with visually informed PSM thanks to its small deviation. Even if the rotation error of the general PSM were corrected, the estimated coordinates of the obstacle could still have greater error than the results of the visually informed PSM. The proposed algorithm is effective: the average error of the obstacle's position is about 10%, and the standard deviation about 2%, of the conventional PSM results. Because scenario 1 produces a larger dynamic segment than scenario 2, its errors tend to be larger.
Table 1. Error of the dynamic obstacle's position (unit: cm)

                              Visually informed PSM    General PSM
Scenario 1 (45° path)
  Average                     61.27                    496.49
  Standard deviation          42.67                    1698.67
Scenario 2 (perpendicular)
  Average                     36.73                    343.52
  Standard deviation          21.56                    936.36
Conclusion
In this paper, we developed visually informed PSM, which is effective for detecting a moving obstacle in dynamic environments. To find the obstacle with no a priori information, image information and scan data are considered together. The dynamic obstacle direction estimated by optical flow is used to segment the laser scan, and the segmented scan data are applied to PSM after restoration. In the experiments, the proposed algorithm showed about 0.12 times the average error of the general PSM.
Acknowledgement
This work was supported in part by a Korea Science and Engineering Foundation (KOSEF) NRL Program grant funded by the Korean government (MEST) (No. R0A-2008-000-20004-0), in part by the Brain Korea 21 Project, and in part by the Industrial Foundation Technology Development Program of MKE/KEIT [Development of CIRT (Collective Intelligence Robot Technologies)].
References
[1] A. Ess, K. Schindler, B. Leibe, and L. Van Gool: Object detection and tracking for autonomous
navigation in dynamic environments, International Journal of Robotics Research, 29 (14),
(2010), pp. 1707-1725.
[2] A. Ess, B. Leibe, K. Schindler, and L. Van Gool: A mobile vision system for robust multi-person
tracking, IEEE Conference on Computer Vision and Pattern Recognition, (2008), No. 44587581.
[3] D. M. Gavrila, and S. Munder: Multi-cue pedestrian detection and tracking from a moving
vehicle, International Journal of Computer Vision, 73 (1), (2007), pp. 41-59.
[4] C. Wang, C. Thorpe, S. Thrun, M. Hebert, and H. Durrant-Whyte: Simultaneous localization, mapping and moving object tracking, International Journal of Robotics Research, 26 (9), (2007), pp. 889-916.
[5] M. Becker, R. Hall, S. Kolski, K. Macek, R. Siegwart, and B. Jensen: 2D laser-based probabilistic motion tracking in urban-like environments, Journal of the Brazilian Society of Mechanical Sciences and Engineering, 31 (2), (2009), pp. 83-96.
[6] T. Vu, J. Burlet, and O. Aycard: Grid-based localization and local mapping with moving object
detection and tracking, Information Fusion, 12 (1), (2011), pp. 58-69.
[7] A. Diosi, and L. Kleeman: Fast laser scan matching using polar coordinates, International
Journal of Robotics Research, 26 (10), (2007), pp. 1125-1153.