Vision-Based Coordinated Localization for Mobile Sensor Networks

Junghun Suh, Student Member, IEEE, Seungil You, Student Member, IEEE, Sungjoon Choi, Student Member, IEEE, and Songhwai Oh, Member, IEEE

Abstract: In this paper, we propose a coordinated localization algorithm for mobile sensor networks with camera sensors that operates in GPS-denied areas or indoor environments. The mobile robots are partitioned into two groups: one group moves within the fields of view of the remaining stationary robots. The moving robots are tracked by the stationary robots, and their trajectories are used as spatio-temporal features. From these spatio-temporal features, the relative poses of the robots are computed using multi-view geometry, and the group is localized with respect to the reference coordinate frame using the proposed multi-robot localization method. Once the poses of all robots are recovered, the group moves from one location to another while maintaining the formation required for coordinated localization under the proposed multi-robot navigation strategy. By taking advantage of a multi-agent system, we can reliably localize robots over time as they perform a group task. In experiments, we demonstrate that the proposed method consistently achieves a localization error rate of 0.37% or less for trajectories of length between 715 cm and 890 cm using an inexpensive off-the-shelf robotic platform.

Note to Practitioners: In order for a team of robots to operate indoors, we need to provide precise location information. However, indoor localization is a challenging problem, and no available solution offers the accuracy required by a low-cost robot that lacks the capability of simultaneous localization and mapping. In this paper, we demonstrate that precise localization is possible for a multi-robot system using inexpensive camera sensors and that the proposed method is suitable for an inexpensive off-the-shelf robotic platform.

Index Terms: Localization, multi-robot coordination, mobile sensor network

    I. INTRODUCTION

A wireless sensor network has been successfully applied to many areas for monitoring, event detection, and control, including environment monitoring, building comfort control, traffic control, manufacturing and plant automation, and military surveillance applications (see [2] and references therein). However, faced with the uncertain nature of the environment, stationary sensor networks are sometimes inadequate, and mobile sensing technology shows superior performance in terms of its adaptability and high-resolution sampling capability [3].

This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2009348) and by the ICT R&D program of MSIP/IITP [14-824-09-013, Resilient Cyber-Physical Systems Research].

Junghun Suh, Sungjoon Choi, and Songhwai Oh are with the Department of Electrical and Computer Engineering and ASRI, Seoul National University, Seoul 151-744, Korea (emails: {junghun.suh, sungjoon.choi, songhwai.oh}@cpslab.snu.ac.kr). Seungil You is with Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125, USA (email: syou@caltech.edu).

A preliminary version of this paper appeared in the IEEE International Conference on Automation Science and Engineering, 2012 [1].


A mobile sensor network can efficiently acquire information by increasing sensing coverage in both space and time, resulting in robust sensing under dynamic and uncertain environments. While a mobile sensor network shares the limitations of wireless sensor networks, such as short communication range, limited memory, and limited computational power, it can perform complex tasks, ranging from scouting and reconnaissance to environmental monitoring and surveillance, by cooperating with other agents as a group. Mobile sensor networks have consequently received significant attention recently [4]–[8].

In order to perform sensing or coordination using mobile sensor networks, localization of all sensor nodes is of paramount importance. A number of localization algorithms have been proposed for stationary sensor networks, e.g., [9], [10], but they are applicable to outdoor environments, and precise indoor localization remains a challenging problem [11], [12]. (For more information about various localization methods for wireless sensor networks, see the references in [9]–[12].) One promising approach to indoor localization is based on ultra-wideband (UWB) radio technology [13]. However, as stated in [13], the minimum achievable positioning error can be on the order of 10 cm, which is not accurate enough to control and coordinate a group of robots. In addition, the method requires highly accurate time synchronization. To address these issues, UWB-based localization is combined with infrared sensors on a team of mobile agents in [14]. However, this requires deploying UWB detectors in advance, which is not suitable for mobile sensor networks operating in uncertain or unstructured environments.

Localization using camera sensors has been widely studied in the computer vision community. Taylor et al. [15] used controllable light sources to localize sensor nodes in a stationary camera network. A distributed version of camera localization is proposed by Funiak et al. [16], in which the relative positions of cameras are recovered by tracking a moving object. A sensor placement scheme for minimizing localization uncertainty is presented in [17], which proposes a triangulation-based state estimation method using bearing measurements obtained from two sensors. Meingast et al. [18] proposed a camera network localization algorithm based on multi-target tracking. The critical concept in [18], and the approach taken in this paper, is the use of spatio-temporal features: tracks of moving objects, detected by a multi-target tracking algorithm from [19], serve as features.


In order to match features across a pair of cameras, the detection times of spatio-temporal features are used together with spatial features such as Harris corners and scale-invariant feature transform (SIFT) keypoints [20]. The relative position and orientation between cameras are then computed using multi-view geometry. Since an incorrect match between spatio-temporal features is extremely rare compared to matches between spatial features alone, the method achieves outstanding performance under wide baselines and varying lighting conditions.
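To make the multi-view geometry step concrete, the following is a minimal sketch of pairwise relative pose recovery from spatio-temporal features, written in Python with OpenCV. It is not the implementation of [18] or of this paper: the track containers, the shared intrinsic matrix `K` for both cameras, and the RANSAC parameters are all illustrative assumptions.

```python
import numpy as np
import cv2

def relative_pose_from_tracks(track_a, track_b, K):
    """track_a, track_b: dicts mapping detection timestamp -> (u, v) pixel
    coordinates of the tracked target in cameras A and B (assumed names).
    K: 3x3 intrinsic matrix, assumed identical for both cameras."""
    # Spatio-temporal matching: two observations correspond if and only if
    # they share a detection time, so false matches are rare.
    common_times = sorted(set(track_a) & set(track_b))
    pts_a = np.float64([track_a[t] for t in common_times])
    pts_b = np.float64([track_b[t] for t in common_times])

    # Essential matrix from the matched points; RANSAC rejects outliers.
    E, inlier_mask = cv2.findEssentialMat(pts_a, pts_b, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    # Decompose E into rotation R and a unit-norm translation t.
    # The translation scale is unresolved at this stage.
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inlier_mask)
    return R, t
```

Because correspondences are indexed by detection time rather than appearance, the matching step reduces to a set intersection instead of a descriptor search.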

But the aforementioned methods are designed for stationary camera networks and are not suitable for dynamic mobile sensor networks. In fact, in mobile sensor networks, we can take advantage of mobility to improve the efficiency of localization. For instance, Zhang et al. [21] proposed a method to control the formation of robots for better localization. They estimated the quality of team localization depending on the sensing graph and the shape of the formation. A multi-robot localization algorithm based on the particle filter method is presented in [22], which proposes a reciprocal sampling method that selects a small number of particles when performing localization. Some authors have considered cooperative localization of multiple robots using bearing measurements. Giguere et al. [23] addressed the problem of reconstructing relative positions under the condition of mutual observations between robots. The constraint was later relaxed by adding landmarks in [24]. They used nonlinear observability analysis to derive the number of landmarks needed for full observability of the system, and an extended information filter was applied to estimate the states of a team of robots using bearing-only measurements. Ahmad et al. [25] applied a cooperative localization approach to robot soccer games. They modeled the problem as a least-squares minimization and solved it using a graph-based optimization method, given static landmarks at known positions. Tully et al. [26] used a leap-frog method for a team of three robots performing cooperative localization, which is similar to the proposed method. In [26], two stationary robots localize the third moving robot from bearing measurements using an extended Kalman filter. After completing a single move, the role of each robot is switched and the process is repeated. In their experiments, robots covered a region of size 20 m × 30 m and showed a localization error of 1.15 m for a trajectory of length approximately 140 m. However, the experiments were conducted using an expensive hardware platform including three on-board computers, four stereo cameras, and a customized ground vehicle with many sensors. Hence, it is unclear if the approach is suitable for the inexpensive off-the-shelf robotic platform considered in this paper.

We propose a coordinated localization algorithm for mobile sensor networks in GPS-denied areas or indoor environments using an inexpensive off-the-shelf robotic platform, taking advantage of the mobility of the network. In order to localize the mobile robots, we first partition them into two groups: stationary robots and moving robots. We assume each robot carries a camera and two markers. The moving robots move within the fields of view (FOVs) of the stationary robots. The stationary robots observe the moving robots and record the positions of their markers. Based on the trajectories of the markers, i.e., spatio-temporal features, we localize all the robots using multi-view geometry. Localization requires recovering relative positions, i.e., translation and orientation. While the translation between cameras can be recovered only up to a scaling factor in [18], the proposed algorithm recovers the exact translation using the known distance between the markers.
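The scale recovery admits a short illustration. Since the two markers on a robot are a known distance apart, triangulating both markers under the unit-baseline pose and rescaling fixes the metric translation. The sketch below, with assumed variable names, mirrors this idea rather than the paper's exact computation:

```python
import numpy as np
import cv2

def metric_translation(R, t_unit, K, m1_a, m2_a, m1_b, m2_b, marker_dist):
    """R, t_unit: relative pose with unit-norm translation (from the pose
    step above); m1_*, m2_*: pixel coordinates of the robot's two markers
    seen simultaneously by cameras A and B (assumed names); marker_dist:
    known physical separation between the markers."""
    # Projection matrices with camera A at the origin and a unit baseline.
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([R, t_unit.reshape(3, 1)])

    # Triangulate both markers (triangulatePoints expects 2xN arrays).
    pts_a = np.float64([m1_a, m2_a]).T
    pts_b = np.float64([m1_b, m2_b]).T
    X_h = cv2.triangulatePoints(P_a, P_b, pts_a, pts_b)
    X = (X_h[:3] / X_h[3]).T        # dehomogenize -> two 3-D points

    # Under the unit baseline the markers come out at some distance d_est;
    # the true baseline rescales d_est to the known separation.
    d_est = np.linalg.norm(X[0] - X[1])
    return (marker_dist / d_est) * t_unit
```

Averaging the recovered scale over many time steps would make the estimate more robust to triangulation noise.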

A multi-robot navigation strategy is also developed using the rapidly-exploring random tree (RRT) [27], which moves a group of robots from one location to another while maintaining the formation required for coordinated localization.
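For reference, a compact generic 2-D RRT is sketched below. It illustrates only the planner family that the navigation strategy builds on and omits the formation constraints the proposed method adds; the `collision_free` callback and all parameters are assumptions.

```python
import math
import random

def rrt(start, goal, collision_free, bounds, step=0.2,
        goal_tol=0.3, max_iters=5000):
    """start, goal: (x, y) tuples; bounds: ((xmin, xmax), (ymin, ymax));
    collision_free(p, q) -> bool checks the straight segment from p to q."""
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        # Sample a random configuration, biased toward the goal 5% of the time.
        q_rand = goal if random.random() < 0.05 else \
            (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Find the nearest tree node and extend one fixed step toward the sample.
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], q_rand))
        theta = math.atan2(q_rand[1] - nodes[i_near][1],
                           q_rand[0] - nodes[i_near][0])
        q_new = (nodes[i_near][0] + step * math.cos(theta),
                 nodes[i_near][1] + step * math.sin(theta))
        if not collision_free(nodes[i_near], q_new):
            continue
        parent[len(nodes)] = i_near
        nodes.append(q_new)
        if math.dist(q_new, goal) < goal_tol:
            # Goal reached: backtrack through parents to extract the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget
```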
