Self-calibration of Gyro using Monocular SLAM for an Indoor Mobile Robot


International Journal of Control, Automation, and Systems (2012) 10(3):558-566, DOI 10.1007/s12555-012-0312-x
ISSN 1598-6446, eISSN 2005-4092, http://www.springer.com/12555

Hyoung-Ki Lee, Kiwan Choi, Jiyoung Park, and Hyun Myung*

Abstract: Rate gyros are widely used to calculate the heading angle for mobile robot localization. They are normally calibrated in the factory, prior to use, with an expensive rate table. In this paper, a self-calibration method using a monocular camera, without a rate table, is proposed. The suggested method saves the time and cost of an extra calibration procedure. SLAM (Simultaneous Localization And Mapping) based on visual features and odometry provides reference heading (yaw) angles, from which the coefficients of a scale factor function are estimated through Kalman filtering. A new undelayed feature initialization method is proposed so that the heading angle can be estimated without delay. Experimental results show the efficiency of the proposed method.

Keywords: Feature initialization, gyro, Kalman filter, scale factor, self-calibration, SLAM.

1. INTRODUCTION

SAIT (Samsung Advanced Institute of Technology) has been developing cleaning robots for home environments that generate a coverage path and clean floors while minimizing overlap of the traveled region. To meet this requirement, the robot needs the capability of localization and map building. For localization, the robot is equipped with a camera for external position measurements. The camera captures visual features from the ceiling image for SLAM (Simultaneous Localization And Mapping) [1]. The camera's view is often blocked, however, when the robot travels under objects such as beds and tables. Thus, dead reckoning should be added to the visual SLAM, and a more accurate heading (yaw) angle is required to improve the dead-reckoning accuracy.

To measure the heading angle, a rate gyro is widely used, its rate output being integrated over time. The scale factor of the rate gyro needs to be calibrated in order to optimize performance. The conventional calibration method uses a rate table to obtain the reference angular rate, but this is time consuming and increases the cost of the gyro. Furthermore, scale factors change with time due to aging. As such, self-calibration and on-line calibration during normal operation are preferable.

In the field of gyro self-calibration, a method was proposed to calibrate rate gyro biases with a GPS/inertial attitude reference system [2]. Inertial navigation system errors are analyzed using GPS as a reference, and the scale factors and biases of the inertial sensors are obtained with an optimal Kalman filter/smoother [3]. A GPS signal is not available, however, in indoor environments. Other researchers have demonstrated self-calibration of odometry and gyro parameters using absolute reference measurements from an infrared scanner [4], but the range sensor used in that approach is not affordable for cleaning robot products. Still other researchers use visual and inertial sensors simultaneously to calibrate the inertial sensor: the relative displacement and rotation between the visual and inertial sensors are estimated in [5] and [6]. Although the IMU biases are also calibrated using a camera in [5], the scale factors are not considered for calibration.
In this paper, a self-calibration method using a monocular camera and odometry is proposed. Fig. 1 illustrates the concept of the proposed scheme. To use a monocular camera as a heading angle reference system, the pose of the camera must be calculated, and probabilistic SLAM is a well-known solution for this.

__________
© ICROS, KIEE and Springer 2012. Manuscript received February 24, 2011; revised December 12, 2011 and March 27, 2012; accepted April 5, 2012. Recommended by Editorial Board member Chan Gook Park under the direction of Editor Hyouk Ryeol Choi. The revision of this work was supported by the Energy Efficiency & Resources program of the Korean Institute of Energy Technology Evaluation and Planning (grant No. 2011201030001D-11-2-200), funded by the Korean Government Ministry of Knowledge Economy.

Hyoung-Ki Lee, Kiwan Choi, and Jiyoung Park are with the MS Lab. of Samsung Advanced Institute of Technology, Samsung Electronics Co., Ltd., Nongseo-dong, Giheung-gu, Yongin, Gyeonggi-do 449-712, Korea (e-mails: {twinclee, kiwan, ji.young.park}@samsung.com). Hyun Myung is with the Dept. of Civil & Environmental Engineering, KAIST, Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea (e-mail: hmyung@kaist.ac.kr). *Corresponding author.
__________

Fig. 1. Self-calibration of the scale factor of the gyro based on SLAM. (Block diagram: gyro and encoder outputs pass through the nominal scale factor; a Kalman filter combines them with the SLAM reference angle to produce the calibrated scale factor.)

Davison [7] proved that the standard EKF (Extended Kalman Filter) formulation of SLAM can be very successful even when the only source of information is video from an agile single camera.

The main problem with Davison's approach lies in initializing uncertain depth estimates, because the camera is a projective sensor that measures only the bearing of image features. Davison used a particle approach to estimate a feature's depth; once the depth uncertainty has become small enough to be considered Gaussian, the feature is initialized. This method, however, delays the initialization of features. Our cleaning robot moves along a seed-sowing path that consists of 90-degree turns and straight segments. The gyro scale factor cannot be estimated during straight movements but only during rotations, so the time available for parameter convergence is short. As such, undelayed feature initialization is preferable.

Several methods have been proposed for undelayed feature initialization. Montiel [8] showed that a camera can be used as a visual compass to indicate the heading angle based on SLAM. All features are assumed to be at infinity, so the direction to each feature is coded as an azimuth-elevation angle pair, ignoring depth. A difficulty arises, however, in that we do not know in advance which features have infinite depth and which do not.

Eade and Drummond [9] presented an inverse depth concept in a FastSLAM-type particle filter, and Montiel et al. [10,11] applied this concept to EKF SLAM. In these approaches, the state vector of a landmark includes the inverse of the landmark depth. However, each feature is then represented by six state elements, even though three elements are enough to represent a position in 3D space, leading to additional computational burden.
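To make this dimensionality overhead concrete, the following sketch (our illustration of the inverse-depth representation of [9-11], not code from this paper) converts a six-element inverse-depth feature into a Euclidean 3D point; the axis convention and function name are our assumptions.

```python
import numpy as np

def inverse_depth_to_point(feature):
    """Convert a 6-element inverse-depth feature to a Euclidean 3D point.

    feature = (x0, y0, z0, theta, phi, rho):
      (x0, y0, z0) -- camera position when the feature was first observed
      theta, phi   -- azimuth and elevation of the observation ray
      rho          -- inverse of the feature depth along that ray
    """
    x0, y0, z0, theta, phi, rho = feature
    # Unit ray toward the feature (one common convention; the axis
    # ordering differs between papers -- treat this as an assumption).
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    # A feature at infinity corresponds to rho -> 0, which is what makes
    # undelayed initialization possible in this parameterization.
    return np.array([x0, y0, z0]) + m / rho
```

Six numbers per feature enter the EKF state here, versus the three of a plain Euclidean point, which is exactly the overhead the scheme proposed below avoids.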
In this paper, a new undelayed initialization scheme for EKF-based SLAM is introduced, in which the measurement model is expressed in the 3D camera frame, in contrast to the 2D pixel coordinates used in all previous works. This new measurement model has low linearization error owing to the absence of the perspective projection. While the EKF-based SLAM computes accurate yaw angles, the yaw gyro bias and the scale factor function coefficients are estimated by the same EKF, since they are augmented into the state vector. The main contributions of this paper are:

a) a SLAM formulation that augments the coefficients of the scale factor function into the state vector;

b) a new undelayed feature initialization scheme with low linearization error in the measurement model of the EKF.

The overall schematic diagram of the proposed method is shown in Fig. 2. No rate table is needed to determine the random bias and scale factor parameters; these gyro parameters are estimated by the proposed Kalman filter.

This paper is organized as follows. The next section briefly outlines the SLAM system augmented by the gyro scale factor coefficients. Section 3 presents experimental results showing the accuracy of the heading angle obtained with the self-calibrated gyro.

Fig. 2. Overall schematic diagram of the proposed method. (Encoder and gyro data drive the EKF prediction; each observation either triggers an EKF update or, for a new feature, undelayed feature initialization. The filter estimates the robot position, feature locations, gyro bias, and scale factor.)

2. GYRO SCALE FACTOR ESTIMATION

2.1. Robot kinematics modeling

2.1.1 Coordinate frame

A robot moving on the irregular floor of a home environment undergoes attitude changes. The attitude of the robot is a set of three angles measured between the robot's body and the absolute world coordinate frame. The body frame can be thought of as embedded in the robot body such that its x-axis points forward, its y-axis points to the left, and its z-axis points upward; the body axes are labeled $x_b$, $y_b$, and $z_b$. Referring to Fig. 3, the three Euler angles, roll $\phi$, pitch $\theta$, and yaw $\psi$, are defined as rotation angles around the $x_b$, $y_b$, and $z_b$ axes, respectively. The camera frame is denoted by the subscript $c$ in the figure. The camera faces upward to capture the ceiling image.

Fig. 3. Coordinate frames. (World frame $\{x_W, y_W, z_W\}$, body frame $\{x_b, y_b, z_b\}$, camera frame $\{x_c, y_c, z_c\}$ offset by $p_{\mathrm{offset}}$ from the body, robot position $p^W$, and the $i$-th feature $\mathbf{m}_i = (x_i^C\ y_i^C\ z_i^C)^T$.)

2.1.2 Robot kinematics model

The state vector representing the position $p^W$ and orientation $\Theta^W = (\phi\ \theta\ \psi)^T$ of the robot in the world frame is

$$\mathbf{x}_v(k) = \begin{pmatrix} p^W(k) \\ \Theta^W(k) \end{pmatrix}. \qquad (1)$$

Euler angles are used in the state vector rather than a quaternion, since the pitch angle of our robot always stays between $-\pi/2$ and $\pi/2$. No velocity term appears in the state vector, because a dead-reckoning system provides good prediction information in our system. Our dead-reckoning system consists of a 3-axis gyroscope and two wheel encoders. It can measure the translation along the $x_b$ axis of the body frame and the rotations around the $x_b$, $y_b$, and $z_b$ axes. Since it gives no information about translation in the $y_b$ and $z_b$ directions, these are assumed to be unchanged; instead, their uncertainty grows as the robot moves.

The prediction model of the Kalman filter is given by

$$\mathbf{x}_v(k+1) = f(\mathbf{x}_v(k)) = \begin{pmatrix} p^W(k) + C_b^W(k)\,(\Delta x^b\ \ 0\ \ 0)^T \\ \Theta^W(k) + E_b^W(k)\,\omega^b(k)\,\Delta t \end{pmatrix}, \qquad (2)$$

where $\omega^b$ is the body-frame rotation rate provided by the gyro sensors, $\Delta x^b$ is given by the wheel encoders, and $\Delta t$ is the sampling time. The direction cosine matrix $C_b^W$ and the rotation-rate transformation matrix $E_b^W$ between the body and world frames are given by

$$C_b^W = \begin{pmatrix} c\psi\, c\theta & c\psi\, s\theta\, s\phi - s\psi\, c\phi & c\psi\, s\theta\, c\phi + s\psi\, s\phi \\ s\psi\, c\theta & s\psi\, s\theta\, s\phi + c\psi\, c\phi & s\psi\, s\theta\, c\phi - c\psi\, s\phi \\ -s\theta & c\theta\, s\phi & c\theta\, c\phi \end{pmatrix}, \qquad (3)$$

$$E_b^W = \begin{pmatrix} 1 & s\phi\, t\theta & c\phi\, t\theta \\ 0 & c\phi & -s\phi \\ 0 & s\phi\, \sec\theta & c\phi\, \sec\theta \end{pmatrix}, \qquad (4)$$

where $s(\cdot)$, $c(\cdot)$, and $t(\cdot)$ denote $\sin(\cdot)$, $\cos(\cdot)$, and $\tan(\cdot)$, respectively [18].
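As a concrete illustration of equations (2)-(4), the following minimal Python sketch (ours, not the authors' implementation; function names and the NumPy dependency are our choices) performs one dead-reckoning prediction step.

```python
import numpy as np

def C_body_to_world(phi, theta, psi):
    """Direction cosine matrix C_b^W of eq. (3) (roll phi, pitch theta, yaw psi)."""
    s, c = np.sin, np.cos
    return np.array([
        [c(psi) * c(theta),
         c(psi) * s(theta) * s(phi) - s(psi) * c(phi),
         c(psi) * s(theta) * c(phi) + s(psi) * s(phi)],
        [s(psi) * c(theta),
         s(psi) * s(theta) * s(phi) + c(psi) * c(phi),
         s(psi) * s(theta) * c(phi) - c(psi) * s(phi)],
        [-s(theta), c(theta) * s(phi), c(theta) * c(phi)],
    ])

def E_body_to_world(phi, theta):
    """Rotation-rate transformation E_b^W of eq. (4)."""
    s, c, t = np.sin, np.cos, np.tan
    sec = 1.0 / c(theta)
    return np.array([
        [1.0, s(phi) * t(theta), c(phi) * t(theta)],
        [0.0, c(phi), -s(phi)],
        [0.0, s(phi) * sec, c(phi) * sec],
    ])

def predict_pose(p_w, eul, dx_b, omega_b, dt):
    """One prediction step of eq. (2): encoder translation along the body
    x-axis rotated into the world frame, plus gyro rates integrated
    through E_b^W over one sampling interval dt.

    p_w, eul, omega_b are NumPy arrays; dx_b and dt are scalars.
    """
    phi, theta, psi = eul
    p_next = p_w + C_body_to_world(phi, theta, psi) @ np.array([dx_b, 0.0, 0.0])
    eul_next = eul + E_body_to_world(phi, theta) @ omega_b * dt
    return p_next, eul_next
```

Note that $E_b^W$ is singular at $\theta = \pm\pi/2$; the Euler representation is unproblematic here only because, as stated above, the robot's pitch stays within $(-\pi/2, \pi/2)$.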
2.2. Gyro modeling

An accurate heading angle is critical to dead-reckoning performance. Hence, the yaw gyro is calibrated to obtain a more accurate scale factor than the nominal one. The roll and pitch gyros do not need to be calibrated, because the roll and pitch angles of the robot change very little on the floor of a home environment. The turn rate measured by the yaw rate gyro can be expressed in terms of the true input rate and error terms as follows [18]:

$$u_z = S_z(\omega_z) + B_z + M_x\,\omega_x + M_y\,\omega_y + B_g(a_x, a_y, a_z) + n_z, \qquad (5)$$

where $u_z$ is the gyro's rate output (unit: Volt); $\omega_z$ is the turn rate of the gyro about its z-axis; $S_z(\cdot)$ is a scale factor function; $\omega_x$ and $\omega_y$ are the turn rates of the gyro about its x-axis and y-axis, respectively; $M_x$ and $M_y$ are the cross-coupling coefficients; $B_g(\cdot)$ is an acceleration-related error, with $a_{(\cdot)}$ the acceleration along the x, y, and z axes; $B_z$ is the bias, which may drift very slowly; and $n_z$ is the zero-mean random bias.

A practical and simple model for MEMS gyros, neglecting small errors, is $u_z = S_z(\omega_z) + B_z$. With the measured output $u_z$, the rotation rate $\omega_z$ can be modeled as a nominal model (raw data) plus a residual error $\delta\omega_z$:

$$\omega_z = S_{z,\mathrm{nom}}^{-1}(u_z) - B_{z,\mathrm{nom}} + \delta\omega_z = \omega_{z,\mathrm{nom}} + \delta\omega_z, \qquad (6)$$

where $B_{z,\mathrm{nom}}$ is the deterministic bias and $S_{z,\mathrm{nom}}^{-1}$ is the nominal scale factor provided in the manufacturer's specification sheet. The scale factors of low-cost MEMS gyros are usually modeled with a linear function; however, the nonlinearity of the scale factor cannot be neglected for navigational use of the gyro [12]. $\delta\omega_z$ represents the nonlinearity of the scale factor together with the slowly varying bias drift. Modeling the nonlinearity with a polynomial function, we obtain

$$\delta\omega_z = a_1 u_z + a_2 u_z^2 + \cdots + a_n u_z^n + B_z, \qquad (7)$$

where the $a_i$ are the coefficients of the polynomial and the bias $B_z$ is a random bias that drifts at very low frequency, mainly due to thermal effects. It can be modeled by

$$B_z(k+1) = \frac{T}{T+\Delta t}\,B_z(k) + \frac{\Delta t}{T+\Delta t}\,(C_1 + C_2) + n_B, \qquad (8)$$

where the parameters $C_1$, $C_2$, and $T$ should be tuned with zero input applied, and $n_B$ is a Gaussian white-noise process [16]. $T$ determines the rate of variation: as $T$ becomes large, the bias increases or decreases very slowly. If $T$ is large enough, the simpler model $B_z(k+1) = B_z(k) + n_B$ can be adopted, as shown in [17].
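The error model of equations (6)-(8) can be sketched as follows (our illustration, not the authors' code; the bias recursion mirrors the reconstruction of (8) above, and all parameter values are placeholders).

```python
import numpy as np

def corrected_rate(u_z, S_nom_inv, B_nom, coeffs, B_z):
    """Yaw rate from the nominal model plus residual error, eqs. (6)-(7).

    u_z:       raw gyro output (volts)
    S_nom_inv: nominal (inverse) scale factor from the data sheet
    B_nom:     deterministic data-sheet bias, in rate units
    coeffs:    polynomial coefficients a_1..a_n of the scale factor error
    B_z:       current estimate of the slowly drifting bias
    """
    omega_nom = S_nom_inv * u_z - B_nom                                   # eq. (6)
    delta = sum(a * u_z ** (i + 1) for i, a in enumerate(coeffs)) + B_z   # eq. (7)
    return omega_nom + delta

def propagate_bias(B_z, T, dt, C1, C2, sigma_B, rng):
    """Slow bias drift, eq. (8) as reconstructed above. For large T this
    reduces to the random walk B_z(k+1) = B_z(k) + n_B of [17]."""
    n_B = rng.normal(0.0, sigma_B)
    return (T / (T + dt)) * B_z + (dt / (T + dt)) * (C1 + C2) + n_B

# Example with placeholder values only.
rng = np.random.default_rng(0)
B = propagate_bias(B_z=1e-3, T=3600.0, dt=0.01, C1=0.0, C2=0.0, sigma_B=1e-6, rng=rng)
w = corrected_rate(u_z=2.5, S_nom_inv=1.0, B_nom=2.5, coeffs=[1e-3, -2e-5], B_z=B)
```

In the proposed filter, $B_z$ and the $a_i$ are not simulated this way but estimated as augmented state variables, as described next; the sketch only makes the algebra of (6)-(8) explicit.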
In equation (2), $\omega^b(k)$ is given by

$$\omega^b(k) = (\omega_{x,\mathrm{nom}}\ \ \omega_{y,\mathrm{nom}}\ \ \omega_{z,\mathrm{nom}} + \delta\omega_z)^T. \qquad (9)$$

Notice that the roll and pitch rates are modeled by their nominal models, i.e., raw data, because their errors have little effect on the localization performance.

2.3. Extended Kalman filter design

2.3.1 Prediction model of Kalman filter

Substituting (7), (8), and (9) into (2), we obtain a new state update equation that includes the gyro modeling errors:

$$\mathbf{x}_{\mathrm{aug}}(k+1) = \begin{pmatrix} \mathbf{x}_v(k+1) \\ B_z(k+1) \\ a_i(k+1) \end{pmatrix} = \begin{pmatrix} f_v(\mathbf{x}_v(k), B_z(k), a_i(k), \mathbf{U}) \\ \frac{T}{T+\Delta t}\,B_z(k) + \frac{\Delta t}{T+\Delta t}\,(C_1 + C_2) \\ a_i(k) \end{pmatrix} + \mathbf{w}, \qquad (10)$$

where $\mathbf{w}$ is the Gaussian white-noise process vector and the system input $\mathbf{U}$ is given by $\mathbf{U} = (u_z\ u_z^2\ \cdots\ u_z^n)^T$. Notice that the coefficients $a_i$ of the scale factor error are augmented into the above equation as constants. To perform SLAM, the feature locations, i.e., the map elements that represent the environment, must be estimated simultaneously with the robot position. Feature locations are assumed to be stationary, so the process model for the position of the $i$-th feature is

$$\mathbf{m}_i(k+1) = \mathbf{m}_i(k) = (x_i\ y_i\ z_i)^T. \qquad (11)$$

2.3.2 Measurement model of Kalman filter

Referring to Figs. 3 and 4, the observation of a feature $\mathbf{m}_i$ from a camera location defines a ray expressed in the camera frame as $\mathbf{m}_i^C = (x_i^C\ y_i^C\ z_i^C)^T$. The camera does not directly observe $\mathbf{m}_i^C$ but rather its projection onto the image plane according to the pinhole model. In all previous works, the measurement model of the Kalman filter uses the pixel coordinates $\mathbf{z}_i = (u_i\ v_i)^T$ in the image plane, as shown in Fig. 4. The conventional m...
