


Autonomous Robots manuscript No. (will be inserted by the editor)

A Robust and Fast Method for 6DoF Motion Estimation from Generalized 3D Data

Diego Viejo · Miguel Cazorla

Received: date / Accepted: date

Diego Viejo
Ciencia de la Computación e Inteligencia Artificial, University of Alicante
P.O. Box 99, E-03080 Alicante, Spain
Tel.: +34-965-903400 ext: 2072
Fax: +34-965-903902
E-mail: [email protected]

Miguel Cazorla
Ciencia de la Computación e Inteligencia Artificial, University of Alicante
E-mail: [email protected]

Abstract Nowadays, there is an increasing number of robotic applications that need to act in real three-dimensional (3D) scenarios. In this paper we present a new mobile robotics oriented 3D registration method that improves previous Iterative Closest Point (ICP) based solutions in both speed and accuracy. As an initial step, we apply a computationally low-cost method to obtain descriptions of the planar surfaces in a 3D scene. Then, from these descriptions, we apply a force system in order to compute a six degrees of freedom (6DoF) egomotion accurately and efficiently. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different real 3D environments.

Keywords 6DoF pose registration · 3D mapping · Mobile robots · Scene modeling

1 Introduction

This paper is focused on studying the movement performed by a mobile robot, using only the environment information collected by the robot with a 3D sensor device such as a 3D laser, a stereo camera or a time-of-flight camera. The trajectory followed by the robot can be reconstructed from the observations at each pose, and thus a 3D map of the robot environment can be built. The problem of automatic map building has been challenging mobile robotics researchers for years. The main contribution of this paper is a new method for accurately computing the movement performed by a mobile robot in 6DoF, using only the 3D data collected from two consecutive poses, independently of the 3D sensor used.

The actual movement performed by a robot between two consecutive poses for a given action command is one of the main pieces of information needed to perform any mobile robotic task. This information can be obtained by several means, depending on the accuracy needed by a given system. The least accurate approach consists in assuming that the robot has performed exactly the movement corresponding to the received action command. In practice, however, this assumption is far from the real movement.

Measuring the movement of the robot with its internal sensors is a more accurate solution. This information, called odometry, is the most widely used solution to estimate the robot's movement. However, odometry is not error free. Wheel slipping or bouncing, rough terrains or even mechanical wear may produce errors in the odometry data.

Moreover, for many mobile robotic applications odometry is not enough, as more accurate movement information is required. In these cases the approach known as egomotion or pose registration can be used (Agrawal 2006; Koch and Teller 2007; Goecke et al 2007). These methods compute the robot's movement from the differences between two consecutive observations. In this paper we describe a new method that uses three-dimensional (3D) information in order to compute the movement performed by a robot. Since our method uses 3D data, it can register a six degrees of freedom (6DoF) movement. We show how this method can be used to build a 3D map. In addition, we designed our approach to work regardless of the hardware used, and therefore our method can be applied on different robots and 3D sensors. The input for our method consists in two sets of 3D points from which the 6DoF transformation that best aligns them is obtained. Figure 1 shows three 3D scenes captured by different 3D sensors. The one on the left was captured by a stereo camera, an infrared time-of-flight camera was used to obtain the scene in the middle, while the right one corresponds to a 3D range laser scanner.

The process of computing the robot movement from two consecutive 3D observations can be seen as a particular case of the more general range data registration problem. This problem arises in many areas such as industrial parts modeling, topography or photogrammetry. The Iterative Closest Point (ICP) methodology is widely used for registering two sets of 3D range data. Usually, these sets are called the scene, which is the current observation, and the model, represented by the previous observation. ICP was first proposed in Chen and Medioni (1991) and Besl and McKay (1992), and has become the most successful method for aligning 3D range data. Summarizing, ICP is an iterative method that is divided into two main steps. In the first one, the relationship between the two sets of input data is obtained by searching for the closest points. This search gives a list of matched points. The second step consists in computing the transformation that best aligns the two sets from the matches found in the previous step. This transformation is used to transform the input sets before the first step of the next iteration. The algorithm iterates until some convergence criterion is reached. The method presented in this paper is an ICP modification. Its main contribution is that it improves both the time and the accuracy of ICP, which makes it suitable for use in a wide variety of mobile robotic applications.

The ICP method needs an initial approximate transformation between the two sets before computing their alignment. In mobile robotics, this initial transformation can be obtained from the odometry or by applying some method to obtain a coarse alignment (Brunnstrom and Stoddart 1996; Chung et al 1998; Chen et al 1999; Johnson and Hebert 1997; Kim et al 2003) before applying ICP. Some of these approaches, like Brunnstrom and Stoddart (1996), take too much time to obtain the result to be usable in a mobile robotic task. On the other hand, other methods, like those based on PCA (Chung et al 1998; Kim et al 2003), can obtain a result in a short time but with less accuracy. The approach described in this paper does not require an initial coarse transformation in order to compute the 3D registration. This is possible because we use the information of the planar surfaces present in the scenes. The extraction of the descriptions of these surfaces in a 3D scene gives us not only information about the position of the objects in the scene but also information about their orientation. This extra information can be used both to find the correct alignment of two 3D scenes without an initial approximate transformation and to reduce the size of the input data, and therefore the time needed to obtain the result.

In the literature there are several proposals for improving the original ICP method both in speed and accuracy. In Turk and Levoy (1994); Masuda et al (1996); Weik (1997); Rusinkiewicz and Levoy (2001), different approaches are shown to reduce the size of the input data by selecting just a subset of points for the matching step. However, the amount of information discarded by these techniques is considerable and may affect the accuracy of the results. The addition of some constraints, such as color similarity (Godin et al 1994; Pulli 1997) or surface normal vectors (Pulli 1999), in the matching step can improve the consistency of the matched pairs and thus the final alignment computed by the ICP. The main problem of these methods is that the constraints depend on the kind of sensor used. Once the matches have been established, outliers should be detected in order to improve the results. We use an approach similar to the ones used in Godin et al (1994); Pulli (1999); Rusinkiewicz and Levoy (2001), in which each match is weighted by the inverse of the distance between the points involved in the match; in the present paper, however, both object position and orientation are used to compute this distance. Other approaches that reject invalid matches based on some compatibility test (Turk and Levoy 1994; Masuda et al 1996; Dorai et al 1998) also depend on the 3D sensor used. In Salvi et al (2007) a deeper study comparing different solutions for ICP in terms of speed and accuracy is performed. Nowadays, new ICP variations are still appearing (Du et al 2007a,b; Nuchter et al 2007; Censi 2008; Armesto et al 2010).

Here we propose a new mobile robotics oriented 3D registration method that improves previous ICP solutions both in speed and accuracy. The successful reconstruction is based on the use of three-dimensional features extracted from 3D scenes. Thus we can reduce the number of iterations required, improve the response to outliers and enhance the independence from the initial alignment of the data sets. Our work began with Viejo and Cazorla (2007), where we proposed a planar-patch-based method for pose registration limited to 3DoF. Here we present a generalization of this method to a robot movement in 6DoF. We also include comprehensive testing, both quantitative and qualitative, demonstrating the functionality of the method. Although developed independently, the proposals here are similar to those in Segal et al (2009).

Fig. 1 Input 3D point scenes. The scene on the left was captured indoors by a stereo camera. A SR4000 infrared camera was used to capture the image shown in the middle. Finally, the right one comes from an outdoor environment captured by a 3D range laser.

The rest of the paper is organized as follows. Preliminary computations to obtain both the scene modelization and the rotation matrix for aligning two sets of planar patches, together with a description of the mapping problem, are covered in Section 2. Then, our 3D planar patches alignment method is described in Section 3. After that, the results of several experiments are shown in Section 4. Finally, in Section 5 we discuss our conclusions.

2 Preliminaries

Our goal is to construct a 3D map from a set of observations D = {d_t}, t = 1...T, taken from any 3D measurement device (from now on, 3D device). The 3D device, placed on top of a robot or other platform, is moving around the environment (Figure 2 shows the different robots and 3D devices used in this paper).

Our method can process 3D data coming from different devices, each of them having different errors and features. First, we use a Digiclops stereo camera from Point Grey. It has a range of 8 meters and is ideal for indoor environments. It can provide up to 24 images per second with gray level information for each point. However, it suffers from the lack of texture: areas of the environment without texture cannot provide 3D data. Moreover, it has a measurement error of 10%. For outdoor environments we use a 3D sweeping laser unit, an LMS-200 Sick laser mounted on a sweeping unit. It does not suffer from the lack of texture and its range is 80 meters with a measurement accuracy of ±15 mm. The main disadvantage of this unit is the data capturing time: it takes more than one minute to take a shot. Also, it does not provide color information. In addition, we test a SR4000 camera, which is a time-of-flight camera based on infrared light. It also does not suffer from the lack of texture, but its range is limited to 10 meters, providing gray level color from the infrared spectrum. Finally, we use a data set (Borrmann and Nuchter 2012) from a Riegl VZ-400 3D range laser. It has a very fast capturing time (up to 300 measurements per second), a high measurement range (up to 600 m) and a high accuracy (<5 mm).

Fig. 2 Two robots used in this paper together with different 3D devices: from left to right, a stereo camera, a 3D sweeping laser and a SR4000 camera.

At each time t, an observation, i.e. 3D data, is collected: d_t = {d_1, d_2, ..., d_n}, where d_i = (X, Y, Z). This data can also contain more information (gray or color information, etc.). The mapping (or registration) problem can be summarized as:

$$m^* = \arg\max_{m} P(m \mid D, U) \qquad (1)$$

i.e., finding the best map which fits the observations D and the movements of the robot U = {u_1, u_2, ..., u_t}. In our case, instead of using the robot movements, we propose the use of the observations for both mapping and pose estimation from two consecutive frames.

Thus, we estimate each robot movement u_i using the observations from the previous and posterior poses of that movement. Once the movements have been estimated, a classical method to build an incremental map, using the calculated movements, is applied.

2.1 3D Scene Modelization

Our 3D registration method exploits geometrical and spatial relationships between scene surfaces. This extra information can be obtained by building a model of the input raw data. Furthermore, the amount of information gathered in each raw 3D scene is usually so huge that the time needed to perform pairwise registration can increase dramatically. This model is also useful for reducing the input data complexity without reducing the amount of information in the scene. In a previous work (Viejo and Cazorla 2008), we described a method with which we can obtain a model of the planar surfaces in a 3D scene using planar patches. Furthermore, this model can be obtained in logarithmic time.

Summarizing, our method for extracting planar patches from a 3D point set works in two steps. In the first one, we select a subset of points by means of classical computer vision seed selection methods (Fan et al 2005; Shih and Cheng 2005). Seeds are spread along the surfaces of the scene. In the second step, for each 3D point selected, we check whether its neighbourhood corresponds to a planar surface or not. The basic idea consists in analyzing each point in the local neighbourhood by means of a robust estimator. A singular value decomposition (SVD) based estimator is used to obtain surface normal vectors. Using this method, when the underlying surface is a plane, the minimum singular value σ_3 is much smaller than the other two singular values (σ_1 and σ_2), and the singular vector related to the minimum singular value is the normal vector of the surface at this point. From the singular values we can compute a thickness angle (Martín Nevado et al 2004) that measures the dispersion angle of the points in the planar patch neighbourhood:

$$\gamma = \arctan\left(\frac{2}{\sqrt{3}} \cdot \frac{\sigma_3}{\sqrt{\sigma_1 \sigma_2}}\right) \qquad (2)$$

The thickness angle gives us an idea of how well the 3D points in a neighbourhood fit a planar surface. We use it to discard patches with a thickness angle greater than 0.05 degrees. In this way we achieve the highest accuracy for our planar patch extraction method. Figure 3 shows the result of computing the 3D planar patch model for three different 3D scenes. Planar patches are represented by blue circles. The radius of these circles depends on the size of the neighbourhood used to test planarity. This method has been proven to work with 3D scenes captured by different kinds of 3D sensors, and it can be applied to other applications, like Muñoz-Salinas et al (2008) or Katz et al (2010).

Fig. 3 3D scene model. Blue circles represent the 3D planar patches extracted from the input raw scenes. The scene shown on the left was captured indoors by a stereo camera. For the middle one, a 3D range laser was used outdoors. Finally, a SR4000 infrared camera was used to capture the one on the right.
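To make the patch extraction step concrete, the following minimal sketch tests whether the neighbourhood of a seed point forms a planar patch, using the SVD-based normal estimator and the thickness angle of Equation 2. It is illustrative only (the implementation evaluated in Section 4 is written in Java): the function name and return convention are assumptions, while the 0.05-degree threshold is the one reported above.

```python
import numpy as np

THICKNESS_LIMIT_DEG = 0.05  # patches with a larger thickness angle are discarded

def fit_planar_patch(neighbours):
    """neighbours: (N, 3) array of 3D points around a seed point.
    Returns (center, normal, gamma_deg) for a planar patch, or None."""
    center = neighbours.mean(axis=0)
    # SVD of the centered neighbourhood; singular values come out in
    # decreasing order, so s[2] is the minimum singular value sigma_3.
    _, s, vt = np.linalg.svd(neighbours - center)
    sigma1, sigma2, sigma3 = s
    # Thickness angle (Equation 2): dispersion of the points around the plane.
    gamma = np.degrees(np.arctan((2.0 / np.sqrt(3.0)) * sigma3 / np.sqrt(sigma1 * sigma2)))
    if gamma > THICKNESS_LIMIT_DEG:
        return None  # neighbourhood is not flat enough to be a planar patch
    # The singular vector of the minimum singular value is the surface normal.
    return center, vt[2], gamma
```

Running this test only at the selected seed points, rather than at every point of the scene, is what keeps the modeling step computationally cheap.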

3 3D Scene Planar Patch Based Alignment

The main problem of all ICP solutions is that they need an initial approximate transformation in order to reach an accurate solution. The method described in this section resolves this problem by computing the best alignment of the input data without this initial approximation. This becomes possible by using the efficient scene model described in the previous section. In this way, the need for an initial transformation is replaced by the 3D scene geometry information extracted during the modeling step. Furthermore, the use of a 3D model makes it possible to improve the accuracy in the same way as the use of surface tangents proposed in Zhang (1994). For our method this means that, in contrast to 3D point based ICP solutions, it is not necessary to find exactly the same planar patches in the two scenes in order to match them; it is enough to find planar patches that belong to the same surface. Scene geometry resolves the problem of identifying the different surfaces, since it is invariant to changes of the robot pose in 6DoF. In the same way, we use scene geometry in order to detect and remove outliers.

We want to find the 6DoF movement performed by the robot between two consecutive poses from the data taken in those poses, d_i and d_{i+1}. In the general case, we need to find the 6DoF transformation which minimizes:

$$(\mathbf{R}^*, \mathbf{T}^*) = \arg\min_{(\mathbf{R},\mathbf{T})} \frac{1}{N_s} \sum_{i=1}^{N_s} \left\| \mathbf{n}^m_i - (\mathbf{R}\mathbf{n}^s_i + \mathbf{T}) \right\|^2 \qquad (3)$$

where n^m_i and n^s_i are the matched points from the two consecutive poses.

Instead of calculating rotation and translation iteratively, we propose to decouple them. As planar patches provide useful orientation information, we first calculate the rotation and then, with this rotation applied to the model set, we calculate the translation. In the next subsections we describe how to calculate both.

3.1 Distance between planar patches

Here, we define a new distance function between two planar patches in order to compute the alignment of two sets of 3D planar patches. This new function allows us to search for the closest patches between the two input sets. From now on we define a planar patch by a point belonging to the patch (usually its center) and its normal vector, π = {p, n}. Let d_a(n_1, n_2) be the angle between two vectors n_1 and n_2. Then the distance from a plane π_i to another π_j can be expressed as

$$d(\pi_i, \pi_j) = d_a(\mathbf{n}_i, \mathbf{n}_j) + \xi\, d(\mathbf{p}_i, \mathbf{p}_j) \qquad (4)$$

where ξ is a factor used to convert Euclidean distances between the centers of the patches into the range of normal vector angles.

Once we have a way to compare two planar patches, we can obtain a list of the patches from the scene set closest to the model. Then we have to deal with the outliers. In order to avoid the influence of outliers, we test each match for consistency with the others. The concept of consistency is based on the kind of transformation we are looking for. Since this is a rigid transformation, we can assume that the distance between the patches in a non-outlier pair is similar for all pairs. In contrast, we mark a pair as an outlier when its distance is far from the mean of the distances of all the pairs in the matches list. In this way, we compute the weight for the i-th pair with the following expression:

$$w_i = e^{-\left(d(\pi_i, \pi_j) - \mu_k\right)^2 / \sigma_k^2} \qquad (5)$$

where μ_k and σ_k are the mean and the standard deviation of the distribution of distances between matched planar patches in the k-th iteration. In this way, we give a weight close to 1 to those matches whose distances are close to the mean; the further a pair's distance is from the mean, the lower its weight is set.
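The following sketch illustrates the distance of Equation 4 and the weighting of Equation 5. It is a hedged example rather than the evaluated implementation: the helper names are hypothetical, and the default value of ξ is a placeholder that would have to be tuned to the sensor's working range.

```python
import numpy as np

def angle_between(n1, n2):
    """Angular distance d_a between two unit normal vectors, in radians."""
    return np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))

def patch_distance(patch_i, patch_j, xi=0.1):
    """Equation 4: d(pi_i, pi_j) = d_a(n_i, n_j) + xi * ||p_i - p_j||.
    A patch is a (center, normal) pair; xi converts Euclidean distance
    into the range of normal-vector angles (placeholder value)."""
    (p_i, n_i), (p_j, n_j) = patch_i, patch_j
    return angle_between(n_i, n_j) + xi * np.linalg.norm(p_i - p_j)

def match_weights(distances):
    """Equation 5: Gaussian weights around the mean match distance, so
    matches far from the mean (likely outliers) get weights near 0."""
    d = np.asarray(distances, dtype=float)
    mu, sigma = d.mean(), d.std() + 1e-12  # epsilon guards the degenerate case
    return np.exp(-((d - mu) ** 2) / sigma ** 2)
```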

3.2 Rotation alignment

From the descriptions of the planar patches extracted from two 3D scenes, we can compute the best rotation for the alignment of the two scenes. This can be done by applying a well-known method based on the Singular Value Decomposition (SVD). Let us briefly review how to compute this SVD-based rotation alignment. Let S be the scene set of N_s planar patches and M be the model set of N_m planar patches. We first find the correspondences between both sets, using the distance in Equation 4, so that each planar patch π^s_i ∈ S is matched with one π^m_j ∈ M. Let R be a 3 × 3 rotation matrix; then we have to find the matrix R* which minimizes the following function:

$$\mathbf{R}^* = \arg\min_{\mathbf{R}} f_a(\mathbf{R}) = \frac{1}{N_s} \sum_{i=1}^{N_s} \left( d_a(\mathbf{n}^m_i, \mathbf{R}\mathbf{n}^s_i) \right)^2 \qquad (6)$$

f_a is the mean square error of the angular distance between matched planar patches. Note that we only use d_a, the angular distance between two planar patches. In order to minimize f_a, we compute the 3 × 3 cross-covariance matrix Σ_sm of the planar patches' normal vectors with the expression

$$\Sigma_{sm} = \sum_{i=1}^{N_s} w_i\, \mathbf{n}^s_i (\mathbf{n}^m_i)^t \qquad (7)$$

where w_i is the weight of the i-th match (see Equation 5). This weight is used for outlier rejection. The matrix Σ_sm can be decomposed as Σ_sm = V∆U^T, where ∆ is a diagonal matrix whose elements are the singular values of Σ_sm. The rotation matrix that best aligns the two sets of planar patches is then computed as

$$\mathbf{R} = V \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \delta \end{pmatrix} U^T \qquad (8)$$

where δ = 1 when det(VU^T) ≥ 0 and δ = −1 if det(VU^T) < 0. The result of this procedure is the rotation matrix that best aligns the two input planar patch sets. Further information about this method can be found in Kanatani (1993); Trucco and Verri (1998).
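A compact sketch of this rotation step is given below. It is illustrative, not the evaluated Java implementation; it follows the standard Arun/Kabsch closed-form solution for the least squares counterpart of Equation 6, written with numpy's SVD convention, so the placement of the V and U factors differs superficially from the naming in Equation 8.

```python
import numpy as np

def best_rotation(scene_normals, model_normals, weights):
    """scene_normals, model_normals: (N, 3) matched unit normal vectors.
    weights: (N,) match weights from Equation 5.
    Returns the 3x3 rotation R that best maps scene normals onto model normals."""
    # Weighted cross-covariance of the matched normals (Equation 7).
    cov = (weights[:, None] * scene_normals).T @ model_normals
    # Closed-form solution via SVD; delta guards against reflections,
    # keeping the result a proper rotation (the role of Equation 8).
    u, _, vt = np.linalg.svd(cov)
    delta = 1.0 if np.linalg.det(u @ vt) >= 0 else -1.0
    return vt.T @ np.diag([1.0, 1.0, delta]) @ u.T
```

Recomputing the matches, the weights and this closed-form update until the rotation stabilizes gives the first loop of Algorithm 1 below.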

3.3 Translation alignment

Once the rotation has been obtained, we have to compute the transformation that minimizes the quadratic mean error of the alignment of the input planar patches. Original ICP uses a closed-form solution based on quaternions to solve the least squares problem (Equation 3) that gives the rotation matrix for the solution. From this rotation, the translation is simply computed as the vector that joins the centers of mass of the two input point sets. Nevertheless, the final transformation obtained is inaccurate due to outliers. As we mentioned before, many methods to avoid the error produced by outliers have been proposed. These methods rely on a good initialization in order to obtain accurate results. Our contribution here is a new method that is less sensitive to the initial alignment of the input data sets, exploiting the extra information given by planar patches in order to obtain more accurate results.

Let us suppose that the rotation needed to align the input sets has already been computed and applied following the method described in the previous section. The problem then is how to obtain the best estimation of the translation. The solution that we propose consists in considering that the matches list represents a force system that can be used to compute the correct solution. For the i-th match, we set the force vector f_i in the direction of the normal of the model's patch in the match, with the magnitude of the distance between the planes defined by the patches. The resultant translation of a force system created from N matches is a vector computed by the following expression:

$$\mathbf{T} = \frac{\sum_{i}^{N} \mathbf{f}_i}{N} \qquad (9)$$

Furthermore, local minima can be avoided more successfully if we consider the main directions of the scene. These can be calculated from the Singular Value Decomposition of the cross-covariance matrix of all the force vectors. Let us call D_1, D_2, D_3 the three main direction vectors. Equation 4 can be rewritten to represent how much each pair of planar patches contributes to a specific main direction:

$$d_k(\pi_i, \pi_j) = \left( d_a(\mathbf{n}_i, \mathbf{n}_j) + \xi\, d(\mathbf{p}_i, \mathbf{p}_j) \right) \cdot \cos\left( d_a(\mathbf{n}_i, \mathbf{D}_k) \right) \qquad (10)$$

At this point, d_a(n_i, n_j) should be close to zero, as the rotation alignment has already been solved. Using Equation 10 we can obtain the mean and standard deviation of the distances along each main direction. In this way, Equation 5 is redefined as follows:

$$w^d_i = e^{-\left(d(\pi_i, \pi_j) - \mu_{k,d}\right)^2 / \sigma_{k,d}^2} \qquad (11)$$

According to Equations 10 and 11, Equation 9 is then transformed to take the main directions into account:

$$t_1 = \tau \sum_{i=1}^{N} (\mathbf{f}_i \cdot \mathbf{D}_1)\, w^1_i$$

$$t_2 = \tau \sum_{i=1}^{N} (\mathbf{f}_i \cdot \mathbf{D}_2)\, w^2_i \qquad (12)$$

$$t_3 = \tau \sum_{i=1}^{N} (\mathbf{f}_i \cdot \mathbf{D}_3)\, w^3_i$$

where f_i is the force vector of the i-th match, N is the size of the matches list and τ is a normalizing factor. Then, the transformation that reduces the force system's energy is

$$\mathbf{T} = t_1 + t_2 + t_3 \qquad (13)$$

This computation is included in an ICP-like loop, in which matches are recomputed after each iteration, in order to reach a minimum energy state for the system. Using this approach we can reduce the number of iterations and achieve registration with greater independence from the initialization. This method achieves the alignment of the two input sets with a significantly smaller error than other methods, like ICP.
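The following sketch illustrates one iteration of this translation loop: build a force per match, extract the main directions from the force vectors, weight the contributions per direction and combine them (Equations 11 to 13). It is an illustration under stated assumptions: the helper names are invented, the per-direction weights are computed from the force projections as a stand-in for the per-direction distances of Equation 10, and τ is realized here as a normalization by the sum of weights.

```python
import numpy as np

def force_vectors(scene_patches, model_patches, matches):
    """One force per match: direction of the model patch normal, magnitude
    equal to the distance between the planes defined by the two patches."""
    forces = []
    for i, j in matches:                      # i: scene index, j: model index
        p_s, _ = scene_patches[i]
        p_m, n_m = model_patches[j]
        forces.append(np.dot(p_m - p_s, n_m) * n_m)
    return np.asarray(forces)

def translation_step(forces):
    """One update of the force system: combine the forces along the three
    main directions D1, D2, D3 (Equations 12 and 13)."""
    # Main directions from the SVD of the centered force vectors.
    _, _, vt = np.linalg.svd(forces - forces.mean(axis=0))
    t = np.zeros(3)
    for d_k in vt:                            # rows of vt are D1, D2, D3
        proj = forces @ d_k                   # f_i . D_k for every match
        mu, sigma = proj.mean(), proj.std() + 1e-12
        w = np.exp(-((proj - mu) ** 2) / sigma ** 2)   # per-direction weights
        t += ((proj * w).sum() / w.sum()) * d_k        # weighted mean along D_k
    return t                                  # T = t1 + t2 + t3 as a vector
```

Embedded in the second loop of Algorithm 1, the translation accumulates these steps, with matches recomputed each time, until the force system's energy stops decreasing.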

3.4 Complete algorithm

The method described in this section can be used for computing the transformation that best aligns two sets of 3D planar patches. The complete method for computing the planar-patch-based 3D egomotion is divided into two main ICP-like loops. The first one is used to compute the best rotation alignment of the normal vectors of the 3D planar patches. Once the normal vectors of the planar patches are aligned, the second loop is used to compute the minimum energy force system that represents the best translation alignment. The resulting algorithm is shown in Algorithm 1. A complete example of how this method works is shown in Figure 4. The input data was taken by a robot in a real scenario. In this case, the movement performed was a turn. The sequence goes from left to right and from top to bottom. The scenes are shown from the top, looking towards the floor. The picture in the upper-left corner shows the initial state, in which two sets of 3D points have been recovered by a 3D range laser. The next picture shows the planar patches extracted from the two input sets, represented in green and blue. After that, different steps of our alignment algorithm are shown in the next pictures, in which matches are represented by red segments. The resulting 3D registration of the two sets of 3D planar patches is shown in the bottom-right picture. Since the input data was taken from a real scenario, there are also outliers. This can be observed in the last picture, where many planar patches from one set do not have a corresponding patch in the other set and vice versa.

Algorithm 1 Plane Based Pose Registration (M, S: 3D planar patch sets)
  R = I_3
  repeat
    S_R = ApplyRotation(S, R)
    P_t = FindClosests(M, S_R)
    W_t = ComputeWeights(P_t)
    R = UpdateRotation(M, S, P_t, W_t)
  until convergence
  S_R = ApplyRotation(S, R)
  T = [0, 0, 0]^t
  repeat
    S_T = ApplyTranslation(S_R, T)
    P_t = FindClosests(M, S_T)
    {D_1, D_2, D_3} = FindMainDirections(P_t)
    W_t = ComputeWeights(P_t, {D_1, D_2, D_3})
    T = UpdateTranslation(M, S, P_t, W_t, {D_1, D_2, D_3})
  until convergence
  return [R | T]

4 Results

In this section we show the results of different experiments using the method presented in this paper for aligning 3D planar patch sets. First, we study the advantages of our method in comparison to the ICP algorithm. This ICP algorithm incorporates the outlier rejection approach proposed in Dalley and Flynn (2002). KD-tree and point subsampling techniques are also included for reducing the computational time, as described in Rusinkiewicz and Levoy (2001). We analyze the performance and the accuracy of the results. After that, we show different results on automatic map building using this approach.

The first experiment consists in testing the accuracy of the computed registration. We test both the incidence of the outliers present in the scene and the influence of different initial scene alignments. For the first test, we chose a 3D scene captured by our robot in a real environment to be the model set for the alignment. The scene set is obtained by applying a transformation to the model. Since this transformation is known, it represents a ground truth for computing the alignment error. Outliers are simulated by removing areas from the scene set. This test is performed on a large set of three-dimensional scenes to obtain a robust estimation of the errors. A comparison between the improved ICP and our approach for different outlier rates can be observed in Figure 5. Our proposal always obtains better results than ICP, since we do not require an initial approximate transformation and we are more robust to the amount of outliers. It has been reported that ICP does not reach the global minimum beyond a given outlier rate. In contrast, we can obtain the correct alignment even for an outlier rate of 40-50 per cent.

For the second test we used a rotating unit from PowerCube that allows us to rotate a 3D sensor along the Y axis to different known orientations. The experiment consists in performing a complete turn of the sensor around itself, capturing 3D data sets at regular intervals. The experiment is performed several times using different angular rotations at regular intervals of 4 degrees. The objective of this experiment is to test the response of the method to different initializations. Again, we compare our method with the above-mentioned ICP version. The results can be observed in Figure 6. As expected, ICP depends on the initialization, and for changes of more than 10 degrees the algorithm always ends in a local minimum. On the other hand, our approach significantly improves this behavior and allows us to obtain accurate results for displacements of up to 40 degrees. Beyond 45 degrees, the symmetries of the environment cannot be resolved and make our method end in a local minimum.

We also compare the time needed to compute the transformation with ICP and with our method. Both methods have been implemented in Java and executed on version 1.6 of the Java Virtual Machine running on an Intel T5600 at 1.83 GHz with 2 GB of RAM. While ICP needs more than 40 seconds, our algorithm can find the solution in less than 5 seconds. To this we have to add the time needed to extract the planar patches of each 3D scene. Nevertheless, this is a computationally low-cost algorithm that usually takes less than one second to model a 3D scene.

Fig. 4 Overview of the working method. From the two input 3D point sets (upper-left corner) planar patches are extracted (upper-right corner). Using the planar patches, the best alignment transformation is computed with our modified ICP-like approach. After a few steps, the resulting transformation is obtained (bottom-right corner). The 3D planar patches that do not have a corresponding one in the other set are outliers.

For the next experiments, our method was used to perform incremental 3D map building. As the robot moves collecting 3D data with its sensor, the transformation between each two consecutive poses is obtained. This transformation is used to hold all the captured 3D points in a common reference frame. A global rectification is not performed, so the errors accumulate as the map is built. The proposed method is accurate enough that these errors do not affect the results too much. In order to fuse the 3D points that come from the same object, an occupancy grid is used. At the time these experiments were performed, we did not have the equipment necessary to obtain a ground truth with which to compute a trajectory error estimation. For this reason, in most of the experiments the trajectory was chosen to be a loop, in order to compare the positions in the map of re-observed objects.

4.1 Indoors Stereo Data Set

For the first experiment on map building we used a Digiclops stereo camera as the 3D sensor device. The experiment was performed indoors. It is well known that stereo systems do not work correctly when the scene's surfaces have a flat texture. Moreover, the maximum range of our stereo camera is rather short (up to 5-6 meters). Our method needs planar surfaces in three orthogonal directions; if this requirement is not fulfilled, our method will fail to compute the robot movement correctly. The less information the sensor provides, the fewer planar descriptions are found and thus the lower the accuracy of the method. The resulting 3D reconstruction from the alignment of four consecutive 3D scenes can be observed in Figure 7. This experiment shows how our method can deal with an inaccurate 3D sensor like a stereo camera.

Fig. 5 Mean error and standard deviation for different rates of outliers using our approach and an improved ICP algorithm. Results are separated for translation, on the top, and rotation, on the bottom.

Fig. 6 Alignment error for different initializations. Planar-patch-based egomotion is less affected by the initial displacement of the input data sets.

Fig. 7 Indoor map building experiment using a stereo camera: resulting 3D reconstruction.

4.2 Indoors SR4000 Data Set

In this experiment we used a SR4000 infrared camera. It provides an interesting advantage over a stereo camera: it obtains information independently of the scene's surface texture. The maximum range of this camera is 10 meters, which is sufficient for an indoor environment. Figure 8 shows a 360-frame sequence in an indoor environment.

Fig. 8 Indoor map building experiment using a SR4000 camera: zenithal view of the resulting 3D reconstruction. The red line shows the path followed by the robot.

4.3 Polytechnic College Data Set

Our next experiment was performed outdoors, in a square between the buildings of the Polytechnic College at the University of Alicante. This time the 3D sensor used was a sweeping range laser. 30 3D scenes were captured during the 45 meters long, almost-closed loop trajectory. In Figure 9 the reconstructed map can be observed from a zenithal view, superimposed on a real aerial picture of the environment. The points that come from the floor have been removed for better visualization. Furthermore, it can be noticed that there are many points that were captured in the open space. These points come from people who were moving freely around the robot during the experiment. Our registration method is not affected by dynamic objects, since it uses planar surfaces, which remain mostly still. Those planar surfaces that may change their position, like doors, are detected by the outlier rejection step. The computed trajectory for this experiment is represented by a red line. The resulting map appears to be quite accurate, as the estimated final pose error is around 15 centimeters.

Fig. 9 Zenithal view of the resulting map of the Polytechnic College data set experiment using a 3D laser. Map points are superimposed on a real aerial picture. The red line represents the computed robot trajectory.

Figure 10 shows a free view of the reconstructed 3D map of the Polytechnic College data set experiment. The computed 3D trajectory is also represented. Building facades, street lamps, trees and other objects can be recognized. The double representation of some objects is produced by accumulated errors.

Fig. 10 3D map view of the experiment performed in the Polytechnic College. Some objects in the environment such as trees, street lamps and buildings can be observed.

4.4 Faculty of Science Data Set

Our next experiment represents the longest trajectory performed by the robot. It corresponds to the surroundings of the Faculty of Science at the University of Alicante. This is a low-structured environment with lots of trees, hedges, street lamps, etc. The reconstructed 3D map can be observed from a zenithal view in Figure 11. This time, the map is built from 117 3D scenes along a 130 meters trajectory. This time the error, that is, the difference between objects observed at the beginning and at the end of the trajectory, is about 30 cm. The 3D map from a free view can be observed in Figure 12, where environment details can be appreciated.

Fig. 11 Reconstructed map of the experiment performed in the Faculty of Science. Data come again from a 3D range laser. The computed trajectory is represented by the red line.

Fig. 12 Free view of the 3D map reconstructed for the Faculty of Science data set experiment. Objects such as buildings, different kinds of trees, street lamps, hedges, etc. can be distinguished.

4.5 University of Alicante’s Museum Data Set

Since our previous experiments were performed in environments in which the floor was almost flat, for our next experiment we looked for a place in which the movement performed by the robot was clearly a 6DoF one. For this reason, we chose the entrance of the University of Alicante's Museum. This place is a covered passageway with different slopes. In this place, the movements performed by the robot are more complex, since we chose to make the robot turn in the middle of the slopes. Figure 13 shows the resulting 3D map for this experiment and gives a clear idea of the trajectory performed by the robot. In the upper picture the zenithal view can be observed, while the bottom picture shows a lateral view in which the slopes can be observed. The map was built from 71 3D scenes captured by a 3D range laser along a 35 meters long trajectory. The difference between the first and the last pose is about 20 centimeters.

Fig. 13 3D map built from the data obtained at the University of Alicante's Museum. The upper image shows a zenithal view in which the trajectory, represented by a red line, can be appreciated. The bottom image shows a lateral view of the reconstructed 3D map. The slopes present in the environment can be observed here.

4.6 Bremen Data Set

For our last experiment we use a data set recorded by Dorit Borrmann and Andreas Nuchter from Jacobs University Bremen gGmbH, Germany, as part of the ThermalMapper project; it is available for free download (Borrmann and Nuchter 2012). This data set was recorded using a Riegl VZ-400 and an Optris PI IR camera. It contains several 3D scans taken in the city center of Bremen, Germany. The data set consists of data collected at 11 different poses. The points from the laser scans are attributed with the thermal information from the camera. Approximately, each two consecutive poses are separated by 40 meters. For this experiment, a ground truth is available, as the data set includes pose files calculated using 6D SLAM (Nuchter et al 2007). Comparing the trajectory obtained using the method presented in this paper with the ground truth, we get a root mean square (RMS) error of 0.89 meters in translation and 1.271 degrees in rotation. Even though large movements were performed between poses, no odometry information was needed for most of the poses. Only in three of them, whose rotations are bigger than 45 degrees, was odometry used to obtain the correct transformation.

Finally, in Table 1 we present several statistics gathered during these experiments. The first column shows the number of 3D scenes registered. The mean number of 3D points per scene and the mean number of 3D planar patches are shown in the second and third columns. The last two columns show the mean time needed to extract the planar patches and to apply the 3D registration. All our data sets are public and available for download (Viejo and Cazorla 2013).


Table 1 Statistical data gathered during the 3D map building experiments. For each data set, the number of scenes, mean number of points per scene, mean number of planar patches extracted per scene, mean time for extracting planar patches and mean time for computing the transformation between two consecutive scenes are shown.

Set             Scenes   Points (mean)   Patches (mean)   Patch time (mean)   Registration time (mean)
Stereo          4        134662          54               1.042 s             0.065 s
SR4000          360      19371           106              0.098 s             0.567 s
P. College      30       67611           640              0.984 s             2.022 s
F. of Science   117      56674           657              0.600 s             2.413 s
Museum          71       75859           941              0.674 s             5.917 s
Bremen          11       7037852         2208             1.617 s             54.813 s

Fig. 14 Reconstructed map of the Bremen data set. In this case, data come from a Riegl VZ-400. Thermal information is used to color the points. The computed trajectory is represented by the red line.

Fig. 15 Free view of the 3D map reconstructed for the Bremen data set experiment. The reconstruction of the movement in such a big environment can be achieved using our approach.

5 Conclusion

This paper addresses the problem of finding the movement performed by a robot using the information collected by its 3D sensor. This process is called 3D pose registration and can be used for automatic map building. Our method is designed to obtain accurate results even when the scenes present a large amount of outliers. This makes the use of an initial rough transformation between each pair of scenes unnecessary. Furthermore, since our method is designed to work with 3D data, it can register a six degrees of freedom movement. This makes it highly suitable for those robotics applications in which odometry information is not available or is not accurate enough, such as aerial or underwater robots.

Another important feature of our method is that it works with raw 3D data, that is, unorganized 3D point sets. In contrast to other previous works on 6D mapping like Borrmann et al (2008); Kummerle et al (2008); Censi (2008); Armesto et al (2010), this feature makes our method independent of the 3D sensor used to obtain the data. In this way, applying our method to different robots equipped with different sensors does not require a big programming effort. The first step of our algorithm is the modeling of the input data. This step gives us three advantages. The first one is the reduction of the data size without a significant reduction of the amount of information present in the original scene. This greatly decreases the time needed by the algorithm to obtain the results. The second advantage consists in the extraction of important structural relations in the data that are exploited in our algorithm in order to improve the results. Finally, the third advantage we get from using planar surface descriptions is that they make it easier to ignore dynamic objects, such as people, while computing the registration.

The ICP algorithm is an unstable method that does not always reach an optimal solution. Usually, small changes in the input data make ICP finish in a local minimum. Although there are many approaches that reduce the effects of outliers, ICP still cannot obtain good results when the amount of outliers is significant. For mobile robotics, we consider as outliers all the data that can be observed from a pose but not from the previous one, and vice versa. As a robot moves, the maximum range of the robot's sensors and the movement itself introduce outliers in the data. The amount of outliers depends on the distance travelled by the robot between two consecutive poses. The better the aligning method is at detecting and avoiding outliers, the bigger the distance the robot can move and still be registered. In contrast to most of the methods that can be found in the literature for improving the consistency of matched points and for reducing the influence of outliers, our method does not use the internal data representation of the sensor, and in this way it is independent of the sensor used to collect the data.

The proper functioning of the proposed method has been demonstrated with several experiments. The first experiment was designed to study the influence of outliers on the resulting transformation. As was demonstrated, our method can handle up to 40 per cent of outliers in the data, which is considerably better than the amount of outliers tolerated by ICP (less than 15%). This is an important result for mobile robotics, since outliers are introduced into the data mainly by the robot movement. In order to assess the accuracy of the proposed method, several experiments on automatic 3D map building were carried out. Several scenarios were chosen, both outdoors and indoors. We also tested the use of different 3D sensors, such as a stereo camera, a time-of-flight camera and two different 3D range lasers. In all the experiments the map was built from the alignment of every two consecutive 3D images without any kind of global rectification. This means that the errors produced during the alignment of two images are propagated to the next poses of the robot's trajectory. Although this error propagation may highly affect the results, the accuracy of the proposed method makes the reconstruction of the 3D maps possible. In our first outdoor experiments, carried out with a common SICK 3D laser, we obtain very good mapping results. The RMS error for these experiments is about 2 cm for a typical displacement of 1 meter, and 1.25 degrees for a turn of 25 degrees. A similar error is obtained in the experiment carried out with the ThermalMapper data set. Each scene in this data set is obtained in a semi-structured environment from a 360 degrees scan, and the sensor used features high accuracy and long range. Under these conditions we achieve an RMS error of 0.89 meters and 1.27 degrees for displacement and turn, but in this case the movement between poses is bigger than 40 meters. When using sensors with low accuracy, as in our indoor experiments, the RMS error is about 1 cm for a 10 cm displacement, and 1 degree for a turn of 3 degrees. The main problem in the indoor experiments is the short range of the 3D sensor used. This problem may lead to situations in which the robot can obtain almost no information about its environment. Under such circumstances, our method fails to find planar descriptions of the scene's surfaces, and so the robot movement estimation is incorrect. Our data sets are public and available for download (Viejo and Cazorla 2013). As future work, we are working on the extraction of other 3D surface features, such as corners, with which we intend to improve the robustness of our method.

A similar proposal was made by Weingarten in Weingarten et al (2004); Weingarten and Siegwart (2005). He maintains an Extended Kalman Filter (EKF) on every plane extracted from each 3D scene. The EKFs are then used to compute a global rectification of the trajectory performed by the robot. The main difference is that Weingarten uses planes as landmarks and robot odometry in order to update the EKF information. Then, when a loop is detected, a robot pose error reduction is back-propagated. Our proposal is mainly focused on replacing the odometry information. In fact, both methods can be used together: ours for obtaining the robot movement estimation and Weingarten's for computing a global rectification.

Acknowledgements This work has been supported by project DPI2009-07144 from the Ministerio de Educación y Ciencia (Spain) and project GRE10-35 from the Universidad de Alicante (Spain).

References

Agrawal M (2006) A lie algebraic approach for consistent pose registration for general euclidean motion. In: Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 1891–1897

Armesto L, Minguez J, Montesano L (2010) A generalization of the metric-based iterative closest point technique for 3d scan matching. In: IEEE International Conference on Robotics and Automation ICRA 2010, Alaska, USA

Besl P, McKay N (1992) A method for registration of 3-d shapes. IEEE Trans on Pattern Analysis and Machine Intelligence 14(2):239–256

Borrmann D, Nuchter A (2012) Thermo Bremen data set. URL http://kos.informatik.uni-osnabrueck.de/3Dscans/

Borrmann D, Elseberg J, Lingemann K, Nuchter A, Hertzberg J (2008) Globally consistent 3d mapping with scan matching. Robot Auton Syst 56(2):130–142

Brunnstrom K, Stoddart A (1996) Genetic algorithms for free-form surface matching. In: Proceedings of the 13th International Conference on Pattern Recognition, vol 4, pp 689–693

Censi A (2008) An ICP variant using a point-to-line metric. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), Pasadena, CA

Chen CS, Hung YP, Cheng JB (1999) RANSAC-based DARCES: A new approach to fast automatic registration of partially overlapping range images. IEEE Trans Pattern Anal Mach Intell 21(11):1229–1234

Chen Y, Medioni G (1991) Object modeling by registration of multiple range images. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp 2724–2729

Chung D, Yun I, Lee S (1998) Registration of multiple range views using the reverse calibration technique. Pattern Recognition 31:457–464

Dalley G, Flynn P (2002) Pair-wise range image registration: a study in outlier classification. Comput Vis Image Underst 87(1-3):104–115

Dorai C, Wang G, Jain A, Mercer C (1998) Registration and integration of multiple object views for 3d model construction. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(1):83–89

Du S, Zheng N, Ying S, Wei J (2007a) ICP with bounded scale for registration of m-d point sets. In: Proc. IEEE International Conference on Multimedia and Expo (2007), pp 1291–1294

Du S, Zheng N, Ying S, You Q, Wu Y (2007b) An extension of the ICP algorithm considering scale factor. In: Proc. IEEE International Conference on Image Processing ICIP 2007, vol 5, pp V-193–V-196

Fan J, Zeng M, Body M, Hacid MS (2005) Seeded region growing: an extensive and comparative study. Pattern Recognition Letters 26:1139–1156

Godin G, Rioux M, Baribeau R (1994) Three-dimensional registration using range and intensity information. In: Videometrics III, Proc. SPIE, vol 2350, pp 279–290

Goecke R, Asthana A, Pettersson N, Petersson L (2007) Visual vehicle egomotion estimation using the fourier-mellin transform. In: Proc. IEEE Intelligent Vehicles Symposium, pp 450–455

Johnson A, Hebert M (1997) Surface registration by matching oriented points. In: Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp 121–128

Kanatani K (1993) Geometric computation for machine vision. Oxford University Press, Inc.

Katz R, Nieto J, Nebot E (2010) Unsupervised classification of dynamic obstacles in urban environments. J Field Robot 27:450–472

Kim S, Jho C, Hong H (2003) Automatic registration of 3d data sets from unknown viewpoints. In: Workshop on Frontiers of Computer Vision

Koch O, Teller S (2007) Wide-area egomotion estimation from known 3d structure. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition CVPR '07, pp 1–8

Kummerle R, Triebel R, Pfaff P, Burgard W (2008) Monte carlo localization in outdoor terrains using multilevel surface maps. J Field Robot 25(6-7):346–359

Martín Nevado M, Gómez García-Bermejo J, Zalama Casanova E (2004) Obtaining 3d models of indoor environments with a mobile robot by estimating local surface directions. Robotics and Autonomous Systems 48(2-3):131–143

Masuda T, Sakaue K, Yokoya N (1996) Registration and integration of multiple range images for 3-d model construction. In: Proceedings of the 1996 International Conference on Pattern Recognition (ICPR '96), IEEE Computer Society, p 879

Muñoz-Salinas R, Aguirre E, García-Silvente M, González A (2008) A multiple object tracking approach that combines colour and depth information using a confidence measure. Pattern Recognition Letters 29(10):1504–1514

Nuchter A, Lingemann K, Hertzberg J (2007) Cached k-d tree search for ICP algorithms. In: Proc. Sixth International Conference on 3-D Digital Imaging and Modeling 3DIM '07, pp 419–426

Nuchter A, Lingemann K, Hertzberg J, Surmann H (2007) 6D SLAM - 3D mapping outdoor environments. Journal of Field Robotics 24(8-9):699–722

Pulli K (1999) Multiview registration for large data sets. In: Proc. Second International Conference on 3-D Digital Imaging and Modeling, pp 160–168

Pulli KA (1997) Surface reconstruction and display from range and color data. PhD thesis, University of Washington

Rusinkiewicz S, Levoy M (2001) Efficient variants of the ICP algorithm. In: Proc. Third International Conference on 3-D Digital Imaging and Modeling, pp 145–152

Salvi J, Matabosch C, Fofi D, Forest J (2007) A review of recent range image registration methods with accuracy evaluation. Image Vision Comput 25(5):578–596

Segal A, Haehnel D, Thrun S (2009) Generalized-ICP. In: Proceedings of Robotics: Science and Systems, Seattle, USA

Shih FY, Cheng S (2005) Automatic seeded region growing for color image segmentation. Image and Vision Computing 23:877–886

Trucco E, Verri A (1998) Introductory Techniques for 3-D Computer Vision. Prentice-Hall

Turk G, Levoy M (1994) Zippered polygon meshes from range images. In: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, ACM, pp 311–318

Viejo D, Cazorla M (2007) 3d plane-based egomotion for slam on semi-structured environment. In: Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, pp 2761–2766, DOI 10.1109/IROS.2007.4399138

Viejo D, Cazorla M (2008) 3d model based map building. In: International Symposium on Robotics, ISR 2008, Seoul, Korea

Viejo D, Cazorla M (2013) RoVit data set. URL http://www.rovit.ua.es/dataset/

Weik S (1997) Registration of 3-d partial surface models using luminance and depth information. In: Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp 93–100

Weingarten J, Siegwart R (2005) EKF-based 3d slam for structured environment reconstruction. In: Intelligent Robots and Systems, 2005 (IROS 2005), IEEE/RSJ International Conference on, pp 3834–3839

Weingarten J, Gruener G, Siegwart R (2004) Probabilistic plane fitting in 3d and an application to robotic mapping. In: Proc. IEEE International Conference on Robotics and Automation ICRA '04, vol 1, pp 927–932

Zhang Z (1994) Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision 13(2):119–152