
Auton Robot (2014) 36:109–121
DOI 10.1007/s10514-013-9368-6

An active sensing strategy for contact location without tactile sensors using robot geometry and kinematics

Hooman Lee · Jaeheung Park

Received: 28 February 2013 / Accepted: 11 October 2013 / Published online: 29 November 2013
© Springer Science+Business Media New York 2013

Abstract In this study, we develop new techniques to sense contact locations and control robots in contact situations in order to enable articulated robotic systems to perform manipulations and grasping actions. Active sensing approaches are investigated by utilizing robot kinematics and geometry to improve upon existing sensing methods for contact. Compliant motion control is used so that a robot can actively search for and localize the desired contact location. Robot control in a contact situation is improved by the precise estimation of the contact location. From this viewpoint, we investigate a new control strategy to accommodate the proposed sensing techniques in contact situations. The proposed estimation algorithm and the control strategy work complementarily. We verify the proposed algorithm through experiments using 7-DOF hardware and a simulation environment. The two major contributions of the proposed active sensing strategy are the estimation algorithm for contact location without any tactile sensors, and the control strategy complementing the proposed estimation algorithm.

Keywords Active sensing · Contact estimation · Contact location · Compliant motion control · Manipulation

H. Lee (B)
Department of Robot/Cognitive Systems Research, Electronics and Telecommunications Research Institute (ETRI), 218 Gajeongno, Yuseong-gu, Daejeon 305-700, Korea
e-mail: [email protected]; [email protected]

J. Park
Department of Transdisciplinary Studies, Advanced Institutes of Convergence Technology, Seoul National University, 864-1 Iui-dong, Yeongtong-gu, Suwon 443-270, Korea
e-mail: [email protected]

1 Introduction

In recent times, many human-like robots have been developed and used practically. Accordingly, there is a strong need for such robots to be capable of operating in a human environment. Unlike the straightforward factory assembly line environments in which industrial robots operate, the human environment is more complex and unstructured. To adapt to this environment, robots require a perception ability similar to that of humans.

When performing tasks, robots use tactile sensors that are typically installed on their end-effectors. Therefore, control algorithms are designed for performing tasks using the end-effector. In contrast, when performing tasks, humans strongly rely on the sense of touch experienced through their arms, legs, or torso. In particular, when humans face difficulty in securing a clear view, they preferentially use the tactility acquired from the body.

Consider a human aiming to sit down in a chair. A person would roughly know the position of the chair behind him/her and the approach to take; however, the person must also use the tactile response of the body to estimate the chair position and make necessary corrections, because the sense of vision is not available. For human-like robots, Hirukawa et al. (2005) and Ogura et al. (2006) indicated that it is similarly far more beneficial and natural to sense and utilize contacts along the entire body and links.

Many studies have focused on control theory for contact-based manipulation. Park (2006) established control strategies for robots in contact. Park and Khatib (2008) demonstrated multiple contact control. Sentis et al. (2010) proposed a compliant control strategy for humanoids in multi-contact by analyzing internal forces and the center of mass. However, most studies assume that the robot can recognize the contact location on the link using touch sensors. In other


words, contact location estimation has thus far been considered only using touch sensors. Many different types of tactile sensors have been developed previously (Howe 1993; Suwanratchatamanee et al. 2010; Ponce Wong et al. 2012), but these cannot easily be applied to large robotic systems owing to cost and implementation issues.

This study proposes an alternative approach to traditional force and tactile sensing. This approach relies on the use of robot geometry and kinematics to reveal a contact’s location. Petrovskaya et al. (2007) previously studied a probabilistic approach for contact estimation without a tactile sensor by considering a point contact situation with an edge.

Although contact is an essential interaction for manipulators, most robotic systems rely on a priori positional information to perform tasks. In fact, current robotic systems do not adequately sense or use contact information. To realize robust physical interactions with the environment, the robot should allow contact between the environment and any of the links. However, typical position control approaches only consider contact through the end-effector, and they do not consider the system dynamics. In this study, to overcome these issues, new sensing algorithms and control approaches are developed to simultaneously identify the contact position and control the robot in more general contact situations.

2 Problem statement and related works

When a robot performs a task in a contact situation, some contacts may not be detected by vision sensors owing to occlusion (Fig. 1). The blue and red areas respectively indicate regions that the vision sensor can and cannot detect. In this situation, the contact location cannot be defined by vision sensors without using tactile sensors on the bottom of the forearms.

The proposed contact location estimation algorithm uses a geometric model and the joint configurations of the robot. Without the exact contact location, typical position control

Fig. 1 Robots under undetected contact situation. The blue and red areas respectively indicate regions that the vision sensor can and cannot detect (Color figure online)

schemes cannot be adopted because of stability and safety issues. So that the robot can be controlled in a manner that is responsive and compliant to the environment, compliant motion control schemes (Chiaverini and Sciavicco 1993; Mason 1981; Khatib 1987) have to be adopted.

Schutter et al. (1999) introduced contact models using a kinematic method. General contact types could be derived from point contact between two smooth surfaces. Lefebvre et al. (2001) improved the modeling of contact formations for the estimation of the geometrical parameters of rigid polyhedral objects during force-controlled compliant motion. To estimate the geometrical parameters, these methods used force, velocity, and position measurements.

In this study, the operational space control framework in contact (Park and Khatib 2008; Khatib 1987) is used. Several contact situations are analyzed through the contact location estimation algorithm under the assumption that contact has occurred on a certain link but the location of contact is not known. The result suggests that the proposed algorithm can be extended to general contact situations.

Hebert et al. (2011) developed a method to estimate the location of a grasped object within the robot’s hand. This method uses a stereo camera, a 6-axis force-torque sensor, and joint angle encoder measurements to estimate the finger contact location; thus, it requires several sensors to estimate the contact locations of the robot. A disadvantage of this method is that it is difficult to determine the contact location of the other links of the robot, because the force-torque sensor is installed on the wrist of the robot.

In this study, it is assumed that the link in contact is known. This problem can be solved through other studies on contact detection without tactile sensors (García et al. 2003; Morinaga and Kosuge 2003; De Luca and Mattone 2005).

De Luca and Mattone proposed a fault detection and isolation (FDI) technique for the real-time detection of collision between a manipulator and the environment. This technique does not require acceleration measurements or an inversion of the robot inertia matrix. It only requires accurate torque measurements to identify the residual joint torque caused by contact. The authors validated their approach through a simulation with a 2-DOF RR manipulator; however, the exact contact location and associated Cartesian force could not be independently identified. Furthermore, the computation time should be considered carefully in this technique when the manipulator has many DOFs. Although this FDI technique is not directly related to our study, its implementation would well complement our active sensing technique.

3 Active sensing approaches for contact location

The active sensing approach consists of two main algorithms. One is a geometrical estimation algorithm (Sect. 3.1) and the


Fig. 2 Contact location identification through link geometry. a Robot in contact situation at different times t1 and t2. A line model is applied on the link surface. b Geometrical estimation of contact location according to the trace of line movements

other is a control strategy (Sect. 3.2) that makes the estimation algorithm successful. As described below, both the estimation algorithm and the control strategy complement each other.

3.1 Contact location estimation

In this framework, the concept of the geometrical estimation algorithm for contact location is introduced, and then, the proposed concept is expanded into a generalized geometry.

3.1.1 Concept of contact location estimation

Figure 2 shows the concept of defining the contact location by a geometrical method. Suppose that the link geometry according to the configurations of the robot is known. Then, we can trace the movement of the link geometry caused by robot motions. Consider a manipulator making contact with the edge of the environment, as shown in Fig. 2a. For the contact situation shown in Fig. 2a, the robot motion is constrained by the contact location. This constraint returns the geometrical intersection between the traces of movements (Fig. 2b).

By applying a line model to the link surface, the contact location can be identified by calculating the intersecting point of the lines according to the motion of the robot at certain times, e.g., t1 and t2. The intersecting points are calculated when the change in orientation with respect to the axis of rotation exceeds a given value, called the estimation resolution (θres) (Fig. 2b). When the lines are arranged in a skew position, the proposed contact location estimation algorithm calculates the closest point of the lines.

When the manipulator makes contact with an edge of the environment (Fig. 2a), traces of the line model have one intersecting point, which is the contact location (Fig. 2b).

C = L1 ∩ L2 ∩ ... ∩ Li ∩ ... ∩ Ln (1)

where C is the intersection of the line traces, which is the contact location, and Li is the set of points on the line model at a certain time ti. Each line passes through a certain point Pi with position vector r⃗i = [xi, yi, zi] and direction vector m⃗i = [mxi, myi, mzi].

∀L = {(x, y, z) | (x − xi)/mxi = (y − yi)/myi = (z − zi)/mzi; i ∈ N} (2)

The parametric equation of a line in three-dimensional space is

r⃗ = r⃗i + ki m⃗i, ki ∈ R (3)

Because this is a two-dimensional problem, the contact location can usually be estimated with two line traces.

r⃗i + ki m⃗i = r⃗i+1 + ki+1 m⃗i+1 (4)

The contact location is obtained by substituting the parameter ki into Eq. (3).

[ki, ki+1]^T = [mxi, −mx,i+1; myi, −my,i+1; mzi, −mz,i+1]^(−1) [xi+1 − xi, yi+1 − yi, zi+1 − zi]^T (5)

(Since the matrix of direction components is 3 × 2, its inverse is taken in the least-squares sense, i.e., as a pseudoinverse.)
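As an illustrative sketch (the function name and the use of NumPy are assumptions, not from the paper), Eq. (5) can be solved in the least-squares sense to recover the contact location from two line traces; returning the midpoint of the closest points also handles the skew-line case described above:

```python
import numpy as np

def line_trace_intersection(r1, m1, r2, m2):
    """Estimate the contact location as the intersection of two line
    traces r = r_i + k_i * m_i, solving Eq. (4)-(5) in least squares.
    Returns the midpoint of the closest points on the two lines."""
    A = np.column_stack([m1, -np.asarray(m2)])   # 3x2 system from Eq. (5)
    b = np.asarray(r2) - np.asarray(r1)
    k, *_ = np.linalg.lstsq(A, b, rcond=None)    # pseudoinverse solution
    p1 = np.asarray(r1) + k[0] * np.asarray(m1)  # closest point on line 1
    p2 = np.asarray(r2) + k[1] * np.asarray(m2)  # closest point on line 2
    return 0.5 * (p1 + p2)                       # midpoint handles skew lines

# Two line traces that intersect at the contact point (1, 2, 0):
c = line_trace_intersection([0, 2, 0], [1, 0, 0], [1, 0, 0], [0, 1, 0])
```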

3.1.2 Generalized geometry: plane approach

To expand the algorithm to three-dimensional space, an expanded geometrical model of the proposed approach is explained for application to a more general contact model.

If the robot’s link has a boxy shape, the contact location can be obtained by applying a plane model to the link surface. Unless the plane models at each movement are parallel or coincident, they must have an intersecting line (Fig. 3).

First, a geometric plane is applied to the link surface. Then, the movement of the plane is traced according to the motion of the robot link. It should be noted that two consecutive planes generate an intersecting line. Therefore, the trace of the plane leaves intersecting lines including the contact location because of its constraint. Second, intersecting points can be calculated from the intersecting lines according to the motion of the robot link, as in the previous section.


Fig. 3 Relations of the plane models at each time. Unless the plane models at each movement are parallel or coincide, they must have an intersecting line

If the planar model has a pointed contact, the traces of the planar model define intersecting lines. The intersecting lines have one intersecting point, which is the contact location, because the movement of the plane is constrained by the contact location.

C = P1 ∩ P2 ∩ ... ∩ Pi ∩ ... ∩ Pn (6)

where C is the intersection of the plane traces, which is the contact location, and Pi is the set of points on the planar model at a certain time ti. The equation of a plane in three-dimensional space is

∀P = {(x, y, z) | ai x + bi y + ci z + di = 0; ai, bi, ci, di ∈ R} (7)

where ai, bi, ci, and di are the coefficients of each planar model. To estimate the contact location, the intersection of the planar models is calculated as follows:

[a1, b1, c1, d1; a2, b2, c2, d2; …; an, bn, cn, dn] [x, y, z, 1]^T = 0 (8)

where the first matrix on the left-hand side is the set of coefficients of the planes. The solution of this linear system is the intersection of the planes, i.e., the contact location.

When the rank of the first matrix on the left-hand side is greater than or equal to 3, the solution is one point. Therefore, three planes are usually required to estimate the contact location. However, when the rank of the matrix is less than 3, there are many solutions for the contact location, because the solution may be an intersecting line or a plane itself. For example, when the planar model has an edge contact, the rank of the matrix will be 2; then, the solution will be an intersecting line, and the robot may be in contact with the edge of the environment.

rank(A) ≥ 3 ⇒ point contact
rank(A) = 2 ⇒ line contact
rank(A) = 1 ⇒ surface contact
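The rank test and the solution of Eq. (8) can be sketched as follows; the helper name and the tolerance value are illustrative assumptions, not from the paper:

```python
import numpy as np

def classify_and_locate(planes, tol=1e-9):
    """Classify the contact from stacked plane coefficient rows [a, b, c, d]
    (Eq. (8)) and, for a point contact, solve for the intersection point."""
    A = np.asarray(planes, dtype=float)
    r = np.linalg.matrix_rank(A, tol=tol)
    if r >= 3:
        # Unique intersection point: solve a x + b y + c z = -d in least squares
        x, *_ = np.linalg.lstsq(A[:, :3], -A[:, 3], rcond=None)
        return "point contact", x
    return {2: "line contact", 1: "surface contact"}[r], None

# Three plane traces meeting at (1, 2, 3): x = 1, y = 2, z = 3
kind, pt = classify_and_locate([[1, 0, 0, -1], [0, 1, 0, -2], [0, 0, 1, -3]])
```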

Even though the robot contacts a pointed object, the rank can be less than 3, because the robot can move with a parallel motion or rotate about the same rotational axis. To avoid this uncertainty, the normal vectors of the plane have to be changed by

Fig. 4 Sequence of the contact location estimation algorithm with plane approach

an appropriate control strategy for the manipulator. This is why both the control strategy and the estimation algorithm are important; the two complement each other.

The entire procedure of the contact location estimation algorithm with the plane approach is shown in Fig. 4.

Depending on the geometry of the robot link, the proposed estimation algorithm can become more complex. If the robot has perfect knowledge of its geometrical shape, however, we can define the intersections at different times. This can be stated formally as Lemma 1 in the Appendix.

3.2 Control strategy for active sensing

In this framework, a control strategy for successful active sensing is described. First, the operational space for active sensing is illustrated, and then, the control structure of the entire system is demonstrated.

3.2.1 Operational space for active sensing

In the active sensing framework, the operational space has to be defined to control the contact force and the robot motion. In the example shown in Fig. 5, the contact force has to be applied along the normal direction n of the contact surface, and the desired motion has to be applied in its orthogonal space for the corresponding control point.

The linear velocity of the contact point is the sum of the linear velocity of the control point and the velocity induced by the angular velocity of the contact link.

vc = vpci + vp
   = Jvi q̇ + ωi × p
   = Jvi q̇ − p̂ Jωi q̇
   = (Jvi − p̂ Jωi) q̇ (9)

where vc and vpci are the linear velocities of the contact point and the control point, respectively. p is the vector from the control point of the link to the corresponding contact point.


Fig. 5 A point contact on a link. The vector n is a unit vector normal tothe contact surface, and the vector p is the vector from the control pointof the link to the corresponding contact point. The vector pci indicatesthe control point of the robot link

vp is the linear velocity of the vector p, and q is the joint configuration of the robot. The Jacobian matrix Jvi can be obtained by differentiating the position vector pci that locates the control point of link i with respect to the manipulator base.

Jvi = [ ∂pci/∂q1  ∂pci/∂q2  …  ∂pci/∂qi  0  0  …  0 ] (10)

The Jacobian matrix Jωi can be obtained from

Jωi = [ z1  z2  …  zi  0  0  …  0 ] (11)

where zi is a direction cosine along the z-axis at the ith link coordinate frame.

From Eq. (9), the Jacobian of the contact space can be obtained as follows:

Jc = [ I, −p̂; 0, I ] J (12)

where p is the vector from the control point of the link to the corresponding contact point, p̂ is its skew-symmetric matrix, and J is the Jacobian matrix of the contact link composed of Jv and Jω.
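A minimal sketch of Eq. (12), assuming NumPy and 3 × n Jacobian blocks Jv and Jω (function names are illustrative):

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix p̂ such that p̂ w = p × w."""
    x, y, z = p
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def contact_space_jacobian(Jv, Jw, p):
    """Shift the link Jacobian J = [Jv; Jw] to the contact point at offset p
    from the control point, per Eq. (12): Jc = [[I, -p̂], [0, I]] J.
    The top rows then give Jv - p̂ Jω, matching Eq. (9)."""
    J = np.vstack([Jv, Jw])
    T = np.block([[np.eye(3), -skew(p)],
                  [np.zeros((3, 3)), np.eye(3)]])
    return T @ J
```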

To achieve contact force control, the contact Jacobian (Park 2006), which is defined to handle the contact force in the operational space formulation, is used. It represents the contact force directions of the robot on a link at a certain point according to the desired contact situation.

The contact Jacobian is defined according to the estimated contact location of the robot. In other words, the contact Jacobian is obtained simultaneously with the estimated contact locations.

The contact Jacobian and motion Jacobian to control the robot in a contact situation can be composed from the modified Jacobian in the contact space (Fig. 6).

Fig. 6 Composing the contact Jacobian and motion Jacobian from the modified Jacobian in contact space. Ji is the Jacobian for the ith link of the robot. The ith row in a circle is selected as the contact Jacobian according to the contact normal direction

3.2.2 Control structure

The operational space formulation (Park and Khatib 2008; Khatib 1987) is used to design a motion/force controller as a decoupled dynamic equation for each space. With regard to the dynamics and control of the robot in contact, the joint torque vector, Γ, is composed of the torque for contact force control and the null space torque for motion control.

Γ = Jc^T Fc + Nc^T Jm^T Fm (13)

where the first term, Jc^T Fc, is the control torque for contact force control, and the second term, Nc^T Jm^T Fm, is the torque for motion control in the null space of the contact. To maintain contact with the environment, a contact force has to be applied at the contact location. The dynamics of the contact force is composed by projecting the robot dynamics into the operational space and using the estimated contact location.
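The torque composition of Eq. (13) can be sketched as below; note that this sketch substitutes a kinematic pseudoinverse-based null-space projector for the paper's dynamically consistent one, which would require the joint-space inertia matrix:

```python
import numpy as np

def contact_motion_torque(Jc, Jm, Fc, Fm):
    """Compose joint torques per Eq. (13): Γ = Jc^T Fc + Nc^T Jm^T Fm.
    Nc = I - pinv(Jc) Jc is a kinematic null-space projector (an
    assumption for this sketch, not the dynamically consistent one)."""
    n = Jc.shape[1]
    Nc = np.eye(n) - np.linalg.pinv(Jc) @ Jc  # null space of the contact
    return Jc.T @ Fc + Nc.T @ (Jm.T @ Fm)
```

With this projector, the motion term produces no velocity in the contact directions, so the contact force command is not disturbed by the motion command.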

The proposed algorithm is based on the geometrical model, and it requires an appropriate motion command to the robot. Figure 7 shows a link of a robot in point contact. First, when the link has a planar shape, the contact normal will always be in an orthogonal direction to the link. Furthermore, if the link has a curved surface, the contact normal is orthogonal to the tangential line at the contact point.

After obtaining the contact normal, an operational space needs to be constructed based on it. The contact normal becomes the direction that commands the contact force, and the motion can be commanded in its null space.

The contact locations or surfaces of the robot move together with the corresponding locations or surfaces of the environment. These contact conditions are regarded as constraints and are incorporated into the system dynamics. Therefore, the given torques at all joints can be used to compute the resulting accelerations and contact forces.

The operational coordinates are defined to be the displacements in the contact normal space. The contact Jacobian for these coordinates is


Fig. 7 The term n̂1 is a unit vector normal to the contact surface. The term p1 is the vector from the corresponding contact point to the control point of the link. a Contact situation of flat surface link with curved environment. b Contact situation of curved link with curved environment

Jc^l = (nc^l)^T J^l (14)

where J^l is the Jacobian for the control point of the lth link, and nc^l is the matrix that spans the contact normal space, composed of n̂1 and p1 shown in Fig. 7. To maintain contact, the control force has to be applied in the space of nc^l.

nc^l = [ n̂1; n̂1 × p1 ] (15)

where n̂1 is a unit vector normal to the contact surface and p1 is the vector from the control point of the link to the corresponding contact point.

It should be noted that the control point and contact point are two different points. Because robot control proceeds without the exact contact location being known, the contact Jacobian needs to be constructed according to the estimated contact location. Furthermore, the contact force is commanded toward the estimated contact location through the contact normal direction. On the other hand, the motion command is applied to the control point.

The estimation algorithm starts by controlling the orientation of the link. The intersection of the lines is calculated when the change in orientation with respect to each direction becomes larger than the estimation resolution. The approximate contour of the environment can be obtained by running the estimation control consecutively.

Figure 8 shows the block diagram of the overall control structure. The control sequence of the contact location estimation algorithm is as follows:

i) The contact location estimation algorithm runs when contact occurs. The initial contact point is estimated from the random motion (see Sect. 3.3) of the link.

ii) Define the contact space and motion space as explained in the context.

iii) Reconstruct the contact Jacobian of the contact link according to the estimated contact location. The contact Jacobian is computed by Jc^l = (nc^l)^T J^l, and the motion Jacobian is constructed in its null space.

iv) Given contact locations, the desired force and motion are commanded to the robot. Force control is used to maintain the contact, and motion control in the null space is used to estimate the contact location subsequently.

Fig. 8 a Block diagram of the contact estimation control framework for a manipulator. The contact Jacobian and the motion Jacobian are reconstructed by the estimated contact location. b Control sequence of the contact location estimation algorithm

3.3 Effectiveness of proposed algorithm

At the beginning of the algorithm, the force controller applies a force to a certain point on the link to maintain the contact; however, this point may not be the real contact location because it has not yet been estimated. Then, unintended rotational motions are produced because the applied force creates a moment on the contact link. In this study, these unintended motions are called random motions. The contact location of the link can be estimated from such random motions through the proposed estimation algorithm.

Though the proposed active sensing algorithm includes a control strategy for contact estimation, unintended random motions can occur against the designed controller because of the precision of the estimation. To achieve robustness against such uncertainty, these random motions have to be analyzed in our estimation algorithm.

The frame of the contact link {L} can be represented by a rotation matrix R describing its orientation relative to the base frame {0} and a position vector P defining the origin


of frame {L} in the global coordinate system.

{L} = { ^0_l R, ^0 P_lORG } (16)

where {L} is the frame of the contact link, ^0_l R is the rotation matrix relative to the base frame, and ^0 P_lORG is the position vector from the origin of the base frame.

When contact occurs, the random motion causes arbitrary movements of the contact link {L}. We denote the rotation matrix of the moving contact link {L} as ^0_l R_t according to time. Then, the variation of the rotation transformation due to the random motion is given as follows.

R_offset = ^0_l R_t2 (^0_l R_t1)^(−1) (17)

where R_offset is a rotation transformation, and ^0_l R_t1 and ^0_l R_t2 are rotation matrices at times t1 and t2, respectively.

Though the axis of rotation is a general direction rather than one of the unit directions, any orientation can be represented through proper axis and angle selection (Bottema and Roth 1979). Therefore, R_offset can be represented by an equivalent axis K̂ = [kx, ky, kz]^T and an equivalent angle θ.

R_offset = [ r11, r12, r13; r21, r22, r23; r31, r32, r33 ]
         = [ 1 − 2ε2² − 2ε3², 2(ε1ε2 − ε3ε4), 2(ε1ε3 + ε2ε4); 2(ε1ε2 + ε3ε4), 1 − 2ε1² − 2ε3², 2(ε2ε3 − ε1ε4); 2(ε1ε3 − ε2ε4), 2(ε2ε3 + ε1ε4), 1 − 2ε1² − 2ε2² ] (18)

kx = (r32 − r23) / (2 sin θ),  ky = (r13 − r31) / (2 sin θ),  kz = (r21 − r12) / (2 sin θ) (19)

θ = 2 arccos(ε4) (20)

ε4 = (1/2) √(1 + r11 + r22 + r33) (21)

where K̂ = [kx, ky, kz]^T is the equivalent axis and θ is the equivalent angle.

The contact location is calculated by the proposed algorithm when θ exceeds the estimation resolution, θres. Because any random motion can be expressed by this equivalent angle-axis representation (Bicchi et al. 1993), the proposed algorithm can estimate the contact location under any random motions of the link.
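The axis-angle extraction of Eqs. (17)-(21) can be sketched as follows (assuming 0 < θ < π so that sin θ ≠ 0; the helper names are illustrative):

```python
import numpy as np

def equivalent_axis_angle(R1, R2):
    """Extract the equivalent axis K̂ and angle θ of the motion between
    two link orientations, per Eqs. (17)-(21). Assumes 0 < θ < π."""
    R = R2 @ R1.T                                # R_offset, Eq. (17)
    e4 = 0.5 * np.sqrt(1.0 + np.trace(R))        # Eq. (21)
    theta = 2.0 * np.arccos(np.clip(e4, -1.0, 1.0))  # Eq. (20)
    k = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))  # Eq. (19)
    return k, theta

def Rz(a):
    """Rotation about the z-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Link rotated by 0.3 rad about z between t1 and t2; if θ > θres, the
# estimation update would be triggered.
k, theta = equivalent_axis_angle(Rz(0.1), Rz(0.4))
```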

Hence, the proposed algorithm can analyze any motion, and both passive and active sensing are possible. In this study, though the control strategy is designed for a static environment, the proposed contact location estimation algorithm can be applied when the contacting environment is moving slowly. If the robot is controlled in a compliant manner, the proposed algorithm still runs successfully when the robot is moved by the motion of the surrounding environment. Although it is not initially designed for this purpose, this passive sensing case is verified through a simple experiment, as described in Sect. 4.2.3.

4 Experimental validation

Experiments have been conducted to verify and demonstrate the contact estimation algorithm and the active sensing control framework. To validate the proposed algorithm in various situations, both the middle of the robot’s link and the end-effector of the robot are used to estimate the contact location. Then, the proposed active sensing has been performed to test a convex contact situation between the robot and a spherical object. Additionally, the proposed estimation algorithm has been verified for passive movements in the pointed contact situation of the robot under an external force.

4.1 Experimental setup

Roboticslab (Simlab 2010), a physics-based simulation program, is used for verifying the proposed approach. Roboticslab provides not only a simulation environment but also a real-time control module and a programmable interface. Furthermore, the 7-DOF SAP-1 manipulator is used for experimental demonstration in contact situations with an edge of the environment (Fig. 9a) and a spherical object (Fig. 9b). Its possible contact and motion tasks were limited by its kinematics and degrees of freedom.

Fig. 9 Experimental setup with the SAP-1 manipulator. Contact occurs between the 4th link of the manipulator and an edge of the environment (a) and the spherical object (b)

123


In the experiments shown in Fig. 9, the 4th link of the robot contacts an edge of the environment, and therefore, 4 DOFs are used for the proposed active sensing. The active sensing is performed using the data from the joint encoders and the robot geometry. No force/torque sensor or tactile sensor is used in these experiments. The SAP-1 manipulator was connected to a PC through a 1394 PCI board, and the servo rate of the controller for the robot was 500 Hz.

4.2 Active sensing under several contact situations

4.2.1 Estimation of contact location in edge contact situation

In this section, the proposed active sensing strategy is demonstrated with a point contact situation between the 4th link of the manipulator and the edge of the environment (Fig. 9a).

Fig. 10 Estimated contact locations (x and z directions, position [m] versus time [s]) in a point contact situation between the 4th link of the manipulator and an edge of the environment (Fig. 9a)

Because there exist 4 DOFs at the 4th link, 1 DOF is used for maintaining the contact and the other 3 DOFs are used for orientation control.

The composition of the contact Jacobian is

$$J^4_c = (n^4_c)^T J^4 \tag{22}$$

$$J^4_m = \begin{pmatrix} J^4_4 & J^4_5 & J^4_6 \end{pmatrix}^T \tag{23}$$

where the contact Jacobian, J^4_c, is formed according to the contact normal direction and the motion Jacobian, J^4_m, is formed to control the orientation of the 4th link of the manipulator; J^4_i is the i-th row of the local Jacobian at the 4th link.
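Under the common convention that a local Jacobian stacks three linear-velocity rows above three angular-velocity rows, the composition of Eqs. (22)–(23) can be sketched as follows; the row ordering and the function name are assumptions for illustration, not specified by the paper.

```python
import numpy as np

def compose_jacobians(J_link, n_contact):
    """Split a 6xN local link Jacobian into the contact and motion Jacobians.

    J_link    : 6xN Jacobian (rows 0-2 linear, rows 3-5 angular -- assumed layout)
    n_contact : 3-vector contact normal expressed at the link
    """
    J_v = J_link[:3, :]                   # linear-velocity rows
    J_c = n_contact.reshape(1, 3) @ J_v   # Eq. (22): velocity along the contact normal
    J_m = J_link[3:, :]                   # Eq. (23): angular rows J4, J5, J6
    return J_c, J_m

J_link = np.arange(42, dtype=float).reshape(6, 7)  # dummy 6x7 Jacobian for a 7-DOF arm
n = np.array([0.0, 0.0, 1.0])                      # contact normal along z
J_c, J_m = compose_jacobians(J_link, n)
```

With the normal along z, the 1xN contact Jacobian reduces to the third linear row, and the motion Jacobian is the 3xN angular block used for orientation control.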

The estimated contact location points are plotted in Fig. 10. Each position indicates the displacement from the base of the manipulator to the estimated contact location. The bouncing errors in the result could be filtered out using any filter; however, we did not employ a filter here in order to observe the pure performance of the proposed algorithm.
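As one example of the kind of filter mentioned above (none is applied in Fig. 10), a sliding-window median would suppress the isolated bouncing samples; this is a generic sketch, not a filter from the paper.

```python
from statistics import median

def median_filter(samples, window=5):
    # Sliding-window median; robust to isolated bounces in the estimates.
    # Windows are truncated at the sequence boundaries.
    half = window // 2
    return [median(samples[max(0, i - half):i + half + 1])
            for i in range(len(samples))]

# A single bouncing outlier in the estimated z positions is rejected
z_est = [0.20, 0.20, 0.90, 0.20, 0.20]
z_filtered = median_filter(z_est)
```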

Then, the proposed algorithm was applied repeatedly along several axes in the same contact situation. The result in Fig. 11 shows the edge shape of the contacting environment through the accumulation of the estimated three-dimensional contact point information.

4.2.2 Active sensing with convex contact situation

Active sensing in a convex contact situation is demonstrated between the 4th link of the manipulator and the spherical object (Fig. 9b). The control strategy was similar to that described in Sect. 4.2.1, the only difference being the contact environment. The result is plotted in three-dimensional space, and the aggregation of contact locations indicates part of the spherical object (Fig. 12).

Fig. 11 Active sensing is performed on several axes in a contact situation with an edge. The aggregation of the contact locations can represent the contour of the object. a Estimated contact locations in the x–z plane. b Estimated contact locations in the y–z plane. c Estimated contact locations in a more general view


Fig. 12 Active sensing is conducted in a convex contact situation (Fig. 9b). The aggregation of the contact locations can represent the contour of the object. a Estimated contact locations in the x–z plane. b Estimated contact locations in the y–z plane. c Estimated contact locations in a more general view

Fig. 13 Simulation experiments. a Contact occurs between the SAP-1 manipulator and the spherical object in the simulation world. b Green circles indicate the contact locations estimated through the active sensing algorithm and the ends of the red lines indicate the real contact locations (Color figure online)

The proposed active sensing strategy is also demonstrated with a 7-DOF manipulator and a spherical object in the simulation (Fig. 13a).

Figure 13b shows the performance of the proposed active sensing algorithm. Green circles indicate the contact locations estimated through the active sensing algorithm, and the ends of the red lines indicate the real contact locations. It can be observed that the estimated contact locations are very close to the real contact locations.

First, the proposed active sensing strategy has been applied consecutively along the object surface. As a result, the contour of the circular object can be estimated through the aggregation of the estimated contact locations (Fig. 14).

The red circles indicate the contact locations estimated through the proposed algorithm, and the gray '+' marks indicate the locations estimated from the data obtained from the force/torque sensor. Although the estimation of the proposed algorithm appears fairly accurate, some errors are inevitable. Because a curved surface always has only one point of contact with each line, the calculated points lie apart from the actual point of contact.

From Fig. 15, the error can be modeled as follows:

$$r_{est} = \frac{r_{obj}}{\cos\frac{\theta}{2}} \tag{24}$$

Fig. 14 Comparison of the estimated contact locations from the simulation result with 7 DOFs (contact locations estimated through active sensing, through the force/torque sensor, and the real contour of the environment; the environment is a sphere of radius 0.05858 m and the estimation resolution is 0.01 rad). The center of the circular object is (−0.2414, 0.25858) and its radius is 0.05858 m

$$d_{error} = r_{est}\sin\frac{\theta}{2} \tag{25}$$

where r_obj is the radius of a circular object, r_est is the estimated radius of the object, d_error is the distance between the real and the estimated contact locations, and θ is the


Fig. 15 Errors from the contact estimation algorithm when the obstacle has a curved surface and the robot link has a flat surface

estimation resolution. The estimation resolution is an offset of the link orientation used to determine whether the estimation algorithm should be applied.

From Eq. 25, the error is inversely proportional to the curvature of the environment. In general, the proposed active sensing estimates the contact locations accurately within the following error boundary:

$$\varepsilon = \frac{1}{\lim_{\Delta s \to 0}\left|\frac{\Delta\theta}{\Delta s}\right|}\tan\frac{\theta}{2} \tag{26}$$

$$\phantom{\varepsilon} = \left|\frac{ds}{d\theta}\right|\tan\frac{\theta}{2} \tag{27}$$

where the first term on the right-hand side indicates the radius of curvature of the environment, and θ is the estimation resolution. The experimental error shown in Fig. 14 is 5.8582 × 10⁻⁴ m. Theoretically, the errors converge to zero when the robot contacts an edge or tip of the environment.
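Under these relations, the error bound of Eq. (27) for a given radius of curvature and estimation resolution can be computed directly; this is a sketch of the paper's formula with illustrative names, not the authors' implementation.

```python
import math

def contact_error_bound(radius_of_curvature, theta_res):
    # Eq. (27): epsilon = |ds/dtheta| * tan(theta/2), where |ds/dtheta| is the
    # radius of curvature of the environment and theta_res is the estimation
    # resolution [rad].
    return radius_of_curvature * math.tan(theta_res / 2)

# Sphere used in Fig. 14 (radius 0.05858 m, resolution 0.01 rad)
sphere_bound = contact_error_bound(0.05858, 0.01)

# An edge or tip has zero radius of curvature, so the bound vanishes
edge_bound = contact_error_bound(0.0, 0.01)
```

The vanishing bound for the edge case is consistent with the statement above that the errors converge to zero when the robot contacts an edge or tip of the environment.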

The errors of the result shown in Fig. 14 are plotted in Fig. 16. The dotted line indicates the theoretically ideal errors of the proposed active sensing technique. Because the acceleration of the manipulator creates inertial effects, the estimation from the measured force/torque sensor data is not accurate, whereas the contact locations are estimated accurately within the error boundary of the algorithm using the proposed approach.

Then, the proposed sensing strategy is applied along two axes to extend the concept to three-dimensional space (Fig. 17). The outliers shown in the upper right of Fig. 17b can also be detected in Fig. 17c. These outliers may occur due to peculiar surface conditions; that is, the link sometimes loses contact while sliding on the surface because of the surface condition. The other outliers apart from the point group could be filtered out using any filter, although in this result, we did

Fig. 16 A comparison of the errors (error magnitude [m] versus position in the x direction [m]) between the proposed active sensing and the calculation from the force/torque sensor; the environment is a sphere of radius 0.05858 m and the estimation resolution is 0.01 rad. The dotted line indicates the theoretically ideal errors of the proposed active sensing technique

not apply any filter in order to observe the pure performanceof the proposed algorithm.

4.2.3 Passive contact estimation with plane approach

The proposed generalized plane approach is verified for a pointed contact situation through a simulation (Fig. 18). Contact occurs between the end-effector of the manipulator and a pointed object. Then, the performance of the plane approach was verified under random motions.

In this experiment, the manipulator was controlled compliantly, and therefore, random motions occurred only due to the external force in the simulator.

Figure 19 shows the performance of the proposed active sensing algorithm with the plane approach. It can be observed that the estimated contact locations are close to the real contact locations indicated by the dotted line. This result indicates that the proposed algorithm still runs successfully when the robot is moved by external forces. Both the performance of the proposed sensing algorithm and the capacity for passive sensing were demonstrated in this experiment.

5 Conclusion

An active sensing method is proposed to overcome the limitations of the sensing capability of a robot in terms of the sense of touch. Because the proposed contact location estimation algorithm can be applied as a non-sensor-based estimation technique, contact sensing can be extended to the sensorless parts of robots. Moreover, it can be used together with sensors complementarily. Robots can then utilize the contacts rather


Fig. 17 Active sensing is performed along two axes in a contact situation with a sphere. The aggregation of the contact locations can represent the contour of the object. a Estimated contact locations in the x–z plane. b Estimated contact locations in the y–z plane. c Estimated contact locations in a more general view

Fig. 18 Active sensing in a point contact situation between the end-effector and a pointed object

than avoiding them and perform tasks in contact situations that are undetected by the vision sensors.

The contact location estimation algorithm proposed in this study depends on the geometry of the robot body and joint angles. Therefore, the proposed algorithm is not affected by the nonlinear dynamics of the manipulator.

A compliant motion control scheme is used for active sensing. The estimation of the contact location stabilizes the control framework, and the control framework makes the estimation accurate. In other words, the proposed estimation algorithm and the control strategy work complementarily. The performance of the proposed approach has been demonstrated on a physical robot and in a simulation environment. The proposed algorithm estimates the contact location not only actively but also passively through motion analysis.

Further research will focus on the application of active sensing to multi-contact situations of a robot. To control the robot in such situations, a robot with a sufficiently high number of degrees of freedom should be used for this advanced research. Then, the geometrical algorithm will be extended to more complicated polyhedral geometry using polygonal modeling.

Fig. 19 Contact estimation algorithm was applied passively in a pointed contact situation (Fig. 18) using the plane approach. a–c indicate the estimated contact locations along the x-, y-, and z-axes, respectively, compared with the real contact locations


Acknowledgments An earlier version of this paper was presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2012 workshop on "Advances in tactile sensing and touch-based human-robot interaction." This research was supported by the Ministry of Knowledge Economy, Korea, under the Industrial Foundation Technology Development Program supervised by the Korea Evaluation Institute of Industrial Technology (No. 10038660, Development of the control technology with sensor fusion based recognition for dual-arm work and the technology of manufacturing process with multi-robot cooperation), and by the Global Frontier R&D Program on Human-centered Interaction for Coexistence funded by the National Research Foundation of Korea grant funded by the Korea Government (MSIP) (No. 2011-0032014).

6 Appendix

Lemma 1 Let ℝ³ denote the set of all ordered 3-tuples of real numbers such as (x, y, z). We call ℝ³ a Euclidean space of dimension 3 and the ordered 3-tuples points. A geometric object in ℝ³ is a nonempty subset of ℝ³. Therefore, a geometric object in ℝ³ is a nonempty set of points in ℝ³.

A transformation of ℝ³ is a 1:1 correspondence of ℝ³ with itself that is continuous and has a continuous inverse. A transformation that preserves length is called an isometry.

Let A and B be two geometric objects in ℝ³. A continuous map F_t : ℝ³ × [0, 1] → ℝ³ is defined to be an ambient isotopy taking A to B if F₀ is the identity map (F₀(A) = A), each map F_t is an isometry from ℝ³ to itself, and F₁(A) = B. If there is an ambient isotopy taking A to B, then A and B are said to be ambient isotopic.

Now, if we have a geometric object A and an ambient isotopy F_t : ℝ³ × [0, 1] → ℝ³, then A is continuously transformed to F₁(A) as time goes from t = 0 to t = 1. Hence, A ∩ F₁(A) is determined by A and F_t. Therefore, when A ∩ F₁(A) is not empty, we can determine whether a point in ℝ³ is contained in A ∩ F₁(A).

References

Bicchi, A., Salisbury, J. K., & Brock, D. L. (1993). Contact sensing from force measurements. International Journal of Robotics Research, 12, 249–262.
Bottema, O., & Roth, B. (1979). Theoretical kinematics (Dover books on engineering series). New York: Dover.
Chiaverini, S., & Sciavicco, L. (1993). The parallel approach to force/position control of robotic manipulators. IEEE Transactions on Robotics and Automation, 9(4), 361–373.
De Luca, A., & Mattone, R. (2005). Sensorless robot collision detection and hybrid force/motion control. Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA) (pp. 999–1004).
García, A. H., Feliu, V. B., & Somolinos, J. A. S. (2003). Experimental testing of a gauge based collision detection mechanism for a new three-degree-of-freedom flexible robot. Journal of Field Robotics, 20(6), 271–284. http://www.odysci.com/article/1010112993115945.
Hebert, P., Hudson, N., Ma, J., & Burdick, J. (2011). Fusion of stereo vision, force-torque, and joint sensors for estimation of in-hand object location. IEEE International Conference on Robotics and Automation (ICRA) (pp. 5935–5941).
Hirukawa, H., Kajita, S., Kanehiro, F., Kaneko, K., & Isozumi, T. (2005). The human-size humanoid robot that can walk, lie down and get up. International Journal of Robotics Research, 24(9), 755–769.
Howe, R. D. (1993). Tactile sensing and control of robotic manipulation. Advanced Robotics, 8, 245–261.
Khatib, O. (1987). A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal of Robotics and Automation, 3(1), 43–53.
Lefebvre, T., Bruyninckx, H., & Schutter, J. D. (2001). Polyhedral contact formation modeling and identification for autonomous compliant motion. IEEE Transactions on Robotics and Automation, 19, 2003.
Mason, M. T. (1981). Compliance and force control for computer controlled manipulators. IEEE Transactions on Systems, Man and Cybernetics, 11(6), 418–432.
Morinaga, S., & Kosuge, K. (2003). Collision detection system for manipulator based on adaptive impedance control law. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (Vol. 1, pp. 1080–1085).
Ogura, Y., Aikawa, H., Shimomura, K., Morishima, A., ok Lim, H., & Takanishi, A. (2006). Development of a new humanoid robot WABIAN-2. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (pp. 76–81).
Park, J. (2006). Control strategies for robots in contact. Ph.D. thesis, Stanford University.
Park, J., & Khatib, O. (2008). Robot multiple contact control. Robotica, 26(5), 667–677.
Petrovskaya, A., Park, J., & Khatib, O. (2007). Probabilistic estimation of whole body contacts for multi-contact robot control. IEEE International Conference on Robotics and Automation (ICRA) (pp. 568–573).
Ponce Wong, R., Posner, J., & Santos, V. (2012). Flexible microfluidic normal force sensor skin for tactile feedback. Sensors and Actuators A: Physical, 179, 62–69.
Schutter, J. D., Bruyninckx, H., Dutre, S., Geeter, J. D., Katupitiya, J., Demey, S., et al. (1999). Estimating first order geometric parameters and monitoring contact transitions during force controlled compliant motion. International Journal of Robotics Research, 18, 1161–1184.
Sentis, L., Park, J., & Khatib, O. (2010). Compliant control of multicontact and center-of-mass behaviors in humanoid robots. IEEE Transactions on Robotics, 26(3), 483–501.
Simlab. (2010). Roboticslab. http://www.rlab.co.kr/.
Suwanratchatamanee, K., Matsumoto, M., & Hashimoto, S. (2010). Robotic tactile sensor system and applications. IEEE Transactions on Industrial Electronics, 57(3), 1074–1087.


Hooman Lee received the B.S. degree in Bio System Engineering from Seoul National University and the M.S. degree in Intelligent Convergence Systems from Seoul National University, Seoul, Korea, in 2009 and 2011, respectively. He is currently a researcher in the Intelligent Robot Control Research team, Department of Robot/Cognitive convergence, Electronics and Telecommunications Research Institute (ETRI). His main research areas are contact consistent control, compliant motion control of the robot, and human-robot interaction.

Jaeheung Park received the B.S. and M.S. degrees in aerospace engineering from Seoul National University, Korea, in 1995 and 1999, respectively, and the Ph.D. degree in aeronautics and astronautics from Stanford University, US, in 2006. From 2006 to 2009, he was a post-doctoral researcher and later a research associate at the Stanford Artificial Intelligence Laboratory. From 2007 to 2008, he worked part-time at Hansen Medical Inc., a medical robotics company in the US. He is currently an associate professor in the Graduate School of Convergence Science & Technology at Seoul National University, Korea. His research interests lie in the areas of robot-environment interaction, contact force control, robust haptic teleoperation, multicontact control, whole-body dynamic control, biomechanics, and medical robotics.
