

EUROGRAPHICS 2018 / D. Gutierrez and A. Sheffer (Guest Editors)

Volume 37 (2018), Number 2

Aura Mesh: Motion Retargeting to Preserve the Spatial Relationships between Skinned Characters

Taeil Jin, Meekyoung Kim, and Sung-Hee Lee
{jin219219,koms1701,sunghee.lee}@kaist.ac.kr

Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

Figure 1: Our method can retarget an interaction motion to various skinned characters while preserving its interaction semantics. From the left: input interaction motion, individual motion retargeting that loses interaction semantics, and our results for three different pairs of characters.

Abstract
Applying motion-capture data to multi-person interaction between virtual characters is challenging because one needs to preserve the interaction semantics while also satisfying the general requirements of motion retargeting, such as preventing penetration and preserving naturalness. An efficient means of representing interaction semantics is by defining the spatial relationships between the body parts of characters. However, existing methods consider only the character skeleton and thus are not suitable for capturing skin-level spatial relationships. This paper proposes a novel method for retargeting interaction motions with respect to character skins. Specifically, we introduce the aura mesh, which is a volumetric mesh that surrounds a character's skin. The spatial relationships between two characters are computed from the overlap of the skin mesh of one character and the aura mesh of the other, and then the interaction motion retargeting is achieved by preserving the spatial relationships as much as possible while satisfying other constraints. We show the effectiveness of our method through a number of experiments.

CCS Concepts
• Computing methodologies → Animation;

1. Introduction

Existing motion-retargeting methods successfully solve the problem of adapting the motion of a single character to characters of different shapes or topologies, or to different terrains. In comparison, retargeting the interaction motion between two characters to different target characters remains a challenging problem because the resulting motions should preserve the meaning of the interaction as well as achieve naturalness, but it is often not clear how to identify and express the semantics of the interactions. Some researchers have proposed efficient methods by which to solve this, often by representing the spatial relationships between interacting entities [HKT10, KPBL16], but these methods consider only character skeletons and thus may require post-processing to refine the interaction motions for the corresponding skinned characters.

In this paper, we propose a novel motion-retargeting method for characters with various body shapes. We assume that meaningful

© 2018 The Author(s). Computer Graphics Forum © 2018 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.


Jin et al. / Aura Mesh: Motion Retargeting to Preserve the Spatial Relationships between Skinned Characters

interaction between a character and an object or a partner character occurs in a space within a certain vicinity of the character, and we refer to this space as the interaction space. In our method, the interaction space is modeled with the aura mesh, a tetrahedral mesh that surrounds a character and occupies the extent of the interaction space. Every character has its own aura mesh, and the interaction between two characters is expressed with respect to the collision between the skin mesh of one character and the aura mesh of the other. In addition, we construct the bijective mapping that specifies the correspondence between the points inside the aura meshes of the source and target characters. Therefore, the interaction semantics can be preserved during the motion-retargeting step by transferring the collision points to the corresponding positions in the aura meshes of the target characters.

Aura-mesh-based retargeting has several advantages. It can represent general interactions ranging from contact motion to non-contact motion occurring in the vicinity of characters. The retargeted motion is truthful with respect to skin-level contact and collisions, and thus post-processing, often required by skeleton-based retargeting schemes, is unnecessary. The aura mesh can be generated offline for each character because it only depends on the default shape of a character, and it can be deformed very rapidly using the same skinning algorithm used for the underlying skin mesh. The configuration of the aura mesh is independent of any particular interaction scenario, and it is therefore not necessary to know the interaction motion beforehand when constructing the aura mesh. Any type of interaction motion can be expressed in the same manner, i.e., as collisions inside the aura mesh. Thus, our method is advantageous for real-time motion retargeting, where interaction motions cannot be prescribed. Figure 1 presents examples showing that our method can retarget interaction motions to other characters of different sizes and BMIs.

This paper demonstrates that our aura-mesh-based method is useful for retargeting interaction motions of various characters with different body shapes. The balance of this paper is as follows. After discussing work related to our method in section 2, we provide an overview of our method in section 3. The aura mesh is introduced in section 4, followed by an explanation of how the skin-level spatial relationship is found in section 5 and the retargeting algorithm in section 6. Section 7 reports our experiments, and section 8 concludes the paper.

2. Related Work

Motion retargeting increases the reusability of motion-capture data by adapting a character's motion to different environments or characters. Due to its importance, motion retargeting has been researched widely in relation to computer graphics. Many existing methods focus on retargeting motions to characters of different sizes [MBBT00, PML∗09, MXH∗10] or topologies [WP09, YAH10, SOL13], or to various terrains [CKHL11, WMC11, HKS17]. Most of these methods aim to modify the input motions to make them plausible for a changed character or terrain, and the complexity of the input motions in terms of interaction is relatively low. As input motions contain more intricate meanings, such as complex multi-person interactions, the motion-retargeting problem becomes more challenging because the resulting motion should not only look realistic but should also preserve the meaning of the movement.

Most existing methods address the motion-retargeting problem with an optimization-based framework in which desiderata for new motions are described in terms of the joint angles or end-effector configurations [LS99, CK99]. Under this framework, interaction with external entities has been expressed rather straightforwardly in terms of contact between body parts. Gleicher [Gle98] retargets a two-character dance motion by specifying the contact constraint of two grasping hands. Other methods [LHP06, XZY∗07, SZT∗07] use penetration constraints to avoid inappropriate instances of penetration with skeleton bodies.

However, it is often the case that interaction motions do not involve direct contact, such as "character A's feet swing over character B's back" or "character A jumps into character B's side." Such motions pose greater challenges when attempting to represent the interaction semantics, and efficient methods for this are necessary. This paper deals with the problem of representing both contact and non-contact interactions and retargeting such interactions to different characters.

Representing the spatial relationship between interacting entities is an efficient means of modeling non-contact interactions. One well-known method for this is the interaction mesh developed by Ho et al. [HKT10]. The interaction mesh is constructed by linking the characters' joint positions with Delaunay tetrahedralization. Motion retargeting is performed by applying the Laplacian mesh editing technique. This method preserves the interaction semantics very effectively for both contact and non-contact interactions, and it has evolved to retargeting interaction motions in various scenarios [NK12, HCKL13, AAKC13, VGB∗14b, TAAP∗16]. Another interesting approach to describing spatial relationships is the egocentric parameterization obtained by solving electrostatics for a virtually charged conducting object surface [WSSK13]. The resulting object-centric curvilinear coordinate system around an object can be used for various applications, including creating interaction motions relative to the object. Our method also represents the spatial relationships between characters, but a distinguishing characteristic of our method is that it focuses on retargeting motions from the viewpoint of the skin. The aura mesh wraps the character's skin vertices, and we can determine the spatial relationships of interaction motions with respect to skin surfaces by examining the collisions between the aura mesh and the partner's skin mesh. The advantage of this method shines especially when the target characters have body mass indexes very different from that of the source character. For example, if a target character is of the same height as the source character but weighs more, retargeted motion that considers only the skeletal motion will likely lead to skin penetration. In contrast, our method naturally avoids this artifact. The aura mesh is constructed for each character independently of the interacting entities. Therefore, once built, the aura mesh can be used for retargeting interaction motions against any external object. This is in contrast to the interaction mesh, which must be constructed differently for each interaction motion.

By applying the spatial relationship descriptor of [AAKC13] to the human body surface, Molla et al. [MDB17] defined egocentric coordinates that encode the whole posture space independently


Figure 2: Overview of our method

from the character sizes, and used them to transfer motions with self-contacts to characters of different sizes while preserving the semantics of the postures. We share the same goal with them, but our interest further extends to interaction motions between characters. Kim et al. [KPBL16] proposed a spatial map method that constructs tetrahedral meshes surrounding objects in order to define correspondences between the surrounding spaces around the objects. The spatial map is then used to retarget object-human interaction motions toward different objects. Our method is similar to the spatial map method in that it constructs a tetrahedral mesh surrounding an object for motion retargeting. However, whereas the spatial map is built for static objects such as furniture, the proposed aura mesh allows for dynamic deformation and thus can be used for characters.

Besides motion-retargeting approaches, researchers have developed other techniques to create interaction motions. The method of [LL06] finds the optimal action of a single person for a given state and uses the learned policy to generate interaction motions between two persons (boxers). Won et al. [WLO∗14] proposed a framework to create interaction motions from a set of pre-recorded interaction motions using user-defined high-level descriptions called the event graph.

In addition to motion retargeting, the spatial relationships among objects have been researched for other purposes. Zhao et al. developed the interaction bisector surface (IBS), a new feature for representing a scene with multiple objects using a Voronoi diagram [ZWK14]. Using the IBS, they can efficiently represent and classify complicated scenes with multiple objects. By improving the IBS features, Zhao et al. [ZCK17] analyzed and represented the

Figure 3: A template skin mesh (top left) and its interaction boundary (top right). The template aura mesh is constructed as a tetrahedral mesh between the skin surface and the interaction boundary (bottom).

part-level interaction semantics between objects (including human characters), which is then used to generate new scenes with similar interaction semantics. Other recent work analyzes and represents the functionalities of objects by focusing on the spatial relationship between an object and its surrounding space [HZvK∗15, HvKW∗16].

3. Overview

Figure 2 shows an overview of our method, which consists of a pre-processing stage and a run-time stage. In the pre-processing stage, aura meshes are constructed for the source and target characters by fitting a template aura mesh to the body shape of each character. Bounding geometries for the broad-phase collision test are also defined around the body parts.

In the run-time stage, the motions of the source characters are retargeted to the target characters. To do this, the interaction between the two source characters is analyzed by detecting collisions between the aura mesh of one character and the skin mesh of the other character. The collision points are obtained for each aura mesh, and these are used to preserve the interaction semantics between the target characters. Using the correspondences between aura meshes, desired collision points are specified with regard to the aura meshes of the target characters. Motion retargeting is achieved by solving an inverse kinematics problem that satisfies the desired collision points as well as the naturalness of the motions.

4. Aura Mesh

While there could be many approaches by which to define the extent of the interaction space, in our experiment we simply set the external boundary of the interaction space as a surface at a certain distance from the skin. To obtain the external boundary surface, we initially construct a uniform grid around the template character


Figure 4: Corresponding points in the interaction spaces of two characters. Left: a straight line (red) around a male figure is mapped to a curve (green) around a female figure. The vertices of the aura meshes are shown in yellow. Right: corresponding points identified by the aura mesh, marked with the same color among different characters with various poses.

Figure 5: Sub-meshes of the aura mesh (left) and bounding cylinders (right)

mesh and calculate the signed distance of each grid point from the character surface. We then extract a surface mesh by connecting points with the same signed distance value. During the retargeting process, any object outside the aura mesh is considered not to be interacting with the character. After creating the boundary mesh, we construct a template aura mesh between the skin surface of a template character model and the boundary mesh using the TetGen software package (http://wias-berlin.de/software/tetgen/). Figure 3 shows the template character mesh and its template aura mesh.

In our experiment, all of the characters are created from the same template mesh and thus share the same mesh topology. Specifically, we used character models from the Dyna dataset [PMRMB15]. We create an aura mesh for a character by deforming the template aura mesh so as to minimize the deformation energy while ensuring that the vertices of the internal boundary coincide with those of the character surface. To do this, we use the as-rigid-as-possible energy proposed by [CPSS10]. The topologies of the constructed character aura meshes are identical; thus, finding corresponding points between two aura meshes is as simple as identifying the corresponding target vertices from their indices and using the same barycentric coordinates. This one-to-one mapping provides a reasonable correspondence between the interaction spaces of two characters. Figure 4 shows examples of corresponding points between the interaction spaces of different characters.

Figure 6: Snapshots of an animated aura mesh

Bounding geometry For efficient collision detection, we subdivide the template aura mesh into a set of tetrahedral meshes according to the body parts and construct bounding geometries for each sub-mesh. The subdivision step is performed by segmenting the aura mesh vertices according to the most influential body part, determined as the bone with the highest skinning weight. In our experiment, we created a total of 16 sub-meshes, as shown in Fig. 5 (left). For each sub-mesh, we constructed bounding cylinders for a broad-phase collision test. Two types of bounding cylinders are created: one covering the skin surface, and the other covering the entire sub-mesh (Fig. 5, right). The former is used for the collision check of the skin mesh, while the latter is for the collision check of the aura mesh. Note that the bounding geometries should be set large enough to keep the deforming skin and aura mesh inside.

Aura mesh animation When a character moves, the aura mesh must deform accordingly to fit the shape of the deformed skin surface. To realize this, the aura mesh is deformed by the same method used for animating the underlying skin. Let the generalized coordinates of a character be denoted by q = (r_0^T, θ^T)^T, where r_0 ∈ R^3 is the root position and θ is the joint angle vector. We use the linear blend skinning (LBS) method to deform the aura mesh and surface mesh:

x(q) = \sum_{i=1}^{m} w_i(x) \, T_i(q) \, T_i^{-1}(0) \, x(0),   (1)

where x(q) denotes the position of a vertex due to q and T_i is the transformation of the skeleton. The weight w_i(x) controls the degree of influence of T_i on x.
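Eq. (1) can be sketched in a few lines of code; the bone transforms, weights, and vertex below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def lbs_vertex(rest_pos, weights, rest_transforms, posed_transforms):
    """Linear blend skinning of one vertex, Eq. (1):
    x(q) = sum_i w_i(x) * T_i(q) * T_i(0)^-1 * x(0).
    All transforms are 4x4 homogeneous matrices; weights sum to 1."""
    x0 = np.append(rest_pos, 1.0)  # homogeneous rest position x(0)
    x = np.zeros(4)
    for w, T0, Tq in zip(weights, rest_transforms, posed_transforms):
        # Undo the rest-pose bone transform, apply the posed one, blend.
        x += w * (Tq @ np.linalg.inv(T0) @ x0)
    return x[:3]
```

The same routine deforms both the skin mesh and the aura mesh; only the weight sets differ.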

To compute the weights w_i^*(x), x ∈ S, of the skin surface, where S denotes the skin surface, we use an automatic rigging tool provided by MIXAMO (https://www.mixamo.com/). Given the skinning weights of the skin surface, we compute the weights of the aura mesh vertices using the bounded biharmonic weight method [JBPS11]. Specifically, denoting the given aura mesh by A, we solve the following optimization problem, which minimizes the bi-Laplacian for a smooth change of the weights:

minimize \sum_{i=1}^{m} \int_A (\Delta w_i)^2 \, dV   (2)

subject to w_i(x) = w_i^*(x) if x ∈ S,   (3)

0 ≤ w_i ≤ 1,   (4)

\sum_{i=1}^{m} w_i = 1.   (5)

Constraint (3) ensures that the inner boundary surface of the aura mesh deforms in the same manner as the skin, so that a gap between the skin surface and the aura mesh is prevented. The remaining constraints (4) and (5) enforce the non-negativity and partition of unity of the skinning weights, respectively.

Using the method described above, we can ensure that the skinning weights of an aura mesh achieve spatial smoothness and prevent any gap between the animated skin mesh and the aura mesh. Figure 6 shows the shapes of the deformed aura meshes for various poses made by several character models. The bounding cylinders are rigidly transformed by the animation of the joints.

5. Skin-Level Spatial Relationship

Scene semantics of human interaction motion can be described with respect to the characters' body parts and their locations. For example, we can describe the interaction motion shown in Fig. 6 (bottom left) as "a woman's hip is on her partner's back." The aura mesh can be used to represent such a description. During an interaction motion of the source animation, if some part of a character's skin collides with the other character's aura mesh, we determine that the interaction occurs between the two characters around that collision point. Because these points are important for representing the spatial relationships between characters, we dub them source interaction points. An interaction point is represented by the index of the tetrahedral element and its barycentric coordinates, and each interaction point is associated with the index of the skin vertex of the partner character. We refer to the points in the target aura mesh that correspond to the source interaction points as the target interaction points, as shown in Fig. 7. The retargeting process aims to pose the characters such that the corresponding skin vertices are located at the target interaction points of the aura mesh of the partner character.

5.1. Identifying source interaction points

When the poses of the source characters are updated, their skin meshes and aura meshes are deformed using a skinning method. We then extract the interaction points by checking for collisions between the deformed skin mesh and the partner's aura mesh. For an efficient collision check, a broad-phase collision check is performed first between the internal bounding cylinders of one character and the external bounding cylinders of the other character. For colliding pairs, narrow-phase collision detection is performed. We initially construct AABB trees of the colliding sub-meshes of the aura mesh, after which we find the skin vertices that collide with the aura mesh. Constructing AABB trees takes some time, but the overall collision-checking performance is increased due to the large number of skin vertices. After the collision check, we store each collision pair as a source interaction point.
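The narrow-phase primitive, testing whether a skin vertex lies inside a tetrahedral element and recovering its barycentric coordinates, can be sketched as follows; this is a simplified stand-in for the AABB-tree query described above, not the paper's implementation.

```python
import numpy as np

def barycentric_in_tet(p, a, b, c, d):
    """Barycentric coordinates of point p in tetrahedron (a, b, c, d).
    Returns (coords, inside): p lies inside the tet iff all four
    coordinates are in [0, 1]."""
    # Solve [b-a | c-a | d-a] * (b2, b3, b4)^T = p - a; then b1 = 1 - b2 - b3 - b4.
    M = np.column_stack([b - a, c - a, d - a])
    b234 = np.linalg.solve(M, p - a)
    coords = np.concatenate([[1.0 - b234.sum()], b234])
    inside = bool(np.all(coords >= 0.0) and np.all(coords <= 1.0))
    return coords, inside
```

A skin vertex that tests inside some tetrahedron of the partner's aura mesh is stored as a source interaction point, keyed by the tet index and the returned coordinates.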

5.2. Calculating target interaction points

The extracted source interaction point p_s is expressed via its barycentric coordinates b_i for the vertices x_i of the associated tetrahedral element:

p_s = \sum_{i=1}^{4} b_i x_i,   (6)

where \sum_i b_i = 1. As the source interaction points represent the skin-level spatial relationship of the source animation, we identify their corresponding points in the target characters to preserve the spatial relationship. Because there is a one-to-one correspondence between a pair of aura meshes, all of the source interaction points in the source aura mesh can be mapped to their corresponding points in the target aura mesh using the same barycentric coordinates at the corresponding vertices:

p_t = \sum_{i=1}^{4} b_i x_i^*,   (7)

where x_i^* is the vertex in the target aura mesh with an index identical to that of x_i in Eq. (6).
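Because the source and target aura meshes share one topology, the mapping of Eqs. (6) and (7) reduces to an index lookup; a minimal sketch (the array names are assumptions):

```python
import numpy as np

def transfer_interaction_point(bary, tet_vertex_ids, target_vertices):
    """Eq. (7): evaluate the source point's barycentric coordinates b_i
    at the target aura mesh vertices x_i* that carry the same indices
    as the source tetrahedron's vertices."""
    pts = target_vertices[tet_vertex_ids]  # (4, 3) rows are the x_i*
    return bary @ pts                      # p_t = sum_i b_i * x_i*
```

No search is needed at run time: the tet index and coordinates stored with each source interaction point are reused verbatim on the target mesh.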

6. Target Motion Generation

We solve inverse kinematics to retarget the source motions to the target characters. All characters have the same joint hierarchy with 52 joints, including the hands, but with different sizes and skin shapes.

The retargeted motion should follow the source motion while preserving the spatial relationship between the interacting characters. As the shapes of the target characters differ from those of the source characters, the two goals contradict each other, implying that a compromise is needed. In addition, the resulting motion should appear natural by satisfying basic constraints such as foot-ground contact.


Figure 7: Interaction points that are used to represent the skin-level spatial relationships. Left: source interaction points are shown in red. Right: corresponding target interaction points of each character are shown in green and orange. Their associated vertices are in red.

We achieve motion retargeting by solving optimization-based inverse kinematics at each time frame. The retargeted pose q in a time frame is determined by minimizing the weighted sum of the energies:

q^* = \arg\min_q \; \lambda_s E_s + \lambda_d E_d + \lambda_b E_b + \lambda_g E_g + \lambda_\theta E_\theta,   (8)

where the energy terms pertain to the spatial relationship (E_s), pose preservation (E_d), root position (E_b), ground contact (E_g), and joint angles (E_θ). The weights λ control the importance of each energy term. The optimization is solved by the Levenberg-Marquardt algorithm. For two-person interaction motions, the optimization parameter q in (8) includes the generalized coordinates of both characters.

Because our inverse kinematics method is performed per frame, the temporal smoothness of the resulting motion is not guaranteed. To alleviate this, instead of directly applying the resulting pose q^*(t) from the optimizer to the character, we interpolate it with the pose q(t-1) from the previous frame. Specifically, the root positions are linearly interpolated and the joint vectors are interpolated with slerp. We describe each energy term next.
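The per-frame smoothing step just described can be sketched with a standard quaternion slerp; the blend factor and the (w, x, y, z) quaternion convention below are assumptions, not values from the paper.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                  # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:               # nearly parallel: fall back to lerp
        out = q0 + t * (q1 - q0)
        return out / np.linalg.norm(out)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def smooth_pose(root_prev, root_new, quats_prev, quats_new, t=0.5):
    """Blend the optimizer output with the previous frame's pose:
    linear interpolation for the root position, slerp per joint."""
    root = (1 - t) * root_prev + t * root_new
    quats = [slerp(a, b, t) for a, b in zip(quats_prev, quats_new)]
    return root, quats
```

Blending toward the previous frame trades a little responsiveness for the temporal smoothness the per-frame solver cannot guarantee on its own.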

Skin-level spatial relationship The target character pose preserves the spatial relationships between the source characters by satisfying the desired positions of the target interaction points. Denoting by x_i the position of the skin vertex associated with an interaction point i, we compute the pose that brings x_i to p_{t,i}:

E_s(q) = \sum_i w_i \| p_{t,i}(q) - x_i(q) \|^2.   (9)

The weight w_i controls the importance of the interaction point i. We assign greater weights to the points near the skin surface and nearly zero weights to the points near the external boundary. Specifically, the weight w_i is determined as

w_i = \frac{1 - S_i^2}{N}, \quad S_i = \frac{SD(t_i)}{SD_{max}},   (10)

where SD(t_i) denotes the signed distance of p_{t,i}, as computed by the barycentric interpolation of the signed distances of the neighboring vertices x_i^*, and SD_{max} is the maximum signed distance of the sub-mesh to which p_{t,i} belongs. The weight is normalized by N, the total number of interaction points in the sub-mesh, such that the resulting animation is not influenced by the number of interaction points.
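Eq. (10) in code form; the signed-distance values and point count passed in are illustrative:

```python
def interaction_weight(sd, sd_max, n_points):
    """Eq. (10): weight of one target interaction point. Points near the
    skin (signed distance ~ 0) get weight ~ 1/N; points near the external
    boundary (signed distance ~ SD_max) get weight ~ 0."""
    s = sd / sd_max          # normalized signed distance S_i
    return (1.0 - s * s) / n_points
```

The quadratic falloff means points hugging the skin dominate the energy term in Eq. (9), which is exactly where skin-level contact semantics live.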

Skeletal pose preservation To encourage the target character to follow the source pose, the energy term E_d is defined as

E_d(q) = \sum_i \alpha_i \| d_i(q) - \bar{d}_i \|^2,   (11)

where d_i ∈ R^3 (i = 1 ... 52) is a direction vector that represents the skeletal pose. These vectors include the directions of the internal bones as well as the orientations of the root and the end effectors. The desired value \bar{d}_i is set to the direction of the source pose. The weight α_i is assigned a higher value for a more important part, such as the end effectors.

Root position If a target character's height is different from that of the source character, the retargeted character may float above or penetrate into the ground. To help prevent either scenario, we set the desired root position \bar{r}_0 by adjusting the height of the root according to the height difference of the roots at the T-pose:

E_b(q) = \| r_0(q) - \bar{r}_0 \|^2.   (12)

Note that the desired root position acts only as a hint to adjust the overall height of the root during the motion. The actual root height will be determined as an outcome of the optimization.

Foot-ground contact This term encourages the preservation of the ground-foot contact state from the source motion:

E_g(q) = \sum_i I(i) \| g_i(q) - \bar{g}_i \|^2,   (13)

where g_i is the foot position. Its desired value \bar{g}_i is determined from the source motion. The indicator function I(i) returns 1 if the foot is in contact with the ground in the source motion, and 0 otherwise. A foot is determined to be in contact with the ground if its distance to the ground and its speed are less than certain pre-specified thresholds.
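The contact indicator can be sketched as a two-threshold test; the threshold values below are illustrative, not the paper's:

```python
def in_contact(foot_height, foot_speed, h_thresh=0.02, v_thresh=0.05):
    """Foot-ground contact indicator I(i): a foot counts as in contact
    when both its height above the ground and its speed fall below
    pre-specified thresholds (units assumed: meters, meters/second)."""
    return foot_height < h_thresh and foot_speed < v_thresh
```

Requiring both conditions avoids false positives from a foot skimming the ground mid-swing (low height, high speed).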

Joint angles The skeletal pose preservation term E_d does not prevent joint twists, which may lead to the creation of a candy-wrapper


Figure 8: Snapshots of a shoulder-rubbing motion. Top-left: source character motion. Top-right: without the spatial relationship term, a skin penetration artifact occurs on the surface. Bottom: using our method, we can preserve the skin-level spatial relationship with respect to the target skin shapes.

artifact. To avoid this, we add the energy term below, which encourages the preservation of the joint angles.

E_θ(q) = \sum_i β_i \| θ_i - \bar{θ}_i \|^2,    (14)

where the Euclidean distance measures the difference between the two angles. The weight β_i is assigned a high value for the angle corresponding to the joint twist. This energy term serves as a regularizer that prevents free axial rotation of the bones.

In our experiments, the weight parameters λ are set as follows: (λ_s, λ_d, λ_b, λ_g, λ_θ) = (3.0, 1.0, 0.3, 0.5, 0.1).
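Putting the terms together, the optimizer minimizes a single weighted objective, with E_s denoting the spatial relationship term. The sketch below takes only the λ weights from the paper; the per-term energy values are made up for illustration:

```python
# Weights (lambda_s, lambda_d, lambda_b, lambda_g, lambda_theta)
# taken from the paper; everything else here is illustrative.
LAMBDA = {"s": 3.0, "d": 1.0, "b": 0.3, "g": 0.5, "theta": 0.1}

def total_energy(terms):
    """E(q) = lambda_s*E_s + lambda_d*E_d + lambda_b*E_b
            + lambda_g*E_g + lambda_theta*E_theta."""
    return sum(LAMBDA[name] * value for name, value in terms.items())

# Hypothetical per-term energies at one optimization iterate:
print(total_energy({"s": 1.0, "d": 2.0, "b": 0.0, "g": 4.0, "theta": 10.0}))  # 8.0
```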

7. Experiments

We tested the proposed method with four types of interaction motion. Two-person interaction motions were prepared either by composing two separate motion clips (boxing scene) or by capturing two-person interaction motions using multiple Kinect devices (martial arts and dance motions). After being applied to skinned characters, the motion sequences for the source motions were manually refined to create proper contacts. The accompanying video shows the quality of the retargeted motions for the test scenes. All experiments were performed on a PC with an Intel Core i7-4790 CPU, 16GB of RAM, and an NVIDIA GeForce GTX 750 graphics card.

Shoulder rubbing Figure 8 shows a shoulder-rubbing motion retargeted to other characters. Rubbing the shoulder involves continuous contact between the skin of the two body parts; thus, the interaction semantics can only be preserved when the skin-level spatial relationship between the body parts is preserved during the retargeting step. Figure 8 shows that the retargeted motion loses the corresponding motion semantics and exhibits skin penetration if the

Figure 9: Snapshots of a boxing scene

spatial relationship term (9) is not used in the retargeting process. In contrast, our method successfully retargets the motion to other characters of different sizes.

Boxing In a boxing motion, the interactions between the two persons occur through multiple body parts; thus, proper retargeting cannot be achieved without considering the spatial relationships between these body parts. Moreover, in contrast to the previous example, the retargeting in this case requires modification of all body poses. Figure 9 shows that our method appropriately preserves the interaction semantics while maintaining the naturalness of the motions. Skin interpenetration is naturally avoided as well. Note that because we recompute the joint angles of both characters, the motions of both characters are modified even if only one character is changed.

Martial arts In the martial arts example shown in Fig. 10, interaction between the two characters occurs frequently. Character A rolls over character B's flank while character B supports character A using his back. The rolling motion involves continuous contact between the two characters, which is properly preserved by our method. The middle column of Fig. 10 shows the result when the spatial relationship term is omitted from the optimization process.

Contemporary dance Lastly, Figs. 11 and 13 show a two-person contemporary dance sequence in which the dancers exchange forces and weights through different body parts. The input motion sequence starts with an interaction movement, followed by non-interactive movements. In our method, the aura mesh specifies the boundary of interaction, and body parts that are distant from the partner's aura mesh are not affected by the partner's movement. The motion retargeted using the aura mesh creates smooth transitions between these motions.

Performance

Each character model has 6890 vertices, and it took 18 seconds to build each aura mesh, each of which consists of 22821 tetrahedral elements.

Table 1 shows the computational performance of our method. We measure the maximum and average computation times to obtain the




Figure 10: Snapshots of a martial arts example

Figure 11: Snapshots of a dance motion

interaction points and generate the target character motions. The average in each case is calculated as the total time divided by the number of frames in the animation.

The major operations during the process of finding the interaction points are the AABB tree update and the collision check. While AABB trees are created in parallel for every colliding sub-mesh, the total time for constructing the trees depends on the number of trees. The hugging pose shown in Fig. 12 was analyzed to check the worst-case time, as it has a high number of colliding sub-meshes and interaction points; it took 5.6 seconds per frame for retargeting.
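The collision check between sub-meshes rests on the standard axis-aligned bounding box overlap test: two AABBs intersect iff their intervals overlap on all three axes. A minimal sketch of that test (not the authors' implementation):

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Two axis-aligned boxes overlap iff their intervals overlap on
    every axis (the standard separating-interval test)."""
    return all(alo <= bhi and blo <= ahi
               for alo, ahi, blo, bhi in zip(a_min, a_max, b_min, b_max))

print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))        # False
```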

During the motion generation step, the optimization process converged within an average of five iterations for the shoulder-rubbing

Figure 12: Snapshots of a hugging pose

Table 1: Computational performance of our method

Offline preprocessing:
# Vertices of a Character | # Tetrahedral Elements in Aura Mesh | Offline Time (sec.)
6890                      | 22821                               | 18.187

Per-frame performance:
Experiment                      | State | # Trees | # Points | Finding Interaction Points (sec./frame) | Retargeting (sec./frame)
Shoulder Rubbing (179 frames)   | mean  | 4.86    | 403.12   | 0.107                                   | 0.690
                                | max   | 6       | 1510     | 0.137                                   | 1.490
Boxing (45 frames)              | mean  | 1.95    | 85.44    | 0.039                                   | 0.790
                                | max   | 8       | 861      | 0.143                                   | 4.573
Martial Arts (74 frames)        | mean  | 4.06    | 268.716  | 0.063                                   | 1.300
                                | max   | 12      | 801      | 0.146                                   | 4.360
Contemporary Dance (133 frames) | mean  | 3.50    | 168.24   | 0.056                                   | 1.070
                                | max   | 6       | 966      | 0.111                                   | 2.930
Hugging Pose                    | max   | 14      | 2380     | 0.170                                   | 5.616

animation and 11 iterations for the other animations. However, our implementation requires a nontrivial amount of time to update the aura mesh per iteration, as this is done on the CPU, even though it is performed in parallel. We expect that a GPU implementation of the aura mesh skinning step would increase the speed significantly.

8. Limitations and Future Work

Our method has several limitations, and overcoming them would make for interesting future research. The LBS-based aura mesh animation prevents gaps with the skin mesh if the skin is also animated by the same LBS method. However, the well-known artifacts of the LBS method, e.g., the candy-wrapper phenomenon, propagate to the aura mesh as well, leading to unnatural deformation of the aura mesh for extreme poses. The LBS artifacts can be more problematic for the aura mesh than for the skin mesh because the aura mesh is located farther from the bones. In addition, LBS may cause self-overlap of the aura mesh, in which case the interaction points must be properly identified from the multiple collision points caused by the overlapping tetrahedra. More advanced skinning methods [KH14, VBG∗13, VGB∗14a, JBK∗12] would alleviate this problem.

If the skin is deformed by physical simulation, the aura mesh cannot be animated with LBS methods. In this case, the aura mesh must be deformed appropriately given the deformed skin shape as a boundary constraint. A number of volumetric mesh deformation




Figure 13: Two-character dance motion. The aura meshes define the interaction boundaries. Non-interactive individual motions are not affected by the other party if the characters are separated beyond the aura meshes, as can be seen in frames 101 and 118.

methods can be used for this purpose [CPSS10, Lip12, PNdJO14], and it will be necessary to evaluate their utility in terms of the deformation quality of the aura mesh and the computational efficiency.

One assumption of the aura mesh is that the partner's motion outside its boundary is not an interaction motion. However, there could be cases in which interactions occur outside the predefined external boundary. In such cases skin-level retargeting is not important, and existing methods [HKT10] will be useful. In this regard, the method of [WSSK13] parameterizes an object's surrounding space without a boundary, which can be a strong advantage if the parameterization is used for motion retargeting.

Since our method computes the retargeted pose per frame, we interpolate the result with that of the previous frame to increase smoothness. This strategy, however, may cause violations of constraints such as foot-ground contact. A more principled way would be to incorporate an additional smoothness term into the optimization. While the visual qualities were similar when we tested the smoothness term on some examples shown in the paper, the smoothness-term strategy is expected to generate better results for more challenging scenarios.
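The per-frame smoothing described above amounts to blending the current retargeted pose with the previous frame's result. A sketch assuming a simple linear blend of joint parameters; the blend weight is an assumption, and for large rotations a quaternion slerp would be more appropriate:

```python
def smooth_pose(q_prev, q_curr, blend=0.5):
    """Blend the current frame's retargeted pose parameters with the
    previous frame's result. 'blend' is an assumed weight, not a
    value given in the paper."""
    return [(1.0 - blend) * p + blend * c for p, c in zip(q_prev, q_curr)]

print(smooth_pose([0.0, 0.0], [1.0, 2.0]))  # [0.5, 1.0]
```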

We use every skin vertex as a candidate interaction point to express the spatial relationships, indicating the desired positions of the corresponding skin vertices of the target character. Our method could be improved with respect to selecting the candidate interaction points by using a distribution of sample points that emphasizes the characteristics of the surface [PKH∗17]. In addition

to the interaction points, additional types of interaction indicators may enhance the ability to capture and retarget interaction semantics. For instance, an indicator that encodes the interaction direction of body parts may complement the interaction points.

Finally, the retargeting quality of our method can degrade if the source and target characters have very different dimensions. For example, the target interaction points may not be reachable if an adult's motion is retargeted to a child, and a retargeted motion that satisfies all interaction points may look unnatural if a child's motion is retargeted to an adult. A retargeting method capable of solving such challenging cases should intelligently compromise among various desiderata depending on the motion semantics while generating natural-looking motions by modifying the input motion to a large extent. This remains a challenging and important goal in the area of motion retargeting.

Acknowledgement

This work was supported by the Global Frontier R&D Program funded by NRF, MSIP, Korea (2015M3A6A3073743).

References

[AAKC13] AL-ASQHAR R. A., KOMURA T., CHOI M. G.: Relationship descriptors for interactive motion adaptation. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2013), ACM, pp. 45–53.

[CK99] CHOI K.-J., KO H.-S.: On-line motion retargetting. In Computer Graphics and Applications, 1999. Proceedings. Seventh Pacific Conference on (1999), IEEE, pp. 32–42.

[CKHL11] CHOI M. G., KIM M., HYUN K. L., LEE J.: Deformable motion: Squeezing into cluttered environments. In Computer Graphics Forum (2011), vol. 30, Wiley Online Library, pp. 445–453.

[CPSS10] CHAO I., PINKALL U., SANAN P., SCHRÖDER P.: A simple geometric model for elastic deformations. ACM Transactions on Graphics (TOG) 29, 4 (2010), 38.

[Gle98] GLEICHER M.: Retargetting motion to new characters. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques (1998), ACM, pp. 33–42.

[HCKL13] HO E. S., CHAN J. C., KOMURA T., LEUNG H.: Interactive partner control in close interactions for real-time applications. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 9, 3 (2013), 21.

[HKS17] HOLDEN D., KOMURA T., SAITO J.: Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG) 36, 4 (2017), 42.

[HKT10] HO E. S., KOMURA T., TAI C.-L.: Spatial relationship preserving character motion adaptation. ACM Transactions on Graphics (TOG) 29, 4 (2010), 33.

[HvKW∗16] HU R., VAN KAICK O., WU B., HUANG H., SHAMIR A., ZHANG H.: Learning how objects function via co-analysis of interactions. ACM Transactions on Graphics (TOG) 35, 4 (2016), 47.

[HZvK∗15] HU R., ZHU C., VAN KAICK O., LIU L., SHAMIR A., ZHANG H.: Interaction context (ICON): Towards a geometric functionality descriptor. ACM Transactions on Graphics (TOG) 34, 4 (2015), 83.

[JBK∗12] JACOBSON A., BARAN I., KAVAN L., POPOVIĆ J., SORKINE O.: Fast automatic skinning transformations. ACM Transactions on Graphics (TOG) 31, 4 (2012), 77.

[JBPS11] JACOBSON A., BARAN I., POPOVIĆ J., SORKINE O.: Bounded biharmonic weights for real-time deformation. ACM Trans. Graph. 30, 4 (2011), 78.

[KH14] KIM Y., HAN J.: Bulging-free dual quaternion skinning. Computer Animation and Virtual Worlds 25, 3-4 (2014), 321–329.

[KPBL16] KIM Y., PARK H., BANG S., LEE S.-H.: Retargeting human-object interaction to virtual avatars. IEEE Transactions on Visualization and Computer Graphics 22, 11 (2016), 2405–2412.

[LHP06] LIU C. K., HERTZMANN A., POPOVIĆ Z.: Composition of complex optimal multi-character motions. In Proceedings of the 2006 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2006), Eurographics Association, pp. 215–222.

[Lip12] LIPMAN Y.: Bounded distortion mapping spaces for triangular meshes. ACM Transactions on Graphics (TOG) 31, 4 (2012), 108.

[LL06] LEE J., LEE K. H.: Precomputing avatar behavior from human motion data. Graphical Models 68, 2 (2006), 158–174.

[LS99] LEE J., SHIN S. Y.: A hierarchical approach to interactive motion editing for human-like figures. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques (1999), ACM Press/Addison-Wesley Publishing Co., pp. 39–48.

[MBBT00] MONZANI J.-S., BAERLOCHER P., BOULIC R., THALMANN D.: Using an intermediate skeleton and inverse kinematics for motion retargeting. In Computer Graphics Forum (2000), vol. 19, Wiley Online Library, pp. 11–19.

[MDB17] MOLLA E., DEBARBA H. G., BOULIC R.: Egocentric mapping of body surface constraints. IEEE Transactions on Visualization and Computer Graphics (2017).

[MXH∗10] MA W., XIA S., HODGINS J. K., YANG X., LI C., WANG Z.: Modeling style and variation in human motion. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2010), Eurographics Association, pp. 21–30.

[NK12] NAKAOKA S., KOMURA T.: Interaction mesh based motion adaptation for biped humanoid robots. In Humanoid Robots (Humanoids), 2012 12th IEEE-RAS International Conference on (2012), IEEE, pp. 625–631.

[PKH∗17] PIRK S., KRS V., HU K., RAJASEKARAN S. D., KANG H., YOSHIYASU Y., BENES B., GUIBAS L. J.: Understanding and exploiting object interaction landscapes. ACM Transactions on Graphics (TOG) 36, 3 (2017), 31.

[PML∗09] PRONOST N., MULTON F., LI Q., GENG W., KULPA R., DUMONT G.: Morphology independent motion retrieval and control. International Journal of Virtual Reality 8, 4 (2009).

[PMRMB15] PONS-MOLL G., ROMERO J., MAHMOOD N., BLACK M. J.: Dyna: A model of dynamic human shape in motion. ACM Transactions on Graphics (Proc. SIGGRAPH) 34, 4 (Aug. 2015), 120:1–120:14.

[PNdJO14] PFAFF T., NARAIN R., DE JOYA J. M., O'BRIEN J. F.: Adaptive tearing and cracking of thin sheets. ACM Transactions on Graphics (TOG) 33, 4 (2014), 110.

[SOL13] SEOL Y., O'SULLIVAN C., LEE J.: Creature features: online motion puppetry for non-human characters. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2013), ACM, pp. 213–221.

[SZT∗07] SHI X., ZHOU K., TONG Y., DESBRUN M., BAO H., GUO B.: Mesh puppetry: cascading optimization of mesh deformation with inverse kinematics. In ACM Transactions on Graphics (TOG) (2007), vol. 26, ACM, p. 81.

[TAAP∗16] TONNEAU S., AL-ASHQAR R. A., PETTRÉ J., KOMURA T., MANSARD N.: Character contact re-positioning under large environment deformation. In Computer Graphics Forum (2016), vol. 35, Wiley Online Library, pp. 127–138.

[VBG∗13] VAILLANT R., BARTHE L., GUENNEBAUD G., CANI M.-P., ROHMER D., WYVILL B., GOURMEL O., PAULIN M.: Implicit skinning: real-time skin deformation with contact modeling. ACM Transactions on Graphics (TOG) 32, 4 (2013), 125.

[VGB∗14a] VAILLANT R., GUENNEBAUD G., BARTHE L., WYVILL B., CANI M.-P.: Robust iso-surface tracking for interactive character skinning. ACM Transactions on Graphics (TOG) 33, 6 (2014), 189.

[VGB∗14b] VOGT D., GREHL S., BERGER E., AMOR H. B., JUNG B.: A data-driven method for real-time character animation in human-agent interaction. In International Conference on Intelligent Virtual Agents (2014), Springer, pp. 463–476.

[WLO∗14] WON J., LEE K., O'SULLIVAN C., HODGINS J. K., LEE J.: Generating and ranking diverse multi-character interactions. ACM Transactions on Graphics (TOG) 33, 6 (2014), 219.

[WMC11] WEI X., MIN J., CHAI J.: Physically valid statistical models for human motion generation. ACM Transactions on Graphics (TOG) 30, 3 (2011), 19.

[WP09] WAMPLER K., POPOVIĆ Z.: Optimal gait and form for animal locomotion. In ACM Transactions on Graphics (TOG) (2009), vol. 28, ACM, p. 60.

[WSSK13] WANG H., SIDOROV K. A., SANDILANDS P., KOMURA T.: Harmonic parameterization by electrostatics. ACM Transactions on Graphics (TOG) 32, 5 (2013), 155.

[XZY∗07] XU W., ZHOU K., YU Y., TAN Q., PENG Q., GUO B.: Gradient domain editing of deforming mesh sequences. ACM Transactions on Graphics (TOG) 26, 3 (2007), 84.

[YAH10] YAMANE K., ARIKI Y., HODGINS J.: Animating non-humanoid characters with human motion data. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2010), Eurographics Association, pp. 169–178.

[ZCK17] ZHAO X., CHOI M. G., KOMURA T.: Character-object interaction retrieval using the interaction bisector surface. In Computer Graphics Forum (2017), vol. 36, Wiley Online Library, pp. 119–129.

[ZWK14] ZHAO X., WANG H., KOMURA T.: Indexing 3D scenes using the interaction bisector surface. ACM Transactions on Graphics (TOG) 33, 3 (2014), 22.
