
Authoring System for Choreography Using Dance Motion Retrieval and Synthesis

Ryo Kakitsuka∗1 Kosetsu Tsukuda2 Satoru Fukayama2

Naoya Iwamoto1 Masataka Goto2 Shigeo Morishima1

1 Waseda University  2 National Institute of Advanced Industrial Science and Technology (AIST)

Abstract

Dance animations of a three-dimensional (3D) character rendered with computer graphics (CG) have been created by using motion capture systems or 3DCG animation editing tools. Since these methods require skills and a large amount of work from the creator, automated choreography synthesis systems have been developed to make creating dance motions much easier. However, those systems are not designed to reflect an artist's preferred context of choreography in the synthesized result. Therefore, supporting the user's preference is still important. In this paper, we propose a novel dance creation system that supports the user in creating choreography reflecting their preferences. A user first selects a preferred motion from the database and then assigns it to an arbitrary section of the music. With a few mouse clicks, our system helps the user search for alternative dance motions that reflect his or her preference by using relevance feedback. The system automatically synthesizes a continuous choreography under the constraint condition that the motions specified by the user are fixed. We conduct user studies and show that users can easily create new dance motions reflecting their preferences by using the system.

Keywords: relevance feedback, character animation, motion capture, dance

∗ Corresponding author: [email protected]

1 Introduction

Owing to the recent development of 3DCG tools, 3D character dance animation has been widely used in the entertainment field; for example, in films and computer games, and even for live performances in the form of a "virtual popstar", a 3DCG character animation projected on stage. Although there is a great necessity for choreographic authoring tools for character dance animation, using 3DCG animation editing tools still requires a large amount of work, skill, and dance knowledge.

In light of the above, the automatic synthesis of dance motion has been the method used to construct a 3D character dance animation with ease. However, fully automated synthesis [1, 2] does not allow animators to express their preferences. There are methods for editing the timing and speed of movement in a motion sequence [3]. Although these methods are useful for reflecting a user's preference in choreography or for editing the dance motion more specifically, they are still insufficient when a user wants to replace a motion in a choreography with another one in a specific part of the music. In this case, motion retrieval techniques enable animators to choose an alternative motion that matches their preference from the database. However, existing motion retrieval methods are insufficient for this purpose, since they often place a large burden on animators by requiring input motions as queries by means of figure drawing [4, 5] or motion capture devices such as Microsoft Kinect

CASA 2017 122


sensors1 [6, 7].

The goal of this study is to provide an authoring system for creating character dance animation reflecting users' preferences that does not require any knowledge of choreographic composition and variations, or skills for constructing CG animations. Our approach consists of the following two steps: 1) the user assigns a preferred motion obtained through our motion search algorithm to an arbitrary section of the given music; 2) our system automatically assigns motion segments from the database to the remaining sections of the music so that the sequence of motions matches the music input, considering the connectivity of the motion segments and the climax level of the music.

To implement the system, we address two technical issues. The first is how to enable a user to easily search for a preferred motion in the motion database. The second is how to automatically synthesize a dance sequence under the condition that the motions selected by the user are fixed. We solve the first problem by visually presenting motion sequences that can be changed based on the user's relevance feedback [8] and the diversification framework [9]. It is difficult to judge whether a dance motion is good or not without previewing the dance in conjunction with a corresponding musical piece. In our system, the user can view candidate sequences on the screen and find a preferred alternative motion with only a few mouse clicks. We address the second issue by filling the time between fixed parts with reasonable dance motions automatically selected from the database. We select the motions by minimizing the sum of the transition cost function using Dijkstra's algorithm.

The main contributions in this paper are asfollows.

1. We introduce a relevance feedback framework for dance motion retrieval from a dance motion capture database. Our method enables a user to easily search for a preferred dance motion.

2. Our method can automatically synthesize a dance sequence considering the connectivity of the motion segments and how well a candidate motion segment matches a corresponding piece of music.

1 www.xbox.com/en-US/kinect

3. We propose a new system for constructing character dance animation considering the animator's preference. Results of a user study with the system showed that participants found the system suitable for easily creating new dance motions reflecting their preferences.

2 Related Work

This section describes previous research relating to the two technical issues referred to above, namely, motion retrieval methods and automatic dance motion synthesis.

2.1 Motion Retrieval

In the past decade, motion retrieval technologies have been advancing owing to the increasing availability of large-scale motion databases2,3. There are several ways to retrieve a desired motion sequence from a large-scale human motion database, the simplest being the use of textual keywords as queries; for example, "walking", "jumping", or more detailed sentences such as "a kick of the left foot followed by a punch" [10]. While this approach is intuitive and efficient, it requires a large amount of manual effort to annotate all the motion sequences in the database, and it is not sufficient for retrieving certain desired dance motion sequences, because some dance motions do not have appropriate names. To overcome these limitations, many researchers have studied content-based human motion retrieval methods [6, 7, 11]. With those approaches, the desired motion sequences are retrieved by submitting a similar motion sequence as a query using simple and easy motion capture devices such as Microsoft Kinect sensors or the Perception Neuron4. However, it is still difficult to acquire appropriate motion sequences as a query, especially when a user wishes to retrieve a dance motion sequence, because these often include complex or difficult motions and it is necessary to

2 http://mocap.cs.cmu.edu
3 http://smile.uta.edu/hmd/
4 https://neuronmocap.com


be able to perform such movements. To acquire appropriate motion sequences as a query, a puppet interface [12] and sketch-based interfaces [4, 5, 13] have been proposed. However, submitting a motion sequence by using the puppet interface or sketching a person's movement is not easy for a general user.

Moreover, these approaches are premised on the clarity of the user's information need. They cannot accommodate unclear requests of a user, such as searching for a dance motion sequence by matching an expected image to a piece of music. Exploratory search [14] is effective for accommodating such unclear requests. An exploratory search system [15, 16] needs to be designed so that it is tailored to its purpose and targeted users. For example, Bernard et al. [15] presented an exploratory search system developed in close collaboration with the targeted users, who were researchers working on human motion synthesis and analysis, including a summative field study. However, an exploratory dance motion search system for constructing character dance animation matching input music has not yet been established. Our dance motion search system enables the user to search a wide variety of motion sequences without in-depth knowledge of dance.

2.2 Dance Motion Synthesis

The mainstream approach to generating dance motion from input music is uncovering the relationships between existing dances and their accompanying music. Musical contents such as tempo [17], acoustic beat features [1, 18], pitch and chord [19], and melodic contour [20] have been used to analyze these relationships, in addition to combinations of acoustic features [21] and structural similarity [22]. These approaches construct a dance sequence by concatenating segments of existing dances. A method to generate dance choreography automatically using machine learning has also been introduced [2]; it enables various dance motions that are not included in existing motion databases to be generated.

However, these methods cannot automatically synthesize a sequence under the constraint condition required by our proposed system, namely that the motions chosen by the user are fixed. The dance synthesis method proposed by Xu et al. [23] has the potential to overcome the problem described

Figure 1. Overview of the system

above. The authors applied Motion Graphs [24] to motion synthesis for synchronization with streaming music. Motion Graphs is a method for creating realistic, controllable motions using the structural expression of relations among small bits of motion. The dance synthesis method introduced Synchronization Capacity to ensure that the duration of the generated motion matches the duration of the input music. However, this constraint often causes the same movement to be generated repeatedly, because a transitable loop of nodes is passed permanently. We define the transition cost function between motion segments, considering the connectivity of motion segments and how well a candidate motion segment matches a corresponding piece of music. Then, the filling dance motions are selected by minimizing the transition cost function using Dijkstra's algorithm. In this way, our method automatically synthesizes a dance sequence without a loop of nodes being made, under the constraint condition that the motions selected by the user are fixed.

3 Choreographic Authoring System

Fig. 1 provides an overview of the system. Firstly, the user of the system assigns a preferred motion to any part of the input music timeline (Fig. 1, (1)). Dance motions that correspond to other parts of the music are then synthesized


automatically, so that the movements are connected naturally (Fig. 1, (2)). Secondly, the user selects the parts he or she wishes to fix or change in a generated sequence of dance (Fig. 1, (3)). The part the user wishes to change is then replaced with other preferred motions from the database (Fig. 1, (4)). Dance motions that correspond to other parts of the music are updated automatically (Fig. 1, (5)). Finally, these steps are repeated in order to compose a new dance motion considering the user's preference.

There are two technical challenges to realizing the proposed system, as described in Section 1. We describe the solutions in Sections 4 and 5. Motion segments in the database are obtained by separating existing dance motions5 at every four beats. Although automatic dance motion segmentation methods based on choreographic primitives have been proposed [25], we do not use them owing to their insufficient accuracy.
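The four-beat segmentation above can be sketched as follows; this is a minimal illustration assuming beat times (in seconds) have already been estimated from the accompanying music, and the function name and frame-rate handling are ours, not the paper's.

```python
def split_into_segments(n_frames, fps, beat_times, beats_per_segment=4):
    """Cut a motion clip into segments at every `beats_per_segment` beats.

    Returns (start_frame, end_frame) pairs; the final segment absorbs any
    frames remaining after the last boundary.
    """
    boundaries = beat_times[::beats_per_segment]              # every 4th beat
    frames = [min(int(round(t * fps)), n_frames) for t in boundaries]
    frames.append(n_frames)                                    # close the last segment
    return [(a, b) for a, b in zip(frames, frames[1:]) if b > a]
```

For example, a 120-frame clip at 30 fps with beats every 0.5 s splits into two 60-frame (four-beat) segments.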

4 Dance Search System UsingRelevance Feedback

Our approach described in Sections 4 and 5 is constructed under the following two hypotheses established by Shiratori et al. [1]: 1) the rhythm of dance motions is synchronized to that of the music; 2) the intensity of dance motions is synchronized to that of the music.

In this section, we describe how our system enables users to easily search for preferred motions in the motion database. For most users, it is difficult to judge whether a dance motion is good or not without previewing it with a corresponding musical piece. In our system, the user can view candidate sequences on a screen and simply choose the preferred one. Our system enables the user to re-retrieve motion data using relevance feedback [8]. During the early phases of feedback, our system expands the variety of candidates, and gradually converges as the number of re-retrievals increases. We utilize the diversification framework proposed by Dou et al. [9] to select candidates with simultaneous consideration of a candidate's relevance to the corresponding piece of music and the motions already selected. The n+1-th motion segment m_{n+1} is given by

m_{n+1} = argmax_{m ∈ U\S_n} { ρ · rel(q, m) + (1 − ρ) · Φ(m, S_n, L_l) }.    (1)

5 http://www.perfume-global.com/

S_n is the set of motion segments already recommended by the system and L_l is the set of motion segments the user likes. U is the universal set of motion segments in the database. ρ (0 ≤ ρ ≤ 1) is the parameter that controls the tradeoff between rel(q, m) and Φ(m, S_n, L_l). rel(q, m) represents the relation between the candidate motion segment m and the corresponding piece of music q. We use the root mean square (RMS) mean of each musical bar as the music feature and the motion intensity (as defined by Eq. (2)) mean of each motion segment as the motion feature. We define the motion intensity W as the linear sum of the approximated instantaneous speed calculated from the position of the joints:

W(f) = Σ_i α_i · ∥x_i∥    (2)

where α_i is the regularization factor for the i-th joint, and ∥x_i∥ is the Euclidean norm of the approximated instantaneous velocity of the i-th joint. These regularization parameters depend on which parts we recognize as being important for dance expression. Φ(m, S_n, L_l) controls the variety of candidates. We define Φ(m, S_n, L_l) as follows:

Φ(m, S_n, L_l) = τ · min{D(m, m_i) | m_i ∈ S_n} + (1 − τ) · max{Sim(m, m_j) | m_j ∈ L_l}    (3)

where D(m, m_i) and Sim(m, m_i) represent the dissimilarity and the similarity between m and m_i, respectively. Sim(m, m_i) is calculated using the method proposed by Wang et al. [7]. D(m, m_i) is given by D(m, m_i) = 1 − Sim(m, m_i). τ is the parameter that controls the diversity of candidates and is given by

τ_0 = 1,  τ_k = (19/20) · τ_{k−1}  (k = 1, 2, · · ·)    (4)

where k is the number of feedback instances. As the number of re-retrievals increases, τ decreases, thereby contracting the variety of candidates.
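The candidate selection of Eqs. (1)–(4) can be sketched as a greedy loop that repeatedly picks the highest-scoring remaining segment; this is a simplified illustration in which `rel` and `dissim` are placeholder callables (the paper's actual features are the RMS/intensity match of Eq. (2) and the similarity measure of Wang et al. [7]).

```python
import math

def select_candidates(universe, rel, dissim, liked, shown, k_feedback,
                      rho=0.5, n_candidates=6):
    """Greedy diversified selection: each new candidate maximizes
    rho * rel(m) + (1 - rho) * Phi(m, S_n, L_l) over m in U \ S_n (Eq. 1)."""
    tau = (19.0 / 20.0) ** k_feedback   # Eq. (4): tau_0 = 1, tau_k = (19/20) tau_{k-1}
    shown = set(shown)
    selected = []
    for _ in range(n_candidates):
        best, best_score = None, -math.inf
        for m in universe:
            if m in shown or m in selected:
                continue                # restrict to U \ S_n
            # Eq. (3): diversity vs. already-shown segments, similarity to liked ones
            d_term = min((dissim(m, mi) for mi in list(shown) + selected), default=1.0)
            s_term = max((1.0 - dissim(m, mj) for mj in liked), default=0.0)
            phi = tau * d_term + (1.0 - tau) * s_term
            score = rho * rel(m) + (1.0 - rho) * phi   # Eq. (1)
            if score > best_score:
                best, best_score = m, score
        if best is None:
            break
        selected.append(best)
    return selected
```

With τ = 1 (no feedback yet) the diversity term dominates Φ; as feedback accumulates, τ decays and candidates drift toward the liked segments.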


Figure 2. Automatic synthesis of a dance sequence using Dijkstra's algorithm. The red bold lines show the route minimizing the sum of the cost functions.

5 Automatic Synthesis of a DanceSequence

Our system automatically synthesizes a dance sequence matched to the input music, with the constraint condition that the motions selected by the user are fixed. Therefore, the user is able to focus on the parts of the input music timeline for which he or she has a strong preference.

First, the user assigns a preferred motion obtained through our dance search system described in Section 4 to the parts with strong preference. Subsequently, the system assigns motions from the motion database to the other parts of the music by considering the connectivity of motion segments and how well a candidate motion segment matches a given piece of music. The goal of this is to acquire the most appropriate motion segment sequence. The rhythm of the candidate dance motion segments is synchronized to that of the input music by resizing the motion segments. The motion segment sequence that minimizes the sum of the transition cost function C is obtained by using Dijkstra's algorithm (Fig. 2). The transition cost function from motion segment A to B is given by

C(A, B) = β · C_connect(A, B) + (1 − β) · C_music(q, B).    (5)

β (0 ≤ β ≤ 1) is the parameter that controls the tradeoff between C_connect(A, B) and C_music(q, B). C_connect(A, B) analyzes the connectivity from motion segment A to B, while C_music(q, B) analyzes how well the intensity of the motion segment B matches that of the corresponding piece of music q. These evaluation functions are described in detail in Sections 5.1 and 5.2.
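The constrained synthesis can be sketched as a shortest-path search over a layered graph with one node per (bar, segment) pair, where user-fixed bars admit only their chosen segment; this is a minimal illustration in which `cost` stands in for Eq. (5) and the data structures are our assumptions, not the paper's implementation (the first bar's own music-match cost is folded into the transitions here for brevity).

```python
import heapq

def synthesize(bars, candidates, fixed, cost):
    """Pick one segment per bar minimizing the summed transition cost.

    bars: number of music bars; candidates: list of segment ids;
    fixed: {bar_index: segment_id} chosen by the user;
    cost(prev_seg, seg, bar) -> float, playing the role of C(A, B) in Eq. (5).
    """
    def options(bar):
        return [fixed[bar]] if bar in fixed else candidates

    # Dijkstra over the layered graph of (bar, segment) nodes
    heap = [(0.0, 0, s, (s,)) for s in options(0)]
    best = {}
    while heap:
        c, bar, seg, path = heapq.heappop(heap)
        if best.get((bar, seg), float("inf")) <= c:
            continue                      # already settled at lower cost
        best[(bar, seg)] = c
        if bar == bars - 1:
            return list(path)             # first goal popped is optimal
        for nxt in options(bar + 1):
            heapq.heappush(heap, (c + cost(seg, nxt, bar + 1), bar + 1, nxt, path + (nxt,)))
    return []
```

Because each bar advances exactly one layer, the graph is acyclic, which is what prevents the repeated-loop artifact discussed in Section 2.2.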

5.1 Evaluating the Climax Level of Music

In the evaluation function C_music(q, B), to analyze how well the intensity of the motion segment B matches that of the corresponding piece of music q, we use the climax level analysis of music proposed by Sato et al. [26]. The authors calculate the temporal climax transition of a piece of music A_s(t) using music features such as RMS energy, ∆RMS energy, roughness, and spectral flux. They select music features from the MIR toolbox Ver. 1.3 [27].

We use the temporal climax transition, normalized to values ranging from 0 to 1:

A′_s(t) = A_s(t) / max A_s(t)    (6)

Then, we calculate the means of A′_s(t) in each of the music bars, and C_music(q, B) is given by

C_music(q, B) = |A′_s(t, q) − A′_s(t, q_B)|.    (7)

A′_s(t, q) represents the mean of A′_s(t) in the music bar q. A′_s(t, q_B) represents the mean of A′_s(t) in the original music bar q_B corresponding to the motion segment B.
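Equations (6) and (7) amount to a normalize-then-compare of per-bar means; a small sketch, with `bar_frames` (the frame range of each bar) treated as an assumed input rather than something the paper specifies.

```python
def music_cost(climax, bar_frames, q, q_b):
    """C_music: normalize the climax curve to [0, 1] (Eq. 6), average it per
    bar, and compare the target bar q with the segment's original bar q_b (Eq. 7)."""
    peak = max(climax)
    normalized = [a / peak for a in climax]          # Eq. (6)

    def bar_mean(bar):
        lo, hi = bar_frames[bar]
        vals = normalized[lo:hi]
        return sum(vals) / len(vals)

    return abs(bar_mean(q) - bar_mean(q_b))          # Eq. (7)
```

A segment taken from a quiet part of its source song therefore costs more when placed at a climax of the target song, and vice versa.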

5.2 Connectivity Analysis of MotionSegments

In the evaluation function C_connect, we utilize the connectivity analysis of motion segments [1]. We consider both the posture similarity S_pose and the movement similarity S_move. The posture similarity S_pose between the i_A-th frame of motion segment A and the j_B-th frame of motion segment B is defined as the angular similarity of the link direction vectors v:

S_pose(i_A, j_B) = Σ_l γ_l · (h(v_l(i_A)) · h(v_l(j_B)))    (8)


Figure 3. Example of the choreographic authoring system screen

where γ_l is the regularization factor for the l-th link. Through h, an input vector a is converted to the unit vector a/|a|. The movement similarity S_move is calculated as follows:

S_move(i_A, j_B) = Σ_l γ_l · g[h(v_l(j_B) − v_l(i_A)) · h(v_l(i_A))] · g[h(v_l(j_B) − v_l(i_A)) · h(v_l(j_B))]    (9)

where g[x] = x if x > 0, and g[x] = 0 otherwise. Here, v is calculated from the candidate motion segment. Thus, the connectivity from motion segment A to B is given by C_connect(A, B) = 1/(S_pose + S_move)².
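Eqs. (8) and (9) combine into C_connect as follows; a sketch over plain 3-vectors, assuming one direction vector per link at the two boundary frames (the per-link weights γ_l and the velocity handling are simplified relative to the paper).

```python
import math

def _unit(v):
    """h(a) = a/|a|; the zero vector maps to itself."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def connect_cost(links_a, links_b, gamma):
    """C_connect(A, B) = 1 / (S_pose + S_move)^2 from Eqs. (8) and (9).

    links_a / links_b: per-link direction vectors v_l at frames i_A and j_B.
    """
    # Eq. (8): weighted angular similarity of link directions
    s_pose = sum(g * _dot(_unit(va), _unit(vb))
                 for g, va, vb in zip(gamma, links_a, links_b))

    def relu(x):                      # g[x] in Eq. (9)
        return x if x > 0 else 0.0

    # Eq. (9): the change direction must agree with both boundary directions
    s_move = 0.0
    for g, va, vb in zip(gamma, links_a, links_b):
        delta = _unit([y - x for x, y in zip(va, vb)])
        s_move += g * relu(_dot(delta, _unit(va))) * relu(_dot(delta, _unit(vb)))

    denom = (s_pose + s_move) ** 2
    return 1.0 / denom if denom > 0 else float("inf")
```

Identical boundary postures give the minimum cost; the guard on the denominator handles the degenerate case of fully opposing postures.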

5.3 Transition Motion Generation

The resulting motion sequence is obtained by connecting the motion segment sequence using cubic interpolation for the posture of a character. For the position of the character, we pay attention to the position relative to the ground, to avoid effects such as sliding or floating in the air.
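The paper does not specify which cubic is used; one common choice for transition blending is the cubic ease s(t) = 3t² − 2t³, which has zero slope at both ends so joint values and velocities join smoothly — the sketch below assumes that choice.

```python
def cubic_blend(pose_a, pose_b, t):
    """Blend two poses (flat lists of joint values) at parameter t in [0, 1]
    with the cubic ease s(t) = 3t^2 - 2t^3 (zero slope at t = 0 and t = 1)."""
    s = 3 * t * t - 2 * t ** 3
    return [(1 - s) * a + s * b for a, b in zip(pose_a, pose_b)]
```

Sampling t over the transition window sweeps the character from the last pose of one segment to the first pose of the next without a velocity discontinuity.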

6 Interface

A screenshot of the implemented user interface of our system is shown in Fig. 3. The interface includes basic viewing functions, such as a window showing the dance sequence; functions to load the input music and to save, play, and stop/pause the generated video; and the input

Figure 4. (a) Example of interactive sequence selection. Six different dance candidate sequences are previewed. The upper-right and lower-right candidates are selected by the user. (b) By clicking the search button, the dance sequence candidates are updated using relevance feedback.

music timeline. A musical bar is used as theunit to which the user assigns a preferred mo-tion segment.

The interface also provides the followingfunctions for constructing a character dance an-imation reflecting the user’s preferences.

Searching for a user-preferred dance motion segment: Fig. 4 shows how a user searches for a preferred motion segment in the motion database. Upon clicking a musical bar to which the user wishes to assign a motion segment, the system visually presents the user with candidate sequences selected on the basis of the sampling method described in Section 4. The number of candidate sequences can be changed from four to eight at an interval of two using the "+" and "−" buttons (the default number is six). Users can update candidate sequences to reflect their preferences by clicking the search button after selecting one or more motion segments they like. When the users find a preferred motion segment, they can assign it to the musical

bar by clicking the set button.

Generating a dance sequence matched to input music: When the generate button is clicked, the system automatically synthesizes a continuous choreography under the constraint condition that the motion segments selected by the user are fixed. The user can view the generated dance sequence in the upper-right section of the interface.

Figure 5. Diversity of the candidates

7 Experimental Results

7.1 Diversity of Candidates

We defined the diversity of candidates as the mean of the dissimilarities between each pair of the six sequence candidates, (1/15) Σ_{i=1..5} Σ_{j=i+1..6} D(m_i, m_j). Fig. 5 portrays the relationship between the diversity of candidates and the number of feedback instances. As expected, the variety of candidates gradually converges as the number of feedback iterations is increased. However, the diversity of candidates begins to increase after the number of feedback iterations reaches 3. This is because the motion segments previously selected as sequence candidates are excluded, that is to say, m ∈ U\S_n in Eq. (1). If m ∈ U\S_n were replaced with m ∈ U, the user could not update the dance sequence candidates displayed on the screen by clicking the search button once the combination with the minimum mean of pairwise dissimilarities among the sequence candidates had been displayed.
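The diversity measure can be computed directly from the pairwise dissimilarities; a small sketch (for six candidates the denominator is the 15 pairs in the definition above).

```python
def candidate_diversity(candidates, dissim):
    """Mean pairwise dissimilarity over the displayed candidates:
    (1/15) * sum_{i<j} D(m_i, m_j) when six candidates are shown."""
    n = len(candidates)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(dissim(candidates[i], candidates[j]) for i, j in pairs) / len(pairs)
```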

7.2 Resulting Dance Animation

To check the effectiveness of our dance synthesis method in the system, we conducted a user study. Twenty-three computer science students

Figure 6. Results of (Q1) and (Q2) for each dance synthesis method.

participated in this study. The participants were shown six dance animations in total.

As input, we used three pieces of music (one piece from "Perfume global site sound" and two pieces from "2U Night Drive"). For each piece of music, two types of generated dance were added. These dance animations were generated by using motion segments from the database, with the constraint condition that the motions specified by the user are fixed. In the baseline other than our method, motion segments were randomly selected from the database. The synthesized motion sequence was obtained by connecting the motion segment sequence using cubic interpolation, as with our method. To evaluate satisfaction and utility, after watching each of the video clips, the participants were asked the following questions.

(Q1) Did you feel satisfied with the movements of the character's body? (7-point Likert scale, 7: most satisfied; 1: least satisfied)

(Q2) Did you feel that the dance matched the music? (7-point Likert scale, 7: most satisfied; 1: least satisfied)

(Q3) When did you feel satisfied or dissatisfiedwhen watching the video? (open ended)

Fig. 6 shows the results of the questionnaire. The results statistically proved that our method outperforms random selection in terms of the quality of the resulting character body movements (p-value of 2.80 × 10⁻⁹ using the Wilcoxon signed-rank test) and how well a dance matches the input music (p-value of 3.17 × 10⁻⁷ using the Wilcoxon signed-rank test). In addition, the results showed that our dance synthesis method is effective in the system (the average scores of Q1 and Q2 were 5.67 and 5.39, respectively).
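The Wilcoxon signed-rank statistic used above can be computed from the rank sums of the paired score differences; a self-contained sketch returning the test statistic W (the p-value lookup, e.g. via scipy.stats.wilcoxon, is omitted here).

```python
def wilcoxon_statistic(x, y):
    """Wilcoxon signed-rank test statistic W = min(W+, W-) for paired samples.

    Zero differences are dropped and tied |differences| receive averaged
    ranks, as in the standard formulation of the test.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j + 1) / 2.0      # positions i+1 .. j share this rank
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)
```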

Figure 7. An example result of generated dance motion. The motions that a user selects are within orange frames. The system automatically synthesizes a continuous choreography with the constraint condition that the motions selected by the user are fixed. The duration of the resulting dance is 32 beats (approximately 14.77 sec) and the screenshots are taken at two-beat (approximately 0.92 sec) intervals.

On the other hand, a participant who has dancing experience provided a negative comment about the dance synthesized by our method, as follows.

"I felt slightly dissatisfied with the quality of the dance owing to the lack of choreographic coherence on the same music structure."

To improve the quality of a synthesized dance, it is effective to consider how well the choreographic structure matches the music structure. The dance animations generated by our system are shown in Fig. 7.

7.3 User Feedback about the System

In this section, we discuss the early reactions of our targeted users, who are interested in watching dance but have no knowledge of choreography or experience in making 3D animation. There were about 20 or more such users. We asked them to use our user interface, or to observe the authoring of choreography, after sufficiently describing the purpose of our system and how to use our interface.

Most of them felt that the computing time for generating a dance sequence is short enough to operate without causing stress to the user. On the other hand, some of them suggested improvements to our interface. They felt it inconvenient that the timeline does not visualize the music structure and that a history of past searches cannot be browsed. Visualizing the automatically estimated structure of the input music and implementing a search history function are our future work. In addition, another user suggested new functions to import arbitrary character model data, stages, and backgrounds, since he felt that the suitable choreography varies by character and scene.

Many of them agreed with our motivation to develop this system. Moreover, they were interested in the system and told us they want to use it. Some of them felt that it is wonderful to enable people who cannot dance to create choreography, since previously only someone with enough skill to dance could create choreography.

8 Conclusion and Future Work

In this paper, we have proposed a novel system for choreographic authoring of character dance animation. To implement the system, we have provided a dance search system using relevance feedback, and automatic synthesis of a dance sequence considering the connectivity of the dance motion segments and how well a candidate motion segment matches a corresponding piece of music. With this system, we can create new dance performances that reflect the user's preference in the virtual space.

However, since the variety of the choreography of the synthesized dance depends on the dance motion database, it is important to expand the database. In addition, to improve the quality of a resulting dance, it is effective to consider how well the choreographic structure matches the music structure. Furthermore, as future work, it is important to improve our interface, for example by visualizing the automatically estimated music structure [28] and implementing the search history function.

This paper should be regarded as a new initiative addressing a challenging problem in the animation community. It can enhance the quality of dance creation for people in need and facilitate creative activities.

Acknowledgements

Hatsune Miku is copyrighted by Crypton Future Media, Inc., and we used it in our research under the PIAPRO license (www.piapro.net). The 3D model of Hatsune Miku used in our research was created by Lat. This work was supported in part by ACCEL, JST.

References

[1] Takaaki Shiratori, Atsushi Nakazawa, and Katsushi Ikeuchi. Dancing-to-music character animation. In Computer Graphics Forum, volume 25, pages 449–458. Wiley Online Library, 2006.

[2] Satoru Fukayama and Masataka Goto. Automated choreography synthesis using a Gaussian process leveraging consumer-generated dance motions. In Proc. of Advances in Computer Entertainment Technology, page 23. ACM, 2014.

[3] Tomohiko Mukai and Shigeru Kuriyama. Pose-timeline for propagating motion edits. In Proc. of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 113–122. ACM, 2009.

[4] Myung Geol Choi, Kyungyong Yang, Takeo Igarashi, Jun Mitani, and Jehee Lee. Retrieval and visualization of human motion data via stick figures. In Computer Graphics Forum, volume 31, pages 2057–2065. Wiley Online Library, 2012.

[5] Jun Xiao, Zhangpeng Tang, Yinfu Feng, and Zhidong Xiao. Sketch-based human motion retrieval via selected 2D geometric posture descriptor. Signal Processing, 113:1–8, 2015.

[6] Michalis Raptis, Darko Kirovski, and Hugues Hoppe. Real-time classification of dance gestures from skeleton animation. In Proc. of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 147–156. ACM, 2011.

[7] Pengjie Wang, Rynson W. H. Lau, Zhigeng Pan, Jiang Wang, and Haiyu Song. An eigen-based motion retrieval method for real-time animation. Computers & Graphics, 38:255–267, 2014.

[8] Joseph John Rocchio. Relevance feedback in information retrieval. 1971.

[9] Zhicheng Dou, Sha Hu, Kun Chen, Ruihua Song, and Ji-Rong Wen. Multi-dimensional search result diversification. In Proc. of the ACM International Conference on Web Search and Data Mining, pages 475–484. ACM, 2011.

[10] Atsuo Yoshitaka and Tadao Ichikawa. A survey on content-based retrieval for multimedia databases. IEEE Trans. on Knowledge and Data Engineering, 11(1):81–93, 1999.

[11] Meinard Müller, Tido Röder, and Michael Clausen. Efficient content-based retrieval of motion capture data. In ACM Trans. on Graphics, volume 24, pages 677–685. ACM, 2005.

[12] Naoki Numaguchi, Atsushi Nakazawa, Takaaki Shiratori, and Jessica K. Hodgins. A puppet interface for retrieval of motion capture data. In Proc. of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 157–166. ACM, 2011.

[13] Min-Wen Chao, Chao-Hung Lin, Jackie Assa, and Tong-Yee Lee. Human motion retrieval from hand-drawn sketch. IEEE Trans. on Visualization and Computer Graphics, 18(5):729–740, 2012.

[14] Ryen W. White and Resa A. Roth. Exploratory Search: Beyond the Query-Response Paradigm (Synthesis Lectures on Information Concepts, Retrieval & Services). Morgan and Claypool Publishers, 3, 2009.

[15] Jürgen Bernard, Nils Wilhelm, Björn Krüger, Thorsten May, Tobias Schreck, and Jörn Kohlhammer. MotionExplorer: Exploratory search in human motion capture data based on hierarchical aggregation. IEEE Trans. on Visualization and Computer Graphics, 19(12):2257–2266, 2013.

[16] Songle Chen, Zhengxing Sun, Yan Zhang, and Qian Li. Relevance feedback for human motion retrieval using a boosting approach. Multimedia Tools and Applications, 75(2):787–817, 2016.

[17] Costas Panagiotakis, André Holzapfel, Damien Michel, and Antonis A. Argyros. Beat synchronous dance animation based on visual analysis of human motion and audio analysis of music tempo. In International Symposium on Visual Computing, pages 118–127. Springer, 2013.

[18] Jae Woo Kim, Hesham Fouad, John L. Sibert, and James K. Hahn. Perceptually motivated automatic dance motion generation for music. Computer Animation and Virtual Worlds, 20(2-3):375–384, 2009.

[19] Ferda Ofli, Engin Erzin, Yücel Yemez, and A. Murat Tekalp. Learn2Dance: Learning statistical music-to-dance mappings for choreography synthesis. IEEE Trans. on Multimedia, 14(3):747–759, 2012.

[20] Sageev Oore and Yasushi Akiyama. Learning to synthesize arm motion to music by example. 2006.

[21] Rukun Fan, Songhua Xu, and Weidong Geng. Example-based automatic music-driven conventional dance motion synthesis. IEEE Trans. on Visualization and Computer Graphics, 18(3):501–515, 2012.

[22] Minho Lee, Kyogu Lee, and Jaeheung Park. Music similarity-based approach to generating dance motion sequence. Multimedia Tools and Applications, 62(3):895–912, 2013.

[23] Jianfeng Xu, Koichi Takagi, and Shigeyuki Sakazawa. Motion synthesis for synchronizing with streaming music by segment-based search on metadata motion graphs. In Multimedia and Expo, IEEE International Conference on, pages 1–6. IEEE, 2011.

[24] Lucas Kovar, Michael Gleicher, and Frédéric Pighin. Motion graphs. In ACM Trans. on Graphics, volume 21, pages 473–482. ACM, 2002.

[25] Narumi Okada, Naoya Iwamoto, Tsukasa Fukusato, and Shigeo Morishima. Dance motion segmentation method based on choreographic primitives. In GRAPP, pages 332–339, 2015.

[26] Haruki Sato, Tatsunori Hirai, Tomoyasu Nakano, Masataka Goto, and Shigeo Morishima. A soundtrack generation system to synchronize the climax of a video clip with music. In Multimedia and Expo, IEEE International Conference on, pages 1–6. IEEE, 2016.

[27] Olivier Lartillot and Petri Toiviainen. A MATLAB toolbox for musical feature extraction from audio. In International Conference on Digital Audio Effects, pages 237–244, 2007.

[28] Masataka Goto. A chorus section detection method for musical audio signals and its application to a music listening station. IEEE Trans. on Audio, Speech, and Language Processing, 14(5):1783–1794, 2006.