A Demonstration of Continuous Interaction with Elckerlyc


A Demonstration of Continuous Interaction with Elckerlyc

Herwin van Welbergen, Dennis Reidsma, Job Zwiers

Temporal coordination: from turn-based interaction to continuous interaction

Current interaction with virtual humans
• Using speech, facial expression, and gestures
• Turn-based, 'push-to-talk' interaction paradigm

Our goals

• Allow continuous interaction
• All partners express themselves continuously, in parallel
• Needs prediction and anticipation

Continuous interaction

Temporal coordination in human behavior
– Timing of backchannels
– Allowing room for (intrusive) backchannels from the partner
– Overlap during non-intrusive backchannels
– …

In this talk
– Focus on specifying continuous interaction
– Using minor timing and parameter-value adjustments in ongoing behavior

Elckerlyc

Temporal coordination in interaction with virtual humans
• Elckerlyc: an open-source platform for behavior generation, designed for such temporally coordinated behavior (… and other things)
• Executes behavior specified in the Behavior Markup Language (BML)
• Implements the SAIBA framework

Why BML/SAIBA?

> 10 years of research on ECAs
• Increasingly sophisticated models
– Directed at different aspects of behavior
• Building a state-of-the-art ECA entails re-implementing all these models
– BML/SAIBA aims to allow reuse of such models

SAIBA framework

[Diagram: Intent Planning → (FML) → Behavior Planning → (BML) → Behavior Realization (SmartBody, Greta, ACE, Elckerlyc), with feedback flowing back between the stages.]

• Three-level setup of many existing ECAs
• Separation between stages
• Each stage is a black box
• Clear-cut definition of interfaces between stages

BML Example

BML request:

<bml id="bml1">
  <gaze type="AT" id="gaze1" target="AUDIENCE"/>
  <speech id="speech1">
    <text>Welcome ladies and gentlemen!</text>
  </speech>
</bml>

Behaviors: gaze1, speech1

BML Example

BML behaviors: gesture, head, gaze, speech, locomotion, posture, facial expression

[Diagram: behaviors mapped onto body parts (eyes, torso, legs) and coordinated over time.]

Phases and sync-points

Each BML behavior passes through phases delimited by standard sync points: start, ready, stroke_start, stroke, stroke_end, relax, and end.

Synchronizing behaviors

<bml>
  <gaze id="gaze1" target="AUDIENCE"/>
  <speech id="speech1" start="gaze1:ready">
    <text>Welcome ladies and gentlemen!</text>
  </speech>
</bml>
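Sync references may also carry a time offset in seconds, as several later examples in this deck do; a minimal sketch (the 0.5-second offset is illustrative):

<bml>
  <gaze id="gaze1" target="AUDIENCE"/>
  <!-- speech starts half a second after the gaze reaches its target -->
  <speech id="speech1" start="gaze1:ready+0.5">
    <text>Welcome ladies and gentlemen!</text>
  </speech>
</bml>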

Example: gaze shift to moving target

<bml id="bml1"><speech id="bluespeech" start="gaze1:ready">

<text>I'm staring at the blue cube.</text></speech>

<speech id="greenspeech" start="gaze2:start"><text>Look! A green sphere.</text>

</speech>

<speech id="bluespeech2" start="gaze2:end"><text>Looking at the blue cube again.</text>

</speech>

<gaze id="gaze1" start="1" ready="2" type="AT" modality="NECK" target="bluebox"/>

<gaze id="gaze2" start="4" ready="5" relax="6" end="7" type="AT" modality="NECK" dynamic="true" target="greensphere"/>

</bml>

Continuous interaction using SAIBA

[Diagram: sensors observe the user's movement and speech; sensor interpretation feeds the Intent Planner and Behavior Planner (connected via FML and BML); Anticipators, driven by sensors, supply predictions to the Realizer, which animates the virtual human.]

Continuous Interaction Scenarios

Virtual Trainer

[Diagram: sensors track the exercising user's movement and speech; interpretation ('exercise is too easy') informs the Intent Planner ('increase difficulty') and the Behavior Planner ('increase tempo'), which sends an exercise description to the Behavior Realizer; a TempoAnticipator supplies the predicted tempo, and the realizer drives the Virtual Trainer.]

Continuous Interaction Scenarios

Virtual Trainer

<bml id="bml1">
  <bmlt:procanimation id="exercise1" name="squat"/>
  <constraint id="c1">
    <synchronize ref="exercise1:beat1">
      <sync ref="excerciseAnticipator:beat1-0.5"/>
    </synchronize>
    <synchronize ref="exercise1:beat2">
      <sync ref="excerciseAnticipator:beat2-0.5"/>
    </synchronize>
    ...
  </constraint>
</bml>

Demo (Herwin)
[Presenter note: the space bar sets the tempo; the exercise aligns to it.]

Continuous Interaction Scenarios

[Diagram: sensor interpretation detects that the user is about to finish speaking; the Intent Planner decides to take the turn and perform a communicative act (politeness, arousal, friendliness); the Behavior Planner ('wait for turn, then speak') sends the Behavior Realizer speech whose start sync is linked to a SpeechStopAnticipator, which predicts when the user's speech will stop.]

<bml id="bml1"><speech id="speech1" start="speechStopAnticipator:stop+x">

<text>Bla bla</text></speech>

</bml>

Taking the turn (as soon as possible)

Demo (Herwin)

• Uses pre-planning
• Easily allows incremental updates of the speech-end prediction

Continuous Interaction Scenarios

[Diagram: sensor interpretation detects that the user wants the turn; the Intent Planner decides to keep the turn (modulated by politeness, arousal, friendliness); the Behavior Planner ('keep turn') instructs the Behavior Realizer to increase the volume of speech1.]

Keeping the turn

Demo
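A minimal sketch of what the volume increase could look like in BML, reusing the bmlt:setparameter extension and tight-merge scheduling discussed later in this deck (ids, timing, and values are illustrative):

<bml id="bml2" scheduling="tight-merge">
  <!-- ramp up the volume of the ongoing speech1 behavior -->
  <bmlt:setparameter id="keepturn1" start="0" end="speech1:end"
      target="speech1" parameter="volume" curve="linear"
      startvalue="50" endvalue="100"/>
</bml>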

Conclusion

• Elckerlyc provides mechanisms for the specification and execution of behaviors synchronized to predicted human behavior
• Flexible pre-planning
• Incremental updates of user-behavior predictions

Open issues

Specifying parameter changes

<bmlt:setparameter id="pchange1" start="10" end="speech1:end"
    target="speech1" parameter="volume" curve="linear"
    startvalue="25" endvalue="100"/>

• Send a parameter-specification message
– Allows easy experimentation with parameter changes
– Synchronization to behaviors
– Parameter value curve (e.g., linear, ease-in/ease-out, …)
– Using BML?
• Conceptually doesn't match well with other behaviors
• Should not modify synchronization constraints
• Requires specialized mechanisms to refer to behaviors in previous blocks

Open Issues

Creating anticipators
• Currently we use placeholders
• Real-time music tempo tracking and prediction (Pieter Bos et al.)
• Predicting moments where listener backchannels are relevant (Louis-Philippe Morency, Iwan de Kok, ...)
• Fitness exercise tempo tracking and prediction (Eike Dehling)
• Real-time turn-giving prediction (Jonsdottir et al. 2008)

Enterface 2010

Backchannel-aware guide
• Predicts and anticipates completions, acknowledgements, and continuers of a user
• Response-handling strategies
– Modulated by politeness
– Ignore (and speak louder)
– Acknowledge the response
– Wait for the acknowledgement before continuing
– Change the route-giving plan
– Etc.
• Requires
– Adaptation of the timing of ongoing behavior
– Adaptation of the parameterization of ongoing behavior
– Graceful interruption

Thank you for your attention

Acknowledgments: Stefan Kopp's group, Bart van Straalen, Ronald Paul, Mark ter Maat, Zsofia Ruttkay, the Greta developers, the SmartBody developers

More information/demo
• http://hmi.ewi.utwente.nl/showcase/Elckerlyc
• Contact Dennis, Herwin: {dennisr,welberge}@ewi.utwente.nl


The End.

Enterface 2010

Route recap: dealing with acknowledgements
A1: So, we went to the square with the obelisk…
A2: We crossed the river using the suspension bridge…
A3: Crossed the square with the arch monument…
A4: And took the first street to the right.
• Do not deal with it: simply elicit the feedback, pause a bit, then continue speaking, whatever the subject does
• Deal with it: elicit feedback, pause a bit; if feedback is there, wait till it is finished before acknowledging (smile, nod?) and continuing
• Deal with it, the 'continuous interaction way': elicit feedback, wait, but if feedback comes and it is non-intrusive, immediately acknowledge the feedback (smile, nod?) and continue speaking even while the user is still finishing the feedback
• Evaluate responsiveness, attentiveness, politeness, etc.

Enterface 2010

Eliciting a completion
A: Do you know the way to the square with the… euhm… [makes thinking gestures]… euhm… that thing [makes iconic gesture for the arch monument]… euhm… [looks at the listener, inviting a response]… euhm… how do you call it?... etc.
[Correct completion:] U: Arch monument? A: Yes, the arch monument.
[Incorrect completion:] U: Obelisk? A: No, I mean… euhm... the square with the arch monument.

Some boring technical facts

• Written in Java
• ≈80,000 lines of code (Greta: 45,000; SmartBody: 35,000)
– Including ≈30,000 lines of code for Collada loading and custom graphics
– ≈200 (J)Unit test cases, ≈1100 tests
• Takes advantage of multiple processors/cores
– Physical simulation (OdeJava)
– Rendering (JOGL)
– Text-to-speech (MS Speech API, Mary TTS, text-based)

Licensing

• Pre-release under the GPL v3 license
– No source repository yet; contact Herwin for the code
– 'Official' versioned release soon
• Inverse dynamics code is available under the MIT license
– http://www.herwinvanwelbergen.nl/index.php?selected=phd

Elckerlyc design concerns

Continuous interaction
• Allow fast and flexible planning and re-planning
• Allow specification of synchronization to predictions

Flexibility
• Easily exchange internal modules, add new modalities, …

Motion generation
• Not so much a focus on designing (procedural) animation
• But on combining motion from all kinds of paradigms
– In sequence and in parallel
– Physical simulation, mocap/keyframe animation, procedural motion, biomechanical models, …
– New: mixed dynamics

Elckerlyc Architecture

Highlights
• Scheduling and playing stages
• The scheduler can easily be exchanged
• The scheduler resolves the timing of a sync point per constraint
• Sync points can be slightly adjusted

[Architecture diagram: the SAIBA Behavior Planner sends a BML stream to the Parser, which extracts behaviors and (time) constraints for the Scheduler. In the scheduling stage, the Scheduler hands each behavior, with its syncs to resolve, to the modality planners (Speech Planner, Animation Planner, … per engine), which resolve the syncs via behavior bindings and add behaviors to their plans (speech plan, animation plan, …); all timing is anchored to time pegs on a shared peg board. Anticipators, driven by sensors, set time pegs as predictions change. In the playing stage, the players (Animation Player, Speech Player, …) execute the plans, producing joint rotations and speech, and feedback is returned to the SAIBA Behavior Planner and SAIBA Intent Planner.]

Animation plan

• Result of planned BML
• Specification of the motion to be executed by the player
• Contains timed motion units
• Timing linked to time pegs

Organization of motion

Motion units
• (≈ LMPs, gestures, controllers, …)
• Parameters, linked to BML parameters
• Can be executed; typically rotate joints
– At canonical time (0..1)
• Contain phases; key times can be aligned to time pegs
– Key times are assigned canonical time values
• Physical, procedural, mocap/keyframe, specialized

Procedural motion units

• A function of time (0..1) and a parameter vector
• Yielding end-effector positions and/or joint rotations
– Custom functions for Perlin noise, splines, …
– Example:

<EndEffector local="false" target="r_wrist" translation="0;(1-t)*starty+t*endy;0.3"/>

• Key frames
• Mocap
• Can be imported from Greta gestures

Physical motion units

Physical controllers
• Input: desired state
• Minimize the discrepancy between the current and desired state
• Output: joint torques that guide the VH closer to this state
• Can cope with external perturbations

[Video from Abe et al. 2007]

Physical simulation for gesture

Advantages
• Models force transference between body parts
• Models momentum effects
– Overshoot
– Models pendulum-like motion easily
• Realistic balance
• Desired-state specification fits some motions well
– Loosely hanging arm (desired state: slightly damped arm motion)
– Balance (desired state: projected CoM in the support polygon)
– Spine (desired state: spine pose)

Physical simulation for gesture

Disadvantages
• Precise timing and limb placement is an open issue
– It is unknown if and when a controller achieves a desired limb position
– These aspects are crucial for (speech-accompanying) gesture

Mixed dynamics

• Use kinematic motion on body parts that require precise timing/placement
• Use physical simulation on the remaining parts of the body
• Calculate the forces the kinematic parts apply to the physical parts
– Using inverse dynamics
• Allow switching between different kinematic/physical representations
– Depending on the needs of the moment

Comparison with mocap

Executing a Greta gesture

Executing a SmartBody gesture

Physical spine (experimental)

Switching

Transition motion units

• Define a transition animation from physics to kinematics
• Or from kinematics to kinematics
• Flexible start state
• Predicted end state
• Only define the type, the joints, and the start and end times

<bmlt:transition id="trans1" class="HermiteSplinePhysicalTransition" start="armhang:end" end="7">
  <bmlt:parameter name="joints" value="l_shoulder,l_elbow,l_wrist"/>
</bmlt:transition>

Prediction

Animation predictor
• Can predict a joint configuration at a certain time
• Uses a copy of the animation plan with predictable timed motion units
• Predictable timed motion units
– Deterministically define the pose at each time
• Procedural motion units
– Determinism during certain motion phases
• Stroke of a gesture
• Gaze at a static target

Custom kinematic motion units

Adhere to the motion unit interface
• Move joints on the basis of time (0..1) and parameter values (which can change during the motion)
• Provide canonical key positions for synchronization
• Motion should be smooth (C0 continuous)
• Can make use of predicted motion
• Can use a flexible start position
• Currently:
– Gaze, pointing
• Planned: MURML motion units

Animation player

• Executes an animation plan
• Timing is linked to time pegs
• Time warping: converts 'real' time to canonical time
• Combines physical simulation and kinematic motion automatically

Scheduling

• Determine synchronization constraints between behaviors
• Distribute behaviors over planners
• BML => plans
• Assign preliminary times to pegs

Scheduling

• Conflicting constraints
• Stretching/skewing/skipping if necessary

Scheduling in Elckerlyc

Using the SmartBody scheduling algorithm
• Assumes that the behaviors in the XML are ordered by time importance
• The timing of the 1st behavior is never adapted
• The timing of the 2nd behavior is adapted to fit the time constraints imposed by the 1st behavior
• The timing of the 3rd behavior is adapted to fit the time constraints imposed by the 1st and 2nd behaviors
• Etc.
• So, 'later' behaviors never change the timing of earlier behaviors
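A sketch of what this ordering implies, reusing the head-nod behavior from the gesture-binding example later in this deck (ids are illustrative): speech1 is listed first, so its timing stays fixed, and nod1 is stretched or skewed so that its stroke meets the sync point inside the speech:

<bml id="bml1">
  <speech id="speech1">
    <text>Welcome <sync id="tm1"/> everyone!</text>
  </speech>
  <!-- the nod adapts its timing to the already-scheduled speech -->
  <head id="nod1" action="ROTATION" rotation="NOD" stroke="speech1:tm1"/>
</bml>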

Scheduling

• Modularity: separate parser and uni-modal planners
• Each planner
– Can provide timing information on BML behaviors given time constraints
• Desired duration, requested timing
– Can construct (motion) units from BML behaviors
– Can validate the current plan
• Allows us to easily exchange the scheduler
– Use ACE's chunk-based scheduler?

[Diagram: the scheduler sends a planner the time constraints and requested motion timing for a behavior; the planner answers with a motion unit and its preferred timing, yielding a timed motion unit.]

Animation Planner

• Constructs the animation plan
• BML => timed motion units
• Resolves execution times for the unknown time pegs of a timed motion unit, given constraints
• Can validate the current animation plan

From BML to timed motion units

Gesture binding
• Maps behaviors to motion units
• Handles parameter assignment in motion units
– Maps BML parameters
– Or provides defaults
• Allows easy modification of BML realization
• Allows easy addition of new BML behaviors
• Mapping is currently 1-to-1

Gesture binding:

<gesturebinding>
  <MotionUnitSpec type="head">
    <constraints>
      <constraint name="action" value="ROTATION"/>
      <constraint name="rotation" value="NOD"/>
    </constraints>
    <parametermap>
      <parameter src="amount" dst="a"/>
      <parameter src="repeats" dst="r"/>
    </parametermap>
    <parameterdefaults>
      <parameterdefault name="a" value="0.5"/>
      <parameterdefault name="r" value="1"/>
    </parameterdefaults>
    <MotionUnit type="ProcAnimation" file="nod.xml"/>
  </MotionUnitSpec>
  <MotionUnitSpec type="head">
    <constraints>
      <constraint name="action" value="ROTATION"/>
      <constraint name="rotation" value="SHAKE"/>
      ...
</gesturebinding>

BML specification of a head nod:

<bml id="bml1">
  <head id="nod1" repeats="3" action="ROTATION" rotation="NOD"/>
</bml>

Procedural motion unit defined in nod.xml:

<ProcAnimation prefDuration="1.0">
  <Rotation target="vc4" rotation="a*0.5*sin(alpha*2*pi*r);0;0"/>
  <Rotation target="skullbase" rotation="a*0.5*sin(alpha*2*pi*r);0;0"/>
  <Parameter id="a"/>
  <Parameter id="r"/>
</ProcAnimation>

Replanning

• Moving time pegs
– Keys of timed motion units are linked to time pegs => retiming is handled automatically
– The planner can be queried for the validity of the motion plan after moving time pegs
• Access to motion units and speech units, with the ability to remove or adapt them

Anticipation

Specify synchronization to predicted times:

<bml id="bml1" xmlns:bmlt="http://hmi.ewi.utwente.nl/bmlt">
  <bmlt:procanimation id="conduct1" name="3-beat"/>
  ...
  <bmlt:procanimation id="conduct8" name="3-beat"/>
  <bmlt:controller id="balance1" class="BalanceController"/>
  <constraint id="c1">
    <synchronize ref="conduct1:start"> <sync ref="metronome1:tick1"/> </synchronize>
    <synchronize ref="conduct1:beat2"> <sync ref="metronome1:tick2"/> </synchronize>
    ...
    <synchronize ref="conduct8:end"> <sync ref="metronome1:tick25"/> </synchronize>
  </constraint>
</bml>

Demo with anticipation

Other uses for Time Pegs

• Chunk-based/last-minute scheduling
– MORE???
• Communication between engines
– Communicate the time drift of TTS in the Speech Engine to the Face Engine

Further work

Implementing anticipators for turn-taking
• End of turn, interruption point, user wants turn, …

Combining timed motion units

How best to implement interruption?
• Directly hook into Elckerlyc?
• Define some BML strategies?
– Core BML can only replace, append, and insert behaviors
– No synchronization between different BML requests
– Create strategies for replace, tighter merge, change?

Further work

Specifying parameter value changes
• Using BML??

<bml id="bml2" scheduling="tight-merge">
  <bmlt:setparameter id="reparam1" start="10" end="speech1:end"
      target="speech1" parameter="volume" curve="linear"
      startvalue="25" endvalue="100"/>
</bml>

Collaboration

Exchanging motion units, other modules?

Anticipation/prediction
• Predicting listener backchannels with USC/ICT (Louis-Philippe Morency, Iwan de Kok, ...)

Rendering/output generation
• BoneBus interface for Elckerlyc?


Thank you for your attention

Acknowledgments: Job Zwiers, Ronald Paul, Mark ter Maat, Zsofia Ruttkay, the Greta developers, the SmartBody developers

More information/demo
• http://hmi.ewi.utwente.nl/showcase/Elckerlyc
• Contact Dennis, Herwin: {dennisr,welberge}@ewi.utwente.nl


The End.

LEFTOVERS

The following items typically belong in the Easter Eggs section:
• Scheduling
• How to build your own controllers
• How to build your own engine
• How to add a new avatar to the system
• How to build procedural animations
• How to create mocap animations (there is an old tutorial by Herwin on that somewhere, no?)
• …

FUTURE WORK
• OPEN: Multiple BML blocks, BML streams, …? How are we going to solve that? What were the options again?
• OPEN: more; see the end of this presentation

BML Behaviors

More behaviors…

Open Issues

Handling multiple BML Requests

• How to handle both long sequences and graceful interruption of behavior?

The granularity of BML requests is left open
• Full monologue
• Or short spurts of behavior
– Single gaze shift
– Speech clause + related gestures

Handling multiple BML Requests

Handling successive blocks
• In development
• Required for BML core compliance:
– Replace
– Append (but what about persistent behaviors?)
– Merge (default): merges the new BML block into the current BML block(s). Behaviors in this request start immediately, provided they do not conflict with ongoing behaviors (and behaviors scheduled in the future) of prior BML requests (for the given character?). If such a potential conflict is detected with required behaviors, the realizer must not perform any behaviors of the new request and must notify via an error/exception. If only non-required behaviors potentially conflict with prior behaviors, the realizer may choose not to perform these behaviors, as long as warnings notifying of the omission are sent. If this results in the omission of all behaviors, an error/exception must be sent instead of a warning. Alternatively, the schedule of these non-required behaviors may be adjusted to avoid conflicts, as long as required synchronization constraints are not violated.
– tight-merge (Elckerlyc-specific): like merge, but the new BML block may refer to behaviors in the current block(s).
– mergeandreplace (Elckerlyc-specific): like tight-merge, but replaces all behaviors from the current block(s) with those in the new block if their ids match. Optionally allows the (fluent) replacement of currently running behaviors.
– removeandreplace (Elckerlyc-specific): removes all current behaviors except those with ids matching behaviors in the new block, then replaces those behaviors with the new behaviors and adds the new behaviors. This differs from replace in that it allows replacing behaviors that are currently running and retains persistent behavior.
– TODO: what is needed for Enterface?
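A sketch of how a successive block might select one of these mechanisms, assuming they are exposed through the same scheduling attribute used for tight-merge elsewhere in this deck (the attribute value 'mergeandreplace' is an assumption; ids illustrative):

<bml id="bml3" scheduling="mergeandreplace">
  <!-- assumed: mergeandreplace selected via the scheduling attribute, as tight-merge is -->
  <!-- the id matches a behavior in the current block, so this speech replaces it -->
  <speech id="speech1"><text>Replacement utterance.</text></speech>
</bml>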

Stuff from old presentation

Animation Player

[Diagram: the AnimationPlanPlayer executes the animation plan, selecting a physical or kinematic representation per body part and activating controllers. Kinematic motion yields joint rotations, velocities, and accelerations; inverse dynamics converts these into connector torques, velocities, and accelerations applied to the physical body. The physical controllers, given the active controllers, desired states, and the current state, output joint torques; forward dynamics in the physical simulation then produces joint rotations. The combined joint rotations drive the virtual human.]

Open issues

Conflict resolution
• What if multiple behaviors want to use the right hand?
• Resource allocation in the planner
– Use the left hand if the right hand is in use
• Combination of animations
– Store in motion units how they combine with other motion units
• A nod motion should be additively blended
• A manipulative hand gesture should constrain the hand position
– In the planner => hierarchical motion plans (as in SmartBody)
– In the realizer => a final combination phase (as in EMBR)
– Use existing computer animation techniques for the combination (see Welbergen et al. 2009 for an overview)

Open issues

Easily obtaining more motion units
• Because authoring motion units is a lot of work
• With other research groups?
– Cooperation?
– Automatic conversion of Greta gestures to our procedural animation

End of stuff from old presentation

Feedback from the realizer

To the planner
• BML performance start
• BML performance stop
• Sync-point progress
• Exception
• Warning

[Diagram: Intent Planning → (FML) → Behavior Planning → (BML) → Behavior Realization, with feedback flowing back between the stages.]

Advanced synchronization

<constraint> allows synchronization of sync points
• Before another sync point
• After another sync point
• At another sync point

Advanced synchronization

<constraint id="before_example"> <before ref="speech_1:start"> <sync

ref="gaze_1:stroke"/> <sync

ref=“nod_1:stroke"/> </before>

</constraint> • Before/after not implemented in any realizer• Elckerlyc supports synchronize

<constraint id=“sync_example"> <synchronize ref="speech_1:start">

<sync ref="gaze_1:stroke"/>

<sync ref=“nod_1:stroke"/>

</synchronize>

</constraint>

[Presenter note: gaze_1 and nod_1 occur before speech_1.]

Gesture behavior

• Coordinated movement of the arms and hands

Gesture types: Point

Gesture types: beats

Gesture types: conduit

[Presenter note: a conduit gesture holds up a bounded 'container' filled with 'meaning'.]

Gesture types: lexicalized

<speech>

<bml>
  <speech id="s1">
    <text>This is a complete core level BML <sync id="tm1"/> speech description.</text>
  </speech>
  <gesture id="g1" stroke="s1:tm1" type="BEAT"/>
</bml>

Persistent behaviors

No end time
• Posture
• Gaze
• Point (?)
• 'New' behaviors overwrite old behaviors; a sketch follows below
• TODO: does this belong here? In Elckerlyc: implemented with replacement groups
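A minimal sketch of the overwrite semantics for persistent gaze, reusing the gaze syntax from the earlier examples (ids and timing are illustrative):

<bml id="bml1">
  <!-- persistent: no end time, gazes until overwritten -->
  <gaze id="gaze1" type="AT" target="AUDIENCE"/>
</bml>

<bml id="bml2">
  <!-- a later request; gaze2 overwrites gaze1 -->
  <gaze id="gaze2" type="AT" target="bluebox" start="2"/>
</bml>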

BML Design

• Describes the occurrence of behaviors
• Relative timing and synchronization of behaviors
• Form of behaviors
• Realizer-independent
• But allows extensions for realizer-dependent behavior

BML Design: Realizer independence

• Cannot rely on skeleton joints, speech synthesis systems, available animations, ...
• Refers to body parts, lexicalized locations, common verbs
– speech, face, gesture, center, left
• Allows specification of nuanced/detailed behavior through extensions

BROADER VIEW

• What are VH applications, and what are they good for?
• How does the methodological cycle of observing, modeling, and imitating human behavior fit together?
• What is continuous interaction, what is it good for, and why should VHs be able to do it too?

ELCKERLYC

• SAIBA: the global SAIBA architecture (in particular: the feedback loops with warnings and the like)
• BML: What are the main issues in behavior generation that BML addresses? What is BML and how does it work? Synchronization of behaviors, specifying all kinds of behaviors in parallel, abstraction (!), WHAT ARE BEHAVIORS (examples)?
• Design concerns for Elckerlyc
• The global architecture of Elckerlyc, with the peg board, planners, players, etc. Feedback loops…
• Feature: organization of motion
• Bridge: the Animation Player (& Animation Plan)
• Feature: physics and mixed dynamics (switching & mixed dynamics & physics; why at all?)
• Feature: persistent behaviors & replacement groups
• Feature: prediction of motion (for transition controllers and the like)
• Extension: how to build your own motion units (relevant for Bielefeld). Explanation of procedural motion units, explanation of mocap (briefly; = PMU), examples from Greta and SmartBody, the question 'and Max?'
• Feature: gesture binding and planning

ELCKERLYC IN USE , CONCLUDING REMARKS

• How is Elckerlyc used in a VH application? What is its relation to intent & behavior planning? Those fall outside Elckerlyc. Once behavior has been planned: (1) create Elckerlyc, send BML; (2) create Elckerlyc, create motion units, adapt them, and add them to the player
• What can Elckerlyc do that other realizers cannot? (Or: what are the shortcomings of other realizers that you name at the start of the presentation as 'the goal of Elckerlyc'?) Refer back to the design concerns
• How does continuous interaction actually show up in Elckerlyc? Do we already have good examples? When will we?
• Boring technical facts

Switching

<bml id="bml1">

<!-– Some conducting gestures --> <bmlt:procanimation id="conduct1" name="3-beat"/> <bmlt:procanimation id="conduct2" name="3-beat" start="conduct1:end"/> <bmlt:procanimation id="conduct3" name="3-beat" start="conduct2:end"/> <bmlt:procanimation id="conduct4" name="3-beat" start="conduct3:end"/> <bmlt:procanimation id="conduct5" name="3-beat" start="conduct4:end"/>

<!–- Physical controllers --> <bmlt:controller id="balance1" class="BalanceController"/> <bmlt:controller id="armhang" class="CompoundController" name="leftarmhang" start="3.5"

end="6.5"/>

<!–- Transtion --> <bmlt:transition id="trans1" class="HermiteSplinePhysicalTransition" start="armhang:end"

end="7"> <bmlt:parameter name="joints" value="l_shoulder,l_elbow,l_wrist"/> </bmlt:transition>

</bml>

Continuous interaction

• Observation and interpretation
• Cognition and deliberation
• Multi-modal expression

Occurring in parallel.

Expression timed tightly, from moment to moment, to the perceived movements and behavior of the other.

E.g.: conducting an orchestra; counting along with an exercising user; talking about a co-occurring physical task performed by the user (device maintenance, tutoring) or about fixed media (presenting information about multimedia content in IR or QA).

Feedback loop.
