
Learning the Relation of Motion Control and Gestures Through Self-Exploration

Saša Bodiroža1, Aleksandar Jevtić2, Bruno Lara3, Verena V. Hafner1

1 Humboldt-Universität zu Berlin, Germany

2 Robosoft, France

3 Universidad Autónoma del Estado de Morelos, Mexico

Robotics Challenges and Vision Workshop, RSS 2013 Berlin, Germany

June 27, 2013

Saša Bodiroža, Aleksandar Jevtić, Bruno Lara, Verena V. Hafner

Outline

• Motivation

• Previous work on gestures

• Internal models

• Proposed model, experiment and results

• Other applications of internal models

• Future work and takeaway messages



Motivation

• Robot as a new “species”

• Development of social robots inspired by child development

• Human-robot interaction catered to human needs

• Multimodal interaction – focus on intuitive gestures


Motivation – natural HRI

• Gestures – deictic, iconic, metaphoric, beats

• Commonly used to indicate locations of interest

• Learning motor control strategies for robot control



Human Gesture Vocabulary in Robot Waiter Scenario

[Figure: mapping between actions and gestures]

Actions: call waiter (1), order beer (2), cancel beer (3), order this (4), cancel this (5), ask for a suggestion (6), clean table (7), take away glass (8), bring bill (9), take away bill (10), where is the toilet (11).

Gestures (AL > 25%): pointing (1), writing on an imaginary piece of paper (2), index finger wave (3), hand wave “no” (4), sliding gesture for canceling (5), circular movement of hand over a surface (6), circular movement of finger over a surface (7), handling the object (8), raised hand (9), hand wave (10), no gesture (11).

Bodiroža, S., Stern, H. I., Edan, Y. “Dynamic Gesture Vocabulary Design for Intuitive Human-Robot Dialog”, in Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction, Boston, USA, 2012.


Gesture Recognition

Using dynamic time warping (DTW) to compare two vectors

Vectors represent scaled 3D positions of a hand through time.

Adaptive method; learning is possible with a single training instance.

Bodiroža, S., Doisy, G. and Hafner, V. V., “Position-Invariant, Real-Time Gesture Recognition Based on Dynamic Time Warping”, in Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 2013.
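The DTW comparison can be sketched as below — a generic textbook DTW over 3D hand trajectories, not the paper's exact implementation; the sample trajectories are made up for illustration:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories.

    a, b: arrays of shape (n, 3) and (m, 3) holding scaled 3D hand
    positions through time.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean frame distance
            # Best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A template gesture matches itself exactly ...
template = np.array([[0.0, 0.0, 0.0], [0.1, 0.2, 0.0], [0.2, 0.4, 0.0]])
print(dtw_distance(template, template))  # 0.0
# ... and a time-stretched execution of the same gesture still scores low,
# which is what makes DTW robust to variations in gesture speed.
stretched = np.array([[0.0, 0.0, 0.0], [0.05, 0.1, 0.0], [0.1, 0.2, 0.0], [0.2, 0.4, 0.0]])
print(dtw_distance(template, stretched))
```

Classifying a new gesture then amounts to computing the DTW distance to each stored template and picking the nearest one, which is why a single training instance per gesture can suffice.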



Internal Models

Wolpert, D. M., “Computational Approaches to Motor Control”, Trends in Cognitive Sciences, vol. 1, no. 6, pp. 209–216, September 1997.


Learning Sensorimotor Mappings


Extending towards Execution


Experiment

• Performing random motor commands

• Observing sensory consequences of the performed commands with a Kinect – in particular perceived change in the location of the person’s right hand

• Learning sensorimotor schemas from obtained data

• Applying the inverse action to achieve rotation task and tracking of the person’s hand
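The self-exploration loop above can be sketched as follows. The robot and Kinect interfaces are replaced here by a toy simulation (`perceive_hand` and the rotation-shifts-the-view model are assumptions for illustration, not the experiment's actual sensor pipeline):

```python
import random

def perceive_hand(robot_angle):
    """Simulated Kinect reading: perceived hand x-position for a given
    robot heading. Rotating the platform shifts the hand in the view."""
    true_hand_x = 0.5
    return true_hand_x - robot_angle

def motor_babbling(n_samples=60, seed=0):
    """Execute random motor commands and record (S_t, M_t, S_t+1) triples,
    i.e. the sensory consequence observed after each command."""
    rng = random.Random(seed)
    angle = 0.0
    samples = []
    for _ in range(n_samples):
        s_t = perceive_hand(angle)       # state before acting
        m_t = rng.uniform(-0.3, 0.3)     # random rotation command (rad)
        angle += m_t                     # execute the command
        s_next = perceive_hand(angle)    # observe the consequence
        samples.append((s_t, m_t, s_next))
    return samples

data = motor_babbling()
print(len(data))  # 60, matching the training-set size reported later
```

The recorded triples are exactly the training data the sensorimotor schemas are learned from.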


Customized robuLAB10 Platform

• Wheeled robot platform

• Kinect at ~1.5 m height

• Actuated:

– Rotation of the robuLAB platform

– Tilt unit of the Kinect


Learning

• Learning an inverse model and a forward model

• Inverse model (controller) – given St and St+1, predict Mt, the motor command corresponding to the observed action

• Forward model (predictor) – given St and Mt, predict S*t+1, used to calculate the prediction error


Robot’s Point of View – Learning


Robot’s Point of View – Execution


Results

• Training set of 60 points

• Testing with 25 repetitions

• Average hand displacement during testing: (x, y, z) = (0.43, 0.24, 0.06) m, s.d. = (0.13, 0.12, 0.05) m

• Prediction error: (x, y, z) = (0.08, 0.24, 0.06) m, s.d. = (0.06, 0.12, 0.04) m



Learning Pointing Gestures

Schillaci, G., Hafner, V. V., Lara, B., “Coupled Inverse-Forward Models for Action Execution Leading to Tool-Use in a Humanoid Robot”, in Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction, Boston, USA, 2012.


Embodied Gesture Perception

Sadeghipour, A. and Kopp, S., “A Probabilistic Model of Motor Resonance for Embodied Gesture Perception”, Intelligent Virtual Agents, Z. Ruttkay et al., eds., vol. 5773, Berlin, Heidelberg: Springer, 2009, pp. 90–103.



Future Work – Learning Following


Future Directions

• Extending to learning pointing for robot control

– Learning with scaffolding

– Reinforcement learning

• Learning gestures using internal models

Sadeghipour, A. and Kopp, S., “A Probabilistic Model of Motor Resonance for Embodied Gesture Perception”, Intelligent Virtual Agents, Z. Ruttkay et al., eds., vol. 5773, Berlin, Heidelberg: Springer, 2009, pp. 90–103.

• Learning pointing

Schillaci, G., Hafner, V. V., Lara, B., “Coupled Inverse-Forward Models for Action Execution Leading to Tool-Use in a Humanoid Robot”, in Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction, Boston, USA, 2012.


Takeaway messages

• Learning motor control for robot motion is a viable approach for everyday HRI

• Observing both internal and external stimuli during learning

• Taking inspiration from child development – active exploration of space through motor babbling and learning sensorimotor schemas


Thanks

Acknowledgements

FP7 Marie Curie Actions

INTRO project

Guido Schillaci, Guillaume Doisy
