
Page 1: PROMETHEUS

Institute of Systems and Robotics (ISR) – Coimbra

Mobile Robotics Laboratory

PROMETHEUS

WP5 – Behavior Modeling

Kamrad Khoshhal Roudposhti

Version: 2.6

Page 2: PROMETHEUS


INDEX

1- Objectives

2- Human behavior analysis

3- Behavior samples

4- Other related projects

5- References from the Partners

6- Planning for WP5

7- Other References

Page 3: PROMETHEUS


Objectives

Page 4: PROMETHEUS


Summary of WP5

START: T0+12 (January 2009)

Deliverables

D5.1 (T0+24) Progress on Behaviour Modelling (TEIC)

D5.2 (T0+30) Learning and short-term prediction (FCTUC)

Tasks

Task 5.1: Particle filtering techniques applied to the learning process of Bayesian network structures

Task 5.2: Learning/Recognition of Human Action/Interaction patterns

Task 5.3: Short-time Prediction of Human Intention

Involved partners (person-months):

FOI – 2
UOP – 11
TUM – 15
ISR-FCTUC – 20
PROBAYES – 10
TEIC – 3

Page 5: PROMETHEUS


Objectives

The scope of WP5:

• Analysis and recognition of motion patterns, and the production of high-level descriptions of actions and interactions

• Understanding of behaviors

Specifically, this WP must:
a) represent semantic concepts of behavior,
b) map motion characteristics (mainly velocities and feature trajectories) to semantic concepts,
c) choose efficient representations to interpret scene meanings.

The detection is based on matching observed behavior with the learned patterns.
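The matching of observed behavior against learned patterns can be sketched as a likelihood comparison. This is a toy setup, not the project's actual model: each behavior class is assumed to be summarized by a Gaussian over a single motion feature, and all numbers are invented for illustration.

```python
import math

# Toy learned patterns: each behavior class is summarized by the mean and
# standard deviation of one motion feature (average speed in m/s).
# These numbers are illustrative, not project data.
PATTERNS = {
    "loitering": (0.2, 0.1),
    "walking":   (1.4, 0.3),
    "running":   (3.5, 0.8),
}

def gaussian_log_likelihood(x, mean, std):
    """Log of a 1-D Gaussian density."""
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

def match_behavior(observed_speed):
    """Return the learned pattern that best explains the observation."""
    return max(PATTERNS,
               key=lambda label: gaussian_log_likelihood(observed_speed,
                                                         *PATTERNS[label]))
```

A real detector would use feature trajectories rather than a single scalar, but the matching principle is the same: pick the learned pattern under which the observation is most probable.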

Page 6: PROMETHEUS


Human behavior analysis

Page 7: PROMETHEUS


What is the subject?

• Human behavior is the collection of behaviors exhibited by human beings and influenced by culture, attitudes, emotions, etc. (from Wikipedia)

• The behavior of people falls within a range, with some behavior being common, some unusual, some acceptable, and some outside acceptable limits. (from Wikipedia)

• Many researchers have worked on specific categories of human behaviour. Some popular subjects in this area are: gait [Dawson], action analysis [C. Rao et al.], gesture recognition [Mitra and Acharya], facial expression recognition [Bartlett], and explicit body-movement-based communication, namely sign language recognition [Kadir et al.].

Page 8: PROMETHEUS


• A prerequisite of human behavior is human motion.

• Wang et al. presented a general framework for the different levels of vision-based analysis, shown in the figure [Wang et al].

Page 9: PROMETHEUS


Laban Movement Analysis

• Laban Movement Analysis (LMA) is a method for observing, describing, notating, and interpreting human movement.

• The works of Norman Badler's group mention five major components, shown in the figure: Body, Space, Effort, Shape, and Relationship.

Page 10: PROMETHEUS


Laban Movement Analysis

Space

The Space component defines several concepts: a) Levels of Space, Basic Directions, and Three Axes; and b) Three Planes: the Door Plane (vertical), the Table Plane (horizontal), and the Wheel Plane (sagittal), each lying in two of the axes (Joerg Rett and Jorge Dias 2007).

Page 11: PROMETHEUS


Laban Movement Analysis

Effort

• Space: Direct / Indirect • Weight: Strong / Light • Time: Sudden / Sustained • Flow: Bound / Free

Effort quality – Exemplary movement:

Space, Direct – Pointing gesture
Space, Indirect – Waving away bugs
Weight, Strong – Punching
Weight, Light – Dabbing paint on a canvas
Time, Sudden – Swatting a fly
Time, Sustained – Stretching to yawn
Flow, Bound – Moving in slow motion
Flow, Free – Waving wildly

Effort qualities and exemplary movements (Jörg Rett and Jorge Dias 2007)
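Two of these Effort qualities can be sketched as threshold tests on kinematic statistics, in the spirit of mapping LMA components to measurable entities. The choice of statistics and the threshold values are hypothetical, not taken from the referenced works:

```python
def effort_time_quality(accel_magnitudes, threshold=2.0):
    """LMA Effort 'Time': a high acceleration peak suggests 'Sudden'
    (swatting a fly), otherwise 'Sustained' (stretching to yawn).
    The threshold (in m/s^2) is a hypothetical tuning value."""
    return "Sudden" if max(accel_magnitudes) > threshold else "Sustained"

def effort_flow_quality(jerk_magnitudes, threshold=5.0):
    """LMA Effort 'Flow': high jerk suggests 'Free' (waving wildly),
    low jerk suggests 'Bound' (moving in slow motion).
    The threshold is again a hypothetical tuning value."""
    return "Free" if max(jerk_magnitudes) > threshold else "Bound"
```

In practice such qualities would be estimated over windows of tracked feature trajectories rather than from a bare list of magnitudes.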

Page 12: PROMETHEUS


Laban Movement Analysis

Shape

• Rett summarizes three Shape qualities and expresses them in terms of spatial directions. By using a major and a minor direction we are able to express the Shape in the concept of the Three Planes (πvert, πhorz, πsag).

Shape – Directions – Example – Plane:

Enclosing / Spreading – Major: Sideward; Minor: For-/Backward – Clasping someone in a hug / Opening arms to embrace – Horizontal

Sinking / Rising – Major: Up-/Downward; Minor: Sideward – Stamping the floor in indignation / Reaching for something on a high shelf – Vertical

Retreating / Advancing – Major: For-/Backward; Minor: Up-/Downward – Avoiding a punch / Reaching forward to shake hands – Sagittal
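The pairing of Shape qualities with planes and major directions can be sketched as picking the dominant axis of a displacement vector. The axis and sign conventions here are illustrative assumptions, not the notation of the cited works:

```python
# The Shape table pairs each plane with a major direction axis:
# sideward -> horizontal plane (Enclosing/Spreading),
# up-/downward -> vertical plane (Sinking/Rising),
# for-/backward -> sagittal plane (Retreating/Advancing).
def shape_quality(dx, dy, dz):
    """Given a hand displacement (x: sideward, y: up/down, z: for/backward),
    pick the dominant axis and return (shape quality, plane)."""
    axes = {"x": abs(dx), "y": abs(dy), "z": abs(dz)}
    major = max(axes, key=axes.get)
    if major == "x":
        return ("Spreading" if dx > 0 else "Enclosing", "horizontal")
    if major == "y":
        return ("Rising" if dy > 0 else "Sinking", "vertical")
    return ("Advancing" if dz > 0 else "Retreating", "sagittal")
```

A fuller model would also use the minor direction to disambiguate movements that are nearly diagonal between two planes.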

Page 13: PROMETHEUS


State of the art

• The group around Norman Badler has a long tradition of research on computational solutions for Laban Movement Analysis (LMA); they started re-formulating Labanotation in computational models as early as 1993 [Badler1993].

•The work of Zhao & Badler [Zhao&Badler] is entirely embedded in the framework of Laban Movement Analysis. Their computational model of gesture acquisition and synthesis can be used to learn motion qualities from live performance. Many inspirations concerning the transformation of LMA components into physically measurable entities were taken from this work.

Trajectories of sensors (attached at shoulders, elbows, and hands).

L. Zhao, N.I. Badler / Graphical Models 67 (2005)

Page 14: PROMETHEUS


State of the art

•Nakata et al. reproduced expressive movements in a robot that could be interpreted as emotions by a human observer. [Nakata]

•The first part described how some parameters of Laban Movement Analysis (LMA) can be calculated from a set of low-level features.

•They concluded further that the control of robot movements oriented on LMA parameters allows the production of expressive movements and that those movements leave the impression of emotional content to a human observer.

• The critical point in the mapping of low-level features to LMA parameters was that the computational model was closely tied to the embodiment of the robot, which had only a low number of degrees of freedom.

Figure: 2-dimensional shape

Page 15: PROMETHEUS


State of the art

• Rett & Dias [Rett&Dias2007] presented, as a contribution to the field of human-machine interaction (HMI), a system that analyzes human movements online, based on the concept of Laban Movement Analysis (LMA).

• The implementation used a Bayesian model for learning and classification.

• They presented Laban Movement Analysis as a concept to identify useful features of human movements for classifying human actions.

• The movements were extracted using both vision and a magnetic tracker.

• The descriptor opened possibilities towards expressiveness and emotional content.

• To solve the classification problem, they used the Bayesian framework, as it offers an intuitive approach to learning and classification.
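Bayesian learning and classification of actions, as described above, can be sketched in a minimal form: learn P(feature | action) from labeled examples by counting, then classify by Bayes' rule. The feature values and action names below are invented for illustration; the cited system uses far richer movement descriptors.

```python
from collections import Counter, defaultdict

class BayesActionClassifier:
    """Toy Bayes classifier: discrete features, counted likelihoods."""

    def __init__(self):
        self.counts = defaultdict(Counter)   # action -> feature value counts
        self.priors = Counter()              # action occurrence counts

    def learn(self, action, feature):
        self.counts[action][feature] += 1
        self.priors[action] += 1

    def classify(self, feature):
        total = sum(self.priors.values())

        def posterior(action):
            n = self.priors[action]
            # Laplace smoothing keeps unseen features from zeroing the score.
            likelihood = (self.counts[action][feature] + 1) / (n + len(self.counts))
            return (n / total) * likelihood

        return max(self.priors, key=posterior)
```

Usage: after `learn("wave", "lateral")` on a few labeled movements, `classify("lateral")` returns the action with the highest posterior probability for that feature.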

The components and the frames of reference for tracking human movements

Page 16: PROMETHEUS


Behavior Samples

Page 17: PROMETHEUS


Indoor Part

Page 18: PROMETHEUS


Place down bag

Page 19: PROMETHEUS


Pick up bag

Page 20: PROMETHEUS


Robbery

Page 21: PROMETHEUS


Falling down

Page 22: PROMETHEUS


Faint & robbery

Page 23: PROMETHEUS


Faint (SmartHome)

Page 24: PROMETHEUS


Fighting

Page 25: PROMETHEUS


Greeting

Page 26: PROMETHEUS


Panic situation

Page 27: PROMETHEUS


Lift and move a heavy box and...

The person falls down / The normal action

Page 28: PROMETHEUS


Outdoor Part

Page 29: PROMETHEUS


Fighting & Pushing

Page 30: PROMETHEUS


Two persons fighting, and one of them escapes

Page 31: PROMETHEUS


An angry person / A normal person

Page 32: PROMETHEUS


Loitering / Normal

Page 33: PROMETHEUS


Other related projects

Page 34: PROMETHEUS


The COBOL project

Communication with Emotional Body Language

• COmmunication with Emotional BOdy Language (COBOL) was launched in 2006 by the European Commission as a Specific Targeted Research Project in the 6th EU Framework Programme. The Commission supports the project for three years, to the tune of €1.8 million.

The project consists of 5 workpackages, each of which is described below:
Workpackage 1: Description and analysis of the kinematic and dynamical structure of EBL
Workpackage 2: Development of EBL avatars and measurement of EBL perception and recognition
Workpackage 3: The cognitive basis of EBL
Workpackage 4: Coordinating social interactions
Workpackage 5: Cross-cultural EBL

Page 35: PROMETHEUS


The AMI project

Augmented Multi-party Interaction

•The AMI Consortium formed in January 2004 to conduct basic and applied research, with the aim of developing technologies that help people have more productive meetings.

•Their technologies rely on basic research in disciplines ranging from speech recognition, language processing, computer vision, human-human communication modeling, and multimedia indexing and retrieval. The AMI Consortium brings together scientists from these fields as well as technologists, interface specialists, and social psychologists in order to achieve its vision.

Page 36: PROMETHEUS


Netcarity

Ambient technology to support older people at home

Netcarity, launched in 2007, is a four-year, €13 million European project researching and testing technologies that will help older people improve their:
• Wellbeing
• Independence
• Safety
• Health

Page 37: PROMETHEUS


References from the Partners

Page 38: PROMETHEUS


• [1] Markus Ablassmeier, Tony Poitschke, Frank Wallhoff, Klaus Bengler, and Gerhard Rigoll. Eye gaze studies comparing head-up and head-down displays in vehicles. Pages 2250–2252, Multimedia and Expo, 2007 IEEE International Conference, July 2007.

• [2] Jorgen Ahlberg, Martin Folkesson, Christina Gronwall, Tobias Horney, Erland Jungert, Lena Klasen, and Morgan Ulvklo. Ground target recognition in a query-based multi-sensor information system. Technical Report LiTH-ISY-R-2748, Division of Automatic Control, Department of Electrical Engineering, Linkopings universitet, October 2006.

• [3] Simon Ahlberg, Pontus Horling, Katarina Johansson, Karsten Jored, Hedvig Kjellstrom, Christian Martenson, Goran Neider, Johan Schubert, Pontus Svenson, Per Svensson, and Johan Walter. An information fusion demonstrator for tactical intelligence processing in network-based defense. Inf. Fusion, 8(1):84–107, 2007.

• [4] Marc Al-Hames, Benedikt Hörnler, Ronald Müller, Joachim Schenk, and Gerhard Rigoll. Automatic multi-modal meeting camera selection for videoconferences and meeting browsers. Pages 2074–2077, Multimedia and Expo, 2007 IEEE International Conference, July 2007.

• [5] Dejan Arsic, Joachim Schenk, Björn Schuller, Frank Wallhoff, and Gerhard Rigoll. Submotions for hidden Markov model based dynamic facial action recognition. Pages 673–676, Image Processing, 2006 IEEE International Conference, 8-11 Oct. 2006.

• [6] Dejan Arsic, Frank Wallhoff, Björn Schuller, and Gerhard Rigoll. Video based online behavior detection using probabilistic multi stream fusion. Pages 1354–1357, Multimedia and Expo, 2005. ICME 2005. IEEE International Conference, 6-8 July 2005.

• [7] Dejan Arsic, Frank Wallhoff, Björn Schuller, and Gerhard Rigoll. Video based online behavior detection using probabilistic multi stream fusion. Volume 2, pages 60–69, Image Processing, 2005. ICIP 2005. IEEE International Conference, 11-14 Sept. 2005.

• [8] Mikael Brännström, Ron Lennartsson, Andris Lauberts, Hans Habberstad, Erland Jungert, and Martin Holmberg. Distributed data fusion in a ground sensor network. Stockholm, Sweden, The 7th International Conference on Information Fusion, June 28 to July 1, 2004.

• [9] C. Coué, Th. Fraichard, P. Bessière, and E. Mazer. Multi-sensor data fusion using Bayesian programming: an automotive application.

• [10] C. Coué, Th. Fraichard, P. Bessière, and E. Mazer. Using Bayesian programming for multi-sensor multi-target tracking in automotive applications. Proceedings of the 2003 IEEE International Conference on Robotics & Automation, 2003.

• [11] C. Coué, Th. Fraichard, P. Bessière, and E. Mazer. Multi-sensor data fusion using Bayesian programming: an automotive application. Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, EPFL, Lausanne, Switzerland, October 2002.

• [12] Julien Diard, Pierre Bessière, and Emmanuel Mazer. Merging probabilistic models of navigation: the Bayesian map and the superposition operator. Supported by the BIBA European project (IST-2001-32115), 2001.

• [13] Fadi Dornaika and Jorgen Ahlberg. Fitting 3D face models for tracking and active appearance model training. Image and Vision Computing 24, pages 1010–1024, 2006.

• [14] Theodoros Giannakopoulos, Nicolas Alexander Tatlas, Todor Ganchev, and Ilyas Potamitis. A practical, real-time speech-driven home automation front-end. IEEE Transactions on Consumer Electronics, 51, May 2005.

• [15] T. Kostoulas, I. Mporas, T. Ganchev, and N. Fakotakis. The effect of emotional speech on a smart-home application. 21st International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems, 2008.

Page 39: PROMETHEUS


• [16] A. Lazaridis, T. Kostoulas, I. Mporas, T. Ganchev, N. Katsaounos, S. Ntalampiras, and N. Fakotakis. Human evaluation of the LOGOS multimodal dialogue system. Athens, Greece, 1st International Conference on PErvasive Technologies Related to Assistive Environments, July 16-19, 2008.

• [17] Anna Linderhed, Stefan Sjökvist, Sten Nyberg, Magnus Uppsäll, Christina Grönwall, Pierre Andersson, and Dietmar Letalick. Temporal analysis for land mine detection. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.

• [18] I. Potamitis, N. Fakotakis, and G. Kokkinakis. Robust automatic speech recognition in the presence of impulsive noise. Electronics Letters, 37, 7th June 2007.

• [19] Ilyas Potamitis. Estimation of speech presence probability in the field of microphone array. IEEE Signal Processing Letters, 11, December 2004.

• [20] Ilyas Potamitis, Huimin Chen, and George Tremoulis. Tracking of multiple moving speakers with multiple microphone arrays. IEEE Transactions on Speech and Audio Processing, 12, September 2004.

• [21] Ilyas Potamitis and George Kokkinakis. Speech separation of multiple moving speakers using multisensor multitarget techniques. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 37, January 2007.

• [22] Stephan Reiter, Björn Schuller, and Gerhard Rigoll. Segmentation and recognition of meeting events using a two-layered HMM and a combined MLP-HMM approach. Pages 953–956, Multimedia and Expo, 2006 IEEE International Conference, 9-12 July 2006.

• [23] J. Rett and J. Dias. Human robot interaction based on Bayesian analysis of human movements. Proceedings of EPIA 07, Lecture Notes in AI, Springer Verlag, Berlin, 2007.

• [24] Joerg Rett, Jorge Dias, and Juan-Manuel Ahuactzin. Laban Movement Analysis using a Bayesian model and perspective projections. Brain, Vision and AI, 2008. ISBN: 978-953-7619-04-6.

• [25] Jörg Rett. ROBOT-HUMAN Interface using LABAN Movement Analysis Inside a Bayesian Framework. PhD thesis, University of Coimbra, 2008.

• [26] Gerhard Rigoll, Stefan Eickeler, and Stefan Müller. Person tracking in real-world scenarios using statistical methods. Pages 398–402, Automatic Face and Gesture Recognition, 2000. Proceedings, Fourth IEEE International Conference, 28-30 March 2000.

• [27] Sascha Schreiber, Andre Stormer, and Gerhard Rigoll. A hierarchical ASM/AAM approach in a stochastic framework for fully automatic tracking and recognition. Pages 1773–1776, Image Processing, 2006 IEEE International Conference, 8-11 Oct. 2006.

• [28] Frank Wallhoff, Martin Ruß, Gerhard Rigoll, Johann Gobel, and Hermann Diehl. Surveillance and activity recognition with depth information. Pages 1103–1106, Multimedia and Expo, 2007 IEEE International Conference, 2-5 July 2007.

• [29] N. Xiong and P. Svensson. Multi-sensor management for information fusion: issues and approaches. Information Fusion 3, pages 163–186, 2002.

• [30] P. Zervas, N. Fakotakis, and G. Kokkinakis. Development and evaluation of a prosodic database for Greek speech synthesis and research. Journal of Quantitative Linguistics, 15(2):154–184, 2008.

Page 40: PROMETHEUS


Planning for WP5

Page 41: PROMETHEUS


Relationship of WP3, WP4, and WP5

WP3 (Sensor modeling and multi-sensor fusion techniques)
Task 3.3: Bayesian network structures for multi-modal fusion

WP4 (Localization and tracking techniques as applied to humans)
Task 4.3: Online adaptation and learning

WP5: Behavior learning and recognition

Page 42: PROMETHEUS


This WP includes:

• Application of particle filtering in the inference procedure of Bayesian network structures, including the novel cases of Multi-stream, Coupled, and Asynchronous HMMs

•Training of Bayesian network structures on ground truth of the perceptual modalities, which will be available from hand-labeled data, and recognition of behavior

•Evaluation of the efficiency of Bayesian network structures on generating short-term prediction of tasks based on the observations of the multi-modal network
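Particle-based inference in an HMM-like structure, as in the first item above, can be sketched for a toy two-state model. All states, observations, and probabilities here are invented for illustration; the project's actual network structures (multi-stream, coupled, asynchronous HMMs) are far richer.

```python
import bisect
import random

random.seed(0)  # reproducible sketch

# Toy two-state HMM: hidden behavior in {"calm", "agitated"},
# observations in {"slow", "fast"}. All probabilities are invented.
STATES = ["calm", "agitated"]
TRANSITION = {"calm": {"calm": 0.9, "agitated": 0.1},
              "agitated": {"calm": 0.2, "agitated": 0.8}}
EMISSION = {"calm": {"slow": 0.8, "fast": 0.2},
            "agitated": {"slow": 0.1, "fast": 0.9}}

def sample(dist):
    """Draw one value from a discrete distribution {value: probability}."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point round-off

def particle_filter(observations, n_particles=500):
    """Estimate the probability of being 'agitated' after the observations."""
    particles = [random.choice(STATES) for _ in range(n_particles)]
    for obs in observations:
        # Predict: propagate each particle through the transition model.
        particles = [sample(TRANSITION[s]) for s in particles]
        # Update: weight each particle by the emission likelihood.
        weights = [EMISSION[s][obs] for s in particles]
        total = sum(weights)
        cumulative, acc = [], 0.0
        for w in weights:
            acc += w / total
            cumulative.append(acc)
        # Resample: draw new particles proportionally to their weights.
        particles = [particles[min(bisect.bisect_left(cumulative, random.random()),
                                   n_particles - 1)]
                     for _ in range(n_particles)]
    return particles.count("agitated") / n_particles
```

After a run of "fast" observations the belief concentrates on "agitated"; after "slow" observations it concentrates on "calm". The same predict/weight/resample loop carries over to richer network structures where exact inference becomes intractable.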

Page 43: PROMETHEUS


Partner Specific Skills

FOI project management, surveillance systems, data fusion, people tracking, assessment of different types of threats, potential of new sensors [17, 3, 2, 8, 13, 29]

UOP acoustics, speech recognition, speech understanding, natural language processing, microphone arrays, speaker localization and tracking, speaker verification and identification, language recognition [15, 16, 30]

TUM human-machine communication, face recognition, visual surveillance, intelligent multimedia information processing methods, video indexing, gesture recognition, broadcast data processing. [1, 4, 5, 7, 6, 22, 26, 27, 28]

FCTUC people tracking, face recognition, human-machine communication, motion detection, intentional content and expressiveness of a human body movement using Laban analysis, Bayesian framework and human movement-tracking system. [23, 25, 24]

PROBAYES advanced probabilistic techniques, Bayesian analysis of Markov process-based models, algorithms / software for Bayesian reasoning and learning, commercial libraries, model scenarios for risk assessments [9, 11, 10, 12]

MARAC System Integration, Industrial activities in communications, Navigation systems, Land Radio-communications, Telecommunications Systems, Telephone Exchanges – PABXs, Networks, Satellite Communications Terminals, Geological/ Geophysical/ Meteorological Systems, Educational Training Systems and Scientific Instruments

TEIC acoustic surveillance, one-channel audio source separation, speaker localization and tracking, Bayesian statistics and tracking [18, 21, 20, 19, 14]

Page 44: PROMETHEUS


Partner MM Expected Role in the Project

FOI 2 Data fusion, surveillance systems, people tracking, assessment of different types of threats, potential of new sensors .

UOP 11 speech recognition, microphone arrays, speaker localization and tracking, speaker verification and identification

TUM 15 human-machine communication, face recognition, visual surveillance, intelligent multimedia information processing methods, video indexing, gesture recognition, broadcast data processing.

FCTUC 20 people tracking, face recognition, human-machine communication, motion detection, intentional content and expressiveness of a human body movement using Laban analysis,

PROBAYES 10 advanced probabilistic techniques, Bayesian analysis of Markov process-based models, algorithms / software for Bayesian reasoning and learning, commercial libraries, model scenarios for risk assessments

MARAC 0

TEIC 3 acoustic surveillance, one-channel audio source separation, speaker localization and tracking, Bayesian statistics and tracking

Page 45: PROMETHEUS


References

Page 46: PROMETHEUS


Other References

• [Badler1993] N.I. Badler, C.B. Phillips, and B.L. Webber. Simulating Humans: Computer Graphics, Animation, and Control. Oxford Univ. Press, 1993.

• [Bregler] C. Bregler. Learning and recognizing human dynamics in video sequences. Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1997.

• [Dawson] Mark Ruane Dawson. Gait recognition. Technical report, Department of Computing, Imperial College of Science, Technology & Medicine, London, June 2002.

• [Kadir et al.] Timor Kadir, Richard Bowden, Eng-Jon Ong, and Andrew Zisserman. Minimal training, large lexicon, unconstrained sign language recognition. In British Machine Vision Conference 2004. Winner of the Industrial Paper Prize.

• [Mitra and Acharya] Sushmita Mitra and Tinku Acharya. Gesture recognition: A survey. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 37, May 2007.

• [Bartlett] M.S. Bartlett, G. Littlewort, I. Fasel, and J.R. Movellan. Real time face detection and expression recognition: Development and application to human-computer interaction. CVPR Workshop on Computer Vision and Pattern Recognition for Human-Computer Interaction, 2003.

• [Nakata] T. Nakata, T. Mori, and T. Sato. Analysis of impression of robot bodily expression. Journal of Robotics and Mechatronics, 14:27–36, 2002.

• [C. Rao et al.] C. Rao, A. Yilmaz, and M. Shah. View-invariant representation and recognition of actions. International Journal of Computer Vision, pages 203–226, 2002.

• [Rett&Dias2007] J. Rett and J. Dias. Human robot interaction based on Bayesian analysis of human movements. Proceedings of EPIA 07, Lecture Notes in AI, Springer Verlag, Berlin, 2007.

• [Rett&Dias2008] Joerg Rett, Jorge Dias, and Juan-Manuel Ahuactzin. Laban Movement Analysis using a Bayesian model and perspective projections. Brain, Vision and AI, 2008. ISBN: 978-953-7619-04-6.

• [Starner&Pentland] T. Starner and A. Pentland. Visual recognition of American sign language using hidden Markov models. Pages 189–194, International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, 1995.

• [Wang et al] Liang Wang, Weiming Hu, and Tieniu Tan. Recent developments in human motion analysis. Pattern Recognition, pages 585–601, 2003.

• [Zhao&Badler] L. Zhao and N.I. Badler. Acquiring and validating motion qualities from live limb gestures. Graphical Models 67, pages 1–16, 2005.

Page 47: PROMETHEUS


Thank you for your attention!