Lecture Notes in Computer Science 6949
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbruecken, Germany
Pedro Campos, Nicholas Graham, Joaquim Jorge, Nuno Nunes, Philippe Palanque, Marco Winckler (Eds.)
Human-Computer Interaction – INTERACT 2011
13th IFIP TC 13 International Conference
Lisbon, Portugal, September 5-9, 2011
Proceedings, Part IV
Volume Editors
Pedro Campos, Nuno Nunes
University of Madeira, 9020-105 Funchal, Portugal
E-mail: {pcampos, njn}@uma.pt

Nicholas Graham
Queen's University, Kingston, ON K7L 3N6, Canada
E-mail: [email protected]

Joaquim Jorge
Instituto Superior Técnico, 1049-001 Lisbon, Portugal
E-mail: [email protected]

Philippe Palanque, Marco Winckler
University Paul Sabatier, 31062 Toulouse Cedex 9, France
E-mail: {palanque, winckler}@irit.fr
ISSN 0302-9743, e-ISSN 1611-3349
ISBN 978-3-642-23767-6, e-ISBN 978-3-642-23768-3
DOI 10.1007/978-3-642-23768-3
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2011935338
CR Subject Classification (1998): H.5.2, H.5.3, H.3-5, I.2.10, D.2, C.2, K.3-4
LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
© IFIP International Federation for Information Processing 2011
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Foreword
Advances in interactivity, computing power, mobile devices, large displays and ubiquitous computing offer an ever-increasing potential for empowering users. This can happen within their working environment, in their leisure time or even when extending their social skills. While such empowerment could be seen as a way of connecting people in their workspace, home or on the move, it could also generate gaps that require greater effort and resources to fruitfully integrate disparate and heterogeneous computing systems.

The conference theme of INTERACT 2011 was "building bridges", as we believe human–computer interaction (HCI) is one of the research domains most likely to contribute significantly to bridging such gaps. This theme thus recognizes the interdisciplinary and intercultural spirit that lies at the core of HCI research. The conference had the objective of attracting research that bridges disciplines, cultures and societies. Within the broad umbrella of HCI, we were in particular seeking high-quality contributions opening new and emerging HCI disciplines, bridging cultural differences, and tackling important social problems. Thus, INTERACT 2011 provided a forum for practitioners and researchers to discuss all aspects of HCI, including these challenges. The scientific contributions gathered in these proceedings clearly demonstrate the huge potential of this research domain to improve both the user experience and the performance of people interacting with computing devices. The conference is as much about building bridges on the human side (between disciplines, cultures and society) as in the computing realm.

INTERACT 2011 was the 13th conference of the series, taking place 27 years after the first INTERACT, held in early September 1984 in London, UK. Since INTERACT 1990 the conferences have taken place under the aegis of the UNESCO International Federation for Information Processing (IFIP) Technical Committee 13. This committee aims at developing the science and technology of the interaction between humans and computing devices through its Working Groups and Special Interest Groups, all of which, together with their officers, are listed within these proceedings.

INTERACT 2011 was the first conference of its series to be organized in cooperation with ACM SIGCHI, the Special Interest Group on Computer–Human Interaction of the Association for Computing Machinery. We believe that this cooperation was very useful in making the event both more attractive and more visible to the worldwide scientific community conducting research in the field of HCI.

We thank all the authors who chose INTERACT 2011 as the venue to publish their research. This was a record year for the conference in terms of submissions in the main technical categories. For the main Technical Program there was a total of 680 submissions, including 402 long and 278 short papers, out of which we accepted 171 (111 long and 60 short submissions), for a combined acceptance rate of about 25%. Overall, from a total of 741 submissions for all tracks, 290 were accepted, as follows:
– 111 Full Research Papers
– 60 Short Research Papers
– 54 Interactive Poster Papers
– 17 Doctoral Consortium Papers
– 16 Workshops
– 12 Tutorials
– 5 Demonstrations
– 6 Organizational Overviews
– 4 Industrial Papers
– 3 Special Interest Groups
– 2 Panels
Our sincere gratitude goes to the members of our Program Committee (PC), who devoted countless hours to ensuring the high quality of the INTERACT conference. This year, we improved the reviewing process by moving to an associate chair model. With almost 700 submitted papers, it is impossible for the PC Chairs to read every paper. We recruited 103 Associate Chairs (ACs), each of whom handled up to 12 papers. The ACs recruited almost 800 external reviewers, guaranteeing that each paper was reviewed by three to six referees. ACs also provided a meta-review. Internal discussion among all the reviewers preceded the final decision between the PC Chairs and the AC. This herculean effort was only possible due to the diligent work of many people. We would like to thank you all for the effort and apologize for all the bullying required to get the work done on time.

In addition, sincere thanks must be extended to those whose contributions were essential in making it possible for the conference to happen and for these proceedings to be produced. We owe a great debt to the Conference Committees, the members of the International Program Committee and the numerous reviewers who had to review submissions from the various categories. Similarly, the members of the conference Organizing Committee, the staff at INESC-ID, especially Manuela Sado, deserve much appreciation for their tireless help with all aspects of planning and managing the many administrative and organizational issues. We would like to especially thank Tiago Guerreiro for his dedication to the Student Volunteer program, and José Coelho, who worked tirelessly to make the online program a reality. Thanks are also due to Alfredo Ferreira for single-handedly maintaining the website, and to Pedro Campos and Marco Winckler for the superb work done with the conference proceedings. Finally, our thanks go to all the authors who actually did the scientific work, and especially to the presenters, who took on the additional burden of discussing the results with their peers at INTERACT 2011 in Lisbon.
July 2011

Nicholas Graham
Daniel Gonçalves
Joaquim Jorge
Nuno Nunes
Philippe Palanque
Table of Contents – Part IV
Long and Short Papers
Usable Privacy and Security
A Field Study of User Behavior and Perceptions in Smartcard Authentication . . . 1
Celeste Lyn Paul, Emile Morse, Aiping Zhang, Yee-Yin Choong, and Mary Theofanos

Improving Computer Security Dialogs . . . 18
Cristian Bravo-Lillo, Lorrie Faith Cranor, Julie Downs, Saranga Komanduri, and Manya Sleeper

Usable Privacy and Security in Personal Health Records . . . 36
Inma Carrion, Jose L. Fernandez-Aleman, and Ambrosio Toval

Shining Chrome: Using Web Browser Personas to Enhance SSL Certificate Visualization . . . 44
Max-Emanuel Maurer, Alexander De Luca, and Tobias Stockinger

User Experience I

Ambient Displays: Influencing Movement Patterns . . . 52
Tasos Varoudis

Three User-Driven Innovation Methods for Co-creating Cloud Services . . . 66
Ting-Ray Chang and Eija Kaasinen

Designing for the Secondary User Experience . . . 84
Ole Andreas Alsos and Dag Svanæs

Engaging Visitors in Museums with Technology: Scales for the Measurement of Visitor and Multimedia Guide Experience . . . 92
Mohd Kamal Othman, Helen Petrie, and Christopher Power

User Experience II

An Image of Electricity: Towards an Understanding of How People Perceive Electricity . . . 100
Yoram Chisik
Structuring the Collaboration of Multiple Novice Design Ethnographers: Towards a New User Research Approach . . . 118
Paul Gault, Catriona Macaulay, Graham Johnson, and Judith Masthoff

Customer Experience Modeling: Designing Interactions for Service Systems . . . 136
Jorge Teixeira, Lia Patrício, Nuno J. Nunes, and Leonel Nóbrega

User Experience Research in the Semiconductor Factory: A Contradiction? . . . 144
Marianna Obrist, Wolfgang Reitberger, Daniela Wurhofer, Florian Forster, and Manfred Tscheligi

User Experience III

Client's Temporal Trajectory in Child Protection: Piecing Information Together in a Client Information System . . . 152
Saila Huuskonen and Pertti Vakkari

Unsupervised Parameter Selection for Gesture Recognition with Vector Quantization and Hidden Markov Models . . . 170
Przemysław Głomb, Michał Romaszewski, Arkadiusz Sochan, and Sebastian Opozda

Number Entry Interfaces and Their Effects on Error Detection . . . 178
Patrick Oladimeji, Harold Thimbleby, and Anna Cox

An Exploration of the Utilization of Electroencephalography and Neural Nets to Control Robots . . . 186
Dan Szafir and Robert Signorile

Social Translucence as a Theoretical Framework for Sustainable HCI . . . 195
Mary Barreto, Evangelos Karapanos, and Nuno Nunes

User Modelling

A Revised Mobile KLM for Interaction with Multiple NFC-Tags . . . 204
Paul Holleis, Maximilian Scherr, and Gregor Broll

The Entropy of a Rapid Aimed Movement: Fitts' Index of Difficulty versus Shannon's Entropy . . . 222
R. William Soukoreff, Jian Zhao, and Xiangshi Ren

The Difference Matters: Benchmarking Visual Performance of a Cognitive Pilot Model . . . 240
Florian Frische and Andreas Lüdtke
Visual Search in Radial Menus . . . 248
Krystian Samp and Stefan Decker

Visualization

Analytic Trails: Supporting Provenance, Collaboration, and Reuse for Visual Data Analysis by Business Users . . . 256
Jie Lu, Zhen Wen, Shimei Pan, and Jennifer Lai

Exploration Views: Understanding Dashboard Creation and Customization for Visualization Novices . . . 274
Micheline Elias and Anastasia Bezerianos

Patient Development at a Glance: An Evaluation of a Medical Data Visualization . . . 292
Margit Pohl, Sylvia Wiltner, Alexander Rind, Wolfgang Aigner, Silvia Miksch, Thomas Turic, and Felix Drexler

Evaluation of HaloDot: Visualization of Relevance of Off-Screen Objects with over Cluttering Prevention on Mobile Devices . . . 300
Tiago Gonçalves, Ana Paula Afonso, Maria Beatriz Carmo, and Paulo Pombinho

Web Interaction

Using Card Sorts for Understanding Website Information Architectures: Technological, Methodological and Cultural Issues . . . 309
Helen Petrie, Christopher Power, Paul Cairns, and Cagla Seneler

The Treatment of Temporal Data in Web-Based Reservation Systems: An Inspection-Based Evaluation . . . 323
Gerhard F. Knolmayer, Lukas E. Helfenstein, and Viola Sini

A Tool Support for Web Applications Adaptation Using Navigation History . . . 340
Sergio Firmenich, Marco Winckler, and Gustavo Rossi

Web Usability Probe: A Tool for Supporting Remote Usability Evaluation of Web Sites . . . 349
Tonio Carta, Fabio Paternò, and Vagner Figueredo de Santana

Demos

A Demo of a Dynamic Facial UI for Digital Artists . . . 358
Pedro Bastos, Xenxo Alvarez, and Verónica Orvalho
A Set of Customizable Games Supporting Therapy of Children with Cerebral Palsy . . . 360
Beant Dhillon, Areti Goulati, Ioannis Politis, Agata Raczewska, and Panos Markopoulos

Mobile Total Conversation – Communication for All, Everywhere . . . 362
Erik Zetterstrom

Storytelling Meets the Social Web: An HTML5 Cross-Platform Application for Older Adults . . . 364
Tiago Boldt Sousa, Pedro Tenreiro, Paula Alexandra Silva, and Eduarda Mendes Rodrigues

Tablexcel: A Multi-user, Multi-touch Interactive Tabletop Interface for Microsoft Excel Spreadsheets . . . 366
Guillaume Besacier

Doctoral Consortium

Assessing Short-Term Human-Robot Interaction in Public Space . . . 370
Jakub Złotowski

Barefooted Usability Evaluation: Addressing the Mindset, Resources and Competences . . . 374
Anders Bruun

Collaborative Human-Machine Communication: User-Centered Design of In-Vehicle Speech Dialog Systems . . . 378
Linn Hackenberg

Development of a Methodology for Evaluating the Quality in Use of Web 2.0 Applications . . . 382
Tihomir Orehovački

Distortion Techniques for Sketching Interaction . . . 386
Paul Schmieder

Evaluation of Information Classification on Websites and Impact of Culture: A Cross Country Comparison of Information Classification . . . 390
Ather Nawaz

Exploring New Ways of Utilizing Automated Clustering and Machine Learning Techniques in Information Visualization . . . 394
Johann Schrammel

Exploring Serendipity's Precipitating Conditions . . . 398
Lori McCay-Peet
Human-Computer Interaction and Human Mental Workload: Assessing Cognitive Engagement in the World Wide Web . . . 402
Luca Longo

Human-Computer Interaction for Security Research: The Case of EU E-Banking Systems . . . 406
Caroline Moeckel

Information Architecture Automatization for the Semantic Web . . . 410
Josep Maria Brunetti and Roberto García

Microinteractions to Augment Manual Tasks . . . 414
Katrin Wolf

OPEN-HEREDEUX: OPEN HEuristic REsource for Designing and Evaluating User eXperience . . . 418
Llucia Masip, Marta Oliva, and Toni Granollers

Sketching Language: User-Centered Design of a Wizard of Oz Prototyping Framework . . . 422
Stephan Schlögl

Time Affordances and Physical Mobility in the Context of Ubiquitous Technologies . . . 426
Larissa Pschetz

Usability Evaluation in Software Development Practice . . . 430
Marta Kristin Larusdottir

Website Customization: Exploring a Tag-Based Approach in the Australian Banking Context . . . 434
Rajinesh Ravendran

Industrial Papers

Acceptance and Speed of Animations in Business Software . . . 438
Lisa Mattes, Martin Schrepp, Theo Held, and Patrick Fischer

Developing Mobile Remote Collaboration Systems for Industrial Use: Some Design Challenges . . . 442
Leila Alem and Weidong Huang

Experiences of Online Co-creation with End Users of Cloud Services . . . 446
Kaarina Karppinen, Kaisa Koskela, Camilla Magnusson, and Ville Nore

Interactive Installations: Tales from the Trenches . . . 450
Pedro Campos, Miguel Campos, and Joaquim A. Jorge
Interactive Posters

A Conceptual Framework for Modeling Awareness Mechanisms in Collaborative Systems . . . 454
Fernando Gallego, Ana Isabel Molina, Jesús Gallardo, and Crescencio Bravo

A Longitudinal Pilot Study to Evaluate Non-visual Icons in a Mobile Exertion Application . . . 458
Huimin Qian, Ravi Kuber, and Andrew Sears

A Remote Multi-touch Experience to Support Collaboration between Remote Museum Visitors . . . 462
Ernesto Arroyo, Valeria Righi, Roger Tarrago, and Josep Blat

A Systematic Evaluation of Mobile Applications for Diabetes Management . . . 466
C. Martin, D. Flood, D. Sutton, A. Aldea, R. Harrison, and M. Waite

An Integrated Approach to Develop Interactive Software . . . 470
Begoña Losada, Maite Urretavizcaya, and Isabel Fernández de Castro

Analyzing the Level of Presence While Navigating in a Virtual Environment during an fMRI Scan . . . 475
Miriam Clemente, Alejandro Rodríguez, Beatriz Rey, Aina Rodríguez, Rosa M. Baños, Cristina Botella, Mariano Alcañiz, and César Ávila

Applying the Affinto Ontology to Develop a Text-Based Emotional Conversation System . . . 479
Idoia Cearreta and Nestor Garay

Augmented Mirror: Interactive Augmented Reality System Based on Kinect . . . 483
Lucía Vera, Jesús Gimeno, Inmaculada Coma, and Marcos Fernández

Calls for Interaction: The More the Better? User Experience of 3D Carousel and Additional Interaction Techniques . . . 487
S. Shyam Sundar, Saraswathi Bellur, Jeeyun Oh, and Haiyan Jia

Can Persona Facilitate Ideation? A Comparative Study on Effects of Personas in Brainstorming . . . 491
Xiantao Chen, Ying Liu, Ning Liu, and Xiaojie Wang

Children with Special Needs: Comparing Tactile and Tangible Interaction . . . 495
César Ortea Suárez, Javier Marco, Sandra Baldassarri, and Eva Cerezo
Coupling Interaction and Physiological Metrics for Interaction Adaptation . . . 499
Luís Duarte and Luís Carriço

Dual Flow Interaction: Scene Flow and Data Flow, Dual Interaction in Art Installations . . . 503
José Ma Alonso-Calero, Arcadio Reyes-Lecuona, Jesús Marín-Clavijo, and Josefa Cano-García

Effects of Touch Screen Response Time on Psychological State and Task Performance . . . 507
Nozomi Sato and Kentaro Nakajima

Elaborating Analysis Models with Tool Support . . . 511
Gregor Buchholz and Peter Forbrig

End-User Support for Information Architecture Analysis in Interactive Web Applications . . . 515
Luis A. Rojas and José A. Macías

Enriching Evaluation in Video Games . . . 519
José L. González Sánchez, Rosa M. Gil Iranzo, and Francisco L. Gutiérrez Vela

Evaluation of 3D Object Manipulation on Multi-touch Surfaces Using Unconstrained Viewing Angles . . . 523
Daniel Mendes and Alfredo Ferreira

Evaluation of an Accessible Home Control and Telecare System . . . 527
Fausto Sainz, Judit Casacuberta, Marta Díaz, and Jaisiel Madrid

Experimenting and Improving Perception of 3D Rotation-Based Transitions between 2D Visualizations . . . 531
Maxime Cordeil, Christophe Hurter, and Stéphane Conversy

HTML 5 Support for an Accessible User-Video-Interaction on the Web . . . 535
Lourdes Moreno, Paloma Martínez, Ana Iglesias, and María González

Improving the Flexibility of Model Transformations in the Model-Based Development of Interactive Systems . . . 540
Christian Wiehr, Nathalie Aquino, Kai Breiner, Marc Seissler, and Gerrit Meixner

In Two Minds about Usability? Rationality and Intuition in Usability Evaluations . . . 544
Volker Thoma and Elliott P. White
Influence of Web Content Management Systems in Web Content Accessibility . . . 548
Juan Miguel López, Afra Pascual, Llucia Masip, Toni Granollers, and Xavier Cardet

Instructional Animations: More Complex to Learn from Than at First Sight? . . . 552
Anna Wong, Nadine Marcus, and John Sweller

Integrating Feedback into Wearable Controls . . . 556
Catia Sousa and Ian Oakley

Intelligent Playgrounds: Measuring and Affecting Social Inclusion in Schools . . . 560
Olga Lyra, Evangelos Karapanos, and Vassilis Kostakos

It Does Not Fitts My Data! Analysing Large Amounts of Mobile Touch Data . . . 564
Niels Henze and Susanne Boll

Measuring Cognitive Workload with Low-Cost Electroencephalograph . . . 568
Avi Knoll, Yang Wang, Fang Chen, Jie Xu, Natalie Ruiz, Julien Epps, and Pega Zarjam

Model-Based Accessible User Interface Generation in Ubiquitous Environments . . . 572
Raúl Miñón, Julio Abascal, Amaia Aizpurua, Idoia Cearreta, Borja Gamecho, and Nestor Garay

Multiuser Augmented Reality System for Indoor Exhibitions . . . 576
Jesús Gimeno, Ricardo Olanda, Bibiana Martínez, and Fernando M. Sánchez

Natural Interaction without Marks . . . 580
Carina S. González-González, David Cabrera-Primo, Melvin Gutiérrez, and José Sigut-Saavedra

NAVI – A Proof-of-Concept of a Mobile Navigational Aid for Visually Impaired Based on the Microsoft Kinect . . . 584
Michael Zöllner, Stephan Huber, Hans-Christian Jetter, and Harald Reiterer

OntoCompo: A Tool to Enhance Application Composition . . . 588
Christian Brel, Anne-Marie Dery-Pinna, Philippe Renevier-Gonin, and Michel Riveill
Personal Chart: Health Data Logging Made Easy with a Mobile Device . . . 592
Mikko Paldanius, Anu Lehtiö, Minna Karukka, and Pertti Huuskonen

Psychosocial Indicators via Hand Tremor . . . 596
Ted Selker, Patricia Collins, and Will Dayton

Recognizing Emotions from Video in a Continuous 2D Space . . . 600
Sergio Ballano, Isabelle Hupont, Eva Cerezo, and Sandra Baldassarri

Supporting Moodle-Based Lesson through Visual Analysis . . . 604
Diego Gómez-Aguilar, Miguel Conde-González, Roberto Therón, and Francisco García-Peñalvo

Supporting Transformations across User Interface Descriptions at Various Abstraction Levels . . . 608
Mauro Lisai, Fabio Paternò, Carmen Santoro, and Lucio Davide Spano

Texture Recognition: Evaluating Force, Vibrotactile and Real Feedback . . . 612
Jonatan Martínez, Arturo S. García, Diego Martínez, José P. Molina, and Pascual González

The Application of Preference Mapping in Aesthetic Website Evaluation . . . 616
Eleftherios Papachristos and Nikolaos Avouris

The Effect of Religious Identity on User Judgment of Website Quality . . . 620
Ons Al-shamaileh, Alistair Sutcliffe, and Antonella De Angeli

Toward a Better Guidance in Wearable Electronic Orientation Aids . . . 624
Slim Kammoun, Marc J.-M. Macé, Bernard Oriola, and Christophe Jouffrais

Towards a Context Oriented Approach to Ethical Evaluation of Interactive Technologies . . . 628
Sandra Burri Gram-Hansen, Henrik Schärfe, and Jens Vilhelm Dinesen

Towards a Framework of Co-Design Sessions with Children . . . 632
Emanuela Mazzone, Janet C. Read, and Russell Beale

Towards a Semantic Modelling Framework in Support of Multimodal User Interface Design . . . 636
Elena Tsiporkova, Tom Tourwé, and Nicolas González-Deleito
Towards an Experimental Framework for Measuring Usability of Model-Driven Tools . . . 640
José Ignacio Panach, Nelly Condori-Fernández, Arthur Baars, Tanja Vos, Ignacio Romeu, and Oscar Pastor

TROCAS: Communication Skills Development in Children with Autism Spectrum Disorders via ICT . . . 644
Margarida Lucas da Silva, Carla Simões, Daniel Gonçalves, Tiago Guerreiro, Hugo Silva, and Fernanda Botelho

Usability Assessment of a Multimodal Visual-Haptic Framework for Chemistry Education . . . 648
Sara Comai and Davide Mazza

Usability Planner: A Tool to Support the Process of Selecting Usability Methods . . . 652
Xavier Ferré and Nigel Bevan

User Experience Specification through Quality Attributes . . . 656
Llucia Masip, Marta Oliva, and Toni Granollers

Using Availability Heuristics in Game Design to Introduce Children to Energy Sufficient Behaviours at Home . . . 661
Nsemeke Ukpong, Privender Saini, and Abdullah Al Mahmud

UsiXML Extension for Awareness Support . . . 665
José Figueroa-Martínez, Francisco L. Gutiérrez Vela, Víctor López-Jaquero, and Pascual González

Web Accessibility Requirements for Media Players . . . 669
María González, Lourdes Moreno, Paloma Martínez, and Ana Iglesias

Organization Overviews

Christian Doppler Laboratory: Contextual Interfaces . . . 675
David Wilfinger, Alexander Meschtscherjakov, Astrid Weiss, and Manfred Tscheligi

Interaction Modeling at PROS Research Center . . . 677
José Ignacio Panach, Nathalie Aquino, and Oscar Pastor

Overview of the Brazilian Computer Society's Council for Human-Computer Interaction (CEIHC) . . . 679
Cristiano Maciel, Elizabeth Furtado, Marco Winckler, Milene Silveira, and Raquel Prates
Supporting a Multidisciplinary Digital Media Research Communitywith GRAND Aspirations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Kellogg S. Booth and Eleni Stroulia
The Centre for Internationalization and Usability: EnablingCulture-Centred Design for All . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
Jose Abdelnour-Nocera, Andy Smith, John Moore, Cecilia Oyugi,Souleymane Camara, Malte Ressin, Sujan Shresta, and Alison Wiles
Panels
Critical Design :: Is It Just Designers Doing Ethnography or Does It
Offer Something More for Interaction Design? . . . . . . . . . . . . . . . . . . . . . . . 685
Michael Smyth, Chris Speed, and Martin Brynskov
Everyone is a Designer, Even Executives! . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
Jannie Lai and Iram Mirza
Special Interest Groups (SIGs)
HCI for Peace: Promoting Peace and Preventing War through
Computing Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
Juan Pablo Hourcade, Natasha E. Bullock-Rest, Janet C. Read, andYoram Chisik
Interaction and Music Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
Sidney Fels and Michael Lyons
User Interface eXtensible Markup Language SIG . . . . . . . . . . . . . . . . . . . . . 693
Gaëlle Calvary, Olivier de Wasseige, David Faure, and
Jean Vanderdonckt
Tutorials
Activity-Centered Interaction Design: A Model-Driven Approach . . . . . . . 696
Larry Constantine
Analysis, Redesign and Evaluation with Teasing Apart, Piecing
Together . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
Clare J. Hooper
Context-Aware Adaptation of User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . 700
Vivian Genaro Motti and Jean Vanderdonckt
Designing the Search Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
Tony Russell-Rose
Improving the Content of User Requirements . . . . . . . . . . . . . . . . . . . . . . . . 704
Nigel Bevan
Model-Driven Inquiry: Beyond Ethnography and Contextual Inquiry . . . 706
Larry Constantine
Scenario-Based Requirements Engineering Facilitating Interaction
Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
Hermann Kaindl
Sketching Interactive Systems with Sketchify . . . . . . . . . . . . . . . . . . . . . . . . 710
Željko Obrenović
UIs Automatically Optimized for Your Smartphone . . . . . . . . . . . . . . . . . . 712
Hermann Kaindl
User Experience Evaluation – Which Method to Choose? . . . . . . . . . . . . . 714
Virpi Roto, Arnold Vermeeren,
Kaisa Väänänen-Vainio-Mattila, and Effie Law
User Experience Evaluation in Entertainment and Games . . . . . . . . . . . . . 716
Regina Bernhaupt
Workshops
5th Workshop on Software and Usability Engineering Cross-Pollination:
Patterns, Usability and User Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
Peter Forbrig, Regina Bernhaupt, Marco Winckler, andJanet Wesson
Accessible Design in the Digital World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
Gerhard Weber, Helen Petrie, and Jenny Darzentas
Building Bridges – HCI and Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
Achim Ebert, Gitta Domik, Nahum Gershon, and Gerrit van der Veer
Combining Design and Engineering of Interactive Systems through
Models and Tools (ComDeisMoto) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
Stefan Sauer, Kai Breiner, Heinrich Hussmann, Gerrit Meixner,Andreas Pleuss, and Jan Van den Bergh
Data-Centric Interactions on the Web . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
Paloma Díaz, Tim Hussein, Steffen Lohmann, and Jürgen Ziegler
Encouraging Serendipity in Interactive Systems . . . . . . . . . . . . . . . . . . . . . . 728
Stephann Makri, Elaine G. Toms, Lori McCay-Peet, and
Ann Blandford
Human Work Interaction Design for e-Government and Public
Information Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
Dinesh Katre, Pedro Campos, Torkil Clemmensen,
Rikke Ørngreen, and Annelise Mark Pejtersen
Improving the Content of User Requirements . . . . . . . . . . . . . . . . . . . . . . . . 732
Nigel Bevan
Mobile Accessibility Workshop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
Daniel Gonçalves, Luís Carriço, and Markel Vigo
Promoting and Supporting Healthy Living by Design . . . . . . . . . . . . . . . . . 736
Gordon Baxter, Lisa Dow, Stephen Kimani, and Nilufar Baghaei
Re-framing HCI through Local and Indigenous Perspectives . . . . . . . . . . . 738
José Abdelnour-Nocera, Masaaki Kurosu, Torkil Clemmensen,
Nic Bidwell, Ravi Vatrapu, Heike Winschiers-Theophilus,
Vanessa Evers, Rüdiger Heimgärtner, and Alvin Yeo
Software Support for User Interface Description Language . . . . . . . . . . . . 740
Adrien Coyette, David Faure, Juan González-Calleros, and
Jean Vanderdonckt
User Experience in Cars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
Manfred Tscheligi, Albrecht Schmidt, David Wilfinger,
Alexander Meschtscherjakov, and Andrew L. Kun
User Interaction Techniques for Future Lighting Systems . . . . . . . . . . . . . 744
Dzmitry Aliakseyeu, Jon Mason, Bernt Meerbeek, Harm van Essen,
Serge Offermans, and Andres Lucero
Values in Design – Building Bridges between RE, HCI and Ethics . . . . . . 746
Christian Detweiler, Alina Pommeranz, Jeroen v.d. Hoven, and
Helen Nissenbaum
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
P. Campos et al. (Eds.): INTERACT 2011, Part IV, LNCS 6949, pp. 600–603, 2011. © IFIP International Federation for Information Processing 2011
Recognizing Emotions from Video in a Continuous 2D Space
Sergio Ballano1, Isabelle Hupont1, Eva Cerezo2, and Sandra Baldassarri2
1 Aragon Institute of Technology, Department of R&D and Technology Services, María de Luna 7-8, 50018 Zaragoza, Spain
2 Universidad de Zaragoza, Computer Science and Systems Engineering Department, María de Luna 3, 50018 Zaragoza, Spain
{sballano,ihupont}@ita.es, {ecerezo,sandra}@unizar.es
Abstract. This paper proposes an effective system for continuous facial affect recognition from videos. The system operates in a continuous 2D emotional space characterized by evaluation and activation factors. For each video frame, it uses a classification method able to output the exact location (2D point coordinates) of a still facial image in that space. It also exploits Kalman filtering to control the 2D point's movement along the affective space over time and to improve the method's robustness by predicting the point's future locations during temporary facial occlusions or inaccurate tracking.
Keywords: Affective computing, facial expression analysis.
1 Introduction
Facial expressions are often evaluated by classifying still face images into one of the six universal "basic" emotions proposed by Ekman [1]: "happiness", "sadness", "fear", "anger", "disgust" and "surprise". This categorical approach fails to describe the wide range of emotions that occur in daily communication settings and ignores the intensity of emotions.
Given that humans inherently display facial emotions following a continuous temporal pattern [2], attention has more recently shifted towards sensing facial affect from video sequences. The study of facial expression dynamics highlights the limitations of the categorical approach: it offers a discrete list of emotions with no real link between them and no algebra, so every emotion must be studied and recognized independently.
This paper proposes a method for continuous facial affect recognition from videos. The system operates in a 2D emotional space characterized by evaluation and activation factors. It combines a classification method, able to output frame by frame the exact location (2D point coordinates) of the facial image shown, with a Kalman filtering technique that controls the 2D point's movement over time through an "emotional kinematics" model. In this way, the system works with a wide range of intermediate affective states and is able to define a continuous emotional path that characterizes the affective video sequence.
2 Facial Image Classification in a Continuous 2D Affective Space
The starting point of the system is the method for facial emotional classification presented in the authors' previous work [3]. The inputs to this method are the variations, with respect to the "neutral" face, of the set of facial distances and angles shown in Fig. 1. This initial method combines, through a majority voting strategy [3], the five classifiers most commonly used in the literature (Multilayer Perceptron, RIPPER, SVM, Naïve Bayes and C4.5) and finally outputs a confidence value CV(Ei) of the facial expression for each of Ekman's six emotions plus "neutral".
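As a rough sketch, a voting-based fusion of this kind can be expressed as follows. This is a simplified reading in which CV(Ei) is taken as the fraction of classifiers voting for Ei; the actual combination strategy is the one detailed in [3], and the function name below is our own.

```python
from collections import Counter

EMOTIONS = ["happiness", "sadness", "fear", "anger", "disgust", "surprise", "neutral"]

def combine_votes(predictions):
    """Fuse the labels emitted by the individual classifiers into one
    confidence value CV(Ei) per emotion, here read simply as the
    fraction of classifiers that voted for emotion Ei.

    predictions -- one emotion label per classifier, e.g. the outputs
    of the five classifiers (MLP, RIPPER, SVM, Naive Bayes, C4.5).
    """
    votes = Counter(predictions)
    n = len(predictions)
    return {e: votes.get(e, 0) / n for e in EMOTIONS}

# Example: three of the five classifiers agree on "happiness".
cv = combine_votes(["happiness", "happiness", "surprise", "happiness", "neutral"])
```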
Fig. 1. System's facial inputs
To enrich the system's emotional output with intermediate emotions, the evaluation-activation 2D model proposed by Whissell has been used. In her study, Whissell assigns a pair of values <evaluation, activation> to each of the approximately 9000 affective words that make up her "Dictionary of Affect in Language" [4]. The next step is to build an emotional mapping so that an expressive face image can be represented as a point on this plane whose coordinates (x, y) characterize the emotional content of that face.
The words corresponding to each of Ekman’s six emotions have a specific location (xi, yi) in the Whissell space. Thanks to this, the output of the classifiers (confidence value of the facial expression to each emotional category) can be mapped onto the space. This emotional mapping is carried out considering each of Ekman’s six basic emotions plus “neutral” as weighted points in the evaluation-activation space. The weights are assigned depending on the confidence value CV(Ei) obtained for each emotion. The final coordinates (x,y) of a given image are then calculated as the centre of mass of the seven weighted points.
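The centre-of-mass mapping can be sketched as follows. The anchor coordinates below are illustrative placeholders, not the actual <evaluation, activation> values from Whissell's dictionary, and the names are our own.

```python
# Illustrative <evaluation, activation> anchors for the seven categories
# (placeholder coordinates, NOT the values from Whissell's dictionary).
ANCHORS = {
    "happiness": (0.8, 0.5), "surprise": (0.4, 0.8), "fear": (-0.6, 0.6),
    "anger": (-0.7, 0.7), "disgust": (-0.6, 0.3), "sadness": (-0.7, -0.4),
    "neutral": (0.0, 0.0),
}

def map_to_plane(cv):
    """Centre of mass of the seven anchor points, weighted by the
    confidence values CV(Ei) from the classifier combination."""
    total = sum(cv.values()) or 1.0
    x = sum(w * ANCHORS[e][0] for e, w in cv.items()) / total
    y = sum(w * ANCHORS[e][1] for e, w in cv.items()) / total
    return x, y

# A face classified purely as "happiness" lands on that anchor:
point = map_to_plane({"happiness": 1.0, "surprise": 0.0, "fear": 0.0,
                      "anger": 0.0, "disgust": 0.0, "sadness": 0.0,
                      "neutral": 0.0})
```

A mixed confidence vector lands between anchors, which is exactly what yields the intermediate emotions described above.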
3 From Still Images to Video Sequences
Thanks to the use of the 2-dimensional description of affect, which supports continuous emotional input, an emotional facial video sequence can be viewed as a point (corresponding to the location of a particular affective state at time t) moving
through this space over time. In that way, the different positions taken by the point (one per frame) and its velocity over time can be related mathematically and modeled, finally obtaining an "emotional path" in the 2D space that intuitively reflects the user's emotional progress throughout the video.
3.1 Modeling Emotional Kinematics with a Simple Kalman Filter
For real-time "emotional kinematics" control, the Kalman filter is exploited. By analogy with classical mechanics, the "emotional kinematics" of the point in the Whissell space (x-position, y-position, x-velocity and y-velocity) is modeled as the system's state in the Kalman framework at time tk. In this way, the Kalman iterative estimation process, which follows the well-known recursive equations detailed in Kalman's original paper [5], can be applied to the recorded user's emotional video sequence, so that each iteration corresponds to a new video frame (i.e. to a new sample of the computed emotional path). A major advantage of using the Kalman filter to model the 2D emotional trajectory is its tolerance of small occlusions and inaccurate tracking: when a low level of confidence in the facial tracking is detected, the measurement is discarded and only the filter prediction is taken as the 2D point position.
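The per-frame predict/update cycle can be sketched as follows. This is a minimal illustration in which the two axes are filtered independently with a constant-velocity model; class and function names, as well as the noise parameters q and r, are our own illustrative choices, not those of the actual system.

```python
class AxisKalman:
    """1D constant-velocity Kalman filter for one coordinate of the
    emotional point. State = (position, velocity); q and r are
    illustrative process/measurement noise values, not tuned ones."""

    def __init__(self, x0=0.0, q=1e-3, r=1e-2):
        self.x, self.v = x0, 0.0
        self.p00, self.p01, self.p10, self.p11 = 1.0, 0.0, 0.0, 1.0  # covariance P
        self.q, self.r = q, r

    def predict(self, dt=1.0):
        # x = F x with F = [[1, dt], [0, 1]];  P = F P F^T + Q
        self.x += self.v * dt
        p00 = self.p00 + dt * (self.p01 + self.p10) + dt * dt * self.p11 + self.q
        p01 = self.p01 + dt * self.p11
        p10 = self.p10 + dt * self.p11
        p11 = self.p11 + self.q
        self.p00, self.p01, self.p10, self.p11 = p00, p01, p10, p11
        return self.x

    def update(self, z):
        # H = [1, 0]; gain K = P H^T / (H P H^T + r); P = (I - K H) P
        s = self.p00 + self.r
        k0, k1 = self.p00 / s, self.p10 / s
        innovation = z - self.x
        self.x += k0 * innovation
        self.v += k1 * innovation
        p00, p01 = (1 - k0) * self.p00, (1 - k0) * self.p01
        p10, p11 = self.p10 - k1 * self.p00, self.p11 - k1 * self.p01
        self.p00, self.p01, self.p10, self.p11 = p00, p01, p10, p11

def track_path(measurements, confidences, threshold=0.5):
    """Filter a sequence of (x, y) measurements; on low-confidence
    frames (occlusions, tracking failures) only the prediction is kept."""
    fx = AxisKalman(measurements[0][0])
    fy = AxisKalman(measurements[0][1])
    path = []
    for (mx, my), c in zip(measurements, confidences):
        fx.predict(); fy.predict()
        if c >= threshold:          # measurement trusted: correct the state
            fx.update(mx); fy.update(my)
        path.append((fx.x, fy.x))
    return path
```

During an occluded frame the loop skips `update`, so the point keeps moving along its predicted trajectory, which is the behaviour described above.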
3.2 Experimental Results
In order to demonstrate the potential of the proposed "emotional kinematics" model, it has been tested with a set of complex video sequences recorded in an unsupervised setting (VGA webcam quality, different emotions displayed contiguously, facial occlusions, etc.). A total of 15 videos from 3 different users, ranging from 20 to 70 seconds, were tested; from these, 127 key-frames were extracted to evaluate different key-points of the emotional path.
These key-points were annotated in the Whissell space by 18 volunteers. The collected evaluation data were used to define, for each image, a region where it is considered to be correctly located. The shape of each region is computed as a Minimum Volume Ellipsoid (MVE), i.e. the ellipsoid of smallest volume that covers the set of annotation points, following the algorithm described by Kumar and Yildirim [6]. The obtained MVEs are used to evaluate results at four different levels, as shown in Table 1. As can be seen, the success rate is 61.90% in the most restrictive case, i.e. under the ellipse criterion, and rises to 84.92% under the activation axis criterion. Finally, Fig. 2 shows an example of the emotional path obtained after applying the "emotional kinematics" model.
Table 1. Results obtained in an uncontrolled environment

  Criterion                                                           Success
  Ellipse (success if inside the ellipse)                              61.90%
  Quadrant (success if in the same quadrant as the ellipse centre)     74.60%
  Evaluation axis (success if in the same evaluation semi-axis
    as the ellipse centre)                                             79.37%
  Activation axis (success if in the same activation semi-axis
    as the ellipse centre)                                             84.92%
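The four acceptance criteria of Table 1 can be sketched as follows. The function name, ellipse centre and shape matrix below are illustrative; the real regions are the MVEs computed from the volunteers' annotations.

```python
def check_criteria(p, centre, a):
    """Evaluate a predicted 2D point against the four success criteria.

    `a` is the symmetric 2x2 shape matrix of the ellipse, so the point
    lies inside when (p - c)^T a (p - c) <= 1. Values are illustrative.
    """
    dx, dy = p[0] - centre[0], p[1] - centre[1]
    # quadratic form (p - c)^T a (p - c) for a symmetric 2x2 matrix
    quad_form = a[0][0] * dx * dx + 2 * a[0][1] * dx * dy + a[1][1] * dy * dy

    def same_sign(u, v):
        return (u >= 0) == (v >= 0)

    return {
        "ellipse": quad_form <= 1.0,
        "quadrant": same_sign(p[0], centre[0]) and same_sign(p[1], centre[1]),
        "evaluation_axis": same_sign(p[0], centre[0]),  # same evaluation semi-axis
        "activation_axis": same_sign(p[1], centre[1]),  # same activation semi-axis
    }

# Hypothetical ellipse centred at (0.5, 0.5) with semi-axes of length 0.5:
A = [[4.0, 0.0], [0.0, 4.0]]
r1 = check_criteria((0.6, 0.6), (0.5, 0.5), A)   # inside the ellipse
r2 = check_criteria((-0.5, 0.5), (0.5, 0.5), A)  # wrong evaluation semi-axis
```

Note how the criteria relax progressively: a point can miss the ellipse yet still count as a success under the quadrant or semi-axis criteria, which is why the success rates in Table 1 increase from left to right.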
Fig. 2. "Emotional kinematics" model response during the different affective phases of the video and the occlusion period. The dashed red line shows the emotional trajectory without Kalman filtering; the solid blue line shows the reconstructed trajectory using the Kalman filter.
4 Conclusions
This paper describes an effective system for continuous facial affect recognition from videos. The main feature distinguishing our work from others is that the output is not simply a classification into a set of discrete emotional labels: it extends the emotional information over an infinite range of intermediate emotions and allows a continuous, dynamic emotional trajectory to be detected from complex affective video sequences.
References
1. Keltner, D., Ekman, P.: Facial Expression of Emotion. In: Handbook of Emotions, pp. 236–249. Guilford Publications, Inc., New York (2000)
2. Petridis, S., Gunes, H., Kaltwang, S., Pantic, M.: Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities. In: Proceedings of the 2009 International Conference on Multimodal Interfaces, pp. 23–30 (2009)
3. Hupont, I., Cerezo, E., Baldassarri, S.: Sensing facial emotions in a continuous 2D affective space. In: Proc. IEEE Int. Conference on Systems, Man and Cybernetics (SMC), Istanbul, October 2010
4. Whissell, C.M.: The Dictionary of Affect in Language. In: Emotion: Theory, Research and Experience. Academic Press, New York (1989)
5. Kalman, R.: A New Approach to Linear Filtering and Prediction Problems. Transactions of the ASME – Journal of Basic Engineering 82, 35–45 (1960)
6. Kumar, P., Yildirim, E.A.: Minimum-Volume Enclosing Ellipsoids and Core Sets. Journal of Optimization Theory and Applications 126, 1–21 (2005)