
A note of caution regarding anthropomorphism in HCI agents


Computers in Human Behavior 29 (2013) 577–579



0747-5632/$ - see front matter © 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2012.11.023

E-mail addresses: [email protected] (K.E. Culley), [email protected] (P. Madhavan)

Kimberly E. Culley, Poornima Madhavan
Human Factors Psychology, Old Dominion University, Norfolk, VA, United States


Article history: Available online 16 January 2013

Keywords: HCI; Universal usability; Anthropomorphism; Affect as information; Trust; Agent

Abstract

Universal usability is an important component of HCI, particularly as companies promote their products in increasingly global markets to users with diverse cultural backgrounds. Successful anthropomorphic agents must have appropriate computer etiquette and nonverbal communication patterns. Because there are differences in etiquette, tone, formality, and colloquialisms across different user populations, it is unlikely that a generic anthropomorphic agent would be universally appealing. Additionally, because anthropomorphic characters are depicted as capable of human reasoning and possessing human motivations, users may place undue trust in these agents. Trust is a complex construct that plays an important role in a user's interactions with an interface or system. Feelings and perceptions about an anthropomorphic agent may impact the construction of a mental model about a system, which may lead to inappropriate calibration of automation trust based on an emotional connection with the anthropomorphic agent rather than on actual system performance.


Universal usability is an important component of HCI, particularly as companies promote their products in increasingly global markets. An increasingly diverse user population will bring with it different cultural, ethnic, racial, and linguistic backgrounds (Shneiderman & Plaisant, 2010). Jackson et al. (2003) note that there are individual differences relating to how people seek, use, and absorb information from the environment, and assert that there are cultural bases to how individuals perceive, organize, and evaluate such information. This highlights the importance of cognitive style in users' interaction with information. Cognitive style refers generally to stable preferences or strategies for perceiving, remembering, thinking, and problem solving (Messick, 1976), as well as how individuals relate to others (Witkin, Moore, Goodenough, & Cox, 1977). These individual differences would be applicable to how an individual acquires, perceives, and processes the information contained in an interactive interface, such as an anthropomorphic agent.

Successful anthropomorphic agents must have appropriate computer etiquette and nonverbal communication patterns, which shape the user's expectations about acceptable behaviors and interactions with the system (Miller, 2002). Generally, Shneiderman and Plaisant (2010) note that anthropomorphic characters require socially appropriate affect, as well as well-timed head movements, nods, gaze patterns, and gestures, in order to be successful and accepted by the user. Because there are differences in etiquette, tone, formality, and colloquialisms across different user populations, it is unlikely that an anthropomorphic agent would be universally appealing. Jackson et al. (2003) assert that performance of cognitive tasks is generally more successful when the presentation of information aligns with an individual's cognitive style. Because cognitive style includes cultural nuances, an unappealing (or worse) anthropomorphic agent could have a detrimental effect on HCI and on the cognitive tasks it was intended to support.

Additionally, because anthropomorphic characters are depicted as capable of human reasoning and possessing human motivations, users may place undue trust in these agents. Trust is a complex construct that plays an important role in a user's interactions both with other humans and with aspects of an interface or system. Lee and See (2004) define trust as confidence that an agent will help an individual achieve his or her goals or desired state in a situation characterized by ambiguity and uncertainty. Trust is assumed to function as a social decision heuristic by facilitating choice under uncertainty (Lee & See, 2004). If the human operator views an anthropomorphic agent as having shared motivations and goals, his or her trust in the system that the agent represents may be poorly calibrated. Lee and See (2004) note that calibration of trust bears a strong relationship to Parasuraman and Riley's (1997) discussion of misuse and disuse of automation, denoting overtrust and distrust, respectively, as examples of poor calibration.
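
To make the calibration idea concrete, the following minimal Python sketch (not part of the original article; the function name and the 0.1 tolerance are arbitrary illustrative assumptions) labels an operator's subjective trust as calibrated, overtrust, or distrust relative to a system's observed reliability, echoing Parasuraman and Riley's (1997) misuse/disuse distinction.

def calibration_label(trust: float, reliability: float, tolerance: float = 0.1) -> str:
    """Label subjective trust relative to observed reliability (both on a 0-1 scale)."""
    gap = trust - reliability
    if gap > tolerance:
        return "overtrust: misuse risk"
    if gap < -tolerance:
        return "distrust: disuse risk"
    return "calibrated"

# An appealing anthropomorphic agent may inflate subjective trust beyond
# what the system's actual performance warrants.
print(calibration_label(trust=0.9, reliability=0.7))   # overtrust
print(calibration_label(trust=0.5, reliability=0.8))   # distrust
print(calibration_label(trust=0.75, reliability=0.8))  # calibrated

In this toy framing, an emotional connection with the agent shifts the trust value upward without any change in reliability, which is exactly the miscalibration cautioned against here.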

Lee and See (2004) note that conventional approaches to trust frequently undervalue the impact of affect and inflate the influence of cognitive capacities. This assertion is applicable to both human–computer trust and human–human trust. Rempel, Holmes, and Zanna (1985) propose that human–human trust involves predictability, dependability, and faith in the future of the relationship. In order to achieve these qualities, the trustee must be considered reliable and predictable, concerned with one's needs, and worthy of confidence. These characteristics may be viewed in parallel with the means by which trust in automation is developed. Both human–human and human–computer relationships require a sense of dependability and predictability, whereby the trustee will perform as expected on a regular basis. This relates to automation reliability, which increases operator trust in the system (Bailey & Scerbo, 2007). Feelings and perceptions about an anthropomorphic agent may be used in the construction of a mental model about a system, which may lead to inappropriate calibration of automation trust based on an emotional connection with the anthropomorphic agent rather than on actual system performance.

Nass and Moon (2000) examined the application of social rules and expectations to technological systems. Even though a human is obviously a human and a computer is obviously a computer, and the two are discrete entities, individuals continue to attribute human characteristics to non-human systems. Harrison and Hall (2010) assert that anthropomorphism is "likely a byproduct of the ability to draw upon one's own beliefs, feelings, intentions, and emotions, and apply the knowledge of these experiences to the understanding of the mental states of other species" (p. 35). Given that the breadth of information necessary to give an operator the impression of system transparency is impractical and likely to cause technology overload (Sarter & Woods, 1995), human operators may project animate characteristics onto automated systems in order to generate a more user-friendly mental model. The attribution of anthropomorphic traits to automated systems can be reinforced when the user experiences automation surprise and perceives the system as animate and acting independently of operator input and intentions (Sarter & Woods, 1995).

Users or operators assign human characteristics and traits to systems in the course of interactions. This is evident in Lee and Moray's (1992) discussion of process information as a basis of trust, whereby the operator bases trust on qualities and characteristics attributed to the agent rather than on specific behaviors. Generally, trust is considered provisional, based on system performance, processes, and purpose. Trust is conditional upon the system's perceived ability and competence to achieve the operator's goals, and an operator's use of information to build trust requires an understanding of the system as well as of its perceived capability to achieve those goals.

The attribution of characteristics to an agent or system relates to the correspondence bias, which refers to the human tendency to explain behaviors in terms of internal dispositions, traits, and motives, and to underestimate the influence of external situational factors (Gawronski, 2004; Gilbert & Malone, 1995). The inclusion of an anthropomorphic agent only strengthens and supports the human tendency to assign human motivations, tendencies, rationale, and capabilities to non-human systems. This can have a substantial impact on the development of operator trust in a system. That trust may be unsubstantiated if it is based on characteristics the human operator has attributed to the system as a result of anthropomorphism, rather than rooted in experience with the system itself; this is particularly the case if observations are made when the agent's behavior is highly constrained by situational factors. In human–human interactions, motives and intentions contribute heavily to an individual's construal of another, but such personal intentions are frequently not apparent and must be inferred (Gilbert & Malone, 1995). Likewise, a lack of system transparency leads to the extrapolation of system characteristics from more tangible and apparent attributes or behaviors of the agent that are associated with, but not analogous to, those characteristics.

A final consideration is the role of affect engendered by interaction with the anthropomorphic agent. Loewenstein and Lerner (2003) note that affect can have either a direct or an indirect impact on decision making. A direct impact involves the effect of immediate affect on a decision or behavior. An indirect impact, on the other hand, involves the subjective modification of risk assessments or judgments of event probabilities by an individual. McCormick and McElroy (2009) note that this can be important in light of affect-as-information theory, whereby negative affect functions as a proximal factor in judgments by serving as an indication that an individual's desired state or goal for a given task is not being achieved. If an anthropomorphic agent is not culturally attuned to the user, it may engender negative affect that implies to the user that he or she is not properly using the interface or achieving his or her goals with the system. However, the affect-as-information heuristic may misinform the user if his or her affective discomfort is due strictly to feelings about the anthropomorphic agent and not to a lack of progress toward the goal.
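
As a purely illustrative sketch (not from the article; the component names and the 0.5 threshold are assumptions made for this example), the Python fragment below separates negative affect into a task-related component and an agent-related component. The affect-as-information heuristic reads the total as evidence that the goal is not being met, which misleads when the agent-related component dominates.

def goal_problem_signaled(task_frustration: float, agent_discomfort: float,
                          threshold: float = 0.5) -> bool:
    """Return True if total negative affect would be read as 'my goal is not being met'."""
    return (task_frustration + agent_discomfort) > threshold

# The task itself is going fine, but a culturally mismatched agent produces
# discomfort, so the heuristic still fires and misinforms the user.
print(goal_problem_signaled(task_frustration=0.1, agent_discomfort=0.6))  # True (misleading)
print(goal_problem_signaled(task_frustration=0.1, agent_discomfort=0.0))  # False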

It is important to bear in mind that anthropomorphism does retain some important functions in HCI and should not be advised against in all contexts and situations. Anthropomorphic agents can have some benefit in the context of educational software for children; Shneiderman and Plaisant (2010), for example, assert that anthropomorphic characters can add visual appeal and increase user engagement. Strafling, Fleischer, Polzer, Leutner, and Kramer (2010) cite numerous examples of successful integrations of realistic and cartoon-like humans and animals in e-learning software, including cases in which a pedagogical agent can be beneficial to adult learners as long as appropriate criteria for the agents are applied. However, their utility is to some degree context-specific; given that anthropomorphic agents can increase anxiety and reduce performance for adults (Shneiderman & Plaisant, 2010), great caution should be exercised when anthropomorphism is incorporated in interfaces designed for an adult user population. Specifically, because of an increasingly diverse user population, care should be taken to ensure universal usability.

It is possible that future developments in user customizability of agents may alleviate some of the concern regarding universal usability engendered by anthropomorphism. For example, an agent whose physical characteristics the user has selected and that communicates using a natural language system would likely be well received, provided the natural language of the agent is also that of the user. Advances in home computer processor and memory resources may provide opportunities for more realistic intonation, gaze patterns, facial expressions, hand gestures, and postures, which may also benefit the visual appeal of the agent and general psychological receptiveness to it. However, it is also important to bear in mind that as the agent becomes increasingly morphologically similar to a human, operators are likely to engage in the correspondence bias more frequently by ascribing human motivations, reasoning abilities, and capabilities to this non-human system. Operator trust in the system will subsequently be impacted, which will affect overall performance with the system if that trust is grounded in agent characteristics rather than in system functioning. Because individual differences in both human–human and human–computer interaction exert such a substantial impact on integrated human–system functioning, it is necessary to account for this factor when designing interfaces involving anthropomorphic agents.
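
As a closing illustration, the hypothetical profile below (a minimal Python sketch; all field names are invented for illustration and do not correspond to any existing system) shows one way user-selected agent characteristics might be represented, with a trivial check that the agent at least shares the user's natural language.

from dataclasses import dataclass

@dataclass
class AgentProfile:
    language: str                  # natural language the agent uses
    formality: str                 # e.g. "formal" or "casual"
    appearance: str                # e.g. "cartoon", "realistic", "none"
    gestures_enabled: bool = True  # head nods, gaze patterns, hand gestures

def matches_user_language(profile: AgentProfile, user_language: str) -> bool:
    """Minimal check: the agent should at least communicate in the user's language."""
    return profile.language == user_language

profile = AgentProfile(language="es", formality="formal", appearance="cartoon")
print(matches_user_language(profile, user_language="es"))  # True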

References

Bailey, N. R., & Scerbo, M. W. (2007). Automation-induced complacency for monitoring highly reliable systems: The role of task complexity, system experience, and operator trust. Theoretical Issues in Ergonomics Science, 8(4), 321–348.

Gawronski, B. (2004). Theory-based bias correction in dispositional inference: The fundamental attribution error is dead, long live the correspondence bias. European Review of Social Psychology, 15, 183–217.

Gilbert, D. T., & Malone, P. S. (1995). The correspondence bias. Psychological Bulletin, 117(1), 21–38.

Harrison, M. A., & Hall, A. E. (2010). Anthropomorphism, empathy, and perceived communicative ability vary with phylogenetic relatedness to humans. Journal of Social, Evolutionary, and Cultural Psychology, 41(1), 34–48.

Jackson, L. A., von Eye, A., Biocca, F. A., Carbatsis, G., Fitzgerald, H. E., & Zhao, Y. (2003). Personality, cognitive style, demographic characteristics, and Internet use: Findings from the HomeNetToo project. Swiss Journal of Psychology, 62(2), 79–90.

Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human–machine systems. Ergonomics, 35, 1243–1270.

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46, 50–80.

Loewenstein, G., & Lerner, J. S. (2003). The role of affect in decision making. In R. J. Davidson et al. (Eds.), Handbook of affective sciences (pp. 619–642). New York: Oxford University Press.

McCormick, M., & McElroy, T. (2009). Healthy choices in context: How contextual cues can influence the persuasiveness of framed health messages. Judgment and Decision Making, 4(3), 248–255.

Messick, S. (1976). Personality consistencies in cognition and creativity. In S. Messick (Ed.), Individuality in learning (pp. 4–23). San Francisco, CA: Jossey-Bass.

Miller, C. A. (2002). Definitions and dimensions of etiquette (AAAI Report No. FS-02-02). Retrieved from American Association for Artificial Intelligence website: <http://aaaipress.org/Papers/Symposia/Fall/2002/FS-02-02/FS02-02-001.pdf>.

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230–253.

Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49, 95–112.

Sarter, N. B., & Woods, D. D. (1995). How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Human Factors, 37(1), 5–19.

Shneiderman, B., & Plaisant, C. (2010). Designing the user interface: Strategies for effective human–computer interaction (5th ed.). United States of America: Addison-Wesley.

Strafling, N., Fleischer, I., Polzer, C., Leutner, D., & Kramer, N. C. (2010). Teaching learning strategies with a pedagogical agent: The effects of a virtual tutor and its appearance on learning and motivation. Journal of Media Psychology, 22(2), 73–83.

Witkin, H. A., Moore, C. A., Goodenough, D. R., & Cox, P. W. (1977). Field-dependent and field-independent cognitive styles and their educational implications. Review of Educational Research, 47, 1–64.