
A Model of Motivation Based on Empathy for AI-Driven Avatars in Virtual Worlds

Genaro Rebolledo-Mendez, Sara de Freitas
Serious Games Institute, Coventry University, Coventry, UK
{GRebolledoMendez, SFreitas}@coventry.ac.uk

Alma Rosa Garcia Gaona
Facultad de Informática, Universidad Veracruzana, Jalapa, Veracruz, México
[email protected]

Abstract—This paper presents a model of motivation based on empathy for artificial intelligence (AI)-driven avatars in virtual worlds. The model is theoretically inspired, based on understanding and sharing other people's intentions. The AI architecture presented is aimed at providing an avatar with the capabilities of motivational coaching in learning situations. The model consists of two general modules for understanding and sharing motivation; understanding consists of shaping a model of the world, while sharing consists of deciding the best course of action and reaction in the virtual world. The AI architecture and its associated algorithms are presented. Future areas for research are defined, including practical applications of this architecture for serious games and activities in virtual world applications.

Keywords—empathy; motivation; serious games

I. INTRODUCTION

Affective processes have been largely overlooked in traditional areas of AI application. However, the introduction of the term 'affective computing' [1] has brought about an interest in understanding affective processes in humans. This has led to new mechanisms to simulate such processes or to endow computer-based applications with tools to deal with them. The aim of this investigation is to present the basis for defining a motivational model that allows a virtual avatar to act as a motivational coach in serious (non-leisure, educational) games and in virtual world applications (such as Second Life). The aim is to provide a generalizable architecture that could be applied to different learning situations. The resulting avatar may be conceptualised as a motivational coach whose feedback is adapted to the learning task at hand and to the avatar interacting with it, in order to personalize instruction. The interest in motivational issues arises from their potential impact on the effectiveness of serious games in virtual worlds. The emergence of serious games over the last five years has produced both the potential for developing new learning technologies based upon immersive and interactive interfaces and many unanswered questions regarding the frameworks, theories, methods and design strategies needed to make serious games more effective and useful in education, health and training. However, an area that has seen increased development of applications strongly rooted in frameworks, theories and methods is motivation in Intelligent Tutoring Systems (ITSs). In particular, the idea of utilizing synthetic avatars as tutors and theoretically inspired motivational models has informed the implementation of the motivational model for serious games. The research questions governing this investigation centre upon motivational issues in virtual-world avatars: in particular, how to understand another avatar's motivation, and how to implement avatar motivation and share it in virtual worlds. To cast some light on these issues, a model of motivation for avatars is presented. The motivational model is based upon the concept of 'shared intentionality' [2]. In this investigation, motivation (from the Latin motus, past participle of movere, to move) is understood as the effort spent by avatars (both synthetic and human-controlled) in a shared, virtual space.

The paper is organized in four sections. To provide the foundations of motivational research in educational technology, section two presents research on motivational issues in ITSs. To address the issue of understanding and sharing motivation in avatars, section three proposes a model of motivation, defining a set of steps for understanding motivation and a mechanism for sharing motivation based upon intentions. Finally, section four presents the conclusions and defines a future research agenda.

II. MOTIVATION IN COMPUTER-AIDED INSTRUCTION

An Intelligent Tutoring System is a computer-based application designed and developed to explicitly support learning in a specific topic. ITSs have traditionally been rooted in pedagogical frameworks. For example, LOGO [3] was designed considering a constructionist approach and has been used to teach mathematics [4]. A more recent approach to ITSs has addressed the learner's motivation [5, 6] to aid instruction. There are three associated processes in addressing motivation: recognition, reaction [7] and the 'plausibility problem' [8]. This last problem refers to the fact that, even if the first two processes can be implemented mechanically in a computer, the question remains of whether the resulting reactions are believable [8]. This problem needs to be investigated as a topic for further research.

2009 Conference in Games and Virtual Worlds for Serious Applications
978-0-7695-3588-3/09 $25.00 © 2009 IEEE
DOI 10.1109/VS-GAMES.2009.33


Therefore, this investigation addresses the first two aspects of the problem of motivation: recognition and reaction. The rationale for framing the motivational model, or architecture, within a theoretical model of intention sharing [2] is to endow the AI-driven avatar with beliefs and intentions that can be verbally expressed as well as acted out in the virtual world. For example, this architecture allows the avatar to hold conversations of the type 'I believe you want to open the treasure chest¹, have you considered its implications…'. This possibility could give interactions between AI- and human-driven avatars in virtual worlds a more realistic quality, by further extending the conversation to tackle motivational states perceived as low.

¹ A hypothetical situation involving the opening of a treasure chest is presented throughout the paper. It represents only an action in a learning activity framed within a serious game.

To date, various attempts have been made to include recognition and reaction mechanisms grounded in theoretical motivational frameworks with the aim of motivating the learner. GUIDON, for example, is an expert system that advises practitioners on the diagnosis of bacterial infections [9]. Its design includes mechanisms that react to sub-optimal diagnoses made by users: GUIDON checks the practitioner's conclusions against a database and is programmed to intervene, that is, to present performance feedback, only if the expected diagnosis was not reached. These reactions, although not expressly motivating, constitute the first steps towards recognizing degrees of effort while using ITSs. A similar example is the LISP tutor, which analyzed students' problem-solving episodes by comparing them to an optimal model [10]. Differences between the students' and the system's solutions are addressed by taking a suitable set of remedial actions based on performance feedback.

One of the first works explicitly addressing the issue of modelling motivation with AI tools in ITSs is MORE [5]. In this work, a clear distinction is made between cognitive and motivational modelling, discriminating between 'domain-based' and 'motivation-based' techniques used in ITSs. Del Soldato and du Boulay [5] elaborated their work on the assumption that domain-based and motivation-based techniques could sometimes be in conflict, hence they signalled the need for a reconciliation mechanism to resolve the resulting discrepancies [5]. Their model of motivation was developed considering an eclectic mix of motivational frameworks such as the ARCS model [11] and the work on learning and fun [8]. Subsequent works addressing the topic of motivation in ITSs considered the issue of modelling the learner's motivational state in order to provide an appropriate and natural reaction to the perceived state of motivation or de-motivation. A particularly interesting example is the work of de Vicente and Pain [16] who, departing from a set of elements identified as relevant for addressing motivation, derived a set of rules from human experts about how to detect motivational states.

Another approach to motivational issues in ITSs has been the belief that some elements of computer-based instruction, such as the use of synthetic avatars, bring about motivational benefits per se [12, 13]. According to this view of motivation, it is believed that agents motivate learners by virtue of being attractive and engaging. Pedagogical agents are a technique applied in ITSs to endow the learning environment with a form of human-computer interaction centred on exchanges between the learner and the agent [13]. The idea is to provide mechanisms that make the computer end of the interaction more human-like. Large-scale evaluations of pedagogical agents have shown significant improvements in the learning of tasks [12]. There are claims that agents add benefits to the learning experience if they are able to convey emotions [14], exploit non-verbal communication such as gestures [12] and, more importantly, if they are able, with the use of AI techniques, to be believable (as if a human were controlling them). Johnson and colleagues regard the believability of agents as a motivating property in the learning environment, prompting learners to spend more time in the ITS [13].

With the advent of immersive virtual worlds such as Linden Lab's Second Life or Forterra’s Olive, the possibility of implementing autonomous avatars capable of possessing motivational characteristics (borrowed from ITSs) in their interaction with other avatars opens up new opportunities for learning. Being non-specific, virtual worlds offer the possibility of defining pedagogically based learning situations framed in the context of a serious game or virtual world activity. Under this scenario, it could be possible to include avatars with agent-like characteristics, which are aware of the cognitive and motivational needs of the learner/gamer in virtual worlds. The topic of cognitive modelling is not addressed in this paper, as it is specific to a learning situation. The issue of providing avatars with a model of motivation is addressed in the following sections where a model of motivation is defined for virtual avatars. The main question leading this paper is: what motivational traits should be considered, and how can they be conceptualised and implemented?

III. MOTIVATION IN VIRTUAL WORLDS

Research in motivation for ITSs has shown the benefits of addressing this topic in computer-aided instruction. Two types of benefits can be identified: on the one hand, the use of synthetic avatars has been shown to be intrinsically motivating, helping the learner to achieve higher degrees of learning [12]. On the other hand, the use of motivational modelling [5, 15] has shown the appropriateness of applying AI techniques, modelling in particular, to read the learner's motivation [15]. By framing an agent's reactions in theories of motivation, it has been possible to provide the right amount of motivating strategies for varying degrees of motivational state. Virtual worlds afford the possibility of defining pedagogically sound serious games where AI-driven avatars are part of the game's narrative and are aware of the gamer/student's motivation. It is desirable that this avatar be general, i.e. that it play the role of a motivational coach in different learning circumstances, as an actor performs various roles depending on the film at hand. Such an avatar possesses cognitive modelling abilities, which allow it to make an informed decision about the gamer's skills in a learning topic and, in so doing, to personalise the interaction.


To do so, the avatar will also possess motivational modelling capabilities. In addition, however, the avatar will possess its own motivation, which will be to teach, guide and motivate the learner. As such, the avatar can be thought of as a teacher with motivational coaching capabilities for teaching in different domains. Under this scenario, the work on understanding and sharing intentions [2] was chosen as a theoretical framework to inform the motivational model for virtual worlds.

A. Understanding and sharing another avatar's motivation

Previous research in motivation for ITSs has addressed motivational phenomena in relation to the problems of recognition, reaction and believability. Various works have been presented with regard to the recognition problem [16, 17]. Others have dealt with the reaction problem, not only in classroom situations but also in ITSs [18, 19]. Yet other ITSs have dealt with both the recognition and reaction problems [5, 15]. The problem of believability has also been addressed elsewhere [12, 20]. As stated before, this research deals only with the problems of recognition and reaction, by endowing the AI-driven avatar with a model of motivation based on the concepts of understanding (recognition) and sharing (reaction) motivational states in virtual worlds. To do so, Tomasello et al.'s [2] work served as an inspiration to develop a new model of motivation. The work reported by McQuiggan and Lester [23] takes a similar approach to our study, by training synthetic agents to be empathic in the general sense of the word. The motivational model proposed here likewise defines an avatar that could be seen as empathetic, in the sense that it understands another avatar's motivations and shares its own. Unlike McQuiggan and Lester's [23], however, the model proposed here is theoretically inspired rather than developed from a data-driven model.

The motivational model outlined in this paper is based upon the concepts of understanding and sharing intentions [2]. According to this concept, understanding involves an AI-driven avatar (the observer) perpetually monitoring another, human-driven avatar (the actor)² and coming to an understanding of its motivation and reacting appropriately. The diagram in Figure 1 illustrates the process of understanding and sharing, which is particularized to motivation in our research; a code sketch of the observer's state follows the figure. The observer, to the left, perpetually monitors the world and the actions played by the actor, to the right. The observer's objective is to motivate the actor in pursuing a learning task. A model of motivation is constructed based upon understanding the actor's motivation towards the learning task. The decision-making process involves defining a course of action, considering the observer's objectives in the light of the current model of reality, and selecting a set of skills to motivate the actor if the decision reached indicates that intervention in the world is required. Tomasello and colleagues' [2] framework serves as a way of organizing the aspects of recognition and reaction. Moreover, the possibility of externalising intentions based on the decisions reached might provide an extra degree of believability in the interactions between AI- and human-driven avatars. Having intentions is useful as it allows the observer to express its understanding of the world and gives it the ability to act out intentions in its interactions with other avatars. For example, the AI-driven avatar might express the intention of providing the gamer with more activities of a given type because it believes that the degree of expertise required to move further in the game has not yet been achieved.

² The terms observer and actor are used throughout the rest of this paper to refer to the AI-driven avatar and the human-driven avatar, respectively.

Figure 1. A diagram of the observer perpetually monitoring the actor and reacting accordingly
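Figure 1 can be read as a small set of components held by the observer: its own objectives, a set of motivating skills, and the model of reality it builds from monitoring. The Python sketch below shows that state; the class and field names are chosen here for illustration and are not defined in the paper.

from dataclasses import dataclass, field

@dataclass
class Observer:
    """State implied by Figure 1: the observer's objectives, the motivating
    skills at its disposal, and the model of reality built by monitoring."""
    objectives: list                                      # e.g. ["motivate the actor"]
    skills: list                                          # motivational reactions (cf. Table II)
    model_of_reality: dict = field(default_factory=dict)  # world state + actor actions

    def observe(self, world_state, actor_actions):
        # Understanding: update the model of reality from the virtual world
        # and from the actions played by the actor.
        self.model_of_reality.update(world=world_state, actions=actor_actions)

    def decide(self):
        # Decision-making: weigh the objectives against the current model of
        # reality and choose whether intervention in the world is required.
        if self.model_of_reality.get("motivation") == "low":
            return "intervene", self.skills
        return "wait", []

# Illustrative usage only.
coach = Observer(objectives=["motivate the actor in the learning task"], skills=["praise", "hint"])
coach.observe({"node": "A"}, ["inspect_chest"])
print(coach.decide())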

B. A model of motivation

Sharing intentions involves both expressing the current intention and acting it out. An example of sharing an intention consists of a chat message stating: 'I am going to help you open the treasure chest as I believe you have not yet figured out how to open it…'. Following this statement, the AI-driven avatar embarks on a series of steps aimed at showing the actor how to open the treasure chest. These reactions involve motivating feedback, which is in line with the observer's objective of being a motivational coach. Motivation research in ITSs has implicitly given synthetic agents the role of 'motivating' learners. By endowing synthetic agents with theoretically inspired [5, 15, 21] and data-driven [22] knowledge bases, synthetic agents play the role of a motivational coach. This role inadvertently gives them a motivation or objective, which is to motivate the student to learn.

The motivational model proposed here explicitly gives the avatar the role of a motivational coach, whose 'motivation' is precisely to monitor another avatar's motivation, assess the current situation and react accordingly. To illustrate this, a model of motivation based on an understanding of the world, and a limited way of sharing it by reacting in a manner appropriate to a motivational coach, is described here. The main objective is to create an AI-driven motivational coach for serious games and activities conducted in the virtual world. As such, the avatar's objective is to optimize or maximize the actor's motivation, that is, to make them … 'willingly put forth the necessary effort to develop and apply their skills and knowledge' [18].

In order to perpetually monitor the actor, which is the prerequisite for forming a model of the world, Tomasello et al. [2] propose that the observer follow this series of steps (the algorithm for understanding):

Step 1. The actor is an animated entity capable of moving and performing more complex actions. The observer is able to distinguish the movement of the actor from the movement of inanimate objects (such as a tree).

Step 2. The actor pursues goals. In a given context, the actor will show evidence of deliberate movements. If an error occurs, the actor persists in overcoming it until a successful outcome is achieved. The observer is able to distinguish deliberate actions from other actions.

Step 3. The actor follows plans. In a given context, the actor chooses different skills (for example, chatting) in order to achieve a goal. A plan is conceived as a logical set of actions to achieve a goal. The observer is capable of identifying plans and assessing the degree of success of the actor's performance with regard to the plan.

Evidence of goal-oriented (directed) behaviour will be obtained by training neural networks to identify persistent behaviour in a learning task. The data-driven approach used in the empathy model [23] seems an appropriate example for training such networks. A binary variable of perceived persistence is obtained in this way and serves as an input to the motivational model. Understanding a plan is a more complicated endeavour, for which data mining techniques will be useful. Establishing whether the actor has behaved according to a specific learning trajectory would provide evidence of planning, that is, of the conception of a logical sequence of steps towards a goal. It involves defining data-driven queries over the actor's log files. The approach taken for this purpose is similar to learnograms [24]; however, instead of visualising students' behaviours, the information retrieved (not strictly a learnogram, but the analogy serves to illustrate its usefulness) is compared to different pre-defined learning trajectories. This comparison informs the model's decision on how to proceed. A ranked sequence of actions to achieve a goal is an example of a learning trajectory. The mechanisms described above implement the three steps identified through the framework [2] for understanding and sharing, and also define a model of reality (see Figure 1). The work of Tomasello and colleagues [2] has also informed the other aspect of motivational modelling: managing the observer's own goals and sharing them. There are goals associated with this sub-process, as well as a set of skills, a decision-making episode and a model of reality. As suggested earlier in the paper, the objective of this avatar is to motivate the actor using both the set of skills at hand and the model of reality drawn from the virtual world and the actor's actions. By doing so, the motivational model endows the avatar with a purpose, which is to serve as a motivational coach. The model of the world provides the avatar with a general idea of the actor's motivational state. However, extra information is needed to provide a diagnosis of the actor's motivation, in particular whether help is needed and a degree of confidence. Although this paper considers a set of three input variables, it would be possible to extend this set and employ a larger number of diagnosis variables [16]. On the reaction (sharing) side of the interaction, a set of theoretically driven skills provides the observer avatar with a database supporting its objective [11, 17, 18, 25]. However, the rules for understanding and the motivational knowledge base for sharing cannot be placed outside a framework. Traditionally, ITSs are organized around learning situations that interconnect to form a learning curriculum; a serious game with learning objectives is similarly structured. To explain how the understanding and sharing mechanisms will be deployed, the concept of a learning curriculum is presented next in association with a Bayesian Belief Network (BBN).
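The trajectory comparison just described can be sketched as a simple similarity check between the action sequence retrieved from the actor's logs and a set of pre-defined trajectories. The sketch below is illustrative only: the action labels, the trajectories and the 0.5 threshold are assumptions, and difflib's sequence matching stands in for the data mining step, which the paper does not specify.

import difflib

# Hypothetical pre-defined learning trajectories: ranked sequences of actions
# that a well-planned attempt at a learning node might follow.
TRAJECTORIES = {
    "guided": ["read_clue", "inspect_chest", "try_key", "open_chest"],
    "exploratory": ["inspect_chest", "try_key", "read_clue", "try_key", "open_chest"],
}

def best_matching_trajectory(observed_actions):
    """Compare the actions retrieved from the actor's log against each
    pre-defined trajectory and return the closest one with its similarity."""
    scores = {
        name: difflib.SequenceMatcher(None, observed_actions, reference).ratio()
        for name, reference in TRAJECTORIES.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

# Example: an observed log fragment loosely follows the exploratory trajectory.
observed = ["inspect_chest", "try_key", "try_key", "open_chest"]
name, score = best_matching_trajectory(observed)
has_plan = score >= 0.5   # evidence of planning; the threshold is an assumption
print(name, round(score, 2), has_plan)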

A learning situation involves modules which form a learning curriculum, considering the learning objectives for a particular topic. For each module, an assessment of the actor's motivational state is determined and a Bayesian Belief Network (BBN) calculates the probabilities that the actor will be motivated in easier or more difficult modules. Figure 2 depicts a Directed Acyclic Graph (DAG), which serves as an example of a BBN consisting of five learning nodes. The relationships between these nodes are indicated by the arrows, and the level of difficulty is given by a letter (A is less difficult than B, and so on). For every node, the avatar will take a decision based on all the elements of the model (the state of the BBN, the state of the world, its own objectives and its skills); this decision will, in turn, affect the existing model of the world before a new loop of the understanding algorithm begins. A sketch of how motivation probabilities could be propagated across nodes is given after Figure 2.

Figure 2. Example of a BBN for five learning nodes
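As a minimal sketch of such propagation, the code below assumes a simple chain A → B → C → D → E and made-up conditional probabilities; the actual graph structure of Figure 2 and its probability tables are not reproduced here.

# Hypothetical conditional probability tables: P(child motivated | parent state).
# The chain A -> B -> C -> D -> E and the numbers are illustrative assumptions.
CPT = {
    ("A", "B"): {True: 0.80, False: 0.40},
    ("B", "C"): {True: 0.75, False: 0.35},
    ("C", "D"): {True: 0.70, False: 0.30},
    ("D", "E"): {True: 0.65, False: 0.25},
}

def propagate(p_root_motivated):
    """Forward-propagate the probability of being motivated along the chain,
    starting from the probability observed at the root node A."""
    probabilities = {"A": p_root_motivated}
    p_parent = p_root_motivated
    for (parent, child), table in CPT.items():
        # Marginalise over the parent: P(child) = P(child|high)P(high) + P(child|low)P(low)
        p_child = table[True] * p_parent + table[False] * (1.0 - p_parent)
        probabilities[child] = p_child
        p_parent = p_child
    return probabilities

# Example: the diagnosis at node A indicates high motivation with probability 0.9.
print(propagate(0.9))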

A binary motivational state (high, low) is associated with every node based on the input variables (see Table 1). A high state is determined when two or more of the input variables equal one; a low state is determined otherwise. A set of skills, or motivational knowledge base, has been defined for the observer considering theoretically inspired motivational reactions. A particular diagnosis is associated with different states of motivation and a set of motivational reactions is selected for each diagnosis (see Table 2). The mechanisms for reacting in a motivating fashion involve the actor and the observer in a learning situation within a game. The observer perpetually monitors the actor to form a motivational model. If the model detects (diagnoses) a low state of motivation, a set of reactions is selected (prescription) with the aim of motivating the actor. To determine the motivation for each node, the observer follows the algorithm of understanding, defined as follows (a code sketch of this loop is given after the listing):

1. Perpetually monitor the actor

1.1 Given errors in a node (see Figure 2), determine whether the actor is being persistent.

1.1.1 Yes, the actor is being persistent (persistence = 1): praise the actor every time persistence is identified.

1.1.2 No, the actor is not being persistent (persistence = 0): go to step 1.2.

1.2 Does the actor have a plan (i.e. does it seem to follow a logical set of steps towards a learning goal of this node)?

1.2.1 Yes, the actor has a plan: praise past achievements and suggest putting in more effort and persisting.

1.2.2 No, the actor does not have a plan: go to step 2.1.

2. Understanding the actor

2.1 Ask the actor whether help (h) is needed at domain level (1 = help needed, 0 = help not needed), go to step 2.2

2.2 Ask the actor about confidence (c) level tackling an easier learning task (1 = confident, 0 = not confident), go to step 2.3

2.3 Calculate past persistence (pp) and go to step 2.4 (past persistence = mode of previous occurrences of persistence, 1.1.1 and 1.1.2)

2.4 Based on past persistence, help required and confidence determine the actor’s motivational state (See Table 1, diagnosis), go to step 3.1

3. Sharing motivation with the actor

3.1 Given the current motivational state, calculate the probability of the actor being motivated in other nodes; update the BBN.

3.2 Provide motivating strategies (see Table 2), considering the diagnosis and reacting to the current state of the world; go to step 1.
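The loop above can be sketched in Python as follows. This is a minimal, illustrative sketch: the observer hooks (detect_persistence, detect_plan, praise, praise_past_achievements, ask_help, ask_confidence, update_bbn) and the node.completed flag are hypothetical names, not part of the paper; diagnose and react correspond to Tables I and II and are sketched after Table II.

from statistics import mode

def motivational_loop(observer, actor, node, diagnose, react):
    """Repeated understand-and-share cycle for a single learning node."""
    persistence_history = []

    while not node.completed:
        # Step 1: perpetually monitor the actor.
        persistent = observer.detect_persistence(actor, node)       # step 1.1
        persistence_history.append(1 if persistent else 0)
        if persistent:
            observer.praise(actor)                                   # step 1.1.1
        if observer.detect_plan(actor, node):                        # step 1.2
            observer.praise_past_achievements(actor)                 # step 1.2.1
            continue

        # Step 2: understanding the actor.
        h = observer.ask_help(actor)         # 2.1: 1 = help needed, 0 = not needed
        c = observer.ask_confidence(actor)   # 2.2: 1 = confident, 0 = not confident
        pp = mode(persistence_history)       # 2.3: mode of past persistence
        diagnosis = diagnose(pp, h, c)       # 2.4: Table I lookup

        # Step 3: sharing motivation with the actor.
        observer.update_bbn(node, diagnosis)  # 3.1: propagate to other nodes
        react(observer, actor, diagnosis)     # 3.2: Table II reactions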

The beliefs associated with individual learning nodes will provide the basis for allowing the actor to proceed to more complex learning situations. If the BBN determines that the actor's motivation in other nodes will be low, then a decision has to be taken regarding which learning trajectory is more appropriate to follow. The interaction becomes more complex and input from a cognitive model is required to decide what to do next. The work of del Soldato and du Boulay [5] regarding conflicts could inform the type of decisions and the degrees of conflict that might arise: what if the motivational model determines that the actor should do node D, but the cognitive model determines otherwise?

TABLE I. MOTIVATION DIAGNOSIS BASED ON PAST PERSISTENCE (PP), HELP (H) AND CONFIDENCE (C)

PP H C Diagnosis
1 1 1 Type 1. Motivation = high. Provide help, increase relevance and expectancy [11]
1 1 0 Type 2. Motivation = high. Provide help, increase outcomes [11] and tackle low confidence [18].
1 0 1 Type 3. Motivation = high. Increase expectancy [11]
1 0 0 Type 4. Motivation = low. Increase outcomes [11] and tackle low confidence [18].
0 1 1 Type 5. Motivation = high. Provide help, increase curiosity and relevance [11]
0 1 0 Type 6. Motivation = low. Provide help, increase curiosity and outcomes [11] and tackle low confidence [18].
0 0 1 Type 7. Motivation = low. Increase curiosity and relevance [11].
0 0 0 Type 8. Motivation = low. Increase curiosity and outcomes [11] and tackle low confidence [18].

TABLE II. OBSERVER’S REACTIONS BASED ON MOTIVATION DIAGNOSIS

Diagnosis Reaction

Type 1 Provide domain help for this node. Specify criteria for success in this node. Provide opportunities for problem-solving based on examples.

Type 2 Provide domain help for this node. Use unexpected rewards and corrective feedback when error is detected. Use praise feedback when persistence is achieved. Praise actor considering the degree of task completion. Provide strategies to succeed on task at hand.

Type 3 Let the actor select the next learning node. Specify the criteria for success in the chosen node.

Type 4 Use unexpected rewards and corrective feedback when error is detected. Use praise feedback when persistence is achieved. Praise actor considering the degree of task completion. Provide strategies to succeed on task at hand.

Type 5 Provide domain help for this node. Use novel, incongruous, conflictual and paradoxical events. Provide opportunities to achieve success under moderate risk conditions.

Type 6 Provide domain help for this node. Use novel, incongruous, conflictual and paradoxical events. Use attributional feedback and other devices that help students connect success to personal effort and ability. Praise actor considering the degree of task completion. Provide strategies to succeed on task at hand.

Type 7 Guide the actor through a process of question generation and inquiry about the topic at hand. Provide opportunities to achieve success under moderate risk conditions.

Type 8 Use novel, incongruous, conflictual and paradoxical events. Use attributional feedback and other devices that help students connect success to personal effort and ability. Praise actor considering the degree of task completion. Provide strategies to succeed on task at hand.
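Tables I and II amount to a lookup from the three binary inputs to a diagnosis and a list of reactions. The sketch below is a minimal encoding, assuming the rule stated above (motivation is high when two or more of pp, h and c equal one) and abbreviating the reaction texts; apply_strategy is a hypothetical observer hook.

def diagnose(pp, h, c):
    """Table I: map (pp, h, c) to (diagnosis type, motivational state)."""
    order = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0),
             (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
    diagnosis_type = order.index((pp, h, c)) + 1          # Type 1 .. Type 8
    motivation = "high" if pp + h + c >= 2 else "low"
    return diagnosis_type, motivation

# Table II, with reaction texts abbreviated: diagnosis type -> reactions.
REACTIONS = {
    1: ["provide domain help", "specify success criteria", "problem-solving from example"],
    2: ["provide domain help", "unexpected rewards and corrective feedback",
        "praise persistence and task completion", "strategies to succeed"],
    3: ["let the actor choose the next node", "specify success criteria"],
    4: ["unexpected rewards and corrective feedback",
        "praise persistence and task completion", "strategies to succeed"],
    5: ["provide domain help", "novel, incongruous, paradoxical events",
        "success under moderate risk"],
    6: ["provide domain help", "novel, incongruous, paradoxical events",
        "attributional feedback", "praise task completion", "strategies to succeed"],
    7: ["guide question generation and inquiry", "success under moderate risk"],
    8: ["novel, incongruous, paradoxical events", "attributional feedback",
        "praise task completion", "strategies to succeed"],
}

def react(observer, actor, diagnosis):
    """Apply the Table II reactions associated with a diagnosis."""
    diagnosis_type, _motivation = diagnosis
    for strategy in REACTIONS[diagnosis_type]:
        observer.apply_strategy(actor, strategy)   # hypothetical observer hook

# Example: persistent actor asking for help but lacking confidence -> Type 2, high.
print(diagnose(1, 1, 0))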

IV. CONCLUSIONS AND WORK FOR THE FUTURE

A model of motivation has been presented based on the notion of understanding and sharing intentions [2]. The theoretical framework enables the AI-driven avatar (the observer in the model) to come to an understanding of the virtual world and to define and share an intention to affect the current state of this world. The model put forward the mechanisms associated with understanding the world and provided a set of motivational reactions (skills) that allow the avatar to play the role of a motivational coach in serious (non-leisure, educational) games or in activities in virtual worlds. Motivation is important because previous research on motivational issues in intelligent tutoring systems has provided evidence of the learning benefits of addressing these issues in computer-aided learning. One of the aspects of the theoretical framework that was described is the sharing part of the equation. By providing appropriate motivational feedback, the AI-driven character may partly address this. However, more complex characteristics need to be proposed to allow the AI avatar to express its intentions to other avatars as well as to act out motivational coaching. Given that the interaction happens in a virtual world, the term human-computer interaction is not used to describe communication in virtual worlds. Instead, by expressing intentions verbally (or by chatting, as most commonly happens in virtual worlds), the interaction between avatars (at least one of which is AI-driven) will possess a more naturalistic trait, which opens up the possibility of addressing the plausibility problem [8]. A possible testing scenario for this kind of interaction is Turing's test [26]. Another aspect that has been mentioned but not addressed is the definition of cognitive modelling. A cognitive model should complement the motivational model; however, cognitive modelling is an aspect of AI that is linked to a particular learning situation. For example, the work on cognitive modelling [27, 28] describes the principles of cognitive modelling that ought to be included in a serious game for virtual worlds. One issue that will arise from the interaction between a motivational model and a cognitive model is the potential conflict between the two: what would happen if the motivational model gives the learner the freedom to choose a learning node but the cognitive model does not? MORE [5] provides a preliminary account of how to manage these conflicts, but an implementation in serious games and its effectiveness are yet to be studied. One important characteristic of this model is that it is generalizable: it allows a virtual-world avatar to be re-used as a motivational coach in different learning situations. The model is defined in such a way that the learning nodes in the BBN can be changed without affecting the recognition and reaction mechanisms, except where the avatar offers help at the domain level, which is specific to a learning situation. However, how well this model of motivation integrates with varying cognitive models, and how well it copes with possible conflicts, is a topic for future studies. It is relatively straightforward to identify the three variables used in this proposal in all virtual worlds, but individual worlds might offer additional characteristics that could enrich the motivational model. Possible uses of the motivational coach include acting as an adversary in a serious game (where motivation is used against the gamer), as a personal tutor, as a learning companion or as a motivational coach.

Work for the future includes the evaluation of this model. The first phase integrates the algorithms associated with this model into an assessment exercise in Second Life. Forty students from the Computer Science programme at the Universidad Veracruzana will interact with the AI-driven avatar for twenty minutes. The objectives of this evaluation are to assess how well the model adapts to individual users and to throw some light on the appropriateness of using this type of modelling for assessment exercises. A longer-term goal consists of the design and evaluation of a serious game for higher education, the definition of cognitive modelling and its integration with the motivational modelling in a learning situation, and determining whether the proposed neural network and data mining methods can be used effectively to achieve the modelling objectives. An area that could be the focus of future papers is pedagogy, presenting results regarding the effectiveness of the model in the assessment exercise.

REFERENCES

[1] Picard, R.W., Affective Computing. 1997, Cambridge, MA: The MIT Press.

[2] Tomasello, M., et al., Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 2005. 28: p. 675-735.

[3] Papert, S., Mindstorms: Children, Computers and Powerful Ideas. 1980, New York: Basic Books.

[4] du Boulay, B. and J.A.M. Howe, Re-learning mathematics through LOGO: first progress report. 1978, University of Edinburgh, Department of Artificial Intelligence.

[5] del Soldato, T. and B. du Boulay, Implementation of motivational tactics in tutoring systems. International Journal of Artificial Intelligence in Education, 1995. 6: p. 337-378.

[6] Qu, L. and W.L. Johnson, Detecting the learner's motivational states in an interactive learning environment, in AIED 2005. 2005: IOS Press.

[7] Picard, R.W. and J. Klein, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 2002. 14: p. 141-169.

[8] Malone, T. and M. Lepper, Making learning fun, in Aptitude, Learning and Instruction: Conative and Affective Process Analyses, R. Snow and M. Farr, Editors. 1987, Lawrence Erlbaum. p. 223-253.

[9] Clancey, W.J., GUIDON. Journal of Computer-Based Instruction, 1983. 10(1): p. 8-14.

[10] Anderson, J.R. and C.F. Boyle, Cognitive principles in the design of computer tutors, in Modelling Cognition, P. Morris, Editor. 1987, Wiley.

[11] Keller, J.M., Motivational design of instruction, in Instructional-Design Theories and Models: An Overview of Their Current Status, C.M. Reigeluth, Editor. 1983, Erlbaum: Hillsdale. p. 383-434.

[12] Lester, J.C., et al., Deictic and emotive communication in animated pedagogical agents, in Embodied Conversational Agents, J. Cassell, et al., Editors. 2000, MIT Press: Boston. p. 123-154.

[13] Johnson, W.L., J. Rickel, and J. Lester, Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International Journal of Artificial Intelligence in Education, 2000. 11: p. 47-78.

[14] Bates, J., The role of emotion in believable agents. Communications of the ACM, 1994. 37(7): p. 122-125.

[15] Rebolledo Mendez, G., B. du Boulay, and R. Luckin, "Be bold and take a challenge": Could motivational strategies improve help-seeking?, in 12th International Conference on Artificial Intelligence in Education. 2005. Amsterdam: IOS Press.

[16] de Vicente, A. and H. Pain, Informing the detection of the student's motivational state: An empirical study, in 6th International Conference on Intelligent Tutoring Systems. 2002. Biarritz, France: Springer-Verlag.

[17] Lepper, M., et al., Motivational techniques of expert human tutors: Lessons for the design of computer-based tutors, in Computers as Cognitive Tools, S.P. Lajoie and S.J. Derry, Editors. 1993, Erlbaum: Hillsdale, NJ. p. 75-105.

[18] Ames, C.A., Motivation: What teachers need to know. Teachers College Record, 1990. 91(3): p. 409-421.

[19] Keller, J.M. and K. Suzuki, Use of the ARCS motivation model in courseware design, in Instructional Designs for Microcomputer Courseware, D.H. Jonassen, Editor. 1988, Lawrence Erlbaum Associates: Hillsdale, NJ.

[20] Machado, I. and A. Paiva, Heroes, villains, magicians, ...: Believable characters in a story creation environment, in AIED 1999. 1999.

[21] Song, S.H. and J.M. Keller, Effectiveness of motivationally adaptive computer-assisted instruction on the dynamic aspect of motivation. Educational Technology Research and Development, 2001. 49(2): p. 5-22.

[22] Conati, C. and H. McLaren, Evaluating a probabilistic model of student affect, in 7th International Conference on Intelligent Tutoring Systems. 2004: Springer Verlag.

[23] McQuiggan, S.W. and J.C. Lester, Modeling and evaluating empathy in embodied companion agents. International Journal of Human-Computer Studies, 2007. 65(4): p. 348-360.

[24] Nachmias, R. and A. Hershkovitz, A case study of using visualization for understanding the behavior of the online learner, in International Workshop on Applying Data Mining in e-Learning (ADML'07), Second European Conference on Technology Enhanced Learning (EC-TEL07). 2007. Corfu, Greece.

[25] McCombs, B.L., Strategies for assessing and enhancing motivation: Keys to promoting self-regulated learning and performance, in Motivation: Theory and Research, H.F. O'Neil, Jr. and M. Drillings, Editors. 1994, Lawrence Erlbaum Associates: Hillsdale, NJ.

[26] Turing, A.M., Computing machinery and intelligence. Mind, 1950. 59: p. 433-460.

[27] Anderson, J.R., C.F. Boyle, et al., Cognitive modelling and intelligent tutoring. Artificial Intelligence, 1990. 42: p. 7-49.

[28] Funge, J.D., Making them behave: cognitive models for computer animation. 1998, University of Toronto.