

Adaptive systems: from intelligent tutoring to autonomous agents

D Benyon and D Murray*

Computer systems which can automatically alter aspects of their functionality or interface to suit the needs of individuals or groups of users have appeared over the years in a variety of guises. Most recently, attention has focused on intelligent interface agents, which are seen as specialised, knowledge-based systems acting on behalf of the user in some aspect of the interaction. Similar requirements for automatic adaptation have been noted in intelligent tutoring systems, natural-language systems and intelligent interfaces. The paper brings together the research which has emanated from a number of backgrounds, and provides a unifying perspective on adaptive systems in general. An architecture for adaptive systems and a methodology for their development are presented. The paper also describes software support for producing adaptive systems, and offers some experimental evidence to justify both the desirability and feasibility of exploiting an adaptive system approach to human-computer interaction.

Keywords: adaptive systems, intelligent interfaces, user models, domain models, autonomous agents, adaptive-system architectures, development methodologies

The concept that computer systems should be capable of adapting themselves to suit the needs of either individuals or different classes of users is an apparently attractive, if slightly unusual, one. Adaptive computer interfaces have been discussed over the past ten years or so and have had a number of advocates [Edmonds, 1981; Innocent, 1982; Zissos and Witten, 1985], but it is only now that consideration of how to build such systems is gaining prominence [Kobsa and Wahlster, 1989; Browne, Totterdell and Norman, 1990; Hancock and Chignell, 1989; Sullivan and Tyler, 1991; Schneider-Hufschmidt, Kühme and Malinowski, 1993; Gray, Hefley and Murray, 1993]. Recent interest in the development of multi-agent systems [Laurel, 1990] and intelligent interfaces [Chignell and Hancock, 1988] has also focused attention on this important area.

Computing Department, Open University, Milton Keynes MK7 6AA, UK
*Social and Computer Sciences Research Group, Department of Sociology, University of Surrey, Guildford, Surrey, UK
Revised paper received 4 June 1993

Early applications of the adaptive system concept have been rather disappointing, and problems have proved far harder to deal with than was first expected. Will the dream of interface agents go the same way? In this paper we provide a unifying perspective on adaptive systems which facilitates the comparison of different systems and offers guidance on the circumstances under which software designers should consider an adaptive system approach to human-computer interaction design. After briefly considering the arguments for and against adaptive systems in this section, we survey existing applications of the concept in the second section. This illustrates the range of problems which software developers have sought to solve using an adaptive system approach. Although there are many differences in the design of such systems, we believe that they all share a common architecture. This is presented in the third section. In the following section we consider a methodology for the development of adaptive systems within the framework provided by the adaptive system architecture. The next section discusses our own experience of developing specific adaptive systems, and it illustrates in concrete terms the more abstract nature of the discussion of the third and fourth sections. The sixth section contains our conclusions as to the likely development of adaptive systems over the next few years.

One of the aims of this paper is to provide a reference model for the comparison of adaptive systems. Our intention is to survey the field and to draw together the numerous efforts which have been expended. We believe that much can be learnt from comparing and contrasting the philosophies and experiences emanating from areas as apparently diverse as intelligent tutoring systems and agent-based human-computer interaction. Emerging

0950-7051/93/040197-23 © 1993 Butterworth-Heinemann Ltd Knowledge-Based Systems Volume 6 Number 4 December 1993 197


from this experience is a discipline which promises to have an impact not just on human-computer interaction, but also on the multiple interacting agent systems which we see as being a major development in HCI and computer science over the next few years.

Why should we have adaptive systems?

Computer systems can be difficult to learn and, once learnt, may be easily forgotten. As the proliferation of computer applications and different delivery platforms continues apace, the number of individuals exposed to the vagaries of a range of computer environments likewise continues to grow. Computer applications tend to embody specific characteristics which make the chosen design solution better suited to some users than others. However, many systems could potentially have a range of different features to match the diversity of user populations.

Even when practised users learn one system well, the software industry's predilection for the development of new features, bug fixes, and the production of new (i.e. slightly different and 'improved') versions [Thimbleby, 1990a, 1990b] means that much interface functionality is continually changing. There is also the problem of users changing their perception of and proficiency with different software systems. Discretionary users are those users who have the option not to use a system. They must be persuaded or otherwise enticed into making use of the facilities. Once discretionary users are convinced that a system is worth using, they may alter their working practices and become reliant on the system. As with novice users, they soon become too constrained by the features of the original system. We need to provide such users with effective methods of graduating to the more complex use of systems.

Some designers do, of course, implement systems to cater for different users, environments and working practices. Some build different systems for different markets. Others allow individualisation by means of 'customising' facilities or 'macros'. The drawback with such approaches is that the user must learn functions which are tangential to their main task. Although some routines will only have to be set once, others involve learning specialised commands. Tailoring facilities are typically very general, do not take fine-grained, individual differences into account, and do not cater for a user's task needs, either perceived or implicit. Although users may learn and become committed to the use of a piece of software, and so are no longer discretionary with respect to the system itself, they are still discretionary with respect to customising the system to better suit their needs.

The use of metaphor and analogy as a means of making system functionality accessible to user populations has been claimed to overcome many usage problems. However, this can be a hit-and-miss affair, well suited for some but not for the population as a whole, dependent as it is upon a closely shared appreciation of the basis of the metaphor. Recent criticism [Kay, 1990] of metaphor in interface design adds weight to this argument.

Adaptive systems are systems which can alter aspects of their structure, functionality or interface in order to accommodate the differing needs of individuals or groups of users, and the changing needs of users over time [Benyon, Innocent and Murray, 1987]. Adaptive systems seek to take over the burden of tailoring systems to individuals and groups.

One important question, of course, is whether this goal is feasible: technically, operationally or economically. For example, Thimbleby [Thimbleby, 1990b] argues that only complex systems can benefit from an adaptive capability, and that, as they are complex, it is not possible to provide such a capability because the user's patterns of usage will be weaker. He asserts that there is simply not enough bandwidth in the user interface to accommodate the required functionality and adaptation. Similar arguments can be made regarding the cost of building an adaptive capability into an application and the technical problems concerned with speed of processing, knowledge representation and so on. Not only has it been asked whether systems can actually incorporate enough suitable knowledge about an individual user to make adaptive responses to that person a viable proposition, but the basis upon which user characteristics can be inferred has been questioned [Kobsa, 1990].

We do not accept that the feasibility argument can be theoretically proven. A very simple adaptive mechanism may be highly effective. The cost associated with the implementation of adaptive systems can be justified if they significantly improve usability and the quality of interaction. Furthermore, the inclusion of an adaptive capability may not be such a large overhead if it arises as a natural consequence of better attention and metrics being applied to interactive system design. One of the objectives of the work reported here has been to develop cost-effective software support for adaptive systems development. The feasibility of adaptive systems remains a usability issue [Benyon, 1993a].

Suffice it to say at this point that we strongly believe that computer systems and interfaces to computer applications can be made to adapt to suitable and identifiable user characteristics, to the benefit of the user in terms of the 'effectiveness' of performance, speed and accuracy of execution, and personal satisfaction, and that appropriate characteristics can be identified, inferred and tested. We propose that some systems must be inherently adaptive if they are to fulfil their purpose, and that many systems which have to deal with variety, of users or of environments, may be effective as adaptive systems.

APPLICATIONS OF THE ADAPTIVE SYSTEM CONCEPT

Benyon and Murray [Benyon and Murray, 1993] provide a detailed review of adaptive user interfaces. Other useful reviews are [Kok, 1991] and [Norcio and Stanley, 1989]. One of the problems with the area is that similar ideas have emerged from different disciplines. As is often the case, different disciplines employ their own terminology, which makes comparisons and generalisation difficult. The classification below has been chosen to reflect the different historical backgrounds which contribute to


adaptive systems research. Systems which are described as 'intelligent' may take many forms and are built for many reasons and to many different ends [Elkerton, 1987; Mason and Edwards, 1988]. Claims, however, have in the past been somewhat overstretched, and few systems have fully lived up to their grandiose expectations. Future systems may well tackle smaller, more manageable areas and deal with issues which are more tractable.

Intelligent interfaces

Intelligent interfaces emerged as an important theme in the UK Government funded Alvey initiative in the early to mid-1980s, when the term 'intelligent front end' (IFE) was preferred. An IFE was then conceived of as an interface to an existing piece of software which would be accessed by a large number of diverse users. The IFE would provide a usable interface to packages which were too complex or too technically daunting to offer easy interaction to such a variety of users [Bundy, 1983; Innocent, 1982; Benyon, 1984]. The Monitor system [Benyon, 1984; Benyon and Murray, 1988], although implemented in an intelligent tutoring system domain, focused on the structure of adaptive systems in general. The work provided an architecture for adaptive systems which included an explicit representation of individual users and an explicit representation of the domain. This was qualitatively different from other adaptive systems which provided adaptation according to various mechanical or statistical means [Furnas, 1985; Greenberg and Witten, 1985] and which included a limited and implicit user model. There was no attempt to maintain a long-term representation of user characteristics. More recently, intelligent interfaces have been applied to information retrieval [Chignell and Hancock, 1988; Branjik, Guida and Tasso, 1990].

Natural language systems

A significant strand in the investigation of adaptive systems has been the extensive attention paid to natural language (NL) interfaces, which provide a tough focus for research into aspects of plan, goal and belief recognition. Many systems are simulations or representations of small subsets of an entire dialogue, but they illustrate the difficulties of inferring users' intentions and future actions from dialogue only. Related research strands such as speech act and discourse theories, decision theory and rhetorical structure theory have been employed, but the problems remain difficult to overcome, even when only written and not spoken interaction is considered.

Natural language systems adapt by generating text appropriate to the particular query and characteristics of individual users. To do this they have to infer the user's needs and focus of attention from the (ambiguous) use of natural language. Anaphoric references (the use of words such as 'it', 'that' etc.) and ellipsis (where information is missing from a statement) offer difficult syntactic problems, but inferring the semantics of an utterance and the intention which the user had in making that utterance are even more intractable problems which have generated

a wealth of research studies in both AI and computational linguistics.

However, it is now becoming more favourable to consider that NL systems must also include some form of model of the user or dialogue partner, in addition to algorithms and heuristics for inferencing and mechanisms of domain knowledge representation. [Kobsa and Wahlster, 1989] provides several examples of recent systems and current research projects which are more pertinent to HCI concerns.

Intelligent tutoring systems

The rationale of computer-based teaching systems is that, for given students and topics, a computer system can alleviate the variance of human-based teaching skills and can determine the best manner in which to present individually targeted instruction in a constrained subject domain. In order to minimise the discrepancy between a user's knowledge state and the representation of an identified expert's knowledge (a 'goal state'), the ITS must be able to distinguish between domain-specific expertise and tutorial strategy. A computer coach assesses the current state of a learner's knowledge ([Dede, 1986] covers this area comprehensively) and provides instruction tailored to that individual's training needs within an overall context of courses, syllabuses and tutorial objectives [Self, 1987]. ITSs are expert-based systems which must be able to recognise errors and misconceptions, to monitor and intervene when necessary at different levels of explanation, and to generate problems on a given set of instructional guidelines.

A 'student model' of the user of an ITS stores information on how much the student 'knows' about concepts and relationships which are to be learnt, and about the student's level and achievements. It often contains a history of task performance and some detailed representation of the state of an individual's knowledge in a specified subject area. Some of this may be held in the form of a user profile, and it can have other uses in management and score keeping.

Student models are explicit representations of a state of knowledge of a specific domain, possibly with individual personal information to maintain records of performance and achievements. Other models, the tutor model and the expert model, have very specialised functions. The expert model is a representation of the knowledge to be imparted, together with the explanatory facility for remedial intervention. It may also contain an error recognition and evaluation feature, and a problem-solving model. The tutorial model arranges teaching strategy, initiates remedial actions, and monitors and assesses performance in conjunction with the evaluative function. Problem generation from the knowledge base will offer a sequence of problems, adapting the difficulty level on the basis of previous performance, but it will also present new problem classes and review or practise already known items. An interface controller arranges the relevant output and accepts student input.
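As a concrete illustration of adapting the difficulty level on the basis of previous performance, a minimal problem generator might use an update rule such as the following. The rule, its parameters and all identifiers are our own illustrative sketch, not drawn from any particular ITS:

```python
def next_difficulty(current, answered_correctly, step=1, lo=1, hi=10):
    """Raise the difficulty after a correct answer, lower it after an
    error, keeping the level within the tutor's allowed range."""
    if answered_correctly:
        return min(hi, current + step)
    return max(lo, current - step)

# A short simulated session: the tutor adapts as the student performs.
level = 5
for correct in (True, True, False, True):
    level = next_difficulty(level, correct)
# level is now 7
```

A real tutorial model would, as noted above, also introduce new problem classes and schedule review of already known items rather than only sliding a single difficulty parameter.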

The distinction between a student model and other types of model hinges on the use to which the model is put and its location within the interface management system.


John Self [Self, 1986] classified the types of knowledge which determine the overall performance of a teaching program as knowledge of how to teach (which includes knowledge of students in general), knowledge of what is being taught, and knowledge of who is being taught (i.e. knowledge of one student in particular). These correspond to the tutor model, the expert model and the student model. The integration of the expert and student models is crucial to the provision of generative, or adaptive, tutoring systems (that is, those in which dialogues and teaching 'frames' are not predetermined but, rather, are generated interactively on the basis of an evaluative function operating on the student model).

The field of ITS encompasses many detailed considerations, such as the teaching strategy (e.g. the intelligent computer coach versus the Socratic tutor, discovery learning versus guided discovery learning, and task-centred learning by doing). ITSs are adaptive systems in that they aim to tailor the tutoring to the needs of individual learners. Historically, they have attended principally to ensuring that a student attains a certain level of competence in a well specified domain. This is measured by applying theories of learning instantiated in concrete examples and comparing performance with that of an expert.

Explanation systems

A similar but slightly different strand has become apparent in recent research: that of being able to provide an explanatory facility for the behaviour of the system (e.g. [Paris, 1989]). Expert systems have been criticised for failing to provide adequate and suitably tailored explanations [Steels, 1987], and to be effective these systems must tailor their explanation to the assumed knowledge of the user [Kass and Finin, 1988; Carroll and McKendree, 1987]. An explanation-based system combines the problems of NL generation, help and tutoring, and presents extremely stubborn problems to researchers and to system builders. Lehner [Lehner, 1987] suggests that the interaction between a user and an expert system is actually a situation in which two problem solvers co-operatively attempt to solve common decision problems. Of course, each will have different decision processes, make use of different heuristics, and operate on divergent, if fundamentally identical, data.

Intelligent support systems

Another popular application of intelligent interface systems is in the provision of context-dependent 'active' help [Fischer, Lemke and Schwab, 1986; Finin, 1989; Hansen, Holgaard and Smith, 1988; Bronisz, Grossi and Jean-Marie, 1989; Stehouwer and van Bruggen, 1989]. On-line help systems track the user's context and incorporate assistant strategies and a set of action plans in order to intervene when most appropriate or when the user appears to be having difficulty. Intelligent help systems share some characteristics with ITSs, since a diagnostic strategy is required to provide the most appropriate help for that user in that particular situation. However, they also have to be able to infer the user's high-level goal from the low-level data available in the form of command usage. Various strategies and approaches have been suggested [Fischer, Lemke and Schwab, 1986; Fischer, Morch and McCall, 1989; Chin, 1986, 1989; Mason, 1986; Jerrams-Smith, 1985]. Intelligent help has further developed into 'critiquing systems' [Fischer, 1989], where users must be competent in the subject domain being critiqued, rather than being tutees or learners [Moore and Swartout, 1988; Fischer, 1987; Fischer, 1989].

A major problem with these mixed-initiative and co-operative dialogues is concerned with who has the 'controlling hand'. If an assistant or critic offers a piece of advice that an individual overrides, how can that assistant behave 'sensibly' while helping with the remainder of the same task, especially if the task is repeated with slightly different data (as in medical expert systems, for example)? If the assistant is an intelligent computer system, how can it avoid giving the same bad advice over and over again, and actually learn from the user? The system must have knowledge of how to be a 'competent assistant' by taking into account those things which the user knows best.
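The intervention strategy of an active help system, tracking the user's context and stepping in when repeated difficulty is detected, can be sketched as follows. The window size, error threshold and all identifiers are hypothetical choices made for this illustration:

```python
from collections import deque

class ActiveHelp:
    """Watches a rolling window of recent commands and intervenes when
    repeated failures suggest the user is having difficulty."""
    def __init__(self, window=5, error_threshold=3):
        self.recent = deque(maxlen=window)
        self.error_threshold = error_threshold

    def record(self, command, succeeded):
        """Track the user's context: every command and its outcome."""
        self.recent.append((command, succeeded))

    def should_intervene(self):
        """Intervene once failures in the window reach the threshold."""
        failures = sum(1 for _, ok in self.recent if not ok)
        return failures >= self.error_threshold
```

A fuller system would also need the diagnostic step discussed above: inferring which high-level goal the failing commands were in service of, before choosing what help to offer.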

Co-operative intelligent agents

Recent interest in computer-supported co-operative work (CSCW), distributed artificial intelligence (DAI) and HCI has taken the adaptive system concept further, and this has led to the development of interface agents. Co-operative systems require models of all the systems and humans participating in the interaction [Seel, 1990]. Agents are entities that are capable of voluntary, rational action carried out in order to achieve goals, and that hold a representation or 'belief' in the state of the world. They come to hold these beliefs through existing data, by deriving new beliefs from interaction with external sources, and as a result of internal reasoning.

Issues which become evident here are those concerned with a number of interacting agents. In more complex systems, agents may be interacting in a number of ways, modelling other agents and adapting and responding to a variety of needs. Multiple agent systems incorporating surrogates to filter routine tasks and providing personalised 'theatrical metaphor' interfaces are cited as the wave of the future [Negroponte, 1989], and simple agent-based interaction is a prerequisite of co-operative dialogue, between human or computer, as in the interface agents, which provide 'expertise, skill and labour', described by Laurel [Laurel, 1990].

Such developments can be seen as an extension of dialogue assistants [Alty and McKell, 1986; Alty and Mullin, 1987; Coutaz, 1987]. Essentially, agents are adaptive systems, but systems which are specialised and know about only a very small part of the world. Much of the work presented here is highly relevant to the design of agent-based systems. An important issue for co-operative or support systems is, firstly, how knowledge is actually represented (both task-specific details and knowledge about the user or co-operative partner), and, secondly, how the interaction or conversation can be realised.


Co-operative support systems can also be thought of as task-orientated dialogue systems, which are actually characterised by the 'conversational roles' which each partner is expected to adopt. Active dialogue partners in mixed-initiative dialogues are those which try to identify a user's intentions in order to exhibit co-operative behaviour. To behave co-operatively, the system must discover the presumed plans underlying the user's question or statement, represent those plans in its knowledge base, examine them for hidden obstacles, and provide information which overcomes those obstacles.

In keeping with the needs of ITSs, active help systems and more general intelligent support systems, agent-based interaction requires the computer partner to maintain explicit models of a user's goals and plans, and to have sufficient information about prior knowledge and about known misconceptions or errors in the user's current state of knowledge. Implementations of intelligent support systems must cope with problems of scale in deciding upon relevant and irrelevant information, and with uncertainty in time relationships or object selection. There must also be mechanisms to allow the planner to recover when assumptions are unjustified or conditions change.
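The co-operative cycle of discovering a presumed plan, examining it for hidden obstacles and responding with information that overcomes them might be caricatured in a few lines. The plan library, its fields and the fallback question are all invented for this sketch:

```python
def cooperative_response(utterance, plan_library, knowledge_base):
    """Match an utterance to a presumed plan, examine the plan's
    preconditions for hidden obstacles, and answer so as to overcome
    the first obstacle found; otherwise answer directly."""
    plan = plan_library.get(utterance)
    if plan is None:
        # No plan recognised: fall back to clarification dialogue.
        return "Could you say more about what you are trying to do?"
    for precondition in plan["preconditions"]:
        if not knowledge_base.get(precondition, False):
            # A hidden obstacle: provide the information that removes it.
            return plan["advice"][precondition]
    return plan["direct_answer"]
```

Real plan recognition is, of course, far harder than a table lookup, as the natural language discussion above makes clear; the sketch only fixes the shape of the cycle.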

Summary

Although the system categories described are quite different, and work has taken place in different disciplines, with the researchers employing different techniques and specialised language to explain their concepts, they seem to share an underlying architecture. All are adaptive systems in that they automatically alter aspects of the system to suit the requirements of individual or groups of users or, more generally, to suit the needs of other agents in the system. All have to infer characteristics of the other agent from the interaction. Work on active help, ITSs, NLSs, explanation-based and co-operative agent systems contributes to the confusion because it describes systems from distinct perspectives. We need to consider how adaptive systems are designed and how the adaptive functionality can be incorporated into existing architectures.

ARCHITECTURE FOR ADAPTIVE SYSTEMS

In this section we present a description of the components which all adaptive systems must have. Clearly, the complexity of the system and the requirements of the application have an impact on the detail contained in each component. The advantage of developing a generalised architecture for adaptive systems is that it enables researchers to talk in the same language, to compare different systems and to develop appropriate representation techniques.

In order to provide a perspective for the following discussion, we may represent the overall structure of an adaptive system as shown in Figure 1. An adaptive system has a model of the system with which it is interacting. Often this other system will be a human, and hence we refer to this representation as the user model.

[Figure 1: Overview of adaptive system, showing the model of the other system (the 'User Model'), the model of the system itself (the 'Domain Model'), and the model of the user-system interaction (the 'Interaction Model').]

An adaptive system also includes some representation of the application which is to have the adaptive capability. This is the domain model. The interaction of user and system is described in the interaction model. Each of the three components of Figure 1 will be considered in detail, and their contents will be related to the typology of adaptive systems identified in the second section.
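The three components of Figure 1 can be summarised as a minimal object model. The class names follow the paper's terminology, but the attributes and methods are illustrative assumptions only, not a prescribed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """What the system believes about the other system (usually a human)."""
    beliefs: dict = field(default_factory=dict)

@dataclass
class DomainModel:
    """The system's representation of the application being adapted."""
    features: dict = field(default_factory=dict)

@dataclass
class InteractionModel:
    """A description of the user-system interaction."""
    history: list = field(default_factory=list)

class AdaptiveSystem:
    """Holds the three models and uses them to adapt its behaviour."""
    def __init__(self):
        self.user_model = UserModel()
        self.domain_model = DomainModel()
        self.interaction_model = InteractionModel()

    def observe(self, event):
        """Record an interaction event for later inference about the user."""
        self.interaction_model.history.append(event)
```

The point of the skeleton is only that the three representations are separable: inference over the interaction history updates the user model, which in turn drives adaptation of the domain's features.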

Our aim here is to draw together the strands of research identified in the second section and to concentrate on an elaboration of the components of an adaptive system. We do not concern ourselves with the run-time issues which are central to user interface management systems (UIMSs), nor with the details of various implementations. We seek to develop a conceptual model of adaptive systems. The design-time issues which are the concern of such devices as user interface design environments (UIDEs), user modelling shells and adaptive system development environments are discussed in the fourth section.

User model

A user model is a representation of the knowledge and preferences which the system 'believes' that a user (which may be an individual, a group of people or a non-human agent) possesses. It is separable by the system from the rest of its knowledge and contains explicit assumptions about the user. The user model is used to provide adaptivity either by intervention or by co-operative agreement with a user. Providing adaptive functionality requires that a user model controls an inference engine, and often implies that it samples user behaviour. It may infer perceived goals and courses of action and act upon such decisions, altering features of the interaction to meet the task and personal needs of individuals. User models may be highly pragmatic in that they represent only what is required in order to facilitate the required adaptation. Other user models may be orientated towards a realistic description of the user. Murray [Murray, 1987a; 1987b] refers to these as 'embedded user models' (EUMs). In contrast to mental models or designers' models, which are intangible, user models are representations of user features and decisions which are accessible by the system software.

The knowledge represented in the user model may be acquired implicitly from inferences made about the user

Knowledge-Based Systems Volume 6 Number 4 December 1993 201


or it may be explicitly elicited from the user. Explicit acquisition may be achieved through some co-operative behaviour such as the asking of relevant questions. Alternatively, the user model may be fed with previously stored information, held on whatever medium may be appropriate or transportable. The use of user records, profiles or scores in a personal data store such as a 'smart' card is one mechanism that allows full-scale use of adaptive systems with explicitly acquired data [Murray, 1989].

Knowledge for the user model can be acquired implicitly by making inferences about users from their interaction, by carrying out some form of test, or from assigning users to generic user categories usually called 'stereotypes'. The notion of user stereotypes derives from the work of Elaine Rich [Rich, 1979; 1983; 1989]. Stereotypes represent a structured collection of traits or characteristics, stored as facets, to which is attached a value, and optionally a confidence level and rationale. Some traits are triggers and have an attached probability rating which can mediate or inhibit the firing of a whole stereotype. They can be used to '...provide a way of forming plausible inferences about yet unseen things on the basis of things that have been observed' [Rich, 1983]. Stereotypes model users on a variety of dimensions and represent characteristics of users in a hierarchy. At the top of the hierarchy is the 'any person' stereotype which defines the characteristics relevant to all users of the system. All stereotypes at lower levels in the hierarchy may inherit the characteristics of this stereotype. Lower level stereotypes will depend on the application, but retain the property of inheriting characteristics from parents. At the bottom of the hierarchy is the individual, who may inherit characteristics from a large number of stereotypes. One of the problems with such a representation is to decide just what happens when conflicting characteristics are inherited. Conflict resolution rules must then be included to deal with such situations.
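A stereotype hierarchy of this kind might be sketched as follows. The sketch is our own illustration, not taken from Rich's systems: the stereotype names, facets and the "most-confident-facet-wins" conflict resolution rule are all invented for the example.

```python
# Illustrative sketch of a stereotype hierarchy with inheritance and a
# simple conflict-resolution rule (all names and values are invented).

class Stereotype:
    def __init__(self, name, parent=None, facets=None):
        self.name = name
        self.parent = parent            # more general stereotype, if any
        self.facets = facets or {}      # trait -> (value, confidence)

    def lookup(self, trait):
        """Return (value, confidence) for a trait, inheriting from
        ancestor stereotypes when this one does not define it."""
        node = self
        while node is not None:
            if trait in node.facets:
                return node.facets[trait]
            node = node.parent
        return None

# The 'any person' root stereotype: traits relevant to all users.
any_person = Stereotype("any-person", facets={"reads-menus": (True, 0.9)})
novice = Stereotype("novice", parent=any_person,
                    facets={"needs-confirmation": (True, 0.8)})
programmer = Stereotype("programmer", parent=any_person,
                        facets={"needs-confirmation": (False, 0.7)})

class IndividualModel:
    """An individual inherits from several stereotypes; conflicting
    inherited traits are resolved by taking the highest confidence."""
    def __init__(self, stereotypes):
        self.stereotypes = stereotypes

    def lookup(self, trait):
        candidates = [s.lookup(trait) for s in self.stereotypes]
        candidates = [c for c in candidates if c is not None]
        return max(candidates, key=lambda vc: vc[1]) if candidates else None

user = IndividualModel([novice, programmer])
print(user.lookup("needs-confirmation"))  # (True, 0.8): novice wins on confidence
print(user.lookup("reads-menus"))         # (True, 0.9): inherited from 'any-person'
```

Other conflict resolution policies (most-specific stereotype wins, explicit priority orderings) could be substituted without changing the structure.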

Other mechanisms may be employed to infer user characteristics. In terms of our adaptive system architecture, it is the interaction model which is primarily responsible for implicitly acquiring knowledge about users. We deal in detail with these mechanisms further below.

Rich [Rich, 1989] describes the space of user models using two dimensions. The first, canonical versus individual, describes whether the model is of one single user or a collection of models of different individuals. A canonical model represents the 'typical' user and is not usually stored explicitly in the system. It is the designer's model identified above. Rich also distinguishes long-term models from short-term models. Long-term models are representations of fairly stable user characteristics such as expertise or interest. Short-term models are employed in transitory problem-solving behaviour for the specific task in hand, focusing on specific topics and goals in the immediate interaction. This distinction is also referred to as local (short-term) user models versus global (long-term) user models.

Sleeman [Sleeman, 1985] provides a more detailed description of short-term individual models with reference to ITSs. He describes student modelling techniques as scalar, ad hoc, profile, overlay and process. The profile model encompasses a stereotype model, while the others correspond to descriptions of the mechanisms of modelling student knowledge and inferring the state of that student's learning.

Overlay models are particularly pertinent to ITSs. Here the expert's knowledge of the domain is represented, and the student's knowledge is captured as a subset of that knowledge, overlaid on the expert's knowledge. In this way, aspects of the expert's knowledge which the student does not possess stand out. The problem with this approach is that the student may believe things about the domain which the expert does not. Perturbation models [Kass and Finin, 1989] can be used to represent beliefs which are outside the expert's view of the domain. 'Buggy' models [Burton, 1982] and 'mal-rules' [Sleeman, 1985] may be used to represent common misconceptions which users may have about the domain.
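The distinction between overlay and perturbation models can be made concrete with a small sketch. This is our own illustration (the concept names are invented); it shows an overlay model as a subset of expert knowledge, and a perturbation model which additionally records beliefs outside the expert's view.

```python
# Sketch: an overlay student model holds a subset of expert knowledge;
# a perturbation model also admits 'buggy' beliefs outside the expert view.

expert_knowledge = {"rm", "ls", "cat", "pipes", "redirection"}

class OverlayModel:
    def __init__(self, expert):
        self.expert = set(expert)
        self.known = set()              # always a subset of expert knowledge

    def observe_known(self, concept):
        if concept in self.expert:
            self.known.add(concept)

    def gaps(self):
        """Expert knowledge the student does not yet possess stands out."""
        return self.expert - self.known

class PerturbationModel(OverlayModel):
    def __init__(self, expert):
        super().__init__(expert)
        self.misconceptions = set()     # beliefs outside the expert's view

    def observe_known(self, concept):
        if concept in self.expert:
            self.known.add(concept)
        else:
            self.misconceptions.add(concept)

student = PerturbationModel(expert_knowledge)
student.observe_known("rm")
student.observe_known("ls")
student.observe_known("rm deletes directories by default")  # a misconception
print(sorted(student.gaps()))        # concepts still to be taught
print(student.misconceptions)
```

A pure overlay model would have silently discarded the misconception; the perturbation model retains it for remediation.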

Short-term individual, or local, user models are of particular concern in natural language systems. The problem of inferring users' intentions or their focus of attention from some natural language statement requires that the system models the relevant part of the preceding discourse. NL systems tend to encode a lot of linguistic knowledge concerning how and where anaphoric and elliptic references occur. Inferring the focus of attention of the user is similarly important in such systems. In these cases long-term user models are not required, but short-term use of language is central.

Intelligent interfaces, on the other hand, have to deal with fundamental cognitive characteristics such as users' preferences for particular styles of display, basic cognitive capabilities such as spatial ability, and preferred learning styles [van der Veer, 1990; Benyon, 1993b]. In these systems, long-term, cognitively valid models are vital. Similar cognitive capabilities become central to intelligent support systems and explanation systems [From, 1989], where the requirements of the system demand that help or explanations take into account users' existing knowledge and the intention which they have in asking for advice.

Autonomous agent systems have the added complication of representing 'second level' beliefs. They must not only infer what they believe the other system believes, but must also base their adaptation on what they believe that the other system believes that they believe.

User models may contain one or more of three types of knowledge of the user. Firstly, the user model may hold data about what the system believes the user believes about the domain. It is domain dependent data. Because of the similarity of this data to that held by ITSs, we refer to this portion of the user model as the student model. Student model data may be kept at one or more of three levels: the intentional, or task, level, the logical level and the physical level. (The reasons for identifying these levels are explained further below.)

The task level describes user goals in the domain. For example, an intelligent interface to an information retrieval domain may need to infer that the user is attempting to discover how many items meet some criteria rather than actually trying to obtain a list of those items. Failure to recognise the intentions underlying some user action will result in a less satisfactory interaction. An intelligent support system may need to infer that a user is trying to display the contents of a directory and not list the content of a file in response to a query such as 'how do I use ls to


display my directory'. A natural language interface should be able to infer that the user has misunderstood a concept and requires further elaboration of that concept rather than simply respond automatically to some syntactic analysis of the dialogue.

A second level of description of user knowledge of the domain is the logical level. Here the system records what it believes the user understands about the logical functioning and the logical concepts embodied by the domain. For example, an intelligent interface should be capable of recognising that attempting to execute a database language 'select' statement in response to a help system prompt is a logical, or semantic, error (given that the help system cannot execute database language statements) rather than a syntactic error of attempting to obtain help on the 'select' statement.

Finally, the system records the user's (inferred) knowledge at the physical level. For example, an intelligent help system must recognise when a user understands the semantics of a command but has forgotten the syntax if it is to provide appropriate advice. At each of these levels the user model should record both the user's knowledge and the user's erroneous beliefs.
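The three levels of the student model, each recording both correct and erroneous beliefs, might be structured as in the following sketch. The data structure and the example beliefs are our own invention, not drawn from any of the systems discussed.

```python
# Hypothetical sketch: recording what the system believes the user believes
# at the task, logical and physical levels, including erroneous beliefs.

from dataclasses import dataclass, field

@dataclass
class LevelModel:
    known: set = field(default_factory=set)      # beliefs held correctly
    erroneous: set = field(default_factory=set)  # beliefs held in error

@dataclass
class StudentModel:
    task: LevelModel = field(default_factory=LevelModel)
    logical: LevelModel = field(default_factory=LevelModel)
    physical: LevelModel = field(default_factory=LevelModel)

sm = StudentModel()
# The user understands what listing a directory is for (task level) and
# what ls does (logical level), but has the syntax wrong (physical level).
sm.task.known.add("list directory contents")
sm.logical.known.add("ls shows the files in a directory")
sm.physical.erroneous.add("ls -files")

# A help system could now offer syntax help rather than a conceptual
# explanation: the semantics are known but the physical form is not.
needs_syntax_help = (bool(sm.physical.erroneous)
                     and "ls shows the files in a directory" in sm.logical.known)
print(needs_syntax_help)  # True
```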

Domain independent data may be considered either as fundamental psychological data or as profile data. Psychological data is concerned with essential cognitive and affective traits of users and is held in the psychological model component of the user model. There is an ever increasing body of experimental evidence which confirms that users differ in cognitive skills and personality traits which significantly affect the quality of certain interaction styles and user requirements [van der Veer, 1990; Egan, 1988; Jennings, Benyon and Murray, 1991]. These characteristics of users are particularly resistant to change by the user and hence are particularly important for adaptive systems. If users find it difficult or impossible to change aspects of their make-up, these are exactly the characteristics to which the system should adapt [van der Veer, 1990; Benyon, 1993b]. Spatial ability is a characteristic which appears particularly relevant to HCI [Vicente and Williges, 1988; Vicente, Hayes and Williges, 1987; Egan, 1988], particularly where users have to navigate through the conceptual space of file structures or system modes. We know of no system which adapts to users' affective states, but clearly interface agents will have to represent these characteristics if they are to fulfil their promise [Kay, 1990; Laurel, 1990].

Data concerning the background, interests and general knowledge of users is held in a user profile component of the user model. This data is not psychological in nature, but may interact with cognitive characteristics in a number of ways. For example, users with poor spatial ability may be able to deal effectively with an interface style if they have a certain level of experience using that style [Benyon, 1993b]. Knowledge of generic applications is stored in the user profile, as is much of the stereotype-inherited data, such as that of being a business traveller [Morik, 1989] or a feminist [Rich, 1983].

The user model thus consists of the three interlinking components shown in Figure 2. The student model component is created directly from the domain model (see below). Both the psychological and profile components have to be represented explicitly, preferably using some

Figure 2 Main components of user model: the student model, the user profile and the psychological model

user modelling software (see the fourth section). All aspects of the user model will require considerable prototyping, evaluation and refinement before they capture appropriate and salient aspects of users with sufficient accuracy for a given domain.
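As a minimal sketch, the three components of Figure 2 could be composed as below. The field names, trait vocabulary and the example adaptation rule are our own and purely illustrative.

```python
# A minimal composition of the three user-model components of Figure 2
# (the field names and trait vocabulary are invented for illustration).

from dataclasses import dataclass, field

@dataclass
class UserModel:
    student: dict = field(default_factory=dict)        # domain-dependent beliefs
    psychological: dict = field(default_factory=dict)  # cognitive/affective traits
    profile: dict = field(default_factory=dict)        # background and experience

um = UserModel()
um.psychological["spatial-ability"] = "low"
um.profile["experience:spreadsheets"] = "high"
um.student["knows:ls"] = True

# An adaptive system might, say, prefer a menu-based dialogue style for
# users with low spatial ability and little relevant experience.
prefer_menus = (um.psychological.get("spatial-ability") == "low"
                and um.profile.get("experience:file-managers") != "high")
print(prefer_menus)  # True
```

The point of the separation is that an adaptation rule can draw on all three components at once, as the profile/psychological interaction described above requires.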

Domain model

The user model is required in an adaptive system so that it can alter aspects of the system in response to certain inferred or given user characteristics. The domain model is required in order to define the aspects of the application which can be adapted or which are otherwise required for the operation of the adaptive system. Other terms which have been used for this concept include application model, system model, device model and task model.

The domain model serves a number of purposes. Firstly, it forms the basis of all the inferences and predictions which can be made from the user-system interaction. It is important therefore that the model is at an appropriate level of abstraction to allow the required inferences to be made. There may be mechanisms which, as a result of some observed behaviour or stated characteristic, predict that some problem will occur or infer a user's attempt to achieve some goal. In order to make these inferences the system must have an appropriate representation of the domain.

For example, in UC the fact that a user knows the command rm is used to infer that the user knows ls as well. This is only possible because UC has a domain model which specifies a certain relationship between the rm and ls commands. In TRACK [Carberry, 1989] the domain model includes sequences of actions (plans) which are required to achieve a particular goal. This model is used to infer the user's goal from their observed actions.
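Inference of this kind can be sketched as a closure over implication relationships held in the domain model. The sketch is in the spirit of UC's rm/ls inference but the rule representation is our own invention, not UC's actual mechanism.

```python
# Sketch (invented representation): a domain model relating Unix commands
# lets the system infer unobserved knowledge from observed knowledge.

# Domain model: knowing the key concept suggests knowing the listed ones.
implies = {
    "rm": ["ls"],            # knowing rm suggests the user also knows ls
    "pipes": ["ls", "cat"],  # knowing pipes suggests knowing ls and cat
}

def infer_known(observed):
    """Close the observed set of concepts under the domain model's
    implication relationships."""
    known = set(observed)
    frontier = list(observed)
    while frontier:
        concept = frontier.pop()
        for implied in implies.get(concept, []):
            if implied not in known:
                known.add(implied)
                frontier.append(implied)
    return known

print(sorted(infer_known({"rm"})))     # ['ls', 'rm']
print(sorted(infer_known({"pipes"})))  # ['cat', 'ls', 'pipes']
```

The crucial point made in the text holds here too: the inference is only possible because the relationship between rm and ls is represented explicitly in the domain model.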

In addition to inferences, the domain model forms the basis for all the adaptations which the system can make. The system can only change aspects of the application which are described by the domain model. In HAM-ANS [Morik, 1989], for example, the domain model contains a representation of hotels which includes 'quietness' as an attribute. The system exploits this representation in making a recommendation of a hotel to a particular user, by emphasizing that a particular hotel is quiet. This adaptation is only possible because the domain model contains this attribute. In TAILOR [Paris, 1989] the


domain model represents two levels of description of components of complex devices such as telephones so that it can provide appropriate explanations for different users.

Any system which is capable of evaluating its own actions also requires a domain model. The domain model holds the characteristics of the application which are measurable, so that they can be evaluated for effectiveness against the required criteria.

The final use of the domain model is to form the basis of the student model component of the user model. The system needs to record what it believes the user believes about certain aspects of the application. The domain model must describe the system so that it can store data about the user's understanding of the various concepts and functions in the application.

The domain model consists of one or more abstractions of the system. These abstractions allow the adaptive system to reason about the target application, to facilitate adaptations to other agents, to evaluate its effectiveness and to supply the student model part of the user model with its content. If the system is to be capable of adapting the screen displays, then these must be described in the domain model. If it is to adapt the functionality of the system, then the domain model must represent alternative functional capabilities and the relationships between functions. Similarly, if we want the system to alter the description of concepts then these too must be modelled. If the system is to infer user goals, then goals and plans must be explicitly modelled.

In most adaptive systems the domain model is implicit. The representations are embedded in the system code or are only available after a significant amount of processing. For example, in Grundy [Rich, 1983] the classification of books as suitable for particular types of people can only be obtained from the stereotype user models. There is no explicit representation of the domain. Chin [Chin, 1986] recognised the restrictions of this in the UC system and categorised domain knowledge into stereotypical levels of difficulty. Hence, 'simple' concepts in Unix include the commands rm, ls and cat and the concept of a file, while 'mundane' commands include vi and diff. Other commands are classified as 'complex' or 'esoteric'.
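Chin's idea of stereotyping domain knowledge by difficulty could be sketched as follows. The level assignments follow the text above, but the user-level mapping and function are our own illustration, not UC's implementation.

```python
# Sketch of stereotypical difficulty levels for domain concepts, following
# Chin's classification for UC (the user-level mapping is invented).

difficulty = {
    "rm": "simple", "ls": "simple", "cat": "simple", "file": "simple",
    "vi": "mundane", "diff": "mundane",
    # unlisted concepts default to 'esoteric' below
}

user_level = {
    "novice": ["simple"],
    "intermediate": ["simple", "mundane"],
    "expert": ["simple", "mundane", "complex", "esoteric"],
}

def presumed_known(level, concept):
    """A user at a given level is presumed to know concepts whose
    difficulty falls within that level's range."""
    return difficulty.get(concept, "esoteric") in user_level[level]

print(presumed_known("novice", "rm"))        # True
print(presumed_known("novice", "vi"))        # False
print(presumed_known("intermediate", "vi"))  # True
```

Note that the classification is still a property of the domain model; the user model merely selects a band within it.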

The benefits to be gained from having an explicit and well defined domain model are considerable and have long been recognised in AI. A separate domain model provides improved domain independence, which means that refining the domain model is much easier. This is most important, as it is unlikely that any adaptive system design will have a perfect representation of the domain at the first attempt. A separate and explicit domain model is also more easily used for multiple purposes, such as providing explanations of the system's behaviour.

We see the domain model as a description of the application which contains facts about the domain, i.e. the objects, their attributes and the relationships between objects. The domain model is the designer's definition of the aspects of the application relevant to the needs of the adaptive system. A central question in constructing a domain model is that of deciding what level of description should be represented. Since the domain model forms the basis of the student model component of the


Figure 3 Generalised task analysis model: an external task or goal (device independent) is mapped onto (internal) tasks (device dependent), which are in turn mapped onto (physical) actions

Table 1 Emphasis of some task analysis techniques

            Goal level         Internal task level   Physical action level
GOMS        Goals              Operators             Methods
TAG                            Tasks                 Actions
ETIT        External task      Internal task
Payne       Problem space      Device space
CLG         Task level         Semantic level        Syntax and lexical levels
Nielsen     Invisible layers                         Visible layers

user model, it is important that the domain model is to a large extent cognitively valid. That is, it should capture a view of the domain that is appropriate to human information processing.

One of the areas to consider for a realistic cognitive representation of computer systems is the work on task analysis techniques [Wilson, Barnard, Green and Maclean, 1988; Diaper, 1989; Payne and Green, 1989; Bösser, 1987]. These are formalisms which attempt to represent some aspect of the application. In a critical review of cognitive task analysis, Benyon [Benyon, 1992a] identified three levels of description provided by these techniques.

The user has something which she/he wishes to achieve outside the computer system (e.g. produce a letter). This goal is then translated into a number of tasks which have to be performed using a specified device such as a particular word processor (e.g. start the word processor, edit the letter, print it). Tasks are decomposed into a hierarchy of simpler subtasks. These can be further decomposed through several levels until some 'simple task' [Payne and Green, 1989] or 'unit task' [Card, Moran and Newell, 1983] is reached. These simple tasks are those which can be characterised by having no problem solving or control structure component. Simple tasks may be different for experts and novices. For example, the edit task is decomposed until we get to the level of a simple task which is to move the cursor one character forward. This general model of task analysis is presented in Figure 3, and the focus of various task analysis techniques is shown in Table 1. Nielsen [Nielsen, 1986] provides a similar argument concerning different representations. He focuses on the difference between the 'invisible layers' (conceptual features) of the application concerning the goals, tasks


and semantics of the system, and the 'visible layers', which are concerned with perceptual and physical features.

This model reflects the abstraction/presentation dichotomy of many software architectures ([Benyon and Murray, 1993], and see also IEEE Software (1989)), but also includes the goal, or external task, level of description. Goals are concerned with what the system can be used for. They are external to the system. A central concern of HCI is how they can be mapped onto the functions which are available in the application.

Internal tasks on the other hand are high level, logical descriptions of the activities which have to be done in order to achieve that goal. Tasks are dependent on the design of the system and on the functions which are available. For example, the goal of producing a letter is mapped onto the tasks of start the word processor, edit the letter, print it etc. However, in some systems, it may be unnecessary for the user to start the word processor if the application has been configured in a particular way. Tasks may be allocated to human or machine depending on the facilities available.

The physical actions are the actual keystrokes, mouse clicks, cursor movements and so on necessary to enact the tasks. These are dependent on the style of the presentation.
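The goal-to-task decomposition just described can be sketched as a recursive walk over a task hierarchy. The tree below is our own illustration of the word-processing example; a real task analysis would of course be far richer.

```python
# Sketch of the generalised task-analysis model: a goal decomposes into
# tasks, which decompose until 'simple tasks' (leaves with no problem
# solving or control structure) are reached. The tree is illustrative.

task_tree = {
    "produce a letter": ["start word processor", "edit the letter", "print it"],
    "edit the letter": ["move cursor", "insert text", "delete text"],
    # Entries absent from the tree are simple tasks: no further decomposition.
}

def simple_tasks(goal):
    """Flatten a goal into the simple tasks at the leaves of its hierarchy."""
    subtasks = task_tree.get(goal)
    if subtasks is None:
        return [goal]
    leaves = []
    for t in subtasks:
        leaves.extend(simple_tasks(t))
    return leaves

print(simple_tasks("produce a letter"))
# ['start word processor', 'move cursor', 'insert text', 'delete text', 'print it']
```

Where a simple task sits in the tree may differ between experts and novices, as noted above, which is one reason the decomposition belongs in an explicit, revisable domain model.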

These three levels are echoed in Rasmussen's consideration of mental models and HCI [Rasmussen, 1986; 1987]. On the basis of his experience of technicians and operators in complex systems such as power plants, Rasmussen proposes that five levels of abstraction are required to capture their problem solving behaviour. Models at low levels of abstraction are related to a specific physical world. The physical form of an object is the lowest level used. The next level up concerns physical functions. Humans model these in a language related to their physical properties (e.g. electrical and mechanical). These two levels relate to the physical actions of the task analysis model (see Figure 3).

Rasmussen's generalised function level involves models of the function of system objects without concern for their physical arrangement. An engineer recognises that some object is a multiplexer, say, and therefore that it performs certain functions. The multiplexer may be physically realised in a number of configurations of actual physical components. This level is a description which is unconcerned with the physical arrangement (the syntax) of a particular system. It is the (internal) task level of the task analysis model (see Figure 3).

Above the level of generalised function is the level of abstract function. Here the model is expressed in a language which depends on universal laws and symbols and not on the local physical and functional properties. The human's mental model at this level can only be formed by considering the purpose or reasons behind a system structure. At the highest level of abstraction, the system may be modelled in relation to its purpose within its overall environment. These two levels thus concern the goals which the system can be used for: the system's purpose. Rasmussen argues that in human systems higher level functions are naturally derived from the purpose of the system, whereas physical systems are responding to the

laws of nature. Physical systems are causal systems as opposed to intentional systems.

Rasmussen's model is similar to the philosophical arguments of Pylyshyn [Pylyshyn, 1984] and Dennett [Dennett, 1989]. Pylyshyn argues that what 'might be called the basic assumption of cognitive science... [is] that there are at least three distinct, independent levels at which we can find explanatory principles... biological, functional and intentional' ([Pylyshyn, 1984], p. 131, Pylyshyn's italics). He relates these terms to those used by other writers. For example, Newell [Newell, 1982] calls them the device level, symbol level and knowledge level. Pylyshyn himself prefers the terms physical, symbol and representational (or semantic) level.

The levels are distinguishable from each other and necessary because they reveal generalisations which would otherwise not be apparent. A functional description is necessary because different functions may be realised through the same physical states and different physical states may realise the same function. Although Pylyshyn is primarily concerned in this discussion with human cognitive systems, it is apparent that both physical and functional descriptions are appropriate for computer systems also. For example, the physical action of pressing ^D will result in the application performing different functions, depending on the system. Similarly, the function of moving a cursor can be accomplished in a variety of different ways. The semantic, or representational, level of description is required because certain principles or constraints are revealed. We interpret the behaviours of systems not only through function, but also through relating function to purpose, by relating the representations of the system to external entities. The purely functional view of someone dialling 911 (or 999 in the UK) does not reveal that that person is seeking help. There is no simple causal connection between dialling a certain number and the seeking of help; it is a semantic, or representational, relation. It is this level, of intentions on behalf of the user of a system, that also needs describing. We need a representation of the goals, 'beliefs' and 'desires' which the system has.

Dennett also recognizes three levels of description. We can understand the behaviour of complex systems by taking a physical view, a design view or an intentional view. The physical view (also called the physical stance or physical strategy) argues that in order to predict the behaviour of a system you simply determine its physical constitution and the physical nature of any inputs, and then predict the outcome based on the laws of physics. However, sometimes it is more effective to switch to a design stance. With this strategy, you predict how the system will behave by believing that it will behave as it was designed to behave. However, only designed behaviour is predictable from the design stance. If a different sort of predictive power is required then you may adopt the intentional stance. This is summarized as follows:

• Treat the system as a rational agent.
• Figure out what beliefs it ought to have given its place in the world and its purpose.
• Figure out what desires it ought to have.
• Predict that this rational agent will act to further its goals in the light of its beliefs.


Table 2 Comparison of domain levels

Our model                Task level                      Logical level         Physical level
Dennett                  Intentional stance              Design stance         Physical stance
Pylyshyn                 Representational level          Symbol level          Physical level
Rasmussen                Purpose and abstract function   Generalised function  Physical form and physical function
Task analysis            External task or goal           (Internal) task       Action
User interface software  Abstraction level                                     Presentation level

• Hence, predict what the agent will do on the basis of what it ought to do.

It is clear that the intentional stance provides a useful strategy for understanding humans and for predicting what will happen. It is, therefore, an important part of the domain model which we require, since that model will form the basis of our inferences and predictions of future behaviour.

The conclusion of this analysis suggests that a cognitively valid domain model should capture descriptions of the application at three levels, which we shall refer to as the

• task level,
• logical level,
• physical level.

At each of these levels, the structure (the objects and relationships which exist) and the processing of the application need to be described. A comparison of the various domain representations which we have considered is shown in Table 2.

The task level describes the application from the perspective of what it can be used for and the general components or subsystems which it has. This is the level at which a system is related to external entities, to achieve goals in the outside world. The task level is important because the user needs to be aware that (for example) an e-mail system can be used to send messages and that it can be used for elementary word processing, but that it is not ideally suited to that purpose. Unless users understand the purpose of a system they will be left frustrated by many of its shortcomings.

The logical level of the domain model equates with the semantic (in task analysis terms) or functional level. It is the level of generalised functions. The term 'logical' is used here because it emphasizes that, in order to achieve some purpose, certain functions have (logically) to be performed and certain objects have to exist. This is the design stance, where the system is described in logical functions/concepts: a description which concentrates on how something works. In agent-based interaction this level is particularly important. Logically something has to be done or some data has to be provided in order to fulfil a purpose [Benyon, 1992b]. Whether it is done by human or agent is a design decision.

The physical level provides a causal description concerned with how simple tasks are sequenced and how objects and displays are laid out. It is concerned with the presentation of the system, dialogue control and physical actions. The domain model must also describe the mappings between the levels. It does not contain everything

about the application. The domain model represents the aspects of the application which are to be used in providing the adaptive capabilities.

Interaction model

The third component of an adaptive system is its representation of the actual and designed interaction between user and application: the interaction model. This use of the term is thus very different from the interaction model proposed by Norman [Norman, 1986], which is a theoretical representation of human-computer interaction in general. It is much closer to the notion of a discourse model, or dialogue model. However, there is still much debate over exactly what is meant by a discourse model and what type of data it contains (see the discussion in [Computational Linguistics, 1988]). We will continue with the term interaction model and consider its relationship with discourse models further below.

An interaction is a user making use of the system at a level which can be monitored. Data gathered from monitoring this interaction can be used to make inferences about the user's beliefs, plans and/or goals, long-term characteristics such as cognitive traits, or profile data such as previous experience. The system may tailor its behaviour to the needs of a particular interaction or, given suitably 'reflective' mechanisms, the system may evaluate its inferences and adaptations and adjust aspects of its own organization or behaviour.

In some other representations (e.g. [Alty and McKell, 1986]) the interaction model is seen as a part of the domain model. However, we like to see the components as explicit and separated from other parts of the architecture because of the gains which arise from understanding the roles of the different models. For example, Alty and McKell's application model can trap errors. However, an error can only be understood with respect to some notion of what is correct at some level of description. The definition of an error belongs with the representation of domain concepts, which is rightly kept in the domain model. It is the role of the interaction model to identify and deal with errors.

There are two main aspects to the interaction model:

• capturing the appropriate raw data,
• representing the inferences, adaptations and evaluations which may occur.

Raw data is obtained through maintaining a dialogue history or dialogue record (DR), which is a trace of aspects of the user's observed behaviour. The dialogue record is kept for as long as is required according to the

206 Knowledge-Based Systems Volume 6 Number 4 December 1993


[Figure 4: Portion of a dialogue record from the Basser data project, with associated explanation. The left column shows the monitor output (code sequences such as MdMcMe); the right column gives the interpretation (e.g. 'position, drag, cut'). Codes indicate commands (e.g. M} is mouse button 3, Me is select 'cut' from a pop-up menu, Ks is command 'search'), addresses (e.g. 'A,' is 'to end of file'), warnings, errors, etc.]

needs of the adaptive system and is then deleted. It may contain details such as the sequence of keystrokes, mouse clicks and mouse movements made, timing information and system messages. It is an abstraction of the interaction, since it cannot capture everything which takes place.

A portion of the dialogue record from the Basser data project [Thomas, Benyon, Kay and Crawford, 1991] is

shown in Figure 4, with an accompanying interpretation. This project is monitoring the use of the editor sam [Pike, 1987] by over 1000 users at the University of Sydney, Australia. The purpose of the dialogue record is to enable inferences to be made about how commands are learnt, how they are used, the difficulties which different groups of users have, and so on. The dialogue record contains the


mouse movements and other commands used by the users. Mouse commands map directly to menu selections, and so the data can be sensibly interpreted. However, it was not possible to record various aspects of the interaction, such as the size and shape of windows created or the actual position of the mouse on the screen. Other aspects of the interaction left out of the dialogue record include the command arguments entered. Similarly, it was decided to record only the total number of keystrokes and backspaces entered when inserting text. The dialogue record is an abstraction of the actual dialogue, suitable for the purpose at hand.

Although the dialogue record contains only low-level data, it is surprising how much can be inferred if the necessary knowledge is available (i.e. provided by a human or stored in a domain model). For example, the command sequence MdMe (position and cut) indicates a possible error, since there is no 'drag' (or 'highlight') operation which indicates the data to be cut. A timestamp (code T) is output every five minutes which shows the total number of characters inserted. This allows us to calculate how many were typed into a command window, even though such data is not recorded directly.
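The kind of pattern-spotting described above is straightforward to mechanise. The following sketch is a hypothetical reconstruction (the Basser project's actual analysis code is not described here); it uses the codes from Figure 4.

```python
# Scan a dialogue record for the MdMe pattern: a cut (Me) immediately after
# a position (Md) with no intervening drag (Mc) is flagged as a possible error.

def find_possible_errors(record):
    """Return the indices at which a suspect MdMe pair begins."""
    errors = []
    for i in range(len(record) - 1):
        if record[i] == "Md" and record[i + 1] == "Me":
            errors.append(i)
    return errors

# MdMcMe is position-drag-cut (fine); the trailing MdMe is position-cut (suspect).
trace = ["Md", "Mc", "Me", "Md", "Me"]
print(find_possible_errors(trace))  # the MdMe pair starts at index 3
```

Richer inferences, such as deriving command-window typing from the periodic T timestamps, would follow the same shape: simple scans over the record, interpreted against domain knowledge.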

The second part of the interaction model is a description of the 'stereotyped' interaction: the interaction knowledge base (IKB). This describes the inferences that can be made from the dialogue, the evaluations of the interaction which are possible, and the changes (adaptations) which the system can accomplish.

The user model and domain model define what can be inferred. The IKB actually does the inferencing, by combining the various domain model concepts to infer user characteristics or by combining user model concepts to adapt the system. The IKB represents the relationship between domain and user characteristics. It provides the interpretation of the dialogue record. For example, given the dialogue record in Figure 4, an attribute in the student model part of the user model

KSub = knowledge of substitute command [1..5]

and definitions in the domain model for Ks (substitution command), Ev (substitution error), Kc (change command), Kx (global substitution), the IKB might contain the rule

IF KsEv occurs in dialogue > 3 times

AND KcKx does not occur

THEN KSub = 1 OR KSub = KSub - 1

which infers the level of a user's knowledge of the substitute command (by noting the errors resulting from Ks) and updates the user model appropriately. Adaptations may be accomplished in a similar fashion. For example,

IF KSub < 2 AND KsEv occurs in dialogue

THEN SubHelp

which calls a routine (SubHelp) to offer advice on the substitute command if users have a low value of KSub (knowledge of the substitute command) and if they make


a substitution error. Notice that SubHelp and the ensuing interaction now become part of the dialogue record.

A sophisticated adaptive system may subsequently evaluate its own recommendation by analysing the dialogue record further and performing some check and repair activity (a routine called Check) on the SubHelp routine:

IF SubHelp has been called

AND KsEv occurs in dialogue

THEN Check SubHelp
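The three rule types above (inference, adaptation and evaluation) could be realised in a simple interpreter along the following lines. This is a sketch under our own assumptions: KSub, SubHelp and Check are the names used in the text, but the surrounding machinery (the counting helper, the dictionary user model) is hypothetical.

```python
def count_seq(record, seq):
    """Count occurrences of a code sequence (e.g. Ks followed by Ev) in the record."""
    return sum(record[i:i + len(seq)] == seq
               for i in range(len(record) - len(seq) + 1))

def apply_ikb(record, user_model, helped=False):
    """One pass of the inference, adaptation and evaluation rules from the text."""
    actions = []
    ks_errors = count_seq(record, ["Ks", "Ev"])
    # Inference rule: repeated substitution errors with no global substitution
    # (KcKx) lower the inferred knowledge of the substitute command.
    if ks_errors > 3 and count_seq(record, ["Kc", "Kx"]) == 0:
        user_model["KSub"] = max(1, user_model["KSub"] - 1)
    # Adaptation rule: low KSub plus a substitution error triggers SubHelp.
    if user_model["KSub"] < 2 and ks_errors > 0:
        actions.append("SubHelp")
    # Evaluation rule: if help was already given and errors persist,
    # check and repair the SubHelp routine.
    if helped and ks_errors > 0:
        actions.append("Check SubHelp")
    return actions

record = ["Ks", "Ev"] * 4        # four substitution errors, no KcKx
model = {"KSub": 2}
print(apply_ikb(record, model))  # KSub drops to 1, so SubHelp fires
```

Note how the rules only mention attributes (KSub, Ks, Ev, Kc, Kx) defined elsewhere, mirroring the constraint, stated below, that every attribute in the IKB must be defined in the user model or the domain model.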

The interaction model is a vital part of an adaptive system. The developer of adaptive systems has to decide the level of abstraction which is required for the dialogue record, the individual user data and the interaction knowledge base. In our approach, the interaction model must be constrained so that all attributes used in the representation are defined either in the domain model or in the user model as appropriate. Automatic mechanisms are required for this through software support (see further below).

Relationship between models

We have described a conceptual structure for adaptive systems which consists of three models: the user model, the domain model and the interaction model, as illustrated in Figure 5. Although the discussion has generally been developed as if there is only one of these in each adaptive system, this may not be the case.

For example, in a given system there may be several user roles and hence several user models. The focus of the system may be a person. In general there may be a large number of user models representing the individual agents in a multi-agent co-operative system. Similarly, there may be more than one domain model. Conceptually, our approach is that we model users, or agents, who are characterised by having one sort of data (the psychological model, personal profile and the student model), and domains, which are characterised by having another sort of data (the task, logical and physical levels of description).

The interaction model, as expressed through the adaptive, inference and evaluation mechanisms, may be extremely complex, embodying theories of language, pedagogy or explanation. Morik [Morik, 1988] emphasises the difference between the interaction-independent theory of language and the interaction-dependent nature of the actual dialogue. This reflects the distinction which we have drawn between the dialogue record and the mechanisms in the IKB which use that data. In general, the interaction model will represent the strategies and theory of the particular type of system.
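The three-model structure, and the possibility of several user and domain models, can be pictured as a simple composition. All names here are our own illustrative shorthand, not taken from any of the systems discussed.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    psychological: dict = field(default_factory=dict)  # cognitive traits
    profile: dict = field(default_factory=dict)        # background data
    student: dict = field(default_factory=dict)        # domain knowledge

@dataclass
class DomainModel:
    task: dict = field(default_factory=dict)      # external goals supported
    logical: dict = field(default_factory=dict)   # logical functioning
    physical: dict = field(default_factory=dict)  # screens, operations

@dataclass
class InteractionModel:
    dialogue_record: list = field(default_factory=list)  # monitored trace
    ikb_rules: list = field(default_factory=list)        # inference/adaptation/evaluation

@dataclass
class ThreeModelSystem:
    # A multi-agent system may hold several user models and domain models.
    user_models: list
    domain_models: list
    interaction: InteractionModel

system = ThreeModelSystem([UserModel()], [DomainModel()], InteractionModel())
print(len(system.user_models))
```

The point of the separation is the one argued above: each kind of data has a natural home, and the IKB mediates between them.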

METHODOLOGY FOR ADAPTIVE SYSTEM DEVELOPMENT

We take the view that whether a system should have an adaptive capability or not is essentially a human-


[Figure 5: Overall architecture for an adaptive system, showing the user model, the domain model, and the interaction model with its interaction knowledge base.]

computer interaction (HCI) activity. Alternatives to adaptive systems will often exist for system developers, including training users in the use of a system, the use of other support mechanisms, and customisation facilities. The suitability of each of these solutions will depend on the nature of the system, the environments in which it will be used and the characteristics of its users. Adaptive systems are one solution to usability problems [Benyon, 1993a]. However, we do expect that incorporating more knowledge into the interface and providing systems with an adaptive capability will become an increasingly attractive option for designers.

As an HCI activity, the development of adaptive systems shares problems with all HCI development. Fischer [Fischer, 1989] summarises these as follows.

• There is no real theory of HCI.
• Current 'life cycle' models are inadequate because of the ill-structured nature of HCI problems.
• There is a great degree of 'design instability' in HCI and hence there is a need to evaluate and re-design systems.
• Requirements cannot be fixed and used as the basis for deriving a formal design equivalent.

Hartson and Hix [Hartson and Hix, 1989] also emphasise that designers take an 'alternating waves' approach to design rather than a strict top-down approach, and that evaluation becomes central. HCI needs to deal with cognitive design issues which do not have any generally accepted notations.

These problems of designing effective HCI systems have led various researchers to suggest alternatives. These are generally characterised by:

• a changed view of the systems life cycle,
• an approach to systems development based on prototyping, evaluation and iteration,
• the need for effective software support tools.

[Figure 6: Star approach to interactive systems development, with evaluation at the centre (adapted from [Hartson and Hix, 1989]).]

We concur with the spirit of these suggestions, and the current discussion should be seen in this context. However, the fact that text is a linear medium can make descriptions of iterative processes difficult. Readers should bear in mind that the textual description inevitably hides the process of continual refinement and iteration which underlies systems development. We deal with the important role of software tools further below.

General approach to HCI development

The alternative to the traditional life cycle recommended in [Hartson and Hix, 1989] is a star representation which has evaluation as central. We have changed the labels of some of the components so that they correspond with our terminology, which is explained below. The star model is shown in Figure 6, but it is important to remember that all the activities are highly interdependent and that two or more activities may occur at the same time.

Systems analysis

Systems analysis is the process of understanding the problems of any current system and the requirements for


any replacement system. Systems analysis inevitably involves some design considerations. It consists of five interrelated and interdependent activities:

• functional analysis, which aims to establish the main functions of the system;

• data analysis, which is concerned with understanding and representing the meaning and structure of data in the application; data analysis and functional analysis go hand in hand to describe the information-processing capabilities of the system [Benyon, 1990];

• task analysis, which focuses on the cognitive characteristics required of the users by the system, e.g. the search strategy required, cognitive loading, and the assumed mental model; task analysis is device dependent [Benyon, 1990] and hence it requires some design to have been completed before it can be undertaken;

• user analysis, which determines the scope of the user population which the system is to respond to; it is concerned with obtaining attributes of users which are relevant to the application, including aspects of each of the components of a user model identified further above;

• environment analysis, which covers the environments within which the system is to operate; this includes physical aspects of the environment and 'softer' features such as the amount and type of user support which is required.

Specify user requirements

During the development process, user requirements will emerge. There are three aspects to the user requirements:

• functional requirements, describing what the system must be able to do;

• data requirements, describing the data which the system is to store and manipulate;

• usability requirements, which specify the effectiveness, learnability, flexibility and attitude [Shackel, 1990].

Design

In line with the arguments presented above, the design of a system can be seen as consisting of three levels of description (task, logical and physical) and the mappings between them. Once the requirements have been obtained at a suitable level of detail, the application can be specified at these three levels. The application may be represented in terms of structure and/or functions at each of these levels, and any of a number of notations may be used.

• The task level describes the (external) tasks, or goals, which the system is to support.

• The logical level describes the logical functioning and structure of the application.

• Physical design is concerned with the layout of screens and the construction of icons etc. (the representational aspects) and with the operational aspects (how objects behave).

• The task/logical mapping describes the cognitive processing required by users as they translate their goals into the system's functionality.

• The logical/physical mapping describes the consistency, learnability and other aspects of the actual system use.

The transition from logical to physical design is particularly important, since it is here that functions are allocated either to the human or to the machine, and it is here that an adaptive capability is most likely to be considered. Designers face a trade-off: if a function is allocated to the user, it will require additional cognitive processing from them, but if it is allocated to the machine, then the system will require an adaptive capability.

Prototyping

The feature of any prototype is that it does not have a generalised lifetime: it may be built and thrown away, or it may evolve into the final system.

Prototypes must be quick, easy and cheap to construct and hence they rely on the provision of good software tools.

Implementation

This is concerned with completing the system: programming it in an implementation language, thorough testing and other quality-assurance processes, acceptance, and completion of the documentation.

Evaluation

Frequent evaluation is central to the methodology. It may take the form of a very quick discussion with users or a formal review procedure. It may require a lengthy experiment, or it may demand the skills of an HCI expert. It is also important to recognise that evaluation is applicable at all stages of development; analysis should be evaluated as well as requirements and design. Evaluation may take place in order to help generate ideas or to clarify details later in the development process, and thus be primarily formative rather than summative.

Adaptive system development

The development of any interactive system, and the issues which must be addressed, have been described above. However, adaptive systems differ from other interactive systems in that they are characterised by design variety. Rather than trying to obtain a single solution to a problem, the designer specifies a number of solutions and matches those with the variety and changeability of users, environments and tasks.

At some point in the system development process, the designer may decide that the system requires an adaptive capability. This may be to meet a specified user requirement, it may arise from the analysis, or it may happen during the transition from a logical to a physical design. Once this option has been identified, the adaptive system must be developed in parallel with the application, using the same development approach.

The specification of the adaptive system part of the


application requires the specification of the domain, user and interaction models. This in turn requires the designer to focus on what features of the system are to be adaptive, what characteristics of the users it will adapt to, and how the adaptations will be accomplished. It may be that the whole system is to be adaptive; in this case, the domain model and the system design are the same thing. However, usually only a part of an application will require an adaptive capability, and so it is this part which must be isolated and specified in the domain model. We expect there to be considerable experimentation and refinement during the development of the adaptive component of a system, and software support is vital to the success of adaptive systems (see below).

An important issue in adaptive system development is that of ensuring that the adaptivity can be controlled and measured, and that the reasons for having an adaptive mechanism are known and clear. We believe that the architecture and methodology presented here contribute to this. However, others [Browne, Totterdell and Norman, 1990] suggest that metrics are necessary in order to control the development process. Their experience of developing a number of prototype adaptive systems in the UK led them to the conclusion that the whole issue of how, where and when to adapt needed to be explicitly stated. They recommend that there should be the following measures:

• an objective metric, which describes the purpose of the adaptive facility;

• a trigger metric, which identifies what will trigger the adaptive mechanism;

• an assumption metric, which describes the rationale behind the objective metric;

• a recommendation metric, which describes what recommendations are possible;

• a generality metric, which describes the generality of the findings;

• an implementation metric, which relates to the effect that the adaptive mechanism will have on the implementation.
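One way to make these six measures explicit during development is simply to record them alongside each adaptive mechanism. The sketch below is hypothetical (the field values describe the SubHelp example from the previous section, as we might fill them in; Browne et al. give no such notation):

```python
from dataclasses import dataclass

@dataclass
class AdaptivityMetrics:
    """The six measures recommended by Browne et al. for one adaptive facility."""
    objective: str        # purpose of the adaptive facility
    trigger: str          # what fires the adaptive mechanism
    assumption: str       # rationale behind the objective
    recommendation: str   # what recommendations are possible
    generality: str       # how far the findings generalise
    implementation: str   # effect on the implementation

sub_help = AdaptivityMetrics(
    objective="reduce substitution-command errors",
    trigger="KsEv occurring with a low KSub value",
    assumption="repeated errors indicate missing command knowledge",
    recommendation="offer SubHelp advice",
    generality="editor command dialogues",
    implementation="one IKB rule plus a help routine",
)
print(sub_help.trigger)
```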

The notion of metrics in HCI is currently receiving much attention, but it is still in its formative stages. We generally accept that metrics are an important development in HCI and that they have a role to play in adaptive systems. However, we believe that it is unnecessary to have separate adaptivity metrics. Adaptivity is a usability issue and can be accommodated by usability metrics.

In considering adaptive systems as solutions to interface problems, the designer must consider an appropriate level of adaptivity. [Browne, Totterdell and Norman, 1990] examines adaptivity in natural and computer systems, and identifies four levels of adaptive system. (Simple) adaptive systems are stimulus-response systems or simple rule-based systems. They are characterised by their ability to produce a change in output in response to a change in input; their adaptive mechanism is 'hard-wired'. Self-regulating systems monitor the effects of the adaptation on the subsequent interaction and evaluate this through trial and error. A mechanism is required which maintains a history or at least provides immediate feedback. This evaluation mechanism then selects from a range of possible outputs for any given input. Self-mediating systems monitor the effect on a model of the interaction: possible adaptations can be tried out in theory before being put into practice. Self-modifying systems are capable of changing their representations. This allows self-modifying systems to reason about the interaction.

In terms of the architecture which we have developed in this paper, we can see the levels of adaptivity in respect of the models possessed by the adaptive system. 'Simple' adaptive systems possess a dialogue record and some adaptation mechanism. They do not have to have explicit user and domain models, though refining the mechanisms is made more difficult if they do not. Many natural-language systems fall into this category.

The move to a learning system, a self-regulating adaptive system, is significant. Such systems now require inference mechanisms. They have to be able to abstract from the dialogue record and capture a logical or task-level interpretation of the interaction. Similarly, the system must now include a representation of its own 'purpose' in its domain model. Self-mediating systems require a more sophisticated model of the interaction and of the other system, and require evaluation mechanisms in order to determine how effective a possible adaptation may be. Self-modifying systems are meta-adaptive systems in that they can change the user, interaction and domain models.

An important difference between Browne et al.'s approach and our own is that they see adaptation as being adaptation to an environment whereas we see adaptation as being to another system. Whereas Browne et al. argue that

if X occurs in the interaction then the system will do Y

our approach is to say that

if X occurs then Y is a characteristic of the other system

if Y is a characteristic of the other system then the system should do Z

In other words, Browne et al. do not explicitly model the other system; they only react to its behaviour. We believe that this behaviourist view of interacting systems is restrictive, principally because it is difficult to transfer effects from one situation to another. By building realistic, humanistic representations of people, we are able not just to make better long-term use of the inferred characteristics, but also to discover any human cognitive processing requirements demanded by particular system features.
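The contrast between the two approaches can be made concrete: a direct stimulus-response rule against the two-step route through an explicit user model. X, Y and Z are placeholders, as in the text, and the code is our own illustration rather than either group's implementation.

```python
def behaviourist(event):
    """Browne et al.'s style: react directly to behaviour, keeping no model."""
    return "Y" if event == "X" else None

user_model = {}

def infer(event):
    """Step 1: X in the interaction implies characteristic Y of the other system."""
    if event == "X":
        user_model["Y"] = True

def adapt():
    """Step 2: characteristic Y implies the system should do Z."""
    return "Z" if user_model.get("Y") else None

infer("X")
# The inferred characteristic persists in the model, so it can transfer
# to later situations; the behaviourist rule forgets X as soon as it fires.
print(behaviourist("X"), adapt())
```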

Software support

The adaptive system developer needs to consider the opportunities for adaptivity provided by the framework of the three models and their relationships identified in the third section. Developing the user model forces the designer to focus on the psychological, profile and domain (student) knowledge which users will require. The domain has to be described at the task, logical and


physical levels. Specifying the interaction model requires the designer to consider what data is available from the interaction (the dialogue record) and the inferences, adaptations and evaluations (the IKB) which can be made from the dialogue record within the context of the domain and user models.

As with many aspects of HCI design, the designer cannot be expected to code all the contents of the interaction knowledge base and specify the user and domain models using a low-level language. Indeed, our experience with the first project (see the fifth section) demonstrated this quite convincingly. Hence, software support became an important and integral part of the adaptive system methodology.

The Adaptive Systems Development Environment (ASDE) [Benyon, Jennings and Murray, 1990; Benyon and Murray, 1993] is a designer's toolkit which tailors the general-purpose nature of an AI toolkit to the specific needs of an adaptive system developer, and which exploits the UIMS facilities of the underlying system at run time. At present, we do not seek to develop the 'self-modifying' systems described in the fourth section. Accordingly, the software does not facilitate the development of such systems; changing the representations is still the job of the designer.

The ASDE is similar in concept to an expert system shell. During the development phase of building an adaptive system, the developer employs the ASDE to specify the characteristics of the target adaptive system and its users, their interaction, and the mechanisms which will guide the inferencing, adaptations and evaluations which are to take place. Once the structure of a particular adaptive system has been established, the developer uses the ASDE to specify the values of relevant attributes of individual users or classes of user and the inferences, adaptations and evaluations which are to be made from these values. These are instantiated in the target system with which the user interacts.

The ASDE reflects the architecture of adaptive systems described above. The domain model describes the structure and functioning of the domain of the target (adaptive) system at the three levels of abstraction and provides the basis of an 'overlay' model which is used as the representation of the user's knowledge of the domain. The student model automatically inherits the domain model. Each user (individual or class) is represented in the adaptive system as a system object. When individuals use the target system, they are allocated to one or more stereotypes on the basis of the knowledge which the system has acquired about the user. The individual then becomes a member of that class and inherits the characteristics of the class. Conflicting classifications can be handled through the conflict-resolution mechanisms provided by the AI toolkit. As the system learns more about the user, so the user model becomes increasingly individual. The adaptation and inference rules are specified through the AI toolkit's own mechanisms.
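The stereotype mechanism can be caricatured with ordinary class inheritance. The ASDE itself used an AI toolkit's frame and inheritance facilities; the class and attribute names below are our own hypothetical examples, not the ASDE's.

```python
class Stereotype:
    """Class-level defaults which an individual user model inherits."""
    defaults = {}

class NoviceEditor(Stereotype):
    defaults = {"KSub": 1, "offer_help": True}

class ExpertEditor(Stereotype):
    defaults = {"KSub": 5, "offer_help": False}

class IndividualUser:
    def __init__(self, stereotype):
        # Becoming a member of the class means inheriting its characteristics.
        self.values = dict(stereotype.defaults)

    def observe(self, attribute, value):
        """As the system learns more, the model becomes increasingly individual."""
        self.values[attribute] = value

user = IndividualUser(NoviceEditor)
user.observe("KSub", 3)  # acquired knowledge overrides the stereotype default
print(user.values["KSub"], user.values["offer_help"])
```

Conflict resolution between multiple stereotypes, which the ASDE delegates to the AI toolkit, is deliberately omitted from this sketch.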

An important aspect of the tool is that the models are explicit and can be displayed and edited if necessary. This facility is vital for the user model, both in respect of the privacy of individuals and their rights under data-protection legislation, and in order to maintain the accuracy of the model. Making the interaction and domain models

explicit and separable from the other components of the adaptive system facilitates changing the mechanisms as details of the interaction are understood. The designer describes these mechanisms through the ASDE.

There are several other software systems concerned with developing aspects of adaptive systems [Kobsa, 1993]. GUMS [Finin, 1989] is strictly a user modelling system and appears strong on the implementation of user stereotypes. These stereotypes include inference rules (which are part of the interaction model in the ASDE). However, GUMS does not place so much emphasis on the domain modelling or adaptation and evaluation rules, preferring to see these as parts of the user model. Kay's system [Kay, 1993] is more wide-ranging in that it seeks to provide tools for allowing users access to their user models (for viewing, editing etc.). It also provides an architecture to handle various levels of confidence which the system has in its inferred data. As with the system described here, Kay identifies a conceptual and a physical level of knowledge which the user may have of the domain. Kobsa's system [Kobsa, 1990] employs a graphical function/concept tool that is similar in structure to the domain model described here. The UM-tool described by Brajnik [Brajnik, Guida and Tasso, 1990] takes a similar approach to modelling users and attaches methods of obtaining information to the frame slots.

EXPERIMENTAL WORK

In the first project, during 1986-88, we were primarily concerned with the mechanisms and feasibility of adaptation. A small ITS was developed and implemented in GC Lisp. This system directed students through the learning material according to their inferred learning style and level of expertise and their performance during the current interaction session. The dialogue was all predetermined, and so navigation could be controlled by directing the user to the next node in the system [Benyon and Murray, 1988].

Some pilot experiments with this system [Benyon, Innocent and Murray, 1987; Benyon, Milan and Murray, 1987] confirmed that, whilst the mechanisms were effective in that they successfully tracked a user through the dialogue, were able to make inferences from this behaviour, and hence were able to adapt the dialogue based on these inferences, the inferences which were made did not correlate with either the users' own assessments of themselves or other more objective measures.

The mechanisms were laborious to code in Lisp and this represented a serious developmental restriction. As a result of these experiences, the second project was designed, and this tackled three main areas.

• There was the need to find reliable metrics of cognitive factors which genuinely affected the interaction.

• We wanted to implement the system in a much richer environment, in order to exploit facilities such as automatic inheritance mechanisms, rule specification, frame creation and user-interface management offered by an AI toolkit.

• We needed to attend to the development of a shell-type system which would facilitate the creation and


amendment of adaptive systems by providing a set of tools to enable rapid and incremental changes to be made. The provision of such a toolkit meant that the architecture of an adaptive system had to be carefully understood and specified.

The second and third items above have been well described in this paper. The experimental work was aimed at finding out more about the first item, and was undertaken with the following aims:

• to identify one or more reliable and measurable individual differences which affect the interaction;

• to identify aspects of the interaction which could accommodate these differences and improve the performance of the disadvantaged group;

• to identify a way of inferring which group an individual user was in;

• to develop a mechanism which automatically adapted the system appropriately.

Pilot experiment

In the first experiment, reported fully in [Jennings, Benyon and Murray, 1991], an on-line shopping-catalogue database system was developed. The database contained details of all the items which were available from the catalogue. The user could query the catalogue to find out, for example, if there was a vacuum cleaner costing less than £65. All the items in the catalogue had three attributes, for example colour, size and cost. Five functionally equivalent interfaces (iconic, button, command, menu and question) were developed for the database. These were chosen as being typical of interface styles which are widely available. 24 users performed similar tasks using each of the interfaces. The time taken to complete a number of standard tasks was recorded and taken as the measure of performance. Each user was tested for five individual differences (spatial ability, verbal ability, field dependency, logical/intuitive thought, and short-term memory ability) using standard psychometric tests. Those with the top 12 scores on each test formed the high group, and those with the 12 lowest scores a low group. The main results of this experiment are shown in Figure 7.

The results from this experiment suggested a number of interesting things. Since there were five different interfaces, those on which users achieved a similar performance indicated something similar about the interface styles. Differences on the same interface indicated something different about the user groups. There are clearly performance differences between users and between interfaces. Some of these are due to the characteristics of the particular interface (for example, the iconic interface generally required fewer mouse clicks than the other interfaces), but the relative differences in the slopes of the lines indicate something more fundamental. Most notable is the large difference in performance on the command interface by the high and low spatial ability groups, but other differences were also important [Jennings, Benyon and Murray, 1991].

The feature which we decided to concentrate on was the level of spatial ability and field dependence (which correlated significantly) and the characteristics of the command interface, which we took to be the openness and flexibility of the dialogue. Intuitively, there appeared to be some conceptual spatial activities concerned with moving around such a dialogue which were reliably measured by the spatial ability test we had used.

Second experiment

In the light of these results, a second database system was developed which had two interfaces. One (a command interface based on SQL) was designed to be open and flexible, and the other (a menu interface) more constrained. The hypothesis we were testing was that users with a good spatial ability would perform better on the open dialogue. By its nature the constrained dialogue was slower and more restrictive, and so it would take longer for users to complete a task. However, poor spatial ability users were expected to perform better on the menu interface, since they would make more mistakes and spend more time thinking when using the command interface.

30 subjects (all graduates) took part. They all performed similar tasks on the two interfaces. Users were tested for spatial ability and were also asked how they perceived their own use of computers (frequent/occasional) and their experience with similar interfaces (on a scale from 1 = low to 5 = high). The time taken to complete a number of tasks was taken as the measure of performance.

Initially only one-half of the hypothesis was confirmed, namely that people with a high spatial ability would prefer the command interface. The results did not show a significant performance difference for those with low spatial ability. The results were re-examined, and it was discovered that there was an additional factor involved: the level of experience with a command interface. When this was taken into account, the performance difference in the groups was significant. Figure 8 shows that only the low spatial, low experience group (20% of the subjects) performed significantly better using the menu interface as opposed to the command interface. It appeared that differences in performance could be explained by spatial ability plus previous command language experience. The number of errors made shows this quite convincingly (see Figure 9). The low spatial, low experience group made an average of nearly three errors when using the command interface, whereas the other groups made practically no errors. Since command language experience would change with the frequency of computer use, this characteristic of individual users also needed to be considered.
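This kind of re-analysis can be illustrated with a small cross-tabulation sketch; the error counts below are invented to mirror the reported pattern, in which only the low spatial, low experience group stands out:

```python
# Hypothetical sketch: cross-tabulate mean command-interface errors by
# spatial ability and command language experience. Data are invented.

from collections import defaultdict
from statistics import mean

# (spatial ability, command experience, errors on command interface)
subjects = [
    ("low", "low", 3), ("low", "low", 2), ("low", "low", 4),
    ("low", "high", 0), ("low", "high", 1),
    ("high", "low", 0), ("high", "low", 0),
    ("high", "high", 0), ("high", "high", 1),
]

groups = defaultdict(list)
for spatial, experience, errors in subjects:
    groups[(spatial, experience)].append(errors)

for key, errs in sorted(groups.items()):
    print(key, round(mean(errs), 1))  # only ('low', 'low') averages 3.0
```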

The results of this experiment suggested that an interface style which minimises navigation and constrains the dialogue, such as the menu interface, is suitable for users with both a low spatial ability and low experience of using command language style interfaces. On the other hand, an interface which facilitates an open and flexible dialogue, such as the command interface, is suitable for all users with a high spatial ability, whatever their previous experience, and for users with a low spatial ability who

Knowledge-Based Systems Volume 6 Number 4 December 1993 213



[Figure: five panels of line graphs plotting task completion time (secs, 200-400) against low and high groups for each interface style.]

Figure 7 Main results of pilot experiment: (a) spatial ability, (b) verbal ability, (c) field independence, (d) short-term memory, (e) thinking. [B: button interface; C: command interface; I: iconic interface; M: menu interface; Q: question interface]

have a high level of experience of using command language interfaces.

It is not clear to what extent these results would generalise to other users and systems. In the first instance, non-graduate user groups may have much lower levels of spatial ability which cannot be compensated for by high experience levels. Secondly, although the menu interface appears to be free from the effects of spatial ability, it is




[Figure: bar charts of mean session time (secs, 0-500) by command language experience (low/high), shown separately for low and high spatial ability groups, comparing the command and menu interfaces.]

Figure 8 Mean test session times

[Figure: bar charts of mean errors per person per session by command language experience (low/high), shown separately for low and high spatial ability groups, comparing the command and menu interfaces.]

Figure 9 Mean number of errors per person per session

considerably slower (and therefore less suitable) for a large proportion of the users. Thirdly, the SQL statements which users were required to perform were relatively simple. More complex usage of SQL may demonstrate increased problems for low spatial ability groups.

Adaptive system

We believe that the results of this experiment demonstrate exactly the sort of problem faced by many system designers. In terms of the methodology presented earlier, we may consider the experimental work to constitute the systems analysis stage. The designer now has a rationale

for adopting an adaptive system solution* to the interface design. There is a large group of users (20%) who require one sort of interface and another group who require another. Being infrequent computer users, the users who require the more constrained interface should not be expected to customise the system. Training in the command language is not a viable option, as the syntax would soon be forgotten. Although the results of the pilot experiment suggest that iconic or button interfaces may be suitable for all users, it is not always feasible to provide

*A more detailed version of this example, relating it to the process of development, is provided in [Benyon, 1993a].




Table 3 Domain model for sample adaptive system

Task level
  Tasks {1..N}: a task is a successful completion of a query.

Logical level
  Errors {1..N}: an error is defined as
  • an incorrect formulation of a query (including specifying attributes in the wrong place, etc.),
  • a missing or incorrect operator (such as <, >, etc.),
  • an inappropriate command (e.g. typing 'select' when in the help system).
  ATCT {1..N} seconds: the average task completion time, calculated as the total time to complete a block of 12 tasks divided by 12.

Physical level
  Interface {menu, command}: an interface is a coherent style of interaction.

Table 4 User model for sample adaptive system

Model      Attribute            How obtained                                      Values
Cognitive  Spatial ability      Inferred from the interaction (inference rule 1)  {high, low}
Profile    Command experience   By asking the user; from inference rule 1         {high, low, none}
Profile    Computing            By asking the user                                {frequent, occasional}
Student    Tasks                From the dialogue record                          {1..N}
Student    ATCT                 From the dialogue record                          {1..N} seconds
Student    Errors               From the dialogue record                          {1..N}
Student    Interface            From the dialogue record                          {menu, command}

these, particularly in large systems. There seems to be good reason to consider the adaptive system approach.

The designer should now consider whether data is available from the dialogue record to facilitate the required adaptation, and the mechanisms which can be employed to make the relevant inferences and adaptations. A simple error count seems to be suitable for this purpose, since errors correlate with spatial ability and command experience. Errors can be easily detected from the dialogue and can be used to infer appropriate characteristics of users. The system is adaptive insofar as it will change the interface style in response to one or two characteristics of the users. The design of the adaptive system is shown in Tables 3 and 4, and the interaction model for a sample adaptive system is given below (from [Benyon, 1993a; 1993b]).

Dialogue record

This consists of the data items (tasks, errors).

Interaction knowledge base:

Inference rule 1:
If interface = command and errors > 1 and tasks = 12
Then spatial ability = low and command experience = low

Adaptation rule 1:
If spatial ability = high
Then interface = command

Adaptation rule 2:
If spatial ability = low and command experience = none and computing = frequent
Then interface = command

Adaptation rule 3:
If spatial ability = low and command experience = none and computing = occasional
Then interface = menu

The domain model in this case consists of four attributes: one at the task level, two at the logical level and one at the physical level of description. The user model contains data on the user's personal profile, cognitive characteristics and the student model. The student model inherits all the attributes from the domain model and is updated from the dialogue record. An important aspect of defining the user model is to establish how the particular data will be obtained. The interaction model consists of the dialogue record, which records details of the number of errors made and the number of tasks completed, and the interaction knowledge base, which describes the inferences, adaptations and evaluations which the system can make. In this case no evaluations are undertaken, but it is a trivial development to incorporate them [Benyon, 1993a].
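Under the assumption of a simple attribute-value user model (the dict and function names here are illustrative, not the toolkit's actual API), the interaction knowledge base above can be sketched as:

```python
# A minimal sketch of the inference and adaptation rules. Attribute names
# and values follow the rules in the paper; the representation is invented.

def infer(user, interface, errors, tasks):
    """Inference rule 1: more than one error on the command interface over
    a full block of 12 tasks suggests low spatial ability and low command
    language experience."""
    if interface == "command" and errors > 1 and tasks == 12:
        user["spatial_ability"] = "low"
        user["command_experience"] = "low"

def adapt(user):
    """Adaptation rules 1-3: choose an interface, or None if no rule fires."""
    if user.get("spatial_ability") == "high":
        return "command"                              # rule 1
    if (user.get("spatial_ability") == "low"
            and user.get("command_experience") == "none"):
        if user.get("computing") == "frequent":
            return "command"                          # rule 2
        if user.get("computing") == "occasional":
            return "menu"                             # rule 3
    return None

user = {"spatial_ability": "low", "command_experience": "none",
        "computing": "occasional"}
print(adapt(user))  # menu (rule 3 fires)
```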

The adaptive system described here has been developed as a demonstrator but has not been used in any




further experiments. The system as it stands is inevitably very simple, but it does illustrate much of the theory presented in this paper. It shows not just the feasibility of the adaptive mechanisms, but the feasibility of finding and recording cognitive characteristics which affect the interaction. The contention is that the users make mistakes because they have a poor spatial ability and a low level of command language experience, and the command interface requires them to have a high level of spatial ability and/or a high level of command language experience. We are able to infer the domain-independent characteristic of spatial ability because of the characteristics of the command interface. Since spatial ability is domain independent, we can then use this knowledge when the user interacts with other systems. This data can be unobtrusively and quickly obtained. A word of caution is required here: it may be that the effect which we observed was not spatial ability per se, but something related to spatial ability. Indeed it may be 'ability to move around a computer system' and may not have an analogue elsewhere.

SUMMARY

In this paper, we have surveyed the field of adaptive systems and argued that all these systems share a common architecture consisting of three models: the user model, the domain model and the interaction model. Each of these can be further understood as detailed in the third section. In this endeavour we have sought to present a reference model, thus enabling existing systems to be compared in terms of the ways in which they implement the architecture.

We offer a simple definition of adaptive systems. A system is adaptive if

• it can receive and process signals from another system, i.e. it can interact [Benyon, 1993c],

• it can automatically change its state and/or its behaviour appropriately.

The sophistication of the adaptive mechanism depends on the quality of the models which it possesses and the use to which it can put them. An adaptive system

• possesses a model of the application which is to be adaptable (the domain model), so that it knows what characteristics it can alter; this model is maintained at the three levels of task, physical and logical description,

• possesses a model of the system with which it can interact (the user model); this model is developed and enhanced by monitoring the interaction,

• possesses a model of the interaction which includes inference, evaluation and adaptation mechanisms.
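As a rough illustration of this three-model architecture, the following skeleton uses invented class and method names; the paper specifies only the roles of the models, not an implementation:

```python
# A hypothetical skeleton of the three-model architecture. The inference
# and adaptation bodies are toy placeholders, not the paper's mechanisms.

class DomainModel:
    """What the system can alter, at task, logical and physical levels."""
    def __init__(self, attributes):
        self.attributes = attributes          # e.g. {"interface": "command"}

class UserModel:
    """Profile, cognitive and student data about the interacting system."""
    def __init__(self):
        self.data = {}
    def update(self, key, value):
        self.data[key] = value

class InteractionModel:
    """Dialogue record plus inference and adaptation mechanisms."""
    def __init__(self, domain, user):
        self.domain, self.user = domain, user
        self.dialogue_record = []
    def monitor(self, event):
        self.dialogue_record.append(event)
    def infer(self):
        """Derive a user characteristic from the dialogue record."""
        errors = sum(1 for e in self.dialogue_record if e == "error")
        if errors > 1:
            self.user.update("spatial_ability", "low")
    def adapt(self):
        """Alter a domain attribute in the light of the user model."""
        if self.user.data.get("spatial_ability") == "low":
            self.domain.attributes["interface"] = "menu"

domain = DomainModel({"interface": "command"})
user = UserModel()
im = InteractionModel(domain, user)
for event in ["error", "error", "task"]:
    im.monitor(event)
im.infer()
im.adapt()
print(domain.attributes["interface"])  # menu
```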

We have also described a methodology for the develop- ment of adaptive systems which indicates where and when a system developer might consider implementing an adaptive capability. The software support necessary to underpin the methodology has been presented in the fourth section. The experimental work illustrates the methodology in action.

The work described here is important in a number of respects. Firstly, the results of the experimental work add weight to the growing awareness of the importance of individual differences in human-computer interaction [Egan, 1988]. Our results suggest that spatial ability, verbal ability and field dependence are important cognitive characteristics of users which have a significant effect on the quality of the interaction. These results confirm those found by Vicente and Williges for spatial and verbal ability [Vicente and Williges, 1988] and by Fowler and Murray [Fowler and Murray, 1987] in relation to field dependence.

The system designer has to decide what to do in the face of these findings. From our results, we can make the general recommendation that interfaces should not be designed which require users to traverse a number of systems or a multi-level hierarchy and which do not provide adequate reference points, because such systems require users to have a high level of spatial ability, and many users do not have this characteristic. However, since many such systems do exist, and will continue to exist if systems are to provide a high level of functionality, the designer must choose between providing increased levels of support for users, providing customising facilities, or building intelligence into the interface.

Our results also indicate that the user profile is important. We found that the frequency of computer use and knowledge of generic systems (command languages in this case) were factors affecting the quality of the interaction. The Basser data project [Thomas et al., 1991] has also highlighted the impact which previous experience of computer systems, and more basic skills such as typing and familiarity with a mouse, have on learning command sets.

The future for adaptive systems seems promising. There are three arguments in favour of adaptive systems: the a priori argument, the usability argument and the changing user argument. Systems which seek to tailor their responses to the needs of individuals, from intelligent tutoring systems through natural language, intelligent support and explanation systems to autonomous agents, have to be adaptive. They are constrained by the bandwidth of the dialogue record, but they must conform to the architecture presented here. Other systems may be more usable if they have an adaptive capability. Adaptivity also offers a solution to the problem of users changing over time.

We began this paper by questioning how successful the concept of interface agents could be. These miniature knowledge-based systems would take on some of the more mundane tasks for us. We may dream of agents attending meetings for us, organising our diaries, guiding us through large information spaces, explaining the complexities of some device or filtering the excesses of bureaucratic detail, but can they be a reality?

We believe that the architecture and methodology presented here are highly applicable to the development of interface agents. The concept of a user model is generalised to the concept of an agent model, but otherwise the architecture remains the same. The experimental work described the development of an interface selection agent. Natural language interfaces are agents which can translate the human's natural language into the pedantic tongue of the computer. In the second section we discussed critical agents, helpful agents, tutoring agents and explanation agents. Following Browne et al., we have discussed levels of agency in respect of the models possessed by the agent: simple agents, learning agents, self-mediating agents and self-modifying agents. Self-modifying agents are meta-agents, since they can change the agent, interaction and domain models.

The methodology presented is applicable to the development of agent-based systems. The designer has to consider how the usability of systems can be improved by including agents. The explicit nature of the models possessed by the agents can help to overcome problems of users expecting too much from their software. Users can display (and edit, if this is deemed desirable) the knowledge possessed by the agent. This provides the sort of security described in [Kozierok and Maes, 1993], where users have control over a threshold below which the agent will advise them of the decision rather than acting independently.

The application of knowledge-based techniques to interface issues still has a long way to go. However, the provision of a reference model is at least a start on the road to considered and understandable adaptive systems.

ACKNOWLEDGEMENTS

Much of the work reported here was sponsored by the UK National Physical Laboratory under extra-mural research agreements NPL 86-0436 and NPL 82-0486. Special thanks are due to all the people who were involved in the projects at NPL, Leicester Polytechnic, UK, and the Open University, UK, in particular Nigel Bevan, Steve Howard, Peter Innocent, Frances Jennings, Steve Milan, Richard O'Keefe, David Schofield, Jaginder Shergill and Cathy Thomas. Much of this paper was compiled during David Benyon's visiting fellowships, during which opportunities and help were provided by Judy Kay at the University of Sydney, Australia, and David Hill at the University of Calgary, Canada. The Royal Society of Great Britain supported the visit to the University of Calgary. Some parts of this paper have been published in [Benyon and Murray, 1993].

REFERENCES

Alty, J.L. and McKell, P. (1986) 'Application modelling in a user interface management system' in Harrison, M.D. and Monk, A.F. (Eds.) People and Computers: Designing for Usability Cambridge University Press, UK
Alty, J.L. and Mullin, J. (1987) 'The role of the dialogue system in a user interface management system' Proc. INTERACT '87: Second IFIP Conference on Human-Computer Interaction Elsevier, Netherlands
Benyon, D.R. (1984) 'Monitor: a self-adaptive user interface' Proc. INTERACT '84: First IFIP Conference on Human-Computer Interaction Elsevier, Netherlands
Benyon, D.R. (1990) Information and Data Modelling Blackwells, UK
Benyon, D.R. (1992a) 'The role of task analysis in systems design' Interacting with Computers 4(1) 102-121
Benyon, D.R. (1992b) 'Task analysis and systems design: the discipline of data' Interacting with Computers 4(2) 246-259
Benyon, D.R. (1993a) 'Adaptive systems: a solution to usability problems' User Modelling and User Adapted Interaction
Benyon, D.R. (1993b) 'Accommodating individual differences through an adaptive user interface' in Schneider-Hufschmidt, M., Kuhme, T. and Malinowski, U. (Eds.) Adaptive User Interfaces: Results and Prospects Elsevier, Netherlands
Benyon, D.R. (1993c) 'A semiotic model of interacting systems' in Connolly, J. and Edmonds, E.A. CSCW and AI Lawrence Erlbaum
Benyon, D.R., Innocent, P.R. and Murray, D.M. (1987) 'System adaptivity and the modelling of stereotypes' Proc. INTERACT '87: Second IFIP Conference on Human-Computer Interaction Elsevier, Netherlands
Benyon, D.R., Jennings, F. and Murray, D.M. (1990) 'An adaptive system developer's toolkit' Proc. INTERACT '90: Third IFIP Conference on Human-Computer Interaction Elsevier, Netherlands
Benyon, D.R., Milan, S. and Murray, D.M. (1987) 'Modelling users' cognitive abilities in an adaptive system' Proc. 5th Symposium EFISS Plenum Publishing, USA
Benyon, D.R. and Murray, D.M. (1988) 'Experience with adaptive interfaces' Computer Journal 31(5)
Benyon, D.R. and Murray, D.M. (1993) 'Applying user modelling to human-computer interaction design' AI Review 6 43-69
Bosser, T. (1987) Learning in Man-Computer Interaction Springer-Verlag
Brajnik, G., Guida, G. and Tasso, C. (1990) 'User modelling in expert man-machine interfaces: a case study in information retrieval' IEEE Trans. Systems, Man and Cybernetics 20(1)
Bronisz, D., Grossi, T. and Jean-Marie, F. (1989) 'Advice-giving dialogue: an integrated system' Proc. 6th Annual ESPRIT Conference Kluwer
Browne, D.P., Totterdell, P.A. and Norman, M.A. (1990) Adaptive User Interfaces Academic Press, UK
Bundy, A. (Ed.) (1983) Alvey 1983 Intelligent Front End Workshop Department of Trade and Industry, UK (26, 27 Sep)
Burton, R.R. (1982) 'Diagnosing bugs in a simple procedural skill' in Sleeman, D.H. and Brown, J.S. (Eds.) Intelligent Tutoring Systems Academic Press, USA
Carberry, S. (1989) 'Plan recognition and its use in understanding dialog' in Wahlster, W. and Kobsa, A. (Eds.) op. cit.
Card, S., Moran, T.P. and Newell, A. (1983) The Psychology of Human-Computer Interaction Lawrence Erlbaum, USA
Carroll, J.M. and McKendree, J. (1987) 'Interface design issues for advice-giving systems' Communications of ACM 30(1)
Chignell, M.H. and Hancock, P.A. (1988) 'Intelligent interfaces' in Helander, M. (Ed.) Handbook of Human-Computer Interaction Elsevier, Netherlands
Chin, D.N. (1986) 'User modelling in UC, the Unix consultant' Proc. CHI'86: Human Factors in Computing Systems ACM, USA
Chin, D.N. (1989) 'KNOME: modelling what the user knows in UC' in Wahlster, W. and Kobsa, A. (Eds.) op. cit.
Computational Linguistics (1988) 14(3)
Coutaz, J. (1987) 'PAC: an object orientated model for implementing user interfaces' Proc. INTERACT '87 Elsevier, Netherlands
Dede, C. (1986) 'A review and synthesis of recent research in intelligent computer-assisted instruction' International Journal of Man-Machine Studies 24
Dennett, D. (1989) The Intentional Stance MIT Press, USA
Diaper, D. (1989) Task Analysis for Human-Computer Interaction Ellis Horwood, UK
Edmonds, E.A. (1981) 'Adaptive man-computer dialogues' in Coombs, M.J. and Alty, J.L. Computing Skills and the User Interface Academic Press, UK
Egan, D.E. (1988) 'Individual differences in human-computer interaction' in Helander, M. (Ed.) Handbook of Human-Computer Interaction Elsevier, Netherlands
Elkerton, J. (1987) 'A framework for designing intelligent human-computer interfaces' in Salvendy, G. (Ed.) Cognitive Engineering in the Design of Human-Computer Interaction and Expert Systems Elsevier, Netherlands
Finin, T.W. (1989) 'GUMS: a general user modelling shell' in Wahlster, W. and Kobsa, A. (Eds.) op. cit.
Fischer, G. (1987) 'Making computers more useful' in Salvendy, G. (Ed.) Cognitive Engineering in the Design of Human-Computer Interaction and Expert Systems Elsevier, Netherlands
Fischer, G. (1989) 'HCI software: lessons learned, challenges ahead' IEEE Software
Fischer, G., Lemke, A.C. and Schwab, T. (1986) 'Knowledge-based help systems' Proc. CHI'86: Human Factors in Computing Systems ACM, USA
Fischer, G., Morch, A. and McCall, R. (1989) 'Design environments for constructive and argumentative design' Proc. CHI'89: Human Factors in Computing Systems ACM, USA
Fowler, C. and Murray, D.M. (1987) 'Gender and cognitive style differences at the human-computer interface' Proc. INTERACT '87: Second IFIP Conference on Human-Computer Interaction Elsevier, Netherlands
Furnas, G. (1985) 'New Jersey experience with an adaptive indexing scheme' Proc. CHI'85: Human Factors in Computing Systems ACM, USA
Gray, W., Hefley, W. and Murray, D.M. (Eds.) (1993) Proc. 1st International Workshop on Intelligent User Interfaces ACM, USA
Greenberg, S. and Witten, I.H. (1985) 'Adaptive personalised interfaces: a question of viability' Behaviour and Information Technology 4(1)
Hancock, P.A. and Chignell, M.H. (Eds.) (1989) Intelligent Interfaces: Theory, Research and Design North-Holland, USA
Hansen, S.S., Holgaard, L. and Smith, M. (1988) 'EUROHELP: intelligent help systems for information processing systems' Proc. 5th Annual ESPRIT Conference Kluwer
Hartson, H.R. and Hix, D. (1989) 'Toward empirically derived methodologies and tools for HCI development' International Journal of Man-Machine Studies 31 477-494
Innocent, P.R. (1982) 'A self-adaptive user interface' International Journal of Man-Machine Studies 16(3) 287-300
Jennings, F., Benyon, D.R. and Murray, D.M. (1991) 'Adapting systems to individual differences in cognitive style' Acta Psychologica (Dec)
Jerrams-Smith, J. (1985) 'SUSI: a smart user-system interface' in Johnson, P. and Cook, S. (Eds.) People and Computers: Designing the Interface Cambridge University Press, UK
Kass, R. and Finin, T. (1988) 'The need for user models in generating expert system explanations' International Journal of Expert Systems 1(4)
Kass, R. and Finin, T. (1989) 'The role of user models in co-operative interactive systems' International Journal of Intelligent Systems 4(1)
Kay, A. (1990) 'User interface: a personal view' in Laurel, B. (Ed.) The Art of Human-Computer Interface Design Addison-Wesley, UK
Kay, J. (1993) 'Pragmatic user modelling for adaptive interfaces' in Schneider-Hufschmidt, M., Kuhme, T. and Malinowski, U. (Eds.) Adaptive User Interfaces: Results and Prospects North-Holland, Netherlands
Kobsa, A. (1990) 'Modelling the user's conceptual knowledge in BGP-MS, a user modelling shell system' Computational Intelligence 6(4)
Kobsa, A. (1993) 'User modelling: recent work, prospects and hazards' in Schneider-Hufschmidt, M., Kuhme, T. and Malinowski, U. (Eds.) Adaptive User Interfaces: Results and Prospects North-Holland, Netherlands
Kobsa, A. and Wahlster, W. (1989) User Models in Dialog Systems Springer-Verlag, Germany
Kok, A.J. (1991) 'A review and synthesis of user modelling in intelligent systems' Knowledge Engineering Review 6(1) 21-47
Kozierok, R. and Maes, P. (1993) 'A learning interface agent' Proc. 1st International Workshop on Intelligent User Interfaces ACM, USA
Laurel, B. (Ed.) (1990) The Art of Human-Computer Interaction Design Addison-Wesley, USA
Lehner, P.E. (1987) 'Cognitive factors in user/expert-system interaction' Human Factors 29(1)
Mason, M.V. (1986) 'Adaptive command prompting in an on-line documentation system' International Journal of Man-Machine Studies 25
Mason, J. and Edwards, J.L. (1988) 'Surveying projects on intelligent dialogue' International Journal of Man-Machine Studies 28
Moore, J. and Swartout, W.R. (1988) 'Planning and reacting' Proc. AAAI Workshop on Text Planning and Generation St Paul, MN, USA (25 Aug)
Morik, K. (1989) 'User models and conversational settings: modelling the user's wants' in Wahlster, W. and Kobsa, A. (Eds.) op. cit.
Murray, D.M. (1987a) 'A survey of user modelling: definitions and techniques' NPL DITC Report 92/87
Murray, D.M. (1987b) 'Embedded user models' Proc. INTERACT '87: Second IFIP Conference on Human-Computer Interaction Elsevier, Netherlands
Murray, D.M. (1989) 'Modelling for adaptivity' Proc. 8th Interdisciplinary Workshop: Informatics and Psychology North-Holland, Netherlands
Negroponte, N. (1989) 'Beyond the desktop metaphor' International Journal of HCI 1(1)
Newell, A. (1982) 'The knowledge level' Artificial Intelligence 18(1) 87-127
Nielsen, J. (1986) 'A virtual protocol model for computer-human interaction' International Journal of Man-Machine Studies 24
Norcio, A.F. and Stanley, J. (1989) 'Adaptive human-computer interfaces: a literature study and perspective' IEEE Trans. Systems, Man and Cybernetics 19(2)
Norman, D. (1986) in Norman, D. and Draper, S. (Eds.) User Centred System Design Addison-Wesley, USA
Paris, C.L. (1989) 'The use of explicit user models in a generation system' in Wahlster, W. and Kobsa, A. (Eds.) op. cit.
Payne, S.J. and Green, T.R.G. (1989) 'Task action grammar: the model and its developments' in Diaper, D. (Ed.) Task Analysis for Human-Computer Interaction Ellis Horwood, UK
Pike, R. (1987) 'The text editor sam' Software Practice and Experience 17 813-845
Pylyshyn, Z.W. (1984) Computation and Cognition MIT Press, USA
Rasmussen, J. (1986) Information Processing and Human-Machine Interaction North-Holland, Netherlands
Rasmussen, J. (1987) 'Mental models and their implications for design' Austrian Computer Society 6th Workshop on Informatics and Psychology (Jun)
Rich, E. (1979) 'User modelling via stereotypes' Cognitive Science 3
Rich, E. (1983) 'Users are individuals: individualising user models' International Journal of Man-Machine Studies 18
Rich, E. (1989) 'Stereotypes and user modelling' in Wahlster, W. and Kobsa, A. (Eds.) op. cit.
Schneider-Hufschmidt, M., Kuhme, T. and Malinowski, U. (Eds.) (1993) Adaptive User Interfaces: Results and Prospects North-Holland, Netherlands
Seel, N. (1990) 'From here to agent theory' AISB Quarterly 72 (Spring)
Self, J. (1986) 'Applications of machine learning to student modelling' Instructional Science 14
Self, J. (1987) 'User modelling in open learning systems' in Whiting, J. and Bell, D. (Eds.) Tutoring and Monitoring Facilities for European Open Learning Elsevier, Netherlands
Shackel, B. (1990) 'A framework for usability' in Preece, J. and Keller, L. (Eds.) Readings in Human-Computer Interaction Prentice-Hall
Sleeman, D. (1985) 'UMFE: a user modelling front-end system' International Journal of Man-Machine Studies 23
Steels, L. (1987) 'The deepening of expert systems' AICOM (1) 9-16
Stehouwer, M. and van Bruggen, J. (1989) 'Performance interpretation in an intelligent help system' Proc. 6th Annual ESPRIT Conference Kluwer
Sullivan, J.W. and Tyler, S.W. (Eds.) (1991) Intelligent User Interfaces ACM Press, USA
Thimbleby, H. (1990a) 'You're right about the cure: don't do that' Interacting with Computers 2(1)
Thimbleby, H. (1990b) User Interface Design Addison-Wesley, UK
Thomas, R.C., Benyon, D.R., Kay, J. and Crawford, K. (1991) 'Monitoring editor usage: the Basser data project' Proc. NCIT '91 Penang, Malaysia
van der Veer, G.C. (1990) Human-Computer Interaction: Learning, Individual Differences and Design Recommendations Offsetdrukkerij Haveka, Netherlands
Vicente, K., Hayes, B.C. and Williges, R.C. (1987) 'Assaying and isolating individual differences in searching a hierarchical file system' Human Factors 29(3) 349-359
Vicente, K.J. and Williges, R.C. (1988) 'Accommodating individual differences in searching a hierarchical file system' International Journal of Man-Machine Studies 29
Wilson, M.D., Barnard, P.J., Green, T.R.G. and Maclean, A. (1988) 'Task analyses in human-computer interaction' in van der Veer, G.C., Green, T.R.G., Hoc, J.M. and Murray, D.M. (Eds.) Working with Computers: Theory versus Outcome Academic Press, UK
Zissos, A. and Witten, I. (1985) 'User modelling for a computer coach: a case study' International Journal of Man-Machine Studies 23
