Preferences in Interactive Systems: Technical Challenges and Case Studies

Bart Peintner, Paolo Viappiani, and Neil Yorke-Smith

Interactive artificial intelligence systems employ preferences in both their reasoning and their interaction with the user. This survey considers preference handling in applications such as recommender systems, personal assistant agents, and personalized user interfaces. We survey the major questions and approaches, present illustrative examples, and give an outlook on potential benefits and challenges.

The emergence of web-based applications, such as electronic commerce and internet media, has coincided with growing recognition that many of the tasks people perform with computers can be performed better when the application adapts to its user. A popular example is Netflix, which gives customized movie recommendations to each user, using past movie ratings of the user and his or her friends. Netflix and other preference-aware interactive systems share the common aim of aiding the user in carrying out tasks—from finding a product to editing a document—by eliciting preferences from the user, inferring a preference model, and using the model to decide when and how to take action.

The variety of preference-aware interactive applications includes recommender systems (such as Netflix or Amazon.com) that suggest items based on the user’s similarity to other users or on previously viewed items, conversational systems that interact with the user in a simplified dialogue to perform a task, interfaces that adapt to the user’s preferences and situation, and personal agents that can proactively support the user, modeling his or her needs and desires.

In this article we review characteristic examples of intelligent, preference-aware interactive systems; survey the major questions, challenges, and approaches with respect to preference representation, reasoning, and explanation, and with respect to interactivity and the task performed; and give an outlook on the potential of these systems. Throughout, we illustrate that designing a successful, adoptable system requires more than simply selecting the best form of representation, explanation, and reasoning. It requires a cohesive design that ensures the perceived benefits from using preferences significantly exceed the perceived costs of eliciting them.

The interactivity of applications varies in nature, extent, modalities, and manifestation, as much as applications vary in their domains. Despite this diversity, the interactivity literature provides guidelines for designing an intelligent, interactive system (Horvitz 1999).



Preferences for interactive applications concern not only the base task—the characteristics of what the user wants achieved with the application—but also the interactive process—the characteristics of how the user wants it achieved. Lieberman and Selker (2000) discuss the notion of the context of an interactive application. Preferences constitute one aspect of context. In situations and for applications where preferences are pertinent—most notably, the preferences of the system’s user or users—failure to design and build the application around these preferences risks leaving the application out of context.

Table 1 lists five aspects that collectively define a preference-aware interactive system. Associated with them is a set of technical challenges for the developer. We highlight four of these challenges that recur in the case studies to be discussed.

First, the degree and nature of interaction must be tailored to the specific need and the context. Would you use a navigation system that suggests the best route to your destination if you must first answer a battery of questions to seed a preference model? Personalization is of limited value if the application is difficult or tedious to use.

Second, the acquisition of preferences is a central challenge in an interactive system (Pu and Chen 2008). Preferences of sufficient breadth and fidelity must be acquired for the reasoning—or the reasoning must be robust to limited preference information—without degrading the user experience. However, the user may not be able to articulate, or may not be bothered to articulate, the base (task-level) or meta- (process-level) preferences. Moreover, whether the preference model is explicitly or implicitly represented, the system should provide a way for the user to guide, understand, and correct its behavior.

Third, interface design should account for preferences. The design must respect requirements stemming from the use of preferences (for example, it must provide a mechanism to express preferences). Further, the design should exploit the learned models, delivering to the user the benefits of customization. What are the most appropriate interaction modalities and interface elements for the user interface (UI)? For instance, Netflix asks users to rate movies and songs with an intuitive five-star scale. A further consideration for applications on mobile devices is the physical and computational limitations of the device.

Finally, and not least, since all these elements are correlated, user studies are necessary in the iterative design and evaluation of an interactive application. In the case of preference elicitation, for instance, the elicitation strategy that is best theoretically may fail to elicit the model if the UI renders it unintelligible to the user.

These challenges cannot be solved in isolation. They are inseparably linked within the overarching cost/benefit equation in the design of interactive systems: what design delivers the maximum benefit to the user while keeping user effort to an acceptable level?

Application Areas

In this article, we explore systems and concepts from three broadly defined functional areas: recommender systems, personal agents, and personalized user interfaces. The systems we highlight collectively show the diversity of preference-aware applications, both in the dimensions discussed in table 1 and in the spectrum of research and commercial systems.

Recommender Systems

Recommender systems represent a broad family of tools giving automatic personalized recommendations to a user, based on his or her preferences (Adomavicius and Tuzhilin 2005, Burke 2002).


Table 1. Assessing Interactive, Preference-Aware Applications.

1. Degree of interactivity across the spectrum of user-system collaboration (Horvitz 1999), from fully manual through varying degrees of mixed initiative, to fully automated.

2. Preference representation: the type and fidelity of the model, for instance, a utility function.

3. Preference elicitation: the population of some or all of the model. Pertinent here are the methods used and the nature of the interactivity involved (Pu and Chen 2008).

4. Nature of the reasoning with preferences, which depends on the application’s functionality, degree of interactivity, and preference representation.

5. Resulting output or behavior of the system, ranging from information to decisions to actions.


These systems map explicit or implicit user preferences to options that are likely to fit these preferences. They range from applications that require relatively little user input (such as Amazon.com) to more mixed-initiative systems such as conversational recommenders. The resulting recommendations are based on a user profile derived from different information sources. Preference elicitation can vary between question answering, rating options, critiquing, and conversational interaction.

Some systems generate recommendations with as little effort as possible from the user. An example of such a technique is Amazon.com’s “People who bought this item also bought....” These systems provide recommendations based on demographics, content (for example, viewed items), or similar users’ data. The system uses this information to provide recommendations to the passive user; thus preferences are implicitly and indirectly elicited.

In contrast, conversational recommender systems involve the user elaborating his or her requirements and desiderata over the course of an extended dialogue. They are often used in situations where the user is actively searching for a particular item but has only partial knowledge of the domain, a task sometimes called preference-based search.

Recommender systems are a wide area of research, and a detailed survey is not our aim in this article; we refer the reader to existing surveys in the literature (Adomavicius and Tuzhilin 2005, Burke 2002). In the following paragraphs, we describe the popular class of collaborative filtering systems and two conversational recommenders, SmartClient (a tool based on example-critiquing interaction) and Teaching Salesman (a tool that explicitly represents user needs along with preferences), to highlight the different facets of preference elicitation, recommendation quality, and interface design and usability.

Collaborative Filtering Systems. The idea that similar people like similar things is at the core of collaborative filtering techniques (Konstan et al. 1997). Here, an implicit model of a user’s preferences is inferred from items that he or she has rated. Reasoning is based on statistical predictions to estimate the missing ratings. Collaborative filtering has obtained attention because it can uncover serendipitous associations. It is applied by websites such as Amazon.com (for example, books, videos), Last.fm and Mystrands (music), and by services such as Netflix. The interaction is asynchronous—the user can provide new feedback and obtain new recommendations at any time—and the recommendation engine works inside the framework of an online user community.
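To make the prediction step concrete, consider the following minimal sketch in Python (the rating matrix and all names are invented for illustration; this is a simplified variant of user-based collaborative filtering, not the algorithm of any particular deployed system). A missing rating is estimated as a similarity-weighted average of other users' deviations from their own mean rating:

import math

# ratings: user -> {item: rating}; a toy in-memory rating matrix
ratings = {
    "alice": {"matrix": 5, "inception": 4, "amelie": 1},
    "bob":   {"matrix": 4, "inception": 5, "amelie": 2},
    "carol": {"matrix": 1, "inception": 2, "amelie": 5},
    "dave":  {"matrix": 5, "inception": 4},  # "amelie" is missing
}

def pearson(u, v):
    """Pearson correlation between two users over co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu_u = sum(ratings[u][i] for i in common) / len(common)
    mu_v = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu_u) * (ratings[v][i] - mu_v) for i in common)
    den_u = math.sqrt(sum((ratings[u][i] - mu_u) ** 2 for i in common))
    den_v = math.sqrt(sum((ratings[v][i] - mu_v) ** 2 for i in common))
    return num / (den_u * den_v) if den_u and den_v else 0.0

def predict(user, item):
    """Estimate a missing rating as a similarity-weighted average of
    other users' deviations from their own mean rating."""
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        sim = pearson(user, other)
        mu_o = sum(ratings[other].values()) / len(ratings[other])
        num += sim * (ratings[other][item] - mu_o)
        den += abs(sim)
    return mu + num / den if den else mu

print(round(predict("dave", "amelie"), 2))

Production systems elaborate on this skeleton in many ways, for example with item-based variants and with offline model computation, but the core idea of propagating ratings through user similarity is the same.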

Social context has become an important element of the equation. Users of Mystrands, for example, in addition to receiving personalized recommendations, can discover what songs other members of the community are listening to. This idea has been exploited further in systems such as TaRS (Massa and Avesani 2007), where recommendations are based on explicitly trusted neighbors. The quality of recommendations depends on the number of users participating in the community and the amount of feedback that each of them is willing to give. Preference fidelity is necessary for recommendation quality, motivating research on how to stimulate participation. Rashid et al. (2006), for example, found that users of social recommenders are surprisingly altruistic: they are more willing to provide feedback if the uniqueness of their contribution to the community is made salient.

SmartClient. A tool for planning travel arrangements, SmartClient (Pu and Faltings 2000) is based on critiquing examples of possible solutions: for instance, “the arrival time of this flight leg is too late.” SmartClient has been commercialized for business travelers as Reality, shown in figure 1. The interaction is cyclical: (1) the system offers example solutions, (2) the user examines any of them and may state a critique on any aspect of it, (3) the critique becomes an additional preference in the model, and (4) the system refines the solution set.
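Conceptually, steps (3) and (4) amount to accumulating critiques as constraints over the option catalogue and re-filtering it. The sketch below illustrates the loop on invented flight data (the unit-critique format and all names are our own; SmartClient's actual implementation is constraint-based and distributed, as discussed later):

# Each option is a dict of attributes; critiques accumulate as predicates.
flights = [
    {"id": "LX100", "price": 450, "arrival": 17},
    {"id": "LX200", "price": 300, "arrival": 23},
    {"id": "LX300", "price": 520, "arrival": 15},
]

critiques = []  # the incrementally built preference model

def add_critique(attr, op, value):
    """Record a critique such as ('arrival', '<=', 20): 'arrives too late'."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    critiques.append(lambda opt, f=ops[op], a=attr, v=value: f(opt[a], v))

def candidates():
    """Step (4) of the cycle: refine the solution set against all critiques."""
    return [o for o in flights if all(c(o) for c in critiques)]

print([o["id"] for o in candidates()])   # all three examples shown
add_critique("arrival", "<=", 20)        # "the arrival time is too late"
print([o["id"] for o in candidates()])   # LX100 and LX300 remain
add_critique("price", "<=", 500)         # "this flight is too expensive"
print([o["id"] for o in candidates()])   # LX100 remains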


Table 2. Choosing an Appropriate Recommender for a Domain.

The application domain plays an important role in determining which kind of recommender is more appropriate.

In domains such as travel recommendations, an option is primarily specified by its attributes. In this case, the user’s preference order over the possible options is almost fixed once the user preferences and tradeoffs are known. These domains are addressed by systems with explicit representations, where the system and the user interact in a conversation (by means of critiquing, for instance).

In domains such as film or book recommendations, attributes refer only to external characteristics. These domains are suited to systems that exploit auxiliary information such as from demographic and statistical sources.


Figure 1. SmartClient, Commercialized as Reality, Is an Example-Critiquing Tool for Arranging Business Travel.


Ricci and Nguyen (2007) propose a similar critiquing interaction to provide recommendations of restaurants in a mobile context.

As discussed in Pu and Chen (2008), the motivation for this methodology is that people usually cannot state preferences up front but construct their preferences as they see the available options. However, because the critiques come from the user in response to the shown examples, the current solutions can hinder the user from refocusing the search in another direction (the anchoring effect). A complete preference model can be acquired only if the tool is able to stimulate the user by showing diverse examples.

A refinement is to employ model-based suggestions, such as in FlatFinder (Viappiani, Faltings, and Pu 2006), to teach the user about available options, based on an analysis of the user’s current preference model and his or her potential hidden preferences. For instance, a student looking for low-price accommodation could realize that, by paying just a bit more, he or she can find an apartment with nearby subway access. Suggestions can profit from prior knowledge (for example, most people prefer unfurnished apartments) and can be adapted online by observing user reactions to the displayed examples.

Even so, the user still takes an active role in the construction of the preference model; this requires some effort. Instead of suggesting complete outcomes, incremental dynamic critiquing (Reilly et al. 2005) makes use of data mining algorithms to propose predefined multifeature critiques such as “lighter but more expensive” to guide the exploration of the search space, thus reducing the burden on the user.

An important challenge is scalability. For instance, since travel planning involves configuration, the number of possible solutions is exponential, yet custom solutions have to be computed specifically for each user according to his or her preference model. SmartClient addresses this challenge by distributing the search tasks to lightweight problem-solving clients based on constraint satisfaction.

Teaching Salesman. Teaching Salesman (Stolze and Ströbel 2004) helps the user choose an item based on his or her needs: for example, a camera to take holiday pictures, as shown in figure 2. The options are characterized at the use layer (for example, outdoor use compared to night use) by features (for example, optics, power, and weight) that are abstractions of basic attributes (for example, consumption and battery size). The system evaluates the alternatives by mapping the marginal score of each feature to each specific purpose.
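A minimal sketch of this layered evaluation follows (the features, purposes, and weights are invented for illustration; the deployed system's hand-crafted domain model is richer):

# Marginal score of each feature (0..1) for each camera, derived from attributes.
cameras = {
    "compact-x": {"optics": 0.4, "power": 0.9, "weight": 0.9},
    "pro-y":     {"optics": 0.9, "power": 0.5, "weight": 0.3},
}

# How much each feature matters for each intended use/purpose.
purpose_weights = {
    "outdoor": {"optics": 0.5, "power": 0.2, "weight": 0.3},
    "night":   {"optics": 0.8, "power": 0.1, "weight": 0.1},
}

def score(camera, purposes):
    """Evaluate an alternative by mapping feature scores to the user's purposes."""
    feats = cameras[camera]
    return sum(
        w * feats[f]
        for p in purposes
        for f, w in purpose_weights[p].items()
    ) / len(purposes)

# Preference discovery asked about intended uses; now rank the alternatives.
for cam in cameras:
    print(cam, round(score(cam, ["outdoor", "night"]), 2))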

The interaction is divided into three phases. First, preference discovery is based on questions about needs and intended uses of the product. Next, preference optimization allows the user to tweak the model: users can directly assess preferences for specific features and feature values (for example, optical zoom, flash), and their importance. Finally, in the preference debugging phase, in addition to the top-score recommendation, two other choices are shown, labeled “Upgrade” (a more expensive option with more features) and “Alternative” (an option with different features).

A limitation of this approach is that features satisfying specific needs have to be defined beforehand and hard-coded in the domain model. Unlike SmartClient’s example critiquing, Teaching Salesman directly exposes features and goals to the user; this is meaningful in domains where complex preference statements are digestible to the user.

Personal Agents

Personal agents extend intelligent agent-based systems with personalized assistance. A significant aspect of the personalization comes from the agent acquiring knowledge of the user and using it to direct its activity. This knowledge can include models of the user’s preferences over what to do and how to do it, preferred ways of working, and degrees of autonomy, as well as models of the user’s knowledge and skills, intentions, and so forth.

Interface agents (Maes 1994) actively assist a user in operating an interactive interface (for example, an e-mail client); collaborative agents partner with the user, acting and dialoguing together to complete a shared task; and autonomous agents can act without user intervention, concurrently with the user. Personal agents may have any or all of these purposes, in addition to being adapted to an individual.

Personal agents have been built for a diverse range of domains, from time management to trip planning, with a diverse spectrum of interactivity and proactivity. Interaction modalities range from speech on mobile phones to animated avatars (Jacko and Sears 2003). Effective interaction is enhanced by honoring user preferences over interaction modalities: for instance, whether or not a tutoring agent is manifest as an animated guide, and the extent to which it uses natural language. The employed models of preferences, the methods of elicitation, and the use of preferences in reasoning likewise span many approaches.

The ability to act autonomously or proactively on behalf of the user is one distinguishing aspect of agent-based applications. Concerns arise about trust, expectations, safety, privacy, and control (Norman 1994), increasing the importance of preferences over how a task is accomplished and introducing the need for adjustable autonomy preferences that govern the agent’s operation in a mixed-initiative setting. Failure to properly account for these aspects risks causing frustration for the user, as with the infamous “Clippy” of Microsoft Office (Schiaffino and Amandi 2004).

With the breadth of personal agents deserving a survey in its own right, we describe two characteristic systems and a growing class of personal agents.

Figure 2. Teaching Salesman Helps the User Compare Different Products Based on How They Match His or Her Needs.

CAP: Calendar Apprentice. Alongside e-mail assistance, time management is a domain receiving sustained research interest. The concept of a personal agent that assists in managing your calendar has been pursued since the early days of electronic calendars. Studies have shown that time management is an intensely personal matter; hence understanding and fulfilling the user’s individual preferences is essential if people are to value this type of personal agent (Palen 1999). A representative and influential example is CAP (calendar apprentice) (Mitchell et al. 1994). A CAP agent is an interactive assistant that observes a user operating his or her electronic calendar and provides advice. This advice consists of suggested autocompletions of parameters when the user is creating a new meeting entry.

Other systems have offered assistance for tasks such as negotiating meetings. For instance, the agents of Sen, Hayes, and Arora (1997) reach consensus through a distributed negotiation process, ranking meeting options with a voting scheme. The agent votes automatically as the user’s proxy, based on a preference model. The model is composed of values of each feature (for example, topic, time of day, participants, location), summed with weights on feature importance.
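Such an additive model is simple to state in code. The following sketch (with invented features, values, and weights; not the exact formulation of Sen, Hayes, and Arora) ranks meeting proposals by a weighted sum of per-feature values:

# Per-feature value functions score each aspect of a proposed meeting (0..1).
feature_values = {
    "time_of_day": {"morning": 0.9, "afternoon": 0.5, "evening": 0.1},
    "location":    {"my_office": 1.0, "their_office": 0.4, "video": 0.7},
}

# Weights express feature importance for this user.
weights = {"time_of_day": 0.6, "location": 0.4}

def utility(option):
    """Weighted sum of per-feature values: the agent votes as the user's proxy."""
    return sum(weights[f] * feature_values[f][v] for f, v in option.items())

options = [
    {"time_of_day": "morning", "location": "video"},
    {"time_of_day": "afternoon", "location": "my_office"},
]
# Rank meeting proposals by the preference model, highest utility first.
for opt in sorted(options, key=utility, reverse=True):
    print(opt, round(utility(opt), 2))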

A challenge for assistive agents is the tension between helpfulness “out of the box” and customized assistance after preferences have been acquired. Similar to the cold-start problem in recommender systems, users are rarely willing to spend time training the system beyond stating a few crude preferences. Is the mandated preference specification of Sen, Hayes, and Arora (1997) too cumbersome (for example, each user is asked to rate all other known users), despite default values? Is a window full of UI sliders to set weights between features too intimidating?

CAP unobtrusively learns a preference model from user actions. Its model is based on a decision tree of rules; rules can be viewed and edited by the user and form the basis of explanation of the agent’s actions. Other systems commonly augment automated acquisition of their preference model with circumscribed direct preference elicitation, such as of preferred meeting times (ceteris paribus), to seed the model and make the system more useful from the beginning.
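The appeal of rule-based models is that each suggestion can be traced to a human-readable rule. A toy illustration of rule-driven autocompletion follows (the rules and fields are invented, not CAP's actual learned decision trees):

# Each rule: (condition over a partially filled meeting entry, suggested field value).
rules = [
    (lambda m: m.get("topic") == "group lunch", ("location", "cafeteria")),
    (lambda m: m.get("attendees", 0) > 6,       ("duration", 60)),
    (lambda m: m.get("topic") == "1-on-1",      ("duration", 30)),
]

def suggest(meeting):
    """Propose autocompletions for fields the user has not yet filled in.
    Because rules are explicit, each suggestion can also be explained."""
    suggestions = {}
    for condition, (field, value) in rules:
        if field not in meeting and condition(meeting):
            suggestions[field] = value
    return suggestions

print(suggest({"topic": "group lunch", "attendees": 8}))
# -> {'location': 'cafeteria', 'duration': 60}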

An additional challenge for agents in these domains is how to expose the agent’s assistance without degrading the user experience with familiar application UIs, as discussed in Jacko and Sears (2003). As an early example of the genre, CAP takes the simplest but suboptimal approach of replacing the calendar UI with its own.

PExA: Project Execution Assistant. PExA (Myers et al. 2007) aids a busy knowledge worker in managing time commitments and performing tasks. The system was designed to (1) relieve the user of routine tasks, thus allowing the user to focus on tasks that critically require human problem-solving skills, and (2) intervene in situations where cognitive overload leads to oversights or mistakes by the user.

Akin to an adept personal assistant, PExA’s utility to the user critically depends on personalization. The agent must have an understanding of the user’s tasks and preferred working style. Since PExA is part of a larger effort to explore machine learning, known as the Personal Assistant that Learns (PAL), an emphasis is placed on learning techniques to populate the multiple preference models employed by components of the system, to govern the interaction of the system with the user, and to direct and review its behavior.


Figure 3. Task Delegation with the PExA Personal Agent.


A PExA agent comprises a task manager and a time manager described in Berry et al. (2006). PExA can perform tasks on behalf of its user at his or her request, such as purchasing an item of equipment, applying for expense reimbursement, planning travel, and registering for a conference. It uses models of the user’s base-level preferences, such as his or her preferred airline and preferred meeting locations. Besides execution of delegated tasks, PExA’s task management capabilities include explanation of its activities, learning of task models (for instance, those obtained from PLOW (Allen et al. 2007)), and execution monitoring. Based on user activities, the agent can infer the state of shared user-system tasks and proactively offer suggestions, such as performing the next step of a task or delegating a task to another user (as shown in figure 3).

For multidomain, multifunction agents such as PExA, transparency and controllability are demanded for user comfort and system adoption. Would you trust an agent with your credit card number? Preferences and adjustable autonomy advice from the user govern PExA’s reasoning over when to act, when to suggest, when to ask for clarification, and when to do nothing. Preferences likewise govern the modality of the interaction the agent employs (Horvitz 1999): for instance, a pop-up dialogue box versus an unobtrusive notification in a sidebar.

Cognitive Assistants. A cognitive assistant trains, augments, or restores the cognitive abilities of its user. These assistants help students learn new knowledge and skills, help the elderly remember to carry out daily activities, and help people with head injuries regain lost abilities—all activities that require in-depth personalization. These systems cover the entire range of interactivity. Forms of tutoring, at one extreme, consist of systems that give direct instruction. For instance, adaptive hypermedia systems alter the content or appearance of the presented information on the basis of a dynamic understanding of the individual user, so as to adapt the content or presentation to certain characteristics of the user (Eklund and Sinclair 2000). These systems dynamically choose which text to turn into hyperlinks and dynamically change the target of existing links, such as the “Next” button.

Domains of adaptive tutoring agents include mainstream educational pedagogy (Graesser et al. 2001), practicing a musical instrument,1 learning to program (Selker 1994), and touring a city or museum (a form of recommender system) (Cheverst et al. 2000).

At the other extreme of interactivity, cognitive assistants can operate transparently to the user, intervening only when necessary. Systems such as Autominder (Pollack et al. 2003) and Opportunity Knocks (Patterson et al. 2004) are designed to augment memory skills in older adults with mild cognitive impairment. For instance, Opportunity Knocks takes the form of a smart mobile phone that helps plan your route with public transport. Two critical design goals are to require no explicit programming by user or caregiver and to be robust in the real world. The agent learns the user’s routine and does not require the user to explicitly provide information about preferred modes of transit or the starting or ending points of his or her journeys. It provides proactive assistance with route planning, changing mode of transport, and recognizing and recovering from user errors.2 In the middle ground, systems such as COACH (Selker 1994) tutor users with instructional content as they learn and perform. When operating in conjunction with another application, this middle class of system can be seen as a form of interface agent; in contrast to CAP, here the agent is the “expert” and the user is the “apprentice.”

The primary role of preferences in these agents concerns the user’s mode of learning and the nature of the help desired. The agents personalize to, for instance, whether the users are expert or novice at a task, whether their learning style is more inductive or deductive, their emotional state, and whether they like continuous context-sensitive hints or prefer more focused but attention-demanding help after they have explored on their own. Often, these preferences are acquired—or refined after initial explicit elicitation—unobtrusively and automatically as the user works. The goal of the agent is to foster the learning or aid the performance of the user; the user’s attention should be on the matter at hand, and not on the agent.

Personalized User Interfaces

Personalized user interfaces (PUI) move beyond the pervasive “one size fits all” paradigm of UI design. By learning and employing models of the user and his or her environment, personalized UIs customize the interface to the user’s tastes and the context of the interaction. We have already seen the value of interface personalization in personal agents.

PUIs form a subfield within the adaptive user interface community, which collectively adapts UIs to a broad range of context, including characteristics of the user (for example, identity, current task, level of ability, emotional state, preferences), the device (for example, screen size, bandwidth, available input methods), and the environment (for example, noise level, presence of other agents). For each application, a different subset of the full context is relevant to adapting the interface. PUIs focus on systems whose relevant context includes a user’s preferences and current task. What differentiates PUIs from their siblings within the adaptive UI community is the need to focus on preference elicitation and reasoning.

Given general and possibly also case-specific preferences, an interface can be personalized in one of several ways. First, the components of the interface can be organized in a way that chooses, highlights, or modifies visual elements relevant to the user’s current task or consistent with his or her past behavior. Second, user actions in the UI can be automated if patterns of use match previous patterns (for example, display a help menu or finish formatting a table). Third, the modality of the interaction or presentation can be changed (for example, show a concept as text or as a graphic) (Billsus et al. 2002; Zhou and Aggarwal 2004; Brafman and Friedman 2006).

ARNAULD/SUPPLE. The ARNAULD (Gajos and Weld 2005) and SUPPLE (Gajos and Weld 2004) systems collectively generate and lay out graphical user interfaces using descriptions of UI elements, a specification of the target device, and a user model, which is obtained through inference over user interaction traces and elicited preferences. ARNAULD interacts with the user to elicit preferences over UI widgets and locations, while SUPPLE searches for interfaces that are optimal with respect to the “cost” of using the interface to perform a set of user interface tasks.

ARNAULD employs two interaction methods for eliciting preferences over interface layout. The first is example-critiquing (shown in figure 4), where the user can add, remove, and reposition elements in the UI based on his or her preferences. The second method is active elicitation, in which users are asked to choose the best widget for a given situation. From these elicitations, the system infers the cost of performing an action on each element and the effort in making the transition to the next element in the interaction sequence.

SUPPLE’s cost function is based on sequences of elements manipulated by the user. When evaluating an interface, the system totals the user’s expected effort in performing the sequence on that interface. The trace is both device- and interface-independent, making it applicable for transferring to new devices and situations.
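A sketch of such trace-based evaluation follows (the widget costs, layouts, and trace are hypothetical; SUPPLE's actual cost function and optimization are considerably more elaborate). The total cost of a candidate layout combines the effort of manipulating each chosen widget with the effort of moving between consecutive elements in the trace:

# Cost of manipulating each widget choice for an abstract element;
# e.g. a slider may be faster to operate than a spinner for the same value.
widget_cost = {
    ("volume", "slider"): 1.0, ("volume", "spinner"): 2.5,
    ("mode", "dropdown"): 2.0, ("mode", "radio"): 1.2,
}

def transition_cost(layout, a, b):
    """Approximate pointing effort by distance between assigned positions."""
    (x1, y1), (x2, y2) = layout[a][1], layout[b][1]
    return abs(x1 - x2) + abs(y1 - y2)

def interface_cost(layout, trace):
    """Total expected effort of performing the trace on this interface:
    widget manipulation costs plus transitions between consecutive elements."""
    total = sum(widget_cost[(e, layout[e][0])] for e in trace)
    total += sum(transition_cost(layout, a, b) for a, b in zip(trace, trace[1:]))
    return total

# layout: element -> (chosen widget, grid position); the trace itself never
# mentions widgets or positions, so it transfers across devices and layouts.
layout_a = {"volume": ("slider", (0, 0)), "mode": ("radio", (0, 1))}
layout_b = {"volume": ("spinner", (0, 0)), "mode": ("dropdown", (1, 1))}
trace = ["volume", "mode", "volume"]

for name, layout in [("A", layout_a), ("B", layout_b)]:
    print(name, interface_cost(layout, trace))

Because the trace is expressed over abstract elements, the same trace can score candidate layouts for a desktop and for a small-screen device, which is the point of the device- and interface-independence noted above.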

Which elicitation method is best, in general and for a given application? The authors found that example-critiquing is most effective when the cost function is almost correct, but that it requires an interface allowing its customization by the user. Active elicitation, conversely, is convenient during the early stages of the elicitation process and both is easy for the user and puts little demand on the UI. However, it suffers from the fact that the user does not direct the learning process, placing the burden on the system to choose elicitation questions that will most quickly train the model (see table 3).

Gaze-X. In addition to the choice of widget and its location, the Gaze-X system (Maat and Pantic 2007) modifies the widgets themselves, changing font and component sizes in response to changes in user focus or task. Gaze-X is more dynamic and more proactive than ARNAULD/SUPPLE, showing contextual help when it notices the user looking at a particular element for a long time. The systems share the goal of altering the display in response to inferred user preference, but differ (1) in the complexity of the user model—Gaze-X’s is shallower and broader—and (2) in the way they use the model—ARNAULD/SUPPLE attempts to optimize a layout before use, while Gaze-X tries to constantly adapt to the current situation.

A key challenge for Gaze-X is designing the output of the system. Changing the interface too quickly, too often, or at the wrong time will doom an interactive system to a spot in the AI graveyard. Gaze-X takes a conservative approach, making changes one at a time and at a slow pace.

Adaptive Rich Media Presentations. Presenting rich media, such as video, Flash presentations, and animated text, requires the interface designer to respect constraints not encountered in PUI layout systems such as Gaze-X and SUPPLE. Consider a promotion for a sporting event that contains multiple video segments, Flash and video ads, and a diverse audience viewing the promotion from a diverse set of devices (Brafman and Friedman 2006). Temporal constraints arise from video (a particular ordering of the clips); device constraints abound, in terms of both dimensions and software; and respecting user preferences is critical, or the user will not “stay tuned” to hear the entire message.


Table 3. Which Type of Elicitation Is Best?

Active elicitation, such as asking the user to compare two outcomes, can converge to the cost function quite rapidly, as indicated by ARNAULD. However, a comparative evaluation of FlatFinder and question answering (a form of active elicitation) showed that active elicitation can produce incorrect preference models when the user does not know the domain. Thus there is no simple rulebook for determining the best form of elicitation—but there are guidelines (Pu and Chen 2008).



Brafman and Friedman’s approach to generating rich media presentations uses TCP-nets (Rossi, Walsh, and Venable 2008) to reason over these constraints and select a qualitatively optimal presentation of the media. The approach allows the author of the presentation to focus purely on the content.
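The flavor of the approach can be conveyed by a much-simplified sketch (this is not an actual TCP-net implementation, which adds conditional importance relations and more structured reasoning; all constraints and preferences here are invented for illustration). The sketch enumerates presentations satisfying hard temporal and device constraints, then applies a conditional, qualitative preference:

from itertools import permutations

clips = ["intro", "highlights", "ad", "finale"]

def feasible(order, device_small):
    """Hard constraints: the intro precedes the highlights, the finale
    comes last, and small-screen devices cannot play the ad mid-stream."""
    if order.index("intro") > order.index("highlights"):
        return False
    if order[-1] != "finale":
        return False
    if device_small and order.index("ad") not in (0, len(order) - 1):
        return False
    return True

def preference_rank(order, viewer_is_fan):
    """Conditional qualitative preference: fans prefer highlights early;
    casual viewers prefer the ad as late as possible (lower rank is better)."""
    if viewer_is_fan:
        return order.index("highlights")
    return -order.index("ad")

def best_presentation(device_small, viewer_is_fan):
    options = [o for o in permutations(clips) if feasible(o, device_small)]
    return min(options, key=lambda o: preference_rank(o, viewer_is_fan))

print(best_presentation(device_small=True, viewer_is_fan=True))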

In contrast to the problems addressed by SUPPLE and Gaze-X, the user model is relatively simple—it is based on demographic information only—and so preference elicitation is straightforward. Reasoning about the preference model and associated constraints, by contrast, is much more complicated than it is in the other systems described, showing that while preference-aware interactive systems share a common set of challenges, the challenges manifest themselves differently in each application.

Summary and Outlook

This article has surveyed preference-aware interactive systems, enumerating common challenges in their design and highlighting case studies in the domains of recommender systems, personal agents, and personalized user interfaces. The borders between these application areas are somewhat fuzzy or illusory: for example, highly interactive and persistent recommender systems assume the characteristics of personal agents, while adaptive hypermedia agents that simply change text to hyperlinks could be considered a PUI. We presented several technical challenges in designing these systems and argued that designers cannot attack these challenges independently: the best preference model for a given application may require more elicitation time than a user cares to devote, for instance, or the available reasoning time may not afford the interaction style desired. Instead, the designer must trade off the design of all components in such a way that the user’s perceived and actual benefits significantly exceed his or her perceived costs. The case studies presented reinforce this basic idea. All the systems addressed these challenges in diverse, yet noncompeting ways. We do not see that critiquing-based systems are superior to direct elicitation; rather, we begin to understand the scenarios appropriate for each.

Commercialized preference-aware systems tend to adopt low-effort elicitation strategies. Does this imply that the public is not familiar with the benefits of preference-aware systems? Or that only the lightest-weight elicitation of preferences is acceptable in the typical situation? Or that research has not produced medium-to-high-effort elicitation strategies whose return justifies their effort? Commercial vendors are pragmatic: their aim is not the best technical solution but the system that maximizes profit. For instance, a recommender system more customized to the user might make the customer happier but lead the user to purchase a lower-end item.

The study of interface design and usability is outside the scope of this article, but, as demonstrated with the PEACH museum guide assistant (Goren-Bar et al. 2005), poor judgment in UI design can significantly affect the use and adoption of a system, regardless of the preference-handling mechanisms. If the user experience is lacking, a preference model of low accuracy may result.

Emerging Trends and Domains

Interactive systems are a dynamic domain of research. Several practical applications in industry have already been successfully deployed, and we can expect that the emergence of new domains, new sources of data, and technical advances will bring even more attention to the field.

A rich emerging domain is personalized medicine. Many predictions have identified the aging population of developed countries as a grand social and economic challenge. The rising cost of health care can be met in part by personalized assistance to the elderly, keeping track of their needs, tasks, preferences, and medical treatments. In this and similar contexts (such as providing assistance to disabled and impaired people), the system should be able to monitor and understand the different cognitive abilities of the user.


Figure 4. Incorporation of Example-Critiquing in the ARNAULD/SUPPLE System.

Example-critiquing puts significant demands on the interface, but the user benefits in the short term (a better interface now) and in the long term (improved models).



We foresee the continued emergence of systems that cross-fertilize between the application areas that we have treated in this article. Further, an important development will be to acquire and reason with broader preference models, adding not only a user’s location, but also the user’s personality, emotional states, and cognitive load.

New sources of data will dramatically change the cost/benefit equation in the design of interactive systems. With the growing popularity of social websites and web 2.0 and beyond applications, system designers can plan to leverage preferences, interests, and other user information readily available without additional user effort. The increased prevalence of mobile and location-aware devices (such as Apple’s iPhone) will allow, for example, a traveler to obtain recommendations for restaurants based on his or her current location at lunchtime, a personalized city tour in the afternoon, recommendations for cinema in the evening, and assistance in navigating around town either with public transportation or by car during the full day.

It is possible that society will warm to appropriate roles for carefully designed personal agents that can act proactively (Norman 1994). Even for strictly reactive, more easily understood and controlled systems, however, a greater adoption of personalized applications is possible only if users trust the systems (Konstan, Riedl, and Smyth 2007). At the same time, users have a right to expect that their personal data is used only for the purposes delegated to the system.

Acknowledgements

We thank Ronen Brafman, Boi Faltings, and the anonymous reviewers for their suggestions. The work of the two authors at SRI International was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHD030010. Any opinions, findings, and conclusions or recommendations are those of the authors and do not necessarily reflect the view of DARPA or the Department of Interior NBC. Paolo Viappiani has been supported by the Swiss National Science Foundation under Contract No. 200020-111862.

Notes

1. SmartMusic. www.smartmusic.com.

2. Although the initially tested system did not feature an explicit user preference model, subsequent work underscored the importance of adapting the guidance strategies and interface modalities of such devices to the preferences of the user (Liu et al. 2006).

References

Adomavicius, G., and Tuzhilin, A. 2005. Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. IEEE Transactions on Knowledge and Data Engineering 17(6): 734–749.

Allen, J. F.; Chambers, N.; Ferguson, G.; Galescu, L.; Jung, H.; Swift, M.; and Taysom, W. 2007. PLOW: A Collaborative Task Learning Agent. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, 1514–1519. Menlo Park, CA: AAAI Press.

Berry, P.; Conley, K.; Gervasio, M.; Peintner, B.; Uribe, T.; and Yorke-Smith, N. 2006. Deploying a Personalized Time Management Agent. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-06), 1564–1571. New York: Association for Computing Machinery.

Billsus, D.; Brunk, C. A.; Evans, C.; Gladish, B.; and Pazzani, M. 2002. Adaptive Interfaces for Ubiquitous Web Access. Communications of the ACM 45(5): 34–38.

Brafman, R. I., and Friedman, D. 2006. Adaptive Rich Media Presentations Via Preference-Based Constrained Optimization. In Preferences: Specification, Inference, Applications, No. 04271 in Dagstuhl Seminar Proceedings. Schloss Dagstuhl, Germany: Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI).

Burke, R. D. 2002. Hybrid Recommender Systems: Survey and Experiments. User Modeling and User-Adapted Interaction 12(4): 331–370.

Cheverst, K.; Davies, N.; Mitchell, K.; Friday, A.; and Efstratiou, C. 2000. Developing a Context-Aware Electronic Tourist Guide: Some Issues and Experiences. In Proceedings of the 2000 SIGCHI Conference on Human Factors in Computing Systems (CHI-00), 17–24. New York: Association for Computing Machinery.

Eklund, J., and Sinclair, K. 2000. An Empirical Appraisal of the Effectiveness of Adaptive Interfaces for Instructional Systems. Educational Technology and Society 3(4).

Gajos, K., and Weld, D. S. 2004. SUPPLE: Automatically Generating User Interfaces. In Proceedings of the 8th International Conference on Intelligent User Interfaces (IUI-04), 93–100. New York: Association for Computing Machinery.

Gajos, K., and Weld, D. S. 2005. Preference Elicitation for Interface Optimization. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (UIST-05), 173–182. New York: Association for Computing Machinery.

Goren-Bar, D.; Graziola, I.; Kuflik, T.; Pianesi, F.; Rocchi, C.; Stock, O.; and Zancanaro, M. 2005. I Like It—An Affective Interface for a Multimodal Museum Guide. Paper presented at the Workshop on Affective Interactions at the 9th International Conference on Intelligent User Interfaces (IUI-05), San Diego, CA, January 10–13.

Graesser, A. C.; VanLehn, K.; Rose, C. P.; Jordan, P. W.; and Harter, D. 2001. Intelligent Tutoring Systems with Conversational Dialogue. AI Magazine 22(4): 39–52.

Horvitz, E. 1999. Principles of Mixed-Initiative User Interfaces. In Proceedings of the 1999 SIGCHI Conference on Human Factors in Computing Systems (CHI-99), 159–166. New York: Association for Computing Machinery.

Jacko, J. A., and Sears, A., eds. 2003. The Human-Computer Interaction Handbook. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Konstan, J. A.; Miller, B. N.; Maltz, D.; Herlocker, J. L.; Gordon, L. R.; and Riedl, J. 1997. GroupLens: Applying Collaborative Filtering to Usenet News. Communications of the ACM 40(3): 77–87.

Konstan, J. A.; Riedl, J.; and Smyth, B., eds. 2007. Proceedings of the 2007 ACM Conference on Recommender Systems. New York: Association for Computing Machinery.

Lieberman, H., and Selker, T. 2000. Out of Context: Computer Systems That Adapt to, and Learn from, Context. IBM Systems Journal 39(3–4): 617–632.

Liu, A. L.; Hile, H.; Kautz, H.; Borriello, G.; Brown, P. A.; Harniss, M.; and Johnson, K. 2006. Indoor Wayfinding: Developing a Functional Interface for Individuals with Cognitive Impairments. In Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS-06), 95–102. New York: Association for Computing Machinery.

Maat, L., and Pantic, M. 2007. Gaze-X: Adaptive, Affective, Multimodal Interface for Single-User Office Scenarios. In Artificial Intelligence for Human Computing, Lecture Notes in Computer Science, Vol. 4451, 251–271. Berlin: Springer.

Maes, P. 1994. Agents That Reduce Work and Information Overload. Communications of the ACM 37(7): 30–40.

Massa, P., and Avesani, P. 2007. Trust-Aware Recommender Systems. In Proceedings of the 2007 ACM Conference on Recommender Systems (RecSys-07), 17–24. New York: Association for Computing Machinery.

Mitchell, T.; Caruana, R.; Freitag, D.; McDermott, J.; and Zabowski, D. 1994. Experience with a Learning Personal Assistant. Communications of the ACM 37(7): 80–91.

Myers, K.; Berry, P.; Blythe, J.; Conley, K.; Gervasio, M.; McGuinness, D.; Morley, D.; Pfeffer, A.; Pollack, M.; and Tambe, M. 2007. An Intelligent Personal Assistant for Task and Time Management. AI Magazine 28(2): 47–61.

Norman, D. A. 1994. How Might People Interact with Agents. Communications of the ACM 37(7): 68–71.

Palen, L. 1999. Social, Individual, and Technological Issues for Groupware Calendar Systems. In Proceedings of the 1999 SIGCHI Conference on Human Factors in Computing Systems (CHI-99), 17–24. New York: Association for Computing Machinery.

Patterson, D. J.; Liao, L.; Gajos, K.; Collier, M.; Livic, N.; Olson, K.; Wang, S.; Fox, D.; and Kautz, H. 2004. Opportunity Knocks. In Proceedings of the 6th International Conference on Ubiquitous Computing (UbiComp-04), 433–450. Berlin: Springer.

Pollack, M. E.; Brown, L. E.; Colbry, D.; McCarthy, C. E.; Orosz, C.; Peintner, B.; Ramakrishnan, S.; and Tsamardinos, I. 2003. Autominder: An Intelligent Cognitive Orthotic System for People with Memory Impairment. Robotics and Autonomous Systems 44(3–4): 273–282.

Pu, P., and Chen, L. 2008. User-Involved Preference Elicitation for Product Search and Recommender Systems. AI Magazine 29(4): 93–103.

Pu, P., and Faltings, B. 2000. Enriching Buyers’ Experiences: The SmartClient Approach. In Proceedings of the 2000 SIGCHI Conference on Human Factors in Computing Systems (CHI-00), 289–296. New York: Association for Computing Machinery.

Rashid, A. M.; Ling, K. S.; Tassone, R. D.; Resnick, P.; Kraut, R. E.; and Riedl, J. 2006. Motivating Participation by Displaying the Value of Contribution. In Proceedings of the 2006 Conference on Human Factors in Computing Systems (CHI-06), 955–958. New York: Association for Computing Machinery.

Reilly, J.; McCarthy, K.; McGinty, L.; and Smyth, B. 2005. Incremental Critiquing. Knowledge-Based Systems 18(4–5): 143–151.

Ricci, F., and Nguyen, Q. N. 2007. Acquiring and Revising Preferences in a Critique-Based Mobile Recommender System. IEEE Intelligent Systems 22(3): 22–29.

Rossi, F.; Walsh, T.; and Venable, K. B. 2008. Preferences in Constraint Satisfaction and Optimization. AI Magazine 29(4): 58–68.

Schiaffino, S., and Amandi, A. 2004. User-Interface Agent Interaction: Personalization Issues. International Journal of Human-Computer Studies 60(1): 129–148.

Selker, T. 1994. Coach: A Teaching Agent That Learns. Communications of the ACM 37(7): 92–99.

Sen, S.; Hayes, T.; and Arora, N. 1997. Satisfying User Preferences While Negotiating Meetings. International Journal of Human-Computer Studies 47(3): 407–427.

Stolze, M., and Ströbel, M. 2004. Recommending as Personalized Teaching. In Designing Personalized User Experiences in eCommerce, 293–313. Norwell, MA: Kluwer Academic Publishers.

Viappiani, P.; Faltings, B.; and Pu, P. 2006. Preference-Based Search Using Example-Critiquing with Suggestions. Journal of Artificial Intelligence Research 27: 465–503.

Zhou, M. X., and Aggarwal, V. 2004. An Optimization-Based Approach to Dynamic Data Content Selection in Intelligent Multimedia Interfaces. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST-04), 227–236. New York: Association for Computing Machinery.

Bart Peintner is a computer scientist at SRI International’s Artificial Intelligence Center. His most recent work has used his expertise in scheduling, planning, constraint optimization, and probabilistic reasoning to develop techniques for use in cognitive assistants, both for office workers and elderly persons. He holds a Ph.D. from the University of Michigan.

Paolo Viappiani is a postdoctoral researcher at the University of Toronto’s Department of Computer Science. His research interests include preference elicitation, intelligent interfaces, and recommender systems. He holds a Ph.D. in artificial intelligence from the Ecole Polytechnique Fédérale de Lausanne (EPFL) with a thesis on personalized search tools.

Neil Yorke-Smith is a computer scientist at SRI International’s Artificial Intelligence Center. His research aims to help people make decisions in complex situations, with interests including temporal reasoning, planning and scheduling, preferences, advisable agents, intelligent interfaces, and constraint programming, and their real-world applications. He holds a Ph.D. from Imperial College London.
