
HUMAN FACTORS, 1988, 30(4), 415-430

Cognitive Engineering: Human Problem Solving with Tools

D. D. WOODS¹ and E. M. ROTH,² Westinghouse Research and Development Center, Pittsburgh, Pennsylvania

Cognitive engineering is an applied cognitive science that draws on the knowledge and techniques of cognitive psychology and related disciplines to provide the foundation for principle-driven design of person-machine systems. This paper examines the fundamental features that characterize cognitive engineering and reviews some of the major issues faced by this nascent interdisciplinary field.

    INTRODUCTION

Why is there talk about a field of cognitive engineering? What is cognitive engineering? What can it contribute to the development of more effective person-machine systems? What should it contribute? We will explore some of the answers to these questions in this paper. As with any nascent and interdisciplinary field, there can be very different perspectives about what it is and how it will develop over time. This paper represents one such view.

The same phenomenon has produced both the opportunity and the need for cognitive engineering. With the rapid advances and dramatic reductions in the cost of computational power, computers have become ubiquitous in modern life. In addition to traditional office applications (e.g., word processing, accounting, information systems), computers increasingly dominate a broad range of work environments (e.g., industrial process control, air traffic control, hospital emergency rooms, robotic factories). The need for cognitive engineering occurs because the introduction of computerization often radically changes the work environment and the cognitive demands placed on the worker. For example, increased automation in process control applications has resulted in a shift in the human role from a controller to a supervisor who monitors and manages semiautonomous resources. Although this change reduces people's physical workload, mental load often increases as the human role emphasizes monitoring and compensating for failures. Thus computerization creates an increasingly larger world of cognitive tasks to be performed. More and more we create or design cognitive environments.

¹ Requests for reprints should be sent to David D. Woods, Department of Industrial and Systems Engineering, Ohio State University, 1971 Neil Ave., Columbus, OH 43210.

² Emilie Roth, Department of Engineering and Public Policy, Carnegie-Mellon University, Pittsburgh, PA 15213.

The opportunity for cognitive engineering arises because computational technology also offers new kinds and degrees of machine power that greatly expand the potential to assist and augment human cognitive activities in complex problem-solving worlds, such as monitoring, problem formulation, plan generation and adaptation, and fault management. This is a highly creative time when people are exploring and testing what can be created with the new machine power: displays with multiple windows and even rooms (Henderson and Card, 1986). The new capabilities have led to large amounts of activity devoted to building new and more powerful tools: how to build better-performing machine problem solvers. The question we continue to face is how we should deploy the power available through new capabilities for tool building to assist human performance. This question defines the central focus of cognitive engineering: to understand what is effective support for human problem solvers.

The capability to build more powerful machines does not in itself guarantee effective performance, as witnessed by early attempts to develop computerized alarm systems in process control (e.g., Pope, 1978) and attempts to convert paper-based procedures to a computerized form (e.g., Elm and Woods, 1985). The conditions under which the machine will be exercised and the human's role in problem solving have a profound effect on the quality of performance. This means that factors related to tool usage can and should affect the very nature of the tools to be used. This observation is not new: in actual work contexts, performance breakdowns have been observed repeatedly with support systems, constructed in a variety of media and technologies including current AI tools, when issues of tool use were not considered (see Roth, Bennett, and Woods, 1987). This is the dark side: the capability to do more amplifies the potential magnitude of both our successes and our failures. Careful examination of past shifts in technology reveals that new difficulties (new types of errors or accidents) are created when the shift in machine power has changed the entire human-machine system in unforeseen ways (e.g., Hirschhorn, 1984; Hoogovens Report, 1976; Noble, 1984; Wiener, 1985).

Although our ability to build more powerful machine cognitive systems has grown and been promulgated rapidly, our ability to understand how to use these capabilities has not kept pace. Today we can describe cognitive tools in terms of the tool-building technologies (e.g., tiled or overlapping windows). The impediment to systematic provision of effective decision support is the lack of an adequate cognitive language of description (Clancey, 1985; Rasmussen, 1986). What are the cognitive implications of some application's task demands and of the aids and interfaces available to the practitioners in the system? How do people behave/perform in the cognitive situations defined by these demands and tools? Because this independent cognitive description has been missing, an uneasy mixture of other types of description of a complex situation has been substituted: descriptions in terms of the application itself (e.g., internal medicine or power plant thermodynamics), the implementation technology of the interfaces/aids, or the user's physical activities.

Different kinds of media or technology may be more powerful than others in that they enable or enhance certain kinds of cognitive support functions. Different choices of media or technology may also represent trade-offs between the kinds of support functions that are provided to the practitioner. The effort required to provide a cognitive support function in different kinds of media or technology may also vary. In any case, performance aiding requires that one focus first at the level of the cognitive support functions required and then at the level of what technology can provide those functions or how those functions can be crafted within a given computational technology.

This view of cognitive technology as complementary to computational technology is in stark contrast to another view whereby cognitive engineering is a necessary but bothersome step to acquire the knowledge fuel necessary to run the computational engines of today and tomorrow.

    COGNITIVE ENGINEERING IS ...

There has been growing recognition of this need to develop an applied cognitive science that draws on knowledge and techniques of cognitive psychology and related disciplines to provide the basis for principle-driven design (Brown and Newman, 1985; Newell and Card, 1985; Norman, 1981; Norman and Draper, 1986; Rasmussen, 1986). In this section we will examine some of the characteristics of cognitive engineering (or whatever label you prefer: cognitive technologies, cognitive factors, cognitive ergonomics, knowledge engineering). The specific perspective for this exposition is that of cognitive systems engineering (Hollnagel and Woods, 1983; Woods, 1986).

    ... about Complex Worlds

Cognitive engineering is about human behavior in complex worlds. Studying human behavior in complex worlds (and designing support systems) is one case of people engaged in problem solving in a complex world, analogous to the task of other human problem solvers (e.g., operators, troubleshooters) who confront complexity in the course of their daily tasks. Not surprisingly, the strategies researchers and designers use to cope with complexity are similar as well. For example, a standard tactic to manage complexity is to bound the world under consideration. Thus one might address only a single time slice of a dynamic process or only a subset of the interconnections among parts of a highly coupled world. This strategy is limited because it is not clear whether the relevant aspects of the whole have been captured. First, parts of the problem-solving process may be missed or their importance underestimated; second, some aspects of problem solving may emerge only when more complex situations are directly examined.

For example, the role of problem formulation and reformulation in effective performance is often overlooked. Reducing the complexity of design or research questions by bounding the world to be considered merely displaces the complexity to the person in the operational world rather than providing a strategy to cope with the true complexity of the actual problem-solving context. It is one major source of failure in the design of machine problem solvers. For example, the designer of a machine problem solver may assume that only one failure is possible in order to be able to completely enumerate possible solutions and to make use of classification problem-solving techniques (Clancey, 1985). However, the actual problem solver must cope with the possibility of multiple failures, misleading signals, and interacting disturbances (e.g., Pople, 1985; Woods and Roth, 1986).

The result is that we need, particularly in this time of advancing machine power, to understand human behavior in complex situations. What makes problem solving complex? How does complexity affect the performance of human and machine problem solvers? How can problem-solving performance in complex worlds be improved and deficiencies avoided? Understanding the factors that produce complexity, the cognitive demands that they create, and some of the cognitive failure forms that emerge when these demands are not met is essential if advances in machine power are to lead to new cognitive tools that actually enhance problem-solving performance. (See Dorner, 1983; Fischhoff, Lanir, and Johnson, 1986; Klein, in press; Montmollin and De Keyser, 1985; Rasmussen, 1986; Selfridge, Rissland, and Arbib, 1984, for other discussions on the nature of complexity in problem solving.)

    ... Ecological

Cognitive engineering is ecological. It is about multidimensional, open worlds and not about the artificially bounded closed worlds typical of the laboratory or the engineer's desktop (e.g., Coombs and Hartley, 1987; Funder, 1987). Of course, virtually all of the work environments that we might be interested in are man-made. The point is that these worlds encompass more than the design intent; they exist in the world as natural problem-solving habitats.

An example of the ecological perspective is the need to study humans solving problems with tools (i.e., support systems), as opposed to laboratory research that continues, for the most part, to examine human performance stripped of any tools. How to put effective cognitive tools into the hands of practitioners is the sine qua non for cognitive engineering. From this viewpoint, quite a lot could be learned from examining the nature of the tools that people spontaneously create to work more effectively in some problem-solving environment, or examining how preexisting mechanisms are adapted to serve as tools, as occurred in Roth et al. (1987), or examining how tools provided for a practitioner are really put to use by practitioners. The studies by De Keyser (e.g., 1986) are a rare example of the latter.

In reducing the target world to a tractable laboratory or desktop world in search of precise results, we run the risk of eliminating the critical features of the world that drive behavior. This creates the problem of deciding what counts as an effective stimulus (as Gibson has pointed out in ecological perception) or, to use an alternative terminology, deciding what counts as a symbol. To decide this question, Gibson (1979) and Dennett (1982), among others, have pointed out the need for a semantic and pragmatic analysis of environment-cognitive agent relationships with respect to the goals/resources of the agent and the demands/constraints in the environment. As a result, one has to pay very close attention to what people actually do in a problem-solving world, given the actual demands that they face (Woods, Roth, and Pople, 1987). Principle-driven design of support systems begins with understanding what the difficult aspects of a problem-solving situation are (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

... about the Semantics of a Domain

A corollary to the foregoing points is that cognitive engineering must address the contents or semantics of a domain (e.g., Coombs, 1986; Coombs and Hartley, 1987). Purely syntactic and exclusively tool-driven approaches to develop support systems are vulnerable to the error of the third kind: solving the wrong problem. The danger is to fall into the psychologist's fallacy of William James (1890), whereby the psychologist's reality is confused with the psychological reality of the human practitioner in his or her problem-solving world. To guard against this danger, the psychologist or cognitive engineer must start with the working assumption that practitioner behavior is reasonable and attempt to understand how this behavior copes with the demands and constraints imposed by the problem-solving world in question. For example, the introduction of computerized alarm systems into power plant control rooms inadvertently so badly undermined the strategies operational personnel used to cope with some problem-solving demands that the systems had to be removed and the previous alarm system restored (Pope, 1978).


The question is not why people failed to accept a useful technology but, rather, how the original alarm system supported and the new system failed to support operator strategies to cope with the world's problem-solving demands. This is not to say that the strategies developed to cope with the original task demands are always optimal, or even that they always produce acceptable levels of performance, but only that understanding how they function in the initial cognitive environment is a starting point to develop truly effective support systems (e.g., Roth and Woods, 1988).

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated tabula rasa, then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases in which the consequences of poor performance are extremely high.

To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds, as well as the language of particular computational mechanisms, and identify pragmatic reasoning situations, after Cheng and Holyoak (1985) and Cheng, Holyoak, Nisbett, and Oliver (1986). These reasoning situations are abstract relative to the language of the particular application in question and therefore transportable across worlds, but they are also pragmatic in the sense that the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language, for example, that by Coombs and Hartley (1987) through their work on coherence in model generative reasoning.

    ... about Improved Performance

Cognitive engineering is not merely about the contents of a world; it is about changing behavior/performance in that world. This is both a practical consideration (improving performance or reducing errors justifies the investment from the point of view of the world in question) and a theoretical consideration (the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved). Basic concepts are confirmed only when they generate treatments (aiding either on-line or off-line) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng and Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes both absolutely and relative to the history of rather ineffectual alternative treatments to human biases in deductive reasoning.

To achieve the goal of enhanced performance, cognitive engineering must first identify the sources of error that impair the performance of the current problem-solving system. This means that there is a need for cognitive engineering to understand where, how, and why machine, human, and human-plus-machine problem solving breaks down in natural problem-solving habitats.

Buggy knowledge (missing, incomplete, or erroneous knowledge) is one source of error (e.g., Brown and Burton, 1978; Brown and VanLehn, 1980; Gentner and Stevens, 1983). The buggy knowledge approach provides a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. The specification is typically embodied as a computer program and constitutes a "theory" of what these individuals "know" (including misconceptions). Human performance aiding then focuses on providing missing knowledge and correcting the knowledge bugs. From the point of view of computational power, more knowledge and more correct knowledge can be embodied and delivered in the form of a rule-based expert system following a knowledge acquisition phase that determines the fine-grained domain knowledge.
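As an illustration of how such a "theory" can be embodied as a program, the sketch below follows the spirit of Brown and Burton's (1978) diagnostic models: a hypothesized procedural bug (here, the well-documented "smaller-from-larger" subtraction bug) is run over test items and its predictions are matched against a student's observed answers. The function names and test items are invented for the example; this is not their BUGGY system itself.

```python
# Minimal sketch of "buggy knowledge" diagnosis: compare a student's answers
# against predictions from a correct procedure and from a hypothesized bug.

def correct_subtract(a, b):
    """Reference procedure: ordinary multi-digit subtraction (a >= b)."""
    return a - b

def smaller_from_larger_subtract(a, b):
    """Buggy procedure: in each column the smaller digit is subtracted from
    the larger one, and borrowing is never performed."""
    result, place = 0, 1
    while a > 0 or b > 0:
        da, db = a % 10, b % 10
        result += abs(da - db) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

def diagnose(observed, items, models):
    """Return the model whose predictions best match the observed answers."""
    scores = {
        name: sum(model(a, b) == ans for (a, b), ans in zip(items, observed))
        for name, model in models.items()
    }
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    items = [(54, 27), (83, 9), (60, 18), (75, 32)]
    # Answers a student with the "smaller-from-larger" bug would produce.
    observed = [33, 86, 58, 43]
    models = {"correct": correct_subtract,
              "smaller-from-larger": smaller_from_larger_subtract}
    best, scores = diagnose(observed, items, models)
    print(best, scores)   # the buggy model matches all four items; the correct one matches only one
```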

But the more critical question for effective human performance may be how knowledge is activated and utilized in the actual problem-solving environment (e.g., Bransford, Sherwood, Vye, and Rieser, 1986; Cheng et al., 1986). The question concerns not merely whether the problem solver knows some particular piece of domain knowledge, such as the relationship between two entities. Does he or she know that it is relevant to the problem at hand, and does he or she know how to utilize this knowledge in problem solving? Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems but that they often fail to exhibit skilled performance, for example, differences in solving mathematical exercises versus word problems (see Gentner and Stevens, 1983, for examples).

The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem-solving environment. This is the issue of expression of knowledge. Education and training tend to assume that if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions in which it might be useful. In contrast, a variety of research has revealed dissociation effects whereby knowledge accessed in one context remains inert in another (Bransford et al., 1986; Cheng et al., 1986; Perkins and Martin, 1986). For example, Gick and Holyoak (1980) found that unless explicitly prompted, people will fail to apply a recently learned problem-solution strategy to an isomorphic problem (see also Kotovsky, Hayes, and Simon, 1985). Thus the fact that people possess relevant knowledge does not guarantee that this knowledge will be activated when needed. The critical question is not to show that the problem solver possesses domain knowledge, but rather the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge: knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that possession of the relevant domain knowledge or strategies by themselves is not sufficient to ensure that this knowledge will be accessed in new contexts. Off-line training experiences need to promote an understanding of how concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara, and Campione, 1983; Brown and Campione, 1986). Training has to be about more than simply student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about "triggering conditions" and constraints (Glaser, 1984).

Similarly, on-line representations of the world can help or hinder problem solvers in recognizing what information or strategies are relevant to the problem at hand (Woods, 1986). For example, Fischhoff, Slovic, and Lichtenstein (1979) and Kruglanski, Friedland, and Farkash (1984) found that judgmental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge (Glaser, 1984; Lesh, 1987; Perkins and Martin, 1986)? What representations cue people about the knowledge that is relevant to the current context of goals, system state, and practitioner intentions (Wiecha and Henrion, 1987; Woods and Roth, 1988)?

    ... about Systems

Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the "systems" of interest. From one point of view the computer program being executed is the end application of concern. In this case, one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is deemphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown and Newman, 1985; Carroll and Carrithers, 1984; Kieras and Polson, 1985) and designing for ease and pleasurableness of use (Malone, 1983; Norman, 1983; Shneiderman, 1986).

A second perspective is to distinguish the interface from the application world (Hollnagel, Mancini, and Woods, 1986; Mancini, Woods, and Hollnagel, in press; Miyata and Norman, 1986; Rasmussen, 1986; Stefik et al., 1985). For example, text-editing tasks are performed only in some larger context such as transcription, data entry, and composition. The interface is an external representation of an application world; that is, a medium through which agents come to know and act on the world: troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Cohen et al., 1987; Gadd and Pople, 1987), aircraft and helicopter flight decks (Pew et al., 1986), air traffic control systems, process control accident response (Woods, Roth, and Pople, 1987), and command and control of a battlefield (e.g., Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell and Saisi, 1987). The human is not a passive user of a computer program but is an active problem solver in some world. Therefore we will generally refer to people as domain agents or actors or problem solvers and not as users.

In part, the difference in the foregoing views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word processors for document preparation or copying machines for duplicating material), in which the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision-making and supervisory environments (e.g., military situation assessment, medical diagnosis) in which problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for these applications that it is essential to distinguish the world to be acted on from the interface or window on the world (how one comes to know that world), and from agents who can act directly or indirectly on the world.

The "system" of interest in design should not be the machine problem solver per se, nor should the focus of interest in evaluation be the performance of the machine problem solver alone. Ultimately the focus must be the design and the performance of the human-machine problem-solving ensemble: how to "couple" human intelligence and machine power in a single integrated system that maximizes overall performance.

    ... about Multiple Cognitive Agents

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems, often called distributed decision making (e.g., Fischhoff, 1986; Fischhoff et al., 1986; March and Weisinger-Baylon, 1986; Rochlin, La Porte, and Roberts, in press; Schum, 1980).

Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. This raises the problem of how to build a cognitive system that combines both human and machine cognitive systems or, in other words, joint cognitive systems (Hollnagel, Mancini, and Woods, 1986; Mancini et al., in press). When a system includes these machine agents, the human role is not eliminated but shifted. This means that changes in automation are changes in the joint human-machine cognitive system, and the design goal is to maximize overall performance.

One metaphor that is often invoked to frame questions about the relationship between human and machine intelligence is to examine human-human relationships in multiperson problem-solving or advisory situations and then to transpose the results to human-intelligent machine interaction (e.g., Coombs and Alty, 1984). Following this metaphor leads Muir (1987) to raise the question of the role of "trust" between man and machine in effective performance. One provocative question that Muir's analysis generates is, how does the level of trust between human and machine problem solvers affect performance? The practitioner's judgment of machine competence or predictability can be miscalibrated, leading to excessive trust or mistrust. Either a system will be underutilized or ignored when it could provide effective assistance, or the practitioner will defer to the machine even in areas that challenge or exceed the machine's range of competence.

Another question concerns how trust is established between human and machine. Trust or mistrust is based on cumulative experience with the other agent that provides evidence about enduring characteristics of the agent, such as competence and predictability. This means that factors about how new technology is introduced into the work environment can play a critical role in building or undermining trust in the machine problem solver. If this stage of technology introduction is mishandled (for example, practitioners are exposed to the system before it is adequately debugged), the practitioner's trust in the machine's competence can be undermined. Muir's analysis shows how variations in explanation and display facilities affect how the person will use the machine by affecting his or her ability to see how the machine works and therefore his or her level of calibration. Muir also points out how human information-processing biases can affect how the evidence of experience is interpreted in the calibration process.

A second metaphor that is frequently invoked is supervisory control (Rasmussen, 1986; Sheridan and Hennessy, 1984). Again, the machine element is thought of as a semiautonomous cognitive system, but in this case it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology.

In order for a supervisory control architecture between human and machine agents to function effectively, several requirements must be met that, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must have real as well as titular authority; machine problem solvers can be designed and introduced in such a way that the human retains the responsibility for outcomes without any effective authority. Second, the supervisor must be able to redirect the lower-order machine cognitive system. Roth et al. (1987) found that some practitioners tried to devise ways to instruct an expert system in situations in which the machine's problem solving had broken down, even when the machine's designer had provided no such mechanisms. Third, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem-solving process (Woods and Roth, 1988b); otherwise communication between the agents will break down (e.g., Suchman, 1987).

Significant attention has been devoted to the issue of how to get intelligent machines to assess the goals and intentions of humans without requiring explicit statements (e.g., Allen and Perrault, 1980; Quinn and Russell, 1986). However, the supervisory control metaphor highlights that it is at least as important to pay attention to what information or knowledge people need to track the intelligent machine's "state of mind" (Woods and Roth, 1988a).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify the capacity for physical work) to perceptual tools (extensions to the human perceptual apparatus, such as medical imaging) and now, with the arrival of AI technology, to cognitive tools. (Although this type of tool has a much longer history, e.g., aide-memoires or decision analysis, AI has certainly increased the interest in and ability to provide cognitive tools.)

In this metaphor the question of the relationship between machine and human takes the form of what kind of tool is an intelligent machine (e.g., Ehn and Kyng, 1984; Suchman, 1987; Woods, 1986). At one extreme, the machine can be a prosthesis that compensates for a deficiency in human reasoning or problem solving. This could be a local deficiency for the population of expected human practitioners or a global weakness in human reasoning. At the other extreme, the machine can be an instrument in the hands of a fundamentally competent but limited-resource human practitioner (Woods, 1986). The machine aids the practitioner by providing increased or new kinds of resources (either knowledge resources or processing resources, such as an expanded field of attention).
The extra resources may support improved performance in several ways. One path is to off-load overhead information-processing activities from the person to the machine to allow the human practitioner to focus his or her resources on "higher-level" issues and strategies. Examples include keeping track of multiple ongoing activities in an external memory, performing basic data computations or transformations, and collecting the evidence related to decisions about particular domain issues, as occurred recently with new computer-based displays in nuclear power plant control rooms. Extra resources may help to improve performance in another way by allowing a restructuring of how the human performs the task, shifting performance onto a new, higher plateau (see Pea, 1985). This restructuring concept is in contrast to the usual notion of new systems as amplifiers of user capabilities. As Pea (1985) points out, the amplification metaphor implies that support systems improve human performance by increasing the strength or power of the cognitive processes the human problem solver goes through to solve the problem, but without any change in the underlying activities, processes, or strategies that determine how the problem is solved. Alternatively, the resources provided (or not provided) by new performance aids and interface systems can support restructuring of the activities, processes, or strategies that carry out the cognitive functions relevant to performing domain tasks (e.g., Woods and Roth, 1988). New levels of performance are now possible, and the kinds of errors one is prone to (and therefore the consequences of errors) change as well.


The instrumental perspective suggests that the most effective power provided by good cognitive tools is conceptualization power (Woods and Roth, 1988a). The importance of conceptualization power in effective problem-solving performance is often overlooked because the part of the problem-solving process that it most crucially affects, problem formulation and reformulation, is often left out of studies of problem solving and the design basis of new support systems. Support systems that increase conceptualization power (1) enhance a problem solver's ability to experiment with possible worlds or strategies (e.g., Hollan, Hutchins, and Weitzman, 1984; Pea, 1985; Woods et al., 1987); (2) enhance the problem solver's ability to visualize or to make concrete the abstract and uninspectable (analogous to perceptual tools) in order to better see the implications of a concept and to help restructure one's view of the problem (Becker and Cleveland, 1984; Coombs and Hartley, 1987; Hutchins, Hollan, and Norman, 1985; Pople, 1985); and (3) enhance error detection by providing better feedback about the effects/results of actions (Rizzo, Bagnara, and Visciola, 1987).

    ... Problem-Driven

Cognitive engineering is problem-driven, tool-constrained. This means that cognitive engineering must be able to analyze a problem-solving context and understand the sources of both good and poor performance, that is, the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986; Woods and Hollnagel, 1987).

To build a cognitive description of a problem-solving world, one must understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power.

The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance: to meet cognitive demands of the world, to help the human function more expertly, to eliminate or mitigate error-prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways as constrained by tool-building limitations and tool-building possibilities: exploration training worlds, new information, representation aids, advisory systems, or machine problem solvers (see Roth and Woods, 1988; Woods and Roth, 1988).

In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whomever selects the computational mechanisms or to the domain expert who enters knowledge.

The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain (Woods, 1988). Acquiring and using a domain semantics is essential to avoiding potential errors and specifying performance boundaries when building "intelligent" machines (Roth et al., 1987). Techniques for analyzing cognitive demands not only help characterize a particular world but also help to build a repertoire of general cognitive situations that are transportable. There is a clear trend toward this conception of knowledge acquisition in order to achieve more effective decision support and fewer brittle machine problem solvers (e.g., Clancey, 1985; Coombs, 1986; Gruber and Cohen, 1987).

AN EXAMPLE OF HOW COGNITIVE AND COMPUTATIONAL TECHNOLOGIES INTERACT

To illustrate the role of cognitive engineering in the deployment of new computational powers, consider a case in human-computer interaction (for other cases see Mitchell and Forren, 1987; Mitchell and Saisi, 1987; Roth and Woods, 1988; Woods and Roth, 1988). It is one example of how purely technology-driven deployment of new automation capabilities can produce unintended and unforeseen negative consequences. In this case an attempt was made to implement a computerized procedure system using a commercial hypertext system for building and navigating large network data bases. Because cognitive engineering issues were not considered in the application of the new technology, a high-level person-machine performance problem resulted: the "getting lost" phenomenon (Woods, 1984). Based on a cognitive analysis of the world's demands, it was possible to redesign the system to support domain actors and eliminate the getting-lost problems (Elm and Woods, 1985). Through cognitive engineering it proved possible to build a more effective computerized procedure system that, for the most part, was within the technological boundaries set by the original technology chosen for the application.

The data base application in question was designed to computerize paper-based instructions for nuclear power plant emergency operation. The system was built based on a network data base "shell" with a built-in interface for navigating the network (Robertson, McCracken, and Newell, 1981). The shell already treated human-computer interface issues, so it was assumed possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell. The system contained two kinds of frames: menu frames, which served to point to other frames, and content frames, which contained instructions from the paper procedures (generally one procedure step per frame).

In preliminary tests of the system it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation, knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
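The frame structure just described can be made concrete with a small sketch. The names and the Python rendering below are illustrative assumptions, not the actual shell of Robertson, McCracken, and Newell (1981); the point is only that an operator looking at any one frame sees either a menu or a single step, a very narrow window onto the overall procedure.

```python
# Hypothetical sketch of a two-level frame network: menu frames point to other
# frames; content frames hold exactly one procedure step.
from dataclasses import dataclass, field

@dataclass
class Frame:
    frame_id: str
    title: str
    step_text: str = ""                              # empty for menu frames
    links: list[str] = field(default_factory=list)   # ids of frames reachable from here

    @property
    def is_menu(self) -> bool:
        return not self.step_text

network = {
    "menu-1": Frame("menu-1", "Immediate actions", links=["step-1", "step-2"]),
    "step-1": Frame("step-1", "Step 1", "Verify reactor trip."),
    "step-2": Frame("step-2", "Step 2", "Verify turbine trip."),
}

def show(frame: Frame) -> None:
    # Only one frame is visible at a time: the operator sees either a menu or
    # a single step, never the surrounding plan structure.
    if frame.is_menu:
        print(f"[MENU] {frame.title}: choose one of {frame.links}")
    else:
        print(f"[STEP] {frame.title}: {frame.step_text}")

show(network["menu-1"])   # [MENU] Immediate actions: choose one of ['step-1', 'step-2']
show(network["step-1"])   # [STEP] Step 1: Verify reactor trip.
```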

What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure or to a different procedure, depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a "trail" of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, for example, reading ahead. The result was a mismatch between user information-processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information-handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step-by-step execution of instructions; rather, it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (see also Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands (see Woods, 1984). First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked, and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).
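One of these mechanisms, automatic tracking of incomplete steps, can be sketched as follows. The class and method names are hypothetical; this is not the Elm and Woods (1985) implementation, only a minimal illustration of how leaving a step without completing it could automatically create an electronic placeholder.

```python
# Illustrative sketch: steps that are entered but not completed are bookmarked
# automatically, so the operator can jump back to pending steps directly.
class ProcedureSession:
    def __init__(self):
        self.pending = {}     # step id -> reason it was left open
        self.current = None

    def enter_step(self, step_id):
        # Leaving a step without completing it automatically bookmarks it.
        if self.current is not None and self.current not in self.pending:
            self.pending[self.current] = "interrupted"
        self.current = step_id

    def complete_step(self, step_id):
        self.pending.pop(step_id, None)
        if self.current == step_id:
            self.current = None

    def bookmarks(self):
        """Steps awaiting completion, directly selectable from the interface."""
        return list(self.pending)

session = ProcedureSession()
session.enter_step("Step 12")    # plant conditions force a transition...
session.enter_step("Step 31")    # ...so Step 12 is bookmarked automatically
session.complete_step("Step 31")
print(session.bookmarks())       # -> ['Step 12']
```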

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail and is embedded in a skeletal structure of the larger response plan of which it is a part (Furnas, 1986; Woods, 1984). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering redesign was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
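The skeletal-structure display echoes the generalized fisheye idea cited above (Furnas, 1986): show the current step in detail together with steps that are nearby or globally important. The sketch below only illustrates that selection rule, using Furnas's degree-of-interest form (a priori importance minus distance from the current focus); the step names, importance values, and threshold are invented, and this is not the display logic actually used in the redesign.

```python
# Illustrative fisheye selection: a step is shown in detail when its a priori
# importance minus its distance from the current step exceeds a threshold.
def degree_of_interest(api, distance):
    return api - distance

def skeletal_view(steps, current_index, threshold):
    """Return (title, detailed?) pairs: detail for the current step and for
    nearby or globally important steps; collapse the rest to titles."""
    view = []
    for i, (title, api) in enumerate(steps):
        doi = degree_of_interest(api, abs(i - current_index))
        view.append((title, doi >= threshold))
    return view

steps = [("Verify reactor trip", 3),        # globally important header step
         ("Check safety injection", 0),
         ("Control feedwater", 1),          # current focus
         ("Monitor pressurizer level", 0)]

for title, detailed in skeletal_view(steps, current_index=2, threshold=1):
    print(("DETAIL " if detailed else "title  ") + title)
# -> DETAIL Verify reactor trip / title Check safety injection /
#    DETAIL Control feedwater / title Monitor pressurizer level
```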

As the foregoing example saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions.

    SUMMARY

The problem of providing effective decision support hinges on how the designer decides what will be useful in a particular application. Can researchers provide designers with concepts and techniques to determine what will be useful support systems, or are we condemned to simply build what can be built practically and wait for the judgment of experience? Is principle-driven design possible?

A vigorous and viable cognitive engineering can provide the knowledge and techniques necessary for principle-driven design. Cognitive engineering does this by providing the basis for a problem-driven rather than a technology-driven approach whereby the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Cognitive engineering can address (a) existing cognitive systems in order to identify deficiencies that cognitive system redesign can correct and (b) prospective cognitive systems as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture. In this paper we have attempted to outline the questions that need to be answered to make this promise real and to point to research that already has begun to provide the necessary concepts and techniques.

    REFERENCES

    Allen, J., and Perrault, C. (1980). Analyzing intention in

    utterances. ArtificialIntelligence, 15,143-178.

    Becker, R. A., and Cleveland, W. S. (1984). Brushing thescatlerplot matrix: High-interaction graphical methods for analyzing multidimensional data (Tech. Report).Murray Hill, NJ: AT&T Bell Laboratories.

    Boy, G. A. (1987). Operator assistant systems. Interna-tional Journal of Man-Machine Studies, 27, 541-554.Also in G. Mancini, D. Woods, and E. Hollnagel (Eds.).(in press). Cognitive engineering in dynamic worlds.London: Academic Press.

    Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986).Teaching and problem solving: Research foundations.

    American Psychologist, 41, 1078-1089.Brown, A. L., Bransford, J. D., Ferrara, R. A., and Cam-

    pione, J. C. (1983). Learning, remembering, and under-

    standing. In J. H. Flavell and E. M. Markman (Eds.),Carmichael's manual of child psychology. New York:Wiley.

    Brown, A. L., and Campione, J. C. (1986). Psychologicaltheory and the study of learning disabilities. AmericanPsychologist, 41,1059-1068.

    Brown, J. S., and Burton, R. R. (1978). Diagnostic modelsfor procedural bugs in basic mathematics. CognitiveScience, 2, 155-192.

    Brown, J. 5., Moran, T. P., and Williams, M. D. (1982). Thesemantics of procedures (Tech. Report). Palo Alto, CA:Xerox Palo Alto Research Center.

    Brown, J. 5., and Newman, S. E. (1985). Issues in cogni-tive and social ergonomics: From our house to Bau-haus. Human-Computer Interaction, 1, 359-391.

    Brown, J. S., and VanLehn, K. (1980). Repair theory: Agenerative theory of bugs in procedural skills. Cogni-tive Science, 4, 379-426.

    at Afyon Kocatepe Universitesi on April 23, 2014hfs.sagepub.comDownloaded from

    http://hfs.sagepub.com/http://hfs.sagepub.com/http://hfs.sagepub.com/http://hfs.sagepub.com/
  • 8/12/2019 cognitive engineering - human problem solving with tools.pdf

    16/18

    428-August 1988

    Carroll, J. M., and Carrithers, C. (1984). Training wheels ina user interface. Communications of the ACM, 27,800-806.

    Cheng, P. W., and Holyoak, K. J. (1985). Pragmatic reason-ing schemas. Cognitive Psychology, 17, 391-416.

    Cheng, P., Holyoak, K., Nisbett, R., and Oliver, 1. (1986).Pragmatic versus syntactic approaches to training de-ductive reasoning. Cognitive Psychology, 18,293-328.

    Clancey, W. J. (1985). Heuristic classification. Artificial In-telligence, 27, 289-350.

    Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R.,and Suthers, D. (1987). Management of uncertainty inmedicine. In Proceedings of the IEEE Conference onComputers and Communications. New York: IEEE.

    Coombs, M. J. (1986). Artificial intelligence and cognitivetechnology: Foundations and perspectives. In E. Holl-nagel, G. Mancini, and D. D. Woods (Eds.), Intelligent

    decision support in process environments. New York:Springer- Verlag.

    Coombs, M. J., and Alty, J. 1. (1984). Expert systems: Analternative paradigm. International Journal of Man- Machine Studies, 20, 21-43.

Coombs, M. J., and Hartley, R. T. (1987). The MGR algorithm and its application to the generation of explanations for novel events. International Journal of Man-Machine Studies, 27, 679-708. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Davis, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.

De Keyser, V. (1986). Les interactions hommes-machine: Caracteristiques et utilisations des differents supports d'information par les operateurs (Person-machine interaction: How operators use different information channels). Rapport Politique Scientifique/FAST no. 8. Liege, Belgium: Psychologie du Travail, Universite de l'Etat a Liege.

Dennett, D. (1982). Beyond belief. In A. Woodfield (Ed.), Thought and object. Oxford: Clarendon Press.

Dorner, D. (1983). Heuristics and cognition in complex systems. In R. Groner, M. Groner, and W. F. Bischof (Eds.), Methods of heuristics. Hillsdale, NJ: Erlbaum.

Ehn, P., and Kyng, M. (1984). A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm.

Elm, W. C., and Woods, D. D. (1985). Getting lost: A case study in interface design. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 927-931). Santa Monica, CA: Human Factors Society.

Fischhoff, B. (1986). Decision making in complex systems. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support. New York: Springer-Verlag.

Fischhoff, B., Slovic, P., and Lichtenstein, S. (1979). Improving intuitive judgment by subjective sensitivity analysis. Organizational Behavior and Human Performance, 23, 339-359.

Fischhoff, B., Lanir, Z., and Johnson, S. (1986). Military risk taking and modern C3I (Tech. Report 86-2). Eugene, OR: Decision Research.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgments. Psychological Bulletin, 101, 75-90.

Furnas, G. W. (1986). Generalized fisheye views. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 16-23). New York: ACM/SIGCHI.

Gadd, C. S., and Pople, H. E. (1987). An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1.

Gentner, D., and Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gick, M. L., and Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-365.

    Glaser, R. (1984). Education and thinking: The role of knowledge. American Psychologist, 39, 93-104.

Gruber, T., and Cohen, P. (1987). Design for acquisition: Principles of knowledge system design to facilitate knowledge acquisition (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 143-159.

Henderson, A., and Card, S. (1986). Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Report). Palo Alto, CA: Xerox PARC.

Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a postindustrial age. Cambridge, MA: MIT Press.

Hollan, J., Hutchins, E., and Weitzman, L. (1984). Steamer: An interactive inspectable simulation-based training system. AI Magazine, 4, 15-27.

Hollnagel, E., Mancini, G., and Woods, D. D. (Eds.). (1986). Intelligent decision support in process environments. New York: Springer-Verlag.

Hollnagel, E., and Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.

Hoogovens Report. (1976). Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Report FR251). London: British Steel Corporation/Hoogovens.

Hutchins, E., Hollan, J., and Norman, D. A. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311-338.

James, W. (1890). The principles of psychology. New York: Holt.

Kieras, D. E., and Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365-394.

Klein, G. A. (in press). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine research, vol. 5. Greenwich, CT: JAI Press.

Kotovsky, K., Hayes, J. R., and Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294.

Kruglanski, A., Friedland, N., and Farkash, E. (1984). Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 46, 503-518.

Lesh, R. (1987). The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics. Hillsdale, NJ: Erlbaum.

Malone, T. W. (1983). How do people organize their desks: Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1, 99-112.


Mancini, G., Woods, D. D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press. (Special issue of International Journal of Man-Machine Studies, vol. 27).

March, J. G., and Weisinger-Baylon, R. (Eds.). (1986). Ambiguity and command. Marshfield, MA: Pitman Publishing.

McKendree, J., and Carroll, J. M. (1986). Advising roles of a computer consultant. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 35-40). New York: ACM/SIGCHI.

Miller, P. L. (1983). ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 449-461.

Mitchell, C., and Forren, M. G. (1987). Multimodal user input to supervisory control systems: Voice-augmented keyboard. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 594-607.

Mitchell, C., and Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.

Miyata, Y., and Norman, D. A. (1986). Psychological issues in support of multiple activities. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Montmollin, M. de, and De Keyser, V. (1985). Expert logic vs. operator logic. In G. Johannsen, G. Mancini, and L. Martensson (Eds.), Analysis, design, and evaluation of man-machine systems. CEC-JRC Ispra, Italy: IFAC.

Muir, B. (1987). Trust between humans and machines. International Journal of Man-Machine Studies, 27, 527-539. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Newell, A., and Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

    Noble, D. F. (1984). Forces of production: A social history of industrial automation. New York: Alfred A. Knopf.

Norman, D. A. (1981). Steps towards a cognitive engineering (Tech. Report). San Diego: University of California, San Diego, Program in Cognitive Science.

Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the ACM, 26, 254-258.

Norman, D. A., and Draper, S. W. (Eds.). (1986). User-centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Perkins, D., and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical studies of programmers. Norwood, NJ: Ablex.

Pew, R. W., et al. (1986). Cockpit automation technology (Tech. Report 6133). Cambridge, MA: BBN Laboratories Inc.

Pope, R. H. (1978). Power station control room and desk design: Alarm system and experience in the use of CRT displays. In Proceedings of the International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France.

Pople, H., Jr. (1985). Evolution of an expert system: From Internist to Caduceus. In I. De Lotto and M. Stefanelli (Eds.), Artificial intelligence in medicine. New York: Elsevier Science Publishers.

Quinn, L., and Russell, D. M. (1986). Intelligent interfaces: User models and planners. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 314-320). New York: ACM/SIGCHI.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Rizzo, A., Bagnara, S., and Visciola, M. (1987). Human error detection processes. International Journal of Man-Machine Studies, 27, 555-570. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Robertson, G., McCracken, D., and Newell, A. (1981). The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 14, 461-488.

Rochlin, G. I., La Porte, T. R., and Roberts, K. H. (in press). The self-designing high-reliability organization: Aircraft carrier flight operations at sea. Naval War College Review.

Roth, E. M., Bennett, K., and Woods, D. D. (1987). Human interaction with an "intelligent" machine. International Journal of Man-Machine Studies, 27, 479-525. Also in Mancini, G., Woods, D., and Hollnagel, E. (Eds.). (in press). Cognitive engineering in dynamic worlds. London: Academic Press.

Roth, E. M., and Woods, D. D. (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51(1), 39-64.

Schum, D. A. (1980). Current developments in research on cascaded inference. In T. S. Wallsten (Ed.), Cognitive processes in decision and choice behavior. Hillsdale, NJ: Erlbaum.

Selfridge, O. G., Rissland, E. L., and Arbib, M. A. (1984). Adaptive control of ill-defined systems. New York: Plenum Press.

Sheridan, T., and Hennessy, R. (Eds.). (1984). Research and modeling of supervisory control behavior. Washington, DC: National Academy Press.

Shneiderman, B. (1986). Seven plus or minus two central issues in human-computer interaction. In M. Mantei and P. Orbeton (Eds.), Human factors in computing systems: CHI'86 Conference Proceedings (pp. 343-349). New York: ACM/SIGCHI.

Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S., and Suchman, L. (1985, September). Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Report). Palo Alto, CA: Intelligent Systems Laboratory, Xerox Palo Alto Research Center.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Wiecha, C., and Henrion, M. (1987). A graphical tool for structuring and understanding quantitative decision models. In Proceedings of Workshop on Visual Languages. New York: IEEE Computer Society.

Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.

Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229-244.

Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods (Eds.), Intelligent decision support in process environments. New York: Springer-Verlag.

Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors: A collection of essays to celebrate Jens Rasmussen's 60th birthday. London: Taylor and Francis.

Woods, D. D., and Hollnagel, E. (1987). Mapping cognitive demands in complex problem solving worlds (special issue on knowledge acquisition for knowledge-based systems). International Journal of Man-Machine Studies, 26, 257-275.

Woods, D. D., and Roth, E. M. (1986). Models of cognitive behavior in nuclear power plant personnel (NUREG-CR-4532). Washington, DC: U.S. Nuclear Regulatory Commission.

Woods, D. D., and Roth, E. M. (1988a). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. New York: North-Holland.

Woods, D. D., and Roth, E. M. (1988b). Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 51, 139-172.

Woods, D. D., Roth, E. M., and Pople, H. (1987). Cognitive Environment Simulation: An artificial intelligence system for human performance assessment (NUREG-CR-4862). Washington, DC: U.S. Nuclear Regulatory Commission.