
Human Robot Interactions: Creating Synergistic Cyber Forces

Jean Scholtz
National Institute of Standards and Technology
100 Bureau Drive, MS 8940, Gaithersburg, MD 20817
[email protected]

Abstract

Human-robot interaction (HRI) for mobile robots is still in its infancy. Most user interactions with robots have been limited to tele-operation capabilities, where the most common interface provided to the user has been the video feed from the robotic platform and some way of directing the path of the robot. For mobile robots with semi-autonomous capabilities, the user is also provided with a means of setting waypoints. More importantly, most HRI capabilities have been developed by robotics experts for use by robotics experts. As robots increase in capabilities and are able to perform more tasks autonomously, we need to think about the interactions that humans will have with robots and what software architectures and user interface designs can accommodate the human in the loop. We also need to design systems that can be used by domain experts who are not robotics experts. This paper outlines a theory of human-robot interaction and proposes the interactions and information needed by both humans and robots for the different levels of interaction, including an evaluation methodology based on situational awareness.

Introduction

The goal in synergistic cyber forces is to create teams of humans and robots that are efficient and effective and take advantage of the skills of each team member. An important subgoal is to increase the number of robotic platforms that can be handled by an individual. To accomplish this goal we need to examine the types of interactions that will be needed between humans and robots and the information that humans and robots need to have desirable interchanges, and to develop the software and interaction architectures to accommodate these needs.

Human-robot interaction is fundamentally different from typical human-computer interaction (HCI) in several dimensions. One study (Fong, Thorpe, and Bauer 2001) notes that HRI differs from HCI and human-machine interaction (HMI) because it concerns systems that have complex, dynamic control systems, exhibit autonomy and cognition, and operate in changing, real-world environments. Additional differences occur in the types of interactions (interaction roles); the physical nature of robots; the number of systems a user may be called on to interact with simultaneously; and the environment in which the interactions occur. Each of these differences is discussed in the ensuing paragraphs.

I originally defined three roles: supervisor, operator, and peer (Scholtz 2002). To expand on these roles slightly, I have added a mechanic role and divided the peer role into bystander and teammate roles. The supervisor and teammate roles imply the same relationships between humans and robots as they do when applied to human-human interactions. An operator is needed to work "inside" the robot: adjusting parameters in the robot's control mechanism to modify abnormal behavior, changing a given behavior to a more appropriate one, or taking over and tele-operating the robot. The mechanic type of interaction is undertaken when a human needs to adjust physical components of the robot, such as the camera or various mechanisms. A bystander does not explicitly interact with a robot but needs some model of robot behavior to understand the consequences of the robot's actions. For example, will the floor-cleaning robot in the workplace sense the presence of a person and stop, or must the person move from the robot's path? Each of these interactions has different tasks and hence different situational awareness needs.

The second dimension is the physical nature of mobile robots. Robots need some awareness of the physical world in which they move. Robots that can physically move from one location to another, as opposed to robot platforms that stay in one location but have mobile components, present more interesting challenges. Ground robots encounter more obstacles than unmanned systems in the air and under water.

Therefore, for the purposes of developing our framework, we consider the more complicated case of mobile ground robots. As robots move about in the real world, they build up a "world model" (Albus et al. 2002). The model the robot platform builds needs to be conveyed to the human in order to understand decisions made by the robot, as the model may not correspond exactly to reality due to the limitations of the robot's sensors and processing algorithms.

From: AAAI Technical Report FS-02-03. Compilation copyright © 2002, AAAI (www.aaai.org). All rights reserved.

A third dimension is the dynamic nature of the robot platform. Typical human-computer interactions assume that computer behavior is for the most part deterministic and that the physical state of the computer does not change in a way that the human must track. However, robotic platforms have physical sensors that may fail or degrade. While some functionality may be affected, the platform may still be able to carry out some limited tasks.

The fourth dimension is the environment in which interactions occur. Platforms to monitor robots may have to function in harsh conditions such as dusty, noisy, and low-light conditions. Environments may be dynamic as well: search and rescue robots may encounter further building or tunnel collapses during an operation, and in a military environment explosions may drastically change the environment during the mission. Not only will the robot have to function in these conditions, but the user interacting with the robot may be co-located as well (a team member, perhaps). Thus interactions may have to be carried out in noisy, stressful, and confusing conditions.

The fifth dimension is the number of independent systems the user needs to interact with. Typical human-computer interaction assumes one user interacting with one system. Even in collaborative systems we usually consider one user to one system, with the added property that this user-computer system is connected to at least one other such system. This allows interaction between users, moderated by the computers, as well as computer-computer interaction. In the case of humans and robots, our ultimate goal is to have a person (at least for a number of the interaction roles we've specified) interacting with a number of heterogeneous robots.

The final dimension is the ability of the robot to perform autonomously for periods of time. While typical desktop computers perform autonomously in the sense that they execute code based on user commands, robots use planning software to relieve the user from dealing with low-level commands and decisions. Thus a robot can go from point A to point B without asking the operator how to deal with each obstacle encountered along the path.

A background: human-robot interaction

Human-robot interaction was first associated with teleoperation of factory robotic platforms. Sheridan (Sheridan 1992) defines telerobotics as "direct and continuous human control of the teleoperator," or a "machine that extends a person's sensing and/or manipulating capability to a location remote from that person." He distinguishes telerobotics, or supervisory control of a remote machine, from supervisory control of any semi-autonomous system regardless of distance. Human-computer interaction in Sheridan's view includes telerobotics. Human-computer interaction is the term most commonly used to denote that a computer application and its associated files are the objects being manipulated, not a physical system controlled through the computer.

Human-robot interaction (HRI) goes beyond teleoperation of a remote platform and allows for some set of autonomous behaviors to be carried out by the robot. This could range from a robot responding to extremely precise commands from a human about adjustment of a control arm, to a more sophisticated robot system planning and executing a path from a start point to an end point supplied by a user. The concept of human-robot interaction has only become possible in the last decade because of advances in the field of robotics (perception, reasoning, programming) that make semi-autonomous systems feasible. An NSF/DOE/IEEE workshop (NSF/DOE, IEEE workshop 1992; Bekey 1996) identified issues for human-machine interfaces and intelligent machine assistants. These issues included:

- Efficient ways for a human controller to interact with multiple semi-autonomous machines
- Interfaces and interactions that adapt depending on the functions being performed

Kidd (Kidd 1992) noted that human skill is always required in robotic systems. Kidd maintains that designers should use robot technology to support and enhance the skills of the human, as opposed to substituting the skills of robots for the skills of the human. He argued for developing and using robotic technology such that human skills and abilities become more productive and effective, for example by freeing humans from routine or dangerous tasks. He points out that robotics researchers tend to focus on issues that are governed by legislative requirements, such as safety, while human-centered design issues have been mostly ignored. Kidd suggests that human-centered design of human-robot interaction needs to look beyond technology issues and consider issues such as task allocation between people and robots, safety, and group structure. These issues need to be considered in the early stages of technology design; if they are only considered in the final stages, they become secondary and have little impact on design decisions.

Fong, Thorpe, and Bauer (Fong, Thorpe, and Bauer 2001) note that clear benefits are to be gained if robots and humans work together as partners. But partners must engage in dialogue, ask questions of each other, and jointly solve problems. They propose a system for collaborative control that provides the best aspects of supervisory control without requiring user intervention within a critical window of time. In collaborative control, the human gives advice but the robot decides how to use it. This is not to say that the robot has final authority; rather, the robot follows a higher-level strategy set by the human, with some freedom in execution. If the user is able to provide relevant advice in time, the robot can act on it; if the user is not available within the time needed, the robot uses default behaviors to react to the situation. Collaborative control is only possible if the robot is self-aware, has self-reliance and can maintain its own safety, has a dialogue capacity, and is adaptive. Dialogue management and user models are needed to implement collaborative control systems.
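The advice-within-a-time-window behavior described above can be sketched as a small timeout loop. This is a minimal illustrative sketch, not the actual system of Fong, Thorpe, and Bauer; the function name, the queue-based advice channel, and the string results are all assumptions made for the example.

```python
import queue

def collaborative_step(situation, advice_queue, timeout_s=2.0,
                       default_behavior=lambda s: f"default({s})"):
    """Act on human advice if it arrives within the window, else fall back.

    The robot retains freedom in execution: advice is an input, not a
    command with final authority.
    """
    try:
        advice = advice_queue.get(timeout=timeout_s)  # wait for the human
        return f"advised({situation}, {advice})"
    except queue.Empty:                               # window expired
        return default_behavior(situation)

# Toy usage: advice arrives in time for the first query, not the second.
q = queue.Queue()
q.put("veer left")
print(collaborative_step("obstacle", q))                 # advised(obstacle, veer left)
print(collaborative_step("obstacle", q, timeout_s=0.05)) # default(obstacle)
```

The key design point is that the default behavior keeps the robot safe and making progress whether or not the human responds.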

Hill (Hill and Tirre 2000) notes that it is important that research in HRI include human factors practitioners in multidisciplinary teams. It should also be stressed that HRI includes much more than just a clever interface for the user. To truly develop synergistic teams it is necessary to consider the skills of both humans and robots and to develop an overall system that allows all parties to fully utilize their skills. This is even more challenging given the dynamic nature of robotic platforms today. We need to design HRI in such a way that it is useful today but fully capable of evolving as the capabilities of robots evolve.

Robotics researchers often use the term human-robot intervention in place of human-robot interaction. For robotic systems that have plan-based capabilities, the term intervention is used when a human needs to modify a plan that has some deficiency, or when the robot is currently unable to execute some aspect of a plan. While robots carrying out preplanned behaviors is certainly a desired activity (e.g., clean the kitchen floor, watch the perimeter, check all the rooms on the 3rd floor for X), more closely coupled human-robot teams need to interact spontaneously as well. In this paper I use the term "human-robot interaction" to refer to the overall research area of teams of humans and robots, including intervention on the part of the human or the robot. I use "intervention" to classify instances when the expected actions of the robot are not appropriate given the current situation and the user either revamps a plan, gives guidance about executing the plan, or gives more specific commands to the robot to modify its behavior.

Human-computer Interaction

In the introduction I listed six dimensions in which HRI is fundamentally different from traditional human-computer interaction. A first step in developing a framework for HRI is to determine what, if anything, is applicable from previous HCI research. One model of human-computer interaction is Norman's seven stages of interaction (Norman 1986):

1. Formulation of the goal – think in high-level terms of what it is you want to accomplish.
2. Formulation of the intention – think more specifically about what will satisfy this goal.
3. Specification of the action – determine what actions are necessary to carry out the intention. These actions will then be carried out one at a time.
4. Execution of the action – physically doing the action. In computer terms this would be selecting the commands needed to carry out a specific action.
5. Perception of the system state – the user must then assess what has occurred based on the action specified and executed. In the perception part the user must notice what has happened.
6. Interpretation of the system state – having perceived the system state, the user must now use her knowledge of the system to interpret what has happened.
7. Evaluation of the outcome – the user now compares the system state (as perceived and interpreted by her) to the intention, deciding whether progress is being made and what action will be needed next.

These seven stages are iterated until the intention and goal are achieved – or the user decides that the intention or goal has to be modified.

Norman identifies two issues with these seven stages: the gulf of execution and the gulf of evaluation. The gulf of execution is a mismatch between the user's intentions and the allowable actions in the system. The gulf of evaluation is a mismatch between the system's representation and the user's expectations. These correspond to four critical points where failures can occur. Users can form an inadequate goal, may not know how to specify a particular action, or may not be able to locate an interaction object; these result in a gulf of execution. Inappropriate or misleading feedback from the system may lead the user to an incorrect interpretation of the system state, resulting in a gulf of evaluation.

Figure 1 is a diagram of Norman's HCI model. The graphic clearly illustrates the cycles that the user may go through. Once the user has identified a goal, formulated an intention, and selected an action that seems appropriate to accomplishing the goal, the system feedback is examined and evaluated by the user. If the goal has still not been met, the user either selects another action, if it is in the realm of the original intention, or changes the intention. Once the goal is achieved, the cycle begins again. The basic interaction cycle is specifying actions, examining the state of the system to decide whether the goal has been accomplished, and continuing this cycle if not.
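This specify/perceive/evaluate cycle can be written down as a small control loop. The stage names follow Norman's model as described above; the function signature and the toy counter task are illustrative assumptions, not part of Norman's formulation.

```python
def interaction_cycle(goal, choose_action, execute, perceive,
                      goal_met, max_steps=10):
    """Iterate Norman's cycle until the goal is met or steps run out."""
    for _ in range(max_steps):
        action = choose_action(goal)   # specification of the action
        execute(action)                # execution of the action
        state = perceive()             # perception of the system state
        if goal_met(goal, state):      # interpretation and evaluation
            return state
    return None                        # user would revise intention or goal

# Toy usage: the "system" is a counter; the goal is to reach the value 3.
counter = {"value": 0}
result = interaction_cycle(
    goal=3,
    choose_action=lambda g: "increment",
    execute=lambda a: counter.__setitem__("value", counter["value"] + 1),
    perceive=lambda: counter["value"],
    goal_met=lambda g, s: s >= g,
)
print(result)  # 3
```

The `None` return corresponds to the case where the user decides the intention or goal itself must change rather than iterating further.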

Figure 1: Norman’s HCI Model

A theory of human-robot interaction

Some assumptions are necessary first. For our theory of human-robot interaction we are concerned with semi-autonomous mobile robots interacting alone and in teams. Sheridan (Sheridan 1992) outlines five generic supervisory functions: planning what task to do and how to do it; teaching or programming the computer; monitoring the automatic action to detect failures; intervening to specify a new goal in the event of trouble or to take over control once the desired goal state has been reached; and learning from experience. In our theory we are concerned with support for specifying actions, monitoring actions, and intervention. We assume that the robot is already programmed to carry out basic functions and that any "reprogramming" happens during intervention. For the initial version of our theory we are not considering learning on the part of the robot or on the part of the user. To illustrate the different levels of interaction, here are two scenarios.

Military Operations in an Urban Terrain

A number of soldiers and robots are approaching a small urban area. Their goal is to make sure that the area is secured – free of enemy forces. The robots will be used to move through congested areas and send back images to the soldiers so that they know what to expect. Robots will also be used to enter buildings that may be hiding places for enemy soldiers. The robots have some degree of coordination in covering the area. A supervisor is overseeing the scouting mission from a remote location close to, but not in, the urban area. Ground troops are close behind the robots, and individual robots are associated with a certain group of soldiers. In each group, one soldier is an expert in maintaining the robot (both physically and programmatically) associated with that group; that soldier can function as the mechanic or the operator. The robots are heterogeneous and are assigned different tasks within the larger scouting mission. That is, one robot may be assigned to scout for land mines, while another, smaller robot may be used to enter buildings undetected and send back images to its team. The supervisor needs to know that all robots are doing their jobs and reassigns robots as the mission moves forward. If there is a problem, the supervisor can either intervene or alert the soldier assigned to that robot.

The soldiers functioning as mechanics and operators do what is necessary to get the robot back into an operational state. This could be as simple as identifying an image that the robot can't classify or as complicated as adjusting sensor controls or reprogramming the robot. The soldiers are also teammates of the robot and depend on the images the robot is collecting. They may want to specify actions depending on the content of these images (look again; look again at closer range) but have to coordinate that tasking with the supervisor. The robots will also encounter civilians in the urban environment. Some degree of social interaction will be necessary so that the civilians don't feel threatened by the robots and are not harmed by them.

Elder Care

An elder care facility has deployed a number of robots to help in watching over and caring for its residents. The supervisor oversees the robots, which are distributed throughout the facility, and makes sure that the robots are functioning properly and that residents are either being watched or being cared for – either by a robot or by a human caregiver. A number of human caregivers are experts in robot operation and assist as needed, depending on their duties at the time. The operators might use a mobile device, such as a PDA, to adjust parameters in the robot software. The elder care facility also employs a mechanic who is called upon when needed to adjust the physical capabilities of the robots – such as cameras becoming dislodged. The caregiver robots can perform routine tasks such as helping with feeding, handing out supplies to residents, and assisting residents in moving between locations in the facility. Watcher robots monitor residents; they can send back continual video feeds but also alert the supervisor or a nearby human caregiver to an emergency situation. In most cases, the human and robot caregivers work as teams. Human caregivers can override preplanned behaviors to ask robots to assist with more critical situations, such as moving residents to another part of the room if an emergency – such as a resident falling – occurs. Robots interact with the residents as well as with visitors to the facility, who may not be aware of their capabilities.

What can we learn from these two scenarios? First, the boundary between the levels of interaction is fuzzy. The supervisor can take the operator role, assuming the supervisor has the necessary cycles to do so and that this might be more efficient than notifying the designated operator and handing off the problem. The team members can command the robots, as can the supervisor. Bystanders who have little or no idea of the capabilities of the robot, and who do not have access to computer displays of robot status, will still have some level of interaction with the robots. This may simply be to get out of the way, though an interesting issue is whether bystanders should have some subset of interactions available to them. All of the different interaction roles can occur at the same time: the same person might assume more than one role at a time, or different people could have different interaction roles.

Models of HRI

What changes to this model of HCI are necessary to describe HRI systems? The following sections contain possible models of interaction for the various HRI roles.

Supervisor Interaction

A supervisor role could be characterized as monitoring and controlling the overall situation. This could mean that a number of robots are monitored and the supervisor evaluates the given situation with respect to a goal that needs to be carried out. For robots that possess planning systems, the goals and intentions have been given to the planning system, and the robot software generates the actions based on a perception of the real world. The supervisor can step in and specify an action or modify plans. In either case, a formal representation of the goal and intention is necessary so that the supervisor can formulate the effect an intervention will have on the longer-term plan.

Figure 2 contains a proposed model for the supervisor-robot interaction. The main loop is the perception/evaluation loop, as most of the actions are automatically generated by the robot software. Supervisor interactions at the action and intention levels must be supported as well. Note that for multiple robotic systems the supervisor must monitor the status of all platforms. The issue is how to support efficient monitoring: What abstractions are appropriate for monitoring purposes? How can an operator be alerted to a potential intervention? We also need to consider the number and heterogeneity of the systems that must be monitored. The graphic in Figure 2 clearly shows that human-robot interaction for the supervisor is heavily perceptually based, and that interactions need to be supported at both the action and intention levels.
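One way to picture the formal goal/intention representation the supervisor needs is as a small hierarchy that can be summarized for monitoring. This is a hypothetical sketch: the dataclass layout, field names, and progress summary are assumptions for illustration, not an architecture from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    done: bool = False

@dataclass
class Intention:
    description: str
    actions: list = field(default_factory=list)

    def progress(self):
        """Fraction of this intention's actions completed."""
        if not self.actions:
            return 1.0
        return sum(a.done for a in self.actions) / len(self.actions)

@dataclass
class Goal:
    description: str
    intentions: list = field(default_factory=list)

    def status(self):
        """Summarize per-intention progress for a supervisor display."""
        return {i.description: i.progress() for i in self.intentions}

# Toy usage with the urban-scouting scenario's flavor.
goal = Goal("secure the area", [
    Intention("scout main street", [Action("reach waypoint", True),
                                    Action("send imagery", False)]),
    Intention("check building 3", [Action("enter building", False)]),
])
print(goal.status())  # {'scout main street': 0.5, 'check building 3': 0.0}
```

With an explicit structure like this, the supervisor can ask what an intervention at the action level does to the enclosing intention before committing to it.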

Figure 2: HRI Model – Supervisor Role

Operator Interaction

The operator is called upon to modify internal software or models when the robot's behavior is not acceptable. The operator deals mainly with interacting at the action level – with the actions allowed to the operator. It is then necessary to determine whether these actions are being carried out correctly and whether they are in accordance with the longer-term goal. The assumption is that the supervisor role is where the intentions or longer-term plan can be formally changed – not the operator level.

Figure 3: HRI Model – Operator Role



Mechanic Interaction

The mechanic deals with physical interventions, but it is still necessary for the mechanic to determine whether the intervention has the desired effect on behavior. So the model looks similar to the model for operator interaction. The difference is that while the modifications are made to the hardware, the behavior testing needs to be initiated in software, and observations of both software and hardware behavior are necessary to ensure that the behavior is now correct.

Figure 4: HRI Model – Mechanic Role

Peer Interaction

Teammates of the robots can give them commands within the larger goals and intentions, though we follow the same assumption here – that only the supervisor role has the authority to change the larger goals and intentions. This assumption is based on the time needed to alter goals and plans. Even with good user interfaces, teammates may not have the time necessary to perform these interactions. If they do, they can certainly switch to the supervisory role if appropriate.

Figure 5: HRI Model – Peer Role

The model in Figure 5 shows the interaction model proposed for peer interactions. We propose that this interaction needs to occur at a higher level of behavior than the operator interactions allow. Human team members talk to each other in terms of higher-level intentions – not in terms of lower-level behaviors. Terms such as "follow me," "make a sharp left turn," and "wait until I get there" would be reasonable units of dialogue between a robot and a human team member in the peer role. In this case, direct observation is probably the perceptual input used for evaluation. If the behavior is not correctly carried out, the peer has the choice of switching to the operator role or handing off the problem to someone more qualified as the operator.

Bystander Role

The final role is that of the bystander. Earlier we posed an interesting question: should a bystander be given a subset of interactions with the robot appropriate to this role? For the purposes of this model, let's assume so. The bystander might be able to cause the robot to stop by walking in front of it, for example.

Figure 6: HRI Model – Bystander Role

In this model, the bystander has only a subset (sub A) of the actions available. She is not able to interact at the goal or intention level, and feedback must be directly observable. The largest challenge here is how to advise the bystander of the capabilities of the robot that are under her control; there will most likely not be a typical display. Much of the research on emotion and robots is applicable here (Breazeal and Scassellati 1999; Bruce, Nourbakhsh, and Simmons 2001).
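The per-role action subsets, with the bystander limited to the "sub A" subset described above, can be sketched as a simple permission table. The role names follow the paper; the concrete action names and the lookup function are illustrative assumptions.

```python
# Each interaction role sees only its own subset of the robot's actions.
ROLE_ACTIONS = {
    "supervisor": {"set_goal", "modify_plan", "specify_action", "teleoperate"},
    "operator":   {"specify_action", "adjust_parameters", "teleoperate"},
    "mechanic":   {"adjust_hardware", "run_behavior_test"},
    "teammate":   {"specify_action"},   # within the larger goal/intentions
    "bystander":  {"stop"},             # "sub A": e.g. an implicit stop
}

def allowed(role, action):
    """Return True if this interaction role may issue the given action."""
    return action in ROLE_ACTIONS.get(role, set())

print(allowed("bystander", "stop"))         # True
print(allowed("bystander", "modify_plan"))  # False
```

Note that only the supervisor entry contains goal- and intention-level actions, matching the assumption that other roles must switch to the supervisory role to change the larger plan.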



Situational awareness

Given the HRI models proposed in section 4, one question is how to evaluate human-robot interactions. In all the models the perceptual step is essential. In many of the roles it is necessary to understand not just the state of the robot system after an action has occurred; it is also critical to understand what the robot's state was when the action was given. This helps us understand possible mismatches between behaviors specified and behaviors actually carried out.

A second issue is separating the performance of the HRI system from the performance of the user interaction design and the actual interface. Due to the physical nature of the robots as well as the sophisticated software incorporating perception, learning, and planning, a failure in performance may not be due to an issue with the user interaction but may be attributable to the robot's software system or to malfunctions of the robot's sensory system.

Therefore we plan to carry out our HRI evaluations in two stages. We will evaluate the perceptual part of the model separately from the intervention part of the interaction design, and we will separate both of those from the actual performance of the HRI system. The evaluation of the intervention portion will not be discussed in this paper, as it will be based on current usability evaluation methodologies. The evaluation of the perceptual part of the model will be based on assessing situational awareness.

However, each of the levels of interaction will require a different perspective and hence different situational awareness. These issues will be discussed in the sections detailing the proposed HRI roles. As background, it is necessary to have an understanding of situational awareness, as well as of methodologies and measurement tools to assess it.

Situational awareness (Endsley 2000b) is the knowledge of what is going on around you. The implication in this definition is that you understand what information is important to attend to in order to acquire situational awareness. Consider your drive home in the evening. As you drive down the freeway and urban streets, there is much information you could attend to. You most likely do not notice if someone has painted their house a new color, but you definitely notice if a car parked in front of that house starts to pull out into your path. There are three levels of situational awareness (Endsley 2000a), which correspond to various stages of evaluation in Norman's model of HCI.

Level One of situational awareness is basic: the perception of cues. You have to perceive important information in order to be able to proceed. Failures to perceive information can result from shortcomings of a system or from a user's cognitive failures. In studies of situational awareness in pilots, 76% of SA errors (Jones and Endsley 1996) were traced to problems in perception of needed information.

Level Two of situational awareness is the ability to comprehend, or to integrate, multiple pieces of information and determine their relevance to the goals the user wants to achieve. This corresponds to interpretation and a portion of evaluation in Norman's seven stages.

A person achieves the third level of situational awareness if she is able to forecast future situation events and dynamics based on her perception and comprehension of the present situation. This corresponds to the evaluation and iterative formulation and specification stages of Norman's theory.

Performance and situational awareness, while related, are not directly correlated. It is entirely possible for a person to have achieved level three situational awareness but not perform well. This is evident in Norman's stages of action – other reasons for not achieving correct execution are certainly possible. Some of these reasons can be attributed to poorly designed systems, while others can be attributed to a user's cognitive failures. Direct system measurement of performance on selected scenarios in context is one way to measure situational awareness, but only if it can be shown that performance depends only on situational assessment. One way to overcome this with direct system measurements is to introduce some sort of disruption into the system, such as a completely unrealistic pattern, and measure the amount of time it takes users to detect the anomaly.

The most common way to measure situational awareness is by direct experimentation using queries (Endsley 2000a). The task is frozen, questions are asked to determine the user's situational assessment at the time, and then the task is resumed. The Situation Awareness Global Assessment Technique (SAGAT) tool was developed as a measurement instrument for this methodology (Endsley 1988). The SAGAT tool uses a goal-directed task analysis to construct a list of the situational awareness requirements for an entire domain or for particular goals and subgoals. It is then necessary to construct the query in such a way that the effort required for the operator's response is minimized. For example, if a user were being queried about the status of a particular robot, the query might identify the robot by location rather than relying on the user to recall a name or to understand a description. The various options for status could be presented as choices rather than relying on the user to formulate a response that might not include all the variables desired.
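As an illustration, a SAGAT-style freeze-probe query for a multi-robot task might be structured as in the following sketch. All robot descriptions, locations, and status options here are hypothetical examples, not part of the SAGAT instrument itself:

```python
# Minimal sketch of a SAGAT-style freeze-probe query for a multi-robot task.
# All locations and status options here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class SAQuery:
    """One freeze-probe question: identify the robot by location, not by name,
    and offer a closed set of status choices so the operator need not
    formulate a free-form answer."""
    robot_location: str   # e.g. "robot nearest the loading dock"
    choices: tuple        # fixed set of status options presented to the user
    correct: str          # ground truth snapshotted by the system at freeze time
    response: str = ""    # filled in by the operator during the freeze

    def score(self) -> bool:
        # A correct choice indicates intact SA about this robot at freeze time.
        return self.response == self.correct

# At a randomly chosen freeze point, the system snapshots ground truth
# and presents queries like this one:
q = SAQuery(
    robot_location="robot nearest the loading dock",
    choices=("idle", "navigating to waypoint", "stalled", "reduced capability"),
    correct="stalled",
)
q.response = "stalled"   # operator's answer
print(q.score())         # True
```

The point of the design is in the fields: the robot is presented by location, and status is a multiple-choice selection, exactly as the query-minimization guidance above suggests.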


Issues with situational awareness
There are individual differences in situational awareness. Experiments by Gugerty and Tirre (Gugerty and Tirre 2000) show that situational awareness is correlated with working memory, perceptual-motor ability, static visual processing, dynamic visual processing, and temporal processing ability. In addition, studies have shown that the ability to acquire situational awareness decreases with age (Bolstad and Hess 2000). These are factors that must be accounted for when doing assessments of situational awareness with respect to interface designs in the human-robot interaction domain.

Operators of fully automated systems often have difficulty responding to emergency situations. The SAGAT tool has been used to show that there is a decrease in situational awareness with fully automated systems (Endsley and Kiris 1995). Goodrich, Olsen, Crandall, and Palmer (Goodrich et al. 2001) introduce the concept of neglect to capture the relationship between user attention and robot autonomy. The idea is that a robot's effectiveness decreases as the operator fails to attend to that robot. Neglect can be caused by time delays in remote operations or by increased workload on the part of the operator. As robots become more autonomous, the breadth of the tasks they can handle decreases. This makes them less effective but more tolerant of neglect.
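The neglect tradeoff can be caricatured with a toy model. This is our own illustration, not the formulation used by Goodrich et al.: effectiveness decays as time-off-task grows, and higher autonomy slows that decay while narrowing the breadth of tasks the robot can handle.

```python
# Toy model of the neglect tradeoff (illustrative only; not Goodrich et al.'s
# formulation). Effectiveness decays with neglect time; greater autonomy slows
# the decay but narrows the set of tasks the robot can handle on its own.
import math

def effectiveness(neglect_time: float, autonomy: float) -> float:
    """autonomy in (0, 1]; higher autonomy means slower decay under neglect."""
    decay_rate = (1.0 - autonomy) + 0.05   # near-teleoperated robots decay fast
    breadth = 1.0 - 0.5 * autonomy         # more autonomy, narrower task breadth
    return breadth * math.exp(-decay_rate * neglect_time)

# A low-autonomy robot loses effectiveness quickly when ignored;
# a high-autonomy robot tolerates the same neglect much better.
low = effectiveness(neglect_time=5.0, autonomy=0.1)
high = effectiveness(neglect_time=5.0, autonomy=0.9)
print(low < high)   # True
```

The constants are arbitrary; what matters is the shape of the tradeoff, which mirrors the prose: autonomy buys neglect tolerance at the cost of breadth.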

Situational awareness requirements for HRI roles
As noted earlier, the different roles within HRI require different awareness of the situation. In the following sections we propose some information we hypothesize is appropriate to the various roles. We propose to use several sources for guidance. First, we will attempt to find a corresponding domain and use its successful interaction designs as a basis. Secondly, we will use subject matter experts (as available) for each role to verify this information. In some instances (particularly the peer and bystander roles) we will have to conduct some experiments to gather the necessary information. Based on this knowledge, we will construct situation awareness assessment tools and user interfaces. Using the situation awareness assessment tool we will produce a baseline metric for a number of situations. HRI researchers will be able to use our user interface and assessment tool to assess their work.

The Supervisory Role. We assume that the supervisory interface is operated from a remote location. Our hypothesis is that the supervisor needs the following information:
- an overview of the situation, including progress of multiple platforms
- the mission or task plan
- current behaviors of any robots, including deviations that may require intervention
- other roles interacting with the robot(s) under her control, including interactions between robots

A corresponding HCI domain is that of complex monitoring devices (Vincente, Roth, and Muman 2001). Complex monitoring devices were originally based on displays of physical devices. The original devices were just lights and switches that corresponded to a sensor or actuator. Initially these were displayed on physical panels. When these displays were switched to computer-based displays, a single display was unable to show all the information. This produced a keyhole effect: the notion that a problem was most likely occurring on a display that wasn't currently being viewed.

Another issue in complex monitoring devices is that of having an indication of what "normal" is. This is also true in human-robot interactions, where physical capabilities of the system change and the supervisor needs to know the "normal" status of the robot at any given time. Another issue is that single devices may not be the problem but rather relationships between existing devices. Displays should support not only problem-driven monitoring but knowledge-driven monitoring, in which the supervisor actively seeks out information based on the current situation or task. Due to the amount of information present in complex monitoring devices, users have strategies to reduce cognitive demands. These include reducing noise by turning off meaningless alarms, documenting baseline conditions, and creating external cues and reminders for monitoring various components. Computer-based displays of complex systems give users more flexibility to view information in different forms, but there is a tradeoff between the time to manipulate the interface and any performance increase gained from this flexibility. We suggest that lessons learned in producing displays for monitoring complex systems can be used as a starting point for supervisory interfaces for HRI. In addition, basic HRI research issues that are not addressed in complex systems include:
- what information is needed to give an overview of teams of robots
- can a robot team world model be created, and would it be useful


- what (if any) views of an individual robot's world model are useful

- how to give an awareness of other interaction roles occurring

- handoff strategies for assigning interventions to others

Situational awareness indicators will be developed based on a task analysis of the supervisor's role in a number of scenarios (such as those described earlier in this paper). An initial hypothesis about possible indicators of situational awareness includes:
- which robots have other interactions going on
- which robots are operating in a reduced capability
- the type of task and behaviors that the robots are currently carrying out
- the current status of the mission
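These hypothesized indicators could be captured in a simple per-robot status record that a supervisory display (or a SAGAT-style probe) would draw on. The following is a sketch; the field names, robot names, and triage rule are our own invention for illustration:

```python
# Sketch of a per-robot status record covering the hypothesized supervisory
# SA indicators. Field names and the triage rule are illustrative only.
from dataclasses import dataclass

@dataclass
class RobotStatus:
    name: str
    other_interactions: bool   # is another role (operator, peer, ...) engaged?
    reduced_capability: bool   # operating with degraded sensors/actuators?
    current_task: str          # task/behavior currently being carried out

def needs_attention(robots: list) -> list:
    """Robots a supervisor should attend to first: those that are degraded
    or already tied up in other interactions."""
    return [r.name for r in robots if r.reduced_capability or r.other_interactions]

fleet = [
    RobotStatus("alpha", other_interactions=False, reduced_capability=False,
                current_task="patrol"),
    RobotStatus("bravo", other_interactions=True, reduced_capability=False,
                current_task="being serviced by operator"),
    RobotStatus("charlie", other_interactions=False, reduced_capability=True,
                current_task="mapping"),
]
print(needs_attention(fleet))   # ['bravo', 'charlie']
```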

Operator interaction. We make the assumption that this will be either a remote interaction or will occur in an environment in which any additional cognitive demands placed on the user by the environment are light. We will also assume that the operator has an external device to use as an interface to the robot. The operator must be a skilled user, having knowledge of the robotic architecture and robotic programming. If the robot has teleoperation capabilities, the operator could take over control. This is the most conventional role for HRI. Moreover, as the capabilities and roles of robots expand, this role has to be capable of supporting interaction in more complex situations.

We hypothesize that the operator needs the following information:
- The robot's world model
- The robot's plans
- The current status of any robotic sensors
- Other interactions currently occurring
- Any other jobs that are currently vying for the operator's attention (assuming it is possible to service more than one robot)
- The effects of any adjustments on plans and other interactions
- Mission overview and any timing constraints

Murphy and Rogers (Murphy and Rogers 1996) note three drawbacks of telesystems in general:
- The need for a high communication bandwidth for operator perception and intervention
- Cognitive fatigue due to the repetitive nature of tasks
- Too much data and too many simultaneous activities to monitor

Murphy and Rogers propose the mode of teleassistance, which consists of a basic cooperative assistance architecture, joining sensor fusion effects to support the motor behavior of a fully autonomous robot with a visual interaction assistant that focuses user attention on relevant information using knowledge-based techniques.

Mechanic role. The mechanic must be co-located with the robot, as these interactions will be focused on the physical nature of the robot platform. The mechanic will need to adjust some physical aspect of the robot and then check a number of behaviors to determine if the problem has been solved. The mechanic needs the following information:
- what behaviors were failing and how
- information pertaining to any settings of mechanical parts and sensors
- software settings associated with behaviors of various sensors

In addition, the mechanic needs a way to take the robot "off-line" for testing behaviors. An issue to address here is the nature of the interface. Should an external device be used, or should the robot hardware support access to this information? We have speculated that the automated diagnosis and repair domain might be beneficial to examine for possible approaches. At the present time we have not located literature that has been useful, but we plan to conduct some field observations in this area in the near future.

Peer role. We make the assumption that these are face-to-face interactions. This is the most controversial type of interaction. Our use of the terms "peers" and "teammates" is not meant to suggest that humans and robots are equivalent but that each contributes skills to the team according to its ability. The ultimate control rests with the user, whether the team member or the supervisor. The issue is how the user (in this case, the peer) gets feedback from the robot concerning its understanding of the situation and the actions being undertaken. In human-human teams this feedback occurs through communication and direct observation. Current research (Breazeal and Scassellati 1999; Bruce, Nourbakhsh, and Simmons 2001) looks at how robots should present information and feedback to their users. Bruce et al. stress that ordinary people should be able to interpret the information that a robot is giving them, and that robots have to behave in socially correct ways to interact in a useful manner in society. Breazeal and Scassellati use perceptual inputs, classify these as social and non-social stimuli, and use sets of behaviors to react to these stimuli.

Earlier work in service robots illustrates some of the issues that must be investigated for successful peer-to-peer interactions. Engelhardt and Edwards (Engelhardt and Edwards 1992) looked at using


command and control vocabularies for mobile, remote robots, including natural language interfaces. They found that users needed to know what commands were possible at any one time. This will be challenging if we determine that it is not feasible to have a separate device that can be used as an interface to display additional status from the robots that would be difficult to convey via robotic gestures.

We intend to investigate research results in mixed-initiative spoken language systems as a basis for communicating an understanding of the robot to the user and vice versa. Our hypotheses about the information that the teammate will need include:
- What other interactions are occurring
- What the current status of the robot is
- What the robot's world model is
- What actions are currently possible for the robot to carry out

Other interesting challenges include the distance from the robot at which the team can operate. We use other communication devices to operate human-human teams from a distance. What are the constraints and requirements for robot team members?

Bystander role. This is perhaps the most difficult role for interaction, even though bystanders will have the most limited interactions. As described in our scenarios, a bystander role is principally concerned with co-existing in the same environment as the robot. A bystander might be a victim that the search and rescue robot has discovered in the rubble. The victim would like to be able to discover that the robot has delivered water or air and is reporting her location to the rescue team. Or a bystander might simply be a driver passing an autonomous vehicle. What is it necessary for that bystander to know? Most drivers in that situation would want some assurance that the vehicle has skills equivalent to those of the majority of licensed drivers. The interface for bystanders is most likely limited to some form of behavioral indication: a robot "smile," or an action on the part of the robot, such as staying in the correct lane on the highway, that gives the bystander an indication of competence. New experiments with robot pets and service robots (such as robot lawn mowers) will also help determine what information is needed to make bystanders comfortable with robots in their environment.

A very limited situation assessment might be possible for the bystander role. We would like to determine if the bystander understands:
- what caused the current behavior of the robot (something in the environment, something the bystander did, external forces)
- what the robot might do next, especially given an action on the part of the bystander
- the range of behaviors that the robot can exhibit
- what, if any, behaviors can be caused by the bystander

Conclusions
We propose that human-robot interactions are of five varieties, each needing different information and each being used by different types of users. In our research we will develop a number of scenarios within a specific domain and do a task-based analysis of the types of human-robot interactions suggested by each scenario. We will then develop both a baseline interface for the various roles and a situational assessment measurement tool. We propose to conduct a number of user experiments and make the results publicly available. Other HRI researchers can then use the same experimental design, varying either the user interfaces or the information available to the users, and compare their results to these baseline results. Our initial work will focus on the supervisory role within a driving domain. A research challenge will be what generalizes between different domains. For example, can we take what we learn in the driving domain and apply it to the search and rescue domain?

Our work in this area is interdisciplinary. Not only must we be concerned with generating the user interface, we must also ensure that the necessary information is available to the user. This will require coordination with experts in robotic software architectures. We have concentrated on the user and her information needs in this paper. However, to achieve a successful synergistic team, it will be necessary to furnish information about the user to the robot and to create a dialogue space for team communication. We will start by concentrating on the user aspects of the information but intend to expand our research to include the capture and use of user information as well.

Acknowledgements
This work was supported in part by the DARPA MARS program.

References
Albus, J., Huang, H., Messina, E., Murphy, K., et al. 2002. 4D/RCS Version 2.0: A Reference Model Architecture for Unmanned Vehicle Systems. Technical Report NISTIR 6910, National Institute of Standards and Technology.

Bekey, George. 1996. Needs for Robotics in Emerging Applications: A Research Agenda. Report for IEEE/RIA Robotics and Intelligent Machines Coordinating Council Workshop on Needs for Robotics in Emerging Applications. http://www.sandia.gov/isrc/Bekey.paper.html


Bolstad, C. and Hess, T. 2000. Situation Awareness and Aging. In Mica R. Endsley and Daniel J. Garland (Eds.), Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates: Mahwah, New Jersey.

Breazeal, C. and Scassellati, B. 1999. A Context-Dependent Attention System for a Social Robot. 1999 International Joint Conference on Artificial Intelligence.

Bruce, A., Nourbakhsh, I., and Simmons, R. 2001. The Role of Expressiveness and Attention in Human-Robot Interaction. AAAI Fall Symposium, Boston, MA, October.

Endsley, M. R. 1988. Design and Evaluation for Situation Awareness Enhancement. In Proceedings of the Human Factors Society 32nd Annual Meeting (Vol. 1, pp. 97-101). Santa Monica, CA: Human Factors Society.

Endsley, M. R. and Kiris, E. O. 1995. The Out-of-the-Loop Performance Problem and Level of Control in Automation. Human Factors, 37(2), 381-394.

Endsley, M. R. 2000a. Direct Measurement of Situation Awareness: Validity and Use of SAGAT. In Mica R. Endsley and Daniel J. Garland (Eds.), Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates: Mahwah, New Jersey.

Endsley, M. R. 2000b. Theoretical Underpinnings of Situation Awareness: A Critical Review. In Mica R. Endsley and Daniel J. Garland (Eds.), Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates: Mahwah, New Jersey.

Engelhardt, K. G. and Edwards, R. A. 1992. Human-Robot Integration for Service Robots. In Mansour Rahimi and Waldemar Karwowski (Eds.), Human Robot Interaction (pp. 315-346). Taylor and Francis: London.

Fong, T., Thorpe, C. and Bauer, C. 2001. Collaboration, Dialogue, and Human-Robot Interaction. 10th International Symposium of Robotics Research, November, Lorne, Victoria, Australia.

Goodrich, M., Olsen, D., Crandall, J. and Palmer, T. 2001. Experiments in Adjustable Autonomy. In Proceedings of the IJCAI01 Workshop on Autonomy, Delegation, and Control: Interacting with Autonomous Agents.

Gugerty, L. and Tirre, W. 2000. Individual Differences in Situation Awareness. In Mica R. Endsley and Daniel J. Garland (Eds.), Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates: Mahwah, New Jersey.

Hill, S. and Hallbert, B. 2000. Human Interface Concepts for Autonomous/Distributed Control. Idaho National Engineering and Environmental Laboratory, DARPA ITO Sponsored Research, 2000 Project Summary. http://www.darpa.mil/ito/psum2000/J932-0.html

Jones, D. G. and Endsley, M. R. 1996. Sources of Situation Awareness Errors in Aviation. Aviation, Space and Environmental Medicine, 67(6), 507-512.

Kidd, P. T. 1992. Design of Human-Centred Robotic Systems. In Mansour Rahimi and Waldemar Karwowski (Eds.), Human Robot Interaction (pp. 225-241). Taylor and Francis: London.

Murphy, R. and Rogers, E. 1996. Cooperative Assistance for Remote Robot Supervision. Presence, 5(2), Spring, 224-240.

Norman, D. 1986. Cognitive Engineering. In Donald Norman and Stephen Draper (Eds.), User-Centered Design: New Perspectives on Human-Computer Interaction (pp. 31-62). Erlbaum Associates: Hillsdale, NJ.

NSF/DOE, IEEE Robotics & Automation Society and Robotic Industries Association. 1996. Workshop on Research Needs in Robotic and Intelligent Machines for Emerging Industrial and Service Applications, October, Albuquerque, NM. http://www.sandia.gov/isrc/Whitepaper/whitepaper.html

Scholtz, J. 2002. Creating Synergistic CyberForces. In Alan C. Schultz and Lynne E. Parker (Eds.), Multi-Robot Systems: From Swarms to Intelligent Automata. Kluwer.

Sheridan, Thomas B. 1992. Telerobotics, Automation, and Human Supervisory Control. MIT Press: Cambridge, MA.

Vincente, K., Roth, E. and Muman, R. 2001. How Do Operators Monitor a Complex, Dynamic Work Domain? The Impact of Control Room Technology. Journal of Human-Computer Studies, 54, 831-856.