
NASA Technical Memorandum 104756

Crew Interface Analysis: Selected Articles on Space Human Factors Research, 1987 - 1991

July 1993

(NASA-TM-104756) CREW INTERFACE ANALYSIS:
SELECTED ARTICLES ON SPACE HUMAN FACTORS
RESEARCH, 1987 - 1991 (NASA) 137 p

N94-24185 --THRU-- N94-24206
Unclas
G3/54 0183180

https://ntrs.nasa.gov/search.jsp?R=19940019712 2019-03-23T14:00:39+00:00Z


NASA Technical Memorandum 104756

Crew Interface Analysis: Selected Articles on Space Human Factors Research, 1987 - 1991

National Aeronautics and Space Administration
Lyndon B. Johnson Space Center
Houston, Texas

July 1993


PREFACE

This document contains human factors articles written between 1987 and 1991 and an acknowledgement with publication information. From its inception, this document was intended to focus on presenting a cross section of the Johnson Space Center's Crew Interface Analysis Section's work in progress.

These articles were generated in the course of everyday work and were not written specifically for this document. They provide a sampling of the work in progress in the Anthropometry and Biomechanics Laboratory, the Graphics Analysis Facility, the Human-Computer Interaction Laboratory, the Remote Operator Interaction Laboratory, the Lighting Environment Test Facility, and the Task Analysis Laboratory. The articles are organized by topic area and not laboratory, thus emphasizing the interdisciplinary and integrated approach to research adopted by the section.

It is hoped that these articles demonstrate the versatility and professionalism of the section, and, in so doing, instill a broad appreciation for the importance of human factors engineering in the design of human-operated systems, with particular emphasis on space systems and missions.


CONTENTS

INTRODUCTION ....................................................................................... 1

SECTION ONE - Human-Computer Interaction

• Spacecraft Crew Procedures from Paper to Computers ...................................... 5
  O'Neal, Manahan

• Process and Representation in Graphical Displays .......................................... 10
  Gillan, Lewis, Rudisill

• Designers' Models of the Human-Computer Interface ....................................... 17
  Gillan, Breedin

• Automated System Function Allocation and Display Format: Task Information
  Processing Requirements
  Czerwinski

• The Use of Analytical Models in Human-Computer Interface Design ...................... 36
  Gugerty

SECTION TWO - Space Habitability

• Using Computer Graphics to Design Space Station Freedom Viewing ..................... 47
  Goldsberry, Lippert, McKee, Lewis, Mount

• A Simulation System for Space Station Extravehicular Activity ........................... 52
  Marmolejo, Shepherd

• Use of Infrared Telemetry as Part of a Nonintrusive Inflight Data Collection
  System to Collect Human Factors Data ...................................................... 59
  Micocci

• Performing Speech Recognition Research with Hypercard .................................. 64
  Shepherd

• Illumination Requirements for Operating a Space Remote Manipulator ................... 68
  Chandlee, Smith, Wheelwright

• Previous Experience in Manned Space Flight: A Survey of Human Factors
  Lessons Learned
  Chandlee, Woolford

SECTION THREE - Remote Operator Interaction

• Hand Controller Commonality Evaluation Process ........................................... 79
  Stuart, Bierschwale, Wilmington, Adam, Diaz, Jensen

• Programmable Display Pushbuttons on the Space Station's Telerobot Control Panel .... 85
  Stuart, Smith, Moore

• Speech Versus Manual Control of Camera Functions During a Telerobotic Task ......... 91
  Bierschwale, Sampaio, Stuart, Smith

• Space Station Freedom Coupling Tasks: An Evaluation of their Telerobotic and EVA
  Compatibility ................................................................................ 98
  Sampaio, Bierschwale, Fleming, Stuart

• The Effects of Spatially Displaced Visual Feedback on Remote Manipulator
  Performance
  Smith, Stuart

• Simulation of the Human Telerobot Interface on the Space Station
  Stuart, Smith

SECTION FOUR - Ergonomics

• Quantitative Assessment of Human Motion Using Video Motion Analysis
  Probe

• Reach Performance While Wearing the Space Shuttle Launch and Entry Suit During
  Exposure to Launch Accelerations
  Bagian, Greenisen, Schafer, Probe, Krutz

• Development of Biomechanical Models for Human Factors Evaluations
  Woolford, Pandya, Maida

• Establishing a Relationship Between Maximum Torque Production of Isolated Joints
  to Simulate EVA Ratchet Push-Pull Maneuver: A Case Study ............................. 132
  Pandya, Maida, Hasson, Greenisen, Woolford


ACKNOWLEDGEMENTS

The following material reprinted with permission by Elsevier Science Publishers B.V.

Chandlee, G. O., Smith, R. L., and Wheelwright, C. D. Illumination requirements for operating a space remote manipulator. Ergonomics of Hybrid Automated Systems, volume 1, pp. 241-248.

The following reprinted with permission © 1990/1988 Society of Automotive Engineers, Inc.

Bagian, J. P., Greenisen, M. C., Schafer, L. E., Probe, J. D., and Krutz, R. W. (1990) Reach Performance While Wearing the Space Shuttle Launch and Entry Suit During Exposure to Launch Accelerations. SAE Technical Paper Series No. 901357.

Marmolejo, J. A. and Shepherd, C. K. (1988) A simulation system for Space Station extravehicular activity. SAE Technical Paper Series No. 881104.

The following reprinted with permission. Copyright 1990, Association for Computing Machinery, Inc.

Gillan, D. J. and Breedin, S. D. (1990) Designers' models of the human-computer interface. Proceedings of the Association for Computing Machinery's Computer Human Interaction (CHI) Conference. New York, NY: ACM.

The following reprinted with permission by the International Astronautical Federation.

Goldsberry, B. S., Lippert, B. O., McKee, S. D., Lewis, J. L., Jr., and Mount, F. E. (1989) Using computer graphics to design Space Station Freedom viewing. Proceedings of the International Astronautical Federation (IAF) Congress of Malaga, Vol. 22 of Acta Astronautica.

The following reprinted with permission from Proceedings of the Human Factors Society Annual Meeting. Copyright (1987/1988/1989/1990) by The Human Factors Society, Inc. All rights reserved.

Bierschwale, J. M., Sampaio, C. E., Stuart, M. A., and Smith, R. L. (1989) Speech versus manual control of camera functions during a telerobotic task. Proceedings of the Human Factors Society 33rd Annual Meeting, pp. 134-138.

Chandlee, G. O. and Woolford, B. (1988) Previous experience in manned space flight: a survey of human factors lessons learned. Proceedings of the Human Factors Society 32nd Annual Meeting, pp. 49-52.

Micocci, A. J. (1987) Use of infrared telemetry as part of a nonintrusive inflight data collection. Proceedings of the Human Factors Society 31st Annual Meeting, pp. 796-799.

Shepherd, C. K. (1990) Performing speech recognition research with Hypercard. Proceedings of the Human Factors Society 34th Annual Meeting, pp. 415-418.


Smith, R. L. and Stuart, M. A. (1989) The effects of spatially displaced visual feedback on remote manipulator performance. Proceedings of the Human Factors Society 32nd Annual Meeting, pp. 1430-1434.

Stuart, M. A., Smith, R. L., and Moore, E. P. (1988) Guidelines for the use of programmable display pushbuttons on the Space Station's telerobot control panel. Proceedings of the Human Factors Society 32nd Annual Meeting, pp. 44-48.

The following articles were printed as NASA Conference Publications.

Gillan, D. J., Lewis, R., and Rudisill, M. (1989) Process and representation in graphical displays. Proceedings of SOAR (Space Operations, Automation, and Robotics), the Third Annual Workshop on Automation and Robotics, pp. 661-665.

Gugerty, L. (1990) The use of analytical models in human-computer interface design. Proceedings of SOAR (Space Operations, Automation, and Robotics), the Third Annual Workshop on Automation and Robotics, pp. 574-581.

O'Neal, M. R. and Manahan, M. K. (1990) Spacecraft crew procedures from paper to computers. Proceedings of SOAR (Space Operations, Automation, and Robotics), the Third Annual Workshop on Automation and Robotics, pp. 595-600.

Probe, J. D. (1989) Quantitative assessment of human motion using video motion analysis. Proceedings of SOAR (Space Operations, Automation, and Robotics), the Third Annual Workshop on Automation and Robotics, pp. 155-157.

Sampaio, C. E., Bierschwale, J. D., Fleming, T. F., and Stuart, M. A. (1990) Space Station Freedom coupling tasks: an evaluation of their telerobotic and EVA compatibility. Proceedings of SOAR (Space Operations, Automation, and Robotics), the Third Annual Workshop on Automation and Robotics, pp. 179-183.

Stuart, M. A. and Smith, R. A. (1988) Simulation of the human telerobotic interface on the Space Station. Proceedings of SOAR (Space Operations, Automation, and Robotics), the Third Annual Workshop on Automation and Robotics, pp. 321-326.

Woolford, B., Pandya, A. K., and Maida, J. (1990) Development of biomechanical models for human factors evaluations. Proceedings of SOAR (Space Operations, Automation, and Robotics), the Third Annual Workshop on Automation and Robotics, pp. 552-556.


INTRODUCTION

This document is a collection of studies that illustrate recent work conducted by members of the Crew Interface Analysis Section (CIAS). It represents an advancement of knowledge concerning the integration of humans into the spaceflight environment. The studies span such subdisciplines as ergonomics, space habitability, human-computer interaction, and remote operator interaction. The CIAS is dedicated to human factors study in the manned space program.

The section provides human factors engineering for the Space Shuttle program, the Space Station Freedom program, and advanced programs that have the goal of human Lunar and Martian exploration. The CIAS also contributes to the Space and Life Sciences Directorate's goal to serve as a focal point of excellence for the development and implementation of procedures, hardware, and science payloads that relate directly to the health, safety, and performance of humans in space.

The CIAS is one of the Flight Crew Support Division's seven sections in NASA Johnson Space Center's Space and Life Sciences Directorate. The Flight Crew Support Division is concerned with human factors issues in spaceflight; the CIAS assumes a more specialized role that focuses on research, evaluation, development, and preliminary design for advanced projects. A key contribution of the CIAS is to provide knowledge and information about the capabilities and limitations of the human so that spacecraft systems and habitats are optimally designed and used, and crew safety and productivity are enhanced.

A goal of the CIAS is to increase program commitment to designing for efficient human productivity in the space environment. Ongoing research and development activities enhance the understanding of crew capabilities and ensure the best use of humans in the manned space programs. The application of human factors principles to spacecraft and mission design will result in optimally engineered systems and contribute to the achievement of NASA's goals. The following set of articles illustrates how these principles have already benefited the space program.


Human-Computer Interface

Principles of human-computer interaction are researched and developed for application to the design of computer interfaces on NASA space missions. Interfaces between the human operator and the computer system are evaluated to measure and record parameters such as display formats, input/output devices, and workstation layouts.


SPACECRAFT CREW PROCEDURES

FROM PAPER TO COMPUTERS

Michael O'Neal and Meera Manahan

Lockheed Engineering and Sciences Company

N94-24186

Research directed by Marianne Rudisill, Manager, Human-Computer Interaction Lab, NASA JSC.

INTRODUCTION

Large volumes of paper are launched with each Space Shuttle mission containing step-by-step instructions for the various activities to be performed by the crew during the mission. These instructions include normal operational procedures and malfunction or contingency procedures, and are collectively known as the Flight Data File, or FDF. An example of nominal procedures would be those used in the deployment of a satellite from the Space Shuttle; a malfunction procedure would describe actions to be taken if a specific problem developed during the deployment.

A new Flight Data File and associated system is being created for Space Station Freedom. The system will be called the Space Station Flight Data File, or SFDF. NASA has determined that the SFDF will be computer-based rather than paper-based for reasons including the following:

• The time involved in implementing and delivering approved Space Station crew procedure changes or updates in a paper-based system would be significant, including scheduling of resources on a Space Shuttle flight.

• The long duration of the Space Station program precludes a one-time launch of all crew procedures.

• Repeated launch of crew procedure segments is not cost effective, since each pound of launch weight costs approximately $20,000.

• Large amounts of manual effort are required to create, edit, and maintain paper-based crew procedures.

• Changes made after crew procedure printing require annotation of each individual copy, a time-consuming and error-prone process.

The main components of interest in a Human-Computer Interface (HCI) include the information available on the screen at any given time, how to change the quantity or content of the information present on the screen, how the information is organized, and how the user interacts with the displayed information. Designing an effective HCI is an important step in developing a viable computer-based crew procedure system for reasons including the following:

• An effective HCI will allow faster, more accurate crew interaction with spacecraft computer procedure systems.

• The HCI will facilitate the crew's monitoring of other spacecraft computer systems while performing crew procedures.

• The HCI will allow the crew to easily verify procedure steps performed by the computer system as procedure automation increases.

• A context- and user-sensitive help and annotation system within the HCI will allow the user to rapidly and efficiently access this type of information while performing the procedures.

• An effective HCI will provide rapid, easy access to required supporting information such as procedure reference items.

• The development of a standard HCI across all crew procedures will lessen the amount of cross-training required for different types of procedures and will thus lessen the number of errors made during procedures.
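The rationale above implies a procedure-step representation that a computer can act on: display text at more than one prompt level, plus an optional machine-checkable condition. The sketch below is illustrative only; the class, field names, and prompt levels are assumptions for this example, not part of the actual FDF or SFDF design.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ProcedureStep:
    """One step of a computer-based crew procedure (hypothetical schema)."""
    step_id: str
    instruction: str      # full-prompt text shown to the crew
    short_prompt: str     # terse text for experienced users
    verify: Optional[Callable[[dict], bool]] = None  # machine-checkable condition

def run_step(step, telemetry, prompt_level="full", training=False):
    """Return (text shown, verification result) for one step.

    In execution mode, a step with a `verify` predicate is checked
    automatically against current telemetry; in training mode the step
    is displayed but system actions are not actually performed.
    """
    text = step.instruction if prompt_level == "full" else step.short_prompt
    if step.verify is None or training:
        return text, None     # crew confirms manually, or training only
    return text, step.verify(telemetry)

# Example modeled on the "Check that switch F6 is ON" step in the text
f6 = ProcedureStep(
    step_id="3.2",
    instruction="Check that switch F6 is ON",
    short_prompt="F6 -- ON",
    verify=lambda t: t.get("F6") == "ON",
)
```

With a representation like this, the same step object serves training and execution, and the prompt level can change per user, as the bullets above require.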

The research project described in this paper uses human factors and computer systems knowledge to explore and help guide the design and creation of an effective HCI for computer-based spacecraft crew procedure systems. The research project includes the development of computer-based procedure system HCI prototypes and a test-bed, including a complete system for procedure authoring, editing, training, and execution, to be used for experiments that measure the effectiveness of HCI alternatives in order to make design recommendations.

CREW PROCEDURE TASKS AND USERS

Many different tasks are required to create and maintain a spacecraft crew procedure system. Procedures must be created by personnel familiar with the tasks in question and by procedure authors and editors. The crew responsible for performing the procedures must be trained in how to use the procedures. Training crew personnel to be familiar with off-nominal procedures is also required so the procedures can be used quickly and effectively if needed during a mission. The main procedure task will be its actual performance during a mission, including assistance and adaptation to changing conditions if necessary. If a procedure is used repeatedly during one or more missions, changes to the procedure may be required to correct inefficiencies or errors, and current versions of such procedures must be maintained and distributed to all appropriate personnel.

Personnel groups responsible for specific crew procedure tasks represent different user groups of the crew procedure system. Procedure authors create the procedures, assuring that they correctly describe the work to be performed and that they conform to a standard procedural format (e.g., FDF or SFDF); they are also involved in scheduling procedures during a mission to create mission plans and crew member short-term plans. Authors may also work with individual payload specialists or experimental scientists. Trainers review the procedures with the crew members who will perform the tasks; comments or problems with procedure details or clarity are reported to procedure authors or editors for correction. Crew members are involved with actual procedure performance, training, and correction or editing if required. Mission control personnel assist in scheduling procedures, working with the crew during the mission, and in monitoring the mission plan and short-term plans. Experimental investigators and payload specialists are involved in creation and execution of those procedures relevant to their experiment or payload. Procedure editors are also responsible for updating and distributing required procedure changes found during training or execution.

An effective computer-based crew procedure system, and an effective HCI to this system, must take into account the full range of tasks and users of the procedure system. In particular, a common interface that can be created by authors and used by trainers, crew members, and mission control personnel will contribute to faster, more accurate interaction with crew procedures.

PROJECT GOALS

The final goal of the current research is to create HCI design guidelines that can be used for spacecraft crew procedures and other computer systems that display procedural information to procedure users. These guidelines should lead to faster, more accurate user interaction with procedural information on a computer.

The first step in the project is a review of available literature on computer presentation of procedural material and the evaluation of the current paper-based FDF procedure system for Space Shuttle. With this information, key issues are identified and their role in the research outlined. Using background information and human factors and computer system knowledge, alternative interfaces are created via prototypes. These prototypes are then evaluated by the various users of crew procedures listed above. Experiments are then performed using different presentation and interaction techniques; these experiments provide specific data on the relative speed and accuracy of procedure tasks using different interfaces. Comments from prototypes and results and conclusions from interface experiments are then compiled into human-computer interface guidelines for presentation of and interaction with spacecraft crew procedures.

CREW PROCEDURE ISSUES

There are both advantages and disadvantages to moving from a paper-based to a computer-based crew procedure system. The current research project addresses these issues as they relate to the human-computer interface of the system. Advantages of using a computer will be utilized, while disadvantages will be addressed and minimized.

COMPUTER ADVANTAGES

Having a computer system behind the interface to a crew procedure system offers many advantages. By monitoring related onboard systems, the computer system can automatically perform many procedure steps that require simple status verification (e.g., "Check that switch F6 is ON"), thus reducing the time required to perform the procedure. A training mode is now feasible so that the crew member can practice using the procedure in exactly its final form, with the exception that system actions are not actually performed; training and execution modes for the same procedure will increase the effectiveness of training. Personal annotation files can be attached to each procedure, thus allowing each crew member to create and refer to individual notes during both training and execution of procedures; these notes will be available whenever and wherever the crew member uses the procedure. The computer-based procedure system can coordinate with other spacecraft computer systems, providing easier transitions to and from other systems. The computer-based help system can adapt to both the user of the procedure and the context in which the procedure is being performed. The amount of detail (i.e., the prompt level) of the procedure can change for different users and situations. Finally, expert systems can be integrated into the procedure system, thus providing a more intelligent interface to crew procedures.

COMPUTER DISADVANTAGES

When procedural information is presented on a computer screen, the context of the information presented typically seems more limited than with a page of paper, although the actual amount of information present on a computer screen may or may not be smaller. There is less context information on where the current screen of information fits into the overall system; in a book, the location of the page in the overall book is an example of available context data. This issue will be addressed in the HCI to the computer-based system by generating and evaluating ideas to provide additional context information (e.g., screen number, screen position in overall outline, etc.).

In a complex computer system such as the onboard Data Management System (DMS) for Space Station Freedom, many levels of subsystems are present. The inability to rapidly navigate among the systems and subsystems can be a serious detriment to overall performance. This issue will be addressed in the HCI to the computer-based system by generating and evaluating ideas to provide information on current position within the system hierarchy and to provide tools to rapidly and directly move between subsystems, either during or after a computer task.
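Both context problems suggest simple interface mechanisms: a screen header that combines outline position with a screen count, and a navigation helper that exposes the hierarchy positions reachable directly from the current one. The sketch below is illustrative only; the outline format and function names are assumptions, not the actual DMS design.

```python
def context_header(path, screen, screens_total):
    """Breadcrumb-style context line for one procedure screen."""
    return " > ".join(path) + f"  [screen {screen} of {screens_total}]"

def jump_targets(outline, path):
    """Positions reachable directly from the current outline position:
    siblings of the current entry plus its immediate children.
    `outline` is a nested dict of titles (hypothetical format)."""
    node = outline
    for title in path[:-1]:          # descend to the current entry's parent
        node = node[title]
    siblings = [t for t in node if t != path[-1]]
    children = list(node[path[-1]])
    return siblings + children

# Hypothetical outline loosely based on the PM experiment prototype
outline = {"PM Experiment": {"Set Up": {},
                             "Sequence Initiation": {"Step 1": {}, "Step 2": {}}}}

print(context_header(["PM Experiment", "Sequence Initiation"], 4, 9))
# -> PM Experiment > Sequence Initiation  [screen 4 of 9]
```

A header like this restores the "where am I in the book" cue of paper, while the jump-target list supports direct movement between subsystems without backing out through the hierarchy.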

RESEARCH FOUNDATIONS

Initially, a review of NASA literature on computer presentation of procedural information was completed. Information on work performed at MITRE for the Procedure Formatting System (PFS) project was received and prototypes were viewed (Johns 1987 and 1988, Kelly 1988). Previous research in the Human-Computer Interaction Laboratory (HCIL) of the NASA Johnson Space Center was reviewed, and results from experiments on procedure context and format will be incorporated into the current research project (Desaulniers, Gillan, and Rudisill 1988 and 1989). Coordination is in progress with the Mission Operations Directorate (MOD) at the NASA Johnson Space Center, as described below.

PROJECT STATUS

CURRENT PROJECT PROTOTYPES

Prototype development is in progress for two Space Shuttle experiments. The procedures were selected for prototyping due to their similarity to typical research that will be conducted on Space Station Freedom, since Space Station procedures are not yet available. The two prototypes will also use different HCI approaches.

The first system is a computer-based prototype of a middeck experiment, Polymer Morphology (PM), that was performed on Space Shuttle mission STS-34. The PM experiment consists of four procedures (setup, sequence initiation, sample check, and stowage) and six procedure reference items (interconnection overview, keystroke definitions, window definitions, notebook, sequences, and worksheets). The prototype is created within the framework of the Space Station basic screen layout being developed by the DMS development team. Included in this prototype is an initial version of an Interface Navigation Tool developed at the HCIL that is currently being reviewed by the DMS team. Initial versions of the six reference items have been created. Development of the interface for the four procedures of the experiment is in progress.

The second system is a computer-based prototype of an expert system for medical experiments to be performed on two upcoming Space Shuttle missions. The system, Principal Investigator in a Box, or [PI], will include an expert system. The motivation for this medical expert system is to provide the capability to perform medical experiments with minimum ground control or support. A separate HCIL research project is in progress to study the interface as it relates to the expert system; this research will be coordinated with the current research, which examines the same interface from the viewpoint of presentation of the procedures. The [PI] interface is being modified for the Space Station basic screen layout and will be evaluated as an alternative HCI design for crew procedures.

CURRENT PROJECT EXPERIMENTS

As discussed above, the current procedures research will include the performance of experiments to gather specific data to support HCI guidelines for computer presentation of procedures. These experiments will begin as specific questions arise from the creation and analysis of HCI prototypes. The experiments will use subjective comments and speed and accuracy measurements to provide data for comparing different HCI alternatives. The experimental test-bed will include a complete system for procedure authoring, editing, training, and execution that will allow HCI alternatives to be easily generated and compared.
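The speed and accuracy comparison described above reduces to a simple aggregation per interface alternative. A minimal sketch follows; the trial layout is a hypothetical stand-in for illustration, not the actual test-bed data format.

```python
from statistics import mean

def summarize_trials(trials):
    """Aggregate experiment trials by interface alternative.

    Each trial is an (interface, seconds, errors) tuple, standing in for
    the speed and accuracy measures collected per procedure task.
    Returns {interface: (mean seconds, mean errors)}.
    """
    by_iface = {}
    for iface, seconds, errors in trials:
        by_iface.setdefault(iface, []).append((seconds, errors))
    return {
        iface: (mean(s for s, _ in rows), mean(e for _, e in rows))
        for iface, rows in by_iface.items()
    }

# Hypothetical trials for two HCI alternatives
data = [("menu", 41.0, 1), ("menu", 45.0, 0), ("command", 33.0, 2)]
summary = summarize_trials(data)
```

Subjective comments would be analyzed separately; this only covers the objective measures the experiments record.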

COOPERATIVE WORK

In addition to continuing work with the MITRE PFS system, two cooperative projects with the NASA Johnson Space Center Mission Operations Directorate (MOD) are in the planning stages. Research will be performed in the HCIL to assist MOD in creating procedure standards for the SFDF. Studies and experiments will be performed to provide human factors input into the standards created. Also, procedure authoring and execution software being developed within MOD will be evaluated from a human factors and HCI perspective.

FUTURE RESEARCH ISSUES

The current research project will continue to explore human factors issues relevant to the interface to electronic spacecraft crew procedures. The effect on the cognitive workload of procedure users will be examined, with the goal of reducing this workload through automation. The allocation of procedure tasks between the user and the computer system will also be examined. Creating an interface that is adaptable to changing environments will be explored, including the method and user aids available during interruption and resumption of procedures. Research will also be performed on the use of the same computer interface during both training and execution of procedures.

CONCLUSION

Spacecraft crew procedures are increasingly being computerized, as in NASA's Space Station Freedom program. The human interface to these computer-based crew procedure systems is an important component, and research into improving the interface will provide faster and more accurate human interaction with the computer. The current research project uses prototypes and experiments to explore and help guide the design and creation of the human-computer interface for spacecraft crew procedure systems such as the Space Station's. Prototype and experiment development is currently in progress. Issues relevant to human interaction with procedures will continue to be researched within the HCIL and in cooperation with other crew procedures researchers and developers.

ACKNOWLEDGEMENTS

This research was funded by the National Aeronautics and Space Administration, Office of Aeronautics and Exploration Technology, through contract NAS9-17900 to Lockheed Engineering and Sciences Company. The research was performed at the Johnson Space Center Human-Computer Interaction Laboratory.



N94- 241 7

PROCESS AND REPRESENTATION IN GRAPHICAL DISPLAYS

Douglas J. Gillan
Lockheed Engineering and Sciences Company

Robert Lewis

Rice University

Marianne Rudisill

NASA Lyndon B. Johnson Space Center

INTRODUCTION

To survive and succeed in the world, people have to comprehend both diverse natural sources of information, such as landscapes, weather conditions, and animal sounds, and human-created information artifacts such as pictorial representations (i.e., graphics) and text. Researchers have developed theories and models that describe how people comprehend text (for example, see [8]), but have largely ignored graphics. However, an increasing amount of information is provided to people by means of graphics, as can be seen in any newspaper or news magazine, on television programs, in scientific journals and, especially, on computer displays.

Our initial model of graphic comprehension has focused on statistical graphs for three reasons: (1) recent work by statisticians which provides guidelines for producing statistical graphs (Bertin [2], Cleveland and McGill [4, 5], and Tufte [10]) could be translated into preliminary versions of comprehension models, (2) statistical graphs play an important role in two key areas of the human-computer interface -- direct manipulation interfaces (see [7] for a review) and task-specific tools for presenting information, e.g., statistical graphics packages, and (3) computer-displayed graphs will be crucial for a variety of tasks for the Space Station Freedom and future advanced spacecraft. Like other models of human-computer interaction (see [3] for example), models of graphical comprehension can be used by human-computer interface designers and developers to create interfaces that present information in an efficient and usable manner.

Our investigation of graph comprehension addresses two primary questions -- how do people represent the information contained in a data graph, and how do they process information from the graph? The topics of focus for graphic representation concern the features into which people decompose a graph and the representation of the graph in memory. The issue of processing can be further analyzed as two questions: what overall processing strategies do people use, and what are the specific processing skills required?

GRAPHIC REPRESENTATION

FEATURES OF GRAPHIC DISPLAYS

Both Bertin [2] and Tufte [10] address the features underlying the perception and use of graphs. Bertin [2] focuses on three constructs: (1) "implantation," i.e., the variation in the spatial dimensions of the graphic plane as a point, line, or area; (2) "elevation," i.e., variation in the spatial dimensions of the graphical element's qualities -- size, value, texture, color, orientation, or shape; and (3) "imposition," i.e., how information is represented, as in a statistical graph, a network, a geographic map, or a symbol. Tufte [10] proposes two features as important for graphic construction, data ink and data density. Tufte describes data ink as "the nonerasable core of a graphic" [10, p. 93] and provides a measure, the data-ink ratio, which is the "proportion of a graphic's ink devoted to the nonredundant display of data information" [10, p. 93]. Data density is the ratio of the number of data points to the area of the graphic. Tufte's guidelines call for maximizing both the data-ink ratio and, within reason, the data density -- in other words, displaying graphics with as much information and as little ink as possible.
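Tufte's two measures are simple ratios; the sketch below computes them for a hypothetical chart. The ink quantities, data-point count, and display area are illustrative numbers, not values from any graphic discussed in this paper.

```python
# Illustrative calculation of Tufte's data-ink ratio and data density.
# All quantities below are hypothetical.

def data_ink_ratio(data_ink: float, total_ink: float) -> float:
    """Proportion of a graphic's ink devoted to nonredundant data display."""
    return data_ink / total_ink

def data_density(n_data_points: int, graphic_area: float) -> float:
    """Number of data entries per unit area of the graphic."""
    return n_data_points / graphic_area

# A hypothetical bar chart: 40 units of ink total, 25 of which encode data,
# showing 12 data points in a 6 x 4 display area.
ratio = data_ink_ratio(25.0, 40.0)      # 0.625
density = data_density(12, 6.0 * 4.0)   # 0.5 points per unit area
print(ratio, density)
```

Maximizing the first ratio while keeping the second reasonably high is the formal statement of Tufte's "much information, little ink" guideline.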

Both Bertin's and Tufte's ideas about the features of data graphs were derived from their experience as statisticians, rather than from experimental evidence. We decided to fill the empirical void concerning the features underlying graphic comprehension. In our first experiment, people simply judged the similarity in appearance and information displayed by all possible pairs of 17 different types of graphs (that is, 136 pairs of graphs). The graphs ranged from the familiar (line graphs, bar graphs, and scatter plots) to the more unusual (star graphs, ray graphs, and stick man graphs). The similarity judgments were analyzed with multivariate statistical techniques, including (1) cluster analysis, which shows the groupings or categories (clusters) that underlie people's judgments about a set of objects, and (2) multidimensional scaling (MDS), which shows the linear dimensions underlying people's similarity judgments. The logic of these analyses was that people would cluster graphs and place graphs along dimensions based on the features of the graph [9].
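The cluster-analysis step can be sketched as follows. The four graph types and the dissimilarity values are hypothetical stand-ins for the 17 graph types (136 pairs) actually judged; the point is only to show how averaged dissimilarity judgments feed a hierarchical clustering.

```python
# Sketch of hierarchical cluster analysis on pairwise dissimilarity judgments.
# Graph types and dissimilarity values are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

graph_types = ["scatter plot", "density graph", "bar graph", "column graph"]

# Symmetric dissimilarity matrix (0 = identical). Point-based graphs are
# judged similar to each other, as are the area-based graphs.
dissim = np.array([
    [0.0, 1.0, 8.0, 7.5],
    [1.0, 0.0, 7.0, 8.5],
    [8.0, 7.0, 0.0, 1.5],
    [7.5, 8.5, 1.5, 0.0],
])

# linkage expects a condensed (vector-form) distance matrix.
Z = linkage(squareform(dissim), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(graph_types, labels):
    print(f"{name}: cluster {lab}")
```

With these inputs the two point-dominated graphs fall into one cluster and the two area-based graphs into another, mirroring the kind of element-based grouping the experiment found.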

The cluster analyses indicated that people group graphs, at least in part, according to the physical elements of the graphs. Key clusters include graphs in which points were the dominant element (the two types of scatter plots, the range and density graphs), graphs consisting of straight lines (the surface, textured surface, and stacked bar graphs), and those consisting of solid areas (the column and bar graphs). The categorization of the graphs according to physical elements agrees generally with Bertin's [2] construct of implantation.

The MDS analyses of the similarity judgments were combined with a factor analysis which resulted in three factors, each consisting of one informational dimension and one perceptual dimension, which accounted for 97% of the data. One factor differentiated perceptually simple graphs (e.g., the bar and line graphs) from perceptually complex graphs (the scatter plots, the 3-dimensional graph, and the surface graphs). A second factor separated graphs for which axes were unnecessary to read the graph (the pie, star, 3-dimensional, and stick man graphs) from those for which the axis contained information (especially the modified scatter plots -- the range and density graphs [10]). Finally, the third factor tended to have informationally complex graphs (those with the most data) at one end and informationally simple graphs (those with the least data) at the other end. Accordingly, we hypothesize that people decompose a graph according to its perceptual complexity, figure-to-axes relation, and informational complexity. A subsequent experiment has shown that each of these factors relates to people's speed and accuracy in answering questions using these graphs [6].
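For readers unfamiliar with MDS, the core computation can be shown compactly with classical (Torgerson) scaling, one standard member of the MDS family. The "stimuli" here are hypothetical points whose pairwise distances stand in for averaged dissimilarity judgments; the paper's actual analysis used the judged dissimilarities among the 17 graph types.

```python
# Sketch of classical (Torgerson) multidimensional scaling: recover a
# low-dimensional configuration from a matrix of dissimilarities.
# The input configuration is hypothetical.
import numpy as np

def classical_mds(D: np.ndarray, k: int = 2) -> np.ndarray:
    """Embed a symmetric distance matrix D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]           # largest eigenvalues first
    scale = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * scale

# Hypothetical 2-D configuration and its Euclidean distance matrix.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

X = classical_mds(D, k=2)
D_hat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D, D_hat, atol=1e-6))  # distances recovered up to rotation
```

When the dissimilarities are exactly Euclidean, the recovered coordinates reproduce the distances; with real judgment data the recovered dimensions are then interpreted, as in the factor analysis above.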

REPRESENTATION IN MEMORY

The previous section of this paper addressed the features present when a user looks at a graphic. This section addresses the features that the user walks away with. Accordingly, the experiments looked at how a user represents the information from a graphic in memory.

Our research on memorial representation of graphics involved a simple experimental design: Our subjects worked with a set of graphs on one day, then we assessed what they retained about the graphics on a second day. The initial training day consisted of one trial with each of six different graphs, each trial lasting 30 seconds. For three graphs, the subjects answered questions about the graphs (e.g., What is the mean of the variables in the graph? and Which has the greater value, variable A or variable B?). For the other three graphs, they identified and drew the perceptual components of the graph, each component in a separate box. For example, in a line graph a subject might draw the points representing each variable, the lines connecting the points, the axes, verbal labels, and numerical labels.

Twenty-four hours after training, we tested the subjects using two different methods. We gave one group of 16 subjects a recognition test in which they looked at 24 different graphs and had to say whether they had seen precisely that graph during the training session. We constructed the 24 test graphs systematically. Each of the six graphs from the training session was presented during the test. Each training graph had three "offspring" that served as the distractors (or incorrect test stimuli) during the test. One type of distractor contained the same data as the training stimulus, but used a different graph type to display the data (New Graph-Same Data); a second distractor displayed the data using the same type of graph (Same Graph-New Data); the third distractor differed from the training graph in both graph type and data (New Graph-New Data). Perfect recognition would have resulted in 100% yes answers to the training graphs and 0% yes answers to the distractors. A second group of 14 subjects received a recall test in which they were asked to draw the graphs from Day 1 in as much detail as they could remember.

The results showed that people's recognition of the training graphs was very good. They correctly recognized the training graph 88% of the time, with little difference between the graphs used during training in the perceptual task (85% recognition) and those used in the informational task (90% recognition). Although false recognitions of the distractors were low overall (10% yes answers to distractors), the distribution of false recognitions was interesting. Of the 39 false recognitions by the 16 subjects, 29 (74%) were made to the Same Graph-New Data distractor (Friedman test chi-square (2 df) = 10.1, p < .05). The high false recognition rate when the same graph type was used (30% false recognitions to that distractor) suggests that the perceptual type of the graph has a strong representation in memory. We found that both training with an informational task and training with a perceptual task yielded similarly high proportions of the total false recognitions for the Same Graph-New Data distractor, 77% and 70%, respectively.
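The lopsided distribution of false recognitions can be checked with a simple goodness-of-fit test, sketched below. The paper reports 29 of 39 false recognitions to the Same Graph-New Data distractor; the split of the remaining 10 between the other two distractor types is assumed here for illustration, and the paper itself used a Friedman rank test over subjects, which requires per-subject data not reproduced here.

```python
# Goodness-of-fit check on the distribution of false recognitions across the
# three distractor types, against the null of equal rates.
from scipy.stats import chisquare

false_recognitions = {
    "Same Graph-New Data": 29,   # reported in the text
    "New Graph-Same Data": 6,    # assumed split of the remaining 10
    "New Graph-New Data": 4,     # assumed
}

observed = list(false_recognitions.values())
stat, p = chisquare(observed)    # expected: equal counts across the three cells
print(f"chi-square(2) = {stat:.1f}, p = {p:.4f}")
```

Any plausible split of the remaining 10 errors leaves the Same Graph-New Data cell far above its expected share, which is the pattern the reported Friedman test confirms.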

The results from the recall test provide even greater support for the hypothesis that the representation of the graph type and certain perceptual features was exceptionally strong. Subjects had good recall for the graph type (71% of the graphs), the presence or absence of axes (71% correct recall of axes), and the perceptual elements (lines, areas, and points) in the graphs (53% correct recall of graph elements). In contrast, recall of information from the graphs was generally poor. For example, subjects had low recall rates for the number of data points in the graph (29% correct recall), the quantitative labels on the axes (10% of the labels), and the verbal labels of the axes and data points (12% of verbal labels). They recalled the correct spatial relations between data points only 22% of the time. In addition to showing the strength of the perceptual representation, these data suggest that the perceptual and informational representations of a graph are independent.

STRATEGIES FOR PROCESSING INFORMATION

Based on formal thinking-aloud protocols, as well as informal discussions with users, we have hypothesized that people use two different types of strategies when processing information from a data graph -- an arithmetic, look-up strategy and a perceptual, spatial strategy. With the arithmetic strategy, a user treats a graph in much the same way as a table, using the graph to locate variables and look up their values, then performing the required arithmetic manipulations on those variables. In contrast, the perceptual strategy makes use of the unique spatial characteristics of the graph, comparing the relative locations of data points.

We have hypothesized that users apply the strategies as a function of the task. Certain tasks appear to lend themselves better to one strategy than another. A comparison question like "Which is greater, variable A or B?" would probably be answered rapidly and with high accuracy by comparing the spatial locations of A and B. In contrast, a user answering the question "What is the difference between variables A and B?" about a line graph might be able to apply the perceptual strategy, but would be able to determine the answer more easily and accurately with the arithmetic strategy. In addition, we propose

12

Page 23: Crew Interface Analysis: Selected Articles on Space Human ... · Crew Interface Analysis: Selected Articles on Space Human Factors Research, ... Bagian, Greenisen, Schafer, ... A

[Figure 1. Response times for answering eight types of questions using three types of graphs as a function of the number of processing steps. A. Arithmetic strategy (fitted regression: y = 770.66 + 828.70x). B. Mixed arithmetic-perceptual strategy.]

that users vary their strategy according to the characteristics of the graph. For example, if a user were faced with a graph that had inadequate numerical labels on the axes, he or she would be forced to use the perceptual strategy to the greatest extent possible.

We have run a series of experiments to test our hypotheses about graphic processing strategies. The response time data from these experiments are consistent with a model that suggests that users tend to apply the arithmetic strategy, but will shift to the perceptual strategy under certain conditions. In the basic experiment, subjects used three types of graphs -- a scatter plot, a line graph, and a stacked bar graph. They were asked eight types of questions about each graph type: (1) identification -- what is the value of variable A? (2) comparison -- which is greater, A or B? (3) addition of two numbers -- A+B, (4) subtraction -- A-B, (5) division -- A/B, (6) mean -- (A+B+C+D+E)/5, (7) addition and division by 5 -- (A+B)/5, and (8) addition of three numbers -- A+B+C. Subjects were instructed to be as fast and accurate as possible. We predicted that the subjects' time to answer the questions using a graph would be a function of the number of processing steps required by a given strategy. Accordingly, with the arithmetic strategy, determining the mean should take longer than adding three numbers, which should take longer than adding two numbers.

We began by fitting the data to a model based on the assumption that subjects used an arithmetic strategy for all questions with all graphs. Figure 1A shows the fit of that model to the response time data. The response time generally increases as the number of processing steps increases, so the model accounts for some of the variance, 61%, but many of the data points fall far from the regression line. This model is poorest at predicting performance on two trials with the stacked bar graph -- the mean and the addition of two numbers -- and for the comparison trials with all three types of graphs; subjects responded on the comparison trials and the mean trial more quickly than predicted.
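The model fit is an ordinary linear regression of response time on predicted step count, as sketched below. The (steps, RT) pairs are synthetic; in the experiment each point was one question type on one graph type, and the reported fit accounted for 61% of the variance.

```python
# Sketch of the Model 1 fit: regress response time on the number of
# processing steps the arithmetic strategy predicts for each question.
# The data pairs below are synthetic, for illustration only.
import numpy as np

steps = np.array([1, 2, 3, 5, 7, 9, 11, 15], dtype=float)
rt_ms = np.array([1600, 2400, 3300, 4900, 6600, 8200, 9900, 13200],
                 dtype=float)

slope, intercept = np.polyfit(steps, rt_ms, deg=1)
predicted = intercept + slope * steps

# Proportion of variance accounted for (R^2).
ss_res = np.sum((rt_ms - predicted) ** 2)
ss_tot = np.sum((rt_ms - rt_ms.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"RT = {intercept:.0f} + {slope:.0f} * steps, R^2 = {r_squared:.2f}")
```

Points that fall far from this line (here, deliberately none) are exactly the diagnostic the text uses: the comparison and stacked-bar mean trials sat well below the arithmetic-only regression line.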

As discussed above, a comparison appears to be a likely task for subjects to use a perceptual strategy. In addition, the stacked bar graph intrinsically lends itself to adding the five variables by a perceptual strategy: the total height of the stack represents the cumulative value of the five variables. Accordingly, for model 2, we assumed that subjects used a

13

Page 24: Crew Interface Analysis: Selected Articles on Space Human ... · Crew Interface Analysis: Selected Articles on Space Human Factors Research, ... Bagian, Greenisen, Schafer, ... A

perceptual strategy to determine the cumulative value of the stacked bar graph (then looked up the value and divided by 5 arithmetically) and used only the perceptual strategy to make all comparisons. Figure 1B shows how a version of that model fits the data. This model captures a substantially greater amount of the variance, 91%, than did Model 1. In this version of the model, the regression function slope suggests that each processing step required about 1 second to complete, except for steps requiring subtraction or division (which the model assumes took 1.5 and 2 seconds, respectively).

The fit of the mixed arithmetic-perceptual model to the data, together with subjects' verbal protocols when answering questions using graphs, supports our hypotheses: (1) that people use both arithmetic and perceptual strategies with graphics, (2) that for many typical questions, the bias appears to be for the arithmetic strategy (perhaps because of the greater accuracy with that strategy), and (3) that subjects switch strategies as a function of the characteristics of the question and graph.
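The mixed model's time predictions reduce to summing per-step costs: about 1 second per elementary step, 1.5 seconds for a subtraction step, and 2 seconds for a division step, as reported above. The step decompositions below are illustrative, not the paper's exact task analyses.

```python
# Sketch of the mixed arithmetic-perceptual model's time predictions.
# Step costs (seconds) follow the values reported in the text; the step
# sequences per question type are illustrative assumptions.
STEP_COST = {"lookup": 1.0, "add": 1.0, "compare": 1.0,
             "subtract": 1.5, "divide": 2.0}

def predicted_time(steps: list) -> float:
    """Predicted response time (seconds) for a sequence of processing steps."""
    return sum(STEP_COST[s] for s in steps)

# Arithmetic strategy for "A - B": locate and read out both values (assumed
# three lookup steps per variable), then subtract.
t_difference = predicted_time(["lookup"] * 6 + ["subtract"])
# Perceptual strategy for "Which is greater?": locate both, one comparison.
t_comparison = predicted_time(["lookup", "lookup", "compare"])

print(t_difference, t_comparison)   # 7.5 3.0
```

The model's key property falls out directly: the perceptual comparison takes far fewer (and only 1-second) steps, which is why comparison trials were answered more quickly than the arithmetic-only model predicted.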

A THEORY OF GRAPHIC COMPREHENSION

The focus of the rest of this paper is on an overall theory of graphical comprehension designed to help in the development of graphic displays. The theory covers the entire process of graphic comprehension, from the motivation to look at a graph, to the use of the graph, to remembering the graph.

In general, when I look at a graph, I have a particular purpose in mind -- I am usually trying to answer a specific question. Thus, stage 1 in graphic comprehension would consist of either forming a representation of the question to be answered (if the question had to be remembered), or producing the question by inference or generalization. The final cognitive representation of the question would probably be much the same, regardless of whether I read it, remembered it, or generated it. The likely representational format for the question would be a semantic network (e.g., [1] and [8]). Determining the answer to the question would function as the goal of my graphic comprehension.

At the start of the second stage in graphic comprehension, I would look at the graph. On looking at the graph, I would encode the primary global features -- the presence or absence of the axes and the type of graph. These would be encoded in a format that would permit reproduction of certain lower level features, such as the orientation of both the elements that make up the graph type and the axes. For example, subjects in our representation experiments generally recalled the horizontal orientation of the bars in a column graph, despite (or, perhaps, because of) their difference from the more typical vertical bar graphs. Interestingly, features that one might expect to be important to a graph user, such as the number of data points, appear not to be encoded as part of this global encoding stage. One hypothesis of this model is that features represented during the global encoding stage receive the bulk of the representational strength. That is to say, they will be the best remembered.

The third stage in graphic comprehension is to use the goal and the global features of the graph to select a processing strategy. If my goal were to compare the values of variables or (possibly) to compare a trend, I would select a perceptual strategy. If my goal were to determine the sum of four variables, and numbered axes were present and the graph type supported it (e.g., a line graph or a bar graph), then I would select the arithmetic strategy.

During the next stage, I would implement the processing steps called for in the strategy determined in the third stage. For example, adding variables A and B from a line graph would involve the following processing steps: (1) locate the name of variable A on the X axis, (2) locate variable A in the x-y coordinate space of the body of the graph, (3) locate the value of variable A on the Y axis and store it in working memory, (4) locate the name of variable B on the X axis, (5) locate variable B in the x-y coordinate space of the body of the graph, (6) locate the value of variable B on the Y axis and store it in working memory, and (7) add the value of variable A to the value of variable B to produce the value "sum."
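The seven steps above can be written out as code over a toy representation of the graph. The `graph` mapping (variable name to the value read off the Y axis) is a hypothetical stand-in that collapses the locate-and-read steps into a dictionary lookup.

```python
# The seven processing steps for adding variables A and B from a line graph,
# sketched as a function. The graph dictionary is hypothetical.

def add_two_variables(graph: dict, a: str, b: str) -> float:
    working_memory = {}
    # Steps 1-3: locate variable A on the X axis, find it in the x-y space,
    # read its value from the Y axis, and store it in working memory.
    working_memory[a] = graph[a]
    # Steps 4-6: the same locate-and-read sequence for variable B.
    working_memory[b] = graph[b]
    # Step 7: add the two stored values to produce the "sum."
    return working_memory[a] + working_memory[b]

line_graph = {"A": 12.0, "B": 30.0, "C": 7.0}   # hypothetical axis readings
print(add_two_variables(line_graph, "A", "B"))  # 42.0
```

The explicit working-memory store mirrors the model's claim that only the values needed for the current goal are held, and only weakly represented afterward.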

Because the semantic and quantitative information (i.e., the variable names and values, respectively) are processed to some extent during this phase, some of that information will be represented, but, as our recall data suggest, not strongly. As a final stage in graphic comprehension, I would examine the result from processing step 7, the "sum," to determine if it plausibly met the goal set in comprehension stage 1. If the response was a plausible fit with the goal, I would incorporate the answer into the semantic network that represented the goal.

This theory directs both future research in graphics and the design of graphical computer interfaces. For example, future research will be needed to determine specific processing models for different questions using the perceptual strategy. In addition, predictions about the memory for quantitative and semantic information in a graph need to be tested. Finally, many of the design principles derived from the theory are concerned with the complex relations between the task (or goal), the characteristics of the graphical display, and the processing strategies. For example, if a subject is likely to use the arithmetic strategy (e.g., with an addition or subtraction question), the axes should be numbered with sufficient numerical resolution. The graph type should allow the user to read a variable's value directly from the axis and should not require multiple computations to determine a variable's value (as a stacked bar graph does). One of our long-term goals is to produce a model of graphic comprehension that is sufficiently elaborate to allow us to build tools to aid in the design of graphical interfaces.

ACKNOWLEDGEMENTS

The authors would like to thank Sarah Breedin and Tim McKay for helpful comments on this paper and, along with Kritina Holden and Susan Adam, for helpful discussions about graphics.


REFERENCES

1. Anderson, J. R. The Architecture of Cognition, Harvard University Press, Cambridge, MA, 1983.

2. Bertin, J. Semiology of Graphics (translated by W. J. Berg), University of Wisconsin Press, Madison, WI, 1983.

3. Card, S. K., Moran, T. P., and Newell, A. The Psychology of Human-Computer Interaction, L. Erlbaum Associates, Hillsdale, NJ, 1983.

4. Cleveland, W. S., and McGill, R. "Graphical perception: Theory, experimentation, and application to the development of graphical methods," Journal of the American Statistical Association, 79, 1984, 531-554.

5. Cleveland, W. S., and McGill, R. "Graphical perception and graphical methods for analyzing and presenting scientific data," Science, 229, 1985, 828-833.

6. Gillan, D. J., Lewis, R., and Rudisill, M. "Models of user interaction with graphical interfaces: I. Statistical graphs," Proceedings of CHI, 1989, ACM, New York, 375-380.

7. Hutchins, E. L., Hollan, J. D., and Norman, D. A. "Direct manipulation interfaces," User-System Centered Design, L. Erlbaum Associates, Hillsdale, NJ, 1986, 87-124.

8. Kintsch, W., and van Dijk, T. A. "Toward a model of text comprehension and production," Psychological Review, 1978, 85, 363-394.

9. Shepard, R. N., Romney, A. K., and Nerlove, S. B. Multidimensional Scaling: Theory and Application in the Behavioral Sciences, Seminar Press, New York, 1972.

10. Tufte, E. R. The Visual Display of Quantitative Information, Graphics Press, Cheshire, CT, 1983.


N94-2 16

DESIGNERS' MODELS OF THE HUMAN-COMPUTER INTERFACE

Douglas J. Gillan
Rice University

Sarah D. Breedin

Lockheed Engineering and Sciences Company

Research directed by Marianne Rudisill, Manager, Human Computer Interaction Lab, NASA JSC.

INTRODUCTION

Do people's cognitive models of the human-computer interface (HCI) differ as a function of their experience with HCI design? A cognitive model can be defined as a representation of a person's knowledge consisting of (1) a set of elemental concepts (elements in a model of an HCI might include windows, menus, tables, and graphics), (2) the relations among the elements (for example, a mouse and a touch screen might be related as input devices), and (3) the relations among groups of associated elements (for example, a group of input devices might be related to a group of user-computer dialogue techniques). (See [4], [7], and [10] for additional definitions.)

Cognitive modeling in the area of human-computer interaction has generally focused on how the user represents a system or a task [4]. The results of this approach provide information relevant to Norman's concept of a user's model [9]. In contrast, the present paper focuses on the models of HCI designers, specifically on designers' declarative knowledge about the HCI. Declarative knowledge involves the facts about a given domain and the semantic relations among those facts (e.g., [1]); for example, knowing that the mouse, trackball, and touch screen are all types of interactive devices. The results of our approach provide information relevant to Norman's concept of a design model [9].

Understanding design models of the HCI may produce two types of benefits. First, interface development often requires inputs from two different types of experts -- human factors specialists and software developers. The primary work of the human factors specialists may involve identifying the ways in which a system should display information to the user, the interactive dialogue between the user and system, and the types of inputs that the user should provide to the system. The primary work of the software developers may center around writing the code for a user interface design and integrating that code with the rest of the system. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers in any field do not start with a tabula rasa; rather, they begin the design process with a general model of the object that they are designing, whether it be a bridge, a house, or an HCI.

Our approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: (1) human factors specialists with HCI design experience, (2) software developers with HCI design experience, and (3) a baseline group of computer users who had no experience in HCI design. The components of the user interface included both display components, such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help.

The judgments of the three groups were analyzed using hierarchical cluster analysis [8] and Pathfinder ([12] and [13]). These methods indicated, respectively, (1) how the groups categorized the concepts, and (2) network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.

METHOD

SUBJECTS

Thirty-five subjects (members of a NASA Space Station Freedom user interface working group, employees at Lockheed and AT&T Bell Laboratories, and students at Rice University) were assigned to one of three groups on the basis of their work and/or academic experience in human factors and software development: human factors specialists (n = 13), software developers (n = 11), and computer users with no experience in HCI design (n = 11). The human factors specialists reported a median of 4.5 years of working experience in human factors, 4.5 years in user interface issues, and 2 years in software development. The software development group reported substantially more software experience than the human factors group (a median of 5.5 years of work) and slightly longer experience with interface issues (a median of 6 years), but markedly less human factors experience (1 year). The non-HCI group's relevant experience was minimal, with only software courses (median number of courses = 1) and experience as users of software (primarily for word processing).

MATERIALS

A questionnaire was designed to investigate individuals' models and knowledge of the HCI. The first part of the questionnaire consisted of a list of 50 HCI terms (for example, auditory interface, characters, command language, and keystroke) selected from (1) the indices of CHI Proceedings from 1986 to 1988 and (2) recent general books on human-computer interaction ([2], [3], [10], and [11]). Terms were selected based, in part, on their co-occurrence in these sources and the frequency of occurrence within the sources. The terms were presented in alphabetical order.

The final part of the questionnaire asked for information about the subject's experience with and knowledge of human factors and software design. The answers from this section were used in assigning subjects to one of the three groups.

PROCEDURE

Subjects read a set of general instructions that oriented them to the tasks. Included in these instructions was a comprehensive example that had the subjects apply the procedure to a set of food concepts. Then, subjects started with Part I by reading through the entire list of 50 terms. If a subject was unfamiliar with a term, he or she was instructed to cross that term off the list. Next, subjects sorted related terms into 'piles' by writing the terms into columns on a data sheet. Subjects could place items in more than one pile or leave items out of any pile.

RESULTS

The results from Part I of the questionnaire were analyzed using two multivariate statistical techniques -- hierarchical cluster analysis [8] and Pathfinder analysis ([12] and [13]). The cluster analysis indicates how subjects categorize concepts, whereas the Pathfinder analysis provides a network representation of the concepts.

CLUSTER ANALYSIS: CATEGORIES OF DECLARATIVE KNOWLEDGE

To prepare the data for the cluster analysis, a co-occurrence matrix of the concepts was created for each subject. When a subject placed two concepts in the same pile, a count was entered into the corresponding cell of the matrix. Then, the matrices for all of the subjects within a group were combined. The co-occurrence matrices for each group were converted to dissimilarity matrices by subtracting the co-occurrence value from the number of subjects plus 1, and a minimum-distance hierarchical cluster analysis was performed.
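The pipeline just described (pile sorts, co-occurrence counts, dissimilarities, minimum-distance clustering) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the toy terms and sorts are invented, and SciPy's single-linkage routine stands in for the "minimum-distance" clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def dissimilarity_matrix(piles_per_subject, terms):
    """Build the group dissimilarity matrix from each subject's sort.

    piles_per_subject: one entry per subject, each a list of piles,
    where each pile is a list of term strings.
    """
    n = len(terms)
    idx = {t: i for i, t in enumerate(terms)}
    co = np.zeros((n, n))
    for piles in piles_per_subject:
        for pile in piles:
            for a in pile:
                for b in pile:
                    if a != b:
                        co[idx[a], idx[b]] += 1
    # Dissimilarity = (number of subjects + 1) - co-occurrence count
    return (len(piles_per_subject) + 1) - co

# Hypothetical sorting data for four of the fifty terms.
terms = ["mouse", "trackball", "menus", "icons"]
sorts = [
    [["mouse", "trackball"], ["menus", "icons"]],
    [["mouse", "trackball", "icons"], ["menus"]],
]
d = dissimilarity_matrix(sorts, terms)
np.fill_diagonal(d, 0)
# "Minimum-distance" clustering corresponds to single linkage in SciPy.
tree = linkage(squareform(d), method="single")
```

With two subjects, a pair sorted together by both gets dissimilarity 3 - 2 = 1, the minimum possible, so it merges first in the tree.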

The cluster analysis displayed in Figures 1A, 1B, and 1C shows substantial differences between the non-HCI group and the experts, but reveals some similarities and dissimilarities between the two expert groups. The data displayed includes only those clusters in which 50% or more of the subjects in that group sorted the items into the same pile. The figures show (1) subclusters with a relatively small number of concepts and for which agreement of categorical co-occurrence was the greatest, and (2) various levels of supraclusters consisting of one or more subclusters and additional concepts. The strength of agreement within a group (i.e., the percentage of subjects who placed the concepts in the same category) is indicated by the percentage in the cluster boundary and by the width of the line around a cluster (thicker lines indicate greater agreement). The label for a cluster, selected by the authors, is in bold above the cluster.

The two expert groups had both a greater number of clusters and generally more complex hierarchical relations among the clusters than did the non-HCI group. In addition, both expert groups differed substantially from the nonexpert group in the content of their clusters, with two exceptions: (1) all three groups had relatively high agreement that the terms expert user¹ and novice user belonged to the same cluster, which was hierarchically unrelated to other clusters, and (2) the three groups of subjects categorized mouse, touch screen, trackball, and interactive devices together. However, the types of devices were not part of a larger hierarchy for the non-HCI group, but were included in the Interaction Techniques supracluster for both expert groups. Other areas of basic agreement between the two expert groups were a Guidance/Help supracluster and an Output cluster.

The cluster analysis shows two key areas of disagreement between the human factors and software experts: (1) the contents and organization of the Display Elements cluster and (2) the relation of software concepts to other user interface concepts. In the Display Elements cluster, human factors experts had three categories at the same level in the hierarchy--Textual Elements², Graphical Elements, and Tabular Elements. In contrast, software experts had a Graphical Elements subcluster which was nested in a Coding/Graphics subcluster, which, in turn, was nested in a larger Nontextual Display Elements subcluster. Note also that the software developers grouped color coding and highlighting in the Display Elements subcluster, whereas the human factors specialists grouped those two concepts with data grouping and symbolic codes in a separate cluster, Display Coding. This difference in categorizing display coding concepts may be due to a greater emphasis by human factors experts on the similarities in function among methods for coding information on a display.

¹In the description that follows, the terms from the questionnaire are italicized.

As Figure 1B shows, the software group included six software concepts concerned with the user interface and applications in the User Interface Elements supracluster. In contrast, the human factors group categorized the software-related concepts in a separate supracluster unconnected to other user interface concepts. This finding suggests that, in the software developers' design model, software is more fully integrated with other HCI concepts than it is in the human factors specialists' model.

PATHFINDER: NETWORKS OF DECLARATIVE KNOWLEDGE

The similarity matrices derived from the sorting data for each group were also analyzed with the Pathfinder algorithm using the Minkowski r-metric, r = ∞ and q = 49 (see [13]). The Pathfinder algorithm generated a network solution for each of the three matrices. However, the network for the non-HCI group was exceedingly complex and difficult to

²Names for the subclusters are indicated in Figure 1 by a boxed label with an arrow pointing to the specified subcluster.


[Figure 1: three cluster diagrams, one per subject group, showing labeled clusters (e.g., Users, Display of Information, Help, User Interface Elements, Interaction Techniques, Display Elements, Data Manipulation, Output, Software, Display Coding) with within-group agreement percentages in the cluster boundaries.]

Figure 1. Cluster analysis for non-HCI subjects (1A), software developers (1B), and human factors specialists (1C).


interpret, with 171 links among the 50 concepts. Consequently, we will only focus on the more interpretable results from the two expert groups. The human factors group had 81 links and the software experts 69 links among the 50 nodes. Figure 2 shows the results of the Pathfinder analysis for the human factors (2A) and software experts (2B). The graphs show each concept as a node in a network and show the links between the nodes. The strength of each link is represented by its width, with wider lines indicating stronger connections.
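With r = ∞ and q = n - 1 (here, q = 49 for 50 concepts), Pathfinder has a particularly simple reading: a direct link survives only if no indirect path has a smaller maximum edge weight. The sketch below implements that special case with a Floyd-Warshall-style minimax pass over a tiny invented matrix; it is a simplified reconstruction for illustration, not the authors' implementation.

```python
import numpy as np

def pathfinder_links(dist):
    """PFnet adjacency for the special case r = infinity, q = n - 1.

    dist: symmetric matrix of dissimilarities (larger = less related).
    An edge (i, j) is kept iff its weight equals the minimax path
    distance between i and j, i.e., no indirect route beats it.
    """
    n = len(dist)
    minimax = dist.astype(float).copy()
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Cost of routing through k is the larger of the two hops.
                via_k = max(minimax[i, k], minimax[k, j])
                if via_k < minimax[i, j]:
                    minimax[i, j] = via_k
    # Keep direct links that the minimax distances cannot improve on.
    return (dist <= minimax) & ~np.eye(n, dtype=bool)

# Toy example: A-B and B-C are close; the weak direct A-C link is pruned.
d = np.array([[0., 1., 3.],
              [1., 0., 1.],
              [3., 1., 0.]])
links = pathfinder_links(d)
```

In the toy matrix the A-C link (weight 3) is removed because the path through B has a maximum edge weight of only 1.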

Human Factors Specialists. The network representation for the human factors experts consists primarily of subnetworks of interconnected concepts, indicated in Figure 2A by the dashed lines around the groups of concepts (with subnetwork labels, selected by the authors, contained in the boxes pointing to the relevant subnetwork). Subnetworks were defined as groups of three or more concepts, in which each concept linked directly to at least two other concepts in that subnetwork, and in which the interconcept distance was no greater than two links for all concepts. This definition maintains a high level of interconnection and close association of concepts within the subnetwork. With the exception of speech recognition, which appears in both Input Devices and Advanced User Interface Techniques, the subnetworks are cleanly separated, in that the concepts are not shared by subnetworks.
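The subnetwork definition above is precise enough to check mechanically: at least three concepts, each directly linked to two or more others in the set, and every pair within two links of each other. A minimal checker, with an invented link structure for illustration:

```python
from collections import deque

def is_subnetwork(candidate, links):
    """Validate a candidate concept set against the definition: >= 3
    concepts, each linked directly to >= 2 others in the set, and all
    pairwise distances <= 2 links (paths restricted to the set)."""
    group = set(candidate)
    if len(group) < 3:
        return False
    # Adjacency restricted to the candidate set.
    adj = {c: links.get(c, set()) & group for c in group}
    if any(len(nb) < 2 for nb in adj.values()):
        return False
    # Breadth-first search from each node, stopping at distance 2.
    for start in group:
        seen = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if seen[node] == 2:
                continue
            for nb in adj[node]:
                if nb not in seen:
                    seen[nb] = seen[node] + 1
                    queue.append(nb)
        if set(seen) != group:
            return False
    return True

# Hypothetical links among three input-device concepts.
links = {
    "mouse": {"trackball", "touch screen"},
    "trackball": {"mouse", "touch screen"},
    "touch screen": {"mouse", "trackball"},
}
```

A triangle of mutually linked concepts passes; a simple chain fails, because its endpoints have only one direct link each.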

Each subnetwork for the human factors experts connects with other subnetworks. Several of the subnetworks have a direct link between two concepts. For example, the User-Computer Dialogue Methods subnetwork and the Input Devices subnetwork are connected by a link between command keystrokes and function keys. The other subnetworks make connections through one or two intermediate concepts. For example, menus provides a conceptual connection between User-Computer Dialogue Methods and Graphical Display Elements. Similarly, data forms links the Data Manipulation subnetwork to Information Display Types.

Only a few concepts are offshoots of a subnetwork unconnected to another concept--graphics, natural language, command line, and user guidance. The major departure from the subnetwork structure is the string of concepts related to software, with display of information linked to display manager, which connects with UIMS, which in turn links to prototyping, and so on.

Software Developers. The structure of the network representation for the software experts (Figure 2B) consists of both (1) central nodes from which links radiate out in axle-and-spoke fashion and (2) subnetworks consisting of interconnected concepts. We defined a central node as a concept with at least three links in addition to any links it might have within a subnetwork. Central nodes are shown in grey in the figure; as in Figure 2A, subnetworks are bounded by a dashed line with labels contained in boxes.

The software experts had only two subnetworks containing more than three concepts, Data Manipulation and Information Output, and had only three triads of concepts. Among the central nodes, both mouse and expert users are of interest because they link directly to other central nodes, with mouse having strong connections to interactive devices and keyboard input and expert users weakly linked to programming and natural language. In addition, mouse functions both as a member of a subnetwork and as a central node. Graphics is also well connected, with membership in two subnetworks and central node status.

Comparing the Expert Groups. The networks reveal important differences between the two expert groups. Overall, the ratio of the number of links shared by the two groups to the total number of links was 0.23. Looking at specific concepts, several of the concepts that have only one link in one group's representation are strongly interconnected in the other group's network. For example, graphics and natural language are linked directly to a number of other concepts in the software experts' network, but have only one link apiece for human factors experts.
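The 0.23 figure is the ratio of shared links to total distinct links across the two networks, which can be computed as follows. The toy link lists are invented; only the formula reflects the paper.

```python
def link_overlap(links_a, links_b):
    """Ratio of links shared by two networks to the total number of
    distinct links; undirected links are normalized as frozensets so
    ("a", "b") and ("b", "a") count as the same link."""
    a = {frozenset(link) for link in links_a}
    b = {frozenset(link) for link in links_b}
    return len(a & b) / len(a | b)

# Hypothetical fragments of the two expert networks.
hf = [("menus", "graphics"), ("mouse", "trackball"), ("text", "tables")]
sw = [("graphics", "menus"), ("mouse", "keyboard input")]
ratio = link_overlap(hf, sw)
```

Here one of four distinct links is shared, giving 0.25; the paper reports 0.23 over the full 81- and 69-link networks.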


[Figure 2A: network diagram with subnetworks labeled Display Coding, Data Manipulation, User-Computer Dialogue Methods, Information Display Types, Graphical Display Elements, Advanced Interface Techniques, Input Devices, and Knowledge Acquisition Techniques.]

Note. Line thickness indicates the strength of the link. Thicker lines mean greater strength.

Figure 2A. Pathfinder network for human factors specialists.


[Figure 2B: network diagram; labels include UIMS, Data Manipulation, Trackball, and a Pointing Devices subnetwork.]

Note: Line thickness indicates the strength of the link. Thicker lines mean greater strength. Central nodes are shaded in gray.

Figure 2B. Pathfinder network for software developers.

On the other hand, function keys is a member of the Input Devices subnetwork and connects that subnetwork to the User-Computer Dialogues subnetwork for human factors experts, but links only with keyboard input for software experts. An additional difference is


that the networks for the software and human factors experts show no overlap between the concepts that link to only one other concept.

When a concept has the same number of direct links for the two groups, it may reveal important differences in the design models if it differs in the other concepts to which connections are made. For example, look at user interface management system in Figures 2A and 2B. For both groups, one of its connections is with display manager, indicating knowledge of the relationship between the software that manages the entire user interface and the software that writes to the screen. For human factors experts, the other connection of UIMS is with prototyping, suggesting that the prototyping capability is an important part of a UIMS for interface designers with a human factors background. However, for software experts, UIMS connects with application software, which is consistent with the software architecture of the user interface--with the UIMS interacting with the application software, as well as the display manager.

DISCUSSION

GENERAL EFFECTS OF HCI DESIGN EXPERIENCE

The data from both the cluster analysis and Pathfinder analysis show differences as an effect of expertise in human-computer interaction. Both expert groups had (1) a greater number of clusters containing more concepts and (2) more complex hierarchical structures of the clusters than did the non-HCI group. The Pathfinder solution for the non-HCI group was a mass of links between concepts with minimal differentiation. In contrast, both expert groups showed substantial and meaningful differentiation of groups of concepts within the networks. These findings indicate that training and experience with HCI design has a clear impact on the mental model of the interface. This finding, by itself, may not be surprising. However, many people outside of the field of human-computer interaction may hold contrary opinions--for example, that HCI design is simply a matter of common sense or that computer users' experience is the equivalent of HCI design experience. The present data argue against those opinions by showing the effects of user interface design experience.

EFFECTS OF SPECIFIC HCI DESIGN EXPERIENCE

Differences between the mental models of experts and novices abound (for example, see [5]). We present evidence here that experts may differ in their cognitive models as a function of their roles and experience in a common area of expertise.

The Pathfinder analyses suggest that the different types of experts differ in the overall organization of their cognitive models. Human factors experts had a network made up of distinct subnetworks, with the subnetworks tending towards heavy internal interconnection with a single connection between subnetworks. The software experts' cognitive model had multiple organizing schemes, including central nodes, as well as complex and simple subnetworks. Cooke, Durso, and Schvaneveldt [6] have shown that the network representations derived by Pathfinder are related to recall from memory, with closely linked items in the Pathfinder network being more likely to be recalled together. Consequently, recall of an HCI concept may tend to have an effect that is localized within the subnetwork for human factors experts. However, recall of that same concept may spread more broadly for software experts. For example, a software developer who thinks of keyboard input would be likely to recall mouse, function key, command keystrokes, and command language. In contrast, keyboard input would be most likely to produce recall of only mouse and function keys for human factors experts. The localization of recall might help human factors experts to maintain a more focused stream of thought, but the broader spread of recall may help software experts to think more innovatively about HCI concepts by activating more varied concepts.

Differences in the concepts that are linked or in the categories in which HCI designers place concepts might be expected as a function of experience. For example, software developers would be much more likely to see the relations between software and other HCI concepts than would human factors specialists. However, why would these two groups have very different organizing schemes for their concepts? One possibility is that software developers have to be concerned with both the ways in which the HCI software will be used and with the methods for implementing the software. In other words, their cognitive model may represent a compromise between knowledge about the function and about the implementation of the human-computer interface. In contrast, the cognitive model of human factors specialists may be more closely tied only to function.

PRACTICAL IMPLICATIONS

The Pathfinder and cluster analyses showed substantial differences in the number of connections and the conceptual links for a variety of the HCI concepts, such as graphics and function keys. These findings suggest that design team members with different types of expertise should take care to define their terms when discussing the conceptual categories--user interface elements and display coding--and about specific concepts like graphics, function keys, speech recognition, and natural language. A term like graphics may evoke a more elaborate set of associated concepts for design team members with backgrounds in software development than it does for those in human factors, whereas function key may evoke more concepts for human factors specialists.

One way of eliminating the problems of miscommunication due to different design models might be to train all of the designers to think alike. However, even if this were possible, it might lead to unintended problems in user interface design. Diversity of thinking may improve the design process. Thus, training out the diversity might result in a team that could not make conceptual breakthroughs or recognize when they were going down a blind alley. The best user interface designs are likely to emerge when the human factors specialists on a team can think their way and the software developers can think their way, but when each member understands the meaning of the others' thoughts when expressed in language or design. The representation of design team members' cognitive models described in this paper provides the first step in enhancing that understanding.

ACKNOWLEDGEMENTS

This research was funded by NASA Contract NAS9-17900 and was performed within the NASA Johnson Space Center Human-Computer Interaction Laboratory. The authors would like to thank Dr. Nancy J. Cooke for her advice on the analysis methods used in this paper and her comments on the paper itself.

REFERENCES

1. Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

2. Baecker, R. M., and Buxton, W. A. S. (1987). Readings in human-computer interaction: A multidisciplinary approach. Los Altos, CA: Morgan Kaufmann Publishers, Inc.

3. Card, S. K., Moran, T. P., and Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

4. Carroll, J. M., and Olson, J. R. (1988). Mental models in human-computer interaction. In M. Helander (Ed.), Handbook of human-computer interaction (pp. 45-65). Amsterdam: Elsevier Science Publishers.

5. Chi, M. T. H., Feltovich, P. J., and Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121-152.

6. Cooke, N. M., Durso, F. T., and Schvaneveldt, R. W. (1986). Recall and measures of memory organization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12, 538-549.

7. Gentner, D., and Stevens, A. (Eds.). (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum Associates.

8. Miller, G. A. (1969). A psychological method to investigate verbal concepts. Journal of Mathematical Psychology, 6, 169-191.

9. Norman, D. A. (1986). Cognitive engineering. In D. A. Norman and S. Draper (Eds.), User-centered system design: New perspectives in human-computer interaction (pp. 31-61). Hillsdale, NJ: Lawrence Erlbaum Associates.

10. Norman, D. A., and Draper, S. (1986). User-centered system design: New perspectives in human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

11. Shneiderman, B. (1987). Designing the user interface: Strategies for effective human-computer communication. Reading, MA: Addison-Wesley.

12. Schvaneveldt, R. W., and Durso, F. T. (1981). Generalized semantic networks. Paper presented at the Meetings of the Psychonomic Society.

13. Schvaneveldt, R. W., Durso, F. T., and Dearholt, D. W. (1985). Pathfinder: Scaling with network structures. Memorandum in Computer and Cognitive Science, MCCS-85-9, Computing Research Laboratory, New Mexico State University.


AUTOMATED SYSTEM FUNCTION ALLOCATION AND DISPLAY FORMAT: TASK INFORMATION PROCESSING REQUIREMENTS

Mary P. Czerwinski

Lockheed Engineering and Sciences Company

Research directed by Marianne Rudisill, Manager, Human-Computer Interaction Lab, NASA JSC.

INTRODUCTION

Questions relevant to the Human Factors community attempting to design the display of information presented by an intelligent system are many: What information does the user need? What does the user have to do with the data? What functions should be allocated to the machine versus the user? Currently, Johnson Space Center is the test site for an intelligent Thermal Control System (TCS), TEXSYS, being tested for use with Space Station Freedom. The implementation of TEXSYS' user interface provided the Human-Computer Interaction Laboratory with an opportunity to investigate some of the perceptual and cognitive issues underlying a human's interaction with an intelligent system.

An important consideration when designing the interface to an intelligent system concerns function allocation between the system and the user. The display of information could be held constant, or "fixed," leaving the user with the task of searching through all of the available information, integrating it, and classifying the data into a known system state. On the other hand, the system, based on its own intelligent diagnosis, could display only relevant information in order to reduce the user's search set. The user would still be left with the task of perceiving and integrating the data and classifying it into the appropriate system state. Finally, the system could display the patterns of data. In this scenario, the task of integrating the data is carried out by the system, and the user's information processing load is reduced, leaving only the tasks of perception and classification of the patterns of data. Humans are especially adept at this form of display processing [1, 2, 11, and 12].

Although others have examined the relative effectiveness of alphanumeric and graphical display formats [7], it is interesting to reexamine this issue together with the function allocation problem. Expert TCS engineers, as well as novices, were asked to classify several displays of TEXSYS data into various system states (including nominal and anomalous states). Three different display formats were used: fixed (the TEXSYS "System Status at a Glance"), subset (a relevant subset of the TEXSYS "System Status at a Glance"), and graphical. These three formats were chosen due to previous research showing the relative advantages and disadvantages of graphical versus alphanumeric displays (see Sanderson et al., 1989 for a review), and because of the vast amount of literature on the beneficial effects of reducing display size during visual search in cognitive psychology (see Shiffrin and Schneider, 1977; Schneider and Shiffrin, 1977). The hypothesis tested was that the graphical displays would provide for fewer errors and faster classification times by both experts and novices, regardless of the kind of system state represented within the display [11]. The subset displays were hypothesized to be the second most effective display format/function allocation condition, based on the fact that the search set is reduced in these displays [5, 6]. Both the subset and the graphic display conditions were hypothesized to be processed more efficiently than the fixed display condition, which corresponds to the "System Status at a Glance" display currently used in TEXSYS.

METHOD

SUBJECTS

Four frequent users of TEXSYS, thermal control engineers at JSC, participated in the experiment. The subjects had an average of


[Figure 1: the "System Status at a Glance" screen, with diagnosis buttons (Nominal, Evap Dryout, Pump Cavitatn, Filter Blockage, No Subcooling, Setpoint Dev) and alphanumeric readouts for the evaporators, pump, filter, and condensers.]

Figure 1. The "fixed" display.

eight years of experience. Six novices, all engineers, also participated in the experiment. None of the novice subjects was familiar with the two-phase thermal bus system used in the TEXSYS project, nor with thermal control systems in general. All subjects were experienced users of Macintosh computers, and all had normal or corrected-to-normal vision.

STIMULI AND MATERIALS

The design, presentation, and collection of all stimulus materials and data were carried out on a Macintosh IIx computer using SuperCard and SuperTalk. A mouse was used for all subject inputs. Examples of the fixed, subset, and graphical display formats can be seen in Figures 1, 2, and 3, respectively. Note that, while the fixed and graphical displays both contain information about all of the major system components, the subset displays only show a subset of the system data.

System Faults. Five different system anomalies could occur during the experiment: evaporator dryout, filter blockage, pump cavitation, loss of subcooling, and setpoint deviation.

MATCHING NOMINAL AND ANOMALOUS DISPLAYS

Nominal displays were matched with anomalous displays for two reasons. First, designing the experiment in this manner avoids biasing the subjects toward responding "fault" or "no fault." The second reason is related to a peculiarity in the subset display condition. In these displays, subjects were told that the expert system had made a reasonable guess as to the critical system state, and only information concerning that state was shown. In nominal conditions, in order to control for the amount of information displayed to the subject, the same component subsets were shown as in the fault conditions. However,


[Figure 2: a subset display showing only the evaporator temperatures and loads.]

Figure 2. The "adaptive" display.

[Figure 3: graphical display plotting evaporator, filter, pump, and condenser values against their nominal ranges.]

Figure 3. The "graphic" display.


since the displays were nominal, the displayed data values were never aberrant. The matching of displays simply involved replicating the no-fault displays and then changing particular component values to off-nominal for the fault displays.

DESIGN

The experimental design was a 3 x 2 x 5 x 2 factorial, with three different display formats (fixed, subset, and graphic), both nominal and anomalous display instances, five different state instances, and two repetitions per condition. Note that this design implies that a system fault occurred on 50% of the trials. There were two groups of subjects run in the experiment: experts and novices. The novices were given two sessions of training, which added an extra factor (session) to their design. All variables were run within subjects, but experts and novices were analyzed separately. The three different display formats were blocked, such that there were three blocks of 20 trials (including the repetitions) in each experimental session. The order in which each subject received the three display formats was counterbalanced. All of the other factors were randomized within a display condition block. The dependent measures collected were reaction time and percent correct.
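The blocking and counterbalancing scheme can be sketched as follows. The paper states only that format order was counterbalanced and the remaining factors randomized within blocks, so the rotation through format orders and the seeding below are assumptions for illustration.

```python
import itertools
import random

FORMATS = ["fixed", "subset", "graphic"]
STATES = ["evap dryout", "filter blockage", "pump cavitation",
          "loss of subcooling", "setpoint deviation"]

def session_trials(subject_index, seed=0):
    """One session: three blocks of 20 trials each (5 states x
    fault/no-fault x 2 repetitions), with display-format order
    counterbalanced across subjects by rotating through the six
    possible format orders."""
    rng = random.Random(seed + subject_index)
    orders = list(itertools.permutations(FORMATS))
    trials = []
    for fmt in orders[subject_index % len(orders)]:
        block = [(fmt, state, fault, rep)
                 for state in STATES
                 for fault in (True, False)
                 for rep in (1, 2)]
        rng.shuffle(block)  # randomize state/fault/repetition within a block
        trials.extend(block)
    return trials

trials = session_trials(subject_index=0)
```

Each session yields 60 trials, a fault on exactly half of them, and one format per 20-trial block, matching the design described above.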

PROCEDURES

Experts. During an orientation, prior to actual data collection, the experts were shown a table of nominal data values (as well as the acceptable ranges of deviation for those values) for the major components of the system.

Novices. The same materials that were used for orientation of the experts were used to train the novices. Unlike the expert subjects, the novices studied the nominal operations table for approximately 50 minutes¹. During this time, they were informed about the patterns of data which might occur for each of the five system faults².

¹This was the average amount of time needed to train each individual subject, although each subject's time varied slightly due to the number of questions they asked.

Both expert and novice subjects were instructed to monitor the displays presented to them for one of the six system states. They were instructed to search the system display quickly, without making errors, for system status information. Once the displayed data had been categorized by the subject, s/he was instructed to indicate which system state had occurred via a button-click with the mouse input device.

All subjects were run through a practice experiment, in which an example of each Display Format x System State combination was included. Feedback in the case of an error was provided for the subjects as a computer beep.

The diagnosis buttons were located to the far left of the display, as can be seen in Figure 1. The CONTINUE button (on the intertrial screen) was located in the center of the position previously occupied by the six diagnosis buttons. This button placement was used in order to reduce the motor movement time involved in selecting any of the six diagnosis buttons. Trials were self-paced, and subjects were encouraged to take a short break between blocks. The experimental session lasted approximately one hour.

RESULTS AND DISCUSSION

ERRORS

Experts. Overall, the experts operated at an accuracy level of 93% correct. A separate analysis of variance (ANOVA) with repeated measures was run on the error data for both

²Novice subjects were run through the experiment for two reasons: there were too few experts available to participate in the experiment, and the experts were extremely well-practiced at diagnosing the System Status-at-a-Glance displays. Both problems might have biased results. The extra novice session was to ensure that novice subjects had a chance to attain near-expert levels of performance in this task.


TABLE 1.

Average Logged Reaction Times for Diagnosing the Six System States in Each Display Format for Expert and Novice Subjects.

                        Experts                     Novices
State             Fixed  Adaptive  Graphic   Fixed  Adaptive  Graphic
Nominal            9.3     8.5       9.4      8.5     8.0       8.1
Evap Dryout³       9.4     9.1       9.8      8.3     7.6       7.9
Filter Block⁴      9.4     8.7       8.9      8.9     8.3       8.0
Pump Cav⁵          9.1     8.6       8.8      8.5     8.0       7.7
No Subcooling⁶     9.5     9.3       9.7      8.5     8.0       8.6
Setpoint Dev⁷      9.3     8.2       8.9      8.9     8.6       8.7
Average            9.3     8.7       9.3      8.6     8.1       8.2

experts and novices. For experts, the ANOVA was a 3 x 2 x 5 x 2, representing the factors of display (fixed, subset, and graphic), fault or no fault, type of fault, and repetition. The analysis revealed a significantly larger number of errors with nominal displays, F(1,3) = 22.09, p < .02. No other effects were significant for the experts.

Novices. On the average, the novice subjects performed at an accuracy level of 91.2% correct in session 1, and 93% correct during session 2. For novice subjects, a 2 x 3 x 2 x 5 x 2 ANOVA with repeated measures was carried out on the error data. The first variable corresponds to the two sessions of training that novice subjects received during the experiment; all other factors are identical to those used in the expert subjects' ANOVA. There was a significantly larger number of errors in the nominal display condition, F(1,5) = 20.05, p < .01. No other effects were significant.

REACTION TIMES

A t-test was performed between the overall average reaction times of the experts and the overall average (across two sessions) of the novices. No significant difference was found between the two groups(8), t(8) = 1.61, p > .05.
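The group comparison can be made concrete with a pooled two-sample t statistic. The scores below are invented, chosen only so that the group sizes yield the same df = 8 as the reported test; they are not the study's data.

```python
import math

# Pooled (equal-variance) two-sample t statistic computed from scratch.
def pooled_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)           # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2                       # t and degrees of freedom

experts = [9.3, 9.4, 9.1, 9.5]                  # invented mean log RTs
novices = [9.1, 9.5, 8.9, 9.6, 9.0, 9.3]
t, df = pooled_t(experts, novices)
```

With 4 + 6 subjects the comparison has 8 degrees of freedom, and for these overlapping samples the statistic falls well below the .05 critical value of 2.306, paralleling the null result reported above.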

Experts. The pattern of results for the expert subjects can be seen in Table 1. The ANOVA revealed significant main effects of display condition, F(2,6) = 7.9, p < .05, with subset displays processed the most quickly, followed by the graphical displays. No other main effects were significant for the expert subjects. However, there was a significant interaction between whether or not a fault was present and which type of fault had to be diagnosed, F(4,12) = 3.27, p < .05. This interaction reflected the fact that there were larger response time differences within the anomalous display instances than within the nominal displays, although planned comparisons did not reveal any significant differences between the anomalous display instances (all p's > .05).

(3) Evaporator Dryout
(4) Filter Blockage
(5) Pump Cavitation
(6) Loss of Subcooling
(7) Setpoint Deviation
(8) No significant difference was found in the error data, as well.



Novices. The pattern of results for the novice subjects is shown in Table 1. The ANOVA revealed significant main effects of session, F(1,5) = 38.33, p < .01; display condition, F(2,10) = 14.04, p < .01; and type of fault being diagnosed, F(4,20) = 13.51, p < .001. Session 2 was faster than session 1, and, again, the subset displays were processed most quickly. A significant interaction occurred between display condition and the type of fault being diagnosed, F(8,40) = 2.76, p < .05. This interaction was not observed for the expert subjects, and reveals a pattern of data whereby certain faults are processed more quickly in particular formats. Finally, there was a significant interaction between whether or not a fault was occurring and the type of fault to be diagnosed, F(4,20) = 3.98, p < .05. This interaction is similar to that observed in the expert data. It reflected the fact that, for nominal conditions, none of the display instances were processed significantly faster than the average of the others, as determined by planned comparisons (all p's > .05). However, in the fault condition, the evaporator dryout fault was processed significantly faster than the average of the other faults, t(9) = -1.88, p < .05, and the setpoint deviation fault was processed significantly slower than the average of the other faults, t(9) = 2.13, p < .05.

Finally, it should be noted that for both the experts and the novices there was probably a speed-accuracy trade-off operating on the reaction times within the no-fault condition. Specifically, errors increased significantly in the nominal condition, while reaction times were no different than those in the fault displays. This may have masked any significant effects occurring in the no-fault display conditions.

EXPERIMENT 2

Experiment 1 demonstrated the benefit of showing only relevant information to the subject. It was also shown that novices appear to diagnose certain faults better in a subset, alphanumeric format, while other fault diagnoses benefit from a graphical display format. However, one problem with interpreting this result has to do with the fact that the amount of information was not controlled between the subset alphanumeric and the graphical display conditions. In other words, there was no subset, graphic display condition. Experiment 2 equated the two conditions more fully, and provided a means of testing the hypothesis that a graphical format would always be a better representation when only the relevant state information is displayed.

It was also hypothesized in Experiment 2 that the kind of information processing required while diagnosing a display could affect performance. This was because one subset of the Experiment 1 faults (evaporator dryout and loss of subcooling) could be described as requiring a serial scan of the data followed by one memory comparison in all of the format conditions (the one memory comparison refers to the comparison of the displayed data value with a memorized nominal value for that system component). All other faults required the identification of one or more data values, the same sort of mental comparison with a nominal value, and then a further comparison with other component values. This extra comparison step could be argued to add load to working memory, and perhaps a graphical format is better in these conditions [11]. These ideas were tested in Experiment 2 as well.
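The two hypothesized decision routines can be sketched in a few lines. All component names, values, and tolerances here are invented, not the simulation's actual parameters, and the fault labels merely echo the fault categories named in this experiment.

```python
# Hypothetical memorized nominal value and tolerance for illustration.
NOMINAL_TEMP = 40.0
TOLERANCE = 2.0

def scan_only(evap_temps):
    """Serial scan plus one memory comparison per value: is any
    evaporator surface temperature off nominal?"""
    return any(abs(t - NOMINAL_TEMP) > TOLERANCE for t in evap_temps)

def scan_and_compare(evap_temps, pump_outlet_temp):
    """Scan, compare against the memorized nominal, then make the extra
    cross-component comparison argued to load working memory."""
    if not scan_only(evap_temps):
        return "nominal"
    # Extra comparison step: relate evaporator data to pump data.
    if abs(pump_outlet_temp - NOMINAL_TEMP) > TOLERANCE:
        return "pump-related fault"
    return "evaporator dryout"
```

The scan-only routine terminates after its single sweep, while the scan + compare routine adds the further comparison step only after an off-nominal value has been detected.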

For this experiment, one of the subset displays (relevant to the evaporator dryout fault) was used throughout the entire experiment. In one half of the experiment, subjects simply scanned evaporators to detect off-nominal surface temperatures in both graphical and alphanumeric display formats. In the other half of the experiment, an extra comparison step was required in order to diagnose the data displayed in both formats.

METHOD

SUBJECTS

Seventeen Lockheed Engineering and Sciences engineers voluntarily participated in the experiment. All subjects were naive



concerning the operation of the automated Thermal Control System being simulated.

STIMULI AND MATERIALS

For the "scanning" level of the decision-making variable, the alphanumeric displays from the subset condition in Experiment 1 were used for this experiment. The graphical display was modified from Experiment 1 for this condition, so that a bar graph format was used. For the "scan + compare" condition, pump information was added to each of these display formats. Essentially, a pump outlet temperature was added to the displays for comparison with the evaporator information.

DESIGN

The experiment was a 2 x 2 x 2 factorial design, with two levels of the kind of decision-making steps required to diagnose a fault (scan, and scan + compare), both alphanumeric and graphical display formats, and nominal vs. anomalous display instances. Nested within the anomalous display instances, and only within the scan + compare conditions, was another factor: type of anomalous fault. This variable could not be added to the nominal displays because nominal displays do not fall into subcategories in this system. However, we did vary the particular data values within the nominal displays so that the nominal and anomalous displays were balanced in the number of unique system instances presented to any given subject during a session. This was because more faults were available for diagnosis when pump information was present in the display. Specifically, during the scan + compare trials, the subject had to distinguish four different system states: nominal, evaporator dryout, pump cavitation, or setpoint deviation. Note that in the scan only condition nominal and anomalous trials are equated, while in the scan + compare condition the subject received three times as many anomalous trials as nominal. Both the

decision-making and the format variables were blocked, and the order in which subjects received the decision-making conditions was counterbalanced. However, if a subject randomly received the scan only (or scan + compare) decision-making condition first, that subject always received both display format conditions (in a random order) prior to diagnosing the scan + compare (scan only) blocks of the experiment. The magnitude and pattern of the faults within the displays were controlled across the graphic and alphanumeric display formats.

PROCEDURE

The procedure for running this experiment was identical to that for Experiment 1, although only novice subjects were run, for a single session.

RESULTS AND DISCUSSION

ERRORS

The errors were submitted to an ANOVA, including the variables of decision-making steps, display format, and type of response (nominal or anomalous). There was no significant pattern of errors.

REACTION TIMES

The reaction time results are shown in Figure 4. The reaction times were submitted to an overall ANOVA, including the variables of decision-making steps, display format, and type of response (nominal or anomalous). The analysis revealed significant main effects of decision-making condition, F(1,16) = 89.85, p < .001, and display format condition, F(1,16) = 34.72, p < .001. The scanning only condition was diagnosed more quickly than the scanning and comparing condition, while the graphical format was processed more quickly than the alphanumeric display format. The interaction of decision-making condition and display format was not significant, F(1,16) = 1.3, p = .2. However, the interaction of display format condition and system state (nominal vs. anomalous) was significant, F(1,16) = 7.37, p < .05. Finally, a significant three-way interaction was observed between decision-making condition, display format, and system state, F(1,16) = 9.16, p < .01. The higher-level interactions



[Figure 4 is a bar chart of average reaction times (roughly 1000-3000 ms) for the No Fault and Fault conditions under each display format (alph, graph) in the Scanning Only and Scanning + Comparison conditions.]

Figure 4. Average reaction time data as a function of decision-making condition and display format in Experiment 2. (alph = alphanumeric, graph = bar graph format).

reflect the fact that nominal (no fault) conditions were detected more readily than faults in all conditions except with the alphanumeric display format involving both scanning and comparing.

The results observed in Experiment 2 showed that diagnosing a subset graphical display took less time than diagnosing a subset alphanumeric display. The scanning only versus scanning and comparison manipulation could be argued to have increased the subjects' processing requirements, since diagnosis times were significantly longer in that condition. However, this increase in processing load did not lead to the interaction between display format and fault type observed in Experiment 1. It may be that the bar graph is a better way of representing data than the graphical representations used in Experiment 1. Several researchers have reported the integral-processing benefits of a bar graph representation [3, 4, 9]. Subjects may have been capitalizing on the configural [8] properties inherent in the bar graph representation in both decision-making conditions. This may be especially important when processing load is high. Some data to suggest that the bar graph representation is beneficial during heavy processing load conditions was observed in the three-way interaction reported in Experiment 2. The pattern of data showed that in the scanning and comparing condition subjects were faster at diagnosing faults in the alphanumeric displays (although still slower than in the graphic displays). Perhaps subjects were reverting to a serial search through the data in the former conditions, due to the high cognitive demands of the task. An obvious test of this notion would be to vary the number of system components showing aberrant data values for this task, in both alphanumeric and bar graph display formats. (In Experiments 1 and 2, only one system component was ever showing



off-nominal data values within a display). If subjects revert to scanning in either of the display format conditions due to heavy cognitive task demands, diagnosis times should be shorter, on the average, the greater the number of off-nominal system components [10]. This experiment is currently being run in our laboratory.

ACKNOWLEDGEMENTS

This research was funded by the National Aeronautics and Space Administration, Office of Aeronautics, Exploration and Technology (OAET), and was performed at the NASA/Johnson Space Center Human-Computer Interaction Laboratory. The author thanks Kevin O'Brien and Kim Donner for valuable contributions during the design phase of this work. The author also thanks Nancy Cooke, Shannon Halgren, and Marianne Rudisill for helpful discussions and comments on earlier drafts of this paper.

REFERENCES

1. Casey, E. J. (1986). Visual display representation of multidimensional systems: The effect of information correlation and display integrality. In Proceedings of the Human Factors Society 30th Annual Meeting (pp. 430-434). Santa Monica, CA: Human Factors Society.

2. Carswell, C. M. and Wickens, C. D. (1984). Stimulus integrality in displays of system input-output relationships: A failure detection study. In Proceedings of the Human Factors Society 28th Annual Meeting.

3. Chernoff, H. (1973). The use of faces to represent points in k-dimensional space graphically. Journal of the American Statistical Association, 68, 361-368.

4. Coury, B. G., Boulette, M. D., and Smith, R. A. (1989). Effect of uncertainty and diagnosticity on classification of multidimensional data with integral and separable displays of system status. Human Factors, 31, 551-569.

5. Donderi, D. C. and Zelnicker, D. (1969). Parallel processing in visual same-different decisions. Perception & Psychophysics, 5, 197-200.

6. Egeth, H., Jonides, J., and Wall, S. (1972). Parallel processing of multielement displays. Cognitive Psychology, 3, 674-698.

7. Kieras, D. E. (1988). Diagrammatic displays for engineered systems: Effects on human performance in interacting with malfunctioning systems. NASA Technical Report No. 29, July 29, 1988.

8. Pomerantz, J. R., Pristach, E. A., and Carson, C. E. (1989). Attention and object perception. In B. E. Shepp and S. Ballesteros (Eds.), Object perception: Structure & process. Hillsdale, NJ: Lawrence Erlbaum Associates.

9. Sanderson, P. M., Flach, J. M., Buttigieg, M. A., and Casey, E. J. (1989). Object displays do not always support integrated task performance. Human Factors, 31, 183-198.

10. Sternberg, S. (1966). High-speed scanning in human memory. Science, 153, 652-654.

11. Wickens, C. D. (1984). Engineering psychology and human performance. Columbus, OH: Charles E. Merrill.

12. Woods, D. D. and Roth, E. M. (1988). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction. North-Holland: Elsevier Science Publishers.




THE USE OF ANALYTICAL MODELS
IN HUMAN-COMPUTER INTERFACE DESIGN

Leo Gugerty
Lockheed Engineering and Sciences Company

N94-24190

Research directed by Marianne Rudisill, Manager, Human Computer Interaction Lab, NASA JSC.

INTRODUCTION

Researchers in the human-computer interaction (HCI) field commonly advise interface designers to "know the user." Various approaches are currently used to get information about the user into the hands (and mind) of the designer. One approach is to use design guidelines (e.g., NASA Johnson Space Center, 1988), which can incorporate knowledge of human psychological strengths and weaknesses and make them accessible to designers. However, guidelines give only overview information. They do not help the designer to configure the interface for a specific task and specific users (Gould and Lewis, 1985). Another way to know the user is to conduct usability tests (Gould and Lewis, 1985). This involves building prototype interfaces as early as possible in the design process, observing typical users as they work with the prototype, and fixing any observed problems during the next iteration of the design. While effective in making the designer aware of user needs, usability testing adds a significant amount of time to the design of user interfaces.

Recently, a large number of HCI researchers have investigated another way to know the user: building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design.

This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. The paper is intended for researchers who are interested in applying models to design and for interface designers. It is a summary of an extensive literature review on the use of analytical models in design that is being conducted at the Johnson Space Center's Human-Computer Interaction Laboratory.

The question of whether analytical models can really help interface designers is currently receiving much attention in the field of human-computer interaction. Advocates of model-based design claim that our knowledge of cognitive psychology is becoming sophisticated enough to allow analytical models of the user to play a useful role in interface design (Kieras, 1988; Butler, Bennett, Polson, and Karat, 1989). Modeling proponents suggest that models could be used during interface design in two important ways:

1. Models can help designers conduct a rigorous task analysis, which in turn may help generate design ideas. A number of analytical models (e.g., the GOMS model; Card, Moran, and Newell, 1983) involve specifying the goals, actions, and information requirements of the user's task. Research suggests that these task



analyses can help designers generate effective design ideas.

2. After interface designs have been generated, models can help evaluate their effectiveness. A human-factors psychologist or engineer could work with a designer to build a computer model of how a user would interact with a new interface. This model could be run with various input conditions to predict how long the user will take to perform tasks using the interface, and likely sources of user errors.

The benefits of analytical models are by no means universally accepted in the HCI community. Many HCI researchers and practitioners have questioned the usefulness of models for interface design. Whiteside and Wixon (1987) claim that current models are only applicable to the specific task and context for which they were developed and cannot be applied to new interfaces. Others (e.g., Curtis, Krasner, and Iscoe, 1988; Rossen, Maas, and Kellogg, 1988) suggest that models may not fit in with the needs of design organizations or with the intuitive thinking and informal planning that designers sometimes use.

This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The literature review paper that is summarized here evaluated a number of models in detail, focusing on the empirical evidence for the validity of the models. Empirical validation is important because without it models will not have the credibility to be accepted by design organizations. This paper will briefly describe two analytical models in order to illustrate important conclusions from the literature review. Following this, the paper will discuss some of the practical requirements for using analytical models in complex design organizations such as NASA.

EMPIRICAL EVALUATION OF ILLUSTRATIVE MODELS

GOMS MODEL

The GOMS model was developed as an engineering model to be used by HCI designers, and it has received much more empirical testing than any other analytical model of HCI tasks. Many of the issues concerning the use of GOMS models in design are relevant to other analytical models as well.

GOMS models are applicable to routine cognitive skills. They are best suited for tasks where users make few errors. More open-ended tasks that involve extensive problem solving and frequent user errors (e.g., troubleshooting) are not good candidates for GOMS modeling.

GOMS stands for goals, operators, methods, and selection rules, the four elements of the model. GOMS models are hierarchical. The assumption is that at the highest level people's behavior on a routine computer task can be described by a hierarchy of goals and subgoals. At the most detailed level, behavior is described by operators, which can be physical (such as typing) or mental (such as comparing two words). Operators that are often used together as a unit are built up into methods. For example, one might have a standard method of deleting text in a text editor. Sometimes more than one method can meet a goal, and selection rules are used to choose among them.

GOMS models can help an interface designer get a qualitative understanding of the goal structure and information requirements of a task (i.e., a task analysis). In addition, Kieras and Polson (1985) developed a formal implementation of GOMS models, Cognitive Complexity Theory (CCT), that allows designers to make quantitative statements about users' errors, learning time, and performance time for particular interfaces. In CCT, GOMS models are represented as production systems. In a production system the parts of a GOMS model are represented by a series of if-then rules (production rules)



that can be run as a computer simulation model. A number of quantitative metrics can be derived from a CCT production system that, according to proponents of CCT, can be used to predict users' performance on a task (Kieras, 1988; Olson and Olson, in press). For example, task learning time, task performance time, and the number of user errors can be predicted.
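The flavor of such a production-system model can be conveyed with a small sketch. The rule contents below (a hypothetical delete-a-word method) are invented, not taken from CCT; the point is that behavior unfolds as recognize-act cycles whose count can serve as a crude performance metric.

```python
# Minimal production-system interpreter: working memory is a set of
# facts; each cycle, the first rule whose condition holds fires, adding
# and deleting facts. Cycle count stands in for the CCT-style idea that
# predicted execution time grows with the number of cycles.

def run_production_system(rules, memory, max_cycles=100):
    cycles = 0
    while cycles < max_cycles:
        for name, condition, adds, deletes in rules:
            if condition <= memory:           # all condition facts present
                memory = (memory | adds) - deletes
                break                         # one rule firing per cycle
        else:
            break                             # no rule matched: halt
        cycles += 1
    return cycles, memory

# Hypothetical GOMS-like method for deleting a word in a text editor.
RULES = [
    ("move-to-word",
     {"goal: delete word", "editor ready"},
     {"cursor on word"}, {"editor ready"}),
    ("select-word",
     {"goal: delete word", "cursor on word"},
     {"word selected"}, {"cursor on word"}),
    ("press-delete",
     {"goal: delete word", "word selected"},
     {"word deleted"}, {"word selected", "goal: delete word"}),
]

cycles, final = run_production_system(
    RULES, {"goal: delete word", "editor ready"})
```

Each rule deletes the fact that triggered it, so the method runs to completion in three recognize-act cycles and then halts.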

To date, GOMS models have not been used to help design a commercial interface. Most empirical studies of GOMS models have been evaluations of existing interfaces that were designed without using GOMS. For example, Bovair, Kieras, and Polson (in press) evaluated GOMS estimates of task performance time for existing interfaces. Using a text editing task, they found that the number of production-system cycles and of certain complex operators (such as looking at the text manuscript) could match performance time fairly well, explaining about 80% of the variability of users' performance times across editing tasks.

It is important to point out that in studies like this, data (such as errors and the time to learn and perform tasks) are collected from users of an interface, and statistical techniques (such as regression) are used to determine whether the GOMS predictions match the data. In these studies, GOMS models are not used to make a priori predictions of user performance. Rather, the models' estimates of user performance are statistically compared to the empirical data to see how much of the variability in users' performance data can be explained by the model. Although some researchers suggest that GOMS models can be used to make a priori predictions of user performance (Olson and Olson, in press), this has not been done successfully to date.

In addition to evaluations of existing interfaces, a few studies have looked at how GOMS models can be used to generate ideas for redesigning interfaces. These studies take advantage of the fact that GOMS models provide a detailed task analysis (i.e., a representation of the goals, subgoals, and procedural steps required to perform a task).

Elkerton and Palmiter (1989) used a GOMS model of the knowledge required for Hypercard authoring tasks to design a menu-based Hypercard help system that allowed faster information retrieval and that was liked better than the original help system.

This study is important because it shows that GOMS models can be used for more than post-hoc evaluation of existing designs. In this study, the task analyses provided by GOMS models were used to generate computer-related artifacts (in this case, procedural instructions). In addition, these artifacts were generated fairly directly from the task analyses, without extensive interpretation or "judgment calls."

To summarize the empirical evaluation of GOMS models: models developed for a single, existing interface can be used in a post-hoc, quantitative fashion to explain performance time, learning time, and number of errors with that interface. No one has yet tested whether GOMS models can make accurate quantitative performance predictions for an interface that is still in design. However, encouraging progress has been made in using the task analyses provided by a GOMS model to help generate effective instructions that can be incorporated in help systems and user manuals.

TULLIS' MODEL

The next model to be described has a much narrower range of application than GOMS models and focuses on general psychological processes rather than task analysis. Perhaps because of these differences, this model, developed by Tullis (1984), is better than GOMS at making a priori predictions of user performance. Tullis' model focuses on aspects of a display, such as display density, that affect how well people can find information in the display. It emphasizes general processes, such as perceptual grouping, that affect display perception regardless of the content of the display. The effects of task knowledge on display perception (e.g., effects of user expertise) are not considered. Tullis' model is applicable only to alphanumeric displays that make no use of color or highlighting. The model has been applied to simple search tasks involving displays for airline and motel reservations and for aerospace and military applications (Tullis, 1984).

Based on a literature review, Tullis hypothesized that five factors would affect the usability of alphanumeric displays: overall density, local density, number and size of the perceptual groups, and layout complexity. He developed operational definitions so that quantitative values could be calculated for each factor, given a display layout as input. Then, he conducted an experiment in which subjects searched for information in displays and rated the usefulness of the displays. Regression analyses showed that the five factors could explain subjects' search times and subjective ratings fairly well.

Tullis implemented his regression model in the Display Analysis Program (Tullis, 1986). This program accepts a display layout as input. It outputs quantitative estimates of overall density, local density, number of perceptual groups, and average group size. It also provides graphical output describing the display density analysis and the perceptual groups. Finally, it predicts average search time and subjective ratings for the display.
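Two of the factors can be approximated in a few lines of code. The measures below are simplified stand-ins for Tullis' operational definitions (his local-density and grouping computations are more elaborate), intended only to show how quantitative factors can be extracted from a character-cell layout.

```python
# Treat a display as a list of strings, one per screen row.

def overall_density(screen):
    """Percent of character cells that are non-blank -- a simplified
    stand-in for Tullis' overall-density measure."""
    width = max(len(line) for line in screen)
    rows = [line.ljust(width) for line in screen]
    filled = sum(ch != " " for row in rows for ch in row)
    return 100.0 * filled / (width * len(rows))

def count_groups(screen):
    """Count 'perceptual groups' as 8-connected clumps of non-blank
    characters -- a crude proxy for Tullis' grouping algorithm."""
    width = max(len(line) for line in screen)
    grid = [line.ljust(width) for line in screen]
    seen, groups = set(), 0
    for r in range(len(grid)):
        for c in range(width):
            if grid[r][c] != " " and (r, c) not in seen:
                groups += 1
                stack = [(r, c)]
                while stack:                  # flood-fill one clump
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < len(grid) and 0 <= nx < width
                                    and grid[ny][nx] != " "):
                                stack.append((ny, nx))
    return groups

# A made-up reservation-style screen fragment.
screen = ["FLT 102  DEP 0900",
          "",
          "GATE 12"]
```

For this fragment the density and group counts come straight from the character grid, which is exactly the kind of input the Display Analysis Program accepts.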

Tullis (1984) then used his model to predict search times and subjective ratings for a second experiment, using different subjects and displays than the experiment that was used to develop the regression equations. The predicted search times and subjective ratings matched the actual times and ratings fairly well, with a correlation of about 0.64 (r2) for each variable. The model correctly predicted the displays with the best search time and rating. Tullis' model was also able to predict search times from three previous studies in the literature (r2 > 0.63 in each study) (Tullis, 1984). However, when Tullis' model was tested on tasks more complex than simple display search, it did not predict subjects' performance well (Schwartz, 1988).
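The prediction step itself is just a fitted linear equation applied to new displays. The coefficients below are invented for illustration; Tullis' actual regression equations are not reproduced here.

```python
# Hypothetical fitted coefficients mapping display factors to predicted
# search time in seconds (all values invented for illustration).
COEFFS = {"intercept": 2.0,
          "overall_density": 0.02,   # per percentage point
          "local_density": 0.01,
          "num_groups": 0.05,
          "group_size": 0.03}

def predict_search_time(factors):
    """Weighted sum of display factors, as in a fitted regression."""
    t = COEFFS["intercept"]
    for name, value in factors.items():
        t += COEFFS[name] * value
    return t

crowded = {"overall_density": 60.0, "local_density": 40.0,
           "num_groups": 20, "group_size": 15}
sparse = {"overall_density": 25.0, "local_density": 15.0,
          "num_groups": 8, "group_size": 6}
```

Because the equation is fixed before any new data are seen, it can rank candidate layouts a priori, which is the step Tullis validated on displays and subjects outside the original fitting sample.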

To summarize, Tullis' model is applicable within a limited domain: inexperienced users performing simple search tasks involving alphanumeric displays. Within this domain, however, the model's performance is impressive. Tullis has taken the step that GOMS users have neglected and used his model to predict performance for displays and subjects different from the ones on which the model was developed. The model was able to predict well in these cases. One disadvantage of Tullis' model is that it neglects cognitive factors affecting display perception, such as the effect of a user's task knowledge.

CONCLUSION: EMPIRICAL EVALUATION OF ANALYTICAL MODELS

Earlier in the paper, it was suggested that analytical models could be used in interface design in two ways. The first of these involves using models early in the design process to conduct rigorous task analyses, which are then used to generate ideas for preliminary designs (e.g., menu structures). The second potential use of models occurs later in the design process, after preliminary designs have been developed. In this case models are used to evaluate designs by making quantitative predictions about expected user performance given a particular design.

The empirical evidence considered in the literature review, and summarized here, suggests that, except for one model with a narrow range of application, there is no empirical evidence that analytical models can predict user performance on a new interface. There is some encouraging evidence that analytic models used for task analysis can help in the process of generating designs; however, this conclusion is based on only a few studies. The review of the empirical evidence suggests, then, that future research aimed at demonstrating model-based improvements in interfaces should focus on three areas:

• Replicating and extending the studies of model-based interface redesign (e.g., Elkerton and Palmiter, 1989).

• Demonstrating model-based interface design for a new interface.



• Demonstrating the predictive use of models to evaluate preliminary designs.

Based on the empirical evidence to date, the first two of these would be the most promising avenues of research.

What are some possible reasons for the failure of models to accurately predict performance with a new interface? It may be that critics such as Whiteside and Wixon (1987) are correct in that people's procedures, goals, and cognitive operators are too context specific to allow prediction in a context as different as a new interface. A large body of research in cognitive psychology suggests that experts' performance in a particular domain is largely dependent on domain-specific knowledge, as opposed to general-purpose cognitive skills (Chi, Glaser, and Rees, 1982; Glaser, 1984). And models such as GOMS focus primarily on the task-specific knowledge of experienced users. It is interesting that the model that was able to predict user performance on a slightly different interface (Tullis') is not a task analytic model. Tullis' model focuses on general perceptual abilities. This suggests that in order to predict performance for new interfaces, task analytic models must include more explicit representation of how general purpose cognitive characteristics (such as working memory limitations) affect user performance.

An addition should be made to the above list of research areas. This suggestion is based on the fact that there are no empirically validated models that can describe HCI tasks involving higher-level cognitive processes such as problem solving. However, space-related computer systems are rapidly becoming intelligent enough to assist people in complex tasks, such as medical diagnosis and scientific research, which involve more complex cognition. Models are currently being developed with the goal of describing these more complex tasks in a way that is useful to interface designers. An example is the Programmable User Models (PUMs) (Young and Whittington, 1990). However, most of these models have not been empirically validated.

A fourth area of further research, then, is:

• Developing and testing models of complex HCI tasks involving high-level cognitive processes.

USING MODELS IN DESIGN ORGANIZATIONS

So far, this paper has focused on whether analytical models can improve interface designs. However, even if models were conclusively demonstrated to improve interfaces, this would still not ensure their use by design organizations such as NASA. What is needed is evidence for the usefulness as well as the validity of models. That is, it must be shown that models can meet the needs of individual designers (e.g., preferred design methods) and of design organizations (e.g., cost, scheduling, and personnel constraints).

With respect to individual designers, an understanding of the various ways that designers generate, develop, and evaluate ideas is needed. Analytical models would be provided to designers as detailed procedures or as software tools. The principle of considering the cognitive and motivational processes of users applies to model developers just as it does to the designers of other software tools. In short, designers are users too. Therefore, if model developers want their models to be used in actual design projects, they must either construct their models to fit in with the preferred design processes of designers or provide ways of training designers to use the models.

But decisions regarding the commercial use of models are made by managers, not by individual designers. Therefore, models also must be shown to meet the multifaceted needs of design organizations, for example, cost, schedule, and personnel requirements. This section will discuss the problems that must be overcome before analytical models are accepted by designers and their work organizations.


NEEDS OF INDIVIDUAL DESIGNERS

Two studies conducted by Curtis and his colleagues showed that major difficulties in software design are caused by a lack of application-domain knowledge on the part of designers (Curtis et al., 1988; Guindon, Krasner, and Curtis, 1987). The analogous problem in the case of interface design would be a lack of knowledge of the user's task. When Rosson et al. (1988) interviewed interface designers about the techniques they used to generate design ideas, they found that the most frequently mentioned techniques (about 30%) were for analyzing the user's task. Most of this task analysis involved informal techniques, such as interviewing users or generating a task scenario.

These findings present both an opportunity and an obstacle to the use of models by interface designers. First, since designers often lack knowledge of the user's task and spend a large amount of effort getting it, they might see the usefulness of task analytic models such as GOMS. The potential obstacle is that designers may prefer to stick with their informal techniques, instead of the more rigorous task analytic models. Rosson et al. suggest that tools to aid in idea generation should primarily support designers' informal techniques. Lewis, Polson, Wharton, and Rieman (1990) offer an interesting way of combining formal modeling with a technique currently used by software designers: design walkthroughs. They developed a formal model of initial learning and problem solving in HCI tasks, and then derived from the model a set of structured questions (a cognitive walkthrough) that can be used to evaluate the usability of an interface.
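The walkthrough idea can be sketched in a few lines of code. This is an illustrative sketch only; the question wording and task steps below are invented for illustration and are not the actual questions published by Lewis et al.

```python
# Illustrative sketch of a cognitive walkthrough: a fixed set of
# model-derived questions is asked about every step of a task, and
# each "no" answer flags a potential usability problem.  The question
# wording below is hypothetical, not Lewis et al.'s published set.

WALKTHROUGH_QUESTIONS = [
    "Will the user be trying to achieve the right goal at this step?",
    "Is the correct action visibly available in the interface?",
    "Will the user connect the action's label with the goal?",
    "Does the feedback show that progress was made toward the goal?",
]

def walk_through(task_steps):
    """Pair every task step with every question, producing the list of
    items an evaluator must answer for one interface."""
    return [(step, question)
            for step in task_steps
            for question in WALKTHROUGH_QUESTIONS]

steps = ["Open the File menu", "Choose Print", "Confirm settings"]
print(len(walk_through(steps)))  # 3 steps x 4 questions = 12 items
```

The point of the technique is that the evaluator works from the same structured checklist for every design, rather than from ad hoc intuition.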

This discussion presents only an example of the kinds of issues that need to be considered regarding the needs of individual designers. Further research is needed on the cognitive and motivational processes of designers and what these processes suggest about the design of analytic models.

NEEDS OF DESIGN ORGANIZATIONS

The Curtis et al. (1988) study mentioned above also considered the organizational aspects of software design. In addition, Grudin and Poltrock (1989) conducted an extensive interview study of the organizational factors affecting interface design. Some of the findings of these studies that relate to the use of analytical models are discussed below.

An important characteristic of many computer-system design organizations is complexity. Many groups may contribute to a final design product: interface and system designers, human factors personnel, training developers, technical writers, and users (e.g., astronauts). Curtis et al. (1988) noted a wide variety of communications problems that resulted because of this organizational complexity. One such problem arises when groups interpret shared information differently because of differences in background knowledge. This could easily cause problems, for example, if the people in an organization who are experienced with modeling (e.g., a designer or human factors expert) have to communicate the results of a modeling analysis to a project manager. A possible solution to this problem of misinterpretation is for model developers to make the structure and outputs of their models as clear as possible.

In addition to communication problems, another problem arising from the variety of roles in design organizations has to do with personnel and training. A manager considering the use of models on a design project faces a number of questions along these lines. Can existing personnel do the modeling (e.g., designers or human factors personnel)? How much training will they require? If new personnel must be hired, what kinds of background must they have? Model developers must have answers to these questions.

One answer comes from the work of Kieras (1988). He has developed and published a procedure for building GOMS models. Informal testing showed that computer science undergraduates could use this procedure to generate GOMS models and make usability predictions "with reasonable facility." More than this is necessary, however. Validation studies must be done to test whether the personnel that would use models in design organizations can build models that make the same kinds of predictions as the experts who initially developed the model. These studies should also document the kind of training necessary to achieve these ends.
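A keystroke-level sketch can illustrate the kind of prediction such GOMS models support. The task breakdown below is hypothetical; the operator times are the approximate values reported by Card, Moran, and Newell (1983).

```python
# Keystroke-level GOMS sketch: a method is a sequence of primitive
# operators, and predicted expert execution time is the sum of the
# operator times.  Times are approximate values from Card, Moran, and
# Newell (1983); the example task breakdown is hypothetical.

OPERATOR_TIME = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # move hands between keyboard and mouse ("homing")
    "M": 1.35,  # mental preparation for an action
}

def predict_seconds(operators):
    """Sum the operator times for one method."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Hypothetical method: think, reach for the mouse, point at a file
# icon, click, point at a command icon, click.
method = ["M", "H", "P", "K", "P", "K"]
print(round(predict_seconds(method), 2))  # 4.51 seconds
```

Comparing such sums for two candidate interfaces is the sense in which a GOMS analysis "predicts" which design will be faster for experienced users.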

In addition to complexity, other characteristics of design organizations that affect their openness to modeling are strict project scheduling and a concern with monetary costs. Detailed estimates are needed of the time and money costs of using analytical models in commercial design.

CONCLUSION: THE USE OF ANALYTICAL MODELS IN INTERFACE DESIGN

Can the use of analytical models be recommended to interface designers? Based on the empirical research summarized here, the answer is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations. However, some of the research described here suggests that models can be of practical use to designers in the near future. Of special interest is the research that used models as task analytic tools to generate interface design ideas (e.g., Elkerton and Palmiter, 1989).

This paper has suggested research and development that is necessary in order for analytical models to be accepted by complex design organizations. These suggestions are summarized in Table 1. It seems that the empirical research on analytical models gives good reason to pursue the research and development goals outlined here.

ANALYTICAL MODELS AND SPACE-RELATED INTERFACE DESIGN

TABLE 1.

Methods of Increasing the Use of Analytical Models in Interface Design

Demonstrate design improvements:
• Validate model-based interface redesign.
• Validate model-based interface design.
• Validate predictive use of models to evaluate preliminary designs.
• Develop and validate models of complex HCI tasks involving high-level cognitive processes.

Meet the needs of individual designers:
• Study the design methods and cognitive processes of individual designers.
• Change the models and/or develop training materials to ensure that models fit in with designers' methods and cognitive processes.

Meet the needs of design organizations:
• Make models' structure and outputs easily interpretable.
• Develop means of training designers to use models. Validate that this training works and document the costs of training.
• Document the time and monetary costs of using models.

So far, this paper has provided a general analysis of the use of analytical models in human-computer interface design. How much of this analysis is applicable to the design of space-related interfaces? The Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center is currently conducting preliminary task analyses for the tasks required on a long-duration space mission, such as a mission to Mars (Gugerty and Murthy, in preparation). This work suggests that the range of tasks on such a mission is quite broad, ranging from reading to controlling complex equipment to conducting scientific research. The possible information technologies for long-term missions are also quite diverse, for example, workstations for supervisory control, graphics workstations for scientific research, computer-supported group meetings, medical expert systems, and virtual workstations for


telerobotic control. It seems that space-related tasks are diverse enough to span almost the entire range of human-computer interaction tasks. Therefore, the general analysis of this paper will be applicable to space-related tasks in most cases.

One project in the JSC HCIL is focusing on the use of analytical models in designing medical decision support systems for space crews. This project is following up on the work of Elkerton and Palmiter (1989) in which GOMS was used as a task analytic model to help generate interface design ideas. One medical task that space crewmembers will face is learning or relearning medical procedures from computer displays. This project will test whether building GOMS models of medical procedures can help interface designers build better interfaces for displaying this procedural information. The GOMS approach will be compared with other methods of task analysis, including psychological scaling techniques such as the Pathfinder algorithm (McDonald and Schvaneveldt, 1988).

REFERENCES

1. Bovair, S., Kieras, D. E., and Polson, P. G. The acquisition and performance of text editing skill: A production system analysis, Human-Computer Interaction, in press.

2. Butler, K., Bennett, J., Polson, P., and Karat, J. Report on the workshop on analytical models, SIGCHI Bulletin, 20, 4, 1989, pp. 63-79.

3. Card, S. K., Moran, T., and Newell, A. The psychology of human-computer interaction, Lawrence Erlbaum Associates, Hillsdale, NJ, 1983.

4. Chi, M. T. H., Glaser, R., and Rees, E. Expertise in problem solving, in R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1), Lawrence Erlbaum Associates, Hillsdale, NJ, 1982.

5. Curtis, B., Krasner, H., and Iscoe, N. A field study of the software design process for large systems, Communications of the ACM, 31, 11, 1988, pp. 1268-1287.

6. Elkerton, J. and Palmiter, S. Designing help systems using the GOMS model: An information retrieval evaluation, Proceedings of the Human Factors Society 33rd Annual Meeting, Santa Monica, CA, 1989.

7. Glaser, R. Education and thinking: The role of knowledge, American Psychologist, 39, 2, 1984, pp. 93-104.

8. Gould, J. D. and Lewis, C. H. Designing for usability and what designers think, Communications of the ACM, 28, 3, 1985, pp. 300-311.

9. Grudin, J. and Poltrock, S. E. User interface design in large corporations: Coordination and communication across disciplines, in K. Bice and C. Lewis (Eds.), Proceedings of the CHI 89 Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, 1989.

10. Gugerty, L. J. and Murthy, P. (in preparation). Using task analysis to guide information-technology development for future space missions, Paper to be presented at the AIAA Space Programs and Technologies Conference, September, 1990.

11. Guindon, R., Krasner, H., and Curtis, B. Breakdowns and processes during the early activities of software design by professionals, in G. M. Olson, S. Sheppard, and E. Soloway (Eds.), Empirical studies of programmers: Second workshop, Ablex Publishing Corp., Norwood, NJ, 1987.

12. Kieras, D. E. Towards a practical GOMS model methodology for user interface design, in M. Helander (Ed.), Handbook of human-computer interaction, Elsevier Science Publishers, Amsterdam, 1988.

13. Kieras, D. E. and Polson, P. G. An approach to the formal analysis of user complexity, International Journal of Man-Machine Studies, 22, 1985, pp. 365-394.

14. Lewis, C., Polson, P., Wharton, C., and Rieman, J. Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces, in J. C. Chew and J. Whiteside (Eds.), Proceedings of the CHI 90 Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, 1990.

15. McDonald, J. E. and Schvaneveldt, R. W. The application of user knowledge to interface design, in R. Guindon (Ed.), Cognitive science and its applications for human-computer interaction, Lawrence Erlbaum Associates, Hillsdale, NJ, 1988.

16. NASA Johnson Space Center. Space Station Freedom Program human-computer interface guide (USE 1000, Version 2.1), Johnson Space Center, Houston, TX, 1988.

17. Olson, J. R. and Olson, G. M. The growth of cognitive modeling in human-computer interaction since GOMS, Human-Computer Interaction, in press.

18. Poltrock, S. E. Innovation in user interface development: Obstacles and opportunities, in K. Bice and C. Lewis (Eds.), Proceedings of the CHI 89 Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, 1989.

19. Rosson, M. B., Maass, S., and Kellogg, W. A. The designer as user: Building requirements for design tools from design practice, Communications of the ACM, 31, 11, 1988, pp. 1288-1299.

20. Schwartz, D. R. The impact of task characteristics on display format effects, Proceedings of the Human Factors Society 32nd Annual Meeting, Human Factors Society, Santa Monica, CA, 1988.

21. Tullis, T. S. Predicting the usability of alphanumeric displays, Doctoral Dissertation, Rice University, The Report Store, Lawrence, Kansas, 1984.

22. Tullis, T. S. A system for evaluating screen formats, Proceedings of the Human Factors Society 30th Annual Meeting, Human Factors Society, Santa Monica, CA, 1986.

23. Whiteside, J. and Wixon, D. Discussion: Improving human-computer interaction - A quest for cognitive science, in J. M. Carroll (Ed.), Interfacing thought: Cognitive aspects of human-computer interaction, MIT Press, Cambridge, MA, 1987.

24. Young, R. M. and Whittington, J. Using a knowledge analysis to predict conceptual errors in text-editor usage, in J. C. Chew and J. Whiteside (Eds.), Proceedings of the CHI 90 Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, 1990.


Space Habitability

A three-dimensional interactive computer graphics package called PLAID is used to address human factors issues in spacecraft design and mission planning. Premission studies produced this PLAID rendition to show where an EVA astronaut would stand while restraining a satellite manually and what the IVA crewmember would be able to see from the window. (See cover for the actual photo taken during mission from aft crew station.)


USING COMPUTER GRAPHICS TO DESIGN SPACE STATION FREEDOM VIEWING

Betty S. Goldsberry, Buddy O. Lippert, and Sandra D. McKee
Lockheed Engineering and Sciences Company

James L. Lewis, Jr. and Frances E. Mount

NASA Lyndon B. Johnson Space Center

" - :> </z.ZN 9 4:'-24 1 91

/_ ,.-_I

,v-_ J6Cd._J/

INTRODUCTION

Viewing requirements were identified early in the Space Station Freedom program for both direct viewing via windows and indirect viewing via cameras and closed-circuit television (CCTV). These requirements reside in NASA Program Definition and Requirements Document (PDRD), Section 3: Space Station Systems Requirements.

Currently, analyses are addressing the feasibility of direct and indirect viewing. The goal of these analyses is to determine the optimum locations for the windows, cameras, and CCTVs in order to meet established requirements, to adequately support space station assembly, and to operate on-board equipment.

PLAID, a three-dimensional computer graphics program developed at NASA JSC, was selected for use as the major tool in these analyses. PLAID provides the capability to simulate the assembly of the station, as well as to examine operations as the station evolves. This program has been used successfully as a tool to analyze general viewing conditions for many Space Shuttle elements and can be used for virtually all Space Station components. Additionally, PLAID provides the ability to integrate an anthropometric scale-modeled human (representing a crewmember) with interior and exterior architecture.

BACKGROUND

COMPUTER SIMULATION

The design of a computer simulation system, such as PLAID, that includes human models is a complex process. Total system performance is dependent on the accuracy of the models generated as well as the interactions between humans, hardware, and software. Model requirements can be based on workspace geometry, figure anthropometry, strength/force characteristics, and reach envelopes. If results of simulation analyses are to be valuable, development of the models must be based on a valid representation of the environment. Also, procedures must be employed that make available alternative human/system designs. Ultimately, quantitative predictions of events and behaviors in response to realistic operating conditions for various design alternatives must be compared in order to select the optimum design characteristics.

While there are many ways of quantitatively predicting human behavior and performance, computer simulation of the human operator in a mission context represents a method that is internally consistent and compatible with other contemporary system engineering evaluative techniques. Computer simulations usually resolve many design and development problems in an effective manner earlier than experiments with human subjects. Even though simulation does not eliminate the need for empirical tests, properly exploited, it can help to focus test time and energy on appropriate issues and potential problems.

The analyst interested in evaluating human performance can use computer simulation to construct a graphic human model in much the same way that an airplane or automobile designer can model a vehicle. Then, the model can be used to test various design concepts relevant to human-system interactions and integrations. Actually, a computer simulation system can be developed long before a prototype or test facility can be made operational.

PLAID MAN-MODELING SYSTEM

PLAID uses anthropometric data collected from astronaut candidates in the JSC Anthropometrics and Biomechanics Laboratory to generate human models with realistic joint limits and user-specified size characteristics. A high-fidelity model of the Space Shuttle Remote Manipulator System (RMS) is also contained in the PLAID database. For many years, the models have facilitated the comparison of reach envelopes for humans and remote manipulators in order to determine which space operations are feasible for astronaut extravehicular activities (EVA) and which are more appropriately accomplished by robotics applications. PLAID has also played a major role in satellite retrieval execution, equipment failure diagnosis, and vehicle damage prediction.
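The role of joint limits in a reach-envelope comparison can be pictured with a deliberately simplified sketch. The link lengths, joint ranges, and tolerance below are invented for illustration and bear no relation to PLAID's actual anthropometric data.

```python
# Simplified reach-envelope sketch: a two-joint planar "arm" with
# joint limits.  All numbers (link lengths, joint ranges, tolerance)
# are hypothetical; PLAID's real models are far higher fidelity.
import math

SHOULDER_RANGE = (math.radians(-45), math.radians(180))
ELBOW_RANGE = (math.radians(0), math.radians(150))
UPPER_ARM, FOREARM = 0.33, 0.46  # link lengths in metres (made up)

def can_reach(x, y, step=math.radians(1)):
    """Scan the in-limit joint space; return True if some pose puts
    the hand within 2 cm of the target point (x, y)."""
    a = SHOULDER_RANGE[0]
    while a <= SHOULDER_RANGE[1]:
        b = ELBOW_RANGE[0]
        while b <= ELBOW_RANGE[1]:
            hand_x = UPPER_ARM * math.cos(a) + FOREARM * math.cos(a + b)
            hand_y = UPPER_ARM * math.sin(a) + FOREARM * math.sin(a + b)
            if math.hypot(hand_x - x, hand_y - y) < 0.02:
                return True
            b += step
        a += step
    return False

print(can_reach(0.70, 0.10))  # inside the envelope: True
print(can_reach(1.00, 0.00))  # beyond total arm length: False
```

The same question asked of a full anthropometric model, and of the RMS model, is what lets an analyst decide whether a worksite suits an EVA crewmember, the manipulator, or neither.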

PLAID is a three-dimensional-solids modeling system which generates computer models that can be examined from an infinite number of viewpoints. This feature allows the system user to position its viewpoint where the eyes of the crewmember or lens of the camera would be located. Then, subsequent analyses can reveal what a crewmember can or cannot see. The PLAID system has taken into consideration that the visual environment of space is different from a normal Earth atmosphere and has incorporated appropriate contrast ratios, shadowing, and light scattering for the space environment.
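At its core, the viewpoint idea reduces to a simple geometric test. The sketch below is illustrative only: the positions and cone angle are invented, and occlusion by intervening structure is ignored, whereas PLAID's analyses account for it.

```python
# Minimal visibility sketch: place the viewpoint at a crewmember's
# eye or a camera lens and test whether a target lies inside the
# viewing cone.  Positions and the cone half-angle are hypothetical;
# occlusion, contrast, and lighting are ignored here.
import math

def in_field_of_view(eye, look_dir, target, half_angle_deg):
    """True if 'target' lies within 'half_angle_deg' of the line of
    sight 'look_dir' as seen from the point 'eye'."""
    to_target = [t - e for t, e in zip(target, eye)]
    dist = math.sqrt(sum(c * c for c in to_target))
    norm = math.sqrt(sum(c * c for c in look_dir))
    if dist == 0.0 or norm == 0.0:
        return False
    cos_a = sum(u * v for u, v in zip(to_target, look_dir)) / (dist * norm)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle <= half_angle_deg

eye = (0.0, 0.0, 1.7)   # hypothetical eye point at a window
look = (1.0, 0.0, 0.0)  # sight line out along +X
print(in_field_of_view(eye, look, (5.0, 1.0, 2.0), 35.0))  # True
print(in_field_of_view(eye, look, (5.0, 6.0, 2.0), 35.0))  # False
```

Repeating such a test for every object of interest, from every candidate eye or lens position, is what turns a geometric model into a viewing analysis.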

SPACE STATION

PROCEDURE

Since viewing tasks vary in complexity, relevant viewing requirements and proposed window locations were identified. Using the PLAID system, geometric models of Space Station Freedom elements and configuration models were created. These models reflected up-to-date Space Station Freedom architectural information for internal and external elements. Also, proposed window and camera locations were integrated with the models. Then, computer graphics of various viewing scenarios were generated which revealed fields-of-view for selected locations of interest. Finally, the graphics generated by the system were analyzed relative to published design requirements and specifications.

The simulation is influenced by more than a three-dimensional layout of the environment. In particular, task sequence and task time are important ingredients. For example, if the humans must monitor exterior facilities while operating interior controls, their fields-of-view will be decreased because their positions will probably be farther from the window. Shadowing can also reduce visibility.

The simulation helped analysts evaluate the proposed window locations and identify problem areas. Recommended window placements were determined by the integration of the view available from a particular position with various other factors such as operations requirements, anthropometric clearances, traffic flow, and window accessibility.
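One way to picture this integration of factors is as a weighted score over candidate locations. The report's analysts combined these factors by engineering judgment, not by a formula; the factor weights and ratings below are purely illustrative.

```python
# Hypothetical weighted-score sketch of combining viewing factors for
# candidate window locations.  Weights and 0-10 ratings are invented;
# the actual placement recommendations were judgment-based.
WEIGHTS = {
    "field_of_view": 0.35,
    "operations_support": 0.25,
    "anthropometric_clearance": 0.20,
    "traffic_flow": 0.10,
    "accessibility": 0.10,
}

def placement_score(ratings):
    """Weighted sum of the 0-10 factor ratings for one candidate."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

candidates = {
    "endcone window": {
        "field_of_view": 8, "operations_support": 6,
        "anthropometric_clearance": 5, "traffic_flow": 4,
        "accessibility": 7,
    },
    "port rack bay window": {
        "field_of_view": 6, "operations_support": 7,
        "anthropometric_clearance": 8, "traffic_flow": 7,
        "accessibility": 8,
    },
}
best = max(candidates, key=lambda name: placement_score(candidates[name]))
print(best)  # "port rack bay window" with the ratings above
```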

DATA

Data in the form of PLAID graphics were generated and maintained at the JSC Man-Systems Division. Figures 1 through 4, illustrated on the following page, exemplify the types of data used in the analyses. Figure 1 was generated to determine the viewing clearance needed if an airlock were placed near the window.

Figure 2 illustrates that the cupola will accommodate two crewmembers. Note that the design requirement that the Crew Escape Rescue Vehicle (CERV) be visible from the cupola is met. Displays that share the crewmembers' fields-of-view are shown.

Figures 3 and 4 illustrate that viewing of the external environment can be accomplished. Analyses have determined that Earth viewing or optic experiments could be obstructed by truss work from some locations.


Figure 1. A 95th percentile male crewmember looking out of the cone-shaped end portion of a module (endcone) window.


Figure 3. A view of extravehicular activity (EVA).

Figure 2. Two 50th percentile males in a cupola.

Figure 4. An Orbiter window or external camera view during approach or departure.

DOCUMENTATION

Comprehensive documentation of the Space Station Viewing Analyses results will be organized in a series of volumes. Volume 1 covers viewing of the Space Station Freedom assembly sequence as seen through the Orbiter windows and CCTVs. Volume 2 uses the Man-Systems candidate topologies to evaluate possible window locations in the U. S. Habitation module. Volume 3 evaluates the candidate window locations as proposed by the European Space Agency (ESA) for the Columbus Module, as well as selected alternate locations. Volume 4 addresses viewing conditions of the National Space Development Agency (NASDA) Japanese Experiment Module (JEM). Volume 5 is dedicated to the discussion of viewing from the nodes and the two cupolas baselined on nodes three and four. Volume 6 addresses indirect viewing and the placement of cameras throughout the station. Volume 7 pertains to the viewing conditions of the U. S. Laboratory module. Volume 8 will address Phase II Space Station Freedom viewing impacts. The planned order of preparation for these basic volumes is 3, 2, 4, 7, 5, 1, 6, and 8. The first four volumes in this series have been released. Published recommendations from the completed volumes follow.

RECOMMENDATIONS

U. S. Habitation Module

1. A requirement for windows in the crew quarters should be established by the program.

2. Two viewing stations, currently proposed, should be relocated to positions that will provide unobstructed views.

3. Modular interior layout and wardroom area window placement should be integrated such that recreational viewing (i.e., viewing of the Earth limb) is optimized. Selected port rack locations are well suited for wardroom area windows.

4. Windows should be placed in the portion of the quadrant where the eyes are most likely to be, while maintaining compliance with the local vertical.

U. S. Laboratory Module

1. Evaluate scientific celestial viewing from the zenith (-Z) oriented windows in the module to ensure minimal interference from Space Station elements.

2. A proposed floor window is suitable for Earth viewing.

3. Optimum high-oblique viewing in the -Y direction for scientific uses can be achieved from a starboard window. Exact placement of the window below the module centerline should be determined based upon intended use of the window.

ESA Columbus Module (CM)

1. Design of the module should provide ample space for movement, interactions, interfaces, and other human factors considerations.

2. Orient a minimum of two windows in the CM in the +X and -X directions.

3. Rotate the aft endcone at least 45 degrees from the 12 o'clock position to meet Space Station Program 30000 requirements and support recreational viewing.

4. Rotate the window location in the forward endcone of the CM 45 degrees clockwise.

5. Consider the need for celestial viewing from the CM in the overall station configuration.

6. Establish commonality of window design characteristics in all modules.

7. Supplement the view through the hatch viewport by providing interior video which could also be used to remotely monitor activities within the modules.

NASDA Japanese Experiment Module

1. Supplement direct viewing of Mobile Servicing Center (MSC) operations by using the CCTV for indirect viewing.

2. Supplement direct viewing of the Experiment Logistics Module (ELM) transfer and berthing/deberthing operation by using the CCTV for indirect viewing and for supporting direct viewing from the hatch viewport beneath the JEM berthing ring during the berthing process.

3. Retain proposed windows in the aft bulkhead.

4. Retain baseline with no windows in the forward bulkhead, since no meaningful view could be achieved.

5. Retain floor window for Earth observation.

6. For purposes of celestial viewing, use the viewport between the JEM and ELM during absence of the ELM. Further information is needed concerning (a) duration of the ELM's absence and (b) external covering of the viewport during the ELM's absence.

CONCLUSION

VIEWING ANALYSIS

PLAID allows for an in-depth analysis of the impact of direct and indirect viewing on architectural design and human performance issues. Specifically, window and video location selections have been facilitated by the use of this computer simulation system. This system permits the integration of selected window/camera locations with the internal architecture, the validation of crew activity sequences, and the determination of necessary body movements required to accomplish viewing tasks. Through findings in these analyses, analysts have been able to write a Change Request (CR) affecting high level Space Station Program documents. Requests address placement issues in the modules. Future volumes will address indirect viewing, cupola viewing, node windows, and Space Station assembly.

PLAID SYSTEM

Development of a more interactive, real-time system which uses natural language will enhance capabilities of the system. Expansion of the human-modeling capability will include a population range from the 95th percentile to the 5th percentile female. Currently, work in this area is being conducted for JSC and other users at the University of Pennsylvania under the direction of Norman Badler, PhD.

Also, the capability of generating video animation sequences is being added to PLAID. This feature will allow for simulation of a complete task sequence rather than a "snapshot" approach. Animation will provide for a more comprehensive evaluation of crewmember activities, as well as reveal the relationship between a series of contiguous activities. By following the action flow, the system user can isolate spatial interferences, procedural inconsistencies, and crewmember interactions.

Finally, a dynamic program extension is planned that will allow system users to more accurately evaluate factors such as force and path trajectory. In addition, an efficient ray trace algorithm would enhance lighting study capabilities.

ACKNOWLEDGEMENTS

These viewing analyses could not have been accomplished without the NASA Graphics Analysis Facility, Facility Manager Linda Orr, and Lockheed technical support.

REFERENCES

1. Mount, Frances E. and McKee, Sandra. Space Station Freedom Viewing Analysis: Volumes 2, 3, 4, and 7, JSC No. 32089, NASA Johnson Space Center, 1989.

2. Mount, Frances E. and Lewis, James L. Space Station Viewing Requirements (1986), 1986 SAE Aerospace Technology Conference and Exposition.


A SIMULATION SYSTEM FOR SPACE STATION EXTRAVEHICULAR ACTIVITY

Jose A. Marmolejo

NASA Lyndon B. Johnson Space Center

Chip Shepherd
Lockheed Engineering and Sciences Company

Research directed by Barbara Woolford, Flight Crew Support Division, NASA JSC.

INTRODUCTION

America's next major step into space will be the construction of a permanently manned Space Station, which is currently under development and scheduled for full operation in the mid-1990s. Most of the construction of the Space Station will be performed over several flights by suited crewmembers during extravehicular activity (EVA) from the Space Shuttle. Once the Station is fully operational, EVAs will be performed from it on a routine basis to provide, among other services, maintenance and repair of satellites currently in Earth orbit.

BACKGROUND

When a crewmember ventures outside the spacecraft, an extravehicular activity (EVA) is performed. To perform an EVA from the Space Shuttle, an astronaut must don an extravehicular mobility unit (EMU) -- a space suit assembly and portable life-support backpack that provide, respectively, the pressure retention and habitable atmosphere that a human requires to perform a productive umbilical-free EVA in the vacuum of space. Attached to the front of the Shuttle EMU shown in Figure 1 is a display and control module (DCM), a large chestpack with life-support controls and a 12-character red LED display located just beneath the helmet visor. Through the DCM display, the astronaut is able to read various life-support parameters (e.g., oxygen pressure, suit pressure). These parameters are continuously measured and monitored by a caution and warning system (CWS) microcomputer located within the backpack. If any anomalous conditions are detected, caution or warning messages are generated and relayed from the CWS to the EVA crewmember via the DCM display.

Should any of these messages be reported by the CWS, the astronaut refers to corrective instructions in the EMU cuff checklist, a "flip-through" reference booklet attached to the wrist of the suit. (In addition to these corrective instructions, the cuff checklist also contains information about general mission procedures and EVA equipment operation.)

LIMITATIONS TO THE SHUTTLE EMU INFORMATION SYSTEM

Although several spectacular EVAs have been performed from the Shuttle, a number of limitations with the Shuttle EMU information-exchange system have been identified. These limitations relate to DCM visibility, information accessibility, information capacity, and DCM size. Both the display and the controls on the DCM are difficult to view. The location of the DCM display requires the crewmember to look down at an uncomfortable angle. Because its viewing distance is so small, some astronauts require the use of a special lens to read the characters. In addition, bright environments "wash out" the red LED characters, sometimes forcing the crewmember to cup his hands over the display for viewing. Difficulties also exist in viewing the controls located on the front of the DCM, requiring the EVA crewmember to wear a wrist mirror to read the reflected images.

Information retrieval during EVA must be manually sequenced. The astronaut must toggle a switch to receive each 12-character line of DCM data, and referring to the cuff checklist is a two-hand operation.

The amount of data that may be presented is severely limited. The entire library available during EVA consists of a number of twelve-character messages (an astronaut would have to toggle 24 times just to read this paragraph) and 40 to 50 pages of a 3.25-inch x 4.5-inch cuff checklist.

The DCM not only invades the astronaut's prime work envelope (i.e., the area directly in front of the chest), but its mere presence has restricted improvements to the reach capability and arm mobility of the EMU.

Figure 1. Space Shuttle Extravehicular Mobility Unit (EMU).

INFORMATION SYSTEM IMPROVEMENTS

Although the DCM and cuff checklist are suitable for the occasional EVAs performed from the Shuttle, the nearly routine EVAs anticipated for the Space Station dictate that productivity be maximized. Ideally, for Space Station EVA, access to information should be a "hands-free" operation, especially for labor-intensive EVA scenarios such as satellite servicing, unplanned maintenance, or emergency operations. Furthermore, the display medium should support text, graphical, and video formats in all EVA environments. To meet these ambitious goals, NASA is developing a voice recognition and control system to provide "hands-free" operations and a helmet-mounted projection device to display information to the EVA crewmember.

Although space is inherently a harsh environment, the rather benign conditions inside the EMU facilitate the use of speech recognition technology. First, sounds within the EMU are free of the noises that commonly cause problems for speech recognizers, such as the background voices and extraneous clamor found at industrial workstations or the breathing noises inherent with masked fighter pilots. The EMU helmet contains only fan and airflow noises, which are high-frequency sounds that can be easily filtered. Second, an extensive vocabulary is not required for the EMU, since a vocabulary of about 50 words will suffice for the procedurally oriented tasks of EVA. Furthermore, even though a connected-word recognizer may allow more flexibility, an isolated-word recognizer is all that is necessary. Finally, the voice recognition system need not be speaker-independent to allow multiple users, since each EVA-qualified crewmember aboard the Space Station will have an EMU.

Similar to military helmet-mounted and heads-up displays, the EMU helmet-mounted display (HMD) consists of projection optics and image/illumination source electronics that permit convenient viewing of text, graphics, and video on a transparent display deposited onto the helmet visor. With a transparent screen, the astronaut can read information overlaid onto the real world when the HMD is active. When the HMD is off, the screen is practically invisible. Displayed images are located conveniently above the astronaut's prime field of view to prevent visual interference while performing EVA tasks.

Through control system electronics, the voice recognizer can select imagery on the HMD, thus providing a powerful and valuable tool for the Station EVA astronaut (other functions are also available). The combined system will replace the DCM, the cuff checklist, and the EMU wrist mirror, enhancing EVA productivity while sacrificing none of their services.

With the HMD and the aid of a voice recognition system, almost unlimited amounts of information are conveniently available to the astronaut during the EVA. The astronaut may review EMU status information, receive EMU alarms with corrective action, obtain general EVA operational information (previously supplied by the cuff checklist), control EMU switches by voice, monitor transmissions from the astronaut's personal TV camera, and access the Station's main computer (which can provide data retrieval from onboard memory or a ground database).

NASA EVA SIMULATION


At the NASA Johnson Space Center, the data available to a Space Station EVA astronaut may be simulated in an Advanced EVA Systems Laboratory. This simulation concentrates on the anticipated Space Station EMU data logic flow required to efficiently provide the EVA crewmember with various informational types while maximizing the usefulness of the voice recognition/control system and helmet-mounted display. The simulation program runs on a Macintosh II and operates in conjunction with a bench model of the HMD and a speaker-dependent voice recognition system. The simulation allows the user to perform the pre-EVA series of checklists required to exit the spacecraft, offers the user all informational data types (including alarms) available during the EVA, and provides the series of checklists to return safely inside the vehicle. Below is a brief description of the simulation program.

PRE-EVA

Pre-EVA procedures are performed within the airlock prior to exiting the spacecraft. After the EMU is donned and powered up, the HMD supports a preparation mode in which the astronaut user is led through a series of checks prior to depressurizing the airlock. Detailed instructions, including illustrations for depressurization and subsequent airlock egress, are provided. In the simulation, the pre-EVA HMD screens are split into three fields (excluding the header). The top field contains procedural information, the middle field contains a series of switches that may be manipulated by voice, and the bottom field lists valid functions selectable by voice. The pre-EVA checks and switch manipulations are controlled by voice commands. The following is the sequence of events required to perform the pre-EVA operation. (Note the quoted commands spoken by the user.)

(1) Simulating power applied to the EMU, the HMD will display a power restart message and a Systems Check of the EMU (see Figure 2).


Figure 2. EMU Power-up and Systems Check


Figure 3. Pre-EVA Checklist

(2) Following the systems status check (and if OK), the first page of the pre-EVA file is shown (see Figure 3). The crewmember is directed to turn the EMU fan on and then to pressurize the suit. To activate the fan, the crewmember simply says "FAN ON." The FAN block in the middle field will then toggle position. In a similar fashion, the suit is pressurized by first selecting "02 ACTUATOR" followed by "PRESS." After the suit is fully pressurized, the next page, suit leak check, will be shown.
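The isolated-word switch interaction just described can be sketched as a small command dispatcher. The sketch below is ours, not the flight or simulation software; the switch names, positions, and Python rendering are illustrative assumptions.

```python
# Illustrative sketch (ours, not the actual simulation program) of the
# isolated-word switch commands described above: a recognized phrase such
# as "FAN ON" is split into a switch name and a position, and the named
# switch block is updated. Switch names and positions are assumptions.

switches = {"FAN": "OFF", "O2 ACTUATOR": "OFF", "H2O": "OFF"}

def handle_command(phrase):
    """Apply a recognized command, e.g. 'FAN ON' or 'O2 ACTUATOR PRESS'."""
    name, _, position = phrase.rpartition(" ")
    if name in switches:
        switches[name] = position          # toggle the on-screen block
        return f"{name} set to {position}"
    return "UNRECOGNIZED"                  # unknown phrases are ignored

print(handle_command("FAN ON"))            # FAN set to ON
print(handle_command("O2 ACTUATOR PRESS")) # O2 ACTUATOR set to PRESS
```

A table-driven dispatcher of this kind matches the small, fixed vocabulary (about 50 words) that the article notes is sufficient for EVA tasks.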

(3) A leak check (to determine suit integrity) is performed automatically in the simulation. During the leak check simulation, a status of the check, including time left, suit pressure, and 02 actuator position, is displayed (see Figure 4). The leak check may be aborted in progress by saying "EXIT." An aborted leak check may be either restarted ("ACKNOWLEDGE") or by-passed entirely ("CONTINUE"). An unaborted leak check with positive results is shown in Figure 5. (Note that a second test, if desired, can be conducted by saying "ACKNOWLEDGE.")
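The abort/restart/bypass logic just described amounts to a small state machine. The sketch below is our illustration; the state names and transition table are inferred from the text, not taken from the actual Macintosh II program.

```python
# Our illustration of the leak-check control flow as a state machine;
# state names and transitions are inferred from the text above, not
# taken from the actual simulation program.

TRANSITIONS = {
    ("RUNNING",   "EXIT"):        "ABORTED",          # abort in progress
    ("ABORTED",   "ACKNOWLEDGE"): "RUNNING",          # restart the check
    ("ABORTED",   "CONTINUE"):    "BYPASSED",         # skip the check
    ("COMPLETED", "ACKNOWLEDGE"): "RUNNING",          # optional second test
    ("COMPLETED", "CONTINUE"):    "AIRLOCK_DEPRESS",  # next checklist
}

def leakcheck_step(state, command):
    """Return the next state; unlisted pairs leave the state unchanged."""
    return TRANSITIONS.get((state, command), state)

state = "RUNNING"
state = leakcheck_step(state, "EXIT")      # -> ABORTED
state = leakcheck_step(state, "CONTINUE")  # -> BYPASSED
```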


Figure 4. Pre-EVA Leak Check (in-progress)

(4) By saying "CONTINUE," the crewmember will proceed to the Airlock Depress Checklist, a series of procedures to depressurize the airlock to vacuum (see Figure 6). Note the use of graphics. (A "GO BACK" command at this point will display Figure 5, but will not redo the actual check unless the previous leak check was aborted or was not satisfactory.) By saying "CONTINUE" again, the crewmember will enter the second page of the checklist (see Figure 7). Note that the airlock pressure reading is located within the checklist step that requires the monitoring.


Figure 5. Pre-EVA Leak Check (completed)


Figure 6. Airlock Depress Checklist (page 1)


Figure 7. Airlock Depress Checklist (page 2)


(5) Once the simulated depressurization to vacuum is accomplished, the crewmember will "CONTINUE" to the Pre-Egress Checklist (see Figure 8). (Note that the crewmember may switch several controls with a single voice command.) Once he has completed this checklist, he will "CONTINUE" to the Egress Checklist (Figure 9). The Pre-Egress and Egress procedures lead the crewmember safely out of the airlock to perform the actual EVA. After the final page of this checklist has been read, an "EXIT" command will display the Normal EMU Status page for use during the EVA.


Figure 8. Pre-Egress Checklist


EVA

The actual EVA, by definition, begins when the crewmember leaves the airlock. This begins the operational portion of the astronaut's venture. Nearly all of the HMD's functions are available to the EVA crewmember at this time. During this period, the EVA astronaut may view EMU status information, access general EVA operational information, control EMU switches by voice, monitor transmissions from the astronaut's personal TV camera, and obtain data directly from the Space Station's main computers via radio link. The simulated functions are described below along with a description of other available functions.

EMU Status - The EMU Status file is the base state of the simulation program. Each page is split into two fields (excluding the header), as shown in Figure 10. The top field displays specific EMU parameters and consumable values in two formats: a quick-read bar graph and the actual (or calculated) values. This method allows the astronaut to quickly and accurately assess the performance of the EMU. The lower field lists the valid available functions.
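The dual bar-graph/numeric format can be sketched as follows. The parameter names, ranges, and text rendering below are illustrative assumptions of ours, not actual EMU limits or the program's display code.

```python
# Sketch (ours) of the dual-format readout described above: each EMU
# parameter appears both as a quick-read bar graph and as its numeric
# value. Parameter names, ranges, and the text rendering are
# illustrative assumptions, not actual EMU limits.

def status_line(label, value, lo, hi, width=20):
    """Render one parameter as a bar graph followed by its value."""
    frac = (value - lo) / (hi - lo)
    filled = round(frac * width)
    bar = "#" * filled + "-" * (width - filled)
    return f"[{bar}] {value:7.1f}  {label}"

print(status_line("Prim O2 Pressure (psi)", 795.0, 0.0, 1000.0))
print(status_line("Suit Pressure (psi)", 4.3, 0.0, 5.0))
```

The bar gives the quick read; the numeric field gives the accurate value, matching the two formats the article describes.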


Figure 9. Egress Checklist

It should be noted that during the pre-EVA simulation (as will be the case in the flight system), other functions, such as display on/off and brightness controls, will be available to the user.


Figure 10. Normal EVA Status

EVA Operational Information - This subroutine allows the astronaut to select and read various data files from a main menu by voice commands. By saying "MAINTENANCE," the user obtains the menu shown in Figure 11. This file and the "CHECKLIST" file provide information about maintenance procedures, equipment operation, and contingency or troubleshooting flow charts, among other information subfiles, in text and graphical formats. Once accessed, the user may page back and forth through the file. As an example, if "THREE" were selected, the crewmember would retrieve the "PAM Large Sunshield" file, which details a procedure for installing "sunglasses" on the Space Shuttle's windows. Page one of the "PAM Large Sunshield" file lists the tools the EVA crewmember must collect to perform the job (see Figure 12). From here, the astronaut may "CONTINUE" to obtain the next page of information. An "EXIT" within a file would bring the user back to the main menu. Another "EXIT" would bring the user to the base state, the first Status page.
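The navigation just described, in which selections descend into menus and files and each "EXIT" backs up one level until the base Status page is reached, behaves like a stack. The sketch below is ours, not the actual Macintosh II program; the data structure is an assumption, though the menu and file names come from the text.

```python
# Illustrative sketch (ours, not the actual simulation program) of the
# voice-driven navigation described above: selections push a level onto
# a stack, and each "EXIT" pops one level until the base Status page is
# reached. The data structure is an assumption.

stack = ["STATUS"]                 # base state of the simulation

def say(word):
    """Apply one recognized voice command to the navigation stack."""
    if word == "EXIT":
        if len(stack) > 1:
            stack.pop()            # back up one level
    else:
        stack.append(word)         # descend into a menu or file
    return stack[-1]               # current screen

say("MAINTENANCE")                 # main maintenance menu (Figure 11)
say("THREE")                       # "PAM Large Sunshield" file
say("EXIT")                        # back to the maintenance menu
say("EXIT")                        # back to the base Status page
```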

EMU Switch Manipulation - (This procedure is identical to that described in the pre-EVA section.)

EMU Camera Monitoring - Shuttle EVA astronauts are not capable of monitoring TV cameras. They depend on ground or Shuttle communication for feedback on pointing the camera. On the other hand, with the Station EMU, an astronaut can say "CAMERA" and receive live video from his TV camera directly on his HMD. Currently the simulation program only displays still graphics to the user. An enhancement to the simulation program will soon be added to allow the crewmember to move and position a camera by voice.

Accessing Information From The Station - From the Station's central computer, the EVA astronaut may receive unlimited amounts of information via a radio frequency link. This information, in the form of files, will be accessed by the EVA crewmember much like the operational information described above. These files may have been produced prior to launch, prior to the EVA by that astronaut, or by someone on the ground to be transmitted, real-time, to an EVA astronaut. A procedure to simulate this function is currently being developed.


Figure 11. EMU Maintenance File


Figure 12. PAM Large Sunshield File

Other Functions - Other functions are also available to the crewmember, including receipt of alarms with corrective action, notification of Station astronauts of anomalous conditions (with an option for sending them EMU-transmitted data), a complete recognizer vocabulary menu, and updating of voice templates by voice.

Post-EVA - To complete the EVA, the astronaut enters the airlock and connects his umbilical line to the EMU. Post-EVA instructions are provided to the crewmember via the HMD and are basically a reverse sequence of the pre-EVA set of events, concluding with a power-down and doffing of the EMU.


CONCLUSION

Both voice recognition and helmet-mounted display technologies can improve the productivity of workers in space by potentially reducing the time, risk, and cost involved in performing manned extravehicular activity. NASA has recognized this potential and is currently developing a voice-controlled information system for Space Station EVA. Two bench-model helmet-mounted displays and an EVA simulation program have been developed to demonstrate the functionality and practicality of the system.


USE OF INFRARED TELEMETRY AS PART OF A NONINTRUSIVE INFLIGHT DATA COLLECTION SYSTEM TO COLLECT HUMAN FACTORS DATA

Angelo Micocci
Lockheed Engineering and Sciences Company

Research directed by Frances Mount, Flight Crew Support Division, NASA JSC.

INTRODUCTION

The objective of this paper is to present a methodology and rationale for the development of a Nonintrusive Inflight Data Collection System (NIDCS) to collect Human Factors (HF) data during a space mission. These data will enable the research team to identify and resolve issues.

This paper will present the background and history of the NIDCS; the methodology and techniques employed versus those in current use on Earth; initial results of the effort, including a brief description of the equipment; and, finally, a discussion of the scientific importance and possible future applications of this system elsewhere.

The schema for the NIDCS includes the collection of three types of data: behavioral, physiological, and biomechanical. These will be collected using videotape of crewmembers' activities, bioelectric signal measurement, and measurement of kinematics and kinetics, respectively. This paper will focus on the second type of data, physiological activity as determined by changes in bioelectric potentials as crewmembers perform daily assignments.

BACKGROUND AND HISTORY

Before proceeding, it is necessary to define nonintrusive data collection. The strictest definition of such a system is one in which the subject is unaware of the recording apparatus, i.e., has no knowledge that data are being collected. Given training procedures, awareness of video cameras, audio report/debriefing taping on Earth, etc., the crew will be aware that data are being recorded. Therefore, the alternative is to employ a system that will not (or will only minimally) disrupt routine, ongoing activities (Callaway, 1975). In any case, nonintrusive data collection should not be confused with "noninvasive" data collection, a method frequently used in the collection of biomedical data. While nonintrusive methods are noninvasive, the reverse is not true. This distinction excludes the collection of blood and other procedures in which catheterization, or the collection of other data that alter crew activities, is necessary. We can now proceed to the history of the development of the NIDCS.

A review of the literature from other projects using nonintrusive data collection methods provides a basis for understanding the reasons for the application of such methods, how the methods were employed, the degree of their success, and what was learned from them. The literature search focused on situations likely to be encountered during space flight.

Literature was reviewed from the SEALAB (Radloff, 1966), TEKTITE (Nowliss, 1972), and SKYLAB (LaFevers, NASA JSC) missions. The schedule for development of the NIDCS precluded review of data from submarines, due in part to time limitations and because most submarine data have been collected and evaluated for other purposes (contamination monitoring/control, etc.) and are not structured for application to HF issues. Because data collected on the referenced missions focused on behavioral rather than physiological data, the detailed findings of those projects will not be discussed here. Researchers found that the most expeditious way to collect (behavioral) data was through simple, passive observation of video and through the use of questionnaires and post-mission debriefings. It is reasonable to expect


that this passive approach to recording data can be applied to physiological and biomechanical data as well.

METHODOLOGY AND TECHNIQUES

As mentioned in the introduction, there arethree types of data to be collected using theNIDCS during space missions.

First, there are the behavioral data, for which the primary collection method is videotape. Except for those instances where arrangements were prescheduled, researchers were able to see only portions of the activities which might be of interest to their disciplines. Camera angles were based on factors other than those the researcher might select, duration of taping was not optimum, detailed verbal explanations were not forthcoming, etc. The best to be hoped for was that the crewmember(s) could provide sufficient details during debriefings, either in flight or postflight. While satisfactory, this is not optimum for the determination and resolution of issues.

One of the objectives of this project is to provide input and direction for the program during the early planning stages to ensure that Human Factors-oriented tasks are part of the mission; to attend training and planning sessions involving crewmembers to ensure that training includes performance of these tasks when required; and to specify video camera locations, angles, etc., to ensure that adequate coverage exists for evaluation. Videotapes will be supplemented by information obtained during debriefings, through the use of questionnaires, etc.

Second, there are the biomechanical data, which will provide information concerning kinetics and kinematics during the performance of space-oriented tasks. Current planning is that these data will be collected in conjunction with another experiment dealing with human force capacity in space. It is anticipated that some of these data will be collected using video cameras, while most will be obtained with sensors and instruments such as transducers, force plates, accelerometers, strain gauges, etc.

Finally, there are the physiological data. Much modern research has focused on clinical applications, diagnostic purposes, the development of prosthetic devices, etc. Various studies have been performed to measure muscle fatigue per se. Only recently (in the past 25-30 years) has research been directed toward the interpretation of these data as indicators of human performance (sports medicine, etc.) and/or operator alertness.

Some work has been done in manufacturing facilities, some in academic laboratories (Evoked Potentials Response - ERP), and some by HF engineers involved with the Space Program. These studies have been conducted to measure operator stress and alertness during the performance of assigned tasks. With regard to space, studies have focused on measuring electromyograms (EMGs) collected from subjects operating under both shirtsleeve and pressure-suited conditions while performing simulated space vehicle control tasks. Some studies have been performed using electrocardiograms (ECGs) or electro-oculograms (EOGs) as measures of stress and performance, etc.

This research has been performed in settings where the subject is connected to the recording device(s) through a system of cables or "hard wires." In a simulated space vehicle control environment, the subject might exert forces against some controller device, track a target, etc. This approach is suitable and works well for Earthbound experiments; however, it presents problems during a space mission. On Earth, gravity acts on the interconnecting cables and keeps them in a cohesive bundle, which minimizes interference with task performance. In most Earth testing situations, cables dangling from the subject present no problems. Such is not the case during space flight. Feedback from videotapes, postflight debriefings, and other sources indicates that the interconnecting cables tend to float and interfere with task performance. This is true of audio system cables as well as cables interconnecting experiments, crewmembers,



Figure 1. Paradigm of Telemetry System

etc. Much time is spent sweeping these cables out of the way. Shortening the cables provides an unsatisfactory solution, since this restricts crewmembers' mobility and range. These types of problems, concomitant with the HF experimenter's desire to collect data under normal conditions, make the use of hardwiring undesirable for data collection during space missions. With these considerations in mind, the NIDCS was devised.

EQUIPMENT

Meetings were held to discuss the viability and feasibility of developing a "noninterference" (telemetry) system for the transmission of physiological signals. Infrared was selected as the signal carrier because the RF channels are assigned to flight data or other critical inflight parameters, leaving few channels available for other uses. Also, IR tends to reflect off certain surfaces, which increases the probability of detection.

A system was developed by modifying existing equipment. The modified system contained both a transmitter and a receiver unit with an "end-to-end" response of 6 kHz. It was felt that this bandwidth would allow the system to handle 3 channels of physiological data. This portion of the NIDCS is referred to as the IPDL (Infrared Physiological Data Link).
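As a rough illustration of why a 6 kHz response might be considered adequate for three channels, the arithmetic below is ours, not from the report; the per-signal bandwidths are typical textbook figures for surface EMG, ECG, and EOG, not values measured on this project.

```python
# Back-of-the-envelope check (our arithmetic, not from the report) that
# three physiological channels fit within the IPDL's 6 kHz end-to-end
# response. The per-signal bandwidths are typical textbook figures, not
# values from this project.

LINK_BANDWIDTH_HZ = 6000.0
signal_bw_hz = {"EMG": 500.0, "ECG": 100.0, "EOG": 30.0}

per_channel_hz = LINK_BANDWIDTH_HZ / 3     # equal split: 2 kHz per channel
headroom = per_channel_hz - max(signal_bw_hz.values())

print(f"per-channel allocation: {per_channel_hz:.0f} Hz")
print(f"headroom over the widest signal (EMG): {headroom:.0f} Hz")
```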

Further modifications were performed on the unit to increase wearing comfort, to adapt the unit to physiological data handling, etc. A "breadboard" model was developed and tested. Testing was accomplished through the use of commercially available electrodes attached to a subject and to specially developed signal conditioners through a (body-worn) cable, which then attached to the IPDL transmitter (Figure 1). The subject then walked, flexed muscles, and simulated task performance. Data were recorded on a 4-channel strip chart recorder, with one channel used as the event marker. None of the data collected during testing were analyzed, since they had not been collected using strict scientific standards or practices; they were used only to indicate the presence of bioelectric potentials. System reliability was determined by recording signals on the strip chart both through the IPDL system and by hardwiring the subject to the recorder. Wave forms were then compared for


distortion or any other changes that might have occurred and could be attributed to the IPDL.
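The comparison described here was made visually on strip-chart recordings. As an illustration only, a numerical equivalent would compare the telemetered waveform against the hardwired reference via a normalized RMS difference; the signals and threshold below are assumptions of ours.

```python
# Our illustration only: the actual comparison was made visually on
# strip-chart recordings. A numerical equivalent compares the telemetered
# waveform against the hardwired reference via a normalized RMS
# difference, flagging distortion attributable to the IPDL.

import math

def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def distortion_ratio(reference, telemetered):
    """RMS of the sample-by-sample error, normalized by the reference RMS."""
    error = [a - b for a, b in zip(reference, telemetered)]
    return rms(error) / rms(reference)

hardwired = [math.sin(0.1 * n) for n in range(200)]   # reference signal
via_ipdl = [0.98 * x for x in hardwired]              # mild attenuation

print(f"distortion ratio: {distortion_ratio(hardwired, via_ipdl):.3f}")
```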

The system operated satisfactorily except for interference (crosstalk) between the center channel and both end channels. As a result, plans were revised from development of a 3-channel unit to a 2-channel system. Using the same approach as described above, a 2-channel prototype unit was developed, tested, and found to be satisfactory.

Experiments were designed, and a pilot study was run. Data were collected through the use of an instrument-quality tape recorder. Signals were then played into a personal computer for data storage and for waveform and statistical analyses. The results of that study are being presented separately (by F. E. Mount, NASA JSC, under whose auspices this project was funded and directed) and will not be reported herein.

FUTURE APPLICATIONS

The importance of this effort is that a means has been developed which allows data collection while reducing, if not eliminating, certain types of experimental bias, such as the guinea pig effect, role selection, response sets, and measurement as an independent variable. Further, experimental error created by the investigator (subject-induced bias due to sex, race, etc.) can be eliminated; data collection can occur in everyday settings rather than a laboratory (which may affect some data); and errors due to changes in procedures or instructions are eliminated. Finally, data can be collected that might not be collected in any other way.

Future applications for this approach are seen in the fields of medicine where restriction of a patient's movement is undesirable and in other similar situations. This technique could be utilized to make patients with certain disorders more independent by allowing them to be away from a laboratory or medical environment while still allowing for monitoring of homeostatic functions on a continuous basis.

Other research areas which need to be addressed are development of better, more natural, and more comfortable electrodes; digitizing signals at the transmitting unit to compress data for storage; etc. This is only a short, and by no means exhaustive, list of areas for future research.

REFERENCES

Due to space limitations, the following is an abbreviated list of references.

1. Basmajian, J. V. Muscles Alive (Williams and Wilkins Co., 1974).

2. Callaway, E. Brain Electrical Potentials and Individual Psychological Differences (Grune and Stratton, 1975).

3. Gale, A. Some EEG Correlates of Sustained Attention. Vigilance (Plenum Press, 1977).

4. Gould, J. D. Eye Movements During Visual Search. IBM Research Report RC 2680 (Yorktown Heights, NY, 1969).

5. Kubis, J. F., et al. Time and Work on Skylab Missions 2, 3 and 4. Biomedical Results From Skylab. NASA SP-377 (1977).

6. LaFevers, E. A., and Hursta, W. Final Report: Power Spectral Density Analysis of the Skylab 3 Data. NASA Report (JSC).

7. LeVeau, B. Biomechanics of Human Motion (W. B. Saunders, 1977).

8. McKay, R. S. Biomedical Telemetry (John Wiley, 1965).

9. NASA Workshop on Advances in NASA-Relevant, Minimally Invasive Instrumentation (Pacific Grove, CA, April 1984).


10. Nowlis, D. P., Wortz, E. C., and Waters, H. TEKTITE II, Habitability Research Program. AiResearch Mfg. Co. Report 7-6192-1 (January 1972).

11. Radloff, R., and Helmreich, R. Groups Under Stress: Psychological Research in SEALAB II (Appleton-Century-Crofts, 1968).

12. Weaver, C. S. Wearable Blood Pressure and ECG Recording System. Stanford Research Institute Interim Report (February 1976).


PERFORMING SPEECH RECOGNITION RESEARCH WITH HYPERCARD

Chip Shepherd
Lockheed Engineering and Sciences Company

Research directed by Barbara Woolford, Flight Crew Support Division, NASA JSC.

PURPOSE

The purpose of this paper is to describe a HyperCard-based system for performing speech recognition research and to instruct human factors professionals on how to use the system to obtain detailed data about the user interface of a prototype speech recognition application.

BACKGROUND

The development of the first Macintosh-based speech recognizer (Voice Navigator by Articulate Systems, Inc.) has enabled engineers at the NASA Johnson Space Center (JSC) to develop rapid-prototype speech recognition interfaces for space applications with HyperCard, an information management software package. A layout of the required hardware is presented in Figure 1.

Like most speech recognizers, the Voice Navigator allows a user to define a unique vocabulary, assign the computer action(s) to be associated with each vocabulary word, and record a personalized voice pattern for each word. The Navigator goes a step farther than most recognizers, however, because it allows access to the various recognition parameters generated within the machine while it is in an operating mode.

To obtain data on speech recognition parameters while the unit is being operated, engineers have taken advantage of the fact that HyperCard's command language may be expanded through the implementation of external commands (XCMDs), which are user-created subroutines written in assembly language or C.

In JSC's speech recognition research, three types of recognition data were required each time a command was spoken: the recognized word, the loudness of the speech, and the "confidence score," which is an internally calculated measure of how closely the spoken word compares to a pre-recorded sample; the score is expressed as a number from 1 to 100, with 100 being a perfect match. Using specifications from JSC contractor engineers, special HyperCard XCMDs were developed by Articulate Systems to obtain speech recognition data and assist in JSC's research applications. By integrating the XCMDs into experiments, project engineers have learned how to use the XCMDs to perform quantitative speech recognition research.

XCMD DESCRIPTIONS

Articulate Systems included twelve XCMDs in a HyperCard stack called VoiceTalk, created to assist with the completion of the speech recognition research. Seven of the twelve XCMDs were designed to control necessary file management tasks. The other five have been applied in conducting research and are described below.

"Vocabulary" (name of vocabulary file) - Writes the active vocabulary list into the local HyperCard variable 'it.' Thus the subject and/or researcher may be informed at all times which vocabulary set the recognizer is listening for.

"Collect" (vocabulary word) - Collects new voice samples of the specified vocabulary word. Thus, the researcher may record new voice samples just before, or even during, an experiment session.

"Macro" (vocabulary word) - Returns the command, or macro, associated with the specified vocabulary word. The macro is usually a string of text, but can also be a mouse click or a series of HyperTalk commands, complete with returns, tabs, and linefeeds.

Figure 1. Recommended minimum hardware for HyperCard-based speech recognition research system: Apple Macintosh (SE or II series) with a hard drive (10 Mb or more) and 2.5 Mb RAM minimum, running System 6.0.3, plus the Articulate Systems Voice Navigator (includes Voice Driver software and microphone).

"Listen" - Prepares the recognizer to accept speech input.

"Recognize" - When an utterance is detected, this command returns the name of the recognized word. In addition, several associated speech recognition parameters are stored in reserved variables. The most valuable of these associated parameters is the confidence score, which is stored in the reserved variable "Confidence." The loudness of the speech is reported in decibels in the reserved variable "Amplitude."

GETTING THE SYSTEM TO WORK FOR YOU

The present Voice Navigator, which is commercially available, does not include the HyperCard XCMDs as part of its standard package. This is because the XCMDs were created while the Voice Navigator was a prototype unit. However, the XCMDs may be easily obtained from Articulate Systems and will operate with available hardware after a few minor software adjustments are made by the user.

To utilize the XCMDs in HyperCard applications, the researcher should follow the procedures outlined below. These procedures will work with Voice Navigator System 1.0.1:

(1) Obtain a Voice Navigator from Articulate Systems, specifically requesting VoiceTalk.

(2) After you have connected the Voice Navigator and installed its software system files into the Macintosh as described in the Voice Navigator User's Manual, use the "Duplicate" command to make copies of the "Voice Control" and "Voice Driver" files that reside in the System folder. You should now have two new files named "Copy of Voice Control" and "Copy of Voice Driver."

(3) Change the name of the file "Copy of Voice Control" to "Dragon" and "Copy of Voice Driver" to "Dragon.LOD." You must do this to get the XCMDs to operate because, as stated before, the XCMDs were created when the Voice Navigator was in the prototype stage and, at that stage, the driver files were named "Dragon" and "Dragon.LOD." The Macintosh must be rebooted before the new files can be accessed, so you may as well do so before continuing to the next step.

(4) Install the HyperCard XCMDs onto whichever HyperCard stack(s) you will be using for speech recognition applications. You may accomplish this with either the ResEdit program or the "install" feature provided with the VoiceTalk stack.

(5) Now that the XCMDs are installed, you may include them in any HyperTalk script on the stack. Examples of how to integrate the commands into HyperTalk programs are included with the VoiceTalk package.

Articulate Systems reports that improved versions of the HyperCard stacks will be available in the near future, but the procedures just described should enable presently available components to function.

DISCUSSION

Using the XCMDs created by Articulate Systems, engineers at JSC have been able to extract important information about the speech recognition application unobtrusively in real time as the user operates the application. Information about the recognized word, its confidence score, the loudness of the speech, and the elapsed time may be recorded in an invisible background data field that is stored and analyzed after the user has completed the session. With this system, the collection of the speech parameter data has not affected the user interface and has not added noticeable time delays to the execution of the interface programming.
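The per-utterance record just described (word, confidence score, amplitude, elapsed time) maps naturally onto a simple log structure. A minimal Python sketch follows; it is illustrative only, since the actual system stored records in a HyperCard background field via HyperTalk, and the class and field names here are invented:

```python
import time

class RecognitionLog:
    """Accumulates one record per recognized utterance, mirroring
    the invisible background data field described in the text:
    recognized word, confidence score, amplitude (dB), and
    elapsed session time."""
    def __init__(self):
        self.start = time.monotonic()
        self.records = []

    def record(self, word, confidence, amplitude_db):
        # Elapsed time is measured from the start of the session.
        elapsed = time.monotonic() - self.start
        self.records.append((word, confidence, amplitude_db, round(elapsed, 2)))

log = RecognitionLog()
log.record("CONNECT", 94, 62.5)
log.record("END OF PROCEDURE", 81, 60.1)
print(len(log.records))  # → 2
```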

EXAMPLE APPLICATION

The described system has been used extensively at JSC in examining speech as a means of controlling a computer display in the extravehicular environment. Future space suits may be equipped with a Helmet-Mounted Display (HMD), capable of providing substantial amounts of text, graphics, and video information. Controlled by spoken commands, this system would provide hands-free access to information for the suited astronaut.

A test was conducted with a prototype HMD system at NASA-JSC by having subjects use the HMD-based information system to construct an electronic circuit (Shepherd, 1989). The speech recognizer was active at all times, and four types of data were collected each time a word was recognized: the word, the confidence score, the amplitude, and the elapsed time for the session (see Figure 2). A segment of the HyperTalk script used to perform these functions is included at the end of this paper (see EXAMPLE STACK SCRIPT).

Figure 2. HyperCard screen from the HMD construction task experiment, with the data screen shown. Visible procedure steps include "5 CONNECT BLUE WIRE BETWEEN LEADS 18 AND 62" and "6 END OF PROCEDURE."

A list of the recognized words was correlated with the elapsed times and compared to the videotape of the experiment, enabling researchers to easily classify each entry as a correct recognition or an error. Upon further analysis of the errors, it was found that the most common type of error occurred when the recognizer had inexplicably registered a command from the subject's conversational speech.

The confidence scores of the errors were analyzed and compared to those of the correct recognitions. The patterns were very different. While almost all of the correct recognitions had scores above 70, the errors were scattered throughout the range of scores, indicating that the errors had occurred randomly.

The amplitudes of the errors were analyzed and compared to those of the correct recognitions to see if the errors were spoken more loudly or more quietly than commands. No significant differences were found, indicating that the errors were spoken just as loudly as the commands.


It should be noted that the XCMDs have not only been useful in collecting the data but are also helping to improve the interface itself. Because of the difference in confidence score patterns between correct recognitions and errors, system performance has been increased by setting a confidence score threshold in the system (i.e., each time a word is recognized, the corresponding confidence score, which is generated by the XCMD, is checked and, if it is not 70 or above, the command is ignored). With this threshold in place, many errors are screened out while almost no correct recognitions are affected. A similar threshold for the amplitude could have been put into place if a difference in the patterns had been found.
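The thresholding logic described above is straightforward to express in any language. A minimal Python sketch (the actual implementation was a HyperTalk script calling the XCMDs; the function name, threshold constant, and event tuples here are illustrative only):

```python
THRESHOLD = 70  # minimum confidence score to accept a recognition

def screen(events, threshold=THRESHOLD):
    """Keep only (word, confidence) events at or above the threshold,
    discarding likely false triggers from conversational speech."""
    return [(word, score) for word, score in events if score >= threshold]

# Hypothetical recognition events from a session:
events = [("CONNECT", 92), ("PAGE", 41), ("END", 88), ("BLUE", 12)]
print(screen(events))  # → [('CONNECT', 92), ('END', 88)]
```

Because correct recognitions cluster above 70 while errors scatter across the full range, this single comparison removes most false triggers at little cost in missed commands.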

The described system has been beneficial in studying speech control for space applications, but it can also be employed in evaluating prototype interfaces in any of the leading speech recognition fields, including medicine, defense, products for the handicapped, and consumer systems.

EXAMPLE STACK SCRIPT

Explanation: The first script basically instructs the recognizer to keep listening until it hears a vocabulary word. Once a vocabulary word registers, a second script would be triggered. The script for "Mike_On" is provided as a sample. This script "activated" the microphone to accept a page forward/backward command. However, if the confidence score was not 30 or more, the script instructs the recognizer to ignore the command it just heard and to start listening for another.

on OpenCard
  global RecognizedWord, StartTime
  put the seconds into StartTime
  put empty into RecognizedWord
  listen
  repeat until the mouseClick
    put recognize() into RecognizedWord
    if RecognizedWord is not empty then
      get macro(RecognizedWord)
      do it
    end if
  end repeat
end OpenCard

on Mike_On
  global RecognizedWord, StartTime, Confidence
  global Amplitude
  if Confidence > 30 then
    put RecognizedWord && Confidence && Amplitude ¬
      && ((the seconds) - StartTime) & return ¬
      after bkgnd field "Recording"
  else
    put 3 into dsd_state -- i.e., ignore the command
  end if
end Mike_On

ACKNOWLEDGEMENTS

The author would like to thank Barbara Woolford and Jose Marmolejo of NASA-JSC and Tim Morgan of Articulate Systems for their support in this research.

REFERENCES

1. Articulate Systems, "Voice Navigator User's Manual," 1989.

2. Cohen, A. D. "Rapid Prototype Development with HyperCard," Workshop at the 33rd Annual Meeting of the Human Factors Society, October 1989.

3. Shepherd, C. K., Jr. "Human Factors Investigation of a Speech-Controlled Bench-Model Extravehicular Mobility Unit Information System in a Construction Task," NASA document no. JSC-23632, Lockheed Engineering and Sciences Company document no. LESC-26799, May 1989.

4. Shepherd, C. K., Jr. "The Helmet-Mounted Display as a Tool to Increase Productivity During Space Station Extravehicular Activity," Proceedings of the 32nd Annual Meeting of the Human Factors Society, October 1988.

5. Weimer, J. "HyperCard for Human Factors Applications," Workshop at the 32nd Annual Meeting of the Human Factors Society, October 1988.



ILLUMINATION REQUIREMENTS FOR OPERATING A SPACE REMOTE MANIPULATOR

George O. Chandlee and Randy L. Smith
Lockheed Engineering and Sciences Company

Charles D. Wheelwright
NASA Lyndon B. Johnson Space Center

INTRODUCTION

Critical issues and requirements involved in illuminating remote manipulator operations in space help establish engineering designs for these manipulators. A remote manipulator is defined as any mechanical device that is controlled indirectly or from a distance by a human operator for the purpose of performing potentially dangerous or hazardous tasks to increase safety, reliability, and efficiency. Future space flights will rely on remote manipulators for a variety of tasks including satellite repair and servicing, structural assembly, data collection and analysis, and performance of contingency tasks. Carefully designed illumination of these manipulators will assure that these tasks will be completed efficiently and successfully.

Studies concerning the influence of illumination on operation of a remote manipulator are few. Available results show that illumination can influence how successfully a human operates a remote manipulator.

Previous studies have indicated that illumination should be in the range of 400-600 footcandles, the currently recommended range for fine to medium assembly work tasks [1, 2, 3, and 4]. However, on-orbit operations have demonstrated that effective operation can be performed under illumination levels between 100-500 footcandles provided that specularity and glare are minimized. Increasingly complex and finer work tasks would necessarily require an increase in illumination [1, 2, and 3]. Brightness should not exceed 300-450 footlamberts (roughly equivalent to a zenith sky or slightly brighter [3]) so as to eliminate glare and possible blinding of the operator. The reflectance percentage of the manipulator and target should be about 75%, if possible [5]. The beneficial aspect of high specularity is that detection distances may be as great as 5 miles during rendezvous operations [5]. Reflectance is an especially critical illumination-related parameter because a target object and a manipulator should be visible to a human operator from distances of at least 30-40 meters.
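The ranges above are given in U.S. customary photometric units. As a convenience for readers working in SI, a small conversion sketch using the standard factors (1 footcandle ≈ 10.764 lux; 1 footlambert ≈ 3.426 cd/m²); the function names are illustrative:

```python
FC_TO_LUX = 10.764      # 1 footcandle ≈ 10.764 lux (illuminance)
FL_TO_CD_M2 = 3.426     # 1 footlambert ≈ 3.426 cd/m^2 (luminance)

def fc_to_lux(footcandles):
    """Convert illuminance from footcandles to lux."""
    return footcandles * FC_TO_LUX

def fl_to_cd_m2(footlamberts):
    """Convert luminance from footlamberts to candela per square meter."""
    return footlamberts * FL_TO_CD_M2

# The 400-600 fc illumination recommendation, expressed in SI:
print(round(fc_to_lux(400)), round(fc_to_lux(600)))      # → 4306 6458
# The 300-450 fL brightness ceiling, expressed in SI:
print(round(fl_to_cd_m2(300)), round(fl_to_cd_m2(450)))  # → 1028 1542
```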

Remote manipulators and their task and target objects are operated optimally when direct and indirect glare is eliminated and lighting tends to be diffuse. Several ways exist to reduce glare: position light sources outside the operator's line of sight, use low-intensity light sources, increase luminance of the area around glare sources (direct glare), position light sources so a minimum amount of light is directed toward the eyes to prevent frontal and side blinding, and use luminaires with diffusing or polarizing lenses. Reflective surfaces should diffuse the light (flat paint, nongloss paper, and textured finishes). Illuminance levels should be kept as low as possible, and indirect lighting should be used [2, 4]. A neutral density filter with a transmission of 20 percent can help reduce general reflected glare, except for the very harsh specular type, to an acceptable level [6].

Design of lamp lenses is an important factor to be considered in establishing lighting requirements. Lenses distort light distribution, can otherwise alter viewing conditions, and may contribute to poor operator performance. Previous studies indicate that a planar-planar lens is probably the best because proper illumination levels will be maintained and contrast ratios are adequate [7]. Planar-planar lenses are currently used in Space Shuttle cargo bay lights.

MAJOR ISSUES

Literature surveys, evaluation of published experimental results, and consideration of requirements in Earth-based operations suggest that seven major issues may be identified as major contributors to establishing illumination requirements for a remote manipulator in space. These seven issues are listed and discussed below.

(1) Sun angles and their influence on reflectance/viewing characteristics of remote manipulators. Wheelwright [5, 6] has shown experimentally, with the use of scale models, the effects that sun angles have on the operation of the remote manipulator system on the Space Shuttle. Effects of sun angles depend largely on the viewing configuration. Remote manipulator teleoperations should avoid direct sun viewing to prevent blinding of the operator. Specular glare, veiling lumens, and extremely bright areas occur at various sun angles. Although objectionable, most on-orbit operations can still be completed. Reorientation of the viewing angle by no more than 5 degrees will, however, produce more favorable lighting conditions [6].

(2) Reflectance properties of the manipulators under solar illumination and artificial lighting. Comparison of the reflectance properties of manipulators under solar and artificial lighting is important to establish when and how each lighting regime can be used to advantage for optimum performance, to determine proper artificial light sources, and to determine what special filters might be necessary to enhance recognition and minimize specularity. Because solar light is collimated, special considerations may involve reflectance characteristics so as to minimize deleterious edge effects encountered in on-orbit operations.

(3) Recognition and reflectance properties of task structures and target objects. Recognition of task structures and target objects is important in order to optimize operations and to reduce hazards associated with incorrect identification. Target-background contrast ratios of at least 0.6 should be used to attain optimum size discrimination in two-target tasks. Size discrimination performance depends on target-background contrast. With contrast ratios of at least 0.6, the linear dimension size discrimination threshold is on the order of 0.10. A reduced contrast ratio of 0.125, however, raises the threshold value to 0.30. Brightness discrimination between two targets is enhanced for contrast values of 0.25 or greater [8]. Some tasks may involve recognition of alphanumeric characters. Character density, character contrast, viewing distance, and monitor size are some of the variables that can affect correct identification of alphanumeric characters [8].
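The contrast thresholds quoted above can be computed from target and background luminances. A hedged sketch follows; note that the article does not define its contrast measure, so the convention used here is an assumption:

```python
def contrast(l_target, l_background):
    """Target-background contrast from two luminances.
    ASSUMPTION: the article does not state its convention; this
    sketch uses C = |Lt - Lb| / max(Lt, Lb), which ranges from
    0 to 1 like the thresholds quoted in the text."""
    hi = max(l_target, l_background)
    lo = min(l_target, l_background)
    return (hi - lo) / hi

# A 100 fL target against a 40 fL background just meets the
# suggested 0.6 minimum for optimum size discrimination:
print(contrast(100, 40))  # → 0.6
```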

(4) Search and rendezvous requirements using running lights of a free-flying remote manipulator. Establishing search and rendezvous requirements is critical because docking with a remote, free-flying manipulator will be an essential activity in on-orbit operations. Remote manipulator docking and operation require that the operator be able to acquire depth and range information from the visual system. Range estimation depends on target size, brightness, and contrast [8]. Configuration of running lights is important because recognition of form and orientation of axes will be critical for successful rendezvous and docking operations.

(5) Tracking and recognition of remote manipulators by direct vision and monitor viewing. The issue of direct vision versus TV monitor viewing remains unresolved. Evidently, each viewing method has advantages under certain conditions. Viewing is the primary form of feedback to the operator regarding manipulator position, orientation, and rate of movement. The illumination system and viewing system are interdependent and together result in operator perception of manipulator motion.

(6) The influence of light intensity, position, and type on operator performance. A few studies have measured how operator performance is influenced by light intensity, position, and type. Operator physical and mental workload may be dramatically affected by these parameters. Onboard lighting is effective for close-in illumination of shapes and spaces hidden in deep shadow. A variety of lights differing in illumination output, power consumption, spectral specularity, beam width, and efficiency will probably be necessary for on-orbit operations. Results suggest that performance is best maximized by tailoring light intensity, position, and type to the specific task [9].

(7) The effects of shadow patterns on operator interpretation and performance. The interpretation of shadow patterns could have a significant effect on operator performance, but just what these effects are remains to be determined. Local task-specific lighting may be necessary to overcome some of the problems associated with shadow patterns. Total elimination of shadows appears impossible, however, and more research is needed to determine the cognitive processes involved in shadow interpretation.

All seven issues must be resolved in the context of realistically achievable physical conditions in space. Perhaps two of the most limiting conditions will be power availability (the power requirements for the use of remote manipulators) and thermal conditions. Power is a necessary but limited commodity in a space environment and will be a restrictive factor in remote manipulator operations. Illumination designs and hardware must also account for potential problems in heat dissipation.

DISCUSSION

Identification, characterization, and analysis of each of the seven major issues will contribute to engineering design plans for a successful human operator-remote manipulator interface. An initial approach to determine the relative importance of each issue with respect to remote manipulator operation is to establish some critical visual activities and how they are related to each of the seven issues. A suggested relationship among the seven issues discussed in this paper and six critical visual activities as defined by Huggins, et al. [10] is displayed in Table 1. Critical visual activities were defined and studied by Huggins, et al. [10] in their evaluation of human visual performance for teleoperator tasks.

In this study, we define acuity as keenness of perception, discrimination as the ability to distinguish among objects, and recognition as the ability to identify an object. Using these definitions, we suggest which of the six critical visual activities are most likely to be affected by each of the seven issues. In no way is this intended to mean that some activities are not affected by all the issues; indeed, each issue will have some influence, albeit small in some instances, on each activity. Instead, we infer some critical visual activities to be more vulnerable to anticipated specifications defined by the issues than others. Expectations will most probably change as new data become available, so the suggested relationships given in Table 1 should not be considered as fixed. Rather, the given relationships are intended only as a guideline to help plan, evaluate, and interpret future studies and, possibly, designs.

Once each illumination parameter has been specified, continuing human factors engineering studies should evaluate the various kinds of mental models used by an operator. Mental models can be used to account for human interactions with a remote manipulator by helping to define the cognitive processes involved in human-remote manipulator interactions [11]. Definition of mental models used by an operator of a remote manipulator could help establish lighting arrangement and intensity, brightness and illumination requirements, and mental processes associated with shadow interpretation.

The major issues identified in this study may also be helpful in defining illumination requirements in other applications using indirect human operation and in creating optimum engineering designs for remote manipulators used in undersea tasks, assembly-line work, and the nuclear industry.


TABLE 1

Summary of suggested relationships among the illumination issues discussed in this paper and the critical visual activities evaluated by Huggins, et al. [10]. (See text for definitions of terms and issues.) Each X shows which critical visual activities are most likely to be affected by the issues discussed in the text.

CRITICAL VISUAL ACTIVITIES (columns): Acuity; Size Estimation; Form Discrimination; Brightness Discrimination; Pattern Recognition; Depth/Distance.

ILLUMINATION ISSUES (rows): Reflectance of remote manipulator; Solar vs. artificial lighting; Reflectance of target; Effect of running lights; Direct vision vs. TV; Lighting parameters; Effect of shadows.

SUMMARY

(1) Preliminary guidelines for illumination requirements in remote manipulator tasks in space are suggested: illumination should be in the range of 400-600 footcandles (although under some circumstances a range of 100-500 footcandles could suffice), brightness should not exceed 300-450 footlamberts, reflectance of target/task objects should be about 75% (we suggest a range of between 50-75%), and an optimum contrast ratio between target and background is at least 0.6.

(2) Seven major issues related to illumination of a remote manipulator in space are discussed: the influence of sun angles on reflectance/viewing characteristics of remote manipulators, reflectance properties of the remote manipulator under solar and artificial lighting, task/target object recognition and reflectance properties, rendezvous requirements, tracking and recognition of remote manipulators by direct vision and TV monitors, lighting parameters (such as intensity, position, and type), and the effect of shadow patterns on operator interpretation and performance.

(3) Critical visual activities, such as acuity, discrimination processes, and recognition tasks in the optimum operation of a remote manipulator, are known to be influenced by the illumination environment. Future research intended to measure and interpret fully the influence of illumination should use scale-model and full-mockup testing to evaluate human operator performance under various illumination regimes.

ACKNOWLEDGEMENTS

We thank Lockheed Engineering and Management Services Company and the Lyndon B. Johnson Space Center, National Aeronautics and Space Administration, for supporting this research under contract NAS 9-17900 and for granting the opportunity to publish our results.

REFERENCES

1. W. H. Cushman and B. Crist, Illumination, in: G. Salvendy (Ed.), Handbook of Human Factors, John Wiley and Sons, New York, 1987, pp. 670-695.

2. R. D. Huchingson, New Horizons for Human Factors in Design, McGraw-Hill Book Company, New York, 1981.

3. P. R. Boyce, Human Factors in Lighting, Macmillan, New York, 1981.

4. W. F. Grether and C. Baker, Visual presentation of information, in: H. P. Van Cott and R. G. Kinkade (Eds.), Human Engineering Guide to Equipment Design, U.S. Government Printing Office, Washington, 1972, pp. 41-121.

5. C. D. Wheelwright, Illumination evaluation using 1/10 scale model of payload bay and payloads. Phase II. Payload bay artificial lighting evaluation, Johnson Space Center, Internal Note 77 EW 4, 1977.

6. C. D. Wheelwright, Illumination evaluation using 1/10 scale model of payload bay and payloads. Phase I. Solar illumination evaluation, Johnson Space Center, Internal Note 77 EW 2, 1977.

7. N. Shields, Jr. and D. E. Henderson, Earth Orbital Teleoperator System Evaluation: 1979-1980 Test Report, Essex Corporation, Report Number H-81-01, 1981.

8. N. Shields, Jr., Francis Piccione, M. Kirkpatrick III, and T. B. Malone, Human Operator Performance of Remotely Controlled Tasks: A Summary of Teleoperator Research Conducted at NASA's George C. Marshall Space Flight Center between 1971 and 1981, Essex Corporation, Report Number H-82-01, 1982.

9. C. D. Wheelwright, Teleoperator retrieval system: Lighting evaluation, Johnson Space Center, Internal Note 79 EW, 1979.

10. C. T. Huggins, T. B. Malone, and N. L. Shields, Jr., Evaluation of human operator visual performance capability for teleoperator missions, in: E. Heer (Ed.), Remotely Manned Systems: Exploration and Operation in Space, Proc. of the First National Conference, California Institute of Technology, September 13-15, 1972, 1973, pp. 337-350.

11. R. L. Smith and D. J. Gillan, Human-telerobot interactions: Information, control, and mental models, Proc. of the Thirty-first Ann. Mtg. Human Factors Society, Volume 2, 1987, pp. 806-810.

72


PREVIOUS EXPERIENCE IN MANNED SPACE FLIGHT: A SURVEY OF HUMAN FACTORS LESSONS LEARNED

George O. Chandlee
Lockheed Engineering and Sciences Company

Barbara Woolford

NASA Lyndon B. Johnson Space Center


INTRODUCTION

Previous experience in manned space flight programs can be used to compile a data base of human factors lessons learned for the purpose of developing aids for the future design of inhabited spacecraft. The objectives of this study are to gather information available from relevant sources, to develop a taxonomy of human factors data, and to produce a data base that can be used in the future by those involved in the design of manned spacecraft operations. A study is currently underway at the Johnson Space Center with the objective of compiling, classifying, and summarizing relevant human factors data bearing on the lessons learned from previous manned space flights. The research reported here defines sources of data and methods for collection, and proposes a classification for human factors data that may be a model for other human factors disciplines.

PERSPECTIVE

Three major manned space programs have been conducted since the mid-1960s: the Apollo, Skylab, and Space Shuttle programs. Each program has contributed significant new data to the field of human factors and to gaining a greater understanding of how humans operate, function, behave, and adapt to the environment encountered in space. Because of various circumstances, including time constraints, human factors data collected during the past two decades of manned space flight have been transferred in such a way that the data remain scattered in various locations and do not reside in a single central location accessible to interested individuals, including those who might be involved in future advanced spacecraft design. Difficulties encountered in the systematic collection of past data are compounded because new data and technology appear frequently and must also be stored for easy use by appropriate individuals.

METHODS

The technique for data collection involves identifying information sources, including technical reports, films or video tapes, minutes of meetings, and records of in-flight and postflight debriefings. A taxonomy is imposed on these data, and the taxonomy may be a model for other human factors research activities. Data are transferred to an appropriate data archival/retrieval system that serves as a resource from which individuals involved in spacecraft design can draw relevant human factors data. Data sources include documents, published and unpublished technical reports, individuals, transcripts of meetings, audio and visual recording media, and computer-stored information, and may be supplemented with information acquired in interviews or from questionnaires. Systematic collection of data from these identified sources involves establishing a way of coding, tabulating, and cataloging the data before incorporating them into a data management application for use. Since human factors data are so diverse, a scheme for classifying data helps impose a meaningful structure on the data and renders the data more easily incorporated into an appropriate data base.

Commercially available software packages have been evaluated as candidate applications for the development of the computer-based retrieval system. These packages generally fall into two categories: data base management systems and hypertext-based text/graphics handling tools. Data base management systems correspond to the traditional linear and hierarchical method of storing data. This form of data archival and retrieval system does not take advantage of complex interconnected links between textual/graphical data and, as a result, data browsing can be cumbersome and slow. Hypertext (and hypermedia) systems allow for complex organization of data (text and graphics) by allowing machine-supported references from one data unit to another, taking advantage of the ability of a computer to perform interactive branching and dynamic display (Conklin, 1987). In this fashion, hypertext systems allow a user to jump from one data unit to another through links. Data browsing becomes simple and efficient.
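The jump-along-a-link style of browsing described above can be illustrated with a minimal sketch. This is not code from any system evaluated in the study; the class, record titles, and link labels below are invented for illustration only.

```python
# Minimal sketch of hypertext-style browsing: each data unit ("node")
# carries machine-supported links to related units, so a user jumps
# directly from one unit to another instead of scanning a linear file.
# All names here are hypothetical.

class Node:
    def __init__(self, title, text):
        self.title = title
        self.text = text
        self.links = {}              # link label -> target Node

    def link(self, label, target):
        self.links[label] = target

# Two cross-linked "lessons learned" units.
restraints = Node("Mobility/restraint", "Foot restraints reduced fatigue ...")
hygiene = Node("Personal hygiene", "Water containment during shaving ...")
restraints.link("see also: hygiene station restraints", hygiene)
hygiene.link("see also: general restraint lessons", restraints)

# Browsing is a jump along a link, not a sequential search.
current = restraints
current = current.links["see also: hygiene station restraints"]
print(current.title)   # -> Personal hygiene
```

A data base management system, by contrast, would answer the same question only through a query over a linear table, with no machine-supported cross-reference between the two records.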

PROPOSED CLASSIFICATION SCHEME

The need for a classification system of human factors data has been recognized for years (Melton and Briggs, 1960), yet attempts to produce such a classification have been few. The main reason appears to be the belief that such a task would require enormous amounts of time and effort because of the quantity of literature and data available. The classification proposed here relates to human-machine interaction in the context of manned space flight, but some aspects should be applicable to other endeavors.

The classification proposed here builds upon a previous one used to systematically categorize Skylab man-machine data. The groupings are operationally based. The following 19 categories are suggested to classify human factors lessons learned in previous manned space flight programs: Architecture; Communications; Crew activities; Environment; EVA-suited activities; Food management; Garments; Housekeeping; Locomotion; Logistics management (including failure management and the logistics and procedures involved in coping with system anomalies); Maintenance (scheduled and unscheduled); Manual dexterity; Mobility/restraint; Off-duty activity; Personal hygiene; Personal equipment; Physiological data; Tool inventory; and Waste management.
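One way to picture the two-level scheme of broad operational categories holding narrower subcategories is as a simple mapping. The sketch below is illustrative only: the subcategory names and the keyword-matching rule are assumptions, not part of the study's actual taxonomy.

```python
# Sketch of a two-level operational classification: broad categories
# (a subset of the 19 listed above) mapped to narrower subcategories.
# Subcategory names and the matching rule are invented for illustration.

taxonomy = {
    "Architecture": ["interior layout", "translation paths"],
    "Communications": ["air-to-ground", "intercom"],
    "Maintenance": ["scheduled", "unscheduled"],
    "Waste management": ["collection", "stowage"],
}

def classify(record_keywords, taxonomy):
    """Return (category, subcategory) pairs whose subcategory name
    appears among the keywords coded for a lessons-learned record."""
    hits = []
    for category, subcategories in taxonomy.items():
        for sub in subcategories:
            if sub in record_keywords:
                hits.append((category, sub))
    return hits

# A record coded with the keyword "unscheduled" files under Maintenance.
print(classify({"unscheduled", "torque"}, taxonomy))
# -> [('Maintenance', 'unscheduled')]
```

In a real system each subcategory would in turn hold the coded, tabulated records described in the Methods section.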

Within each of these categories are other, less broad and more specific categories. The number of categories may appear large, yet previous studies have had to use many categories. Meister and Mills (1971), for example, created 18 categories in their attempt to determine requirements for, and the elements of, a human performance reliability data base. The number of categories must be large, given the large number of activities encompassed by human-machine interaction. Other workers (Chiles, 1967; Christensen and Mills, 1967) have indicated the difficulties involved in establishing a taxonomy of human factors.

DISCUSSION

Previous studies attempting to classify human factors data (Fleishman, Kinkade, and Chambers, 1968; Chambers, 1969; and Meister and Mills, 1971) have relied less on operational categories and more on behavioral and performance criteria. Meister and Mills (1971), for example, developed a taxonomy based on functional behavior and established categories such as auditory perception, tactile perception, and motor behaviors. This taxonomy reflected the goal of the study: to develop a data base of human behavior (behavioral data acquired during actual experimentation) for predicting human-machine performance. The taxonomy presented here reflects a different purpose: to develop a data base of human operator experience (operational experience data acquired as the result of various activities in space) for the purpose of providing a source of data to be used in the future design of manned-spacecraft operations.

Evaluations of presently available text/graphics software applications suggest that certain criteria must be considered when a data archival and retrieval system such as this one is developed. One fundamental criterion is the degree to which the system successfully retrieves relevant articles. The precision ratio (Lancaster, 1968) is one way of measuring this success. The ratio, developed in the context of information theory, is defined as R/L x 100, where R is the number of relevant documents retrieved in a search and L is the total number of documents retrieved in a search. A retrieval application should have a high precision (at least 80%) to prove useful. To achieve such reliability, the design of the application software and of the data itself are critical. Flexibility, nearly unlimited growth potential, and the ability to effectively handle the increasingly complex links established within the network are some attributes a viable application should possess.
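Lancaster's precision ratio can be computed directly from the two counts. A short sketch follows; the function name and the example numbers are illustrative only.

```python
def precision_ratio(relevant_retrieved, total_retrieved):
    """Precision ratio (Lancaster, 1968): R / L x 100, where R is the
    number of relevant documents retrieved and L is the total number
    of documents retrieved in a search."""
    if total_retrieved == 0:
        raise ValueError("no documents were retrieved")
    return relevant_retrieved / total_retrieved * 100

# A search returning 20 documents of which 17 are relevant clears the
# suggested 80% threshold; one returning 12 relevant of 20 does not.
print(precision_ratio(17, 20))   # -> 85.0
print(precision_ratio(12, 20))   # -> 60.0
```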

Results presented here have significance for the establishment and design of a Space Station, lunar base, and Martian colony. The methods developed in the collection and systematic archival of human factors data during the course of this study may also bear directly on questions of ways to systematically compile and characterize human factors data in other areas of research, including the design of aircraft, automobiles, manned sea-faring vessels, and other similar activities. Research in human factors engineering has escalated in recent years, and tremendous amounts of data are being generated (see, for example, Huchingson, 1981, or Woodson, 1981). Appropriate archival and retrieval systems will need to be developed to store these data.

Results of this work could have at least three possible applications: (1) other workers with large data sets might be able to use the collection methods developed in this study; (2) the taxonomy of human factors data developed here will be applicable to other human factors research and might be used in instances where large volumes of unsystematically collected data exist; and (3) future research in the definition and design of human factors requirements may be able to benefit from the methods, taxonomy, and data organization techniques developed in this study.

ACKNOWLEDGEMENTS

We wish to thank the Lockheed Engineering and Sciences Company and the Lyndon B. Johnson Space Center, National Aeronautics and Space Administration, for supporting this research under contract NAS 9-17900 and for granting the opportunity to publish the results.


REFERENCES

1. Chambers, A. N. (1969). Development of a taxonomy of human performance: A heuristic model for the development of classification systems. Report 4A, American Institutes for Research, Silver Spring, MD, October 1969.

2. Chiles, W. D. (1967). Methodology in the assessment of complex performance -- Discussion and conclusions. Human Factors, 9, 385-392.

3. Christensen, J. M., and Mills, R. G. (1967). What does the operator do in complex systems? Human Factors, 9, 329-340.

4. Conklin, J. (1987). A survey of hypertext. MCC Technical Report Number STP-356-86, Rev. 2.70.

5. Fleishman, E. A., Kinkade, R. G., and Chambers, A. N. (1968). Development of a taxonomy of human performance: A review of the first year's progress. Technical Progress Report 1, American Institutes for Research, 59.

6. Huchingson, R. D. (1981). New horizons for human factors in design. New York: McGraw-Hill Book Company.

7. Lancaster, F. W. (1968). Information retrieval systems. New York: Wiley.

8. Meister, D., and Mills, R. G. (1971). Development of a human performance reliability data system. In Annals of Reliability and Maintainability, 1971. Tenth Reliability and Maintainability Conference, Anaheim, CA, June 27-30, 1971, 425-439.

9. Melton, A. W., and Briggs, G. E. (1960). Engineering psychology. In Annual Review of Psychology, 11, 71-78.

10. Woodson, W. E. (1981). Human factors design handbook. New York: McGraw-Hill Book Company.


Remote Operator Interaction

In the Remote Operator Interaction Lab, researchers design and conduct experiments and evaluations dealing with human informational needs during the use of telerobots and other remotely operated systems. Operators use various hand controllers with television and direct visual feedback to perform remote manipulation tasks.


HAND CONTROLLER COMMONALITY EVALUATION PROCESS

Mark A. Stuart, John M. Bierschwale, Robert P. Wilmington, Susan C. Adam, and Manuel F. Diaz
Lockheed Engineering and Sciences Company

Dean G. Jensen
NASA Lyndon B. Johnson Space Center

INTRODUCTION

Hand controller selection for NASA's Orbiter and Space Station Freedom is an important area of human-telerobot interface design and evaluation. These input devices will control remotely operated systems that include large crane-like manipulators (e.g., the Remote Manipulator System, or RMS), smaller, more dexterous manipulators (e.g., the Flight Telerobotic Servicer, or FTS), and free flyers (e.g., the Orbital Maneuvering Vehicle, or OMV). Candidate hand controller configurations for these systems vary in many ways: shape, size, number of degrees of freedom (DOF), operating modes, provision of force reflection, range of movement, and "naturalness" of use. Unresolved design implementation issues remain, including how the current Orbiter RMS rotational and translational rate hand controllers compare with the proposed Space Station Freedom hand controllers, what advantages position hand controllers offer for these applications, and whether separate hand controller configurations are required for each application.

Common Space Station and Orbiter hand controllers are desirable for many practical reasons. Common hand controllers would reduce the negative transfer that could occur if many different types of hand controllers were used, and the hand controllers need to be selected to minimize astronaut training requirements. Other considerations include the number of controllers required if each system had unique controllers and the associated weight and volume required to accommodate multiple sets and spares.

Several previous studies have evaluated operator performance differences caused by using different hand controller configurations during remote manipulation tasks. For example, O'Hara (1987) compared bilateral force-reflecting replica master controllers to proportional rate six-degrees-of-freedom (DOF) controllers during dual-armed remote manipulation tasks and discovered several differences. The six-DOF rate controllers were rated significantly higher in cognitive workload and manual-control workload (ability to control the end effector and the equipment) during dual-armed tasks. O'Hara also reported that the force-reflecting master controller was rated significantly higher in physical workload compared to the six-DOF rate controller. In conclusion, O'Hara found that master controllers resulted in lower performance times and allowed more "natural" control, while six-DOF rate controllers were lower in physical workload. This study was significant, yet limited, because only two hand controller types were evaluated under limited operating conditions.

Another relevant study, conducted by Honeywell (1989), described current hand controller concepts, the hand controller configurations proposed for Space Station Freedom, and the requirements of the space station systems that will use hand controllers. Much of the report was based upon a survey administered to industrial participants, NASA, and universities. A third study (Stuart, Smith, Bierschwale, and Jones, 1989) evaluated the anthropometric and biomechanical interface between test subjects and three- and six-DOF joystick and mini-master hand controllers and found that subjects can experience various types of muscle discomfort due to certain hand controller features. Since these two reports contain little empirical hand controller task performance data, a controlled study is needed that tests Space Station Freedom candidate hand controllers during representative tasks. This study also needs to include anthropometric and biomechanical considerations.

EVALUATION

The objective of the NASA hand controller commonality evaluation was to recommend the hand controller configuration(s) that can meet the Space Station requirements while accomplishing optimal control of each particular system. The recommended configuration(s) shall be chosen to maximize performance, minimize training, and minimize the cost of providing safe and productive controls for the Space Station Freedom crew.

The hand controller commonality evaluation was conducted as three separate experiments. Experiment One was a non-astronaut hand controller evaluation at three test facilities. Experiment Two was an astronaut hand controller evaluation at the same three test facilities. Experiment Three was a hand controller volumetric evaluation done primarily in the Orbiter and Space Station mockups. All of the evaluations took place at the NASA Johnson Space Center (JSC).

EXPERIMENT ONE METHODS

Experiment One was conducted as a repeated measures evaluation (within-subjects design) for each of the six tasks evaluated. These tasks are described in the Apparatus section below. Test subjects used all of the hand controllers for their respective tasks in those modes supported by the hand controllers and the facilities.

SUBJECTS

Twenty-four non-astronaut test subjects were used in Experiment One. Test subjects were partitioned into six independent groups of four test subjects, and each test subject group performed one of the six remote manipulation tasks. Twelve test subjects who had prior dexterous manipulator experience formed three groups, eight test subjects who had prior RMS simulation experience formed two groups, and four test subjects who had prior free flyer experience formed one group.

APPARATUS

Physical Simulations. Physical simulations were performed in the Remote Operator Interaction Laboratory (ROIL). These consisted of the following tasks: fluid quick-disconnect coupling, simulated ORU change-out, and thermal insulation blanket removal. These tasks were performed using a Kraft manipulator slave arm with a JR3 force-torque sensor.

Computer Simulations. Computer simulations took place at two different test sites -- the Systems Engineering Simulator (SES) and the Displays and Controls Laboratory (D&CL). The SES tasks were used to investigate rate control mode hand controller characteristics while controlling dynamic free flyer and Space Station Remote Manipulator System (SSRMS) simulations. The specific tasks were OMV docking and logistics module transfer.

The D&CL tasks were used to investigate hand controller characteristics during rate mode for a crane-type manipulator and during both rate and position modes for a dexterous manipulator (both kinematic simulations). The D&CL task consisted of a sequential SSRMS/dexterous manipulator operation (SSRMS used as a transport device) to perform an ORU replacement task.

Hand Controllers Evaluated. Hand controllers evaluated in this study were provided by NASDA of Japan, McDonnell Douglas/Honeywell, the Canadian Space Agency, and the Goddard Space Flight Center. These hand controllers are illustrated and described in Figure 1.


SCHILLING OMEGA 6-DOF (rate, position, non-force-reflecting and force-reflecting mini-master)

HONEYWELL 2x3-DOF (rate joysticks)

CAE 6-DOF (rate joystick)

HONEYWELL 6-DOF (rate, position, non-force-reflecting and force-reflecting joystick)

NASDA 6-DOF (rate, position, non-force-reflecting and force-reflecting)

KRAFT NATIVE 6-DOF (position, force-reflecting mini-master)

MARTIN-MARIETTA/KRAFT 6-DOF (rate, position, non-force-reflecting mini-master)

Figure 1. Illustrations and characteristics of hand controllers evaluated.


PROCEDURE

The procedure at each test site included pilot testing with operations-experienced test subjects. At each of the three test sites, test subjects were allowed to switch between different camera views as well as use fine-adjustment camera controls such as focus, pan, tilt, and zoom. The switching and adjusting were done by the test administrator. Tasks at all test sites were broken into their respective subtasks for performance analysis purposes.

ROIL. Test subjects within each of the three dexterous manipulator-experienced groups performed one of the three physically simulated tasks. The ROIL tested position mode with no force reflection (haptic-proprioceptic), position mode with force reflection, and rate mode while operators used a dexterous manipulator. The test subjects followed a prescribed procedure during the performance of the three physical simulation tasks. The subjects used predetermined camera positions of the remote worksite; one of the cameras provided a global view of the entire taskboard area, and camera positions were optimized per task prior to data collection. Test subjects received an equal amount of laboratory training with each of the hand controllers before data collection began. After receiving training for a specific hand controller, each test subject performed the task two times with that controller. The procedure continued in this fashion until subjects within each group had performed their respective task twice while using each of the hand controllers. Hand controller use was counterbalanced to control for order effects. After completing two task trials using each hand controller, each subject was administered questionnaires to collect subjective data.
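Counterbalancing presentation order can be sketched with a cyclic Latin square, in which every controller appears once in every presentation position. This is only an illustration of the general technique, not the actual ordering scheme used in the study; the controller names are illustrative.

```python
# Cyclic Latin square for counterbalancing: with n conditions and n
# subjects (or groups), each condition occupies each presentation
# position exactly once across subjects, controlling for simple
# order effects. Condition names are illustrative only.

def latin_square(conditions):
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

controllers = ["6-DOF joystick", "2x3-DOF joysticks",
               "mini-master", "replica master"]
for subject, order in enumerate(latin_square(controllers), start=1):
    print(f"subject {subject}: {order}")
```

A fully balanced design (one that also equates which condition precedes which) needs a balanced Latin square rather than this simple cyclic form, but the row/column property shown here is the core idea.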

SES. Test subjects within one of the RMS-simulation-experienced groups performed the logistics module transfer task, and test subjects within the free-flyer-experienced group performed the OMV task. The SES tested the controllers in rate mode while test subjects used the SSRMS or the OMV. The OMV was controlled in pulse mode, and the SSRMS tasks were controlled using the standard proportional rate mode. The subjects used predetermined simulated camera views of the remote worksite as well as a simulated direct view. Subjects completed a familiarization session prior to data collection in the SES and also performed the simulated task two times with each hand controller. Questionnaires were administered after performance of the tasks.

D&CL. Test subjects within the second RMS-simulation-experienced group performed the dual SSRMS/FTS ORU task. The D&CL tested rate and position (non-force-reflective) modes while operators used a dexterous manipulator in conjunction with the SSRMS. The SSRMS was controlled in rate mode, and the dexterous manipulator was controlled in both rate and position modes. The subjects used simulated camera views of the remote worksite. After the subjects performed the simulated task two times, questionnaires were administered.

DATA COLLECTION

Task performance data included the following: time to complete each subtask, reach limits, active hand controller time, the number of hand controller inputs, and error or accuracy counts. Questionnaires were administered to collect the following types of subjective impressions: general acceptability, mental and physical fatigue, and hand controller suitability for specific tasks.

EXPERIMENT TWO METHODS

Experiment Two used astronaut test subjects who performed each of the six tasks at all three test sites.

SUBJECTS

Six crewmembers were used as test subjects in this phase of the evaluation. The prior hand controller experience of each crewmember was assessed.

PROCEDURE

Familiarization with the tasks was required before the crew evaluation took place. This varied according to the experience level of each individual crewmember; for example, somewhat more familiarization time was necessary for those crewmembers who had no prior OMV or dexterous manipulator task experience.

Each crewmember performed a structured subset of each of the six tasks described in the Experiment One Methods section. During the task, performance data, such as speed and accuracy, were collected from each crewmember. After performing the structured subset of each of the six tasks with each hand controller, the crewmember was given a brief questionnaire to fill out.

EXPERIMENT THREE METHODS

Experiment Three was a volumetric evaluation that involved astronaut test subjects using all of the hand controllers.

TEST SUBJECTS

Four astronauts performed the evaluations. Attempts were made to have test subjects whose body sizes ranged from the 95th-percentile male to the 5th-percentile female (workstations are required to accommodate this range of users).

APPARATUS

Hand controller volumetric evaluations were performed in the Space Station, Cupola, and Shuttle mockups located at NASA JSC. The hand controllers evaluated in Experiment Three are listed in the Experiment One Apparatus section.

PROCEDURE

Single and dual hand controller usage by one operator was addressed at the command and control workstation and the cupola workstation, and side-by-side operation by two operators was addressed in the cupola. Hand controller mounting and adjustment in the Space Station and Cupola mockups were achieved using two tripods.

DATA MEASUREMENT

Data were collected with both a video recorder and a 35mm camera. Hand controller locations for the various subjects were also recorded. The evaluations consisted of questionnaire administration and anthropometric data collection that addressed the following issues: hand controller swept volume; operator/workstation placement (e.g., crew movement ability in the area); display viewing characteristics (e.g., line-of-sight characteristics, display obstruction from hand controllers); and reach envelope characteristics (e.g., ability to reach workstation controls). The anthropometric data were incorporated into an analysis of each hand controller configuration within the appropriate workstations on Space Station.

RESULTS

Results of the data analyses are summarized as follows: there were no appreciable astronaut/non-astronaut differences in the performance and subjective data collected; subjective data supported the objective (performance) data; trends were consistent across all three tasks conducted; rate control mode was consistently superior to position control mode; no advantage was demonstrated for force reflection; joystick controllers were superior to mini-master controllers; and the 2x3-DOF, the CAE, and the Honeywell rate-mode controllers were consistently the top hand controller configurations. As a result of these evaluations, a 2x3-DOF hand controller configuration was selected as the Space Station Freedom baseline configuration.

REFERENCES

1. Honeywell, Inc. (1989). Hand controller commonality study -- 289-16007. Clearwater, FL.

2. O'Hara, J. (1987). Telerobotic control of a dextrous manipulator using master and six-DOF hand controllers for space assembly and servicing tasks. Proceedings of the Human Factors Society 31st Annual Meeting. Santa Monica, CA: Human Factors Society.

3. Stuart, M. A., Jones, S. F., Smith, R. L., and Bierschwale, J. M. (1989). Preliminary hand controller evaluation. NASA Tech. Report JSC-23892. Houston, TX: NASA Lyndon B. Johnson Space Center.


PROGRAMMABLE DISPLAY PUSHBUTTONS ON THE SPACE STATION'S TELEROBOT CONTROL PANEL

Mark A. Stuart, Randy L. Smith, and Ervette P. Moore
Lockheed Engineering and Sciences Company

Research directed by Jay Legendre, Manager, Remote Operator Interaction Lab, NASA JSC.

INTRODUCTION

The Man-Systems Telerobotics Laboratory at NASA's Johnson Space Center, supported by Lockheed, is working to ensure that the Flight Telerobotic Servicer (FTS) to be used on the Space Shuttle (Orbiter) and the Space Station has a well-designed user interface from a human factors perspective. The FTS, a project led by NASA's Goddard Space Flight Center, will be a telerobot used for Space Station construction, maintenance, and satellite repair. It will be directly controlled from workstations on the Orbiter and the Space Station and monitored from a ground workstation. The FTS will eventually evolve into a more autonomous system, but in the short term the system will be manually operated (teleoperated) for many tasks. This emphasizes the importance of the human/telerobot interface on this system.

The information driving the design of the FTS control panel is being provided by task analyses, workstation evaluations, and astronaut/FTS function allocations. Due to space constraints on the Orbiter and the Space Station, an overriding objective of the design of the FTS workstation is that it take up as little panel space as possible.

This phase of the FTS workstation evaluation covers a preliminary study of programmable display pushbuttons (PDPs). A PDP is constructed of a matrix of directly addressable electroluminescent (EL) pixels which can be used to form dot-matrix characters. PDPs can be used to display more than one message and to control more than one function. Because PDPs have these features, a single PDP may possibly replace many single-function pushbuttons, rotary switches, and toggle switches, thus using less panel space. It is of interest to determine whether PDPs can be used to adequately perform complex, hierarchically structured task sequences.

Other investigators have reported on the feasibility of using PDPs in systems design (Hawkins, Reising, and Woodson, 1984; Burns and Warren, 1985), but the present endeavor was deemed necessary so that a clearly defined set of guidelines concerning the advantages and disadvantages of PDP use in the FTS workstation could be established. This would ensure that PDP use was optimized in the FTS workstation.

The objective of this investigation was to compare the performance of experienced and inexperienced Remote Manipulator System (RMS) operators while performing an RMS-like task on simulated PDP and non-PDP computer prototypes, so that guidelines governing the use of programmable display pushbuttons on the FTS workstation could be created. The functionality of the RMS on the Orbiter was used as a model for this evaluation, since the functionality of the FTS at the time of this writing had not been solidified.

METHOD

APPARATUS

Computer prototyping was used as the means of evaluating the two different FTS control panel layouts. HyperCard was used as the prototyping package, and it was run on an Apple Macintosh computer. HyperCard was also used as a data acquisition package once testing began. Total task time and the total number of commands activated were recorded.

The simulated task consisted of the operations to deploy a satellite on the Space Shuttle. This task required simulated RMS joint manipulations, camera manipulations, as well as other RMS-like activities, while using the computer prototypes.

The non-PDP control panel is depicted in Figure 1. The distinguishing feature of this configuration is that traditional single-function pushbuttons are used in conjunction with a simulated EL panel to activate commands. The EL panel was simulated in this evaluation by displaying single-function commands as they would appear on the EL panel in the upper right-hand corner of the prototyped screen. The simulated EL panel was used because the space constraints of the Macintosh computer would not allow the display of all of the functionality at one time. This made it possible to study a task as complex as an RMS-like operation on this particular microcomputer.


Figure 1. Non-PDP control panel prototype. [figure not reproduced]

The PDP control panel is depicted in Figure 2. This control panel utilized simulated PDPs instead of single-function pushbuttons. In Figure 2, the PDPs are the twelve pushbuttons located in the lower-middle portion of the display. The portions to the left and top of the display are status indicators that were used to display various functional states.

Figure 2. PDP control panel prototype. [figure not reproduced]

When a PDP is selected, the name of that function is displayed in a small simulated EL display located just above the PDP cluster, and the options that follow within that functional category are then displayed by the PDPs. For example, when SINGLE is selected in Figure 2, the display changes to that depicted in Figure 3. In Figure 3, SINGLE is now displayed in the EL display and the PDPs have changed to list the options that follow under SINGLE. The small EL display was designed to serve as a navigational aid to help orient operators throughout performance of the hierarchically structured tasks. It was contended that the navigational aid in the PDP hierarchy would be useful, since a previous evaluation (Gray, 1986) found that navigational aids are helpful with hierarchical search tasks through menu structures on a computer.

Figure 3. PDP control panel with PDP changes and navigational aid. [figure not reproduced]


It was determined during the development of the PDP prototypes that there was a problem in maneuvering across functional modalities, because doing so required many commands. For example, when one was in the RMS joint manipulation mode, several steps were required, including first going back to the "Home" level of the task hierarchy, to be able to make camera adjustments. Since RMS operation requires much maneuvering across modalities, this PDP arrangement would result in many circuitous movements and much wasted time. Therefore, a special PDP was developed for the present investigation which would readily allow operators to "jump" across functional modalities with a single command located within the PDP matrix.
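The hierarchy, "Home" level, and jump capability described above can be sketched as a small data structure. This is an illustrative sketch only: the menu contents and method names below are assumptions, not the actual FTS panel definition.

```python
# Illustrative sketch of a PDP-style menu hierarchy with a "jump" command.
# The menu contents are hypothetical, not the actual FTS panel layout.

MENU = {
    "AUTO":    {"AUTO 1": {}, "AUTO 2": {}, "AUTO 3": {}, "AUTO 4": {}},
    "SINGLE":  {"SHOLDR-Y": {}, "SHOLDR-P": {}, "ELBOW-P": {},
                "WRIST-P": {}, "WRIST-Y": {}, "WRIST-R": {}},
    "CAMERAS": {"PAN": {}, "TILT": {}, "ZOOM": {}},
}

class PDPPanel:
    def __init__(self, menu):
        self.menu = menu
        self.path = []  # steps below "Home"; its length serves as a navigational aid

    def current_options(self):
        node = self.menu
        for name in self.path:
            node = node[name]
        return sorted(node)

    def select(self, option):
        # Descend one level, as pressing a PDP would.
        assert option in self.current_options()
        self.path.append(option)

    def home(self):
        # Return to the top of the hierarchy.
        self.path = []

    def jump(self, modality):
        # The special PDP: cross functional modalities in a single command.
        assert modality in self.menu
        self.path = [modality]

panel = PDPPanel(MENU)
panel.select("SINGLE")
panel.select("WRIST-P")
# Without jump(), reaching the cameras would require home() plus re-descent;
# the special PDP collapses that into one command:
panel.jump("CAMERAS")
print(panel.path)  # ['CAMERAS']
```

The length of `path` also gives the number of hierarchical steps away from "Home," the kind of indication recommended in Table 3.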

EXPERIMENTAL VARIABLES

The independent variables in this investigation were the two RMS operating experience levels of the subjects (experienced and novice) and the two control panel prototypes (non-PDP and PDP). Since each subject was tested on each of the two control panel prototypes (in counterbalanced order), a 2 x 2 repeated measures experimental design was used. Of specific interest were the comparison between PDP and non-PDP usage, the difference in the performance and subjective impressions of the two subject groups, the use of navigational aids, and the informational needs of operators while performing simulated FTS tasks.

The dependent variables were operator task completion times, the number of commands required to complete the task for each control panel prototype, a question of preference between the two control panel prototypes, questionnaire responses, and subjective impressions.

SUBJECTS

Volunteer subjects from both Johnson Space Center and Lockheed took part in this experiment. Four subjects who had no prior RMS training comprised the novice users group. Four subjects who had completed training on a high-fidelity simulator of the RMS comprised the experienced users group.

PROCEDURE

Performance of a simulated RMS-like task scenario was used for each of the control panel configurations. Each scenario covered simulated RMS-like manipulation activities, and the testing took place on an Apple Macintosh SE computer. The task scenario was identical for both control panel configurations.

Before testing began, each subject had the basic functionality of each of the control panels explained to them. Subjects then completed a practice session on the first control panel configuration that they would be using. A subject would then perform the simulated tasks on the Macintosh. After the subject's first scenario was completed, the same procedure was followed using the other control panel prototype. Order effects were controlled by having, within each of the two subject groups, an equal number of subjects begin the testing with the non-PDP control panel as with the PDP control panel.

After performing the task scenarios on both of the control panel prototypes, each subject was asked to select which of the two control panel prototypes was preferred. Each subject was also asked to complete a questionnaire designed to garner subjective impressions concerning the control panels. Subjects rated five issues on a five-point Likert scale where one point indicated "Least" and five points indicated "Most." These five issues were "Maneuver Across Modalities," "Maneuver Within Modalities," "Provides Task Structure," "Contributes to Task Orientation," and "Ability to Make Commands." Subjects then answered open-ended questions concerning PDP use.

RESULTS AND DISCUSSION

Data were collected and analyzed with the objective of determining differences in user performance and preferences between the two control panel configurations so that, ultimately, guidelines concerning the use of PDPs could be established. All numeric data were statistically analyzed with a repeated-measures analysis of variance.

Analysis of the performance data revealed that subjects used significantly (p = 0.001) fewer commands with the PDP control panel prototype than with the non-PDP control panel. Interestingly, though, subjects did not significantly differ in the amount of time that it took them to complete the two tasks. The average task time for the PDP prototype was 18:12, while it was 18:49 for the non-PDP condition. This finding provides some support for the PDP prototype in the sense that if more commands are required to perform the same task in virtually the same time frame, then the condition which requires more commands to be activated may predispose operators to make more errors.
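With only two within-subject conditions, the repeated-measures ANOVA used here is mathematically equivalent to a paired t-test (F = t²). The sketch below shows that computation on invented per-subject command counts; the study's raw data are not reported in this article, so these numbers are purely illustrative.

```python
import math

# Invented per-subject command counts for eight subjects (illustrative
# only; the study's raw data are not reported in this article).
non_pdp = [142, 155, 149, 161, 138, 150, 147, 158]
pdp     = [101, 110, 104, 118,  99, 108, 105, 112]

# Paired t-test on the within-subject differences; with two conditions
# this is equivalent to the repeated-measures ANOVA (F = t**2).
diffs = [a - b for a, b in zip(non_pdp, pdp)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
t = mean / math.sqrt(var / n)

print(f"mean difference = {mean:.1f} commands, t({n - 1}) = {t:.2f}")
```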

Analysis of the subjects' control panel preferences revealed that all eight of the subjects preferred the PDP control panel over the non-PDP control panel. As Table 1 indicates, the analysis of the five-point Likert-scale questionnaire responses also provided strong support for the PDP control panel, since subjects rated two of the five questionnaire items significantly (p < 0.05) higher for the PDP prototype. These two questionnaire items were "Maneuver Within Modalities" and "Ability to Make Commands." Subjects also rated the PDP prototype higher on the other three questionnaire items, although these differences were not statistically significant. There was also a statistically significant effect (p = 0.049) of the RMS experience level of the subjects, where the novice users gave a higher rating on the "Maneuver Across Modalities" question.

Subjective comments were also collected from each of the subjects. These are summarized in Table 2. The comments were categorized as either positive or negative with respect to PDP usage.

The subjective impressions indicate that PDPs can have very good as well as very bad features. Subjects observed that the use of PDPs can result in less panel space used, and that PDPs can provide task structure in the sense that they clearly delineate what task options are available at specific times. On the negative side, subjects expressed that one loses "global perspective" with the use of PDPs and that this can contribute to task disorientation. It was also stated that PDPs should not be used in "exceedingly" complex systems.

Subjective impressions were also studied to determine if there was a difference between the two RMS-experience groups. Data analysis revealed that there were no differences, since the comments were common across both groups.

TABLE 1. Five-point Likert-scale responses for the non-PDP and PDP control panel prototypes

Questionnaire Item                 Non-PDP    PDP
Maneuver across modalities         3.12       3.87
Maneuver within modalities         2.87       4.25 *
Provides task structure            2.75       4.12
Contributes to task orientation    2.62       3.50
Ability to make commands           3.00       3.87 *

* Significant at p < 0.05

CONCLUSIONS

The ultimate objective of this investigation was to establish a set of guidelines concerning the use of PDPs for the FTS workstation. The data collected during this investigation were used to create this set of guidelines. It is contended that the established set of guidelines will also be generalizable to other workstations. These guidelines are listed in Table 3.

It is clear from the experiment results and subjective comments above that the use of PDPs does in fact present a trade-off: there is some good as well as some bad about them. It is for this reason that PDPs should be used judiciously, because improper usage can contribute to task complexity and user task-disorientation. It is contended that the set of guidelines above will help to ensure that PDPs are optimally designed and arranged.

TABLE 2. Positive and negative subjective impressions concerning PDP usage

Positive
• Provide task structure
• Save panel space
• User attention is more localized
• Good when working within a functional modality (e.g., camera manipulation)
• Navigational aids provide user guidance
• Good for infrequently used sub-tasks
• Can result in reduced search time

Negative
• Processing time (option refresh rate) to perform next steps was too slow
• Bad if used in highly complex systems (e.g., large number of functional modalities within the overall task)
• Lose global perspective because of fewer spatially redundant cues
• Not good for applications where few controls are used frequently
• Possibility of getting lost in complex task structures
• May result in more cognitive processing

Future research endeavors should examine the use of actual, hard-wired PDPs in full-scale mockups while performing high-fidelity simulated tasks. This would increase the external generalizability of the results. The development of an equation which would precisely determine how many PDPs should be used for a specific task may be possible. This equation would have to take into account variables such as the frequency with which all of the commands are activated, as well as the depth and breadth of the task hierarchical structure.

TABLE 3. Guidelines concerning PDP usage

• Use PDPs instead of other controls if PDP usage reduces the total number of commands to perform the task and doesn't significantly increase task completion time
• A PDP or control capability should be provided that will allow "jumping" across functional modalities
• Navigational aids should be used to help orient users
• May be better for infrequently used sub-tasks
• May be better when working within a functional modality
• Should not be used for certain critical functions, such as brake control
• Should give an indication of the number of hierarchical steps the operator is away from the "Home" level

ACKNOWLEDGEMENTS

Support for this investigation was provided by the National Aeronautics and Space Administration through Contract NAS9-17900 to Lockheed Engineering and Sciences Company.

REFERENCES

1. Burns, M. J., and Warren, D. L. (1985). Applying programmable display pushbuttons to manned space operations. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 839-842). Santa Monica, CA: Human Factors Society.

2. Gray, J. (1986). The role of menu titles as a navigational aid in hierarchical menus. SIGCHI Bulletin, 17(3), 33-40.

3. Hawkins, J. S., Reising, J. M., and Woodson, B. K. (1984). A study of programmable switch symbology. In Proceedings of the Human Factors Society 28th Annual Meeting (pp. 118-122). Santa Monica, CA: Human Factors Society.

4. Stuart, M. A., Smith, R. L., and Moore, E. P. (1988). PDP preliminary evaluation (Lockheed EMSCO Memo No. 88-1078). Houston, TX: Lockheed Engineering and Management Services Company.


SPEECH VERSUS MANUAL CONTROL OF CAMERA FUNCTIONS DURING A TELEROBOTIC TASK

John M. Bierschwale, Carlos E. Sampaio, Mark A. Stuart, and Randy L. Smith
Lockheed Engineering and Sciences Company

Research directed by Jay Legendre, Manager, Remote Operator Interaction Lab, NASA JSC.

INTRODUCTION

Telerobotic workstations will play a major role in the assembly of Space Station Freedom and later in construction and maintenance in space. Successful completion of these activities will require consideration of many different activities integral to effective operation: operating the manipulator, controlling remote lighting, camera selection and operation, image processing, as well as monitoring system information on all of these activities.

Of these activities, the vision (camera viewing) system is particularly important. During many tasks where a direct view is not possible, cameras will be the user's only form of visual feedback. If the vision system is manually controlled and both hands are busy during the performance of a dynamic task, it will require reorientation of the hands and eyes between the manipulator controls, vision system controls, and view of the remote worksite. Allocating some or all of the control of vision system components to voice input may lessen the workload of the operator, reduce movement time, and ultimately improve performance. Voice input is currently being considered for this as well as other applications by NASA.

Very few studies are found in the literature that investigate the use of voice input for camera control. The only study that was found (Bejczy, Dotson, Brown, and Lewis, 1982) was relevant in that it investigated voice input for camera control of the Remote Manipulator System (RMS) and payload bay cameras used on the Space Shuttle. Although statistical analyses were not presented, voice input was found to be 10% slower across four subjects.

The philosophy of the present investigation differs in that subjects were not constrained to current RMS control panel terminology and organization. Subjects used words from a vocabulary sheet developed in a previous study (Bierschwale, Sampaio, Stuart, and Smith, 1989) to construct camera commands to accomplish a telerobotic task. The subjects' vocabulary preferences are presented elsewhere (Bierschwale et al., 1989).

It is important to consider current terminology so that personnel are not forced to learn new jargon. However, the use of voice input was not considered in the development and selection of the current terminology and switch labels. Choice of vocabulary is very important in terms of recognizer performance and user acceptance. Successful vocabulary design (ultimately the human-machine interface design) will most readily be achieved by considering the recognition qualities of the commands and the cognitive relationship between the commands and their respective actions.

A potential problem with voice control of cameras may be verbalizing the directions to move the cameras. Many people have difficulty when providing verbal directions. An example would be saying "left" when "right" is meant. Indeed, this cognitive difficulty when verbalizing directions has been noted with voice control of cursor movement while editing text (Murray, Praag, and Gilfoil, 1983; Bierschwale, 1987).

Identification of critical issues such as this early in the design phase will allow for more effective implementation of a voice-commanded camera control system. In more general terms, one report (Simpson, McCauley, Roland, Ruth, and Williges, 1985, p. 120) found that, historically, "projects designed from inception to incorporate a voice interactive system had a greater probability of success than when the capability was added to an existing system." By understanding the differences between the two modes of input, a more effective utilization can be made of both voice and manual input.

The objectives of this study are as follows: (1) optimize the vocabulary used in a voice input system from a Human Factors perspective, (2) perform a comparison between voice and manual input in terms of various performance parameters, and (3) identify factors that differ between voice and manual control of camera functions.

METHOD

SUBJECTS

Eight volunteer subjects were selected to participate in this evaluation. These subjects were partitioned into the following two groups: an experienced group of four subjects who were familiar with telerobotic tasks and workstations, and an inexperienced group of four who were not familiar with these concepts.

APPARATUS

Testing took place in the Man-Systems Telerobotic Laboratory (MSTL) located at the NASA Johnson Space Center. A Kraft manipulator slaved to a replica master controller was used to perform a remote telerobotic task. The task selected for this study was a generic pick-and-placement task. This task required a high degree of visual inspection and dextrous manipulation. The task site is depicted in Figure 1.

Two 4-inch tall and two 10-inch tall tiers were placed on a semicircular taskboard in front of the Kraft manipulator. Three task pieces were placed on the lower left-hand tier and three were placed on the upper left-hand tier. On the right-hand side, four receptacles were placed on the upper and lower tiers (two receptacles per tier). The task consisted of locating, grasping, transporting, and depositing each of four task pieces into the correct receptacle. In addition to the required manipulation, subjects had to move cameras, adjust lens parameters, and select views to successfully complete the task. During the task, subjects were instructed which task piece and receptacle were involved.

Two cameras equipped with remote pan, tilt, zoom, focus, and iris controls provided the operator with two oblique views of the worksite (i.e., approximately 45 degrees above the horizontal plane, with one camera displaced 45 degrees to the left and the other 45 degrees to the right). A fixed-focus camera provided a "bird's-eye" view of the entire work area, looking down at a 45-degree angle on the worksite from above the task. The two oblique views were input to a 21-inch monitor where only one view could be shown at a time. The "bird's-eye" view was continuously displayed on a 9-inch monitor positioned atop the larger monitor.

Figure 1. Overhead view of remote work site. [figure not reproduced; labeled elements included the task-piece pick-up tiers, upper tiers, manipulator, and the two oblique and bird's-eye camera views]

The left oblique view showed the task pieces and the surrounding area. The right oblique view showed the box and the surrounding area. The "bird's-eye" view showed the entire work area. Task pieces were aligned such that the left oblique view was required to read their markings, while the right oblique view was required to read the receptacles' markings.

A practice task (using direct view) was devised so subjects could become familiar with the manipulator controls, camera views, and kinesthetics of the arm movements and positions that they would be using later during data collection. The practice task used the identical views, taskboard, tier placement, and similar task objectives.

During use of manual input, the camera controls were placed directly in front of the subjects. Subjects were required to use their right hand to operate both the manipulator and camera controls. This required halting the manipulator to operate the cameras. This was a simulation of a hands-busy scenario where voice input might aid performance.

A vocabulary list containing stereotypical words determined in a previous evaluation (Bierschwale et al., 1989) was used for voice input. A separate vocabulary sheet was used for each subject, and the words were randomly listed under each icon (descriptive of the camera function) to avoid any possible list order effect.

In order for the control system to be flexible enough to accommodate the various word combinations, an experimental approach was used that has been referred to as the "Wizard of Oz" method. This is often used in user-computer interaction research and is summarized in Green and Wei-Haas (1985). For this evaluation, a "wizard" carried out the actions of a speech recognizer. This method has been used before with voice input research. One study (Casali, Dryden, and Williges, 1988) used a wizard recognizer to evaluate the effects of recognition accuracy and vocabulary size on performance.

The "wizard" was situated at the camera controls, out of the field-of-view, behind and to the right of the subject. When voice input was used, the "wizard" wore a headset which allowed him to screen out external noise and concentrate on the commands issued by the subject through a microphone.

VARIABLES

Three independent variables with two levels each were studied in this evaluation: input modality (voice or manual), level of experience (experienced or inexperienced), and administration order (voice followed by manual input, or manual followed by voice input). Experience level and administration order were between-subjects variables, while input modality was a within-subjects variable.

Dependent variables consisted of task completion time, number of camera commands, and errors. Scaled question and questionnaire responses were also collected.
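The mixed design described above can be sketched as an assignment table. The subject counts follow the study (eight subjects, with administration order counterbalanced within each experience group), but the code itself is only an illustration:

```python
from itertools import product

# Experience and administration order are between-subjects factors;
# input modality is within-subjects (every subject runs both sessions).
experience_levels = ["experienced", "inexperienced"]
orders = [("voice", "manual"), ("manual", "voice")]

assignments = []
subject = 1
# Two subjects per experience-by-order cell yields eight subjects, with
# administration order counterbalanced inside each experience group.
for exp, order in product(experience_levels, orders):
    for _ in range(2):
        assignments.append({"subject": subject,
                            "experience": exp,
                            "sessions": order})
        subject += 1

for a in assignments:
    print(a)
```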

PROCEDURES

At the beginning of the evaluation, subjects were provided with a brief explanation of the purpose of the study. Each subject received instruction on the use of the manipulator controls that would be needed to perform the manipulation tasks.

Following performance of the practice task (using direct view), a videotape was used to illustrate the different camera and lens movements that would be available on the two adjustable cameras. The investigator used deliberate wording when pointing to the corresponding icons on either the camera control panel (manual) or the vocabulary sheet (voice), so as not to bias any subject's selection of vocal commands. Prior to each of the two conditions, subjects were instructed on the use of the respective camera controls. When using manual input, a template with descriptive icons illustrating the functions was placed over the controls so that the subjects would not be biased in their vocabulary selection (for voice input) by the listed labels. These same icons were used on the vocabulary sheet for voice input.

The subjects' view of the task was then obstructed so that they had to rely totally on the camera views. Each subject performed two sessions under both conditions (voice and manual input). The first was a practice session using an abbreviated version of the task, with the second being the complete task. This practice also allowed subjects, while using voice input, to become familiar with the designated words and select the few they might prefer to use. Administration order was counterbalanced, with half of the subjects using voice input first and half using manual input first. Additionally, to avoid any memorization of task requirements, different locations and task piece selections were used for each of the four sessions (one practice and one data collection session per condition).

Following the practice session, the data collection session was conducted. Subjects were instructed to work quickly while making as few errors as possible. If an error occurred, the task pieces and receptacles were placed in the configuration present prior to the error, and the subject repeated the trial. While setup time was not recorded, repetition of the trial was included in the completion time. The video images used to perform the task (excluding the "bird's-eye" view) were recorded along with audio input from the subjects' headset.

Following completion of the data collection session for each condition, subjects completed a questionnaire. The procedure for the second condition progressed in the same manner. After testing was finished, another questionnaire, involving comparison of the two modalities, was completed by the subjects.

RESULTS AND DISCUSSION

PERFORMANCE DATA

Analysis of Variance (ANOVA) results are presented for the task completion times, number of commands, and errors. Table 1 presents the group means for each of these measures.

An ANOVA run on the task completion times found that voice input was significantly slower than manual input for controlling cameras in this task (F(1,4) = 19.80, p < .05).

In order to allow for a direct comparison of the number of camera manipulations, the voice commands were tallied such that a single command consisted of both activating and stopping the movement (actually two voice commands issued). An ANOVA run on the number of commands that were used found that significantly more commands were used with manual than voice input (F(1,4) = 10.34, p < .05).
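The tallying rule can be made concrete; the transcript and command words below are hypothetical, chosen only to show how a start/stop pair is counted as one command:

```python
# Hypothetical voice transcript: each continuous camera movement is issued
# as a start utterance followed by "stop"; the pair is tallied as ONE
# command so counts are comparable with single-action manual inputs.
transcript = [
    "pan left", "stop",
    "zoom in", "stop",
    "tilt up", "stop",
    "select camera two",  # discrete command: no stop utterance needed
]

def tally(utterances):
    """Count initiating utterances only; "stop" closes a pair rather than starting a new command."""
    return sum(1 for u in utterances if u != "stop")

print(tally(transcript))  # 4 commands from 7 utterances
```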

It was expected that more manual commands would be used, since people tend to "bump" manual controls and set things up perfectly. With voice input, subjects tended to accept coarse adjustments because of the difficulties imposed by the system lag time and the lack of variable rate control. If examined in conjunction with the task completion times, it is seen that subjects used more time to execute fewer commands with voice input. It may very well be that using voice input to control the cameras resulted in more cognitive difficulty associated with each command, which could result in more errors. On the other hand, assuming a constant error rate, the greater number of commands given with manual input would increase the probability that an error will occur. If this effect exists, this is a system trade-off that will need to be evaluated.

TABLE 1. Group means for performance measures.

                                  Voice input        Manual input
                                  Exp.     Inexp.    Exp.     Inexp.
Completion time (minutes)         12.58    15.46     10.94    12.41
Commands *                        90.30    104.50    114.00   130.30
Manipulation errors               .75      .75       1.00     1.00
Focusing error rates (percent)    30.50    32.80     50.00    37.00

* Does not include extra commands resulting from directional errors.

It was hypothesized that fewer manipulator errors would occur with voice control, since this would allow the subjects to keep their eyes on the screen and avoid interruption of the task. However, the results of an ANOVA showed that there was not a statistically significant difference in the number of manipulation errors between modalities. The makeup of the task was such that few errors were committed with either modality.

A directional error consisted of moving in one direction when one wanted to move in the opposite direction. Very few directional errors were observed across the functions, except when focusing the cameras. More errors were made when using manual control to focus the cameras than when using voice control. However, results of an ANOVA revealed no significant difference in focusing error rates between the two input modalities.

The probable reason for the high focusing error rates was that the task required zooming the focal length back and forth, and the subject would usually guess which directional command would bring the picture into focus. Possible reasons why somewhat higher error rates occurred during manual input were that subjects tended to perform more commands, as was previously mentioned, and were more likely to attempt to bring the picture into exact focus. With voice input, focusing was difficult due to the sensitivity of the focusing operation and the system lag time. Subjects would often accept a less than perfect image.

SUBJECTIVE RESPONSES

The following types of questions were asked concerning the two input modalities: scaled questions, open-ended questions, and yes/no questions that allowed the subject to elaborate. Analysis revealed no real differences in preference between the two modalities of input and the two experience groups across all of the questions for this task. However, similar subjective comments concerning advantages and disadvantages of the two modalities for performance of this task were frequently made across many of the questions, and they are summarized in Table 2.

When subjects were asked what telerobotic workstation functions they would recommend allocating to voice input, the following applications were given: selecting or moving cameras, controlling lights, halting the manipulator arm, setting the manipulator grip lock, changing modes, and panning and tilting only. For the most part, these applications are of a discrete nature that minimizes the disadvantages of voice input listed in Table 2.

TABLE 2. Advantages and disadvantages of voice-operated camera control.

VOICE INPUT

Advantages:
• Hands and eyes free
• Good for single, gross movements while hands are occupied
• Possibility for simultaneous camera/manipulator control

Disadvantages:
• Cognitive difficulty verbalizing commands/directions
• System lag time
• Two-step start-stop process
• Can't perform two camera movements at once

MANUAL INPUT

Advantages:
• Finer positioning than voice input
• Less mental load than voice input
• Quicker system response time

Disadvantages:
• Diverting eyes and hands from telerobotic task to adjust camera controls

CONCLUSION

This investigation has evaluated the voice-commanded camera control concept. For this particular task, total voice control of continuous and discrete camera functions was significantly slower than manual control. There was no significant difference between voice and manual input for several types of errors. There was not a clear trend in subjective preference (across several questions) of camera command input modality. Task performance, in terms of both accuracy and speed, was very similar across both levels of experience.

One problem that emerged was that numerous focusing errors (30-50%) were observed across both groups and modalities. For tasks as dynamic as this, development of an autofocusing system is highly recommended to avoid operator frustration and inefficiency.

The fundamental advantage that voice input had over manual input, as mentioned by both groups of subjects, was that it allowed the hands and eyes to be free to do other tasks.

Unfortunately, voice input of camera controls also resulted in cognitive difficulty when verbally transcribing movements, specifying the correct directions, and stopping movements. The advantage of manual input was that it allowed precise positioning. The applications that subjects suggested for voice input at a telerobotic workstation were of a discrete control nature.

Most of the problems seem to be associated with the movement processes. While each distinct movement (zoom, pan, tilt, etc.) was not directly compared across both modalities, subjective comments indicate that the problem is a fundamental one of verbal control of a spatial motor task. The study by Bejczy et al. (1982) also stated that controlling camera movement was troublesome for the subjects.

The results of this investigation indicate that using voice input for control of discrete types of camera operations (selecting cameras, multiplexing, and selecting rates) could aid performance in a telerobotic task. Control of continuous camera functions by voice input is not recommended.

A combination of voice input and manual input for control of camera movements would take advantage of the best aspects of each of the control modalities. Future studies should evaluate alternate methods of controlling camera movements. Some examples are: (1) a hand-controller-mounted joystick whose function (camera pitch, camera roll, zoom, focus, and iris control) is selected by voice and controlled manually, which would save panel space by requiring only one control for each or all of the cameras; (2) activating movements by voice and stopping them manually using a switch on the hand controller; and (3) using discrete levels of zoom, focus, and iris (Level 1, 2, 3, 4, etc.) and discrete movements of cameras (perhaps angular, as in pan right 30°, 60°, etc.). Other modalities of input, such as an eye tracking device or head-slaved camera control device, should also be investigated.
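Suggestion (1) above -- voice to select the function, a single joystick to drive it -- can be sketched as a small state machine. The function names and value ranges below are hypothetical illustrations, not the workstation's actual command set.

```python
# Sketch of a hybrid control scheme: a discrete voice command selects
# which camera function the one shared joystick drives, and joystick
# deflections then adjust that function manually.
CAMERA_FUNCTIONS = ("pan", "tilt", "zoom", "focus", "iris")

class HybridCameraControl:
    def __init__(self):
        self.selected = None
        self.values = {f: 0.0 for f in CAMERA_FUNCTIONS}

    def voice_select(self, word):
        """Discrete selection -- the kind of command voice handled well.
        Unrecognized words leave the current selection unchanged."""
        if word in CAMERA_FUNCTIONS:
            self.selected = word
        return self.selected

    def joystick_input(self, deflection):
        """Continuous adjustment -- kept manual for fine positioning."""
        if self.selected is not None:
            self.values[self.selected] += deflection
        return self.values

ctrl = HybridCameraControl()
ctrl.voice_select("zoom")      # discrete step by voice
ctrl.joystick_input(0.5)       # fine continuous adjustment by hand
```

The design choice mirrors the study's findings: the error-prone continuous motion stays on the manual channel, while voice handles only the discrete selection step.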

These results were achieved with a particular task, manipulator, and camera control system. A voice recognizer simulation was used that had the advantage of 100% recognition and the possible disadvantage of slower response time. An actual voice recognizer will not perform this well. With decreasing recognition rates, several things will probably occur, although it is difficult to precisely quantify the magnitude of the effects. For example, one study (Casali et al., 1988) found a 17% increase in completion time for a data entry task when the recognition rate dropped from 99 to 95%, and a 50% increase in completion time when the rate dropped from 99 to 91%. It was also found that each lowered level of recognition produced a significant decline in subjective acceptance of the system.
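As a rough illustration of how steep this penalty curve is, the two reported data points can be linearly interpolated. The interpolation itself is an illustrative assumption, not a model from Casali et al. (1988); only the anchor points (+17% at 95%, +50% at 91%, relative to a 99% baseline) come from that study.

```python
def completion_time_penalty(recognition_pct):
    """Percent increase in task completion time vs. a 99% baseline,
    linearly interpolated between the Casali et al. (1988) data points
    (+17% at 95% recognition, +50% at 91%). The linear shape is an
    illustrative assumption, not part of the cited study."""
    points = [(99.0, 0.0), (95.0, 17.0), (91.0, 50.0)]
    if recognition_pct >= 99.0:
        return 0.0
    for (hi, p_hi), (lo, p_lo) in zip(points, points[1:]):
        if recognition_pct >= lo:
            frac = (hi - recognition_pct) / (hi - lo)
            return p_hi + frac * (p_lo - p_hi)
    return points[-1][1]  # beyond the reported range, clamp at +50%
```

Note the slope roughly doubles below 95% recognition (about 4.25 points of penalty per lost point of accuracy, versus 8.25 above 91%), which is why the text warns that modest recognition losses compound quickly.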

Different tasks and control systems might produce different results, although it is believed that the trends discussed in this report are applicable across a wide variety of telerobotic tasks. Thus, it is contended that the results will have immediate application to the design of telerobotic workstations.

ACKNOWLEDGEMENTS

Support for this investigation was provided by the National Aeronautics and Space Administration through Contract NAS 9-17900 to Lockheed Engineering and Sciences Company.

REFERENCES

1. Bejczy, A. K., Dotson, R. S., Brown, J. W., and Lewis, J. L. (1982). Voice control of the space shuttle video system. In Proceedings of the 17th Annual Conference on Manual Control (pp. 627-640). Pasadena, CA: Jet Propulsion Laboratory and California Institute of Technology.

2. Bierschwale, J. M. (1987). Speech versus keying commands in a text-editing task. Unpublished master's thesis, Texas A&M University, College Station, TX.

3. Bierschwale, J. M., Sampaio, C. E., Stuart, M. A., and Smith, R. L. Development of a vocabulary for voice input of camera controls during a telerobotic task. (NASA Tech. Report JSC-23706). Houston, TX: NASA Lyndon B. Johnson Space Center.

4. Casali, S. P., Dryden, R. D., and Williges, B. H. (1988). The effects of recognition accuracy and vocabulary size of a speech recognition system on task performance and user acceptance. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 232-236). Santa Monica, CA: Human Factors Society.

5. Green, P., and Wei-Haas, L. (1985). The rapid development of user interfaces: Experience with the Wizard of Oz method. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 470-474). Santa Monica, CA: Human Factors Society.

6. Murray, J. T., Van Praag, J., and Gilfoil, D. (1983). Voice versus keyboard control of cursor motion. In Proceedings of the Human Factors Society 27th Annual Meeting (p. 103). Santa Monica, CA: Human Factors Society.

7. Simpson, C. A., McCauley, M. E., Roland, E. F., Ruth, J. C., and Williges, B. H. (1985). System design for speech recognition and generation. Human Factors, 27, 115-141.


SPACE STATION FREEDOM COUPLING TASKS: AN EVALUATION OF THEIR TELEROBOTIC AND EVA COMPATIBILITY

Carlos E. Sampaio, John M. Bierschwale, Terence F. Fleming, and Mark A. Stuart
Lockheed Engineering and Sciences Company

Research directed by Jay Legendre, Manager, Remote Operator Interaction Lab, NASA JSC.

INTRODUCTION

After its completion, Space Station Freedom will continue to require a great deal of maintenance and support work in order to maintain daily operations. Dextrous manipulators, including the Flight Telerobotic Servicer, the Special Purpose Dextrous Manipulator, and the Japanese Experimental Module Fine Arm, will not only be critical to the performance of these tasks but may actually be the primary system devoted to the execution of many of them.

Among the tasks to be commonly performed will be the coupling and uncoupling of fluid connectors designed to provide remote resupply of liquids and gases in orbit (NASA, 1989). This will be done using various quick disconnect (QD) couplings designed to mate and demate repeatedly without leakage. At present, several designs exist which allow the couplings to be quickly mated and demated by an extravehicular astronaut. While it is critical that these couplings be capable of manipulation by the suited astronaut, it is equally critical that these couplings be capable of successful operation with a telerobotic manipulator in order to reduce the likelihood of these hazardous extravehicular operations in the first place. Consequently, these couplings necessitate a design that is compatible with both modes of operation.

QD coupling designs and methods of actuation can vary widely. The coupling's contents, the amount of pressure it will have to sustain, the amount of flow it will need to accommodate, as well as several other factors, all have a bearing on the coupling's final form. Clearly, aboard Freedom, the varying conditions under which the different QDs operate will necessitate that their designs be different as well. Just as clear, however, is the concern that a proliferation of coupling designs will, at best, often result in uncertainty in a coupling's operation when encountered, and at worst, result in unsuccessful mating or even loss of fluid or pressure as a result of implementing the incorrect coupling process. Although the size and action of the couplings will clearly need to vary, it is preferable that a similar operation concept be shared over the coupling points aboard Freedom in order to reduce the likelihood of using the incorrect procedure.

It is widely held that in the vast majority of cases, a task that has been designed to be telerobotically compatible will be compatible with the extravehicular astronaut as well (Newport, 1989). This study, conducted in the Remote Operator Interaction Laboratory (ROIL) at NASA's Johnson Space Center (JSC), evaluated subjects' abilities to mate and demate QD couplings of varying design both telerobotically and manually. In a previous study assessing various telerobotic control modes, a manual condition was included as a representation of the optimal performance to strive for in the design of a space glove (Hannaford, 1989). Therefore, the manual condition in this study is similarly included as a baseline to reasonably approximate extravehicular activity (EVA).

In collaboration with various telerobotic interface development facilities, including the ROIL, Symetrics Inc. has been iteratively designing fluid couplings whose operation is intended to be telerobotically as well as EVA compatible. One of these iteratively designed couplings was among the four coupling designs evaluated in this investigation. Thus, the hypothesis of this study proposes that the coupling designed to be telerobotically and EVA compatible will be mated and demated the most quickly and be most preferred subjectively for both the telerobotic and manual conditions.

METHOD

SUBJECTS

Four subjects participated voluntarily in this study. In order to minimize learning effects associated with the various systems involved, all subjects had extensive experience with the telerobotic and viewing systems employed in the study. None of the subjects had any experience operating the QD couplings prior to their participation.

APPARATUS

Three equipment systems were employed in the ROIL: a telerobotic system, a viewing system, and a task support structure. The telerobotic system consisted of a Kraft force-reflecting master-slave manipulator. The viewing system consisted of three camera views displayed on three monitors: two 21-inch monitors and one 9-inch monitor. The 21-inch monitors displayed close-up views of the couplings from both front and rear, while the 9-inch monitor displayed an overall view providing the subject with information regarding the orientation of the manipulator to the task piece. The task support structure consisted of a 72-inch by 48-inch metal frame upon which each coupling was attached one at a time during testing. As demonstrated in Figure 1, the designs of the four couplings included in this study differed considerably, as did their actuation.

Coupling A was demated by grasping the outer sleeve between the two flanges and applying axial force toward the flex hose. It was demated when enough force was applied to overcome the breakout force of the coupling. Mating occurred by aligning the coupling onto the nipple end and applying axial force until the outer sleeve locked back into place. This was the customized coupling designed specifically by Symetrics to be telerobotically and EVA compatible. The flanges of the outer spool-shaped sleeve were designed to be slightly wider than the telerobotic grippers. This allowed some compliance in grappling the fixture while still providing a sufficient brace in order to apply the axial force necessary for demating and mating. Another aspect of coupling A's design which did not exist on the other couplings was a chamfering of the entrance at a 45 degree angle in order to guide the nipple portion into the coupling. It was felt that these compliant features would also lead to enhanced manual operation of the coupling.

Coupling B had a mechanism very similar to coupling A's. The narrow outer ring was pulled toward the flex hose until the breakout force of the coupling was overcome and the coupling was demated. Mating also occurred by aligning the coupling onto the nipple end and applying axial force until the coupling portion locked back into place.

Demating coupling C required depression of two detents, one on either side of a knurled aluminum ring. Once the detents were depressed, the aluminum ring would slide toward the flex hose and the coupling portion could be pulled away. Mating required aligning the coupling portion onto the nipple end and applying force axially until the detents engaged.

Coupling D had a lever-actuated demating process. The coupling's lever was pushed toward the hard-mounted nipple end. When the lever was pushed beyond a certain point (approximately 45 degrees), demating automatically occurred. Mating required aligning the coupling and applying axial force onto the nipple end until the lever restored itself to the vertical position.

It is important to note that the task performed in this study does not represent the entire coupling process. The experimental task consisted of, in effect, the soft-latch phase of the coupling process, where the coupling is mated or demated but the actual flow of fluid has not been affected. With each of these couplings, the flow of fluid would need to be



Figure 1. Schematic diagrams of the four couplings evaluated in this study.

turned on or off in an additional step not included in the task. That phase of the coupling process would involve the use of an added tool or a modification to the end effector which would drive the coupling into the fully opened or closed position. Since that phase of the process has yet to be defined, it was of interest to the experimenters to evaluate the compatibility of the mating and demating components of the task which could be addressed at this time.

DESIGN

This study implemented a 2 modality (manual and telerobotic) by 4 coupling (couplings A, B, C, and D) within-subjects design. The modality and the coupling sequence were counterbalanced as demonstrated in Table 1.
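A counterbalanced presentation order of this kind is commonly generated from a Latin square, in which each condition appears once per serial position across subjects. The sketch below is a generic construction for four couplings; the specific assignment used in Table 1 may differ.

```python
def latin_square(conditions):
    """Build an n x n Latin square: row i is the condition order for
    subject i, and each condition appears exactly once in every serial
    position. (A generic construction for illustration; the actual
    Table 1 assignment may use a different balanced ordering.)"""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

orders = latin_square(["A", "B", "C", "D"])
# Subject 1 runs A,B,C,D; subject 2 runs B,C,D,A; and so on, with the
# modality order (manual first vs. telerobotic first) varied separately.
```

With four subjects and four couplings, this construction guarantees every coupling is tested once in each ordinal position, so position-based learning effects cancel across the group.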

PROCEDURE

To begin each testing session, subjects were introduced to the purpose and procedure of the study as well as the basic layout of the cameras, task, and robotic system. Since subjects were already familiar with the operation of the robotic and viewing systems employed in the ROIL, no instruction was necessary regarding these aspects of the task. Subjects began the session by manipulating a coupling either manually or telerobotically, depending on their particular counterbalancing sequence. Each coupling was demated and


mated three times in each modality. The experimenter kept performance time by means of a hand stopwatch and recorded those times on a data collection sheet, where errors were logged as well. An error was counted only if the coupling portion was dropped, at which point the experimenter reset the coupling in the fully mated position. Following a set of three trials with each coupling, subjects filled out a short questionnaire with rating scales concerning workload, discomfort, and various task-related issues. Once all the couplings had been completed, subjects filled out a final questionnaire for each modality where they rated the couplings in comparison to one another.

TABLE 1.

Counterbalancing sequence for couplings and modality across subjects (M = manual condition, T = telerobotic condition). [The subject-by-sequence assignments are not legible in the source scan.]

RESULTS AND DISCUSSION

Analysis of variance performed on the data showed there were clear trends in both the performance and the subjective data. Table 2 presents the group means for many of the performance and subjective measures. Due to the very small number of errors occurring in any of the trials, analysis of the error data resulted in no significant findings and is not presented in the table.

It was hypothesized that as a result of the compliant structures built into coupling A, demating and mating it would be faster than other couplings without these structures built into them. Data from the telerobotic trials showed that differences in performance time across the couplings were significant (F (3,3) = 4.372, p < .05). A Duncan's pairwise comparison performed on the data showed that the source of significance came largely from coupling C being significantly slower than all of the other couplings. Due primarily to the small variance in the manual condition, differences in performance time did not reach significance for these trials. A Duncan's pairwise comparison on these data, however, did show that performance time for coupling A was significantly faster than for coupling C. As anticipated, it appears that for both modalities, coupling A was faster -- in some cases significantly faster -- to demate and mate than the other couplings.

TABLE 2.

Group means for performance and subjective measures.

                                  Telerobotic              Manual
  Measure                        A    B    C    D        A    B    C    D
  Perform. time per trial (s)    66   77   150  168      2.4  3.7  6.5  3.8
  Overall rating (1 to 7)        1.5  2.0  5.3  3.8      1.8  2.5  5.0  2.5
  Grip acceptability (1 to 7)    1.3  2.8  3.5  2.8      1.3  1.8  3.8  1.3
  Mental workload (1 to 10)      3.0  2.5  6.8  4.0      Not addressed
  Phys. discomfort (1 to 7)      2.5  1.5  3.8  2.5      Not addressed

It was also felt that subjective reactions to the couplings would show preference for the custom coupling in both modalities. The overall rating data were collected on seven-point scales, with 1 corresponding to "completely acceptable" and 7 corresponding to "completely unacceptable." As shown in Table 2, these data revealed reliable differences, this time for both telerobotic and manual ratings. The data regarding the telerobotic preference revealed an F (3,3) = 7.981 with a p < .01. Pairwise comparisons showed that couplings A and B were rated significantly more acceptable than coupling C,


while coupling A was significantly more acceptable than coupling D. For the manual ratings, the data showed F (3,3) = 8.007, with p < .01. In this case pairwise comparisons indicated that coupling C was significantly less acceptable than all others. The comparable ratings attributed to couplings A and B appeared to be the result of their similar mechanisms and operation. The shape of the outer sleeve and coupling A's chamfering were all that varied between the two.
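The within-subjects F ratios reported here divide the mean square for coupling by the mean square of the subject-by-coupling residual. The sketch below assumes the standard one-way repeated-measures formulation; any data fed to it would be placeholders, not the study's raw measurements.

```python
def repeated_measures_F(data):
    """One-way repeated-measures F for data[s][c] = score of subject s
    under condition c: F = MS_condition / MS_(subject x condition).
    (Standard textbook formulation, shown for illustration; the study's
    raw data are not reproduced here.)"""
    n_s, n_c = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n_s * n_c)
    subj_means = [sum(row) / n_c for row in data]
    cond_means = [sum(data[s][c] for s in range(n_s)) / n_s
                  for c in range(n_c)]
    ss_cond = n_s * sum((m - grand) ** 2 for m in cond_means)
    ss_err = sum((data[s][c] - subj_means[s] - cond_means[c] + grand) ** 2
                 for s in range(n_s) for c in range(n_c))
    df_cond, df_err = n_c - 1, (n_s - 1) * (n_c - 1)
    return (ss_cond / df_cond) / (ss_err / df_err)
```

Removing each subject's own mean before computing the error term is what makes the design sensitive despite only four subjects: consistent between-subject speed differences do not inflate the denominator.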

Using the same seven-point scale described above, data regarding the acceptability of obtaining the proper grip did not reach significance for the telerobotic condition, although the pairwise comparisons did show that coupling A was rated significantly more acceptable than coupling C. For the manual condition this difference did reach significance, F (3,3) = 5.368, p < .05, with the comparisons among the means indicating that coupling C was significantly less acceptable than all three other couplings.

After the telerobotic trials, data were also collected on mental workload and physical discomfort. Data from a Modified Cooper-Harper mental workload rating scale reached significance, F (3,3) = 3.860, p = .05. The pairwise comparisons showed that couplings A and B were rated significantly less mentally taxing than coupling C. Data from either question addressing physical discomfort did not reach significance, although the pairwise comparisons tended to show couplings A and B as less demanding than coupling C. These effects seemed the result of the rather straightforward mechanism implemented on couplings A and B. Subjects only had to grab and pull to demate couplings A and B, while coupling C required depression of the detents on either side of the detention sleeve. This orientation was often very difficult to achieve with the robotic grippers, typically requiring repeated attempts before demate finally occurred. Issues of mental workload and physical discomfort were not addressed after the manual trials due to their very short duration.

CONCLUSION

Of the couplings included in this study, several design components were found to be of interest. With respect to the operation of the couplings, the various concepts resulted in differing reactions from the subjects.

Regarding the demate process, subjects felt coupling D included an attractive feature by requiring little force to demate, achieving it simply by forcing the lever over. However, maintaining control of the coupling portion after demate proved difficult for teleoperation, although somewhat easier for manual operation. Demating coupling C showed that depression of detents is a very delicate operation to perform with the telerobot and, to some extent, manually as well. Without some method of fixing the orientation of the detents, it is very difficult to engage both at the same time, particularly with the telerobot. This was compounded by the fact that the depression had to be combined with the axial force necessary to demate. Because of coupling B's small outer ring, demating was at times found to be clumsy with it as well. This was particularly the case for the telerobotic condition, but at times the manual condition was awkward as well.

Mating the couplings proved, on the whole, a far simpler process. Couplings B and D required close alignment which, when met, resulted in a very straightforward mating process. Coupling C incorporated a longer nipple portion. This assisted operation in both modalities by helping to guide the coupling into the mated position when the axial force was applied.

While coupling A did appear the better design in this evaluation, there clearly were facets which could be improved. Although the large flanges on the outer sleeve assisted in mating, they also might allow the telerobot or EVA astronaut to accidentally bump or deactivate the coupling prior to full actuation. Also, the chamfering performed on the entry of the


coupling was perhaps angled too far. The 45 degree entrance guided the nipple portion into the coupling, but allowed sufficient misalignment such that the coupling often bound just prior to fully mating. Symetrics has recognized these concerns and has provided the ROIL a coupling addressing these issues by making two changes in the design. New shorter flanges still allow the necessary support for the axial forces required, but greatly reduce the likelihood of accidental deactivation. The entry to the coupling was also chamfered to approximately 30 degrees rather than 45. This assisted in guiding the nipple into the coupling but reduced the potential for binding by lessening the amount of misalignment possible.

The purpose of this study was not to conceive the final coupling design. Rather, it was intended as a step along an iterative process. The newly modified coupling will be included in a series of further controlled, as well as subjective, evaluations. This is part of ongoing work in the ROIL designed to enhance the overall interface by improving design at both the teleoperator and telerobot ends of the system.

ACKNOWLEDGEMENTS

Support for this investigation was provided by the National Aeronautics and Space Administration through Contract NAS 9-17900 to Lockheed Engineering and Sciences Company.

The authors would like to acknowledge John Calvin and Steve Calvin of Symetrics Incorporated as well as Ray Anderson of McDonnell Douglas Space Systems Company for their assistance in providing materials and information crucial to this evaluation.

REFERENCES

1. Hannaford, B., Wood, L., Guggisberg, B., McAffee, D., and Zak, H. (1989). Performance evaluation of a six-axis generalized force-reflecting teleoperator. (JPL Publication 89-18). Pasadena, CA: NASA Jet Propulsion Laboratory.

2. National Aeronautics and Space Administration (1989). Interface design considerations for robotic satellite servicers. (NASA Report JSC-23920). Houston, TX: Satellite Servicing Systems Project Office.

3. Newport, C. (1989, June). Application of subsea robotics maintenance experience to satellite servicing. Paper presented at the Satellite Services Workshop IV, NASA Johnson Space Center, Houston, TX.


THE EFFECTS OF SPATIALLY DISPLACED VISUAL FEEDBACK ON REMOTE MANIPULATOR PERFORMANCE

Randy L. Smith and Mark A. Stuart
Lockheed Engineering and Sciences Company

Research directed by Jay Legendre, Manager, Remote Operator Interaction Lab, NASA JSC.

INTRODUCTION

Telerobotics will be heavily used for the assembly, maintenance, and servicing of NASA's Space Station Freedom. The visual system may well be the single most important source of information for the operator of the various telerobot systems that will be used. When performing a remote manipulation task, the operator can view the remote scene either by looking through a window or with the use of cameras. For most of the tasks that will be performed on the Space Station, a direct view of the work area will either not be available or will not provide the necessary visual cues for teleoperation. Therefore, cameras will provide the primary mode of feedback to the operator concerning manipulator position, orientation, and rate of movement.

Operators normally use the body of the manipulator as a reference point when making control inputs, but if the Space Station's external cameras are placed such that the camera view is not normal with respect to the manipulator (normal refers to placement approximately behind the shoulder of the manipulator arm), then the visual feedback will be spatially displaced. At a fundamental level, displacement refers to the absence of a one-to-one spatial correspondence between control inputs and perceived motion (either perceived directly through a window view or perceived on monitors). Spatial displacement is an unfortunate consequence of attempts to provide visual information to the operator when the camera placement is not normal, and it should be avoided if at all possible. If displaced visual feedback is presented to the operator, system performance can be seriously degraded due to operator disorientation. This is important for Space Station Freedom telerobotic tasks because it is possible that cameras may be placed on Station structure such that the human operator receives displaced visual feedback.

If control inputs are referenced to the body of the manipulator (analogous to the "world" mode in industrial robotics), the following descriptions can be made. Spatially displaced feedback can take on four different forms: angular displacement, where the reference point is displaced horizontally or vertically (see Figure 1 for a depiction of horizontally displaced angular feedback and Figure 2 for vertically displaced angular feedback); reversal displacement, where the camera is facing the arm instead of being placed behind it (see Figure 3); inversion-reversal displacement, where the camera is upside down and is facing the arm (see Figure 3); and inversion displacement, where the camera is upside down with respect to the manipulator arm (see Figure 3).
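The three discrete displacement forms can be summarized as sign changes applied to the operator's control input as it appears on the monitor. The planar mapping below is an illustrative simplification of the definitions in the text, not the laboratory's actual camera geometry.

```python
def apparent_motion(dx, dy, displacement):
    """On-screen motion produced by a rightward/upward control input
    (dx, dy) under each camera placement. The sign conventions are a
    simplified planar reading of the displacement definitions in the
    text, shown for illustration only."""
    if displacement == "normal":
        return (dx, dy)            # one-to-one correspondence preserved
    if displacement == "reversal":
        return (-dx, dy)           # camera faces the arm: left/right swap
    if displacement == "inversion":
        return (-dx, -dy)          # upside-down camera: both axes flip
    if displacement == "inversion-reversal":
        return (dx, -dy)           # both effects combine: up/down swap
    raise ValueError("unknown displacement: " + displacement)

# Under reversal, a rightward input appears leftward on the monitor --
# the re-mapping the operator must perform mentally on every input.
```

Written this way, it is easy to see why the conditions differ in difficulty: each displacement demands a different standing correction, and inversion flips both axes at once.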

It has been suggested that these spatial displacements adversely affect operator performance to varying degrees. The literature states that direct manipulation tasks become progressively more disturbed, with angular displacement being the least disruptive and inversion displacement being the most disruptive (Smith and Smith, 1962). Direct manipulation is where a person manipulates an object with their bare hands or with a simple, rigid tool such as a stylus or screwdriver. A remote manipulation task is where the person manipulates a mechanism (e.g., a hand controller) which transfers the operator's motions to a remotely placed mechanical device. The actual manipulation of the object is spatially removed or distant from the operator.

Early studies on spatial displacement were conducted by Helmholtz, Kohler, Smith and

104

Page 115: Crew Interface Analysis: Selected Articles on Space Human ... · Crew Interface Analysis: Selected Articles on Space Human Factors Research, ... Bagian, Greenisen, Schafer, ... A


Figure 2. Angularly displaced feedback, vertically displaced about the manipulator (side view shown).

Figure 1. Angularly displaced feedback, horizontally displaced about the manipulator (top view shown).


Figure 3. Spatially displaced feedback -- normal, reversal, inversion, and inversion-reversal (side view shown).


Smith, et al. Smith and Smith (1962) were interested in perceptual-motor integration, specifically the effects of spatial and temporal displacements of the visual feedback of motion. According to Smith and Smith (1987), their work was incorporated into the neurogeometric organization of behavior in which "space perception and visually controlled movement are learned, the nature and degree of learning are determined by the nature and degree of spatial compliance between muscular control and sensory input." There have been numerous studies of viewing systems for telerobotic systems; however, it is difficult to draw coherent and generalized conclusions from them (Crooks, Freedman, and Coan, 1975; Horst, Rau, LeCocq, and Silverman, 1983; Chu and Crooks, 1980; Clarke, Hamel, and Draper, 1983; Bodey and Cepolina, 1973; Huggins, Malone, and Shields, 1973; Onega and Clingman, 1973; Fornoff and Thornton, 1973; and Clarke, Handel, and Garin, 1982).

OBJECTIVES

One objective of this investigation was to quantify whether the above-mentioned results from the literature hold true for remote manipulation tasks performed with a remote manipulator arm. It was also of interest to informally evaluate how a direct view of the worksite compares to a normal camera-aided view of the worksite. This secondary evaluation was an attempt to determine if the results obtained in a previous evaluation (Smith, 1986) of remote manipulator operators, who had both direct and normal camera views, could be replicated. The present investigation examined operators performing a remote manipulation task while exposed to the following viewing conditions:

• direct view of the work site (baseline condition)
• normal camera view (zero-degree displacement)
• reversed camera view (180-degree displacement)
• inverted/reversed camera view
• inverted camera view

METHOD

Data were collected from subjects as they performed a remote manipulation task while exposed to the different viewing conditions. All six subjects used the five viewing conditions.

SUBJECTS

Six volunteer subjects were used in this evaluation. All were experienced in the operation of the Kraft robotic system.

APPARATUS

Testing took place in the Man-Systems Telerobotics Laboratory at NASA's Johnson Space Center (JSC).

A Kraft Telerobotics force-reflecting, 6 degree-of-freedom, master-slave remote manipulator was used to perform the remote manipulation task. A Javelin CCD color camera and a Mitsubishi 20-inch color video monitor were used to present the camera views to the subjects. The camera in each position was exactly 10 feet 2 inches from the remote manipulation task, with the focus and zoom controlled. The direct view was from a distance that was controlled so that the visual angle subtended at the eye by the task piece was approximately the same for both camera and direct views.
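The equal-visual-angle constraint described above is simple trigonometry. The sketch below illustrates the calculation; only the 10-foot-2-inch camera distance comes from the text -- the task-piece size and monitor-image size are hypothetical values chosen for illustration.

```python
import math

def visual_angle_deg(size_ft, distance_ft):
    """Visual angle (degrees) subtended at the eye by an object of the
    given size viewed from the given distance."""
    return math.degrees(2 * math.atan(size_ft / (2 * distance_ft)))

camera_distance = 10 + 2 / 12   # 10 ft 2 in, as stated in the text
task_size = 0.5                 # hypothetical 6-inch task piece
angle = visual_angle_deg(task_size, camera_distance)

# Viewing distance at which a hypothetical 0.4-ft image of the task
# piece subtends the same visual angle as the real task piece does.
image_size = 0.4
direct_distance = image_size / (2 * math.tan(math.radians(angle) / 2))
print(f"{angle:.2f} deg; matched viewing distance {direct_distance:.2f} ft")
```

Because visual angle is nearly linear in size over distance at these scales, the matched distance is approximately the original distance scaled by the ratio of image size to object size.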

The task consisted of grasping and moving six pyramid-shaped wooden blocks so that they could be dropped inside a metal box located six inches from the blocks on the taskboard. This particular task was selected because it is functionally similar to the multi-axis translation and alignment tasks which will be performed by the telerobots on Space Station Freedom.

Each subject sat behind a barrier for the camera-aided viewing conditions so that a direct view of the worksite was not possible. For the direct viewing condition, the subjects faced the Kraft manipulator arm with a zero-degree displacement view.


VARIABLES

The independent variable in this study was the camera viewing condition (normal, image reversal, image inversion/reversal, and image inversion). Note that the noncamera viewing condition (direct view) was not a level of the independent variable, but was only used as a baseline measure. This evaluation used a one-factor repeated measures design -- all subjects were exposed to all levels of the independent variable. The dependent variable in this evaluation was task performance time.

PROCEDURES

Subjects were instructed to perform the remote manipulation task quickly and accurately. All subjects performed the manipulation task with the direct view first. This served as the baseline condition against which the performance times for all the other viewing conditions could be compared. Each subject then performed the manipulation task for each of the four different camera-viewing conditions. Each subject performed the task a total of five times. The order in which subjects were exposed to the four different camera views was counterbalanced to control for order effects. Task performance times were collected throughout the test sessions.
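The counterbalancing step above can be implemented with a balanced Latin square. The paper does not say which scheme was used; the Williams design sketched below is one standard choice for an even number of conditions, shown purely as an illustration.

```python
def williams_square(conditions):
    """Balanced Latin square (Williams design) for an even number of
    conditions: each condition appears once in each ordinal position,
    and each condition immediately follows every other exactly once."""
    n = len(conditions)
    base = [0]
    for k in range(1, n):
        # Standard Williams base sequence: 0, 1, n-1, 2, n-2, ...
        base.append((k + 1) // 2 if k % 2 == 1 else n - k // 2)
    return [[conditions[(b + i) % n] for b in base] for i in range(n)]

views = ["normal", "reversed", "inverted/reversed", "inverted"]
for row in williams_square(views):
    print(row)
```

With six subjects and four conditions, the four rows would have to be reused for two subjects; a fully balanced assignment needs a multiple of four subjects.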

RESULTS AND DISCUSSION

The task completion time data were collected and summarized. The average performance times (in minutes) were as follows:

    Direct (baseline)   0.59
    Normal              1.20
    Inverse/Reverse     5.00
    Reverse             6.02
    Inverse             9.51

The task completion times were then statistically analyzed with a repeated measures analysis of variance. The ANOVA showed that the main effect of viewing condition, F(3,15) = 7.72, p < 0.05, was statistically significant. Because of this result, a Newman-Keuls pairwise comparison test was then applied to the data. It revealed that the performance times for the inverted camera view were significantly (p < 0.05) worse than those for all of the other viewing conditions. This analysis also revealed that the reversed viewing condition was significantly worse than the normal viewing condition. The performance times for the inverted camera viewing condition were also significantly worse than the normal viewing condition performance times at p < 0.01.
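The one-factor within-subjects ANOVA reported above can be sketched as follows. The subject-by-condition times are hypothetical (only the group means in the text are real); the partitioning into condition, subject, and error sums of squares is the standard computation that yields the F(3,15) degrees of freedom reported.

```python
from statistics import fmean

# Hypothetical completion times (minutes), 6 subjects x 4 camera conditions
# (normal, inverse/reverse, reverse, inverse) -- NOT the study's raw data.
times = [
    [1.1, 4.6, 5.8, 9.0],
    [1.3, 5.2, 6.4, 10.1],
    [1.0, 4.9, 5.7, 9.3],
    [1.4, 5.3, 6.3, 9.8],
    [1.2, 4.8, 6.1, 9.6],
    [1.2, 5.2, 5.8, 9.3],
]
n_subj, n_cond = len(times), len(times[0])
grand = fmean(v for row in times for v in row)
cond_means = [fmean(row[j] for row in times) for j in range(n_cond)]
subj_means = [fmean(row) for row in times]

# Partition the total sum of squares: conditions + subjects + error.
ss_cond = n_subj * sum((m - grand) ** 2 for m in cond_means)
ss_subj = n_cond * sum((m - grand) ** 2 for m in subj_means)
ss_total = sum((v - grand) ** 2 for row in times for v in row)
ss_error = ss_total - ss_cond - ss_subj

df_cond = n_cond - 1                    # 3
df_error = (n_cond - 1) * (n_subj - 1)  # 15
F = (ss_cond / df_cond) / (ss_error / df_error)
print(f"F({df_cond},{df_error}) = {F:.2f}")
```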

It was also of interest to compare the performance of subjects under the direct-viewing condition to their performance under the normal camera-viewing condition. A statistically valid comparison between these two conditions was hoped for, but since all subjects performed the direct-viewing condition first and the normal-viewing condition either second, third, fourth, or fifth, the results of such an analysis could well be contaminated by differential amounts of training. A valid statistical comparison could not be made between these two viewing conditions because the experimental design was used to counterbalance the four camera-viewing conditions, not two. An informal comparison was nevertheless made for the sake of general interest. This informal comparison involved partitioning the normal viewing-condition data from the rest of the camera-viewing data; these data and the direct-viewing data were analyzed with a t-test. This analysis revealed that the task performance times for the normal viewing condition were significantly slower (p < 0.05) than for the direct viewing condition. It is recommended that further studies conduct an analysis of these two viewing conditions under proper experimental conditions so that an accurate assessment can be attained.
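The informal t-test above is a paired comparison, which can be sketched as below. The per-subject times are hypothetical illustrations, not the study's raw data, and -- as the text notes -- any such comparison here is confounded by practice order.

```python
import math
from statistics import mean, stdev

# Hypothetical per-subject completion times (minutes): direct view vs.
# normal camera view. Illustrative values only.
direct = [0.52, 0.61, 0.55, 0.63, 0.58, 0.65]
normal = [1.05, 1.31, 1.12, 1.28, 1.17, 1.27]

# Paired t statistic: mean of within-subject differences over its
# standard error, with n - 1 degrees of freedom.
diffs = [b - a for a, b in zip(direct, normal)]
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
print(f"paired t({len(diffs) - 1}) = {t:.2f}")
```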

It is clear from the results of this study that spatially displaced visual feedback adversely affects remote manipulation performance. To get an indication of how views of the remote manipulator through a CRT monitor change with respect to hand controller movements for the four types of visual feedback studied in this evaluation, refer to Figure 4. In this figure, the object in the monitor is the remote manipulator. The arrows indicate the direction that the manipulator will move in the video image with respect to the specific hand controller movement for the four different camera placements. For example, in the inverted visual feedback condition, a rightward movement of the hand controller will result in the monitor image of the manipulator moving leftward, and an upward hand controller movement will result in the monitor image of the manipulator moving downward.

Figure 4. Viewed manipulator movements with respect to controller movements for four types of visual feedback (panels: NORMAL, REVERSAL, INVERSION, INVERSION-REVERSAL).
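The four feedback conditions can be summarized as sign changes applied to the controller input. The text confirms only the inversion case (rightward appears leftward, upward appears downward); the reversal and inversion-reversal mappings below follow the geometric descriptions, but the sign conventions are assumptions.

```python
# How a hand-controller input (dx, dy) appears on the monitor under the
# four feedback conditions. Convention (assumed): +x = rightward and
# +y = upward on both the controller and the screen.
FEEDBACK = {
    "normal":             lambda dx, dy: ( dx,  dy),
    "reversal":           lambda dx, dy: (-dx,  dy),  # camera faces the arm
    "inversion":          lambda dx, dy: (-dx, -dy),  # camera upside down
    "inversion-reversal": lambda dx, dy: ( dx, -dy),  # both flips composed
}

# Under inversion, a rightward + upward input appears leftward + downward:
print(FEEDBACK["inversion"](1, 1))   # (-1, -1)
```

Note that under this convention, inversion-reversal is exactly the composition of the reversal and inversion flips.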

The results obtained were not quite as would be expected based upon the previously mentioned studies of camera-aided viewing of direct manipulation tasks. The difference observed in this evaluation was that, in ranking the four viewing conditions, the reversed camera view ranked third, while the literature indicated that inversion/reversal should rank third. The reversed viewing condition not only took over a minute longer, on average, to complete than the inverted/reversed condition, but it was also significantly worse than the normal viewing condition performance time. The differences obtained in this evaluation could be due to the fact that the remote manipulation task used in the present study involved axes of movement different from those involved in the direct manipulation tasks reported in the literature. The axis of movement, the quantity of movement per axis, and the type of displacement are interrelated in a fashion that probably affects performance times; however, quantitative determination of these interrelationships is beyond the scope of this preliminary evaluation. The differences could also simply be due to the fact that the cited direct manipulation results were based upon data gathered from many different studies, while the remote manipulation data came from only one study. More remote manipulation studies will need to be conducted before this conclusion can be drawn.

This study did informally replicate the results of the previously mentioned study by Smith (1986), which found, among other things, that performance with a normal camera view of the worksite is significantly worse than performance with a direct view. This result is no doubt partially due to the lack of binocular disparity that accompanied the camera viewing conditions.

CONCLUSIONS

The results of this evaluation have important implications for the arrangement of remote manipulation worksites and the design of workstations for telerobot operations. This study clearly illustrates the deleterious effects that can accompany the performance of remote manipulator tasks when viewing conditions are less than optimal. Future evaluations should emphasize telerobot camera locations and the use of image/graphical enhancement techniques in an attempt to lessen the adverse effects of displaced visual feedback. For a further discussion of the effects of perturbed sensory feedback, see Smith, Smith, Stuart, Smith, and Smith (1989).

An important finding in this evaluation is the extent to which results from previously performed direct manipulation studies can be generalized to remote manipulation studies. Even though the results obtained were very similar to those of the direct manipulation evaluations, there were differences as well. This evaluation has demonstrated that generalizations to remote manipulation applications based upon the results of direct manipulation studies are quite useful, but they should be made cautiously.

ACKNOWLEDGEMENTS

Support for this investigation was provided by the National Aeronautics and Space Administration through Contract NAS9-17900 to Lockheed Engineering and Sciences Company.

REFERENCES

1. Bodey, C. E., and Cepolina, F. J. (1973). Shuttle-attached manipulator system requirements. In E. Heer (Ed.), Remotely Manned Systems (pp. 77-83). Pasadena, CA: California Institute of Technology.

2. Chu, Y., and Crooks, W. (1980). Man-machine communication in computer-aided teleoperator control systems. In Proceedings of the Human Factors Society 24th Annual Meeting (pp. 54-58). Santa Monica, CA: Human Factors Society.

3. Clarke, M., Hamel, W., and Draper, J. (1983). Human factors in remote control engineering development activities. In Proceedings, 31st Conference on Remote Systems Technology (pp. 8-16). Gatlinburg, TN: American Nuclear Society.

4. Clarke, M., Handel, S., and Garin, J. (1982). A comparison of color and black and white television for remote viewing in nuclear facilities. In Transactions of the American Nuclear Society (p. 239). Washington, DC: American Nuclear Society.

5. Crooks, W., Freedman, L., and Coan, P. (1975). Television systems for remote manipulation. In Proceedings of the Human Factors Society 19th Annual Meeting (pp. 428-435). Santa Monica, CA: Human Factors Society.

6. Fornoff, H., and Thornton, W. G. (1973). Experimental evaluation of remote manipulation systems. In E. Heer (Ed.), Remotely Manned Systems (pp. 43-61). Pasadena, CA: California Institute of Technology.

7. Horst, R., Rau, P., LeCocq, A., and Silverman, E. (1983). Studies of three-dimensional viewing in teleoperated systems. In Proceedings, 31st Conference on Remote Systems Technology (pp. 30-34). Gatlinburg, TN: American Nuclear Society.

8. Huggins, C. T., Malone, T. B., and Shields, N. L. (1973). Evaluation of human operator visual performance capability for teleoperator missions. In E. Heer (Ed.), Remotely Manned Systems (pp. 337-350). Pasadena, CA: California Institute of Technology.

9. Onega, G. T., and Clingman, J. H. (1973). Free-flying teleoperator requirements and conceptual design. In E. Heer (Ed.), Remotely Manned Systems (pp. 19-41). Pasadena, CA: California Institute of Technology.

10. Smith, K. U., and Smith, W. M. (1962). Perception and Motion. Philadelphia, PA: W. B. Saunders Company.

11. Smith, R. L. (1986). The effects of televised angular displacement of visual feedback on a remote manipulation task. Unpublished Master's Thesis, Texas A&M University.

12. Smith, R. L. (1988). Human visual requirements for control and monitoring of a space telerobot. In W. Karwowski, H. R. Parsaei, and M. R. Wilhelm (Eds.), Ergonomics of Hybrid Automated Systems I (pp. 233-240). Amsterdam: Elsevier.

13. Smith, T. J., and Smith, K. U. (1987). Feedback-control mechanisms of human behavior. In G. Salvendy (Ed.), Handbook of Human Factors. New York, NY: John Wiley & Sons.

14. Smith, T. J., Smith, R. L., Stuart, M. A., Smith, S. T., and Smith, K. U. (1989). Interactive performance in space: The role of perturbed sensory feedback. 3rd International Conference on Human-Computer Interaction, Boston, MA, September 18-22.


SIMULATION OF THE HUMAN-TELEROBOT INTERFACE ON THE SPACE STATION

Lockheed Engineering and Sciences Company

Research directed by Jay Legendre, Manager, Remote Operator Interaction Lab, NASA JSC.

INTRODUCTION

The Space Station is a NASA project which, when completed in the mid-1990s, will function as a permanently manned orbiting space laboratory. A part of the Space Station will be a remotely controlled Flight Telerobotic Servicer (FTS). The FTS, a project led by NASA's Goddard Space Flight Center, will be used to help assemble, service, and maintain the Space Station and various satellites. The use of the FTS will help ensure the safety and productivity of space-based tasks normally accomplished by astronauts working outside the pressurized spacecraft. For the short term, control of the FTS will depend primarily on the human operator. Since the human operator will be a part of the telerobotic system, it is important that the human-telerobot interface be well designed from a Human Factors perspective. It is critical that the components of this interface be designed so that the human operator's capabilities and limitations are best accommodated within the structure of specific task requirements. To emphasize the importance of a well-designed human-telerobot interface, one study found that simply selecting an appropriate control device, based upon the operator's capabilities and the requirements of the task, can more than double the productivity of a telerobotic system (O'Hara, 1986).

With the system development process becoming more complex and expensive, more emphasis is being placed on the evaluation of systems during early stages of the development cycle. The design of systems that include human operators is especially complex because overall system performance depends upon the interaction of the human operator, hardware components, and software components (Chubb et al., 1987). Adequately evaluating the performance of a system during the design cycle is becoming increasingly difficult when using the static evaluation tools traditionally available to the Human Factors engineer, such as job and task analysis (Geer, 1981). It is becoming more common for systems developers to use computer simulation as a design tool instead of hardware models (Gawron and Polito, 1985), and for Human Factors engineers to use computer simulation to enhance the use of static evaluation tools. This is because more sophisticated analysis tools are needed that will allow a controlled evaluation of the interaction among human operator, hardware components, and software components (Chubb et al., 1987).

This paper will cover the various uses of simulation, the elements of the human-telerobot interface, and how simulating the human-telerobot interface on the Space Station will result in a better-designed system. Before focusing the discussion specifically on the simulation of the human-telerobot interface, it will be useful to briefly define simulation and to cover the major uses of system simulation, independent of the type of system being simulated. There will then be a discussion of the areas of the human-telerobot interface and how simulation can contribute to a better-designed user interface from a Human Factors perspective.

USES OF SIMULATION

Simulation is the process of imitating or duplicating the actions or processes of some system in a controlled environment (Arya, 1985). Emphasis should be placed on the word "controlled." System simulation, whether hardware, computer, or a combination of the two, has been used for decades. This paper will describe four major uses of simulation.

One use of simulation is to study the effectiveness of various hardware/software components on overall system performance. The advantages of using simulation within this context are cost -- it is cheaper to simulate a system than it is to build one; time -- simulating a system is usually faster than building it; feasibility -- because of the size and complexity of some systems, it is not possible to evaluate them in the real world, so simulation serves the function of systems verification; safety -- some systems operate in dangerous environments and can only be evaluated safely with the use of simulation; and prediction -- with the use of simulation, a system's performance and processes can be sped up so that future behavior can be predicted (Arya, 1985).

A second use of simulation is to study the effects of various hardware/software components on simulated human performance. This approach utilizes mathematical models of human performance to assist the simulation process. In this approach, as in the one mentioned above, man-in-the-loop is not a part of the evaluation.

A third use of simulation is to study the effects of various hardware/software components on actual human performance. This approach can be taken in an attempt to match system components with operator capabilities and limitations in order to ensure optimal system and operator performance, and to add greater fidelity, and thus external validity, to the data that are gathered in the analysis.

The last use of simulation to be addressed in this paper is to train operators to eventually use a real-world system. The major benefits of simulation as a training aid are in the areas of scheduling -- training is not affected by weather or the need to perform operational missions; cost -- simulator training is significantly less expensive than prime system training; safety -- it reduces the exposure of operators and the prime system to the hazards of the operating environment; control of training conditions -- environmental and human interaction conditions that may be a part of the operating environment can be controlled; learning enhancement -- system malfunctions and environmental conditions can be included in the training; and performance enhancement -- critical missions that are difficult to train for in the real world can be included (Flexman and Stark, 1987).

As the above list indicates, simulation has significant usage as an aid in the development of pre-existent systems. It can have even greater significance in the design and development of novel systems -- systems that have never existed before and for which few direct comparisons to existent systems can be made. The human-telerobot system that will be used on the Space Station is such a novel system.

Even though industrial robots and teleoperators are heavily used in such areas as the nuclear industry and in underwater activities, there are major differences between these applications and the telerobot system to be used on the Space Station -- one of these being the zero-gravity factor. There is also a limited number of direct comparisons which can be made between the Remote Manipulator System (RMS) used on the Space Shuttle and the proposed telerobot system. The review of the literature concerning these systems has provided answers to some important design issues, but there are major limits to how far these data can be generalized to the human-telerobot interface on the Space Station. Laboratory evaluation of the effects of various hardware and software components on operator performance can, of course, provide answers and guidance, but perhaps greater fidelity can be attained with the use of simulation.

It is thus proposed that the use of simulation in the design and development of the human-telerobot interface on the Space Station will be very beneficial. Simulation should serve as an aid in the selection and design of hardware and software components to ensure maximum, error-free performance. Simulation should be especially worthwhile for its ability to simulate the effects of zero gravity on performance. Operator performance at manipulation tasks while in a one-gravity environment may well not be generalizable to weightless states. Simulation of the interface should also have the benefit of helping engineers to detect flaws in the design of components of the interface which would adversely affect system and/or operator performance. It is obviously important that any mistakes of this type be detected early, well before the design is finalized or manufacture of the system has occurred.

INFORMATION NEEDS OF THE OPERATOR

There are three broad areas of the human-telerobot interface where simulation can be of assistance: operator information needs, control devices, and workstation layout. These three areas are listed in Table 1. The information needs of the operator will vary depending upon the tasks to be performed. The operator will need information concerning the location and orientation of the telerobot in space, the health status of the telerobot, visual feedback from the viewing system, the status of any transportation devices, the status of the workpiece, and the status of the hardware in the control workstation.

Regarding visual feedback, the visual system may well be the single most important source of information for the operator (Smith and Stuart, 1988). Some of the issues related to the visual system are concerned with camera position and number, the spatial orientation of the image presented to the operator, and monitor type, placement, and number. For example, when performing a remote manipulation task in real time, the operator can view the remote scene either by looking through a window or with the use of cameras. For most of the tasks that will be performed in space, a direct view of the working area will either not be available or will not provide the necessary visual cues for teleoperation. Therefore, cameras will provide the primary mode of feedback to the operator concerning manipulator position, orientation, and rate of movement. Operators normally use the body of the manipulator as a reference point when making control inputs, but if the Space Station's external cameras are placed such that the camera view is not normal to the manipulator (normal refers to placement behind the shoulder of the arm), then the visual feedback will be spatially displaced. Spatial displacement is an unfortunate consequence of attempts to provide visual information to the operator when the camera placement is not normal, and it should be avoided if at all possible.

TABLE 1. Three areas of the human-telerobot interface

1. Information needs of the operator
   • Location of telerobot
   • Status of transportation devices
   • Status of workpiece
   • Status of workstation
   • Force feedback
   • Visual feedback
     Camera position and number
     Spatial orientation of image
     Monitor type, placement, number
     Illumination

2. Control devices considered
   • Miniature master controllers
   • 3 or 6 degree-of-freedom hand controllers
   • Exoskeleton controllers
   • Head-slaved controllers
   • Dedicated switches
   • Programmable display pushbuttons
   • Voice command systems
   • Computers

3. Telerobot workstation
   • Hardware layout
   • Software layout

Spatially displaced feedback can take on different forms: in angular displacement, the reference point is displaced horizontally within the sagittal plane or vertically within the median plane; in reversal, the camera faces the arm instead of being placed behind it; in inversion-reversal, the camera is upside down and faces the arm; and in inversion, the camera is upside down with respect to the manipulator arm. The image can also be displaced temporally -- there are time delays before the operator receives the visual feedback -- as well as size distorted -- the image is enlarged or reduced from its actual size. These spatial displacements adversely affect operator performance to varying degrees. Generally, they become progressively more disruptive, with angular displacement being the least disruptive and inversion being the most disruptive. Temporal displacement interrupts the intrinsic temporal patterning of motion and causes severe disruptions in behavior; much effort should be extended to prevent its occurrence. Size distortions generally do not affect performance to a great extent (Smith and Smith, 1962).

Other visual system issues include how an operator will use multiple views of the task area and how operators can best use non-stereoscopic cues to depth perception. Computer simulation of various task scenarios, with human operators working within various hardware and software mockups and with sophisticated scene generation techniques, can serve as an aid in determining what types of information are needed and what types of information presentation enhancements should be used at various points within the sequence of task performance. An example of an information enhancement technique that simulation can investigate is the use of real-time moving graphics displays designed to help operators maintain their orientation while performing under potentially visually disorienting conditions. Other screen-viewing techniques should also be investigated with the use of simulation in an attempt to avoid operator disorientation during manipulation tasks.

CONTROL DEVICES

Control devices will be used to control such things as telerobot activation, position, manipulators, end effectors, rate of movement, and the viewing system. Control devices being considered include manipulator controllers such as miniature master controllers with direct position control, 3 or 6 degree-of-freedom hand controllers using rate or force inputs, exoskeleton controllers using various position sensors to detect human arm configurations, head-slaved controls, dedicated switches, programmable display pushbuttons, voice-commanded systems, and computer displays with cursor-control devices which allow menu selections. Control device selection is important because it affects operator performance, workload, and preference. Computer-simulated scenarios could be linked to actual controllers to determine their effects on operator performance across different manipulation tasks.

WORKSTATION DESIGN

The telerobot workstation consists of hardware elements, their interfaces, and the software that will allow the hardware to be used. The workstation is the point where information and control inputs are made available to the operator. Just as with the selection of control devices, the workstation should be logically and functionally laid out to optimize operator performance and preference while minimizing workload and error rates. Again, simulation can help to determine optimal workstation layouts. A simple means of simulating the workstation layout is through the use of computer prototyping, but it is recommended that large-scale simulation be used as a means of designing and evaluating the telerobot workstation.

CONCLUSIONS

Many issues remain unresolved concerning the components of the human-telerobot interface mentioned above. It is therefore critical that these components be optimally designed and arranged to ensure not only that the overall system's goals are met, but also that the intended end-user has been optimally accommodated. With sufficient testing and evaluation throughout the development cycle, the selection of the components to use in the final telerobotic system can promote efficient, error-free performance. It is recommended that whole-system simulation with full-scale mockups be used to help design the human-telerobot interface. It is contended that the use of simulation can facilitate this design and evaluation process. The use of simulation can also ensure that the hardware/software components have been selected to best accommodate the astronaut, instead of the astronaut having to make performance accommodations for the hardware/software components that have been selected.

As was mentioned above, there are other advantages to simulating the human-teleoperator interface than simply serving as an aid in the selection and design of hardware/software components so that operator performance is optimized. Systems developers can also use the simulation system to test whether or not hardware components meet overall system goals, and the simulation system can be used for subsequent training of the astronauts who will use the actual system.

ACKNOWLEDGEMENTS

Support for this investigation was provided by the National Aeronautics and Space Administration through Contract NAS9-17900 to Lockheed Engineering and Sciences Company.

REFERENCES

1. Arya, S. (1985). Defining various classes of simulators and simulation. In J. S. Gardenier (Ed.), Simulators (pp. 36-38). La Jolla, CA: Simulation Councils, Inc.

2. Chubb, G. P., Laughery, K. R., Jr., and Pritsker, A. A. B. (1987). Simulating manned systems. In G. Salvendy (Ed.), Handbook of Human Factors (pp. 1298-1327). New York, NY: John Wiley and Sons.

3. Flexman, R. E., and Stark, E. A. (1987). Training simulators. In G. Salvendy (Ed.), Handbook of Human Factors (pp. 1012-1038). New York, NY: John Wiley and Sons.

4. Gawron, V. J., and Polito, J. (1985). Human performance simulation: Combining the data. In J. S. Gardenier (Ed.), Simulators (pp. 61-65). La Jolla, CA: Simulation Councils, Inc.

5. Geer, C. W. (1981). Human Engineering Procedures Guide (AFAMRL-TR-81-35). Wright-Patterson Air Force Base, OH: Air Force Aerospace Medical Research Laboratory.

6. O'Hara, J. M. (1986). Telerobotic Work System: Space-Station Truss-Structure Assembly Using a Two-Arm Dextrous Manipulator (Grumman Space Systems Report No. SA-TWS-86-R007). Bethpage, NY: Grumman Space Systems.

7. Smith, K. U., and Smith, W. M. (1962). Perception and Motion: An Analysis of Space-Structured Behavior. Philadelphia, PA: W. B. Saunders Company.

8. Smith, R. L., and Stuart, M. A. (1988). Telerobotic vision systems: The human factor. In Proceedings of the Instrument Society of America (ISA) Conference (in press).


ERGONOMICS

An astronaut's applied force is measured to determine human capability to perform tasks in space. The CYBEX dynamometer, which is used to evaluate the skeletal muscle strength, power, and endurance of astronauts and astronaut candidates, is used in the Weightless Environment Training Facility (WETF) to simulate zero-gravity.


QUANTITATIVE ASSESSMENT OF HUMAN MOTION USING VIDEO MOTION ANALYSIS


John D. Probe

Lockheed Engineering and Sciences Company

Research directed by Michael Greenisen, Former Manager, Anthropometry and Biomechanics Lab, NASA JSC.

INTRODUCTION

In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis.

Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost-effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities.

VIDEO BASED MOTION ANALYSIS

The Ariel Performance Analysis System (APAS) is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

VIDEO SUBSYSTEM

The video system consists of commercial-quality 1/2-inch VHS format portable video cameras. They are used to record motion sequences for subsequent analysis. A minimum of two cameras is required for full three-dimensional analysis. A high-quality VCR and monitor are used for the display and digitizing of videotaped sequences. The playback unit accommodates standard VHS videotapes recorded from any standard video source to allow high-precision freeze-frame video imaging with accurate single-frame advance and reverse as well as a variable-speed search capability.

HARDWARE SUBSYSTEM

An AT compatible computer is the primary component of the analysis system. The computer uses a combination of a frame grabber and a VCR controller board to digitize from the playback unit. The APAS captures video images from videotape and imports them into the computer memory. The operator can then digitize the desired sequence by positioning a cross-hair cursor over the joint center of interest and recording the coordinates of this point by pressing a button on the mouse. Digitization of the joints on the first frame is performed completely by the operator. For subsequent frames, the point locations from previous frames are used to predict the positions of each point on the current frame. This significantly reduces the time required to digitize a sequence. Additionally, since the computer stores a digital image, the analysis frame can be contrasted, enhanced, or filtered for clarification.
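The prediction step can be sketched with a simple constant-velocity extrapolation from the two most recent frames; the actual APAS predictor is proprietary, so the rule below is only an assumed stand-in:

```python
def predict_next(prev_points):
    """Predict a joint location for the next frame from earlier frames.

    prev_points: list of (x, y) joint positions, most recent last.
    With one frame the only guess is to repeat it; with two or more,
    extrapolate assuming constant velocity between frames.  (The real
    APAS predictor is proprietary; this is an illustrative stand-in.)
    """
    if len(prev_points) == 1:
        return prev_points[-1]
    (x1, y1), (x2, y2) = prev_points[-2], prev_points[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

# The operator digitizes the first frames by hand; later frames start
# from a guess, so the cursor only needs a small correction.
wrist_track = [(100.0, 200.0), (104.0, 198.0)]
guess = predict_next(wrist_track)   # cursor pre-positioned at (108.0, 196.0)
```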

SOFTWARE SUBSYSTEM

An extensive integrated software package makes up the third component of the APAS. For ease of operation, the software has been highly structured and modularized. Each module is designed to perform a particular function and is completely menu driven. A brief functional description of each module is listed below.

Performance analysis always begins with the digitizing module. This module allows video images to be converted to body joint location coordinates in the computer. These digitized locations are saved for subsequent conversion to true image space location.

The transformation module converts digitized video data into true two- or three-dimensional image data using an algorithm called direct linear transformation (1). If a single camera view is used, the resulting image is two-dimensional. If two or more camera views have been used, the resulting image is three-dimensional.
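The direct linear transformation reduces to a small least-squares problem: each calibrated camera contributes two linear equations in the unknown (x, y, z) per digitized image point. A minimal sketch, with made-up camera parameters rather than real calibration values:

```python
import numpy as np

def dlt_project(L, xyz):
    """Project a 3-D point through an 11-parameter DLT camera model."""
    x, y, z = xyz
    d = L[8] * x + L[9] * y + L[10] * z + 1.0
    u = (L[0] * x + L[1] * y + L[2] * z + L[3]) / d
    v = (L[4] * x + L[5] * y + L[6] * z + L[7]) / d
    return u, v

def dlt_reconstruct(cameras, views):
    """Least-squares 3-D reconstruction from two or more camera views.

    Rearranging u = (L1 x + L2 y + L3 z + L4)/(L9 x + L10 y + L11 z + 1)
    gives one linear equation per image coordinate; with two cameras the
    resulting 4x3 system is solved by least squares.
    """
    A, b = [], []
    for L, (u, v) in zip(cameras, views):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        b.append(u - L[3])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.append(v - L[7])
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz

# Two made-up calibrated cameras and a synthetic joint-center position.
cam1 = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 0.01]
cam2 = [0, 0, 1, 0,  0, 1, 0, 0,  0.01, 0, 0]
joint = (0.5, 1.2, 2.0)
views = [dlt_project(cam1, joint), dlt_project(cam2, joint)]
recovered = dlt_reconstruct([cam1, cam2], views)
```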

The smoothing module removes small random digitizing errors from the computed image coordinates. At the same time, it computes body joint velocities and accelerations from the smoothed joint coordinates. The operator may choose any of three different smoothing functions: cubic spline, digital filter, and polynomial smoothing. The APAS utilizes, as the smoothing method of choice, a modified cubic spline smoothing algorithm formulated by Reinsch (2). In addition, the user may control the amount of smoothing applied to each joint to insure that smoothing does not distort the digitized data.
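The smooth-then-differentiate pipeline can be sketched as follows. A moving average stands in for the Reinsch cubic spline (which minimizes curvature subject to a fit tolerance), so this shows only the order of operations, not the actual smoother:

```python
import numpy as np

def smooth_and_differentiate(t, y, window=5):
    """Smooth digitized joint coordinates, then differentiate numerically.

    APAS uses a Reinsch-type cubic smoothing spline; a moving average is
    substituted here purely to illustrate the pipeline: smooth first,
    then compute velocity and acceleration from the smoothed samples.
    """
    kernel = np.ones(window) / window
    # mode="same" keeps the array length; convolution implicitly
    # zero-pads, so the first/last few samples would need separate
    # handling in a real implementation.
    y_smooth = np.convolve(y, kernel, mode="same")
    velocity = np.gradient(y_smooth, t)       # first derivative
    acceleration = np.gradient(velocity, t)   # second derivative
    return y_smooth, velocity, acceleration

t = np.linspace(0.0, 1.0, 101)   # one second at ~100 Hz
y = 2.0 * t                      # a joint coordinate moving at 2 units/s
ys, v, a = smooth_and_differentiate(t, y)
```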

The viewing module is used to examine image data in "stick figure" format. Viewing options include single frame, multiple frame, and animated images. Three-dimensional images may be rotated to allow viewing from any chosen direction.

The graphing module is used to draw graphs of image motion. Displacement, velocity, and acceleration curves may be graphed for any number of individual body joints or segments. Joint motion may be presented in either linear coordinates or angular coordinates, while segment motion is presented in angular coordinates about a single segment endpoint. The data may be displayed on cartesian graphs or as full figure models.

Printed reports of the image motion data are produced by the print module. Data can be saved for future printing. Additionally, reports may be transferred to other systems such as spreadsheets or database programs.

The analog module includes a hardware interface and an analog sampling unit with program-selectable gain for collection of up to 16 channels of analog input. Specialized features support the use of force plates and electromyography (EMG) measurement and analysis. Included are options for spike analysis, envelope processing, signal integration analysis, waveform analysis, and spectral analysis.

STUDIES IN PROGRESS

The Anthropometric and Biomechanics Lab (ABL) is currently involved in ongoing studies to enhance astronaut performance in a space environment. Depending on the study, all or part of the APAS may be used for data collection and analysis.

Initial investigations are in progress utilizing motion analysis, EMG, and an instrumented treadmill to measure and compare the shirtsleeved one-gravity relations between velocity, angle of inclination, skeletal muscle contraction patterns, and impact loading of the skeletal system to identical conditions in a zero-gravity environment.

An unrelated but similar investigation is in progress to determine the lower torso mobility of one of the Space Station prototype space suits in one gravity, in addition to simulated lunar and martian gravity, 1/6 and 1/3 Earth gravity respectively. The treadmill and EMG are also being incorporated into this study. The ABL is also investigating the possibility of using the APAS to determine reach envelopes of astronauts as a function of varying gravitational loads while wearing the Launch/Entry Suit (LES).

CONCLUSION

The video based motion analysis system being used by the ABL has proved to be a viable means for collecting and analyzing human motion. A great strength of video based systems is their flexibility. The system is used in the one-gravity lab environment, in neutral buoyancy at JSC's Weightless Environment Training Facility (WETF), and in zero gravity on board NASA's KC-135. The systems are versatile and allow the operator to analyze virtually any motion that can be sequentially imaged.

REFERENCES

1. Shapiro, R. "Direct Linear Transformation Method For Three-Dimensional Cinematography." Research Quarterly, 49:197-205, 1978.

2. Reinsch, C. H. "Smoothing By Spline Functions." Numerische Mathematik, 10:177-184, 1967.


REACH PERFORMANCE WHILE WEARING THE SPACE SHUTTLE LAUNCH AND ENTRY SUIT DURING EXPOSURE TO LAUNCH ACCELERATIONS

James P. Bagian and Michael C. Greenisen
NASA Lyndon B. Johnson Space Center

Lauren E. Schafer and John D. Probe

Lockheed Engineering and Sciences Co.

Robert W. Krutz, Jr.
Krug International

INTRODUCTION

Crewmen aboard the Space Shuttle are subjected to accelerations during ascent (the powered flight phase of launch) which range up to +3Gx. Despite having 33 missions and nine years experience, not to mention all the time spent in development prior to the first flight, no truly quantitative reach study wearing actual crew equipment, using actual Shuttle seats and restraints, has ever been done. What little information exists on reach performance while under acceleration has been derived primarily from subjective comments gathered retrospectively from Shuttle flight crews during their post-mission debrief. This lack of reach performance data has resulted in uncertainty regarding emergency procedures that can realistically be performed during an actual Shuttle ascent versus what is practiced in the ground-fixed and motion-based Shuttle Simulators.

With the introduction on STS-26 of the current Shuttle escape system, the question of reach performance under launch accelerations was once again raised. The escape system's requirement that each crewman wear a Launch/Entry Suit (LES), parachute harness, and parachute was anticipated to contribute to a further degradation of reach performance during Shuttle ascent accelerations. In order to answer the reach performance question in a quantitative way, a photogrammetric method was chosen so that the actual reach values and associated envelopes could be captured. This would allow quantitative assessment of potential task performance impact and identify areas where changes to our Shuttle ascent emergency procedures might be required. Also, such a set of reach values would be valid for any similar acceleration profile using the same crew equipment. Potential Space Station applications of this data include predicting reach performance during Assured Crew Return Vehicle (ACRV) operations.

METHOD

Four astronaut/pilot volunteers were used as test subjects for the reach evaluations at both 1 and 3Gx. All were veterans of one or more previous Shuttle flights and had used the crew equipment configuration under consideration numerous times before, including on an actual Shuttle mission.

The LES was designed to function as a combination dry-type anti-exposure suit and a partial pressure, high-altitude protection suit. Each subject wore the LES over a set of expedition-weight Capilene® underwear. A specially designed torso harness was worn over the LES and connected by quick-release fasteners to a personal parachute. This parachute was worn on the crewman's back and also functioned as a seat-back cushion.

Each subject was tested during two runs on the centrifuge at Brooks Air Force Base. One run was done at 1Gx (lying on his back while strapped in the seat), and the other was performed at the 3Gx level.

The reach sweeps performed by each subject were captured by four video cameras. One camera was secured in each corner of the centrifuge gondola and oriented for an optimum view of the subject. The four views of each recorded motion were subsequently digitized and analyzed using the Ariel Performance Analysis System, developed by Ariel Dynamics, Inc.

STATISTICAL METHODS

The data obtained from the motion analysis of left and right reach sweeps was normalized and prepared for statistical analysis. The cartesian coordinates of the left and right shoulder were noted while the subject was at rest during the 1Gx loading condition. These coordinates were then used as the origin for reach measurements during both the 1Gx and 3Gx sweeps. In this way, reach was normalized for each subject.

Reach was defined as the distance, in centimeters, between the shoulder and the knuckles for each coordinate. Maximum reach capability was compared in the forward, lateral, and overhead (x, y, and z respectively) directions during 1 and 3Gx loading conditions. (Note: The measurement of lateral reach did not reflect a true maximum since all of the subjects were, at less than their full reach, able to touch the sidewalls of the gondola during the 1 and 3Gx exposures. Therefore, the y and Δy values were not considered for evaluation.) Changes in reach between the two Gx levels (Δx and Δz) were calculated by subtracting the 3Gx reach data from the 1Gx values. Changes in reach between the two 3Gx arm sweeps (Δx and Δz) were calculated by subtracting the right arm reach data from the left arm values.

Paired-t tests were used to statistically analyze reach differences in the x and z directions. This analysis was conducted on three comparisons: the left reach sweeps at 1 versus 3Gx, the right reach sweeps at 1 versus 3Gx, and the left versus right reach sweeps at 3Gx. Because of the small population size (n=4), the use of the paired-t test is limited. For this reason, percent differences were also calculated for these same comparisons.
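The paired-t and percent-difference arithmetic can be reproduced directly from the four left-sweep forward-reach values in Table 1; the reported mean difference of 3.3 +/- 5.0 cm falls out of the same calculation. A sketch:

```python
import math

# Left-sweep forward reach (X dir, cm) for the four subjects, Table 1.
reach_1g = [28.16, 38.29, 49.66, 49.59]
reach_3g = [25.55, 40.33, 39.55, 46.97]

# Paired differences, 1Gx minus 3Gx (the paper's delta-x convention).
d = [a - b for a, b in zip(reach_1g, reach_3g)]
n = len(d)
mean_d = sum(d) / n                                   # ~3.3 cm
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))   # ~5.0 cm
t_stat = mean_d / (sd_d / math.sqrt(n))   # paired-t statistic, df = n - 1

# Percent difference with the 1Gx value as control, e.g. subject 3:
pct = (reach_3g[2] - reach_1g[2]) / reach_1g[2] * 100   # ~ -20.4 %
```

With df = 3 the critical t value is large, consistent with the study's finding of no statistical significance despite sizable individual differences.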

RESULTS

The results for 1 and 3Gx left reach sweeps are shown in Table 1. No statistically significant differences (p < 0.05) existed between the 1 and 3Gx left reach sweeps. The difference in average forward reach (Δx) for this study population was 3.3 +/- 5.0 cm. This value indicates that a greater left forward reach was achieved during the 1Gx loading condition. The difference in average overhead reach (Δz) was 3.9 +/- 3.4 cm. However, in this case, greater left overhead reach capability occurred during the 3Gx exposure.

TABLE 1. LES Left Sweep 1Gx versus 3Gx

Sub-  Dominant      1Gx              3Gx
ject  Hand       Xdir    Zdir     Xdir    Zdir
1     Right      28.16   55.30    25.55   55.30
2     Right      38.29   53.30    40.33   60.38
3     Right      49.66   54.18    39.55   60.42
4     Left       49.59   50.88    46.97   53.03

Sub-  Dominant
ject  Hand       ΔXdir   ΔZdir    %ΔXdir  %ΔZdir
1     Right       2.61    0        -9.27    0
2     Right      -2.04   -7.08      5.33   13.30
3     Right      10.11   -6.24    -20.36   11.52
4     Left        2.62   -2.15     -5.28    4.23

Percent differences were calculated using 1Gx data as the control variable (% difference = [(experimental - control)/control] x 100). While percent differences in reach for the entire population did not exceed 10%, significant (> 10%) individual differences between 1 and 3Gx left reach capability did exist. Specifically, subject 2 demonstrated a 13.3% greater left overhead reach at 3Gx than at 1Gx.

Similarly, subject 3 displayed an 11.5% greater left overhead reach capability at 3Gx. This same participant showed a 20.4% greater left forward reach at 1Gx than at 3Gx.

No statistically significant differences were found to exist between the 1 and 3Gx right reach sweeps (Table 2). The Δx for the study group was 6.3 +/- 5.6 cm. That is, a greater forward reach occurred at 1Gx than at 3Gx. The Δz was 6.2 +/- 7.6 cm. However, overhead reach capability was greater during the 3Gx loading conditions.

TABLE 2. LES Right Sweep 1Gx versus 3Gx

Sub-  Dominant      1Gx              3Gx
ject  Hand       Xdir    Zdir     Xdir    Zdir
1     Right      42.98   56.40    28.58   57.23
2     Right      41.74   56.21    38.22   71.62
3     Right      49.10   66.11    43.56   75.32
4     Left       43.17   73.71    41.43   72.95

Sub-  Dominant
ject  Hand       ΔXdir   ΔZdir    %ΔXdir  %ΔZdir
1     Right      14.40    -.83    -33.50    1.47
2     Right       3.52  -15.41     -8.43   27.42
3     Right       5.54   -9.21    -11.28   13.93
4     Left        1.74     .76     -4.03   -1.03

Once again, percent differences were calculated using 1Gx data as the control variable. There was a significant percent difference in right forward reach for the entire population. This calculation indicated that forward reach was 14.2% greater at 1Gx than at 3Gx for the entire group. Significant individual percent differences in right reach also occurred. Subject 1 demonstrated a 33.5% greater right forward reach at 1Gx than at 3Gx. Subject 2 displayed a 27.4% greater right overhead reach during the 3Gx exposure. Similarly, subject 3 showed a 13.9% greater right overhead reach at the 3Gx level. However, this same astronaut exhibited an 11.3% greater forward reach during 1Gx loading conditions.

Comparison of reach at 3Gx in the LES revealed that a statistically significant difference (p = .037) did exist between left and right sweeps under 3Gx loading conditions (Table 3). This difference indicated that a greater right overhead reach was obtained in the LES suit. This was true for both right and left hand dominant subjects.

TABLE 3. At 3Gx in LES Left versus Right Sweep

Sub-  Dominant
ject  Hand       LX      LZ       RX      RZ
1     Right      25.55   55.30    28.58   57.23
2     Right      40.33   60.38    38.22   71.62
3     Right      39.55   60.42    43.56   75.32
4     Left       46.97   53.03    41.43   72.95

Sub-  Dominant
ject  Hand       LX-RX   LZ-RZ    %ΔX     %ΔZ
1     Right      -3.03   -1.93    -10.60   -3.37
2     Right       2.11  -11.24      5.52  -15.69
3     Right      -4.01  -14.90     -9.21  -19.78
4     Left        5.54  -19.92    -11.79   37.56

Percent differences were calculated using dominant hand data as the control variable (% difference = [(nondominant - dominant)/dominant] x 100). There was a significant percent difference (17.3%) between left and right overhead reach for the entire population. This value indicates that, under 3Gx loading conditions, the right overhead reach was greater than the left. No other significant percent differences in mean population reach occurred. However,


significant individual differences did exist. Subject 1 showed a 10.6% greater right than left forward reach. Subject 2 demonstrated a 15.7% greater right overhead reach. Similarly, subject 3 exhibited a 19.8% greater right overhead reach. Subject 4, the only left-handed person in this group, displayed an 11.8% greater left forward reach. This participant also demonstrated a 37.6% greater right overhead reach.

SUMMARY

Since all subjects had significant previous experience using the equipment under evaluation, it is unlikely that any training effect is responsible for the results which were obtained.

The changes in reach in the +x (forward) direction were qualitatively what had been anticipated based on anecdotal reports received during Space Shuttle mission debriefings. Three of four subjects during left arm motion and four of four subjects during right arm motion experienced reduced reach capability in the +x direction at 3Gx versus 1Gx. The magnitude of this change was not as great as was expected, in all cases ranging from an improvement of 2.04 cm to a 10.11 cm decrease on the left, to a 14.4 cm decrease on the right. While these differences between right and left are striking, they are not statistically significant.

It was unexpected that any reach envelopes at 3Gx would have been greater than those observed at 1Gx. However, this was definitely the case in the +z (overhead) direction for three of four subjects during both left and right arm motion. The absolute range of reach difference in the +z (overhead) direction ranged from 0 to 7.08 cm on the left and -.76 to 15.41 cm on the right. These represented increases of 13.3% and 27.4% for the left and right respectively. Operationally this would seem to indicate that any task which can be accomplished during 1Gx in the simulator should be achievable during actual flight.

Interestingly, there was a statistically significant difference (p = .037) between the left and right overhead reach, with the right being greater. This unanticipated finding, which was unrelated to the subject's handedness, raises several points for consideration. Since the LES is symmetrically constructed, it is unlikely that it was, by itself, responsible for the asymmetry observed. The torso harness which is worn over the LES is not symmetrical (which is also the case with the parachute). It is felt that further analysis of the asymmetry of the equipment may identify a course of action which will improve the left overhead reach to the point where it is equivalent to the right.

CONCLUSIONS

These data indicate that ground-based simulator training is adequate as far as verifying the feasibility of overhead activities is concerned. The same is not true of activities involving forward reach. Accordingly, to make training realistic, crewmen should be instructed that tasks involving forward reach should not be attempted during simulator runs if they exceed 66-80% of the maximum 1Gx forward reach capability of the crewman.

Also, more generically, this study has demonstrated the utility of using photogrammetric techniques to quantify magnitudes of reach in any direction. Further, since this data is handled and ultimately stored digitally, it is fully "portable" and can thus be used to predict reach performance in any environment where the subject is exposed to similar accelerative loads.

In future work, we will merge our reach information with a graphics data base describing the Space Shuttle cockpit panels. This will allow us to find the intersection of these two data bases and represent actual panel positions reachable by a specific subject.


DEVELOPMENT OF BIOMECHANICAL MODELS FOR HUMAN FACTORS EVALUATIONS

Barbara Woolford

NASA Lyndon B. Johnson Space Center

Abhilash Pandya and James Maida
Lockheed Engineering and Sciences Company

COMPUTER MODELING OF HUMAN MOTION

Computer aided design (CAD) techniques are now well established and have become the norm in many aspects of aerospace engineering. They enable analytical studies, such as finite element analysis, to be performed to measure performance characteristics of the aircraft or spacecraft long before a physical model is built. However, because of the complexity of human performance, CAD systems for human factors are not in widespread use. The purpose of such a program would be to analyze the performance capability of a crew member given a particular environment and task. This requires the design capabilities to describe the environment's geometry and to describe the task's requirements, which may involve motion and strength. This in turn requires extensive data on human physical performance which can be generalized to many different physical configurations. PLAID is developing into such a program. Begun at Johnson Space Center in 1977, it was started to model only the geometry of the environment. The physical appearance of a human body was generated, and the tool took on a new meaning as fit, access, and reach could be checked. Specification of fields-of-view soon followed.

This allowed PLAID to be used to predict what the Space Shuttle cameras or crew could see from a given point. An illustration of this use is shown in Figures 1a and 1b. Figure 1a was developed well before the mission, to show the planners where the EVA astronaut would stand while restraining a satellite manually, and what the IVA crewmember would be able to see from the window. Figure 1b is the view actually captured by the camera from the window. However, at this stage positioning of the human body was a slow, difficult process as each joint angle had to be specified in degrees.

REACH

The next step in enhancing PLAID's usefulness was to develop a way of positioning bodies by computer simulation, rather than by the engineer's inputs of joint angles. The University of Pennsylvania was contracted to perform this work. Korein (1985) developed an inverse kinematic solution for multijointed bodies. This enabled the engineer to position one "root" of the body (feet in foot restraint, or waist or hips fixed) in a specified location, and then specify what object or point in the workspace was to be touched by other parts of the body (such as place the right hand on a hand controller, and the left on a specific switch). The algorithm then attempted to find a position which would allow this configuration to be achieved. If it was impossible to achieve, due to shortness of arms or position of feet, a message would be presented giving the miss distance. This feedback enabled the engineer to draw conclusions about the suitability of the proposed body position and workspace.

Figure 1a. PLAID rendition of crewmember restraining payload, from premission studies.

Figure 1b. Photo taken during mission from aft crew station.

While this reach algorithm is extremely useful for body position, it does not enable an analyst to check an entire workspace for accessibility without specifying a large number of "reach to" points. This need has been recently met by a kinematic reach algorithm. The user specifies which joints to exercise. The algorithm then accesses an anthropometry data base giving joint angle limits, positions the proximal joint at its extreme limit, and steps the distal joint through its range of motion in a number of small steps, generating a contour. The proximal joint is moved an increment, and the distal joint swung through its range of motion again. This process continues until the proximal joint reaches its other extreme limit. A three-dimensional set of colored contours is thus generated which can be compared to the workstation, and conclusions can be drawn.

An example of this is shown in Figures 2a and 2b. In Figure 2a, a fifth percentile female is placed at the proposed foot restraint position intended to provide an eyepoint 20" from the workstation. In this position, her reach envelope falls short of the workstation. Figure 2b shows the same body and reach envelope positioned with a 16" eyepoint, in which case the woman can reach the workstation.
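The nested sweep can be sketched for a planar two-segment arm; the segment lengths and joint limits below are made-up values, not anthropometry-data-base entries:

```python
import math

def reach_contour(upper, fore, shoulder_limits, elbow_limits, step_deg=10):
    """Sweep a two-joint planar arm through its joint limits.

    The PLAID algorithm steps a distal joint through its full range at
    each increment of the proximal joint; this planar two-segment arm
    generates the fingertip contour points the same way.
    """
    points = []
    s_lo, s_hi = shoulder_limits
    e_lo, e_hi = elbow_limits
    for s in range(s_lo, s_hi + 1, step_deg):        # proximal joint
        for e in range(e_lo, e_hi + 1, step_deg):    # distal joint
            a1 = math.radians(s)
            a2 = math.radians(s + e)                 # elbow angle is relative
            x = upper * math.cos(a1) + fore * math.cos(a2)
            y = upper * math.sin(a1) + fore * math.sin(a2)
            points.append((x, y))
    return points

# Invented dimensions: 30 cm upper arm, 25 cm forearm.
contour = reach_contour(30.0, 25.0, (0, 90), (0, 140), step_deg=10)
```

Comparing such a point set against workstation geometry is what lets the analyst decide whether a panel is inside or outside the envelope.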

ANIMATION

Human performance is not static. To do useful work, the crewmembers must move their hands at least, and frequently their bodies, their tools, and their equipment. While this can be captured in a sequence of static pictures, animations are much preferred because they show all the intermediate points between the static views. Originally, PLAID animations were created by having the analyst enter every single step individually. This was highly labor intensive, and prohibitive in cost for any but the most essential conditions. However, an animation capability was created that allowed the user to input only "key frames." (A key frame is one where the velocity or direction of motion changes.) The software then smoothly interpolates 20 or 30 intermediate frames, showing the continuous movement. This has many applications for both the Shuttle program and for the Space Station Freedom (SSF) program. For example, in determining where interior handholds were needed, an animation was created showing the process of moving an experiment rack from the logistics module to the laboratory module. Clearances, collisions, and points of change could be identified from the videotape. However, while the tape showed the locations for the handholds, it could not give information as to the loads the handholds would have to bear. Thus a project to model strength was begun.
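The key-frame in-betweening can be illustrated with straight linear interpolation of joint angles; the report says only that the interpolation is smooth, so linear blending is an assumed simplification:

```python
def interpolate_keyframes(key_a, key_b, n_between=20):
    """Linearly interpolate joint angles between two key frames.

    Key frames are dicts of joint-angle values in degrees.  Returns the
    n_between intermediate frames, excluding the key frames themselves.
    """
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)   # fraction of the way from key_a to key_b
        frames.append({joint: key_a[joint] + t * (key_b[joint] - key_a[joint])
                       for joint in key_a})
    return frames

# Hypothetical key frames for one step of a rack-translation animation.
start = {"shoulder": 0.0, "elbow": 10.0}
end = {"shoulder": 45.0, "elbow": 73.0}
tween = interpolate_keyframes(start, end, n_between=20)
```

A production in-betweener would typically use a spline so velocity is continuous across key frames; the principle of generating the 20 or 30 intermediate scenes automatically is the same.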

Thus a project to model strength was begun.

Figure 2a. Fifth percentile female positioned at workspace with 20" eyepoint. Reach contours miss the workstation.

Figure 2b. Fifth percentile female positioned at workspace with 16" eyepoint. Reach contours touch workstation.

BIOMECHANICS MODELING

UPPER TORSO STRENGTH

Using a Loredan, Inc. LIDO dynamometer, single joint strength data was collected for the shoulder, elbow, and wrist of one individual. The data was collected in the form of (velocity, position, and strength) triplets. That is, the dynamometer was set to a selected speed, ranging from 30 deg/sec to 240 deg/sec in 30 deg/sec increments. For that speed, the subject moved his joint through its entire range of motion for the specified axis (abduction/adduction and flexion/extension). Data was collected every five degrees and a polynomial regression equation fit to the data for that velocity. The velocity was changed, and the procedure repeated. This resulted in a set of equations, giving torque in foot-pounds as a function of velocity and joint angle, for each joint rotation direction. Figure 3 shows shoulder flexion torque over a range of angles, parameterized by velocity. Figure 4 shows the data points and the equation fit for elbow flexion/extension over the range of motion at 90 deg/sec.

These regression equations were stored intables in PLAID. To predict total strength

exerted in a given position or during a givenmotion, the body configuration for the desiredposition (or sequence of positions) iscalculated from the inverse kinematics

algorithm. For example, the task used so farin testing is ratchet wrench push/pull. Thistask is assumed to keep the body fixed, andallow movement only of the arm. (As morestrength data is obtained, the tasks can be mademore complex.) A starting position for thewrench is established, and the position of thebody is set. The angles of the arm jointsneeded to reach the wrench handle are then

calculated. A speed of motion, indicative ofthe resistance of the bolt, is specified. Thetables are searched, and the strength for eachjoint for the given velocity at the calculatedangle is retrieved. The direction of the forcevector is calculated from the cross products ofthe segments, giving a normal to the axis ofrotation in the plane of rotation.

Once all these force vectors are obtained, theyare summed vectorially to calculate the

128

Page 139: Crew Interface Analysis: Selected Articles on Space Human ... · Crew Interface Analysis: Selected Articles on Space Human Factors Research, ... Bagian, Greenisen, Schafer, ... A

60 30 I

50-

40-

-J 30

2O

10

30 FLEX

60 FLEX

_ 90 FLEX

120 FLEX

150 FLEX

180 FLEX

i

10oANGLE

200

Figure 3. Shoulder flexion torque for velocitiesranging from 30°/see to 240°/see.

lid-.I

i:I.K

2O

10

olo

y = 8.7473 + 0.39266x - 2.9533e-3x^2 R^2 = 0.83?

D

= 12.440 - 2.0934e-2x + 4.1209e-4x"2 R^2 = 0.940

I

1O0 200

ANGLE

Figure 4. Raw data and regression equations forelbow flexion and extension at 90*/see.

resultant end effector force. Currently the program displays the force for each joint and the resultant end effector force, as illustrated in Figure 5. The ratchet wrench model rotates accordingly for an angular increment. This requires a new configuration of the body, and the calculation is repeated for this new position. A continuous contour line may be generated which shows the end effector force over the entire range of motion by color coding. The model will be validated this summer. A ratchet wrench attachment for a dynamometer has been obtained, and an Ariel Motion Digitizing System will be used to measure the actual joint angles at each point in the pushing and pulling of the wrench. This will provide checks on both the validity of the positioning algorithms and of the force calculations. When this simple model is validated, more complex motions will be investigated.
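The fitted second-order curves printed with Figure 4 can be evaluated directly. A minimal sketch follows; the scan does not label which coefficient set is flexion and which is extension, so the function names here are placeholders:

```python
# Second-order torque-angle fits read off Figure 4 (elbow, 90 deg/sec).
# curve_a / curve_b are placeholder names; the figure does not make clear
# which fit is flexion and which is extension.
def curve_a(x):
    """Torque at joint angle x (degrees); reported R^2 = 0.83?."""
    return 8.7473 + 0.39266 * x - 2.9533e-3 * x ** 2

def curve_b(x):
    """Torque at joint angle x (degrees); reported R^2 = 0.940."""
    return 12.440 - 2.0934e-2 * x + 4.1209e-4 * x ** 2

# curve_a is concave down, so peak torque occurs where its derivative
# vanishes: a1 + 2*a2*x = 0.
peak_angle = 0.39266 / (2 * 2.9533e-3)   # roughly 66 degrees
peak_torque = curve_a(peak_angle)
```

This is how a single table entry would be evaluated once a joint angle has been calculated by the inverse kinematics step.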

The significance of this model is that it will permit strengths to be calculated from basic data (single joint rotations) rather than requiring that data be collected for each particular motion, as is done in Crew Chief (Easterly, 1989). A synthesis of the reach envelope generating algorithm and the force calculations has been achieved. The analyst can now generate reach contours which are color coded to show the amount of force available at any point within the reach envelope.
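The table lookup and vector summation described above can be sketched as follows. The table contents, joint names, and segment vectors are illustrative assumptions for the sketch, not PLAID's actual data structures:

```python
import math

# Hypothetical strength table: (joint, velocity in deg/sec) -> quadratic
# torque-angle coefficients (a0, a1, a2), in the spirit of the regression
# fits described in the text. Values are illustrative only.
STRENGTH_TABLE = {
    ("elbow", 90): (8.7473, 0.39266, -2.9533e-3),
    ("shoulder", 90): (12.440, -2.0934e-2, 4.1209e-4),
}

def joint_strength(joint, velocity, angle_deg):
    """Search the table and evaluate the fit at the calculated angle."""
    a0, a1, a2 = STRENGTH_TABLE[(joint, velocity)]
    return a0 + a1 * angle_deg + a2 * angle_deg ** 2

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def force_direction(proximal_seg, distal_seg):
    """Unit normal to the rotation axis, lying in the plane of rotation,
    built from cross products of the adjacent body segments."""
    axis = cross(proximal_seg, distal_seg)   # axis of rotation
    direction = cross(axis, distal_seg)      # in-plane normal
    norm = math.sqrt(sum(c * c for c in direction))
    return tuple(c / norm for c in direction)

def resultant_force(joint_states, velocity):
    """Sum the per-joint force vectors to get the end-effector resultant."""
    total = [0.0, 0.0, 0.0]
    for joint, angle, prox, dist in joint_states:
        d = force_direction(prox, dist)
        s = joint_strength(joint, velocity, angle)
        for i in range(3):
            total[i] += s * d[i]
    return tuple(total)
```

Repeating this for each angular increment of the wrench, with the body reconfigured each time, yields the force contour over the range of motion.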

EFFECTS OF GRAVITY-LOADING ON VISION

Human vision is another important parameter being investigated in conjunction with human reach and strength. Empirical data relating maximum vision envelopes to gravity loading have been collected on several subjects by L. Schafer and E. Saenz. These data will be tabularized in a computer-readable form for use in man-modeling. Preliminary software design has begun on a vision model which will utilize these vision data to simulate a period of Space Shuttle launch where gravity loading is a major factor. This model will be able to dynamically display the vision cone of a particular individual as a function of gravity force and project that cone onto a workstation to determine if all the appropriate gauges and displays can be seen.
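A vision-cone check of this kind could be sketched as below. The linear shrinkage of the half-angle with g-load is a made-up placeholder, not the Schafer/Saenz data, which the real model would interpolate from its tables:

```python
import math

# Hypothetical shrinkage of the vision-cone half-angle with gravity load.
# Placeholder function only; the real model would look up empirical data.
def vision_half_angle_deg(g_load):
    return max(10.0, 60.0 - 8.0 * (g_load - 1.0))

def display_visible(eye, gaze_dir, display_pos, g_load):
    """True if a display point lies inside the vision cone at this g-load."""
    to_disp = tuple(d - e for d, e in zip(display_pos, eye))
    dot = sum(a * b for a, b in zip(to_disp, gaze_dir))
    norms = (math.sqrt(sum(c * c for c in to_disp))
             * math.sqrt(sum(c * c for c in gaze_dir)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
    return angle <= vision_half_angle_deg(g_load)
```

Sweeping the g-load over a launch profile and testing each gauge position against the cone would flag any displays that drop out of view.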


Figure 5. Body model exerting force on ratchet wrench. Joint forces and effective force at wrench are displayed as bar graphs beneath the picture.

APPLICATIONS

The biomechanical models, combined with geometric and dynamic modeling of the environment, have two major applications. The first is in equipment design. Frequently the strength or force of a crewmember is a key parameter in design specifications. For example, a manually operated trash compactor has recently been built for the Shuttle for extended duration (10-14 days) operations. This is operated by a crewmember exerting force on the handle to squeeze the trash, and is seen as an exercise device as well as a trash compactor. The two key specifications needed were (1) how much force can a relatively weak crewmember exert, so the right amount of mechanical amplification can be built in, and (2) how much force could a very strong crewmember exert, so the machine could be built to withstand those forces. When the biomechanical model is completed, questions such as these can be answered during the design phase with a simulation rather than requiring extensive testing in the laboratory. In addition, the size of the equipment can be compared visually to the available storage space, and the location of foot restraints relative to the equipment can be determined. Other equipment design applications include determining the specifications for exercise equipment, determining the available strength for opening or closing a hatch or door, and determining the rate at which a given mass could be moved.

The second application for a strength model is in mission planning. Particularly during extravehicular activities (EVA), crewmembers need to handle large masses such as satellites or structural elements. A complete dynamics model would enable the mission planners to view the scenes as they would be during actual operations by simulating the forces which can be exerted and the resulting accelerations of the large mass.
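The mission-planning use can be illustrated with a back-of-envelope Newtonian check; the force and mass figures below are illustrative, not mission data:

```python
# Newton's second law: a sustained crew force F on a free-floating mass m
# produces acceleration a = F/m. Numbers here are illustrative only.
def acceleration(force_newtons, mass_kg):
    return force_newtons / mass_kg

# Example: a 100 N push on a 1000 kg satellite.
a = acceleration(100.0, 1000.0)   # 0.1 m/s^2
v = a * 5.0                       # speed after 5 s of sustained push: 0.5 m/s
```

A full dynamics model would replace the constant push with the posture-dependent force limits from the strength tables.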

FUTURE PLANS

Currently the only motion modeled is a rotational motion of a wrench using only the arm, not the entire body. One step in developing a useful model is to allow the software already available for animating motion to be used to define any motion and then permit calculation of the strength available taking the entire body into account. This is a major step to accomplish, because of the many degrees of freedom in the entire human body. In order to consider the entire body in strength analysis, empirical strength data must be collected. The Anthropometry and Biomechanics Lab at the Johnson Space Center is beginning work on this project. To date, shoulder and arm strength measurements have been collected on a number of subjects. This data must be made available through the program's data base so that 5th percentile, or median, or 95th percentile strengths can be examined. This will involve another layer of data in the data base. The strength measurements for the entire body, especially torso and legs, are needed. Collecting these strength data for the individual joints at a number of angular positions and angular


velocities will be an ongoing project for some time. However, efforts have been made to automate data entry and reduction, which will result in easier data collection. Finally, the most important step is to validate the strength data. An assembly for collecting forces and angles for a ratchet wrench operation is available, and will be used to validate the compound motion of the arm. Movement of the entire body will be validated after the original data is collected, equations fit, and predictions of strength made.
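The percentile layer proposed above for the strength data base could look like the sketch below; the per-subject torques are made-up illustrative values, not measured data:

```python
import statistics

# Hypothetical peak torques from nine subjects for one joint/angle/velocity
# cell of the data base (illustrative values only).
subject_torques = [18.2, 21.5, 24.1, 26.8, 27.3, 29.9, 31.4, 33.0, 36.2]

def percentile(data, p):
    """Percentile (0-100) with linear interpolation between sorted points."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

weak = percentile(subject_torques, 5)      # force a weak crewmember can exert
typical = statistics.median(subject_torques)
strong = percentile(subject_torques, 95)   # force hardware must withstand
```

This is exactly the 5th/median/95th split the trash-compactor specifications called for: design the mechanism around the weak end and the structure around the strong end.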

REFERENCES

1. Badler, N. I., Lee, P., Phillips, C., and Otani, E. M. "The JACK Interactive Human Model." In "Concurrent Engineering of Mechanical Systems, Vol. 1," E. J. Haug, ed. Proceedings of the First Annual Symposium on Mechanical System Design in a Concurrent Engineering Environment, University of Iowa: Oct. 24-25, 1989.

2. Easterly, J. "CREW CHIEF: A Model of a Maintenance Technician," AIAA/NASA Symposium on the Maintainability of Aerospace Systems, July 26-27, 1989: Anaheim, CA.

3. Korein, James U. "A Geometric Investigation of Reach," MIT Press, Cambridge, MA: 1985.


ESTABLISHING A RELATIONSHIP BETWEEN MAXIMUM TORQUE PRODUCTION OF ISOLATED JOINTS TO SIMULATE EVA RATCHET PUSH-PULL MANEUVER: A CASE STUDY

Abhilash Pandya and James Maida
Lockheed Engineering and Services Company

Scott Hasson

University of Texas Medical Branch

Michael Greenisen and Barbara Woolford

NASA Lyndon B. Johnson Space Center

INTRODUCTION

As manned exploration of space continues, analytical evaluation of human strength characteristics is critical. These extraterrestrial environments will spawn issues of human performance which will impact the designs of tools, work spaces, and space vehicles.

Computer modeling is an effective method of correlating human biomechanical and anthropometric data with models of space structures and human work spaces (Figure 1). The aim of this study is to provide biomechanical data from isolated joints to be utilized in a computer modeling system for calculating torque resulting from any upper extremity motions: in this study, the ratchet wrench push-pull operation (a typical extravehicular activity task).

Established here are mathematical relationships used to calculate maximum torque production of isolated upper extremity joints. These relationships are a function of joint angle and joint velocity.

METHOD

Maximum torque data were obtained on a single subject during isolated joint movements of the shoulder, elbow, and wrist at angular velocities of 30 to 240 deg/sec at 30 deg/sec increments on the Loredan Inc. LIDO system. Data collection software tracked and stored joint angle data, as well as torque and velocity data, simultaneously. The angle versus torque data were reduced using a least squares regression algorithm to generate polynomial equations relating the two variables, torque and joint angle, at various velocities.

These torque functions were then tabularized for utilization by the computer modeling system (Figure 2). The modeling system then correlated the functions with the appropriate joints in an anthropometrically correct human model. A ratchet wrench task was simulated, and the force vectors generated from these isolated joint equations were then summed to yield end-effector torque.

As a preliminary step in the model validation process, isotonic (constant load) maximum torque data were collected for the ratchet wrench push-pull operation. Plans to collect more controlled (restricted motions) isokinetic (constant velocity) ratchet wrench data to match model outputs are in progress.
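The least squares reduction to second-order polynomials can be sketched as follows. The sample points below are generated from a known quadratic as a sanity check, rather than taken from the LIDO data:

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    n = len(m)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least squares fit y = a0 + a1*x + a2*x^2 via the normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]        # moments of x
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    return gauss_solve(A, t)

# Recover a known torque-angle quadratic from noiseless samples.
xs = [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
ys = [8.7 + 0.39 * x - 3.0e-3 * x ** 2 for x in xs]
a0, a1, a2 = fit_quadratic(xs, ys)
```

With measured angle/torque pairs in place of the synthetic `ys`, the returned coefficients are exactly the kind of entry stored in the torque-function tables of Figure 2.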

RESULTS

Second order regression equations relating joint angle to end-effector torque of the shoulder, elbow, and wrist in all axes and directions at various velocities were established. The data indicated a relationship between the allowed velocity (i.e., decreased velocity was proportional to increased resistance) and the torque generated. As indicated in Figure 3, the maximum torque generated decreases as the velocity increases.

Figure 1.

Figure 2. Torque function regression coefficients.

Figure 3. Shoulder regression equation at all measured velocities. [Plot: shoulder extension torque versus joint angle, with curves for extension at 30, 60, 90, 120, 150, 180, 210, and 240 deg/sec.]

All the isolated joint relationships were coded into a flexible and interactive computer graphics model. This model allowed alteration of the initial position and joint angles of the human figure relative to the ratchet wrench. This flexibility allowed one to gauge the effects of body orientation on torque generated.

The calculation for torque generated was for the isokinetic ratchet wrench motion. Model validation data for this configuration are now being collected.

CONCLUSION

It has been demonstrated that a computer model may be a viable method to calculate torque resulting from arbitrarily complex motions. Using regression equations derived from empirically measured torques for isolated joints, end-effector torque was calculated and displayed for an isokinetic ratchet wrench procedure (Figure 4).

For initial validation efforts, isotonic data on the ratchet wrench were collected. Because of the uncontrolled ratchet velocities in the isotonic measurements, model calculations (based on the isokinetic configuration) were not acceptably accurate (up to 40% lower). An accurate validation and refinement of the model is contingent upon collection of very controlled (restricted motion) isokinetic (constant velocity) data of the ratchet wrench motion for more subjects.

Figure 4. [End-effector torque display for the isokinetic ratchet wrench procedure; image not recoverable from the scan.]

REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503.

1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE: July 1993
3. REPORT TYPE AND DATES COVERED: Technical Memorandum
4. TITLE AND SUBTITLE: Crew Interface Analysis: Selected Articles on Space Human Factors Research, 1987-1991
5. FUNDING NUMBERS:
6. AUTHOR(S): Tandi Bagian, compiler (NASA JSC)
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Lyndon B. Johnson Space Center, Houston, Texas 77058
8. PERFORMING ORGANIZATION REPORT NUMBER: S-697
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): National Aeronautics and Space Administration, Washington, D.C. 20546
10. SPONSORING/MONITORING AGENCY REPORT NUMBER: NASA TM 104756
11. SUPPLEMENTARY NOTES:
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Unclassified/Unlimited. Subject Category 169. National Technical Information Service, 5285 Port Royal Road, Springfield, VA 22161, (703) 487-4600
12b. DISTRIBUTION CODE:

13. ABSTRACT (Maximum 200 words)

As part of the Flight Crew Support Division at NASA, the Crew Interface Analysis Section is dedicated to the study of human factors in the manned space program. It assumes a specialized role that focuses on answering operational questions pertaining to NASA's Space Shuttle and Space Station Freedom Programs. One of the Section's key contributions is to provide knowledge and information about human capabilities and limitations that promote optimal spacecraft and habitat design and use to enhance crew safety and productivity. The Section provides human factors engineering for the ongoing missions as well as proposed missions that aim to put human settlements on the Moon and on Mars.

Research providing solutions to operational issues is the primary objective of the Crew Interface Analysis Section. In this pursuit, many interdisciplinary studies are developed, designed, and conducted. This document is a representative collection of studies providing a cross section of work performed by members of the Crew Interface Analysis Section. The studies represent such subdisciplines as ergonomics, space habitability, human-computer interaction, and remote operator interaction.

14. SUBJECT TERMS: Human Factors, Human-Computer Interface, Ergonomics, Remote Operators, Space Habitability
15. NUMBER OF PAGES: 141
16. PRICE CODE:
17. SECURITY CLASSIFICATION OF REPORT: Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE: Unclassified
19. SECURITY CLASSIFICATION OF ABSTRACT: Unclassified
20. LIMITATION OF ABSTRACT: Unlimited

Standard Form 298 (Rev 2-89), Prescribed by ANSI Std. Z39-18, 298-102
