Proceedings of HCI International 2005 – Las Vegas, USA

A Cross-Fertilized Evaluation Method based on Visual Patterns and Cognitive Functions Modeling

Lucas Stephane and Guy Boy

European Institute of Cognitive Sciences and Engineering (EURISCO International)

4 avenue Edouard Belin, 31400 Toulouse, France [email protected]

Abstract

This paper presents an approach to function-oriented evaluation of a user interface based on the combination of eye tracking and cognitive modeling. The iCOG method was derived from a series of experiments and analyses in order to rationalize the association of objective data (eye movement patterns in space and time) with subjective information (meaning provided by experts and model-based observations). Since eye-tracking data is rarely useful without appropriate interpretation, we added specific protocols based on the Personal Construct Theory (Kelly, 1955) and on situation awareness models (CC-SART and SAGAT). In addition to eye movement pattern analysis, we developed a knowledge elicitation method that gives meaning to the emergent visual patterns. Test-users, human factors experts, engineers and designers were involved in eliciting this kind of knowledge. iCOG is currently applied in the aerospace and automotive domains for the evaluation of the usability of cockpit instruments.

1 Introduction

Subjective evaluation methods have been developed and persistently used to assess, e.g., workload and situation awareness in human-machine system design and development. Evaluating a control panel or one of its instruments is a complex process that involves the elicitation of interaction patterns between users and technology at various levels of granularity. The resulting methods have incrementally moved from quantitative (e.g., score assignments) to qualitative (e.g., questionnaires) assessments in order to improve understanding of what is going on. In addition, in the control of safety-critical systems, huge amounts of dynamic visual data need to be processed by users to make decisions. We chose to use data related to the central vision channel as objective inputs to the evaluation method. Subjective assessment methods rely on language and on the working memory of subjects. Thus, even if the feedback collected from subjects is rich and pertinent, these methods have accuracy boundaries for specific real-time events that occur during experiment scenarios. The addition of eye tracking makes it possible to go beyond these boundaries by contributing objective data. This paper presents a method that associates eye-tracking measurements of gaze paths with the expertise of appropriate users to better understand elicited visual patterns, for evaluation purposes and, more generally, for human-centered design. Meaningful visual patterns are very valuable inputs to a related user-centered design process. We have built and used this method to evaluate new instances of control panels. We used the head-mounted eye tracker from SMI Inc. to capture visual patterns (sequences of eye points of gaze). We anticipated that elicited visual patterns would help to better understand users' situation awareness. We also developed a mental model based on the description of the cognitive functions used by pilots for situation awareness purposes. These cognitive functions are typically elicited from situated interviews of expert users. Each identified-as-generic visual pattern was mapped to an elicited cognitive function. It follows that the identification of a specific cognitive function has a direct impact on the evaluation of the usability of the tested interface.

2 The need for a visual-scanning mental model

Over the last two decades, eye-tracking techniques and tools have been developed and used to support and improve our understanding of gathered visual data. Consequently, we have recently focused on the relations between sequences of eye points of gaze and situation awareness. We define visual patterns as categories of sequences of eye points of gaze.
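As a minimal illustration of this definition (ours, not the paper's implementation), a visual pattern can be represented as a category label attached to a sequence of eye points of gaze; the category name and coordinates below are hypothetical:

from dataclasses import dataclass, field
from typing import List

@dataclass
class PointOfGaze:
    """A single eye point of gaze: screen coordinates plus a timestamp (ms)."""
    x: float
    y: float
    t_ms: int

@dataclass
class VisualPattern:
    """A category label attached to a sequence of eye points of gaze."""
    category: str                                      # e.g. "altitude-check scan"
    gaze_sequence: List[PointOfGaze] = field(default_factory=list)

# A short, fabricated scan path categorized as one visual pattern.
pattern = VisualPattern(
    category="altitude-check scan",
    gaze_sequence=[PointOfGaze(120, 80, 0), PointOfGaze(340, 95, 230)],
)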

Page 2: A Cross-Fertilized Evaluation Method based on Visual Patterns …my.fit.edu/~gboy/GAB/Conferences_files/HCII05_Stephane_v... · 2016-02-26 · Proceedings of HCI International 2005

Proceedings of HCI International 2005 – Las Vegas, USA

Since situation awareness is very context-dependent and can hardly be predicted automatically from eye points of gaze alone, it is crucial to build a mental model relating objective data and subjective information. In human-computer interaction, for example, accessing the user's intention and awareness has been strongly advocated (Starker and Bolt, 1990). Mental models have been studied for a long time in Human Factors (Rouse and Morris, 1986). They refer to more general knowledge than the mere reflection of how the system being used works; specifically, mental models are defined as a rich and elaborate structure reflecting the user's understanding of what the system contains, how it works, and why it works that way (Carroll & Olson, 1987). Cognitive models (Narayanan and Hegarty, 2002), computational models (Narayanan and Chandrasekaran, 1991), and empirical studies (Hegarty, 1992) of problem solving in visuo-spatial and causal domains suggest that problem-solving tasks in such domains invoke cognitive processes involving mental animation and imagery.

In order to better understand the need for mental models, we need to explain the notion of observability (Boy, 2002) of an internal process or function via an external measurement. Here, the internal process is situation awareness (Endsley, 2003), and the external measurement is the sequence of eye points of gaze. In this case, the availability of eye points of gaze alone is not enough to "observe" the user's situation awareness; other kinds of information are necessary. We propose to use subjective information provided by test-users during the eye-tracking experiment, following the Personal Construct Theory (PCT) (Kelly, 1955). PCT attempts to explain why people do what they do. George Kelly was first trained as an engineer and later became a clinical psychologist and educator. His theory is based on the idea that people anticipate events by construing their replications. He saw anticipatory processes as the source of all psychological phenomena, and introduced the term "construct" to denote the patterns used to predict what will happen. These constructs are revised according to the way the world conforms to the predictions. Kelly's clinical approach was based on constructive alternativism, i.e., on encouraging the client to develop alternative construct systems through which to construe life events. We have used the following PCT guidelines for visual pattern understanding: (1) elicit the similarities between the visual patterns of a test-user group, but also their differences (see the sketch below); (2) have test-user 2 perform the visual patterns of test-user 1; (3) detect the changes between the initial and final visual patterns of the same test-user, and detect the changes when new artifacts are used; (4) understand and rationalize the specificity of visual patterns for each test-user, and understand the causes that lead a test-user to use the observed visual patterns.

Eye-tracking data analysis can be top-down, guided either by a cognitive model or by a design hypothesis, or bottom-up, based only on observation (Jacob & Karn, 2003): (1) top-down based on a cognitive model: longer fixations on an object of the visual scene reflect the subject's difficulty in understanding that object (or the object's features); (2) top-down based on a design hypothesis: users will understand colored objects more easily than gray ones; (3) bottom-up: users are taking much longer than anticipated to make selections on this screen, and we wonder where they are looking.
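Guideline (1) can be sketched as follows, under the assumption that each fixation has already been mapped to a named area of interest (AOI); the AOI names and the similarity measure (a simple sequence ratio) are illustrative choices of ours, not part of the method:

from difflib import SequenceMatcher
from typing import List

def pattern_similarity(seq_a: List[str], seq_b: List[str]) -> float:
    """Ratio in [0, 1] of how closely two AOI scan sequences match."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()

# Hypothetical AOI scan sequences for two test-users on the same task.
user_1 = ["altimeter", "attitude", "speed", "altimeter"]
user_2 = ["attitude", "altimeter", "speed", "altimeter"]

score = pattern_similarity(user_1, user_2)
print(f"similarity: {score:.2f}")  # a high score suggests similar visual patterns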
In the first case, because eye tracking contributes very precise data, a precise behavioral or cognitive model is necessary for the analysis and interpretation of this data. If the model is not precise enough, the use of eye tracking may lose its benefits. In a top-down approach, mental models are at the top of the pyramid; they are generally the most used entity in industrial experiments. Mental models (Johnson-Laird, 1993, in Taylor et al., 1996) are defined as representations of past experiences used in a predictive way. A mental model is generic; a schema is an instance of a mental model for a specific system. Schemas (Bartlett, 1932, in Taylor et al., 1996) are packets of knowledge, tuned by experience; they represent a form of memory organization. A mental model is the representation of the artifact features and of the contextual events and states that enables the user to mentally try out actions before executing them. The operative structure of the artifact is built as an operative mental model or image (Guérin et al., 1994; Ochanine, 1981, in Boy, 1998). A method for incrementally eliciting partial mental models from users is proposed in Cognitive Function Analysis (Boy, 1998). It should be noted that a mental model is commonly developed according to the goal of each investigation by eliciting knowledge from experts.

3 Connecting observable data to abstract concepts

3.1 Looking for the right situation awareness mental model

We then looked for the right mental model linking observed eye points of gaze to situation awareness (SA) concepts. Several mental models of situation awareness have been proposed in the literature. Dominguez (in ESSAI, 2000) proposed a list of requirements for situation awareness:


(1) extract information from the environment; (2) integrate this information with relevant internal knowledge to create a mental picture of the current situation; (3) use this picture to direct further perceptual exploration in a continual perceptual cycle; (4) anticipate future events. Smith & Hancock (1994, in Taylor et al., 1996) proposed that SA intentionally directs consciousness to the performance of tasks and is oriented toward the environment. In situation awareness, the user's actions are oriented responses to the environment through filtering, tuning and matching; thus, SA is defined as the antithesis of introspection. Boy (1991) introduced the distinction between the perceived situation and the desired situation in the human operator model. Along the same lines, Finnie and Taylor (1998, in ESSAI, 2000) proposed the Integrated Model of Perceived Awareness Control (IMPACT), which takes into account that: behavior is associated with the control of perception; actions are valuable if their outcome is positively perceived in relation to the intended goals; and feedback is a fundamental requirement of goal-directed behavior. SA acquisition and maintenance are constructed from the behavior involved in reducing the differences between the perceived level of SA and the desired level of SA. We decided to consider the following SA models in our current work: Endsley's model (2003), based on goal sets, mental models and schemata, and also the three SA levels (perception, comprehension, projection); and Taylor's model (1996), built on the Skills-Rules-Knowledge model (Rasmussen, 1983).

Figure 1: Adapted from Endsley’s model.

Table 1: Taylor’s model.

Endsley's model proposes a multi-loop process in which the mental model plays a central role in directing attention, comprehension and projection, and takes information from perception and action (Figure 1). Taylor's model is centered on cognitive compatibility (CC), which may be seen as a complement to usability: a system may be usable without necessarily being compatible with the user. "Cognitive compatibility is considered as the degree of consistency, congruency, and mapping between tasks on the one hand, and internal mental processes, knowledge and expectations on the other." The mental model considered by Taylor et al. matches SA dimensions with Rasmussen's three levels of behavior (Table 1). It is interesting to note that cognitive compatibility implies that knowledge has a top-down influence on cognitive functions, while both depth of processing (level of processing) and ease of reasoning have a bottom-up influence on cognitive functions. SA is assessed either as a product or as a process. As a product (Endsley's model), measures of SA are performed in a particular context, at a particular point in time; SA as a product attempts to measure an operator's knowledge or awareness of certain states or conditions at that point in time. As a process (Taylor's model), SA focuses on how SA is obtained, and thus attempts to measure cognitive resources, i.e., cognitive functions.
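To make the product/process distinction concrete, here is a hedged sketch of ours (the record fields are our invention, not taken from SAGAT or CC-SART): a product measure is a snapshot query scored at a freeze point, while a process measure accumulates over the whole session:

from dataclasses import dataclass

@dataclass
class ProductMeasure:
    """SA as a product: a snapshot probe at one point in time (SAGAT-style)."""
    freeze_time_s: float       # when the simulation was frozen
    query: str                 # e.g. "What is your current altitude?"
    correct: bool              # answer scored against ground truth

@dataclass
class ProcessMeasure:
    """SA as a process: a cognitive-function activation observed over time."""
    cognitive_function: str    # e.g. "monitor-terrain-proximity"
    first_use_s: float         # when the function was first observed
    total_dwell_s: float       # accumulated gaze time on the linked items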


Figure 2: Cognitive functions explained from cognitive compatibility dimensions (Taylor et al., 1996). [The figure shows the chain Cognitive Task > Cognitive 'Function(s)' > Cognitive Activity > Feedback, influenced top-down by Activated Knowledge (evoked schemata, whose conditions are context/situation specific) and bottom-up by Ease of Reasoning (competence with level of expertise, particular activity requirements) and Depth of Processing (applied intelligence, task understanding).]

3.2 Structuring the visual scene in terms of situation awareness attributes

Taking into account the generic SA models presented previously, describing SA requires defining the properties of the environment in which the situation evolves in the context of the task. Wickens (2000) proposed three kinds of environment for aerospace situations: (1) the 3D geographical space around the aircraft as occupied by hazards, such as other air traffic (friend or foe), weather and terrain; (2) the internal systems of the aircraft, in particular the automation system; (3) the responsibility for the array of tasks confronting the pilot, her or his crew, and various automated agents. Uhlarik et al. (2002) compiled the following surveillance-related SA components: (1) environment awareness (weather, windshear, other aircraft); (2) spatial awareness (attitude, location relative to terrain, waypoints, flight-path vector, speed), proposed by Regal (1987, in Uhlarik et al., 2002); (3) temporal awareness (time before deadlines), proposed by Wickens (1992b, in Uhlarik et al., 2002); (4) navigation awareness (a combination of spatial and temporal awareness). Furthermore, in the specific case of 3D displays, the perception and understanding of the visual scene are based on several visual cues. Wickens (1992, in Alm, 2001) proposed nine major visual cues: (1) linear perspective: we assume that converging lines are parallel lines receding in depth; (2) interposition: when the contours of an object obscure the contours of another, we assume that the obscured object is more distant; (3) height in the plane: because we normally view objects from above, we assume that objects higher in our visual field are farther away; (4) light and shadow: when objects are lighted from one direction, they normally have shadows that let us assume their relative orientation to us; (5) relative size: when objects are known to be the same true size, smaller objects are assumed to be farther away; (6) textural gradients: when the plane of a texture is oriented toward the line of sight, the grain grows finer at greater distance; this change in grain across the visual field is referred to as a textural gradient; (7) proximity-luminance covariance: objects and lines are typically brighter as they are closer to us, so a continuous reduction in illumination and intensity is assumed to signal receding distance; (8) aerial perspective: more distant objects tend to be less clearly defined; (9) relative motion gradient, or parallax: in a 3D scene, objects closer to us show greater relative motion than those that are more distant, so we assume that distance from us is inversely related to the degree of motion. In conclusion, visual cues in the sequence of points of gaze are identified with respect to the framework being chosen, e.g., 3D interfaces, graphical or textual information. The way the user interface and the task are organized influences the way eye-tracking protocols and data analysis are performed. The sketch below illustrates one possible structuring.
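The following sketch maps named elements of the visual scene to the surveillance-related SA components compiled by Uhlarik et al. (2002); the element names and the particular mapping are hypothetical, for illustration only:

# Map named elements of the visual scene to surveillance-related SA
# components (Uhlarik et al., 2002). The mapping itself is illustrative.
SA_COMPONENTS = {
    "weather_radar":      "environment awareness",
    "traffic_display":    "environment awareness",
    "attitude_indicator": "spatial awareness",
    "waypoint_list":      "navigation awareness",
    "eta_readout":        "temporal awareness",
}

def classify_fixation(aoi: str) -> str:
    """Return the SA component a fixated scene element contributes to."""
    return SA_COMPONENTS.get(aoi, "unclassified")

print(classify_fixation("waypoint_list"))  # -> navigation awareness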


3.3 Model of the visual scene in the context of the task

The method that was developed is intended to be operational, i.e., a clear and usable structure had to be defined. According to the evaluation context and purposes, its structure needs to be flexible and modular to enable the development and investigation of relevant evaluation topics.

3.3.1 Cognitive modeling based on agents and cognitive functions

Agent modeling using the cognitive function paradigm (Boy, 1998) is very appropriate for taking into account visual information processing and task execution, each of which may be serial or parallel. Thus, agents and cognitive functions are the kernel entities of the method. The cognitive architecture is based on two main levels: input-output (IO) channels and information processing. Each system contains a set of agents. Agents are categorized by level (IO or Process) and by type (IO channel or process system). As described further below, agents have a set of services, which are the cognitive functions.

Figure 3: Levels of perception and processing.

Each level is decomposed into several systems (Figure 3). Thus, the main cognitive systems, i.e., Long Term Memory, Working Memory, Attention, Decision Making, etc., appear in the model. This modularity is common in cognitive modeling (Johnson, 1997; Leiden et al., 2001; Lewis, 1999; Lindsay & Norman, 1980; Minsky, 1988; Young, 1999). The model is adaptable to the purpose of each specific study. Design and usability issues can be tracked at the input-output level as well as at the process level, and each level may be detailed if need be in order to model the specific cognitive resources and processes involved in human-machine interaction.
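A minimal sketch of this two-level architecture follows; the system names are taken from the text above, while the class layout is our own illustration:

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Level(Enum):
    IO = "input-output"        # perception/action channels
    PROCESS = "processing"     # information-processing systems

@dataclass
class System:
    """A cognitive system (e.g. Working Memory) hosting a set of agents."""
    name: str
    level: Level
    agents: List[str] = field(default_factory=list)

architecture = [
    System("Visual Channel", Level.IO),
    System("Long Term Memory", Level.PROCESS),
    System("Working Memory", Level.PROCESS),
    System("Attention", Level.PROCESS),
    System("Decision Making", Level.PROCESS),
]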

3.3.2 Agent model

Our agent approach was influenced by the Belief-Desire-Intention (BDI) model (Kinny & Georgeff, 1996). For the purposes of our iCOG evaluation method, only the main definitions of BDI multi-agent systems have been kept. In agent modeling, the primary emphasis is on roles, responsibilities, services and goals. The application domain is analyzed in terms of what needs to be achieved, and in what context. Thus, an agent can be completely specified by: the items and events it can perceive, the resources it may activate, the beliefs it may hold, the goals it may adopt, and the plans that give rise to its intentions. Plans are directly executable prescriptions of how an agent should behave to achieve a goal or respond to an event. The context-sensitivity of plans provides modularity and compositionality: plans for new contexts may be added without changing existing plans for the same goal. The design is extensible (it can cope with frequent changes and special cases) and enables incremental development and testing. Roles are defined as sets of responsibilities; responsibilities are defined as sets of services; services are defined as resources that do not need to be decomposed further. For example, a 3D scene may be represented by a set of agents, each one being in charge of a specific feature of the 3D scene such as relief, textures, colors and symbology.

Agent = (Role(s), Responsibilities, Services)
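Read as a data structure, this specification might look like the following sketch (a loose BDI flavor; the field names are ours). Note how plans are keyed by context, so a plan for a new context can be added without touching existing ones:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    """Agent = (roles, responsibilities, services), plus BDI-style state."""
    roles: List[str]
    percepts: List[str]                 # items and events it can perceive
    resources: List[str]                # resources it may activate
    beliefs: Dict[str, object] = field(default_factory=dict)
    goals: List[str] = field(default_factory=list)
    # Plans keyed by context: adding a context never changes existing plans.
    plans: Dict[str, Callable[[], None]] = field(default_factory=dict)

    def act(self, context: str) -> None:
        """Execute the plan matching the current context, if any."""
        plan = self.plans.get(context)
        if plan is not None:
            plan()

# Hypothetical usage: a "relief" agent in charge of terrain features.
relief = Agent(roles=["render-terrain"], percepts=["terrain-mesh"],
               resources=["3d-renderer"])
relief.plans["low-visibility"] = lambda: print("switch to enhanced contrast")
relief.act("low-visibility")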

3.3.3 Cognitive functions

Cognitive functions transform cognitive tasks into cognitive activities (Boy, 1998). Integrated in the agent model, cognitive functions are related to the sets of services of an agent. Thus, like agents, cognitive functions are categorized by level (IO or Process) and by type (IO channel or process system). For example, the relief of a 3D scene (an agent) may have various cognitive functions used to detect and analyze valleys, rivers, woods, roads, buildings, and so on.
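Continuing the sketch, the relief agent's cognitive functions would be its services; the function names mirror the example in the text, and everything else is illustrative:

# Cognitive functions as the services of a "relief" agent in a 3D scene.
relief_agent_services = {
    "detect_valley":   lambda scene: "valley" in scene,
    "detect_river":    lambda scene: "river" in scene,
    "detect_road":     lambda scene: "road" in scene,
    "detect_building": lambda scene: "building" in scene,
}

scene_items = {"valley", "river", "runway"}
activated = [name for name, fn in relief_agent_services.items()
             if fn(scene_items)]
print(activated)  # -> ['detect_valley', 'detect_river']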

Cognitive Task > Agent Cognitive 'Function(s)' > Cognitive Activity

Several agents may cooperate through their cognitive functions in order to achieve their goals, i.e., to transform the cognitive tasks into cognitive activities in the given context. The interaction blocks defined in Cognitive Function Analysis (Boy, 1998) are equivalent to the plans of an agent.

3.3.4 Mixing cognitive functions, agents and mental models

The guidelines of the method are as follows: identify the task; identify the subtasks; identify the agent(s) responsible for the task and subtasks; identify the mental models related to the task and subtasks; identify the cognitive functions of each agent; identify, on the visual scene, the items to be allocated to each cognitive function; identify, on the visual scene, the main categories of items to be allocated to each agent. The aim of mixing agents, cognitive functions and situation awareness is to obtain a structure that holds the main components together, from the most specific to the most general, i.e., cognitive functions, agents and mental models (Table 2).

Table 2: Linking cognitive functions, agents and mental models.

Cognitive Function | Own Agent | Link Agent | Mental Model

A link-agent may be requested either by a cognitive function or by another agent to provide resources for the task to accomplish. The identification of the different components in the table may be performed bottom-up or top-down, i.e., from the cognitive functions to the agents and mental models, or from the mental models to the agents and then to the cognitive functions. Experience has shown that several mixed (bottom-up/top-down) iterations are necessary in order to fine-tune the component model and the relationships between components. This clear identification is used to understand and analyze the points of gaze provided by the eye tracker. Cognitive functions are linked to items that correspond to points of gaze on the visual scene, and via this direct link, the higher-level components (agents and mental models) are also linked to the points of gaze. When the cognitive function that corresponds to a point of gaze is identified, the structure makes it possible to switch analysis levels through the membership of cognitive functions in agents. Thus, evaluation is performed at all levels (artifact, task and mental models).
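Table 2's linking structure can be read as a chain of lookups from a fixated item up to a mental model. The sketch below is ours, and all entries are hypothetical placeholders:

# One row of Table 2, keyed by the scene item a point of gaze lands on.
LINK_TABLE = {
    "terrain_display": {
        "cognitive_function": "monitor-terrain-proximity",
        "own_agent": "relief",
        "link_agent": "alerting",        # agent providing extra resources
        "mental_model": "terrain-avoidance",
    },
}

def analyze_gaze(item: str) -> None:
    """Switch analysis levels: from a fixated item up to the mental model."""
    row = LINK_TABLE.get(item)
    if row:
        print(f"{item} -> {row['cognitive_function']} "
              f"-> {row['own_agent']} -> {row['mental_model']}")

analyze_gaze("terrain_display")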

Figure 4: 3D graph for the cognitive model.

To improve the usability of the cognitive model, a 3D tool based on 3D graphs and UML notation has been developed (Figure 4 presents a sketch of a terrain awareness warning system application). The tool enhances the iterative building of the cognitive model. Depending on the desired level of detail, each entity in the model may be represented as a cluster, in keeping with the modularity of the cognitive architecture. Another interesting feature is that the 3D tool can not only represent the various entities in a static view, but also model their states in a dynamic view. For that specific purpose, a formalism based on colors and shapes has been specified.


4 The need for experience and expertise

A visual scene is represented as a set of visual elements. When someone scans a visual scene, he or she produces a visual path through an ordered set of visual elements. This path is called a visual pattern. We are interested in visual pattern elicitation that goes far beyond a statistical analysis of classes of visual elements. In addition, working with visual patterns enables the analysis of specific visual elements in the specific contexts where they make sense in scanning. Visual pattern analysis improves the understanding of user behavior, which in turn contributes to improving user-interface design and the related training issues. Such an eye-tracking technique provides purposeful information that cannot be captured using, for example, questionnaires. Knowledge elicitation includes knowledge acquisition and explanation. In general, knowledge acquisition (KA) is carried out using subjective (oral or written) techniques such as interviews, questionnaires or forms. The visual patterns obtained with the eye-tracking technique represent precise and complete non-subjective (i.e., objective) knowledge; thus, eye tracking is usually considered an objective KA method. For expert pilots, visual patterns are meaningful, and they can analyze such patterns without supplementary verbal translation and explanation. However, for intra- and inter-domain knowledge sharing, it is important to implement a method that enables subjective explanation of these visual patterns. Moreover, such explicit subjective knowledge completes the visual knowledge contained in the elicited visual patterns. Our approach is thus both objective and subjective: objective because we work on objective raw data, and subjective because the interpretation is necessarily subjective information provided by experts, who include users, human factors experts, engineers and designers. The knowledge elicitation method includes three phases: (1) individual explanation and elicitation, performed after the experiment session: after each eye-tracking session, the user analyzes, comments on and validates his or her visual patterns, which are automatically presented to him or her; (2) intra-domain collective elicitation, then performed with all users and experts: a collective intra-domain elicitation and classification of all eye-tracking-collected visual patterns; (3) inter-domain collective knowledge sharing, finally performed across experts, to make sense of all eye-tracking-collected data and define compiled use cases.
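As a sketch of how phase (1) might be recorded, assuming each replayed pattern is shown back to its author for comments and validation (the record fields are our invention):

from dataclasses import dataclass

@dataclass
class PatternAnnotation:
    """Phase 1: a test-user's own comment on one replayed visual pattern."""
    pattern_id: str
    user_id: str
    comment: str        # the user's explanation of what the scan was for
    validated: bool     # did the user confirm the pattern as meaningful?

annotation = PatternAnnotation(
    pattern_id="vp-017",
    user_id="pilot-03",
    comment="Cross-checking speed against the flight director before the turn.",
    validated=True,
)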

Table 3: Method overview.

Component / Enables one to:
- Eye Tracking: get gaze positions and scan paths; calibrate the visual scene; define regions of interest; define pertinent scene elements.
- Situation Awareness Methods: define a SA model; implement systematic questionnaires and rating scales; estimate SA as a product or as a process.
- Cognitive Modeling: perform activity analysis; define a cognitive model based on mental models, cognitive agents and cognitive functions.
- Experimental Protocol: define experiment variables; define scenarios; specify the way the experiment should proceed.
- Experiment: perform the test.
- Result Analysis: define a systematic method for eye tracking; analyze the results provided by various data sources (eye tracking and SA questionnaires); improve the experimental protocol.
- Knowledge Elicitation: provide feedback for experimentation and for result analysis; obtain a consensual identification of the use cases of eye tracking for the application under evaluation; validate the SA model; validate the cognitive model; identify the relations between cognitive modeling, SA and attention; obtain a consensual interpretation of the results; obtain a consensual identification of the use cases of eye tracking in aeronautics.


5 Conclusion

We developed a cognitive model based on agents and cognitive functions that enables a meaningful integration of eye tracking with subjective assessment techniques. Agents and cognitive functions may be used at different levels of granularity; in particular, they are used here at a very fine level of granularity that is well suited to the handling of eye-tracking data. Furthermore, knowledge elicitation is a very important aspect of iCOG, since it is used first to construct the mental model and later to construct reliable interpretations of eye-tracking data based on the categorization of visual patterns. To summarize, the different components of our global evaluation method are presented in Table 3, and the components and their dependencies are illustrated in a Unified Modeling Language diagram (Figure 5). The method is currently used in various domains, including aerospace and automotive.

Figure 5: Method components and their dependencies.

References

Alm, T. (2001). How to Put the Real World into a 3D Aircraft Display. Proceedings of the People in Control Conference, Manchester, June 2001.

Boy, G.A. (2002). Procedural Interfaces (in French). Proceedings of IHM 2002, Poitiers, France, pp. 81-88. New York: ACM Press.

Boy, G.A. (1998). Cognitive Function Analysis. Ablex Publishing Corporation.

Carroll, J.M., & Olson, J.R. (1987). Mental Models in Human-Computer Interaction: Research Issues About What the User of Software Knows. Committee on Human Factors, Commission on Behavioral and Social Sciences and Education, National Research Council. Washington, DC: National Academy Press.

Endsley, M.R. (2003). Designing for Situation Awareness: An Approach to User-Centered Design. London: Taylor & Francis.

ESSAI (2000). Enhanced Safety through Situation Awareness Integration in Training. WP1: Orientation on Situation Awareness and Crisis Management. ESSAI/NLR/WPR/WP1.

Guérin, F. (1994). Comprendre le Travail pour le Transformer. ANACT.

Jacob, R.J.K., & Karn, K.S. (2003). Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises (Section Commentary). In J. Hyönä, R. Radach, & H. Deubel (Eds.), The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research (pp. 573-605). Amsterdam: Elsevier Science.

Johnson, T.R. (1997). Control in ACT-R and Soar. In M. Shafto & P. Langley (Eds.), Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society (pp. 343-348). Hillsdale, NJ: Lawrence Erlbaum Associates.

Kelly, G.A. (1955). The Psychology of Personal Constructs. New York: Norton.

Kinny, D., & Georgeff, M. (1996). Modelling and Design of Multi-Agent Systems. Technical Note 59. Australian Artificial Intelligence Institute.

Leiden, K., et al. (2001). A Review of Human Performance Models for the Prediction of Human Error. NASA System-Wide Accident Prevention Program, Ames Research Center.

Lewis, R.L. (1999). Cognitive Modeling, Symbolic. In R. Wilson & F. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.

Lindsay, P.H., & Norman, D.A. (1980). Traitement de l'information et comportement humain. Paris: Vigot.

Minsky, M. (1985). The Society of Mind. Touchstone, Simon and Schuster.

Rasmussen, J. (1983). Skills, Rules, and Knowledge: Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 257-267.

Rouse, W.B., & Morris, N.M. (1986). On Looking into the Black Box: Prospects and Limits in the Search for Mental Models. Psychological Bulletin, 100(3), 349-363.

Starker, I., & Bolt, R.A. (1990). A Gaze-Responsive Self-Disclosing Display. Proceedings of CHI '90, ACM, 3-9.

Taylor, R.M., Finnie, S.E., & MacLeod, I. (1996). Enhancing Situational Awareness Through System Cognitive Quality. Defence Research Agency.

Uhlarik, J., & Comerford, D.A. (2002). A Review of Situation Awareness Literature Relevant to Pilot Surveillance Functions. DOT/FAA/AM-02/3.

Wickens, C.D. (2000). The Trade-off of Design for Routine and Unexpected Performance: Implications of Situation Awareness. In Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates.

Young, R.M. (1999). Brief Introduction to ACT-R for Soarers: Soar and ACT-R Still Have Much to Learn from Each Other. Talk presented at the 19th Soar Workshop, University of Michigan, Ann Arbor, May 21-23, 1999.