
IMAGE - Complex Situation Understanding: An Immersive Concept Development

Marielle Mokhtari¹, Eric Boivin¹, Denis Laurendeau², Sylvain Comtois², Denis Ouellet², Julien-Charles Levesque² and Etienne Ouellet²

¹Defence Research & Development Canada–Valcartier, Quebec, Canada
²Dept of Electrical & Computer Eng., Laval University, Quebec, Canada

ABSTRACT

This paper presents an immersive, Human-centric/built virtual work cell for analyzing complex situations dynamically. The environment, supported by a custom open architecture, is composed of objects of complementary nature reflecting the level of Human understanding. Furthermore, it is controlled by an intuitive 3D bimanual gestural interface using data gloves.

KEYWORDS: Immersive environment, 3D objects, interaction, visualization, animation, synchronization.

1 INTRODUCTION

Clever decisions usually result from a good understanding of a situation. Nowadays, in the military domain, achieving good understanding is a challenge when rapid operational and technological changes occur. The challenge of complex situation (CS) understanding and the objective of increasing the agility of the Canadian Forces in dealing with such situations are investigated in the IMAGE project [1]. IMAGE aims to support collaboration of experts/specialists towards a common CS understanding that can be shared among users. The IMAGE concept is supported by 3 closely coupled principles:

• Understanding Spiral Principle: IMAGE supports an iterative and incremental understanding process. CS understanding emerges through iteratively enhanced representations of the situation: from a vague and unorganized assessment to a structured one revealing hidden or emergent knowledge;

• Human Control Principle: IMAGE is fully controlled by the Human (experts, decision makers… as individuals and/or teams); and

• Technology Synergy Principle: IMAGE tackles the CS understanding problem using the synergy between different technologies in order to favour complementarity, flexibility and agility.

The IMAGE concept is composed of 4 modules exploiting cutting-edge technologies: (1) KNOWLEDGE REPRESENTATION (REP): building and sharing a vocabulary and conceptual graphs making the understanding explicit; (2) SCENARIO SCRIPTING: transforming conceptual graphs into an executable simulation model; (3) SIMULATION (SIM): moving in the space of scenario variables and browsing through simulation runs; and (4) EXPLORATION (XPL): using visualization and interaction metaphors for investigating datasets dynamically so they become meaningful to the user.

The work presented in this paper focuses on the XPL Module. Two versions of the XPL tools have been developed to date: a desktop version and a semi-immersive version [2]. This paper presents a third version, a Human-centric/built fully immersive XPL environment, proposing a range of visualization and interaction concepts/metaphors for investigating datasets dynamically. Figure 1 shows the IMAGE concept associated with (a snapshot of) the immersive working environment.

2 XPL ENVIRONMENT ARCHITECTURE

The IMAGE XPL environment can be used on a wall screen and in a CAVE, both providing active stereoscopic visualization. The architecture also extends to desktop/mobile platforms and supports collaborative work between users.

The architecture has been designed to be independent of graphics/physics rendering engines. It is built as a tree in which each branch is an entity, some entities having no correspondence with the graphics/physics rendering trees. Three categories of entities have been defined:

• Manipulator: represents a hardware device (data glove, wand…) used by the Human to interact with the Objects in the environment. Several Manipulators of the same type can coexist in IMAGE;

• User: corresponds to a real user in the environment. It establishes the correspondence between the User and his Manipulator(s) and also manages Object manipulation to avoid conflicts between Users; and

• Object: represents an element that is not a Manipulator and with which the User can interact. 3D objects (see Section 3) are special instances of Object: their pose (position, rotation, pivot point and scale factor) can be modified and manipulations (move, rotate, zoom) can be applied to them.
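As a rough illustration of this entity tree, the following Python sketch models the three entity categories; all class and attribute names are hypothetical, and the single-owner lock is only one plausible reading of the conflict management described above, not the actual IMAGE implementation.

```python
# Minimal sketch of the XPL entity tree (all names hypothetical).

class Entity:
    """One branch of the entity tree."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

class Manipulator(Entity):
    """A hardware device (data glove, wand, ...) driven by the Human."""

class Object3D(Entity):
    """An interactive element whose pose can be modified."""
    def __init__(self, name):
        super().__init__(name)
        self.pose = {"position": (0.0, 0.0, 0.0), "rotation": (0.0, 0.0, 0.0),
                     "pivot": (0.0, 0.0, 0.0), "scale": 1.0}
        self.owner = None            # User currently manipulating this Object

class User(Entity):
    """A real user; owns Manipulators and arbitrates Object access."""
    def __init__(self, name):
        super().__init__(name)
        self.manipulators = []

    def grab(self, obj):
        # Only one User may manipulate a given Object at a time.
        if obj.owner is None:
            obj.owner = self
            return True
        return False

root = Entity("xpl-root")
user = root.add(User("analyst"))
user.manipulators.append(Manipulator("left-glove"))
tree = root.add(Object3D("simulation-tree"))
assert user.grab(tree)               # succeeds: no conflicting User
```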

Interaction is considered as a communication mechanism (a flow of messages) allowing Manipulators to send interactions to Objects. Interactions are not directly coupled with Manipulators, so the Manipulator design is independent of the Object design.
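A toy sketch of this decoupling, assuming a simple publish/subscribe dispatcher (the Dispatcher class and the message kinds are illustrative, not the actual IMAGE API):

```python
# Hypothetical message flow: a Manipulator emits generic Interaction
# messages; any Object subscribed to a message kind reacts to it, so
# neither side depends on the other's design.

from dataclasses import dataclass

@dataclass
class Interaction:
    kind: str            # e.g. "move", "rotate", "zoom"
    payload: dict

class Dispatcher:
    def __init__(self):
        self.handlers = {}                 # kind -> list of callbacks

    def subscribe(self, kind, handler):
        self.handlers.setdefault(kind, []).append(handler)

    def send(self, interaction):
        for handler in self.handlers.get(interaction.kind, []):
            handler(interaction)

bus = Dispatcher()
bus.subscribe("zoom", lambda i: print("object zooms by", i.payload["factor"]))
bus.send(Interaction("zoom", {"factor": 1.5}))   # glove gesture -> message
```

Because Objects subscribe only to message kinds, a wand and a data glove can drive the same Object interchangeably.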

Drag & Drop is a mechanism implemented at the entity level (1) to transfer information from one Object to another (via a ghost concept) and (2) to instantiate associations between two Objects.
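A minimal sketch of the ghost idea under the same caveat (all names are hypothetical):

```python
# While dragging, a lightweight ghost carries a reference to the source
# Object; dropping it on a target either transfers information (case 1)
# or instantiates an association between the two Objects (case 2).

class DropTarget:
    def __init__(self):
        self.data = {}
        self.associations = []

class Ghost:
    """Stand-in rendered under the Manipulator during the drag."""
    def __init__(self, source):
        self.source = source

def drop(ghost, target, associate=False):
    if associate:
        target.associations.append(ghost.source)   # case (2): link Objects
    else:
        target.data.update(ghost.source.data)      # case (1): copy information

src, dst = DropTarget(), DropTarget()
src.data["series"] = [1, 2, 3]
drop(Ghost(src), dst)                              # dst now holds the series
```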

Another interesting feature of the architecture is the MetaData concept, which encapsulates all data types provided to XPL by the SIM Module.
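One possible shape for such an envelope, sketched with hypothetical field names:

```python
# A single wrapper type for every kind of value the SIM Module can hand
# to XPL, so Objects can accept data without knowing its concrete type.

from dataclasses import dataclass
from typing import Any

@dataclass
class MetaData:
    name: str          # e.g. "troop-strength"
    dtype: str         # e.g. "float", "time-series", "geo-layer"
    value: Any         # the payload itself

sample = MetaData("troop-strength", "time-series", [10.0, 9.5, 8.2])
```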

Figure 1. (Right) IMAGE Concept – (Left) IMAGE Working Environment

3 3D OBJECTS – WORK TOOLS

The XPL environment is entirely built and controlled/managed by the User throughout his work session (including Object creation, Object manipulation, Object destruction, association of Objects…). The various Objects available, which can be considered as a set of work tools, are presented in the following subsections.

3.1 SIMULATION TREE – ONE SCENARIO, ONE VARIABLE SPACE

The Simulation Tree is the cornerstone of the XPL environment. It is the direct link between the SIM and XPL Modules and encapsulates Simulation data. It corresponds to a basic 3D implementation [4] of the visual representation of the simulation conceptual framework used by IMAGE [2]. Combining concepts of data/information visualization, the Simulation Tree provides Users with a visually explicit history of their experiments. It also provides interaction metaphors for running Simulations, whose parameters and resulting data, rendered in an adequate way, can assist Users in better understanding the simulated CS. Figure 2 presents information on the Simulation Tree.
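As a hedged illustration of the history the Simulation Tree makes explicit, the sketch below models simulation runs as tree nodes; the structure and names are assumptions for illustration, not the actual implementation described in [4].

```python
# Each node is one simulation run; branching a run at a given step with
# changed variables records the User's experiment history, which the 3D
# Object then renders.

class SimulationNode:
    def __init__(self, label, params, parent=None, branch_step=0):
        self.label = label
        self.params = params               # scenario variables for this run
        self.parent = parent
        self.branch_step = branch_step     # step at which this run diverged
        self.children = []
        self.results = {}                  # data returned by the SIM Module

    def branch(self, label, step, **changes):
        """Fork a new run from this one at `step`, overriding some variables."""
        child = SimulationNode(label, {**self.params, **changes},
                               parent=self, branch_step=step)
        self.children.append(child)
        return child

baseline = SimulationNode("baseline", {"troops": 100, "supplies": 50})
alt = baseline.branch("fewer-troops", step=42, troops=80)
```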

Figure 2. Simulation Tree – (Up) Hierarchy – (Down) Information draping

3.2 GEOSPATIAL VIEW – EXPLANATION OF SIMULATION

The Geospatial View allows the Simulation status to be visualized (at each Simulation Step) using a terrain representation on which the scenario elements are displayed symbolically (cubing of military icons). Information layers and gauges/indicators (evolving with Simulation time) can be added to inform the User of the status of simulation variables. This tool also allows a Simulation to be played (like a video). This Object is active if and only if it is associated with a given Simulation (in the Simulation Tree). Figure 3 shows different aspects of the Geospatial View.

3.3 SCIENTIFIC VIEWS – 2D AND 3D VIEWS

At any time, to analyze Simulation data or compare Simulations, the User can configure Scientific Views (whether or not they are already associated with Simulation(s)) for:

• 2D plotting: line graphs (plot), bar graphs (histogram, stacked) and area graphs (pie). The User can create up to 4 different graphic displays within a figure window, each representing a different combination of data; and

• 3D plotting: similar to 2D plotting but in 3D.

Users can Drag & Drop a Simulation on a Scientific View and select the data they wish to analyze (or compare) as well as the type of plotting mode to be used. Figure 4 shows examples of Scientific Views.
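As a rough sketch of these rules (up to 4 displays per figure window, a chosen plot mode, data series dropped from a Simulation), with a hypothetical class and mode names:

```python
# Illustrative Scientific View configuration; not the actual XPL API.

PLOT_MODES = {"plot", "histogram", "stacked", "pie"}

class ScientificView:
    MAX_DISPLAYS = 4                       # limit stated in Section 3.3

    def __init__(self, dimensions=2):
        self.dimensions = dimensions       # 2D or 3D plotting
        self.displays = []

    def add_display(self, mode, series):
        if mode not in PLOT_MODES:
            raise ValueError(f"unknown plot mode: {mode}")
        if len(self.displays) >= self.MAX_DISPLAYS:
            raise RuntimeError("a figure window holds at most 4 displays")
        self.displays.append((mode, series))

view = ScientificView(dimensions=2)
view.add_display("plot", ["troops", "supplies"])   # dropped from a Simulation
```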

3.4 ASSOCIATION – COMM. CHANNEL BETWEEN OBJECTS

In order to create an environment in which Objects can interact with each other, the concept of association has been implemented at the entity level. This mechanism allows two Objects to synchronize and share data using a common protocol. In the environment, an association is represented graphically as a physical link (an arrow) between Objects. Object associations can be deleted when they are no longer needed by the User. Associations are shown in Figures 1 and 3.
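A minimal sketch of such a synchronizing association, with hypothetical class names and a toy notification protocol standing in for the common protocol mentioned above:

```python
# An association keeps two Objects synchronized by forwarding updates,
# and can be deleted when no longer needed.

class Association:
    def __init__(self, source, target):
        self.source, self.target = source, target
        source.subscribers.append(self)      # drawn as an arrow in the VE

    def notify(self, key, value):
        self.target.data[key] = value        # keep the target in sync

    def delete(self):
        self.source.subscribers.remove(self)

class SyncNode:
    def __init__(self):
        self.data = {}
        self.subscribers = []

    def update(self, key, value):
        self.data[key] = value
        for assoc in self.subscribers:
            assoc.notify(key, value)

sim, view = SyncNode(), SyncNode()
link = Association(sim, view)
sim.update("step", 17)
assert view.data["step"] == 17               # the view followed the Simulation
link.delete()                                # User removes the arrow
```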

3.5 CONTROL MENUS – MAIN MENU AND CIRCULAR MENUS

Two types of control menus (see Figure 5) have been designed:

• Main menu: always visible, it allows the User to connect the XPL Module to the simulator, create Objects (and modify variables), and request information on components of the environment; and

• Circular menu: associated with specific Objects and visible on demand, this type of menu allows choosing how data should be visualized and displayed.

4 HUMAN INTERACTIONS – BIMANUAL GESTURAL INTERFACE

To enable interaction between the User and the environment, a 3D bimanual gestural interface using Cybergloves™ has been developed [5]. It builds upon past contributions on gestural interfaces and bimanual interactions to create an efficient and intuitive gestural interface tailored to IMAGE needs. Based on real-world bimanual interactions, the interface uses the hands in an asymmetric style, with the left hand providing the mode of interaction and the right hand acting at a finer level of detail.

The User’s actions in the environment have been separated into 4 categories of gestures: (1) Selection and designation of Objects; (2) Generic manipulations, which group all interactions relevant to moving and positioning Objects; (3) Specific manipulations, which are interactions tuned for Objects such as the Geospatial View or the Simulation Tree; and (4) System control, which represents all actions that are related to menus and modifying the way the environment behaves.

There is no need for travelling interactions in IMAGE since the User is located at the center of the virtual environment and has access to all Objects without moving his body.
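A hedged sketch of this asymmetric dispatch: the left-hand posture picks one of the 4 gesture categories and the right hand supplies the fine-grained action. The posture names and the mapping are invented for illustration; only the category labels come from the paper.

```python
# Left hand selects the interaction mode; right hand acts within it.

GESTURE_CATEGORIES = {
    "point":  "selection",       # (1) designate an Object
    "fist":   "generic",         # (2) move / position Objects
    "spread": "specific",        # (3) Object-specific manipulations
    "flat":   "system",          # (4) menus and environment settings
}

def interpret(left_posture, right_motion):
    mode = GESTURE_CATEGORIES.get(left_posture)
    if mode is None:
        return None                          # unrecognized posture: ignore
    return (mode, right_motion)

print(interpret("fist", "drag"))             # -> ('generic', 'drag')
```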

Figure 3. (Left) Snapshot of Geospatial View - (Right) Scenario elements

Figure 4. Examples of (Left) 2D Scientific View – (Right) 3D Scientific View

Figure 5. (Left) Main Control Menu Hierarchy – Icon & associated functionality – (Right) Examples of Circular Menus associated with a Scientific View

5 CONCLUSION

This paper has presented the prototype of a Human-centric/built virtual immersive environment which aims to accelerate CS understanding when combined with proper REP tools.

REFERENCES

[1] M. Lizotte, D. Poussart, F. Bernier, M. Mokhtari, É. Boivin, and M. DuCharme (2008), “IMAGE: Simulation for Understanding Complex Situations and Increasing Future Force Agility,” Proc. of ASC.

[2] M. Mokhtari, É. Boivin, D. Laurendeau, and M. Girardin (2010), “Visual Tools for Dynamic Analysis of Complex Situations,” Proc. of VizWeek.

[3] F. Rioux, F. Bernier, and D. Laurendeau (2008), “Multichronia – A Generic Visual Interactive Simulation Exploration Framework,” Proc. of I/ITSEC.

[4] E. Ouellet (Pending), “Simulation Data Manipulation and Representation in a Virtual Environment,” M.Sc. Thesis, Laval University, Quebec (QC), Canada.

[5] J.C. Levesque, D. Laurendeau and M. Mokhtari (2011), “Bimanual Gestural Interface for Virtual Environments,” Proc. of VR.
