Int. J. Human-Computer Studies (2001) 55, 145-165. doi:10.1006/ijhc.2001.0474. Available online at http://www.idealibrary.com

A toolset supported approach for designing and testing virtual environment interaction techniques

JAMES S. WILLANS AND MICHAEL D. HARRISON

Department of Computer Science, Human-Computer Interaction Group, University of York, Heslington, York YO10 5DD, UK. email: {James.Willans,Michael.Harrison}@cs.york.ac.uk

Usability problems associated with virtual environments are a serious obstacle to their successful development. One source of these problems is that virtual environment toolkits provide only a small number of predefined interaction techniques that are expected to be used regardless of context, hence developers are not encouraged to consider interaction. In addition, there are no generally accepted development methodologies for virtual environments. Therefore, even when developers do consider interaction, it is likely to be in an ad hoc fashion driven by technology rather than requirements. If virtual environments are to be useful in a wider context, it is important to provide developers with methods (and tools to support the methods) by which interaction techniques can be systematically designed, tested and refined.

In this paper we present the Marigold toolset which supports such a development process. The process begins with a visual specification of the technique being designed. This is requirements centred because it abstracts from implementation issues. Using the toolset, this specification is refined to a prototype implementation so that the technique can be explored in the context of the other elements of the environment. In this way, the developer can verify the technique against requirements in both the specification and the prototype. Additionally, because the specification is readily understandable, users can be involved at both stages of the process.

© 2001 Academic Press

1. Introduction

Despite their great potential, highly interactive virtual environments remain poorly represented outside specialised laboratories (Derra, 1995). One reason is that the cost of buying the specialised equipment, such as headsets and datagloves, remains high (although this has been significantly reduced in recent years). However, analysis has also shown that the usability of virtual environments tends to be poor (Kaur, Maiden & Sutcliffe, 1996). This is particularly the case with interaction techniques. An interaction technique defines a consistent mapping between the user and the virtual environment technology, and how the environment will react as a user interacts with the input devices. An analogy can be drawn between virtual environment interaction techniques and those found on desktop interfaces, such as "point and click". Even virtual environments which contain moderately complex interaction techniques suffer from problems (Bowman & Hodges, 1995):

"More complex systems remain in research laboratories, because while functionality is impressive, their interface to that functionality and their user interaction metaphors, are inconsistent, imprecise, inefficient, and perhaps unusable."

1071-5819/01/080145+21 $35.00/0 © 2001 Academic Press


These problems are not simply solved by more realistic modelling of real-world techniques. The environment may not be analogous to a real-world situation. Indeed, one of the main advantages of virtual environments is that the user can perform tasks not physically possible in the real world, such as flying round the design of a proposed product (Thompson, Maxfield & Dew, 1999) or in the novel visualisation of complex data (Sastry, Boyd, Fowler & Sastry, 1998). Even when there is a real-world equivalent, usability reports have shown that techniques "closer to natural mapping often exhibit serious usability problems" (Bowman, 1999).

We believe that the diversity of virtual environments dictates that, in most cases, an interaction technique will only be effective if it has been designed (or selected) for the requirements of the environment and in conjunction with users. An interaction technique which is highly usable in one context is likely to be less usable in another. This view is supported by the development of new interaction techniques for specific environments (Mackinlay, Card & Robertson, 1990; Ware & Osborne, 1990; Mine, 1995; Hand, 1997). Developers must, then, consider the requirements of the individual environments and build new techniques, or reconsider previous techniques, in view of the requirements.

In practice, popular toolkits such as Superscape (Superscape Corporation, 1999) and dVise (duPont, 1995) do not encourage developers to consider interaction. They provide a small number of predefined interaction techniques (bound to physical devices) which are expected to be used regardless of the context. The focus within the toolkit community has instead been on the technological aspects of virtual environments, such as distributing the computational load inherent in such systems (for instance, the Metis toolkit (Turner, Li & Gobbetti, 1999)). Secondly, there are no defined development methodologies for virtual environments akin to those found in the other disciplines of software engineering; in particular, research has shown that developers "rarely test with users" (Kaur et al., 1996). Therefore, even when interaction is considered, it is in an ad hoc, unsystematic fashion driven by technology rather than requirements. Given this situation, it is hardly surprising that users experience interaction problems.

As noted by Stanney, "if humans cannot perform effectively within virtual environments, then further pursuit of this technology may be fruitless" (Stanney, 1995). If virtual environments are to be useful in a wider context, it is important to provide developers with methods (and tools to support the methods) by which interaction techniques can be systematically designed, evaluated and refined. In this paper we begin by examining a toolkit supported approach which has been successful for the interaction design of other classes of system. In Section 3 we discuss the issues associated with applying a similar approach to virtual environments. Taking these issues into account, Section 4 introduces the Marigold toolset which supports such an approach for virtual environments. In Section 5 we illustrate the usage of the new approach. In Section 6 we discuss related work. Finally, in Section 7 we summarise our conclusions.

2. Interaction design

Interaction problems are not unique to virtual environments: almost every form of non-trivial interactive system suffers to some extent. One process which has proved useful in the design of some interactive systems (e.g. direct manipulation interfaces) is that supported by Statemate (Harel et al., 1990). Statemate was developed to aid in the design and prototyping of the behaviour of dynamic systems (systems whose state changes autonomously over time). The tool has been used in industry, especially for the design of safety-critical systems. Statemate utilises statecharts (Harel, 1987) for the specification of the behaviour. This formalism describes the behavioural ordering of the system (what can happen and in what sequence) but does not formally describe the meaning of the states. Visual renderings can then be added to the specification so that the behaviour can be explored interactively as a prototype. In the context of interactive systems, the statechart specification can be considered an interaction specification. The advantages of this approach are numerous.
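The flavour of a statechart-style interaction specification can be sketched in a few lines of code. The following Python toy is illustrative only (it is not Statemate output, and the light-switch example is ours): it captures behavioural ordering, which events may fire in which states, while saying nothing about what the states mean computationally.

```python
# A minimal statechart-style sketch: states plus event-labelled transitions
# describe ordering only; nothing here defines the meaning of a state.

class Statechart:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions: {(state, event): next_state}
        self.transitions = transitions

    def fire(self, event):
        # An event is ignored unless a transition is enabled in this state.
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

# Ordering for a push-button light: "press" toggles between off and on.
light = Statechart("off", {("off", "press"): "on", ("on", "press"): "off"})
```

A renderer attached to such a specification (as Statemate does) would map each state to a visual appearance, turning the ordering description into an explorable prototype.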

(1) Because the specification abstracts from implementation issues, it is requirements-centred rather than implementation-centred. As a consequence, the design is driven by what the user requires rather than by specific implementation abstractions. This is clearly advantageous in terms of usability.

(2) The visual nature of the specification makes it more acceptable to users as well as developers, and the consequences of interactions can be explored by users within the specification. This presents the opportunity for users to be involved in the design process (at an early stage) and to influence the resulting system.

(3) The presence of desirable usability properties can be proved within the specification. This is difficult to achieve in the implementation code because the behaviour is often embedded within the other elements that compose the system.

(4) An ability to refine specifications means that the behaviour can also be explored within the context of the other elements of the environment (i.e. the visual renderings).

(5) The specification acts as accurate documentation of the system it is specifying. Such documentation is essential to any non-trivial software engineering project (Sommerville, 1996).

The design process supported by Statemate is visualised in Figure 1. The requirements are used to design an interaction model (specification) which can then be verified, either informally or formally, against the requirements. The results of this process are used to refine the interaction model so that it more accurately reflects the requirements. From the interaction model a prototype may be generated which can be used to test the interaction design with users and ensure that it meets their requirements. The model is also refined according to the results of this process.

Clearly, the ability to apply such an approach to the design of virtual environment interaction is desirable: it would provide developers with a means of designing, testing and refining interaction techniques in a manner which involves the user. However, there are additional issues in virtual environments which make the application of the Statemate tool unsuitable.

3. Virtual environments issues

There are three major issues which pertain to the application of the Statemate tool as an aid to designing virtual environment interaction techniques.

The "rst issue relates to modelling the behaviour of interaction techniques; for thisa number of formalisms have been investigated. In (Smith, Duke, Marsh, Harrison

FIGURE 1. The design process supported by Statemate.

FIGURE 2. A purely state description of an interaction technique.

148 J. S. WILLANS AND M. D. HARRISON

& Wright, 1998; Smith & Duke, 1999a) purely state-based approaches were dismissedbecause they fail to capture the interesting and distinguishing features of interactiontechniques. For instance, Figure 2 shows a state machine which can describe manyinteraction techniques. It is unclear from this description how the state is changed fromstationary to moving or whether, for instance, the process that caused the initial statechange to moving is also the process which changes the state back to stationary.Moreover, how these states relate to the physical state of the interaction devices isambiguous. A more precise description requires additional abstractions to enable thedeveloper to describe these important details.

Smith et al. concluded that a more suitable formalism would be a hybrid of discrete state and continuous data-flow behaviour (Smith & Duke, 1999a). The additional continuous constructs provide a means of describing the processes that occur during a discrete state. A similar conclusion has been reached independently by Jacob (1995). This insight means that because statecharts do not model continuous data-flow behaviour, they are not ideal for modelling the classes of interaction techniques required for virtual environments. Rather, a new formalism must be chosen which better captures the hybrid nature of virtual environment behaviour.

FIGURE 3. The relation between virtual environment input devices and interaction techniques.
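The hybrid view can be illustrated with a small sketch. The following Python fragment is hypothetical (the names, constants and update rule are ours, not Smith et al.'s notation): a discrete mode enables or disables a continuous flow that transforms data on every tick, which a purely state-based description cannot express.

```python
# A hybrid sketch: the discrete part records the mode (stationary/moving);
# the continuous part is a flow that is only active while its enabling
# state holds, integrating position over each time step.

def make_hybrid(flows):
    state = {"mode": "stationary", "position": 0.0}

    def step(event=None, dt=0.1, velocity=1.0):
        # Discrete behaviour: events move the system between modes.
        if event == "start":
            state["mode"] = "moving"
        elif event == "stop":
            state["mode"] = "stationary"
        # Continuous behaviour: run the flow enabled by the current mode.
        flow = flows.get(state["mode"])
        if flow:
            state["position"] = flow(state["position"], dt, velocity)
        return state["mode"], state["position"]

    return step

# Only the "moving" mode has an associated continuous flow.
step = make_hybrid({"moving": lambda p, dt, v: p + v * dt})
step("start")             # enter moving; position starts to integrate
step()                    # still moving; position keeps changing
mode, pos = step("stop")  # discrete change halts the flow
```

A pure state machine would record only the mode transitions; the flow attached to the moving state is precisely the continuous construct the hybrid formalisms add.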

The second issue relates to how the specification is refined to a prototype. Within Statemate the statechart specification maps onto a predefined set of visual renderings. This is adequate for testing the behaviour of those systems which conform to a common interface (i.e. switches, sliders and dials). However, because of the diversity of virtual environments, we cannot reuse visual renderings in this way. The renderings must be specific to each individual environment.

The third issue relates to how the virtual environment prototype is tested. There are two aspects of the interaction technique that need to be verified. Firstly, its capability: what can and cannot be done using the interaction technique. Secondly, its usability: how easily the capability of the interaction technique can be utilised by the user. This distinction closely corresponds to that discussed by Buxton (1986). A convenient way of viewing the relation between input devices and interaction techniques is by measures and triggers (Duce, van Liere & ten Hagen, 1990). A measure is the type and value to be passed between the physical device and the interaction technique, e.g. the position of a three-dimensional tracking device. A trigger is a discrete event to be passed between the physical device and the interaction technique, e.g. a button click. An interaction technique's capability can be determined from its trigger and measure interface without any knowledge of the input device(s). However, the usability of an interaction technique is determined by the physical input device(s). For example, a 3D mouse and a Polhemus magnetic tracker both have six degrees of freedom and can be substituted within a technique requiring such a measure and trigger interface. Substituting the device in this way does not alter the capability of the technique, but does alter the usability. Using the Statemate tool, the user interacts with the specification (manipulates the visual renderings) using the mouse. The emphasis within this form of prototype is to check the specification interactively for undesirable consequences such as state deadlock. Hence, Statemate supports the testing of the capability of the specification but not the usability. In order to test the usability of the specification, the device interface to the user must match that of the intended context. Therefore, the refinement to a prototype must allow easy experimentation with different device-technique configurations so that both the capability and the usability of the technique can be evaluated. This distinction is visualised in Figure 3.
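The capability/usability distinction can be made concrete with a sketch. The following Python fragment is illustrative only (the class and measure names are invented, not Marigold's interface): any device whose stub supplies the required measures and triggers satisfies the technique's capability interface, so devices can be swapped without altering capability.

```python
# Sketch of a measures-and-triggers interface. A technique declares the
# measure/trigger names it needs; compatibility is checked against what a
# device stub provides, independently of the physical device itself.

class DeviceStub:
    """Maps a device's raw output onto a common layer of measures and triggers."""
    def __init__(self, measures, triggers):
        self.measures = measures   # e.g. {"position6d": (x, y, z, rx, ry, rz)}
        self.triggers = triggers   # e.g. {"select"}: discrete events offered

def compatible(device, required_measures, required_triggers):
    # Capability depends only on the interface, not on the physical device.
    return (required_measures <= set(device.measures)
            and required_triggers <= set(device.triggers))

mouse3d = DeviceStub({"position6d": (0, 0, 0, 0, 0, 0)}, {"select"})
tracker = DeviceStub({"position6d": (1, 2, 3, 0, 0, 0)}, {"select"})

# Both satisfy a technique needing one 6-DOF measure and one trigger, so
# substituting one for the other changes usability but not capability.
needs = ({"position6d"}, {"select"})
```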

FIGURE 4. The process supported by the Marigold toolset.


4. The Marigold toolset

In this section we introduce the Marigold toolset which, taking into account the issues raised in the previous section, supports the design, testing and refinement of virtual environment interaction techniques in a style similar to Statemate. Marigold consists of two tools. The hybrid specification builder (HSB) provides a means of specifying interaction techniques independently of the virtual environment. From the HSB a stub of the interaction technique is generated. This is a context-independent description of the technique. The prototype builder (PB) then provides a means of integrating the interaction technique stub into the other elements of the virtual environment (e.g. the devices and visual renderings). From this a virtual environment implementation is generated. The current version of the tool generates C code using the Maverik virtual environment library (Hubbold, Dongbo & Gibson, 1996).† Both tools use visual specifications, thus making comprehension more effective. An overview of this process is illustrated in Figure 4.

4.1. THE SPECIFICATION FORMALISM

A commitment to a formalism was an initial design decision in the development of the Marigold toolset. We decided to use the Flownet hybrid specification formalism presented in Smith and Duke (1999b) and Smith, Duke and Massink (1999), which was developed for the specification of virtual environment interaction techniques. The notation utilises Petri-nets (Petri, 1962) to describe the discrete behaviour (standard event condition nets with the added construct of an inhibitor arc) and elements of a systems dynamics modelling notation (Forrester, 1961) to describe the continuous data-flow behaviour. There are a number of reasons we chose Flownets. Firstly, like statecharts, the formalism is not concerned with the low-level (implementation) detail of the states, only behavioural ordering. Although the lower-level detail can add important characteristics to a technique (a full characterisation of any artifact is its complete implementation), it reduces complexity in order to address such concerns separately as a refinement rather than at the same level of abstraction. This reduction in complexity allows clearer thinking about the requirements and how they can be achieved within the design. Secondly, Flownets have a strong notion of the interface between the system and the user through the data-flow constructs (although they do not describe the low-level detail of the data). Finally, the specifications are compact, readable and have a relatively small number of constructs.

†Although the target implementation is currently Maverik, the Marigold process is toolkit independent and, in principle, could be used with other toolkits.

Another possible choice of formalism, which has been considered as a method for describing virtual environment interaction techniques in Massink, Duke and Smith (1999), is HyNet (Wieting, 1996), also based around the Petri-net notation. However, for our purposes this formalism suffers from two main problems. Firstly, even for simple interaction techniques, the specification becomes large and difficult to read. This is because HyNet tries to capture an exhaustive description of the hybrid system at one level of abstraction: the state behaviour, the data structure and the transformation of the data (a great deal of information is also captured in the text, which adds to its complexity). Secondly, it is not obvious from the specification what the relation between the system and the user is. This makes comprehension and analysis of the human-computer dialogue difficult. These problems are clearly illustrated by the example in Massink et al. (1999).

Jacob et al. have developed another visual hybrid formalism which describes virtual environment interaction (Jacob, 1996; Morrison & Jacob, 1998). Again, because this was designed as a control formalism for a virtual environment user interface management system (UIMS), we consider it, as part of a design process, to be a mix of abstractions. Although this formalism results in more compact specifications than HyNet, the use of two separated notations for the discrete and continuous parts of the specification makes comprehension difficult.

Appendix A contains a list and description of the components used within Flownets (taken from Smith and Duke (1999b)). We will explain the formalism by means of an example. The mouse-based flying interaction technique enables flying through a virtual environment using the desktop mouse. Variations of it are used in many desktop virtual environment packages (e.g. the virtual production planner). One variation works as follows. Flying is initiated by pressing the middle mouse button and moving the mouse away from the clicked position. The user's speed and direction are directly proportional to the angle and distance between the current pointer position and the location at which the middle mouse button was pressed. Flying is deactivated by a second press of the middle mouse button.

The technique (shown in Figure 5) has one input, mouse, and one output, position. When the middle mouse button is pressed, the Flownet middle mouse button sensor is activated and the start transition (1) is fired. The start transition enables the continuous flow which updates origin with the current mouse position (2). A token is then placed in the idle state. When the out origin sensor detects that the mouse has moved away from the origin position, transition (3) is triggered and the token is moved from the idle to the flying state. A token in the flying state enables the continuous flow which calculates the translation on position (4) using the current mouse position and the origin. This is then continuously supplied to the output plug. Whenever the flying state is enabled, the inhibitor implies that the start transition cannot be refired. When the in origin sensor detects that the mouse has moved back into the origin position, the token in the flying state is returned to the idle state, closing the flow control and halting the transformation of position. Regardless of whether the technique is in the idle or flying state, the technique can be exited by the middle mouse button sensor becoming true and firing one of the exit transitions (5 or 6).

FIGURE 5. The mouse-based flying interaction technique.
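The behaviour walked through above can be approximated in code. The following Python sketch is our illustrative reading of the Flownet, not code generated by Marigold; the threshold and gain constants, and the mapping of mouse motion onto two of the three position axes, are invented.

```python
# Sketch of the mouse-based flying Flownet: a middle click stores an origin
# and enters "idle"; leaving the origin starts flying; returning stops it;
# a second middle click exits the technique.

class MouseFlying:
    def __init__(self, threshold=2.0, gain=0.1):
        self.state = "off"
        self.origin = None
        self.position = [0.0, 0.0, 0.0]
        self.threshold = threshold      # radius of the "origin" region
        self.gain = gain                # speed per unit of mouse offset

    def update(self, mouse_xy, middle_pressed):
        if middle_pressed:                               # start/exit transitions
            if self.state == "off":
                self.state, self.origin = "idle", mouse_xy
            else:
                self.state, self.origin = "off", None
            return self.state
        if self.state == "idle" and self._dist(mouse_xy) > self.threshold:
            self.state = "flying"                        # out-origin sensor
        elif self.state == "flying" and self._dist(mouse_xy) <= self.threshold:
            self.state = "idle"                          # in-origin sensor
        if self.state == "flying":                       # continuous flow
            dx = mouse_xy[0] - self.origin[0]
            dy = mouse_xy[1] - self.origin[1]
            self.position[0] += self.gain * dx
            self.position[2] += self.gain * dy           # mouse y as depth
        return self.state

    def _dist(self, xy):
        return ((xy[0] - self.origin[0]) ** 2
                + (xy[1] - self.origin[1]) ** 2) ** 0.5

fly = MouseFlying()
fly.update((0, 0), True)     # middle click: idle, origin stored at (0, 0)
fly.update((10, 0), False)   # moved out of origin: flying, position translates
fly.update((0, 0), False)    # back at origin: idle, flow halted
fly.update((0, 0), True)     # second click: technique exited
```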

4.2. BUILDING THE SPECIFICATION USING MARIGOLD HSB

The hybrid specification builder (HSB) provides a means of visually specifying interaction techniques using the Flownet specification formalism discussed in the previous section. The resulting specification for the mouse-based flying technique is illustrated in Figure 6. The toolbar at the top of the diagram contains an option for each of the node types (e.g. state, transition and transformer) and each of the connection types (e.g. continuous and discrete). The HSB enforces the static semantics of the specification and only allows legal connections between nodes. The tool also tries to maintain clarity of specification by automatically formatting the visual connections between components.

There are two stages to the refinement of an interaction technique specification towards an implementation prototype. The first stage takes place in the HSB. This involves the addition of a small amount of code to some of the nodes within the specification. There are three types of code that can be added. We will describe these in the context of the mouse-based flying example.

FIGURE 6. Mouse-based flying Flownet specification in Marigold HSB.


(1) Variable code: this is placed in the plugs of the specification (plugs are inputs/outputs to the environment outside the technique). It describes the kind of information that flows in and out of the plugs and, hence, around the specification. Illustrated in Figure 7(a) is the code added to the mouse plug. An integer variable represents the state of the mouse buttons and a vector represents the mouse position. The variable code is also used to define data which reside in the stores.

(2) Conditional code: this is placed in some transitions and all sensors. It describes the threshold state of the data for firing the component. Illustrated in Figure 7(b) is the code added to the middle m/but sensor. As can be seen from the figure, the HSB informs the developer which data flow in and out of the node (i.e. which data they are able to access). The code specifies that when the middle mouse button is pressed, the sensor should fire.

(3) Process code: this is placed in all transformers and denotes how the information flowing into the transformer is transformed when enabled. Illustrated in Figure 7(c) is the code added to the position transformer. This describes how position should be transformed using the current mouse position and the origin position.
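Since the paper does not reproduce Marigold's node-code syntax, the three kinds of code can be suggested with hypothetical Python equivalents for the mouse-based flying example (the names, the button encoding and the gain are all invented for illustration):

```python
# (1) Variable code on the "mouse" plug: declares what flows through it,
# here an integer button state and a 2D position vector.
mouse_plug = {"buttons": 0, "position": (0.0, 0.0)}

# (2) Conditional code on the "middle m/but" sensor: the threshold state of
# the data at which the sensor fires.
MIDDLE = 2  # assumed encoding of the middle button
def middle_button_sensor(mouse):
    return mouse["buttons"] == MIDDLE

# (3) Process code on the "position" transformer: how position is transformed
# from the current mouse position and the stored origin while flying.
def position_transformer(position, mouse, origin, gain=0.1):
    dx = mouse["position"][0] - origin[0]
    dy = mouse["position"][1] - origin[1]
    return (position[0] + gain * dx, position[1] + gain * dy)
```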

Once the code has been added, it is necessary to generate a stub of the interaction technique. The interaction technique specification, along with all the information added to the nodes, can also be saved to a file and loaded back into the Marigold HSB for later refinement.

The second stage of the specification-to-prototype refinement involves the integration of the interaction technique specification into an environment (e.g. input devices, output devices and world objects). This is done within the prototype builder (PB) and will be explained in the next section.

FIGURE 7. (a) Adding variables to the mouse input plug. (b) Adding conditional code to the middle mouse button sensor. (c) Adding process code to the position transformer.

4.3. CONSTRUCTING A PROTOTYPE USING MARIGOLD PB

The hybrid specification is an environment-independent description of the interaction technique. By this, we mean that it does not make commitments to the inputs and outputs of the technique. In order to explore the technique in an implementation prototype context, it is necessary to "plug" it into an environment. This stage of the refinement is supported by the Marigold prototype builder (PB). We have found it convenient to model a virtual environment as five concepts: interaction techniques, interaction devices, viewpoints, world objects and cursor objects. We will briefly define these concepts in the context of the PB:

Definition 1. Interaction techniques define a consistent mapping of the interaction devices onto the other components of the virtual environment.

Definition 2. Interaction devices are physical devices which act as an input to interaction techniques.

Definition 3. Viewpoints visually render a subset of world objects and cursor objects to the user. They have an initial state (position) which may be altered by an interaction technique during interaction.

Definition 4. World objects are objects which exist within the virtual environment. They have an initial state (position) which may be altered by an interaction technique during interaction. They may also represent the user, or part of the user, visually within the environment (often referred to as an avatar).

FIGURE 8. Complete implementation specification for a virtual environment using the mouse-based flying interaction technique.


Definition 5. Cursor objects are objects which do not exist in the environment, but are rendered within the environment viewpoint(s) to give an indication to the user of the interaction device's state (i.e. positional feedback).
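The five concepts can be summarised as a small data model. The following Python sketch is a hypothetical rendering of the definitions above, not Marigold's internal representation; the field names and file names are invented.

```python
# A toy data model mirroring the PB's five concepts (Definitions 1-5).
from dataclasses import dataclass, field

@dataclass
class InteractionDevice:        # Definition 2: physical input device
    stub_file: str              # location of its device stub

@dataclass
class Viewpoint:                # Definition 3: renders objects to the user
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class WorldObject:              # Definition 4: exists in the environment
    model_file: str
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class CursorObject:             # Definition 5: feedback only, not "in" the world
    device: InteractionDevice

@dataclass
class InteractionTechnique:     # Definition 1: maps devices onto the rest
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

# The mouse-based flying prototype of Figure 8, in this toy model: a mouse
# drives the technique, which drives a viewpoint; the desk is unconnected.
mbf = InteractionTechnique(inputs=[InteractionDevice("mouse.stub")],
                           outputs=[Viewpoint()])
desk = WorldObject("office_desk.model")
```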

Marigold PB provides a visual method of connecting these elements as inputs and outputs to one or more interaction techniques.† Illustrated in Figure 8 is the mouse-based flying interaction technique within the PB, connected into an environment. The toolbar at the top of the diagram contains an option for each of the elements described above and for the interaction technique itself. In order to construct a prototype it is necessary to select these options and click on the workspace to insert an object instance. As can be seen from Figure 8, each node has a set of variables, and the variables for the mouse-based flying interaction technique (mbf) are those that were placed in the plugs within the HSB. What cannot be seen from this black-and-white figure is that each variable has a different background colour denoting whether it is an input, an output or both. The relations between the environment elements are defined by joining these variables. Essentially, joining two variables defines a data flow from one to the other. The tool automatically verifies that the variables being joined are of the correct type and are semantically correct.

†More than one interaction technique may be concurrently active.

FIGURE 9. Setting the initial state of the office desk world object.


Within the mouse-based flying specification, we have linked a desktop mouse, as an input to the technique, and a viewpoint, as an output from the technique. Additionally, we have inserted an office desk world object so that the movement of the viewpoint can be perceived. However, because the desk remains static during interaction, it is not linked to any other element of the environment. When a world object is inserted it is necessary to specify the file location of the object's image.† In the case of an input device, the location of a device stub is required. These device stubs are short textual files which can be constructed easily by the developer. They relate the output of each device to a common data layer of measures and triggers (in a manner similar to that presented in Faisstnauer, Schmalstieg and Szalavári (1997)), allowing devices to be easily substituted within interaction techniques.
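What a device stub expresses can be sketched as follows. The structure and names here are invented for illustration (the paper does not show the stub file format); the point is that the technique sees only the common measures-and-triggers layer, never the raw device record, so a tracker stub exporting the same interface could replace this one.

```python
# Sketch of a device stub: it relates raw device output to the common
# data layer of measures (typed values) and triggers (discrete events).

def read_mouse_raw():
    # Stand-in for polling the physical device.
    return {"x": 12, "y": 7, "buttons": 0b010}

def mouse_stub():
    raw = read_mouse_raw()
    measures = {"position2d": (raw["x"], raw["y"])}   # continuous measure
    triggers = set()                                  # discrete events this step
    if raw["buttons"] & 0b010:                        # assumed middle-button bit
        triggers.add("middle_button")
    return measures, triggers

measures, triggers = mouse_stub()
```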

It is necessary to define the initial state of the viewpoint and world object elements. Illustrated in Figure 9 is the dialogue box for setting the state of the office desk world object. Once the specification is complete, the code for the environment can be automatically generated and compiled.

5. Discussion

In the previous section we presented the Marigold toolset, which supports a Statemate-like process for virtual environments. The advantages of the Statemate approach were discussed in Section 2; in this section we will briefly examine some of these benefits in the context of Marigold. We will exemplify them using the two-handed flying interaction technique.

†This would be generated from a 3D modeller such as 3DStudio.

Two-handed flying, detailed in Mine, Brooks Jr and Sequin (1997), enables flying through a virtual environment in a direction determined by the vector between the user's two hands and at a speed relative to their separation. Movement is halted when the user's hands are brought together. The Flownet specification of this interaction technique is shown in Figure 10 inside the Marigold HSB. There are three input plugs: enable, disable and hand position (pictorially duplicated for clarity of specification), and one output plug: position. The technique is enabled by the firing of the start transition, upon which a token is placed in the not flying state. When the distance between the user's hand positions exceeds the minimum distance, the d>min sensor triggers true and the associated transition moves the token from the not flying state to the flying state. When the distance is less than or equal to the minimum distance, the d≤min sensor triggers true and the associated transition moves the token from the flying state back to the not flying state. Regardless of whether the technique is in the flying or not flying state, it can be exited by the firing of one of the exit transitions. When there is a token in the flying state, the flow control is enabled and the position store is transformed using information from the speed and direction stores (which, because of the absence of flow controls, are continuously transformed regardless of the state of the Petri net). The transformation of position is continuously output to the plug.
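The hybrid character of this specification, a discrete Petri-net part gating a continuous flow, can be made concrete in code. The sketch below is illustrative only (it is not Marigold's generated code, and the minimum distance and unit gain are assumed values): the state transitions follow the two sensors, and the position is transformed only while a token is in the flying state.

```python
import math

D_MIN = 0.1  # assumed minimum hand separation (metres)

class TwoHandedFlying:
    """Illustrative sketch of the Flownet semantics of two-handed flying."""

    def __init__(self, position=(0.0, 0.0, 0.0)):
        self.state = "not_flying"       # token initially in "not flying"
        self.position = list(position)

    def step(self, left, right, dt):
        # vector between the hands determines direction; its length d
        # drives the two sensors and the speed
        dx = [r - l for l, r in zip(left, right)]
        d = math.sqrt(sum(c * c for c in dx))

        # discrete part: sensors d>min and d<=min move the token
        if self.state == "not_flying" and d > D_MIN:
            self.state = "flying"
        elif self.state == "flying" and d <= D_MIN:
            self.state = "not_flying"

        # continuous part, enabled by the flow control only while flying:
        # speed is proportional to hand separation (gain of 1 assumed),
        # direction is the unit vector between the hands
        if self.state == "flying":
            speed = d
            for i in range(3):
                self.position[i] += speed * (dx[i] / d) * dt

        return self.state, tuple(self.position)
```

With the hands together the token stays in not flying and the position is untouched; separating them past `D_MIN` moves the token and starts transforming the position each step.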

FIGURE 10. Flownet specification of the two-handed flying interaction technique in Marigold HSB.

5.1. SPECIFICATION

As mentioned in Section 2, one of the virtues of starting a design with a specification is that it can abstract from implementation issues. This is equally true of the Flownet specification used by Marigold. The specification at this level is not concerned with precisely what data flow around the specification or how these data are manipulated by the states. Therefore, the technique can be considered and designed in a requirements-centred, or user-centred, manner. This contrasts with the alternative of designing interaction techniques directly at the code level, where implementation concerns must also be considered. With respect to the two-handed flying example, details such as the calculations that transform position are left to the refinement, as is the exact meaning of the start and exit transitions. In addition, when working at the level of program code it is often difficult to separate an interaction technique from other elements of the environment (input and output devices, for instance), which makes independent inspection of the interaction technique problematic.

The Flownet description is essentially a description of the dialogue between the user and the interactive system. Within human-computer interaction such descriptions are often used to reason about the usability of the interface. For instance, one general principle is that an interactive system should accurately render its state to the user so that they do not suffer mode confusion (Degani, 1996). Mode confusion arises where the user thinks the system is in one state when it is actually in another, and can result in the user misunderstanding how the system will interpret their input. To illustrate how this principle can be checked in the Flownet specification of a virtual environment interaction technique, consider once again two-handed flying. We can see from the specification in Figure 10 that the technique renders some change into the environment when the flying state is enabled; however, there is no such rendering when the technique is in the not flying state. This could easily lead to mode confusion, because the user is not able to perceive from the state of the environment whether or not the interaction technique is active while they are stationary. To overcome this problem, a redesigned version of the interaction technique is shown in Figure 12. This incorporates additional detail so that when the technique becomes active or inactive a notification is passed to the outside environment (via the output to the thf plug). The rewiring of the prototype is shown in Figure 13: the notification from the technique is mapped onto a visibility variable of a world object, which displays "active" when this variable is true. The position of the active world object is also updated using the position output so that it always remains in the viewpoint. Hence, the active world object is initially invisible, becomes visible when the technique is enabled, and invisible again when it is disabled.
A comparison of the two versions of the technique is shown in Figure 11, and a screenshot of the revised technique in use is shown in Figure 14.
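The notification wiring of the redesigned technique can be sketched as follows. The class and variable names here are illustrative stand-ins, not Marigold's generated code: a listener plays the role of the thf output plug, and a visibility flag plays the role of the "active" world object's variable.

```python
class ActiveIndicator:
    """Stand-in for the 'active' world object in the prototype."""
    def __init__(self):
        self.visible = False            # initially set to invisible

    def on_notify(self, flying):
        self.visible = flying           # visible only while flying

class NotifyingTechnique:
    """Discrete part of the technique with the extra notification output."""
    def __init__(self, listener):
        self.state = "not_flying"
        self.listener = listener

    def set_state(self, new_state):
        if new_state != self.state:
            self.state = new_state
            # notification output (thf-style): true while flying
            self.listener(new_state == "flying")

indicator = ActiveIndicator()
technique = NotifyingTechnique(indicator.on_notify)
technique.set_state("flying")       # indicator becomes visible
technique.set_state("not_flying")   # and invisible again
```

Because the indicator is always rendered in the viewpoint while visible, the user can tell at a glance whether the technique is active even when stationary, which is exactly the mode information the original design withheld.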

FIGURE 11. A comparison of the feedback given to the user during the execution of the two-handed flying technique.

FIGURE 12. The amended Flownet specification, which takes into account potential mode confusion.

5.2. PROTOTYPE

As noted by Myers, "the only reliable way to generate quality interfaces is to test prototypes with users and modify the design based on their comments" (Myers, 1989). In Section 3 we discussed the requirements of a virtual environment prototype which supports the verification of capability and usability. In this section we discuss exactly how Marigold, and the prototype generated from Marigold PB, support such verification.

Capability issues relate to whether the technique is physically able to support navigation in the manner required. For instance, in Figure 13 the two-handed flying technique stub was inserted into an environment which allows the exploration of a car exterior. The generated prototype, illustrated in Figure 14, allows the developer to verify whether the capability of the technique supports this requirement. One surprising limitation of the capability of this technique, which was not apparent until we tested the prototype, is that it does not control the pitch, roll and yaw axes. Consequently, the user always faces the same direction and cannot turn around.

FIGURE 13. The amended two-handed flying prototype specification.

FIGURE 14. The amended two-handed flying prototype as generated from Marigold PB.

The device-independent nature of an interaction technique specified within Marigold HSB makes it possible to test its capability using any device that matches the technique's requirements in terms of triggers and measures (discussed in Section 3). This is useful because the target environment for a virtual environment is often different from the host environment in which the system is developed. Devices that are available in the development environment (in our case a desktop workstation) can therefore be used to verify the capabilities of the technique. Figure 13 shows a prototype specification using the two-handed flying technique. The original description of this technique in Mine et al. (1997) specifies that two 3D trackers (such as the Polhemus magnetic trackers) should be used to measure the user's hand positions. As seen in Figure 13, the keyboard was mapped onto the left hand and the mouse onto the right hand. These pseudo-devices were also mapped onto cursors so that their relative distance can be perceived in the viewpoint (this would not be necessary with Polhemus trackers because the user has a proprioceptive sense of the relative position of their hands). Additionally, a keyboard mapping was made to enable and disable the technique. The position output from the technique was mapped onto a viewpoint, and a car world object was placed within the environment. In this way, the capability of a technique can be tested in the development environment.
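This kind of device substitution can be sketched in code. The stub and measure names below are hypothetical illustrations of the idea, not Marigold's format: both stubs expose the same hand-position measure, so the technique itself is untouched when a desktop pseudo-device replaces a 3D tracker.

```python
def tracker_stub(read_tracker):
    """Stub for a 3D tracker that already reports a 3D position."""
    return {"hand_position": read_tracker}

def mouse_stub(read_mouse):
    """Stub lifting a 2D mouse position into the 3D measure the
    technique expects (z fixed at 0 for a desktop pseudo-device)."""
    def hand_position():
        x, y = read_mouse()
        return (float(x), float(y), 0.0)
    return {"hand_position": hand_position}

def hand_separation(left, right):
    """All the technique needs from either device: a distance between
    two hand-position measures."""
    l = left["hand_position"]()
    r = right["hand_position"]()
    return sum((a - b) ** 2 for a, b in zip(l, r)) ** 0.5

# Development-time wiring: one hand driven by a fixed (keyboard-style)
# position, the other by the mouse, as in the Figure 13 configuration.
left = tracker_stub(lambda: (0.0, 0.0, 0.0))
right = mouse_stub(lambda: (3, 4))
```

Swapping in real Polhemus stubs later changes only the two constructor calls; `hand_separation`, and hence the technique, is unchanged.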

In Section 3 we discussed the requirements a prototype must meet if the usability of the technique is to be determined; we concluded that it is necessary to test the technique with the intended input devices. Marigold PB supports easy reconfiguration of the relation between devices and techniques, simply by rewiring real device stubs into the interaction technique and regenerating the prototype code. With reference to the two-handed flying technique, a usability detail that would be verified is whether the mapping from hand distance to speed is such that control can be maintained as the user navigates around the car. Such ease of reconfiguration also provides a useful method for exploring the usability of alternative device-technique configurations.

6. Related work

In Bowman (1999) a method is presented for designing interaction techniques. The main motivation behind this work is to provide a method for selecting interaction techniques for a specific task, and a framework is presented which supports this process. The framework results in a high-level description of the components constituting the required interaction techniques. The work presented in this paper complements this process: it is able to refine these high-level descriptions of the required techniques to a specification where usability factors such as moding can be verified.

An approach to designing complete virtual environments is introduced in Kim, Kang, Kim and Lee (1998), which contains a component for describing the dynamic behaviour of the system using Statecharts. As argued earlier in this paper, interesting features of virtual environment interaction techniques are captured in their hybrid behaviour, which is not necessarily captured in Statecharts. Consequently, the dynamic specification can be abstract, awkward and fail to show important details of the technique. However, the focus of this work is non-user-driven virtual environments for production-line prototyping. As a consequence, the behaviour of these systems is predefined and not dynamic in the manner of the interaction techniques discussed in this paper.

In Section 4.1 we discussed the hybrid notation developed by Jacob et al. and presented in Jacob (1996) and Morrison and Jacob (1998) for use in a UIMS. Similarly, the work we present here links a hybrid specification to an implementation, but in a different way. Our starting point is a high-level specification which is then refined to a prototype. Jacob's work takes an existing implementation as its starting point; this is then linked to the higher-level notation so that changes made in the specification are propagated to the implementation (in the traditional UIMS manner).

Another approach which provides a method for building interaction techniques visually is presented in Steed (1996). The motivation behind this work is to provide the ability to specify the behaviour of the virtual environment while immersed in it, giving an easy transition between construction and testing. The visual notation provides a very high-level set of components which are plugged together to specify the interaction. It includes logical/boolean gates such as not and and, and is very similar to the approaches taken by early visual programming tools for the production of desktop software. Clearly, the goal of this work is to show that interaction can be defined immersively. However, the high-level, completely visual language severely limits what can be achieved. Additionally, the visual specification is not particularly useful for analysis (moding is not shown, for instance).

7. Conclusions

In this paper, we have motivated the need for a systematic method of designing, testing and refining virtual environment interaction techniques. We have discussed why the process supported by the Statemate tool is desirable, but examined issues which make the direct application of this tool unsuitable. The Marigold toolset has been introduced, which supports a similar process to Statemate but with additional consideration of issues which pertain to modelling and verifying virtual environment interaction techniques. We have discussed, and demonstrated with an example, the usefulness of the presented approach. Currently we are designing extensions to the tool to support selection- and manipulation-based interaction techniques. Additionally, we are exploring the design benefits of the prototype builder (Willans & Harrison, 2000a), how well the Marigold process can be applied to defining the behaviour of world objects (Willans, Harrison & Smith, 2000) and the automated formal verification of the hybrid specifications (Willans & Harrison, 2000b).

We are grateful to Jon Cook of the Advanced Interface Group at the University of Manchester for his help with the details of Maverik. We are also grateful to Shamus Smith and David Duke for their comments. Finally, we would like to thank the reviewers who provided useful feedback on earlier versions of this paper.

References

BOWMAN, D. A. (1999). Interaction techniques for common tasks in immersive virtual environments: design, evaluation and application. Ph.D. Thesis, Georgia Institute of Technology, USA.

BOWMAN, D. A. & HODGES, L. F. (1995). User interface constraints for immersive virtual environment applications. Technical Report TR95-26, Graphics, Visualisation and Usability Center, Georgia Institute of Technology.

BUXTON, W. (1986). There's more to interaction than meets the eye: some issues in manual input. In D. A. NORMAN & S. W. DRAPER, Eds. User Centered System Design, Chapter 15, pp. 319-337. London: Lawrence Erlbaum Associates.

CORPORATION, S. (1999). Superscape. 3945 Freedom Circle, Suite 1050, Santa Clara, CA 95054,USA.

DEGANI, A. (1996). On modes, error, and patterns of interaction. Ph.D. Thesis, Georgia Institute ofTechnology, USA.

DERRA, S. (1995). Virtual reality: development tool or research toy? Research and Development Magazine, 5, 146-158.

DUCE, D. A., VAN LIERE, R. & TEN HAGEN, P. J. W. (1990). An approach to hierarchical input devices. Computer Graphics Forum, 9, 15-26.

DUPONT, P. (1995). Building complex virtual worlds without programming. In R. C. VELTKAMP, Ed. Eurographics '95 STAR Report, pp. 61-70. Eurographics.

FAISSTNAUER, C., SCHMALSTIEG, D. & SZALAVÁRI, Z. (1997). Device-independent navigation and interaction in virtual environments. Technical Report TR-186-2-97-15, Vienna University of Technology, Austria.

FORRESTER, J. W. (1961). Industrial Dynamics. Cambridge, MA: MIT Press.

HAND, C. (1997). A survey of 3D interaction techniques. Computer Graphics Forum, 16, 269-281.

HAREL, D. (1987). Statecharts: a visual formalism for complex systems. Science of Computer Programming, 8, 231-274.

HAREL, D., LACHOVER, H., NAAMAD, A., PNUELI, A., POLITI, M., SHERMAN, R., SHTULL-TRAURING, A. & TRAKHTENBROT, M. (1990). STATEMATE: a working environment for the development of complex reactive systems. IEEE Transactions on Software Engineering, 16, 403-413.

HUBBOLD, R. J., DONGBO, X. & GIBSON, S. (1996). MAVERIK: the Manchester virtual environment interface kernel. In M. GOEBEL & J. DAVID, Eds. Proceedings of the 3rd Eurographics Workshop on Virtual Environments. Berlin: Springer-Verlag.

JACOB, R. J. K. (1995). Specifying non-WIMP interfaces. CHI '95 Workshop on the Formal Specification of User Interfaces Position Papers, Denver, USA.

JACOB, R. J. K. (1996). A visual language for non-WIMP user interfaces. Proceedings of the IEEE Symposium on Visual Languages, pp. 231-238. Silver Spring, MD: IEEE Computer Society Press.

KAUR, K., MAIDEN, N. & SUTCLIFFE, A. (1996). Design practice and usability problems with virtual environments. Proceedings of Virtual Reality World '96, Stuttgart, Germany.

KIM, G. J., KANG, K. C., KIM, H. & LEE, J. (1998). Software engineering of virtual worlds. ACM Virtual Reality Systems and Technology Conference (VRST '98), Taipei, Taiwan, pp. 131-138.

MACKINLAY, J. D., CARD, S. K. & ROBERTSON, G. G. (1990). Rapid controlled movement through a virtual 3D workspace. Computer Graphics, 24, 171-176.

MASSINK, M., DUKE, D. & SMITH, S. (1999). Towards hybrid interface specification for virtual environments. Design, Specification and Verification of Interactive Systems '99, pp. 30-51. Berlin: Springer.

MINE, M. R. (1995). Virtual environment interaction techniques. Technical Report TR95-018, UNC Chapel Hill Computer Science, USA.

MINE, M. R., BROOKS JR, F. P. & SEQUIN, C. H. (1997). Moving objects in space: exploiting proprioception in virtual-environment interaction. SIGGRAPH '97, pp. 19-26. New York: ACM SIGGRAPH.

MORRISON, S. A. & JACOB, R. J. K. (1998). A specification paradigm for design and implementation of non-WIMP human-computer interaction. ACM CHI '98 Human Factors in Computing Systems Conference, pp. 357-358. Reading, MA: Addison-Wesley.

MYERS, B. A. (1989). User-interface tools: introduction and survey. IEEE Software, 6, 15-23.

PETRI, C. A. (1962). Kommunikation mit Automaten. Schriften des IIM Nr. 2, Institut für Instrumentelle Mathematik. English translation: Technical Report RADC-TR-65-377, Griffiss Air Base, New York, Vol. 1, Suppl. 1, 1966.

SASTRY, L., BOYD, D. R. S., FOWLER, R. F. & SASTRY, V. V. S. S. (1998). Numerical flow visualization using virtual reality techniques. 8th International Symposium on Flow Visualisation, Sorrento, Italy, pp. 235.1-235.9.

SMITH, S. & DUKE, D. (1999a). Using CSP to specify interaction in virtual environments. Technical Report YCS 321, University of York, Department of Computer Science.

SMITH, S. & DUKE, D. (1999b). Virtual environments as hybrid systems. Eurographics UK 17th Annual Conference, Exeter, UK, pp. 113-128. Eurographics.

SMITH, S., DUKE, D., MARSH, T., HARRISON, M. & WRIGHT, P. (1998). Modelling interaction in virtual environments. UK-VRSIG '98.

SMITH, S., DUKE, D. & MASSINK, M. (1999). The hybrid world of virtual environments. Computer Graphics Forum, 18, C297-C307.

SOMMERVILLE, I. (1996). Software Engineering (5th edn). Reading, MA: Addison-Wesley.

STANNEY, K. (1995). Realizing the full potential of virtual reality: human factors issues that could stand in the way. VRAIS '95 Conference, pp. 28-34. Silver Spring, MD: IEEE Computer Society Press.

STEED, A. J. (1996). Defining interaction within immersive virtual environments. Ph.D. Thesis, Queen Mary and Westfield College, UK.

THOMPSON, M. R., MAXFIELD, J. D. & DEW, P. M. (1999). Interactive virtual prototyping. Eurographics UK 16th Annual Conference, pp. 107-120. Eurographics.

TURNER, R., LI, S. & GOBBETTI, E. (1999). Metis: an object-orientated toolkit for constructing virtual reality applications. Computer Graphics Forum, 18, 121-130.

WARE, C. & OSBORNE, S. (1990). Exploration and virtual camera control in virtual three-dimensional environments. Proceedings of the Symposium on Interactive 3D Computer Graphics, pp. 175-183. New York: ACM Press.

WIETING, R. (1996). Hybrid high-level nets. In J. M. CHARNES, D. J. MORRICE & D. T. BRUNNER, Eds. Proceedings of the 1996 Winter Simulation Conference, pp. 848-855. New York: ACM Press.

WILLANS, J. S. & HARRISON, M. D. (2000a). A 'plug and play' approach to testing virtual environment interaction techniques. 6th Eurographics Workshop on Virtual Environments, pp. 33-42. Berlin: Springer-Verlag.

WILLANS, J. S. & HARRISON, M. D. (2000b). Verifying the behaviour of virtual environment world objects. In P. PALANQUE & F. PATERNÒ, Eds. Interactive Systems Design, Specification and Verification, pp. 65-77. Lecture Notes in Computer Science 1946.

WILLANS, J. S., HARRISON, M. D. & SMITH, S. P. (2000). Implementing virtual environment object behaviour from a specification. In V. PAELKE & S. VOLBRACHT, Eds. User Guidance in Virtual Environments, pp. 87-97. Aachen, Germany: Shaker Verlag.

Appendix A