

Computer-Aided Design, Vol. 29, No. 8, pp. 585-597, 1997
© 1997 Elsevier Science Ltd. All rights reserved
Printed in Great Britain
PII: S0010-4485(96)00093-0    0010-4485/97/$17.00 + 0.00

Prototyping and Design for Assembly analysis using Multimodal virtual environments

Rakesh Gupta*, Daniel Whitney† and David Zeltzer†

The goal of this work is to investigate whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology, rather than by using conventional table-based methods such as Boothroyd and Dewhurst charts. The long term goal is to extend CAD systems to evaluate and compare alternative designs using Design for Assembly analysis. A unified physically based model has been developed for modeling dynamic interactions among virtual objects and haptic interactions between the human designer and the virtual objects. This model is augmented with auditory events in a multimodal VE system called the Virtual Environment for Design for Assembly (VEDA). The designer sees a visual representation of the objects, hears collision sounds when objects hit each other, and can feel and manipulate the objects through haptic interface devices with force feedback. Currently these models are 2D in order to preserve interactive update rates. Experiments were conducted with human subjects using a two-dimensional peg-in-hole apparatus and a VEDA simulation of the same apparatus. The simulation duplicated as well as possible the weight, shape, size, peg-hole clearance, and frictional characteristics of the physical apparatus. The experiments showed that the Multimodal VE is able to replicate experimental results in which increased task completion times correlated with increasing task difficulty (measured as increased friction, increased handling distance combined with decreased peg-hole clearance). However, the Multimodal VE task completion times are approximately two times the physical apparatus completion times. A number of possible factors for this temporal discrepancy have been identified but their effect has not been quantified. Copyright © 1997 Elsevier Science Ltd

Keywords: Multimodal virtual environments, experiments, haptic interfaces, dynamic simulation, design for assembly, assembly time

Schlumberger Austin Product Center, 8311 North FM 620 Road, Austin, TX 78726, USA
*To whom correspondence should be addressed.
†Massachusetts Institute of Technology, Cambridge, MA 02139, USA


1. INTRODUCTION

1.1. Multimodal virtual environments

A Virtual Environment (VE) system consists of a computer system for generating VEs, one or more human operators, and a multimodal human-machine interface for interacting with the synthesized virtual world. Three key elements of a VE system are:

Autonomy: the computational models and processes that account for the mechanical properties of the objects in the virtual world.

Interaction: the extent to which real-time interaction is facilitated.

Presence: the degree to which the user becomes immersed in the computer-synthesized environment.

In a Multimodal VE system, the human operator senses the synthetic environment through visual, auditory, and haptic displays and then controls it through a haptic interface. Figure 1 shows the components of a typical multimodal system. Synchronous operation of even a very simple haptic interface with a visual and auditory display is likely to provide a more immersive interface. This is because the ability to touch and manipulate objects in the virtual environment gives more natural feedback while interacting with the environment. The Multimodal VE system consists of the following modes:

1.1.1. Visual channel For the visual channel, frame rates lower than 10 frames per second severely degrade the illusion of presence. The ideal frame rate is 20 frames per second or higher. A system may have a high frame rate, but the image being displayed or the computational result being presented may be several frames old due to lags in the system. Research has shown that such delays must be less than 0.1 seconds. A third requirement concerns the picture resolution needed for realism.

1.1.2. Auditory channel Collision sounds are auditory events in virtual environments which can be used to provide cues confirming


Figure 1 Schematic showing modes in Multimodal VE applications

contact among objects and to give information about the intensity of impact. By using digital-signal-processor-based devices, spatially localized sound can be produced so that it appears to be coming from a point in 3D space. Current devices available for generating non-speech sounds tend to fall into two general categories: samplers, which digitally store sounds for later real-time playback, and synthesizers, which rely on analytical or algorithmically based sound generation techniques originally developed for imitating musical instruments. Most widely available synthesizers and samplers are based on MIDI (Musical Instrument Digital Interface) technology.

1.1.3. Haptics channel The term haptic interfaces refers to interfaces involving manual sensing, exploration, and manipulation in virtual environments. The haptic channel can be used both to sense the position and forces (and their time derivatives) of the user's hand and to display forces back to the user. With suitable sensors and actuators, an object can be made to feel stiff or spongy by controlling the force cues as a function of the state of the fingers relative to the object.

Haptic display systems can be fixed to the ground as in joysticks that track a single point and reflect forces back to the user, or they can be body based, such as exo- skeletons that track hand postures and provide force feedback to one or more fingers. The ground-based devices have much higher resolution and bandwidth of force display compared to body-based devices, which enhances the quality of their interaction.

1.2. Extending CAD systems for design evaluation

There are several motivations for linking together design-evaluation techniques like Design for Assembly (DFA) and CAD systems; the avoidance of tedious and error-prone data entry is just one. Present CAD systems can capture component geometry and manufacturing-process-related non-geometric data, such as surface finish. However, these CAD systems do not provide sufficiently sophisticated representations of fine surface interactions to allow the modeling of mating interfaces between the components.

The new generation of CAD environments, based on Multimodal VE technology, will reflect the emergence of true 3D prototyping tools that will capture and model this interaction among components. For manual assembly, allowing the designer to touch and move objects interactively provides information about the position of the objects at all stages of the interaction which would otherwise have to be specified analytically. CAD systems that incorporate Design Evaluation Techniques can

potentially eliminate the need for physical prototypes by offering the following advantages:

1.2.1. Shortening the product development cycle Designers often complain that Boothroyd and Dewhurst charts are tedious to use. However, Multimodal VE systems can allow rapid estimation of ease of part handling and ease of part assembly. The system can keep track of part handling and insertion times when the designer is interactively trying out the assembly to allow incorporation of Design for Assembly analysis in the early design stages. This can shorten the design to manufacture cycle by weeks, effectively reducing the time to market and lowering the costs.

1.2.2. Verification of designs The designer can sense and manipulate the parts to test and verify the assembly sequence of these components in virtual space, much the same way as he would explore a physical mock-up in real space. He can check the parts in the design for proper fit, accessibility and removability and look for unexpected interferences. Based on this interactive feedback he can redesign the part geometry as necessary without having to first put the whole assembly together.

1.2.3. Design iteration and exploring design alternatives VE can change the way the designers work by placing them inside the design and reducing, even eliminating, the need for mock-ups. The designer can make sweeping changes to a design; he can change the physical charac- teristics to something new without having to rework or destroy and make a fresh physical prototype. Reconfigu- rability of the system will allow the designer to prototype a wide variety of designs.

1.2.4. Concurrent product design and marketing Using shared databases, customers, designers, sales staff and engineers from different divisions can simultaneously evaluate a proposed product design. They can have the ability to manipulate the product in virtual space so that they can agree on the basic design decisions early in the design cycle. This will allow a more comprehensive assessment of design tradeoffs with a view to manufacturability, economics, parts availability, human factors considerations, maintenance and reliability. By providing potential customers with the ability to visualize various uses of an artifact, Multimodal VE could be used for marketing studies or even marketing an array of completed products prior to their production.

Section 2 discusses design evaluation techniques and related work. Section 3 is on physically based modeling and also discusses scope and limitations of our approach. Section 4 describes the software and hardware architec- ture of the system. Section 5 is about the experimental design. Sections 6 and 7 discuss Experiment sets 1 and 2 respectively. Section 8 is on summary and analysis of results. The last section discusses the conclusions and future extensions of this work.

2. RELATED WORK

2.1. Boothroyd and Dewhurst charts

The Boothroyd and Dewhurst Design for Assembly


(DFA) method is based on modeling assembly difficulty with data drawn from a large number of empirical observations of people and machines (part feeders) performing selected elements of assembly tasks. DFA provides a systematic procedure for analyzing proposed designs from the point of view of assembly and manufacture⁵. To compare different design alternatives, Manual Design Efficiency is calculated as follows:

Manual Design Efficiency = (3 × Theoretical Minimum Number of Parts) / (Total Manual Assembly Time)

Parts which cannot be eliminated or combined with other parts on the basis of three basic criteria are totaled to give the Theoretical Minimum Number of Parts. The Manual Assembly Time is obtained by adding the handling and insertion time obtained using Boothroyd and Dewhurst charts for each of the parts in the assembly. Based on this information, the designer can later use rules and guidelines for DFA and examine the scope for eliminating parts and reducing the operation times.
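As a rough illustration of the calculation above, the following sketch (not taken from the paper) computes Manual Design Efficiency from per-part handling and insertion times; the part list and its time values are illustrative placeholders.

```python
# Minimal sketch: Boothroyd-Dewhurst Manual Design Efficiency from
# per-part handling and insertion times. The part data below are
# illustrative placeholders, not values from the paper.

def manual_design_efficiency(parts):
    """parts: list of dicts with 'handling_s', 'insertion_s', 'essential' (bool)."""
    total_assembly_time = sum(p["handling_s"] + p["insertion_s"] for p in parts)
    theoretical_min_parts = sum(1 for p in parts if p["essential"])
    # 3 s is the nominal ideal assembly time per essential part in the DFA formula.
    return 3.0 * theoretical_min_parts / total_assembly_time

parts = [
    {"handling_s": 1.5, "insertion_s": 2.5, "essential": True},   # peg
    {"handling_s": 1.8, "insertion_s": 6.0, "essential": True},   # base
    {"handling_s": 1.5, "insertion_s": 4.0, "essential": False},  # separate fastener
]
print(f"Manual design efficiency: {manual_design_efficiency(parts):.2f}")
```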

Boothroyd and Dewhurst charts provide a crude quantization of assembly times, and these times may not adequately capture the actual assembly difficulty. Another drawback is that the method lacks the capability to help designers explore new possibilities.

2.2. Sturges’ Design for Assembly Calculator

Sturges and Kilani built an analysis system to evaluate an object's difficulty of handling and assemblability and have related this difficulty to total cost. They use an Index of Difficulty (ID) to quantify the dexterity and time required to assemble a product. The index of difficulty for a task is given by Fitts' law:

ID = log₂(s/w)

where M’ is width and s is the separation. The index of difficulty can be used for comparing different assembly systems and strategies and the manual assembly time can be obtained by multiplying it by the human motor capacity*““. The Design for Assembly Calculator gives index of difficulty for the task. Intended for use by a designer on the drawing board, it provides immediate design analysis and comparative evaluations of designs and methods. This method is very similar to finding the Index of Difficulty by using a slide-rule, and is time consuming.

2.3. Studies in force feedback

Some studies have examined the effects of force feedback for telerobotics applications by comparing performance with force feedback vs the no-force-feedback case³,¹⁴,²¹. In Massimino's work, the human operator controls the manipulator in 6 degrees of freedom through the hand controller interface. For the peg-in-hole task, force feedback significantly improved performance at all frame rates. The experiments also showed that as task difficulty (as defined by Fitts' index of difficulty) increased, mean task times increased at an increasing rate. Reducing the frame rate from 30 frames/second may be acceptable until a cutoff frame rate is reached beyond which performance falls below the acceptable level.

Hill¹⁵ has described results from two manipulators on a peg-in-hole task. Gross trajectories as well as fitting movements require more time without force feedback. The task time is shown to be the sum of two independent functions: a non-linear function of the peg and hole clearance and a linear function of the trajectory length.

Hannaford et al.¹⁴ have conducted experiments with a master-slave configuration with a high sampling rate (1000 Hz) and low loop computation delay (5 ms). They found that force feedback reduces completion time by approximately 30%, reduces the sum of squared forces (given by Σᵢ fᵢ² dt, where fᵢ is the force and dt is the sampling time) by a factor of 7, and reduces errors in performing tasks by 63%.

Buttolo et al.⁶ have done experiments with subjects performing tasks on a physical setup, in a VE with visual and force feedback through a haptic display, and remotely on the physical setup using a telemanipulation system. The tasks include free movement, sliding, shape exploration and the application of modulated/impulsive forces. The objective is to decouple the effects on overall telemanipulator performance introduced by the individual components of the system: master manipulator, display, slave manipulator and bilateral controller.

Louis Rosenberg investigated the use of computer-generated haptic sensations (called virtual fixtures) during peg-in-hole insertion telemanipulation tasks with time delay. Subjects were tested wearing an exoskeleton to control a robot arm with no time delay, 250 ms time delay and 450 ms time delay between master and slave. The Fitts' law paradigm was used to quantify operator performance. Task performance degraded by 36% with the 250 ms delay and by 45% with the 450 ms delay when no virtual fixtures were used, but no performance degradation was recorded with overlaid virtual fixtures.

Batter et al.⁹ used the Argonne Remote Manipulator (ARM) haptic display as an augmentation to a visual display to improve perception and understanding of force fields and of world models populated with objects. Haptic augmented display systems have been found to give about a two-fold performance improvement over purely graphic interactive systems. The largest performance enhancement measured was a factor of 2.2 in a simple manipulation task and 1.7 in a complex molecular docking task.

Research work at the Tokyo Institute of Technology aims at the realization of a collaborative networked virtual environment for the design of 3D objects. The collaborative design of 3D objects is carried out with multimodal interactions (visual, auditory and haptic)¹⁶. Hand-over tasks were performed with this system, where one user hands over a block to another. The haptic information helped in enforcing consistency by preventing the participants from moving virtual objects contradictorily.

3. PHYSICALLY BASED MODELING

Simulating the machines of the everyday world is of central importance for virtual environment applications that involve near-field user interaction (i.e. within a few feet, where the user can reach out). Virtual objects should not pass through one another, and should act in accordance with the rules of physics. Within the scope of the world being modeled, any situation that could possibly arise must be anticipated and handled reliably in real time.


For representing and modeling object interactions to facilitate a realistic simulation of their behavior, the physical properties of the object in multiple modalities must be specified. The dynamic modeling and haptic rendering require material properties such as mass and moment of inertia, surface friction properties and the geometric properties of the object. The visual display requires geometric and shading properties. The acoustic display requires models based on impact, aerodynamic flow, etc.

One approach to simulating constrained systems of objects is based on the classic method of Lagrange multipliers, in which a linear system is solved at each time step to yield a set of constraint forces. The other method is the penalty method, in which we (conceptually) allow objects to interpenetrate slightly and consider a deformation in the contact surfaces proportional to the amount of interpenetration. It might appear non-physical that two objects can penetrate one another; what we are really trying to model is the deformation of surface layers, and this accounts for the apparent interpenetration. The contact model incorporated in this work represents the normal forces by a spring-damper system. The penalty method provides a common way of modeling the dynamic interactions that drive the haptic and visual displays.
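A minimal sketch of such a penalty-based normal force is given below, assuming a scalar penetration depth and penetration rate; the stiffness and damping gains are illustrative, not the values used in VEDA.

```python
# Sketch of the penalty-method contact force described above: the normal
# force is a spring-damper function of penetration depth and its rate.
# Gains are illustrative placeholders.

def penalty_normal_force(penetration, penetration_rate,
                         stiffness=5000.0, damping=20.0):
    """Return a scalar normal force; zero when the objects are separated."""
    if penetration <= 0.0:          # no interpenetration, no contact force
        return 0.0
    force = stiffness * penetration + damping * penetration_rate
    return max(force, 0.0)          # contacts push apart, never pull together

print(penalty_normal_force(0.001, 0.05))   # small overlap, closing contact
```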

The compliance and damping of finger contact with the object are modeled. The rotation of the objects is damped to prevent rotation at the slightest imbalance in finger forces (due to the point-force approximation for finger contact). It is necessary to model the friction between the objects and the fingers so that objects can be lifted in the presence of gravity. We have modeled the plastic energy lost in friction as a function of position by a very stiff horizontal spring.
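One plausible reading of the stiff-spring friction model described above is a stick-slip scheme in which a tangential spring is anchored at the point where sticking began and its force is clamped by the Coulomb limit; the sketch below follows that interpretation with illustrative gains and is not taken from VEDA.

```python
# One possible interpretation of the "stiff horizontal spring" friction
# model: a tangential spring anchored at the stick point, with the spring
# force saturated at the Coulomb limit mu * N. Gains are illustrative.

def tangential_friction(x, anchor, normal_force, mu=0.3, k_t=20000.0):
    """Return (friction_force, new_anchor) for tangential position x."""
    force = -k_t * (x - anchor)
    limit = mu * normal_force
    if abs(force) > limit:                  # sliding: saturate and drag the anchor
        force = limit if force > 0 else -limit
        anchor = x + force / k_t            # keep the spring stretched at the limit
    return force, anchor

f, anchor = tangential_friction(x=0.0101, anchor=0.0100, normal_force=2.0)
print(f, anchor)
```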

Simulating object interactions involves detecting collisions among the objects and automatically reformulating the equations of motion according to each change in connectivity¹⁰,¹¹. It is therefore necessary to keep track of the contacts and update them as they form or break at each time step. We have modeled the contacts as vertex-edge contacts characterized by a vertex-edge pair among neighboring polygons. Concave objects are modeled as single objects and not as a combination of convex objects. For details of how various ambiguities in the formation and deletion of contacts are resolved, and for the friction and simulation algorithms, please refer to References 12 and 13.
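The sketch below illustrates the kind of vertex-edge test implied above for 2D polygons: a vertex is projected onto a neighboring polygon's edge, and the signed distance along the edge's outward normal gives the penetration depth. The winding convention and helper geometry are assumptions, not VEDA code.

```python
# Sketch of a 2D vertex-edge contact test: project a vertex onto an edge
# and report penetration along the edge's outward normal (assuming
# counter-clockwise polygon winding). Illustrative only.
import math

def vertex_edge_contact(vertex, edge_start, edge_end):
    """Return (penetration, normal) or None if there is no vertex-edge contact."""
    ex, ey = edge_end[0] - edge_start[0], edge_end[1] - edge_start[1]
    length = math.hypot(ex, ey)
    nx, ny = ey / length, -ex / length             # outward normal for CCW winding
    vx, vy = vertex[0] - edge_start[0], vertex[1] - edge_start[1]
    t = (vx * ex + vy * ey) / (length * length)    # position along the edge
    if not 0.0 <= t <= 1.0:
        return None                                # falls outside this edge's span
    signed_dist = vx * nx + vy * ny                # negative means interpenetration
    if signed_dist >= 0.0:
        return None
    return -signed_dist, (nx, ny)

# A vertex just inside the bottom edge of a CCW polygon:
print(vertex_edge_contact((0.5, 0.01), (0.0, 0.0), (1.0, 0.0)))
```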

3.1. Scope and limitations of our approach

This work is limited to interactions among rigid two-dimensional polyhedral objects which can be either convex or concave. Edges on the objects are restricted to straight lines. The two-dimensional domain is simple enough to keep the analysis tractable and also rich enough to give meaningful results which can be extended to three dimensions in the future. Dynamic and static friction are modeled.

First, energy loss and change in velocities due to impacts cannot be accurately predicted. The mechanics of friction contact (e.g. adhesion, plastic deformation, fracture) is not well understood. For example, Coulomb's law of sliding friction imposes constraints on the force, state and state changes, but it does not specify the force as a unique function of state. Solving for contact forces may


lead to more than one solution (indeterminacy) or no solution (inconsistency)¹⁹. This work does not develop better models of these phenomena. The energy loss is approximated using dampers and the friction is approximated using Coulomb's law.

Modeling the sound generation phenomena is beyond the scope of this work and we have limited ourselves to using simple sound cues at a specific frequency. Haptic devices with high fidelity are currently limited to tracking only a single point in 3D space. This restricts the modeling of the finger to a point contact model and severely limits the performance and fidelity of the simulation.

4. VEDA SOFTWARE AND HARDWARE ARCHITECTURE

A schematic of the configuration of the multimodal VE system is shown in Figure 2. The multimodal physically based software runs on a Silicon Graphics Indigo² Extreme with a 100 MHz processor. It executes the dynamics and haptic cycles and generates data for changes in the visual and auditory modes.

For the visual channel, a stereoscopic 3D view of the objects is displayed on another Silicon Graphics Indigo² Extreme using the StereoGraphics CrystalEyes device. Parallel communication is used to transfer object and finger position data between these two Indigo² Extremes. This communication is done through Keithley Metrabyte PIO-12 24-bit parallel I/O boards. With handshaking, this provides a bandwidth of 0.5 Mbits/second between the two processors; however, the time delay is much lower than that obtained by using Ethernet.

Two PHANToM force feedback devices²⁰ are employed for haptics, and these are connected through the EISA bus to the Indigo² Extreme running the dynamics and haptic loops. The mass and inertia of these devices are assumed to be low and are not considered in the analysis. The hardware is memory mapped into user address space to permit communication between the user process and a peripheral device. Communication between the physically based model and the haptic devices involves first reading the motor encoders corresponding to the finger position. This finger position is checked for collisions with any of the virtual objects, and appropriate torques are then commanded to the motors to render the haptic display as the user explores or manipulates the virtual object.
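Structurally, the haptic cycle just described amounts to a tight read-compute-command loop. The sketch below shows that structure only; the device I/O and force routines are hypothetical stand-ins, not a PHANToM driver API.

```python
# Structural sketch of the haptic servo cycle described above. The
# functions passed in (read_finger_position, collision_force,
# command_motor_torques) are hypothetical placeholders.
import time

def haptic_loop(read_finger_position, collision_force, command_motor_torques,
                rate_hz=4000, duration_s=1.0):
    period = 1.0 / rate_hz
    end = time.time() + duration_s
    while time.time() < end:
        start = time.time()
        finger = read_finger_position()          # read motor encoders
        fx, fy = collision_force(finger)         # penalty force if finger is inside an object
        command_motor_torques(fx, fy)            # render the force to the user
        time.sleep(max(0.0, period - (time.time() - start)))

# Example wiring with trivial stand-ins:
haptic_loop(lambda: (0.0, 0.0), lambda p: (0.0, 0.0), lambda fx, fy: None,
            rate_hz=1000, duration_s=0.01)
```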

For generating sounds, a Macintosh with a Digidesign Sample Cell II playback card is used. The Macintosh is connected to an Indigo² using a serial-port-to-MIDI network adaptor and a MIDI cable. We use sound samples that are pre-recorded and loaded into the Sample Cell II sampler. Different instruments are set up for different sound sources and mapped to their corresponding output channels. Each of these output channels can be controlled independently for simultaneously rendering multiple sound events. Collision sounds are pre-recorded and stored on the Macintosh. The Indigo² Extreme is used to send MIDI signals to control the onset, volume, and end of sound events whenever there are collisions.
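The sketch below shows one way such force-to-sound triggering could be scripted, mapping contact force magnitude to a MIDI note-on velocity; it uses the third-party mido library and illustrative threshold, note and scaling values, none of which come from the paper.

```python
# Sketch of collision-sound triggering over MIDI using the third-party
# `mido` library (an assumption; the paper only states that MIDI messages
# are sent to a sampler). Threshold, note and scaling are illustrative.
import mido

FORCE_THRESHOLD = 0.5          # newtons; below this, no sound is triggered
MAX_FORCE = 20.0               # force mapped to full MIDI velocity (127)

def collision_velocity(contact_force):
    """Map contact force magnitude to a MIDI velocity in 0-127."""
    if contact_force < FORCE_THRESHOLD:
        return 0
    scaled = min(contact_force / MAX_FORCE, 1.0)
    return max(1, int(127 * scaled))

def play_collision(port, contact_force, note=60, channel=0):
    velocity = collision_velocity(contact_force)
    if velocity > 0:
        port.send(mido.Message('note_on', note=note, velocity=velocity, channel=channel))

if __name__ == "__main__":
    with mido.open_output() as port:   # default MIDI output, if one exists
        play_collision(port, contact_force=8.0)
```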

Figure 2 Hardware configuration of the Multimodal VE system

Figure 3 shows part of the hardware setup: the display of one of the SGI Indigo² Extreme computers used for the visual channel and the relative placement of the two force feedback PHANToM devices. The user's first finger is placed in the right-hand PHANToM and the thumb is placed in the left one. These points of interaction are also shown on the visual display. The jig seen on the table between the two PHANToMs was built so that the incremental encoders on the PHANToM motors could be initialized to the same position at the beginning of each trial. This ensured repeatability among different experimental trials.

The designer is able to pick and place active objects, move them around, and feel the forces. Whenever there are collisions with a contact force magnitude above a certain threshold, sound is produced, with the intensity of the sound proportional to the contact forces among the objects. The performance of the system can be characterized as follows. It is currently able to multimodally render interactions among 4-5 polygons (with multiple dynamic polygonal objects). The visual rendering is done at about 20 Hz. The haptics and dynamics loops run at about 4 kHz. The dynamic simulation clock is matched with real time to an accuracy of only 0.1 s, primarily due to real-time performance limitations of the UNIX operating system in multi-user mode.

Figure 3 Visual and haptic setup for the experiments
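The multi-rate behaviour described above can be sketched as a fixed-step dynamics loop kept in step with the wall clock while the display refreshes at a slower rate; the sketch below shows the scheduling idea only, with stand-in step and draw functions rather than VEDA code.

```python
# Sketch of a multi-rate update scheme: a high-rate fixed-step dynamics
# loop kept aligned with wall-clock time, plus a ~20 Hz visual refresh.
# The step/draw callables are stand-ins.
import time

def run(step_dynamics, draw_frame, dynamics_hz=4000, visual_hz=20, duration_s=0.5):
    dt = 1.0 / dynamics_hz
    frame_period = 1.0 / visual_hz
    sim_time = 0.0
    start = time.time()
    next_frame = start
    while time.time() - start < duration_s:
        # Advance the simulation until it catches up with the wall clock.
        while sim_time < time.time() - start:
            step_dynamics(dt)
            sim_time += dt
        if time.time() >= next_frame:
            draw_frame()
            next_frame += frame_period

run(step_dynamics=lambda dt: None, draw_frame=lambda: None, duration_s=0.05)
```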

5. EXPERIMENTAL DESIGN

Many assembly tasks involve circular symmetry, which reduces the task from a three-dimensional problem to a two-dimensional one. Theoretical investigations of assembly have focused on the idealized model of inserting a cylindrical peg into a cylindrical hole. Inserting a round peg into a round hole is possibly the single most frequently performed task in the assembly of metal products with machined or cast parts (variants include a tab into a slot and a wheel onto a shaft). In this work, a peg-in-hole task in 2D is studied.

5.1. Description of the task

Figure 5 shows the visual display for the 2D peg-in-hole assembly task in the Multimodal VE. The two wedges in the figure visually represent the locations of the fingers. The white background material is simulated teflon (with friction coefficient μ = 0.04) and constrains the motion of the peg to two dimensions. The physically based modeling is in two dimensions, and only two force components and one torque component in the plane may be generated by contact between the peg and the hole (or between the human fingers and the peg). But the visual display is 3D to enhance realism.


The objective is to grasp the peg and insert it in the hole at a natural pace while trying to minimize the errors (e.g. avoid dropping the peg). Once the peg is grasped, the subjects are supposed to lift and carry it, rather than slide it on the base surface. Firstly, depending on the strategy of either lifting or sliding to perform the assembly, completion times can be very different. Secondly, lifting of the peg gives a clean signal for the assembly start time in the tasks with real materials.

Figure 4 shows the physical version of one of the peg-in-hole assembly tasks. The real task apparatus is made from brass, aluminum and copper mounted on a plexiglass plate. LEDs light up on contact and are used to give feedback. For example, when the peg reaches the bottom of the hole, the electrical contact between the peg and the middle plate closes, lighting up the middle LED and indicating assembly completion. The real task and the simulation task have the same sizes, weights, frictional characteristics and index of difficulty by Fitts' law (i.e. the same geometric starting configurations of the peg and the same clearance ratio c).

The main performance measure is the time for assembly completion²⁵, which is further split into handling time and insertion time. For real-world tasks the time is measured with the help of a digital I/O card which records the onset and end of contacts among the real blocks. The assembly time recording starts at the breaking of contact between the left base block and the peg and ends at the making of contact between the peg and the middle base plate. For the VE simulations, the physically based model records the task

completion times. The following set of errors are defined and explained to the subjects:

Slip: Object slips from hands.
Drop: Object is dropped outside the working volume.
Excessive force: Visible vibration in the haptic device.
Timed out: Assembly takes more than 15 s for completion.
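The sketch below illustrates the time bookkeeping described in this section, splitting a trial into handling and insertion phases from contact events; the event names, the choice of first peg-hole contact as the split point, and the timestamps are illustrative assumptions rather than the paper's instrumentation.

```python
# Sketch of the trial timing described above: the trial starts when the
# peg breaks contact with the left base block and ends when it touches
# the middle base plate; handling/insertion are split at an assumed
# first peg-hole contact event. All data below are illustrative.

def split_assembly_time(events):
    """events: list of (timestamp_s, name) tuples in chronological order."""
    t_start = next(t for t, name in events if name == "peg_left_base_contact_broken")
    t_insert = next(t for t, name in events if name == "first_peg_hole_contact")
    t_end = next(t for t, name in events if name == "peg_middle_plate_contact")
    handling = t_insert - t_start
    insertion = t_end - t_insert
    return handling, insertion, handling + insertion

events = [(0.00, "peg_left_base_contact_broken"),
          (1.85, "first_peg_hole_contact"),
          (3.40, "peg_middle_plate_contact")]
print(split_assembly_time(events))   # (handling, insertion, total)
```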

Subjects were also asked to answer a brief question- naire at the end of the trials to assess the realism of the multimodal simulation and to correlate their experience with performance.

5.2. Experimental issues

A Latin square design is used in an 11 × 11 arrangement, with 11 tasks (treatments) and 11 subjects. Using the subjects as blocks in the experiment eliminates the effect of subject-to-subject variation. Blocking in two directions using the Latin square design eliminates the effects of fatigue and learning, because each task appears in each position of the administration sequence.

For Experiment sets 1 and 2, 11 subjects were used, each performing one block of 11 treatments in one day. Each treatment consisted of about 15 sample measurements. The 11 × 11 Latin square design treatments are obtained by combining the eight treatments of set 1 and the five treatments of set 2 and removing two duplicates. The Latin square design is counterbalanced (i.e. another 11 × 11 Latin square design with the reverse order of treatments) by making subjects go through the reverse sequence of treatments on the second day of experiments (not counting the day of training).
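For illustration, the sketch below builds an 11 × 11 treatment ordering from a simple cyclic Latin square, with its reverse used for the counterbalanced second day; the cyclic construction is an assumption, as the paper does not state which square was used.

```python
# Sketch of an 11 x 11 Latin square for ordering treatments across
# subjects, plus its reverse for the counterbalanced second day. The
# cyclic construction is one standard choice, assumed here.

def cyclic_latin_square(n):
    """Row i gives the treatment order for subject i (treatments 1..n)."""
    return [[(i + j) % n + 1 for j in range(n)] for i in range(n)]

def reversed_square(square):
    return [list(reversed(row)) for row in square]

day1 = cyclic_latin_square(11)
day2 = reversed_square(day1)
print(day1[0])   # subject 1, day 1 treatment order
print(day2[0])   # subject 1, day 2 (reverse) order
```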

Figure 4 Peg-in-hole assembly task in the real world


Figure 5 Peg-in-hole assembly task as visually seen in the virtual environment


The subjects for the experiments were chosen from a population with a technical background, mostly under- graduates from Mechanical Engineering and other Engin- eering disciplines. They were paid for their participation. Each subject received a 1 hour training session the day prior to performing the final experimental runs. They were made familiar with the experimental design and procedures and acquainted with the VE system. Subjects were trained in all the different task conditions till their performance times met minimum training levels which were indicated by the flattening of their learning curves. This training minimized the effect of learning during experimental runs.

6. EXPERIMENT SET 1

The first set of experiments examines whether Multimodal Virtual Environments are able to replicate experiments linking increases in assembly time with increasing task difficulty. In addition, the effect of chamfer on assembly time in real and virtual environments is examined.

Given a clearance ratio c, friction coefficient μ and handling distance h, the four variations of the peg-in-hole assembly task examined are:

1. c₁-μ₁-h₁ combination, material brass
2. c₂-μ₂-h₂ combination, material copper
3. c₃-μ₃-h₃ combination, material aluminum
4. same as 3, but with chamfers that make it easy to guide parts when they are laterally or angularly misaligned.

Clearance Ratio = (D - d)/D

where D is the diameter/width of the hole and d is the diameter/width of the peg. Each of these four tasks has a real counterpart, making a total of eight treatments. We compute averages over all subjects for each of these cases and compare the real task performance with the multimodal VE simulation.

Table 1 Materials and parameters used for peg-in-hole assembly tasks

Task number   Material                 Friction coefficient (μ)   Clearance ratio (c)   Handling distance h (cm)   Index of difficulty (ID)
1             Brass                    0.22                       0.01                  2.17                       7.11
2             Copper                   0.26                       0.005                 5.27                       9.04
3             Aluminum with chamfer    0.31                       0.0025                10.02                      10.96
4             Aluminum                 0.31                       0.0025                10.02                      10.96


.’

_:’

:’ ,:’

,I’ /

,.: .’

,:’ ,:’

,.’ ,_.’

Difficult Tasks ;.’

_,i

Aluminum :”

,.I’ ,,I’

@ ::” ,:’

Copper ” ,;’

,:’ ,2. o ;,,,.: -,‘.

,.: .“’ Brass ”

,/ /’ @

,:’ ,.,’

,:’ ,:’

,:’ ,:’

,:’ ,: Easy Tasks

;.’ ,:’ ,:’

0.10

1 ! . ..’ : : ,.,’

,,.’

;, ,.I’ ,:.

:...,’ ,:’

;.’

/

I I I I I I )

0.00125 0.0025 0.005 .Ol 0 02

Clearance Ratio (log scale)

Figure 6 Insertion task difficulty for the selected tasks

One of the surveys²² examined industrial design practice with respect to sizes and clearances between parts. It was found that parts made by certain manufacturing techniques reliably fall into predictable clearance-ratio ranges from 0.001 to 0.01. This justifies the range of clearance ratios used in the analysis. Table 1 shows the parameter values chosen for the different tasks.

The total task difficulty is measured by Fitts' law and is given by:

Index of Difficulty = log₂(Handling Distance / Clearance)

All these variables were varied to get the maximum possible variation in task difficulty across the different tasks.

Figure 7 Variation of total assembly time with task difficulty

Figure 8 Variation of handling time with handling distance

Figure 9 Variation of insertion time with clearance ratio/friction

The effects of the following phenomena on the assembly handling and insertion times are investigated in the Multimodal Virtual Environment:

The distance of travel: Handling distance affects handling time. The handling distance over which the part is moved before insertion increases as we move down Table 1 from Task 1 to Task 3, increasing the handling task difficulty.

Resistance to insertion: The resistance encountered during part insertion can be due to small clearances, jamming or wedging, hang-up conditions, or insertion against a large force. For example, a press fit is an interference fit where a large force is required for assembly. The friction coefficient and clearance ratio affect the insertion time and not the handling time, because the peg is not in contact with the hole during gross handling motions. Increasing friction and decreasing peg-hole clearance increase the insertion task difficulty. As we move from Task 1 to Task 3, the friction coefficient μ increases and the clearance c decreases, leading to increasing resistance to insertion. Figure 6 shows the relative locations of the three selected tasks on a clearance ratio-friction plot.

Whether a part is easy to align: If the position of the part is established by locating features on the part or on its mating part, insertion is facilitated by well designed chamfers or similar features. The assembly times in Task 3 and Task 4 can be compared to see the effect of chamfers.

Figure 10 Variation of total assembly time with ease of alignment

6.1. Results

Figure 7 shows the effect of the index of difficulty on the total assembly time. In all the plots in this paper the vertical error bar represents ±σ about the mean. The trend in the real and Multimodal VE tasks is the same: the task completion time increases with increasing index of difficulty. However, the ratio VE Assembly Time/Real Assembly Time changes from 2.1 for brass to 2.6 for aluminum. Figure 8 shows the variation of handling time with handling distance. With increasing handling distance the handling time increases in both the real world and the Multimodal VE. Figure 9 shows the variation of insertion time with clearance ratio. With increasing clearance ratio, the insertion time decreases in both the real world and the Multimodal VE.

Figure 10 shows the effect of chamfers. The assembly time decreases in the presence of a chamfer in both the real world and the Multimodal VE. The ratio VE Assembly Time/Real Assembly Time is 2.3 in the presence of the chamfer and 2.6 in its absence for aluminum. From Figure 10 it can be seen that the handling time remains almost the same (same handling distance) and the insertion time accounts for the difference in total assembly time. Table 2 shows the significance levels associated with the different tests in this set in tabular format.
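For reference, significance levels like those in Tables 2 and 3 can be obtained from a blocked ANOVA with task and subject factors. The sketch below uses pandas and statsmodels (an assumption; the paper does not name its statistics software), and the data frame contains clearly labeled placeholder values, not experimental results.

```python
# Sketch of a blocked ANOVA of the kind behind Tables 2 and 3: assembly
# time modeled with task (treatment) and subject (block) factors. Uses
# pandas and statsmodels (an assumption); the data are placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def significance_table(df):
    """df columns: 'time' (s), 'task' (treatment label), 'subject' (block label)."""
    model = ols("time ~ C(task) + C(subject)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)   # P-values appear in the PR(>F) column

# Structure-only example with placeholder numbers:
df = pd.DataFrame({
    "time":    [3.1, 3.4, 5.0, 5.3, 7.2, 7.6],
    "task":    ["brass", "brass", "copper", "copper", "aluminum", "aluminum"],
    "subject": ["s1", "s2", "s1", "s2", "s1", "s2"],
})
print(significance_table(df))
```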

7. EXPERIMENT SET 2

The objective of the experiments in set 2 is to compare and quantify the effects of the different modes (visual, auditory and haptic) on the handling and insertion phases of manual assembly. This can help us determine which of the modes need to be modeled with higher fidelity to improve the accuracy of the results. The following five treatments are examined for an aluminum peg-in-hole assembly without chamfer:

1. Real task
2. 3D visuals + force feedback + sound
3. 3D visuals + force feedback + no sound
4. 3D visuals + no force feedback + sound
5. 2D visuals + force feedback + sound

• Task 2 and Task 3 are compared to see the effect of the auditory mode on assembly time
• Task 2 and Task 4 are compared to see the effect of force feedback on assembly time
• Task 2 and Task 5 are compared to see the effect of degrading the visual mode on assembly time
• Task 1 is for reference

From pilot experiments it was determined that, with the current state of haptics technology, it is almost impossible to accomplish the task in the VE without the visual mode. Thus, we replace the no-visuals treatment with a degradation from a 3D stereoscopic view to a 2D line view of the assembly.

7.1. Results

Figure 11 shows the effect of force feedback on assembly time. The time for assembly completion increases by a factor of 1.3 in the absence of force feedback. In addition, the standard deviation increased significantly in the absence of force feedback. The subjects consistently reported that the hardest task in the VE was moving the aluminum peg with no force feedback, because they could not feel the amount of force to be applied without dropping it. In addition, the fingers could go inside the peg, making it more difficult to manipulate. Table 3 shows that the significance test for variation in assembly times with the absence/presence of haptics is satisfied, implying that the haptic mode has a significant effect on performance.

Figure 12 shows the effect of visual feedback on assembly time. Table 3 shows that the significance test for variation in assembly times with degradation of the visual mode is satisfied only for the handling time (0.008) and not for the insertion time. The stereoscopic view improved the handling time marginally and enhanced the subjective experience for the subjects.

Table 2 Attained significance levels for variation in handling and insertion times using ANOVA analysis

Variation in      Source of variation          P-value for Multimodal VE    P-value for real tasks
Assembly time     Index of difficulty          <0.0001                      <0.0001
Handling time     Handling distance            <0.0001                      <0.0001
Insertion time    Clearance ratio/friction     <0.0001                      <0.0001
Assembly time     Chamfer present/absent       <0.0001                      <0.0001
Handling time     Chamfer present/absent       0.33                         0.23
Insertion time    Chamfer present/absent       <0.0001                      <0.0001


Table 3 Attained significance levels for variation in assembly times with chamfers and different modes present/absent using ANOVA analysis

Environment      Source of variation                P-value for total assembly time    P-value for handling time    P-value for insertion time
Multimodal VE    Force feedback present/absent      <0.0001                            <0.0001                      <0.0001
Multimodal VE    2D line/3D stereoscopic view       0.18                               0.008                        0.83
Multimodal VE    Sound present/absent               0.79                               0.79                         0.55


Figure 13 shows the effect of sound. Table 3 shows that the significance test for variation in handling and insertion time with the absence/presence of the auditory mode is not satisfied. There is no variation in handling or insertion time with sound because, with the imposed restriction of picking up and assembling the peg rather than sliding it, this task is not sound intensive. In fact, sound tended to be produced only at the end of the trial.

Table 4 shows the number of errors for all the subjects. It can be noted that errors are zero for real tasks, less than 4% for VE tasks with force feedback and 12% for VE task without force feedback.

8. RESULTS

8.1. Summary

Although the time for assembly given by multimodal VE system is greater than that in the real world, the trends in the variation in assembly time with parameters like friction, chamfer, clearance and handling distance are the same in the real world and VE.

The task completion time increases with increasing index of difficulty. The ratio VE Assembly Time/Real Assembly Time lies in the range 2.1-2.6. With increasing handling distance, the handling time increases in both the real world and the Multimodal VE. With decreasing clearance ratio and increasing object-object coefficient of friction μ, the insertion time increases in both the real world and the Multimodal VE. The total assembly time decreases in the presence of a chamfer in both the real world and the Multimodal VE (but the handling time remains the same due to the same handling distance).


Figure 11 Variation of assembly time with force feedback


The hardest task in the VE was moving the aluminum peg with no force feedback. Not being able to feel the object made it difficult to judge, from vision alone, how much force to apply to hold onto the object and avoid slipping. It seemed unnatural to be able to go through the object. The absence of force feedback increases the time to completion by a factor of 1.3, increases the standard deviation of the measurements and increases the errors from less than 4% to 12%. Being able to touch and manipulate objects in a virtual environment, in addition to seeing (and/or hearing) them, gives a natural feel for interacting with the environment and the objects within it.

Stereo viewing capabilities can produce models that communicate volume and depth more effectively than conventional 2D or 3D models. However, subjects did not feel that the 2D line task was more difficult than the 3D stereoscopic task. From the results, the stereoscopic view improved the subjective experience for the subjects and improved the handling time marginally. There is no variation in handling or insertion time with sound, primarily because, with the imposed restriction of picking up and assembling the peg rather than sliding it, the task is not sound intensive.

8.2. Comparison of human performance in Multimodal VE and real world

We believe the major causes of discrepancy in the results are:


Figure 12 Variation of assembly time with visual display type


Table 4 Errors in the assembly tasks for experimental sets 1 and 2

Task                               Total samples   Successful assembly   Dropped   Timed out   Slipped/excessive force
Brass VE task                      336             325                   1         1           0
Copper VE task                     335             329                   3         1           1
Aluminum VE task                   341             336                   1         0           4
Aluminum with chamfer VE task      339             329                   5         0           5
Aluminum VE task (no sound)        339             326                   0         0           13
Aluminum VE task (no haptics)      337             297                   22        3           15
Aluminum VE task (2D)              336             326                   2         0           8
Brass real task                    340             340                   0         0           0
Copper real task                   340             340                   0         0           0
Aluminum real task                 338             338                   0         0           0
Aluminum real task with chamfer    336             336                   0         0           0

Spatial discrepancy between visual and haptic images: The subjects found it difficult to coordinate what their hands were doing and what their eyes were seeing, because these did not coincide in the Multimodal VE, whereas they did in the real world. It was unnatural to have to look at the virtual object in a different spatial location from where they were feeling it.

Not being able to model the rolling of the fingers: If we hold a large object like a cardboard box or a basketball, the point-contact model of our fingertip is fairly accurate, but when we hold a small object (like the peg in this experiment) it becomes necessary to treat the fingertips not as points but as surfaces of finite radius that can roll with respect to the manipulated object.

From the frame-by-frame analysis of the video recording of two subjects doing the VE tasks, it was noticed that most of the extra time was spent in the transition from gross motion to fine motion at the start of the insertion phase. It was mathematically shown by Cutkosky⁷ that the rolling of curved fingers causes the contact area to shift with respect to the object and stabilizes the grasp. Without this rolling motion, the subjects take much longer to manipulate the peg through the transition from gross to fine motion, increasing both the handling and insertion time in the Multimodal VE. Since the haptic device tracks only a single point in space, it is not possible to accurately quantify the effect of not being able to model the rolling of the fingers.


Figure 13 Variation of assembly time with auditory feedback

Coulomb model not accurate for finger-object contact: For soft fingers, sweat glands, by moistening the skin, tend to increase friction and make the skin more adhesive¹⁸. At light pressures, adhesion contributes greatly to the tangential force that a contact can sustain without slipping. The Coulomb model of friction used here (primarily for simplicity and speed) does not capture the adhesion phenomena and is therefore less accurate.

9. CONCLUSIONS AND RECOMMENDATIONS

A physically based multimodal VE system has been developed for use in Design for Assembly analysis and for prototyping designs. The physically based model keeps track of collisions and contacts, models both the dynamic and haptic interactions, and renders auditory, haptic and visual displays. The designer interacts with the system through multiple-finger interactions via haptic devices and can feel the objects, see them and hear the sounds of their interaction. Experiments have been conducted with human subjects to compare handling and insertion assembly times in the real world and the Multimodal VE. This is the major original contribution of our work and, to our knowledge, has not been attempted before.

Using VEDA, a designer can interactively manipulate components in a simulated physical environment, and each component responds according to physical laws. He can then answer questions such as 'are all the components tightly fitted to each other?' or 'is it easy to assemble these components?'. VEDA has been used as an effective test bed for concepts, physically based models and human experiments to study the use of Multimodal VE for design evaluation, in particular for Design for Assembly.

The CAD/CAM community uses simulation and analysis tools which are geared towards accurate numerical analysis of dynamic interactions. These employ only the visual model and do not necessarily offer real-time performance. The Multimodal VE community is mostly concentrating on representing static objects multimodally for real-time performance. This work is an attempt to bring these two areas closer by employing the contact model developed here to simulate both the haptics and the dynamics and to render multimodal displays in real time. To our knowledge there is no other work which performs real-time simulation of


dynamic objects with gravity and friction properties as well as examines the effect of force feedback.

9.1. Conclusions

The trends in the assembly time in the real and virtual environments are the same for both the handling time and the insertion time. The ratio VE Assembly Time/Real Assembly Time is found to be close to 2.0. These results can be stated in terms of the resolution and fidelity of the Multimodal VE:

Resolution is how subtle a difference between two real tasks a given emulation system can convey. These results show that the resolution of the Multimodal VE simulation is good, as assembly completion times are able to capture subtle differences in clearance, handling distance and other parameters.

Fidelity is whether the task subjectively feels like a particular real task. The fidelity of the Multimodal VE system is low because Multimodal VE and real task completion times are different by a factor of 2.0.

The simulation is able to duplicate jamming situations in the virtual environment, but wedging is not simulated. This is because the stiffness used in the simulation is a couple of orders of magnitude lower than the real material stiffness and the deformation of the objects is not simulated.

These results show promise for the use of virtual environments for design evaluation, but the technology is still under development. Over the next couple of years, as haptic technology matures, the use of haptic devices is likely to become more widespread. In the meantime, the computational speed of workstations will continue to increase, allowing more complex physically based models to run in real time. We foresee the technology in a couple of years being at the point where high fidelity 3D simulations of objects with six degrees of freedom can be put together.

9.2. Other applications of VEDA

Other practical applications to which this system could be extended include:

Teaching robotic assembly: For robotic assembly, the human instructor can demonstrate a task to a robot and record it on the Multimodal VE system. This approach is called programming by example. The robot remembers the steps and can generalize a program from the examples. Advice from the user can influence how this generalization occurs.

Generation of alternate assembly plans: The system knows of a valid assembly sequence (performed interactively by the designer), and it can use this information to generate alternate assembly plans using AI techniques.

Diagnosis: The system can be used as a diagnosis tool. For example, one can measure and keep track of force profiles in the VE without using sophisticated force sensors.

Estimating defect rate: Assembly efficiency can be interpreted as a measure of the potential to achieve further reduction in assembly time by redesign. Recently a significant relationship was reported by Motorola between the assembly efficiency rating of a given product design and the defect rates encountered in production assembly¹. Such a relationship could provide a basis for a general, quantitative, predictive tool².

9.3. Recommendations

Simulations of higher fidelity, higher realism, and lower time lags and delays in the system are desirable:

1. It is desirable to move the 3D visual feedback to the same location as the force feedback so that the visual and haptic images of the hands and the PHANToM used as a probe coincide spatially.
2. Using SCRAMNet with a shared memory architecture or other networking technologies like ATM will allow better distribution of processes and provide more computational power for the dynamic simulation⁴. This in turn will allow use of higher stiffness of the objects in the simulation, increasing the fidelity of the results. Using multiple processors will also allow much better real time performance.
3. Improvements in physically based simulation models, for example a more accurate model of adhesion for finger contact friction, and more efficient methods of detecting and responding to collisions are desirable.
4. Improvements in auditory and haptic rendering will enhance the fidelity of the simulation. Work in the auditory domain includes real time physically based synthesis for friction, collision, and spatialized sounds. In the haptic domain, work will involve exploration of a variety of haptic rendering and representation issues such as object shape, texture and surface compliance.
5. Standard methods for easily implementing physical models that range from high fidelity to coarse approximations by varying simulation parameters need to be developed in the future. It is also necessary to move to descriptions at higher levels. For example, we would like to describe a surface in terms of degrees of roughness, softness and stickiness and let the system choose the simulation compliance, friction and other parameters.
6. The hardware that we have used for a haptic interface is barely adequate, and it is desirable to track multiple points on the finger to allow more accurate physically based modeling. Further improvements in interface devices will make Multimodal VE based design tools more acceptable to designers.

ACKNOWLEDGEMENTS

This work is supported by funds from the Naval Air Warfare Center Training Systems Division. The work was performed using the computational and hardware facilities of the Virtual Environment Technology for Training (VETT) group. These sources of support are gratefully acknowledged. Thanks are also due to Jayaraman Krishnasamy, Professor Thomas Sheridan, Walter Aviles, Nat Durlach, Dr J. F. Lee, Dr Mandayam Srinivasan and Professor John Williams for guidance at different stages of the project.


REFERENCES

1. B., B., Six sigma quality and DFA-DFMA case study: Motorola Inc. Boothroyd and Dewhurst DFM Insight, 1991, 2, 1-3.
2. Barkan, P. and Hinckley, C. M., The benefits and limitations of structured design methodologies. Manufacturing Review, 1993, 6(3), 211-220.
3. Bejczy, A. K. and Handlykken, M., Experimental results with a six-degree-of-freedom force reflecting hand controller. In Proceedings of the Seventeenth Annual Conference on Manual Control (prepared by Jet Propulsion Lab, Caltech, Pasadena), UCLA, Los Angeles, CA, 1981, pp. 465-477.
4. Bohman, T., Shared memory computing architectures for real-time simulation: simplicity and elegance. Internal Paper, SCRAMNet Corporation, 1995.
5. Boothroyd, G. and Dewhurst, P., Product Design for Assembly Handbook. Boothroyd Dewhurst, 138 Main Street, Wakefield, RI 02879, 1991.
6. Buttolo, P., Kung, D. and Hannaford, B., Manipulation in real, virtual and remote environments. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vancouver, BC, 1995, Vol. 5, pp. 4656-4661.
7. Cutkosky, M. R., Robotic Grasping and Fine Manipulation. Kluwer Academic Publishers, Boston, MA, 1985.
8. Durlach, N. I. and Mavor, A. S., eds, Virtual Reality: Scientific and Technological Challenges. National Academy Press, Washington, DC, 1995.
9. Brooks Jr., F. P., Ouh-Young, M., Batter, J. J. and Kilpatrick, P. J., Project GROPE: haptic displays for scientific visualization. Computer Graphics, 1990, 24(4), 177-185.
10. Gilmore, B. J., The simulation of mechanical systems with a changing topology. Ph.D. dissertation, Department of Mechanical Engineering, Purdue University, Lafayette, IN, 1986.
11. Gilmore, B. J. and Cipra, R. J., Simulation of planar dynamic mechanical systems with changing topologies. Part I: Characterization and prediction of the kinematic constraint changes; Part II: Implementation strategy and simulation results for example dynamic systems. ASME Journal of Mechanical Design, 1991, 113(1), 70-83.
12. Gupta, R., Prototyping and design for assembly using multimodal virtual environments. Ph.D. dissertation, Department of Mechanical Engineering, MIT, Cambridge, MA, 1995.
13. Gupta, R. and Krishnasamy, J., Modeling and simulation of dynamic and haptic interactions in multimodal virtual environments. In Proceedings of the International Conference on Virtual Systems and Multimedia, Gifu, Japan, 1995, pp. 161-170.
14. Hannaford, B., Wood, L., McAffee, D. A. and Zak, H., Performance evaluation of a six-axis generalized force-reflecting teleoperator. IEEE Transactions on Systems, Man and Cybernetics, 1991, 21(3), 620-633.
15. Hill, J. W., Two measures of performance in a peg-in-hole manipulation task with force feedback. In Proceedings of the 13th Annual Conference on Manual Control (prepared by NASA and US Department of Transportation), MIT, Cambridge, MA, 1977, pp. 301-309.
16. Ishii, M., Nakata, M. and Sato, M., Networked SPIDAR: a networked virtual environment with visual, auditory and haptic interactions. Presence, 1994, 3(4), 351-359.
17. Jared, G. E. M., Limage, M. G., Sherrin, I. J. and Swift, K. G., Geometric reasoning and design for manufacture. Computer-Aided Design, 1994, 26(7), 528-536.
18. Malek, R., The grip and its modalities. In The Hand, ed. R. Tubiana. W. B. Saunders, Philadelphia, PA, 1981, Chap. 45.
19. Mason, M. T. and Wang, Y., On the inconsistency of rigid-body frictional planar mechanics. In Proceedings of the IEEE International Conference on Robotics and Automation, 1988, pp. 524-528.
20. Massie, T. H. and Salisbury, J. K., The PHANTOM haptic interface: a device for probing virtual objects. In ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (Vol. 55-1), Chicago, IL, 1994, pp. 295-302.
21. Massimino, M. J. and Sheridan, T. B., Teleoperator performance with varying force and visual feedback. Human Factors, 1994, 36(1), 145-157.
22. Nevins, J. L. and Whitney, D. E., Robot assembly research and its future applications. In Symposium on Computer Vision and Sensor-Based Robots, GM Research Laboratories, 1978, pp. 275-321.
23. Rosenberg, L., The use of virtual fixtures to enhance telemanipulation with time delay. In ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (Vol. 49), Louisiana, 1993, pp. 29-36.
24. Salisbury, K., Brock, D., Massie, T., Swarup, N. and Zilles, C., Haptic rendering: programming touch interaction with virtual objects. In ACM Symposium on Interactive 3D Graphics, Monterey, CA, 1995, pp. 123-130.
25. Sheridan, T. B., Telerobotics, Automation and Human Supervisory Control. The MIT Press, Cambridge, MA, 1992.
26. Sheridan, T. B. and Ferrell, W. R., Man-Machine Systems: Information, Control and Decision Models of Human Performance. MIT Press, Cambridge, MA, 1974.
27. Sturges, R. H., A quantification of manual dexterity: the design for an assembly calculator. Robotics and Computer-Integrated Manufacturing, 1989, 6(3), 237-252.
28. Sturges, R. H. and Kilani, M. I., Towards an integrated design for an assembly evaluation and reasoning system. Computer-Aided Design, 1992, 24(2), 67-79.
29. Takahashi, T. and Ogata, H., Robotic assembly operation based on task-level teaching in virtual reality. In Proceedings of the IEEE International Conference on Robotics and Automation, Nice, France, 1992, pp. 1083-1088.
30. Zeltzer, D., Autonomy, interaction, and presence. Presence: Teleoperators and Virtual Environments, 1992, 1(1), 127-132.
31. Zirkler, A., Using VMEbus to help combat helicopters hug the terrain. VMEbus Systems, 1994, 11(1), 1-8.

…working with the Graphics and Modeling Group at Schlumberger Austin Product Center on the creation and interactive manipulation of geometric models for geological frameworks with material properties.

…an acknowledged expert in the area of Part Mating Theory, and holds several patents in the area of part mating methods. Dr Whitney is the co-author of several books and over 80 articles.

David Zeltzer received his MS and PhD degrees in Computer and Information Science from Ohio State University in 1979 and 1984, respectively. He spent 12 years working in research and teaching at MIT, and from 1993 was a Principal Research Scientist at the MIT Research Laboratory of Electronics. He is currently Visualization Technology Leader in the Visual Information Systems Group of the David Sarnoff Research Center. In addition to work in virtual environment technology, his research interests include human/machine interface design, and biological and artificial motor control systems.
