Computer-Aided Design, Vol. 29, No. 8, pp. 565-574, 1997
(c) 1997 Elsevier Science Ltd. All rights reserved. Printed in Great Britain
PII: S0010-4485(96)00092-9    0010-4485/97/$17.00 + 0.00

Virtual reality interfaces for feature-based computer-aided design systems*

S N Trika, P Banerjee† and R L Kashyap‡

A computer-aided design (CAD) system with a virtual reality (VR) interface simplifies the design of complex mechanical parts. To add a design feature (e.g., a hole, slot, or protrusion), the designer can navigate in the part to the appropriate face where he/she wishes to attach the feature, and sketch directly on that face. Besides convenience, this method of feature specification implicitly enforces feature accessibility constraints, and also provides hints to the process-planner regarding the order in which the features may be manufactured. We detail the design of a VR-based prototype CAD system. The system maintains knowledge of part cavities and their adjacencies, and a triangulated boundary representation of an approximating polyhedron. We present incremental, provably correct algorithms for updating this representation as the user edits the part. We also show how this representation supports real-time displays, navigation, and collision detection. The user interface of the CAD system relies on these capabilities to provide the above-mentioned advantages. (c) 1997 Elsevier Science Ltd

Keywords: user interfaces, virtual reality, feature-based design, geometric reasoning, feature extraction

1. INTRODUCTION

Most current Computer-Aided Design (CAD) systems are both feature-based and constraint-based. Feature-based design means that parts are designed by starting with a simple extrusion or revolution (such as a cylinder or a block), and then modified by attaching features such as holes, slots, etc. to the various faces of the existing design. The designer specifies the position, size, and orientation of these features. In constraint-based design, most of these properties of the features are not specified explicitly, but rather as relationships to the existing features (or faces/edges) of the part. These methodologies effectively capture the design intent for most designs, and have essentially replaced the constructive solid geometry interface to design.

INTEL Corporation, 2111 NE 25th Avenue, Hillsboro, OR 97124, USA
*This work was supported in part by the National Science Foundation grant to the Purdue Engineering Research Center for Collaborative Manufacturing, and the NSF Grant DMI 9500396 to the University of Illinois, Chicago
†Department of Mechanical Engineering, University of Illinois, Chicago, IL 60607, USA
‡School of Electrical and Computer Engineering, Purdue University, W. Lafayette, IN 47906, USA
Paper received: 1 September 1995. Revised: 23 October 1996

Despite the simplicity of these concepts, CAD packages have very complex user interfaces, primarily because the displays present only an external view of the design to the user, making it difficult to specify the three-dimensional constraints and to locate the three-dimensional features. Not surprisingly, several CAD packages, for example Pro/Engineer, only allow the user to sketch a cross-section of a feature on a two-dimensional sketching plane, and to place constraints on this sketch. A feature thus specified is attached by selecting a face of the part, and the orientation of the feature with respect to this face. In case the designer wishes to attach a feature to an internal hidden face of the part, he/she is required to select the desired face as the package enumerates several faces. Another shortcoming of these systems is that they do not address detection of features that are inaccessible for manufacturing, and provide little guidance to the process planners about sequencing the manufacturing features.

These difficulties can be alleviated if the user is allowed to navigate in the environment of the designed part. Specifically, the design of an immersive interface for a feature-based system provides three important advantages over the traditional two-dimensional interfaces:

1. It simplifies the design process. An immersive interface permits the user to navigate around the part, into its cavities, and to sketch the desired features in three dimensions on a face that is pointed to. For example, the cylindrical pocket in the example of Figure 1 can be attached to the part by going down the cylindrical through-hole, and sketching with a wand (a three-dimensional mouse) a circle on the cylindrical face. The depth can be spoken, and interpreted by a speech-recognition system. Three-dimensional constraints can also be placed intuitively by selecting the associated entities, and dictating the appropriate dimensions and relations. An immersive interface provides the opportunity for inputting many of the difficult-to-input features and constraints without ever having to exit the environment, so the continuity of the designer's thought and reasoning process is never interrupted.



Virtual reality interfaces: S N Trika et al.

Figure 1 An example part that is difficult to input in the traditional parametric CAD packages, because to create the cylindrical pocket, an unintuitive datum plane must be created on which the pocket's cross-section is sketched. With a virtual reality interface, the user can simply navigate down the cylindrical hole, point with a wand to the appropriate position on the cylindrical face, and sketch a circle. He/she may then assign a depth using the keyboard

2. It implicitly enforces feature accessibility constraints. Requiring the designer to sketch features directly on part surfaces adds a degree of confidence to the manufacturability of the design, because it guarantees accessibility to that surface. The designer is required to reach a surface before sketching on it, in much the same way as a machine tool must reach a surface before machining a feature on it.

3. It provides hints to the process-planner regarding feature-sequencing. One of the problems in process-planning is that of deciding the order in which the various features should be machined. Design of parts using immersive interfaces (as explained above) makes this information explicit: the designer's choice of the order of features typically reflects the quickest (the most convenient) way of 'generating' the part.

The basic requirements of a CAD system providing a 3D interface are: unambiguous and exact representation of parts in its part domain, fast shaded displays of the represented part, fast navigation and collision-detection support, and a user-friendly complete interface that supports the basic advantages explained above.

In this paper, we present the design of a prototype CAD system that has a limited part domain, but effectively demonstrates the above ideas. The part domain consists of all iso-oriented (no inclined faces) parts with prismatic and cylindrical cavities (see Figure 2). The focus of the paper is to present in detail the system architecture and user interface. Central to the system architecture is the issue of part representation, and how it is maintained as the user edits the part. We reason that the requirements of unambiguous representation of all parts in the domain, and of efficient collision detection, are met by maintaining an explicit list of part cavities and their adjacencies. To admit real-time shaded displays, the system also maintains the boundary representation (b-rep) of a triangulated polyhedron that approximates the part.

The paper is organized as follows. The following section discusses related work in the areas of computer-aided design in virtual environments, as well as research results in automatic recognition of part cavities. The third section explains the chosen part representation, and details algorithms for updating this representation when the user edits the part geometry. This admits a fast

Figure 2 Illustration of the part domain. The parts are iso-oriented, and contain only prismatic and cylindrical cavities. These are some typical example parts that are non-trivial to input into a CAD package using standard parametric techniques, but straightforward to specify if the user is offered a VR interface

collision detection algorithm, presented in Section 4, that allows us to implicitly enforce feature accessibility constraints. In Section 5, we detail the system design and the user-interface of the CAD system. Section 6 concludes the paper with a brief summary.

2. LITERATURE REVIEW

There is very little documented research on the application of virtual reality (VR) to computer-aided design of mechanical parts. Goodson, probably the first to address application of VR technology to CAD, describes geometric modeling of surfaces, and addition of hole features, using a glove input device. Yamamoto et al. define parts using a virtual cutter, and provide tactile information to the designer about the required force. More recently, Trego and Magleby propose several ways in which virtual reality interfaces can benefit computer-aided design by simplifying the user interface. These works are conceptual in nature, and do not address either the internal representation and algorithms, or the user-interface details. Similarly, Dani and Gadh only provide a conceptual architecture of their system; they specify the I/O devices, the modes of interaction, and the supporting hardware and software libraries. In contrast, this paper presents the design of a complete system, including details of the internal part representation and the associated algorithms.

The update of the chosen internal representation requires recognizing the part cavities; this is a non-trivial problem because of the presence of compound cavities with interacting primitive volumes. Several




approaches exist to handle the problem, and many of these are discussed in detail in Shah's excellent review. Some of the more recent methods are reviewed in Reference 22. Unfortunately, most existing procedures for this problem fail in the presence of interacting primitive volumes, and systems that provide correct recognition of the parts are computationally very expensive. Techniques for incremental recognition of part cavities overcome the complexity problem by maintaining a set of cavity volumes that are updated whenever the user modifies the design; these systems, however, are not shown to be provably correct, and may misinterpret some parts. In this paper, we present an efficient incremental cavity-recognition algorithm that is provably correct.

The knowledge of part cavities is maintained primarily to allow fast collision detection. Traditional collision detection algorithms (e.g. Reference 2) are not applicable because they check intersection between two polyhedra. Our problem is simply to ensure that the user, typically represented by a 3D point, never collides with the part; this is easily done by exploiting the knowledge of part cavities, as explained in the fourth section.

3. PART REPRESENTATIONS AND INCREMENTAL UPDATES

The prototype system maintains two representations of a part: (1) an explicit representation of the part's cavities and their adjacencies, and (2) a boundary representation of a triangulated polyhedron approximating the part. This section details the two representations and presents algorithms for updating them as the user edits the part. The procedure for updating the knowledge of part cavities is incremental; we reason how the existing part cavities are modified by the user action. The update of the approximating polyhedron relies on the solid modeler.


3.1. Representing the part cavities and their adjacencies

A part's cavities are explicitly represented by its destructive solid geometry (DSG) representation. The DSG representation lists the part's cavities decomposed into primitive volumes. For our part domain, a DSG representation consists of (i) the bounding box of the part, (ii) a set of boxes, SB, representing the part's cuboidal cavities, and (iii) a set of cylinders, SC, representing the part's cylindrical cavities. For example, a DSG representation of the part in Figure 3a is shown in Figure 3b. Figure 3c presents an alternate, but equally valid, DSG representation of the same part. As explained below, the internal DSG representation of the part in our system depends on the user input; this simplifies the updating of the representation, at the possible risk of making it unnecessarily complicated.

We represent the adjacencies between the cavities by means of a graph G. This graph contains a node for each cavity in the DSG representation. An edge exists between two nodes if the corresponding cavities either overlap, or touch. The graph G also contains six additional nodes that represent the region outside the part's bounding box by six half-spaces. Each node stores a pointer to the cavity volume it represents (a box, a cylinder, or a half-space). For convenience, we refer to nodes representing half-spaces as half-space nodes, and similarly define SB-nodes and SC-nodes. For example, the graphs corresponding to the DSG representations of Figures 3b and 3c are shown in Figures 4a and 4b respectively. The rectangular nodes are the half-space nodes, the circular nodes are the SB-nodes, and the double-circular nodes are the SC-nodes.

The cavities and their adjacencies are thus represented by the set {SB, SC, G}.
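As a concrete illustration of this representation (the class and function names below are our own, not taken from the paper's implementation), the set {SB, SC, G} can be sketched in Python, with one adjacency-graph node per cavity volume plus the six half-space nodes describing the exterior of the bounding box:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Box:
    minpt: tuple   # (x, y, z) minimum corner
    maxpt: tuple   # (x, y, z) maximum corner

@dataclass(frozen=True)
class Cylinder:
    center: tuple  # centre of the bottom circular face
    radius: float
    height: float  # axis parallel to z in the iso-oriented domain

@dataclass(frozen=True)
class HalfSpace:
    axis: int      # 0, 1 or 2 for the x, y or z axis
    bound: float
    below: bool    # True: coordinate <= bound; False: coordinate >= bound

@dataclass
class CavityGraph:
    bbox: Box
    SB: list = field(default_factory=list)   # cuboidal cavities
    SC: list = field(default_factory=list)   # cylindrical cavities
    adj: dict = field(default_factory=dict)  # node -> set of adjacent nodes

    def add_node(self, vol, neighbors):
        # insert a cavity node and wire it to its interacting neighbours
        self.adj[vol] = set(neighbors)
        for n in neighbors:
            self.adj[n].add(vol)

def make_graph(bbox):
    """Initial graph for a solid block: no cavities yet, only the six
    half-space nodes representing the region outside the bounding box."""
    g = CavityGraph(bbox)
    halves = [HalfSpace(a, bbox.minpt[a], True) for a in range(3)] + \
             [HalfSpace(a, bbox.maxpt[a], False) for a in range(3)]
    for h in halves:
        g.adj[h] = set()
    for h1 in halves:          # half-spaces on different axes touch
        for h2 in halves:      # along the edges of the bounding box
            if h1.axis != h2.axis:
                g.adj[h1].add(h2)
    return g
```

Note that each half-space node starts out adjacent to the four half-spaces on the other two axes, so a virtual user leaving the bounding box on one side can slide around it to any other side.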

Figure 3 (a) An example part, with selected b-rep entity specifications, (b) a DSG representation of the part in (a), shown by the front view, (c) another, equally valid, DSG representation of the part in (a)




Figure 4 (a) The graph describing the cavity adjacencies, for the DSG representation in Figure 3b; the square nodes are the half-space nodes, the circular nodes are the SB-nodes, and the double-circular nodes are the SC-nodes, (b) the graph describing the cavity adjacencies, for the DSG representation in Figure 3c

3.2. Incremental recognition of part cavities

The user operations that edit a part are addition of new features and constraints, modification of existing features and constraints, and the undo operation. Each of these operations modifies the part essentially by either subtraction or addition of a cuboidal or cylindrical primitive volume. We explain below how the subtraction and addition actions affect the cavity information of the part.

3.2.1. Subtraction of a primitive volume
Subtraction of a cylindrical volume, C, introduces a cylindrical cavity in the part. The update of the cavity representation {SB, SC, G} thus requires addition of C to the set of cylindrical cavities SC. A new node, corresponding to the introduced cavity, is introduced into the adjacency-describing graph G. Next, we link the new node to the nodes for other cavity volumes by simply checking overlap and contact between C and the cavities represented by the other nodes. Figure 5 illustrates an example showing the attachment of a cylindrical cavity. The user specifies a cylindrical through-hole attached to one of the ribs of the slot (Figure 5a). This corresponds to removal of a cylindrical volume from the part. Accordingly, the cylindrical volume C is added to the SC set, and a new node corresponding to the cylindrical volume is added to the adjacency-describing graph. The dashed links in Figure 5b are the new links that are added to the adjacency-describing graph. The link between the node for the half-space (X >= 6) and C, for example, indicates that the two volumes interact, and that it is possible for the user to navigate from either of these cavities into the other.
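The overlap-and-contact check that drives this linking step can be sketched as follows (a simplified illustration under our own naming, not the paper's code; cylinder axes are assumed parallel to z, and closed intervals make the test report mere contact as well as overlap):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    minpt: tuple   # (x, y, z) minimum corner
    maxpt: tuple   # (x, y, z) maximum corner

@dataclass(frozen=True)
class Cylinder:
    center: tuple  # centre of the bottom face; axis parallel to z
    radius: float
    height: float

def intervals_touch(a0, a1, b0, b1):
    # closed-interval test: true when the intervals overlap or just touch
    return a0 <= b1 and b0 <= a1

def box_cylinder_interact(b, c):
    """True iff box b and z-aligned cylinder c overlap or touch."""
    cx, cy, cz = c.center
    if not intervals_touch(b.minpt[2], b.maxpt[2], cz, cz + c.height):
        return False
    # squared distance from the circle centre to the box's xy rectangle
    dx = max(b.minpt[0] - cx, 0.0, cx - b.maxpt[0])
    dy = max(b.minpt[1] - cy, 0.0, cy - b.maxpt[1])
    return dx * dx + dy * dy <= c.radius * c.radius

def subtract_cylinder(SC, adj, C, interacts):
    """Record new cylindrical cavity C: O(N) interaction checks link its
    node to every existing node (cavities and half-spaces alike)."""
    SC.append(C)
    adj[C] = {n for n in adj if interacts(n, C)}
    for n in adj[C]:
        adj[n].add(C)
```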

Subtraction of a cuboidal volume from the part is handled in a similar manner. The cuboidal volume is first


Figure 5 (a) The example part of Figure 3a. The user specifies attachment of a cylindrical through-hole feature (dashed) on one of the ribs of the slot, (b) the updated cavity-adjacency graph, assuming the initial DSG representation of Figure 3b. A new node corresponding to the cylindrical cavity is inserted. Since the cavity interacts with two half-space nodes and one SB-node, links to these nodes are also introduced (dashed)

added to SB. A new node is inserted into G, and links are added between the new node and existing nodes as required, by checking overlap and contact of the corresponding volumes.

Algorithm Expand-Bounding-Box (BBOX, SB, G, B)
    Update-By-Adding-minX-Cavity (BBOX, SB, G, B)   ;see below
    Update-By-Adding-minY-Cavity (BBOX, SB, G, B)   ;similar to above
    Update-By-Adding-minZ-Cavity (BBOX, SB, G, B)   ;similar to above
    Update-By-Adding-maxX-Cavity (BBOX, SB, G, B)   ;similar to above
    Update-By-Adding-maxY-Cavity (BBOX, SB, G, B)   ;similar to above
    Update-By-Adding-maxZ-Cavity (BBOX, SB, G, B)   ;similar to above
    BBOX = Box (min (BBOX.minpt, B.minpt), max (BBOX.maxpt, B.maxpt))

Algorithm Update-By-Adding-minX-Cavity (BBOX, SB, G, B)
    if (B.minx >= BBOX.minx)
        return
    endif
    Let xmin-node = the half-space node in G for (X <= BBOX.minx)
    Update xmin-node to (X <= B.minx)
    Let New-Box = Box ((B.minx, BBOX.miny, BBOX.minz), B.maxpt)
    Add New-Box to SB
    Let n = a new graph node, for New-Box
    Add to G the link (n <-> xmin-node)
    for each neighbor n2 of xmin-node
        if (n2 is not a half-space node)
            Remove from G the link (n2 <-> xmin-node)
        endif
        Add to G the link (n <-> n2)
    endfor

Figure 6 The algorithm that reduces the problem of addition of a cuboidal protrusion to a part, to the problem of addition of a cuboidal volume within the bounding box of the part. This is done by expanding the bounding box, and adding some cavities. BBOX: the part's bounding box, SB: a set of boxes representing the cuboidal cavities of the part, G: the graph describing the adjacencies between the part cavities, B: the box to be added to the part (the protrusion volume)
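A direct Python transcription of the min-x case may clarify the bookkeeping (the names and the Node wrapper are ours, not the paper's; the New-Box extent follows the listing in Figure 6 as far as it is legible in the original):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    minpt: tuple
    maxpt: tuple

class Node:
    """Adjacency-graph node wrapping a cavity volume or a half-space."""
    def __init__(self, kind, payload):
        self.kind = kind        # 'half-space', 'box' or 'cylinder'
        self.payload = payload

def update_by_adding_minx_cavity(bbox, SB, adj, xmin_node, B):
    """If protrusion B extends past the bounding box on the min-x side,
    move the min-x half-space plane out to B.minx and record the newly
    enclosed empty volume as a cuboidal cavity."""
    if B.minpt[0] >= bbox.minpt[0]:
        return None                           # B does not stick out here
    xmin_node.payload = ('X <=', B.minpt[0])  # move the half-space plane
    new_box = Box((B.minpt[0], bbox.minpt[1], bbox.minpt[2]), B.maxpt)
    SB.append(new_box)
    n = Node('box', new_box)
    adj[n] = {xmin_node}
    old_nbrs = list(adj[xmin_node])
    adj[xmin_node].add(n)
    for n2 in old_nbrs:
        if n2.kind != 'half-space':
            # a cavity no longer borders the (moved) half-space plane
            adj[n2].discard(xmin_node)
            adj[xmin_node].discard(n2)
        adj[n2].add(n)              # the new cavity borders old neighbours
        adj[n].add(n2)
    return n
```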




3.2.2. Addition of a primitive volume
Addition of a primitive volume must ensure that the part remains in the part domain. Thus, for example, a cylindrical protrusion cannot be added to the part, because such an operation results in a 'positive' cylindrical face, and thus in a part which is not in the part domain. Recall that the part domain for this prototype system consists of iso-oriented parts with cuboidal and cylindrical cavities. Addition of a primitive volume V is allowed only if existing cavities that interact with V are either contained in V, or can be split such that the portion outside V can be represented as a collection of cylinders and boxes. In general, addition of a cuboidal volume is more complex than addition of a cylindrical volume, because


Figure 7 An example illustrating modification of the adjacency-describing graph, when a cuboidal protrusion is attached to the part. (a) Cross-section of an example part. The user specifies addition of the dashed cuboidal protrusion B, (b) the DSG representation of the part, and the adjacency-describing graph, prior to the protrusion attachment, (c) step 1: eliminate nodes corresponding to the cavities contained in the protrusion; the node for B2 is removed, (d) step 2: identify nodes that touch or overlap the protrusion (shaded nodes), (e) step 3: split the identified nodes so as not to contain any portion of the protrusion, (f) step 4: re-evaluate adjacencies of the new nodes to the old neighbors. The resulting graph describes the adjacencies of the cavities in the resulting part



the cuboidal volume can split both cylindrical and cuboidal cavities, and can expand the bounding box as well. For example, addition of a cylindrical protrusion that expands the bounding box is simply not permitted (as explained earlier). We detail next how the representation of the cavities {SB, SC, G} is updated when a cuboidal protrusion is attached to the part.

Addition of a cuboidal volume B to the part may shrink the existing cavities, expand the bounding box (resulting in new cavities that must be described), and change the adjacencies between the existing cavities. To simplify the update problem, we first reduce the problem to that of addition of a box within the bounding box (by expanding the bounding box), and then add the cuboidal protrusion B.

Given a part P, with bounding box BBOX, cuboidal cavities SB, cylindrical cavities SC, and the corresponding adjacency-describing graph G, expansion of the bounding box requires (i) setting BBOX to the expanded volume, (ii) adding to SB up to six more boxes representing the volume in the new BBOX that is not in the old BBOX, and (iii) updating G by adding nodes for the added cavities, severing links that connect the half-space nodes to the SB-nodes and SC-nodes, and adding new links representing the new adjacencies. Addition of the new links does not require intersection tests, but follows from geometric reasoning dictating how the previous links are modified. Figure 6 lists a pseudo-code description of the procedure explained above.

The above procedure results in a representation of the part P before addition of B, such that the bounding box BBOX contains both P and B. This simplifies the generation of the representation of the new part (P + B). First, each cavity volume is checked for inclusion in B. If a volume is included in B, we reason that it does not represent a cavity in the new part, and it is thus removed from SB or SC (and its corresponding node is removed from G). Next, we determine those cavity volumes that can possibly be modified by addition of B: these volumes either touch or overlap B. For each such cavity volume Vi, we (i) determine the node n corresponding to Vi, (ii) find the set S of nodes neighboring n, (iii) remove the node n and the links incident on it, (iv) split Vi into new volumes that lie within Vi but are outside B, (v) add nodes to G corresponding to these new volumes, and (vi) compute the adjacency between these new nodes and the nodes in S. Note that the procedure essentially finds those nodes that must be modified, modifies them as necessary (splits them if needed), and relinks them to the graph; the operation is local, and hence avoids complex computations. An example illustrating the procedure is shown in Figures 7a-7f.
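Step (iv), splitting a cuboidal cavity Vi into new volumes that lie within Vi but outside B, is a standard axis-aligned box subtraction. A sketch (illustrative code of our own; boxes are written as (minpt, maxpt) tuples) peels up to six slabs off Vi, one per face of the overlap region:

```python
def box_difference(V, B):
    """Split axis-aligned box V into up to six boxes covering V minus B.
    Returns [] if B fully contains V, and [V] if they do not overlap."""
    lo = [max(V[0][i], B[0][i]) for i in range(3)]
    hi = [min(V[1][i], B[1][i]) for i in range(3)]
    if any(lo[i] >= hi[i] for i in range(3)):
        return [V]                      # no volumetric overlap
    pieces = []
    vlo, vhi = list(V[0]), list(V[1])
    for i in range(3):
        if vlo[i] < lo[i]:              # slab of V below B along axis i
            p_lo, p_hi = vlo[:], vhi[:]
            p_hi[i] = lo[i]
            pieces.append((tuple(p_lo), tuple(p_hi)))
            vlo[i] = lo[i]
        if hi[i] < vhi[i]:              # slab of V above B along axis i
            p_lo, p_hi = vlo[:], vhi[:]
            p_lo[i] = hi[i]
            pieces.append((tuple(p_lo), tuple(p_hi)))
            vhi[i] = hi[i]
    return pieces
```

The pieces are disjoint and axis-aligned, so each can re-enter SB directly as a cuboidal cavity.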

As mentioned earlier, update of the representation is simpler when a cylindrical volume C is added; the procedure is very similar to that detailed above, and is therefore omitted.

3.2.3. Complexity and correctness of incremental cavity recognition
The proposed representation of the cavities and their adjacencies is {SB, SC, G}, as explained in Section 3.1. The additional storage required is thus quadratic in the number of part cavities, because of the adjacency information. However, if the part does not have a large number of cavity interactions (as is typical), the storage is linear in the number of part cavities.

Removing a cylindrical or cuboidal volume from the part takes time O(N), where N is the number of cavities in the representation, because the removed volume is simply checked against the others for possible interaction. Determining whether a specified volume can be added to the part without violating the part-domain assumptions takes O(N) time, since interaction with each existing cavity is considered individually. If the specified volume can be added, an additional O(N^2) operations are required, since each overlapping cavity Vi must be re-evaluated for contact with Vi's neighbors. However, in typical parts, a cavity volume has only a small number of interacting cavities, and thus the quadratic upper bound is very loose. Thus, we expect linear-time performance for most of the addition operations as well. Finally, note that we are willing to expend linear, and even quadratic, time for these operations because the user can wait while a feature is attached before continuing operations; we require real-time speeds only for navigation (Section 4).

We emphasize that the procedure outlined above for updating the DSG representation in case of addition/deletion of features is provably correct. This follows from simple geometric reasoning that dictates the alteration of the part cavities when simple volumes are added to or subtracted from the part. Correctness and efficiency are the unique characteristics that distinguish this cavity-recognition approach.

3.3. Triangulated boundary representation of an approximating polyhedron

The boundary representation (b-rep) of a part can be obtained directly from its DSG representation by a sequence of regularized subtractions, and is therefore not maintained. However, we do store explicitly the triangulated b-rep of a polyhedral approximation of the part, for this is crucial in providing real-time VR displays. A polyhedral approximation of the part is maintained by a solid modeler (see Section 5). For fast displays, this representation is triangulated. The stored data includes a list of vertices. Each face is identified by three indices that refer to vertices. The indices are ordered such that the face interior remains to the left as the indexed vertices are traversed (this is quite standard). The part is described as a set of faces.

Updating the triangulated polyhedron is quite straightforward. As mentioned, the solid modeler updates the polyhedral approximation. Most of the part faces are not modified by an update of the solid, and thus the existing triangulation offers partial triangulation results. The new/modified faces of the part, however, have to be re-triangulated.
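A minimal sketch of this indexed storage (illustrative only, not the paper's data structures): vertices are shared across triangles, each triangle stores three vertex indices, and the consistent winding order lets an outward normal be recovered from a cross product:

```python
def face_normal(vertices, face):
    """Un-normalised outward normal of a consistently wound triangle."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)

# the top face (z = 3) of a 6 x 4 x 3 block, split into two triangles;
# viewed from outside (+z), each vertex loop runs counter-clockwise,
# keeping the face interior to the left as the indices are traversed
vertices = [(0, 0, 3), (6, 0, 3), (6, 4, 3), (0, 4, 3)]
faces = [(0, 1, 2), (0, 2, 3)]
```

When only one face of the solid changes, only the triangles indexing its vertices need regeneration; the rest of the `faces` list is reused as-is.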

4. ENFORCING NAVIGATION CONSTRAINTS IMPOSED BY THE PART CAVITIES

As mentioned in the Introduction, the existing cavities in a part represent constraints as to where the designer is allowed to navigate and place a new feature. The CAD system is responsible for enforcing these constraints. It does so simply by ensuring that the designer, typically represented by a three-dimensional viewpoint Q, remains outside the part or in one of the part's cavities. The problem thus essentially reduces to the point-inclusion problem: does the point Q lie inside, on the boundary, or outside the part? Since Q is permitted to lie on the boundary as well as outside the object, we wish to classify a point that lies inside the part as IN, and all others as OUT (i.e. points on the boundary will also be classified as OUT).

A simple test that solves the problem, given the cavities, is to enumerate the part cavities and test whether any of them contains the point Q. If any cavity contains Q, then Q is OUT; otherwise it is IN. This requires O(N) operations, where N is the number of cavities in the representation. Thus, as parts become complex (i.e. as the number of cavities, N, increases), this simple approach becomes unacceptable for real-time navigation.
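The linear scan can be sketched as follows, assuming a cavity type with an O(1) containment test; here a hypothetical axis-aligned box stands in for the prismatic and cylindrical cavities of the paper's part domain:

```cpp
#include <vector>

struct Point { double x, y, z; };

// Hypothetical axis-aligned box cavity. The system's cavities are prismatic
// or cylindrical volumes, but a box suffices to show the O(N) scan.
struct Cavity {
    Point lo, hi;
    bool contains(const Point& q) const {
        return q.x >= lo.x && q.x <= hi.x &&
               q.y >= lo.y && q.y <= hi.y &&
               q.z >= lo.z && q.z <= hi.z;
    }
};

// O(N) test: Q is OUT exactly when some cavity contains it;
// otherwise Q lies inside the part material and is IN.
bool isOut(const std::vector<Cavity>& cavities, const Point& q) {
    for (const Cavity& c : cavities)
        if (c.contains(q)) return true;   // Q lies in a cavity -> OUT
    return false;                         // Q lies in the part -> IN
}
```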

Exploiting the adjacency information between the cavities, however, leads to a faster point inclusion test. Under the assumption that the sampling rate of the user position is faster than the user motion, as is typical in such applications, we reason that if the virtual user is in a cavity at sampling time t, the virtual user either remains in that cavity at sampling time t + 1, or moves into an adjacent cavity. Thus, we maintain at all times a current cavity: the cavity in which the virtual user is. The user may be in several cavities at the same time since the cavities may overlap, but we require only one of these for our application. Thus, to determine if Q, the position of the virtual user, is IN or OUT, we simply check the current cavity, and if needed its adjacent cavities. If the user remains in the cavity, no representation update is required, and Q is still OUT. If the user moves into an adjacent cavity, we simply update the current cavity, and Q is still classified as OUT. Finally, if the user is neither in the current cavity, nor in any adjacent cavity, we decide that Q is now IN the part, and that a collision has occurred. In this case, we simply shift (translate) the part such that the user-move appears to have no effect.
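One sampling step of this incremental test can be sketched as below. The names are ours, a box again stands in for a cavity volume, and adjacency is stored as indices into a shared cavity array, assumptions made for illustration only:

```cpp
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

struct CavityNode {
    Point3 lo, hi;                        // box stand-in for the cavity volume
    std::vector<std::size_t> adjacent;    // indices of adjacent cavities
    bool contains(const Point3& q) const {
        return q.x >= lo.x && q.x <= hi.x &&
               q.y >= lo.y && q.y <= hi.y &&
               q.z >= lo.z && q.z <= hi.z;
    }
};

// Returns true (Q is OUT, no collision) if Q is still in the current cavity
// or has moved to an adjacent one (updating 'current'); false means neither
// holds, so Q is now IN the part and a collision has occurred.
bool sampleStep(const std::vector<CavityNode>& cavities,
                std::size_t& current, const Point3& q) {
    if (cavities[current].contains(q)) return true;        // same cavity
    for (std::size_t n : cavities[current].adjacent)
        if (cavities[n].contains(q)) { current = n; return true; }  // moved
    return false;                                          // collision
}
```

Each step touches only the current cavity and its neighbours, which is the source of the expected O(1) cost per sample.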

The number of operations required for collision detection is proportional to the number of nodes adjacent to the current node, which is typically small. We therefore expect constant-time (O(1)) collision detection.

Finally, observe that, in computer graphics and in virtual reality, images of three-dimensional objects are generated by setting the front clipping plane at a non-zero distance δ from the eye. Above, we allow the user to come arbitrarily close to the part boundary, essentially implying a zero distance to the front clipping plane. Though we can modify the above procedure to track the distance of Q to the part surface, and announce a collision when this distance becomes less than δ, we simply choose to set the front clipping plane at a very small distance from the eye, and use the above-explained procedure for collision detection.

5. SYSTEM ARCHITECTURE AND USER INTERFACE

The prototype system relies heavily on the part representation and algorithms explained in the previous sections. In this section, we present the system architecture and explain the user-interface.

5.1. System architecture

The system is implemented in C++ and uses an object-oriented methodology for system design. Each class in the system supports default initialization, printing, copying, comparison, and error-handling functions. Figure 8 shows the system architecture.

The bottom layer of classes consists of basic data structures such as an Ordered-Collection, Graph, and Matrix. An Ordered-Collection represents an ordered collection of arbitrary type elements (using the C++ template mechanism). The class supports several means of insertion/deletion of elements, searching for the elements, comparisons, sorting, and reversal. The user thinks of an Ordered-Collection as an advanced array of unspecified size that also maintains the number of elements. The Graph class represents an arbitrary labelled graph. Labels can be attached to nodes as well as edges, and are typically C++ pointers. Various methods for creating/updating the graph are provided. The Matrix class represents an arbitrary matrix, and is used in the upper layers to represent transformation matrices.

Figure 8 System architecture: the user interface is built on the part representation layer, which in turn rests on 3D support classes (Point3D, Line3D, Edge3D, Plane3D, Box, Cylinder, Polygon3D, Face3D, ...), 2D support classes (Point, Line, Edge, Polygon, Face, ...), and basic data structures (Ordered-Collection, Priority-Queue, Graph, Matrix, ...). (See Section 5.1 for detailed explanation)
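The Ordered-Collection idea can be sketched as a thin templated wrapper over a growable array; this is our minimal illustration of the interface described above, not the system's actual class, which supports more insertion/deletion variants:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal sketch of an Ordered-Collection: a templated, growable array
// that tracks its element count and offers search, sort, and reversal.
template <typename T>
class OrderedCollection {
    std::vector<T> items;
public:
    void insert(const T& x)          { items.push_back(x); }
    std::size_t size() const         { return items.size(); }
    const T& at(std::size_t i) const { return items[i]; }
    bool find(const T& x) const {
        return std::find(items.begin(), items.end(), x) != items.end();
    }
    void sort()    { std::sort(items.begin(), items.end()); }
    void reverse() { std::reverse(items.begin(), items.end()); }
};
```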

The two-dimensional support layer, as shown in Figure 8, contains classes that represent points, lines, edges, polygons, etc. that lie in the XY plane. The classes provide a variety of methods, including various containment and intersection tests. Triangulation of polygons, based on the presentation in Reference 13, is also supported. The three-dimensional support layer consists of classes that represent 3D points, lines, edges, polygons, etc., and these essentially mirror their 2D counterparts. Again, various intersection/containment tests are implemented. The layer also contains classes for the primitive volumes (box and cylinder), and transformation matrices.

The part representation layer is central to the system architecture (see Figure 8). The layer consists of four classes, and the system keeps exactly one instance of each of these four classes:

The Polyhedron3D class represents an arbitrary untriangulated polyhedron. The class is included primarily to isolate the dependency on the TWIN solid modeling library3. TWIN provides functions to approximate primitive volumes by polyhedra, and to perform solid operations on these and on resulting polyhedra. Any other solid modeler that provides similar functionality can be used, and can be attached by simply re-implementing the methods in the Polyhedron3D class. The class updates the maintained b-rep of an approximating polyhedron with help from the solid modeler.

The Triangulated-Polyhedron class supports the part representation explained in Section 3.3. The class updates its state by querying the Polyhedron3D class; it re-triangulates any faces that are modified/created, and deletes the triangulations for those faces that are removed.

The DSG-Representation class maintains an explicit representation of part cavities and their adjacencies using the {SH, SC, G} triple, as explained in Section 3.1. An update message (see later) causes incremental modifications to the representation using the algorithms presented in Section 3.2. The user interface interacts constantly with this class to support collision detection (Section 4).

The Scene-Graph class provides the interface to the Open-Inventor 3D graphics library24, just as the Polyhedron3D class provides the interface to the TWIN solid modeler. If another graphics library is used, only the implementation of this class needs to be modified. The Scene-Graph class constructs a part description (called a 'scene graph') specific to the Open-Inventor library by querying the maintained Triangulated-Polyhedron. Lights are placed in the center of cavities; this information is obtained by querying the DSG-Representation. The choice of the color of the lights is explained in Section 5.2.

To provide uniformity, the classes support update methods that respond to update messages. When the user specifies a change to the part, the user-interface layer generates an update message that specifies the change. The update message is sent, in turn, to each of the four class instances.
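The update-message pattern can be sketched as below. The message contents and the class bodies are placeholders of our own devising (the real classes perform the work noted in the comments); only the broadcast shape, one message sent in turn to each of the four instances, is taken from the text:

```cpp
#include <string>
#include <vector>

// Placeholder update message; the real message carries the feature's
// position, size, and orientation as well.
struct UpdateMessage {
    std::string feature;   // e.g. "cylindrical hole"
};

struct Representation {
    virtual void update(const UpdateMessage&) = 0;
    virtual ~Representation() = default;
};

struct Polyhedron3D : Representation {
    int updates = 0;
    void update(const UpdateMessage&) override { ++updates; }  // calls solid modeler
};
struct TriangulatedPolyhedron : Representation {
    int updates = 0;
    void update(const UpdateMessage&) override { ++updates; }  // re-triangulates faces
};
struct DSGRepresentation : Representation {
    int updates = 0;
    void update(const UpdateMessage&) override { ++updates; }  // edits cavity graph
};
struct SceneGraph : Representation {
    int updates = 0;
    void update(const UpdateMessage&) override { ++updates; }  // rebuilds scene graph
};

// The user-interface layer broadcasts one message to all four instances.
void broadcast(std::vector<Representation*>& reps, const UpdateMessage& m) {
    for (Representation* r : reps) r->update(m);
}
```

Routing every edit through a single message type keeps the four representations consistent without any instance knowing about the others.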

The user-interface layer of the system architecture provides the functionality detailed in Section 5.2. To provide this functionality, it relies heavily on the CAVE (CAVE Automatic Virtual Environment) library for navigation and hardware control4,6. The CAVE library also addresses the shared-memory, multitasking and multiprocessing needs of our application. This library is widely used for developing virtual reality applications and is available in the public domain (for details, please contact the authors). The speech recognition is performed on a dedicated personal computer. The user-independent recognition system is first trained on a small set of phrases (such as 'attach cylindrical hole', 'add box', 'modify vertex to', etc.). The system already recognizes spoken numbers. Speech recognition provides an easy way for the user to specify the required operations; the operands are specified as detailed in the following section.

5.2. User interface

The CAVE library supports three VR interfaces: (1) the CAVE environment, which consists of four 10 × 10 foot screens that surround the user; (2) the ImmersaDesk environment, which resembles a drafting table and consists of a 4 × 5 foot rear-projected screen at a 45° angle; and (3) a CAVE simulator that operates on any workstation supporting the OpenGL standard12. The library also provides us with the tools needed to navigate through the parts in immersive virtual reality using an Ascension head-tracking device and a wand (a 3D mouse) using the Ascension Flock of Birds tracking system.

Figure 9 shows a user interacting with our system using the ImmersaDesk environment. The user is wearing stereo glasses (LCD shutter glasses). The stereo glasses are synchronized using stereo emitters so that the left eye is turned 'off' when the screen displays the part image with respect to the right eye, and vice versa. The stereo emitters are also used to provide to the system the information required to track user position and orientation. In his hand, the user is holding a wand. The position and orientation of the wand are also tracked. The wand has three buttons, and the system recognizes their status (up/down) as well.

Figure 9 The user-interface of the VR-based CAD system. The user is wearing stereo glasses and is holding a wand (a 3D mouse). The image on the screen appears as a 3D box to the user. The 'line' that goes from the left of the screen to the part appears to the user as a ray emanating from the wand to the solid. The user sketches a feature on the part surface using the wand

Initially the system displays a blank screen, and prompts the user to specify a box by selecting two 3D points. The user selects the first point by moving the wand to the desired position and clicking the left mouse button. As the user moves the wand to select the second point, a 3D box is rubber-banded between the first point and the current wand position. Clicking the left button again selects the second point and completely specifies a box. The user is now free to navigate around this box, and specify new features as explained below.

To specify a cuboidal protrusion, the user issues the command 'add box' verbally. The command is interpreted by the speech recognition module. The user is then prompted to specify a rectangle by selecting two points. A solid ray (a thin cylinder) that emanates from the end of the wand is displayed. The wand is used to point in three-dimensional space, much like a laser pen is used in typical presentations (if the solid ray hits the current part, only the portion of the ray between the wand and the first intersection point with the solid is displayed). When the user clicks the left button, a point is selected: the selected point is the wand position if the emanated ray does not intersect the current solid, and it is the first point of intersection otherwise. The selected point is highlighted by placing a small sphere at it, and the user is prompted to select the second point to specify the rectangle. The user selects the second point in exactly the same manner as the first. Once the two points are selected and highlighted, the solid ray display is turned off (the ray provides a visual cue to the user to select a point). The user is now prompted to speak the height of the protrusion. This completely specifies the cuboidal protrusion.
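The pick rule, wand position when the ray misses, first intersection when it hits, can be sketched as follows. For illustration we intersect the ray with a single axis-aligned box via the standard slab test; the real system intersects the ray with the triangulated part, and all names here are ours:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Entry parameter t >= 0 where the ray o + t*d first enters the box
// [lo, hi], or -1.0 if the ray misses the box (standard slab test).
double rayBoxEntry(const Vec3& o, const Vec3& d, const Vec3& lo, const Vec3& hi) {
    double t0 = 0.0, t1 = 1e30;
    for (int i = 0; i < 3; ++i) {
        if (std::fabs(d[i]) < 1e-12) {
            if (o[i] < lo[i] || o[i] > hi[i]) return -1.0;  // parallel, outside slab
        } else {
            double a = (lo[i] - o[i]) / d[i];
            double b = (hi[i] - o[i]) / d[i];
            if (a > b) std::swap(a, b);
            t0 = std::max(t0, a);
            t1 = std::min(t1, b);
            if (t0 > t1) return -1.0;                       // slab intervals disjoint
        }
    }
    return t0;
}

// Pick rule: the wand position if the emanated ray misses the solid,
// the first point of intersection otherwise.
Vec3 pick(const Vec3& wand, const Vec3& dir, const Vec3& lo, const Vec3& hi) {
    double t = rayBoxEntry(wand, dir, lo, hi);
    if (t < 0.0) return wand;
    return {wand[0] + t * dir[0], wand[1] + t * dir[1], wand[2] + t * dir[2]};
}
```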

Other features (cylindrical protrusions, cuboidal cavities, and cylindrical cavities) are placed in a very similar, intuitive manner. In the case of the cylindrical features, the first selected point specifies the center, and the second selected point specifies the radius of a circle, to which a depth or height is attached using spoken commands.

Note that the interface described above provides only a 3D sketch to the system, because exact 3D points are difficult to specify as explained above. To handle this problem, we allow the user to modify a selected 3D point by verbally specifying its coordinates [e.g. the verbal command 'modify vertex to one zero zero' sets the previously selected point to (1, 0, 0)].

Newly introduced cavities are colored by the system by choosing the color of the light placed in the cavity. The color of the light is used to signify the difficulty of reaching that cavity. Those cavities that can be reached directly from outside the part have a white light in them. As we go deeper into the part, the cavities become redder and redder, signifying the difficulty in manufacturing them.
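One way to realize this coloring, sketched here under our own assumptions, is a breadth-first search over the cavity-adjacency graph from the exterior: each cavity gets a depth (the number of cavities one must pass through from outside), depth 0 is lit white, and deeper cavities lose green and blue, turning redder. The halving factor per level is our choice, not the paper's:

```cpp
#include <cstddef>
#include <queue>
#include <vector>

struct RGB { double r, g, b; };

// BFS depths over the cavity-adjacency graph, measured from the node
// representing the exterior of the part. Unreachable cavities stay -1.
std::vector<int> cavityDepths(const std::vector<std::vector<std::size_t>>& adj,
                              std::size_t exterior) {
    std::vector<int> depth(adj.size(), -1);
    std::queue<std::size_t> q;
    depth[exterior] = 0;
    q.push(exterior);
    while (!q.empty()) {
        std::size_t u = q.front(); q.pop();
        for (std::size_t v : adj[u])
            if (depth[v] < 0) { depth[v] = depth[u] + 1; q.push(v); }
    }
    return depth;
}

// Depth 0 -> white light; each level deeper halves the green and blue
// components, so hard-to-reach cavities glow red.
RGB lightColor(int depth) {
    double f = 1.0;
    for (int i = 0; i < depth; ++i) f *= 0.5;
    return {1.0, f, f};
}
```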

6. CONCLUSIONS AND FUTURE WORK

This paper presents the design and interface of a VR-based prototype CAD system. The design relies on explicit representation of part cavities and adjacencies to provide an unambiguous representation of parts that also supports fast collision detection. The system also maintains a b-rep of a triangulated polyhedral approximation of the part to enable fast displays and navigation. The algorithms required for maintaining and exploiting the representations are both simple and effective, and are detailed in this paper. The system provides an easy-to-use immersive interface for feature-based design of mechanical parts. It implicitly enforces feature accessibility constraints and provides hints to the process planner about feature sequencing, as explained in the Introduction.

The domain of parts considered in this paper is that of iso-oriented parts with prismatic and cylindrical cavities; we restricted our part domain for ease of exposition and speed of development. The part domain is easily expanded to include other types of cavities (such as spherical, conical, toroidal, etc.), for the fundamental concepts remain the same. The iso-oriented assumption can also be relaxed if simple interference checks between inclined primitive volumes are developed. The restriction to cavities (for example, we do not permit cylindrical protrusions) can be addressed by including 'negative' primitive volumes.

Some of the more fundamental questions that we plan to address include the handling of sketches drawn on curved surfaces, the specification of 3D constraints in VR, and providing force feedback to the user when sketching on a surface.

REFERENCES

1. Anderl, R. and Mendgen, R., Parametric design and its impact on solid modeling applications. In Proceedings of the Third Symposium on Solid Modeling and Applications, Salt Lake City, UT, 1995, pp. 1-11.

2. Bouma, W. J. and Vanecek, G. J., Modelling contacts in a physically based simulation. Computer-Aided Design, 1994, 26(6), 452-463.

3. CADLAB, Purdue University CADLAB TWIN Solid Modeling Package Reference Manual. http://cadlab.www.ecn.purdue.edu/cadlab/twin, 1991.

4. Cruz-Neira, C., Sandin, D. J. and DeFanti, T. A., Surround-screen projection-based virtual reality: the design and implementation of the CAVE. In Proceedings of the ACM SIGGRAPH Conference on Computer Graphics, 1993, pp. 135-142.

5. Dani, T. H. and Gadh, R., COVIRDS: a conceptual virtual design system. In Proceedings of the Computers in Engineering Conference and the Engineering Database Symposium, Boston, MA, 1995, pp. 959-966.

6. Electronic Visualization Laboratory, University of Illinois, Chicago, CAVE User's Guide. http://www.ncsa.uiuc.edu/EVL/docs/html/CAVEGuide.html, 1994.

7. Goodson, R., Glove input device as a design tool in computer-aided design systems. MS Thesis, Mechanical Engineering Department, Brigham Young University, 1992.

8. Han, J. H. and Requicha, A. A. G., Incremental recognition of machining features. In Proceedings of the 1994 ASME International Computers in Engineering Conference and Exhibition, Minneapolis, MN, 1994, pp. 143-149.

9. Hoffmann, C. M., Geometric and Solid Modeling: An Introduction. Morgan Kaufmann, San Mateo, CA, 1989.

10. Joshi, S. B. and Chang, T.-C., Graph-based heuristics for recognition of machined features from a 3-D solid model. Computer-Aided Design, 1988, 20(2), 58-66.

11. Laakko, T. and Mantyla, M., Incremental feature modeling. In Advances in Feature Based Manufacturing, eds. J. J. Shah, M. Mantyla and D. S. Nau, Elsevier Science B.V., Amsterdam, 1994, pp. 455-479.


12. Neider, J., Davis, T. and Woo, M., OpenGL Programming Guide. Addison-Wesley, Reading, MA, 1993.

13. O'Rourke, J., Computational Geometry in C. Cambridge University Press, New York, 1994.

14. Perng, D.-B., Chen, Z. and Li, R.-K., Automatic 3D machining feature extraction from 3D CSG solid input. Computer-Aided Design, 1990, 22(5), 285-295.

15. Regli, W. C. and Nau, D. S., Building a general approach to feature recognition of material removal shape element volumes (MRSEVs). In Second Symposium on Solid Modeling and Applications, Montreal, Canada, 1993, pp. 293-302.

16. Requicha, A. A. G., Representations for rigid solids: theory, methods and systems. ACM Computing Surveys, 1980, 12(4), 437-464.

17. Rossignac, J. R., Issues in feature-based editing and interrogation of solid models. Computers and Graphics, 1990, 14(2), 149-172.

18. Shah, J. J., Assessment of features technology. Computer-Aided Design, 1991, 23(5), 331-343.

19. Tang, K. and Woo, T., Algorithmic aspects of alternating sum of volumes. Part 1: Data structure and difference operation. Computer-Aided Design, 1991, 23(5), 351-366.

20. Trego, A. and Magleby, S., Virtual reality promises new design capabilities. In 4th ASME Flexible Assembly Conference, Minneapolis, MN, 1994, pp. 125-133.

21. Trika, S. N. and Kashyap, R. L., Geometric reasoning for extraction of manufacturing features in iso-oriented polyhedrons. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994, 16(11), 1087-1100.

22. Trika, S. N. and Kashyap, R. L., A provably correct feature extractor for parts with cylindrical and planar surfaces. In Third Symposium on Solid Modeling and Applications, Salt Lake City, UT, 1995, pp. 131-140.

23. Tseng, Y.-J. and Joshi, S. B., Recognizing multiple interpretations in 2-1/2D machining of pockets. International Journal of Production Research, 1994, 32(5), 1063-1086.

24. Wernecke, J., The Inventor Mentor. Addison-Wesley, Reading, MA, 1994.

25. Whitward, L., How real are the benefits of virtual reality? In Design Engineering, 1994, pp. 41-45.

26. Yamamoto, K., Ishiguro, A. and Uchikawa, Y., A development of dynamic deforming algorithms for 3D shape modeling with generation of interactive force sensations. In IEEE Virtual Reality Annual International Symposium, Seattle, WA, 1993, pp. 505-511.

Sanjeev N. Trika was born in New Delhi, India in 1970. He received his BS with honors in August 1991, MS in December 1992 and PhD in August 1996, all from Purdue University, West Lafayette, IN, in the field of computer and electrical engineering. He is currently employed by Intel Corporation in Portland, Oregon. His research interests include computer graphics, solid modeling, computational geometry, virtual reality, CAD/CAM integration, and pattern recognition.

Prashant Banerjee is currently an Associate Professor in the Department of Mechanical Engineering at University of Illinois at Chicago (UIC) and is serving as the Director of the Industrial Virtual Reality Institute, a joint research and development operation comprising UIC, Northwestern University and Argonne National Laboratory. Banerjee's current research interests include virtual reality-based factory design, part design and assembly design models, immersive display interfaces and linear and non-linear design optimization models. His research has been supported by NSF, NIST and ONR, and by companies such as Caterpillar, Searle and Motorola. Professional education includes a MS and PhD in industrial engineering from Purdue University and a BS in mechanical engineering from IIT, Kanpur, India.

R. L. Kashyap received the PhD degree in 1966 from Harvard University, Cambridge, MA. He joined the staff of Purdue University in 1966, where he is currently a professor of electrical engineering. He has been the Associate Director of the NSF-supported Engineering Research Center for Intelligent Manufacturing Systems at Purdue since its inception in 1985. He has held visiting professor positions at Harvard University, the University of California at Berkeley, and the Indian Institute of Technology, Kanpur. He is currently working on research projects supported by the Office of Naval Research, Army Research Office, NSF, and several companies like Cummins Engines. He has directed more than 25 PhD dissertations at Purdue. He has authored one book and more than 300 publications, including 120 archival journal papers in areas such as pattern recognition and image processing, signal processing and system identification, random field models, intelligent databases, and intelligent manufacturing systems. He is an area editor for the journals Graphical Models and Image Processing (CVGIP) and The Journal of Intelligent and Robotic Systems. He has been guest co-editor of special issues of the IEEE Transactions on Software Engineering (1988), IEEE Computer Magazine (1989), and the IEEE Transactions on Systems, Man and Cybernetics (1991). Dr Kashyap received the best research paper award at the National Electronics Conference in 1967. He is the recipient of many other honors, including the King-Sun Fu Award in 1990 for fundamental contributions to pattern recognition and computer vision given by the International Association for Pattern Recognition (IAPR), the J. C. Bose Award in 1991 for contributions to engineering sciences from the Institute of Electronic and Telecommunication Engineers, and election to the status of fellow of the IEEE and the Institution of Electronics and Telecommunications Engineering.
