
© Springer-Verlag London Ltd Virtual Reality (1999) 4:213-222

Using Virtual Reality Techniques in Maxillofacial Surgery Planning

P. Neumann, D. Siebert, A. Schulz, G. Faulkner, M. Krauss, T. Tolxdorff

Institute of Medical Informatics, Biostatistics and Epidemiology, University Hospital Benjamin Franklin, Free University of Berlin, Berlin, Germany

Abstract: The primary goal of our research has been to implement an entirely computer-based maxillofacial surgery planning system [1]. An important step toward this goal is to make virtual tools available to the surgeon in order to carry out a three-dimensional (3D) cephalometrical analysis and to interactively define bone segments from skull and jaw bones. An easy-to-handle user interface employs visual and force-feedback devices to define subvolumes of a patient's volume dataset [2]. The defined subvolumes, together with their spatial arrangements based on the cephalometrical results, eventually lead to an operation plan. We have evaluated modern low-cost, force-feedback devices with regard to their ability to emulate the surgeon's working procedure. Once the planning of the procedure is complete, the planning results are transferred to the operating room. In our intra-operative concept the visualisation of planning data is speech controlled by the surgeon and correlated with the patient's position by an electromagnetic 3D sensor system.

Keywords: Surgery planning; Volume segmentation; Virtual tools; Force feedback; Intra-operative navigation

Introduction

Maxillofacial surgery is concerned with the treatment of skull deformations resulting from severe injuries or from malformations arising during adolescence. In the operative procedure the surgeon resects several skull fragments and rearranges them. The aesthetic appearance of the patient has to be taken into account, as well as the function of the treated organ structures; for example, a good dental occlusion must be achieved. Every surgical step must therefore be thoroughly planned in order to predict accurately the postoperative shape of the skull and soft tissue. Our aim is to supplement conventional methods with virtual reality techniques, thus contributing significantly to clinical understanding and improving both treatment planning and surgical intervention. The 3D real-time visualisation of the volume data, together with the possibility of modifying the object with virtual cutting and replacement tools, accelerates the planning procedure and extends it with new facilities. These computer-based methods therefore require an adequate set of 3D input and output devices.

Conventional Planning

Most of the current planning in the field of oral, maxillofacial and facial plastic surgery involves the production of plaster casts of the patient's anatomy.


The plaster casts are mounted on an articulator (Fig. 1a), which allows the dental segments to be cut and repositioned while the bases maintain their interrelationship. With the articulator it is possible to simulate operations and to evaluate the occlusion of the superior and inferior dental arches.

In severe cases, a synthetic resin model based on CT data (see Fig. 1b) is created using the expensive method of stereolithography [3]. These models mainly serve to highlight and illustrate the spatial relationships of bones, teeth and joints. Typically, surgical planning software such as the PC-based program WINCEPH is used only for two-dimensional (2D) cephalometrical analysis and documentation (Fig. 1c). This software takes normal frontal and profile photographs along with special x-ray images and enables the cephalometry by comparing profile values with standard values [4].

Virtual Planning

The technologies represented by virtual reality provide a variety of techniques to support or replace plaster-cast-model surgery. The production of resin models is rendered superfluous in some cases, and the very high cost of stereolithography can be reduced. The third dimension in cephalometry makes improved diagnosis of malformations possible, as explained later on.

For the surgical planning, the surgeon diverts copies of sub-volumes from the original 3D dataset into a special planning object. Once a bone segment is moved, the visual result is computed and shown immediately. All changes from the original positions of objects are exactly recorded and merged into a concrete operation plan. The surgeon can explore different treatment alternatives for the same patient by creating additional planning objects, which contain different sets of bone segments or represent different spatial shifts.
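To make this bookkeeping concrete, the following sketch shows one way such a planning object could record segment displacements. The paper does not specify its data structures, so every name here is illustrative.

```cpp
// Sketch of a planning object that records bone-segment displacements.
// All type and field names are assumptions, not the paper's own code.
#include <array>
#include <string>
#include <vector>

// A rigid transform stored as a 4x4 homogeneous matrix, row-major.
using Mat4 = std::array<double, 16>;

struct SegmentPlacement {
    std::string segmentName;  // e.g. "upper jaw" (hypothetical label)
    Mat4 displacement;        // recorded shift/rotation relative to the CT position
};

// One planning object = one surgical alternative: a set of bone segments
// together with their recorded displacements.
struct PlanningObject {
    std::string description;
    std::vector<SegmentPlacement> placements;
};

// Several planning objects for the same patient let the surgeon compare
// alternative rearrangements before committing to one operation plan.
```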

Several basic approaches to 3D segmentation for cutting skull fragments in volume data using VR techniques have been investigated in the past. For example, Delingette et al [5] used the 'virtual hand' user interface, where the cutting tool follows the motion of the user's hand, which is tracked by an electromagnetic sensor. However, most of these VR approaches to surgical planning lack force and haptic feedback, which would enhance the realism of the simulations.

Fig. 1a. Articulator for plaster-cast operations; b. Resin model of the skull built by stereolithography; c. 2D cephalometry software WINCEPH.


Fig. 2. Rendering pipeline and image composition.

Visualisation

The planning process is based on rendered views of the patient's 3D volume data acquired by CT scan. In the preprocessing stage of our rendering pipeline (Fig. 2), a threshold window automatically segments the volume to extract the patient's skin surface or skull bone.

For fast computation, each volume object is represented in a small bit-cube [6] in which each bit indicates whether the corresponding voxel belongs to the selected tissue or not. On modern computer architectures, each bit volume is stored in RAM as an array of 64-bit words, where each long word represents 64 voxels. The additional memory requirements of this type of data representation are small, and visualisation is faster because blocks of 64 voxels can be processed in parallel.
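As an illustration of this representation, here is a minimal C++ sketch of such a bit-cube, packing one bit per voxel into 64-bit words; the class and its methods are our own names, not the paper's.

```cpp
// Minimal bit-cube sketch: one bit per voxel, packed into 64-bit words.
#include <cstdint>
#include <vector>

class BitVolume {
public:
    BitVolume(int nx, int ny, int nz)
        : nx_(nx), ny_(ny), nz_(nz),
          words_((static_cast<size_t>(nx) * ny * nz + 63) / 64, 0) {}

    // Mark a voxel as belonging (or not) to the selected tissue.
    void set(int x, int y, int z, bool inside) {
        size_t i = index(x, y, z);
        uint64_t mask = uint64_t{1} << (i & 63);
        if (inside) words_[i >> 6] |= mask;
        else        words_[i >> 6] &= ~mask;
    }

    bool test(int x, int y, int z) const {
        size_t i = index(x, y, z);
        return (words_[i >> 6] >> (i & 63)) & 1;
    }

    // Word-level access: a run of 64 voxels can be tested in one compare.
    const std::vector<uint64_t>& words() const { return words_; }

private:
    size_t index(int x, int y, int z) const {
        return (static_cast<size_t>(z) * ny_ + y) * nx_ + x;
    }
    int nx_, ny_, nz_;
    std::vector<uint64_t> words_;
};
```

Because each word covers 64 voxels, an entirely empty or entirely full block can be skipped or accepted with a single comparison during traversal.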

After the object segmentation stage (described later), a depth map of the image is computed in the reconstruction stage by voxel projection from the object bit-cube. With this z-buffer image and the original data, an object image can be reconstructed and illuminated using the illumination model of Phong [7]. Progressive refinement and partial recomputation are implemented for interactive 3D visualisation. If the user requests a new view, a first image is calculated very quickly at low quality; a successive image-refinement method then improves the image quality over time [8] (Fig. 3a-d). Partial recomputation is used for fast reconstruction of small, changed areas in the object volume. For very small areas, which affect only a few pixels in the output image, special low-level output functions can be used for speed optimisation.
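A hedged sketch of the refinement loop described above: the same view is rendered repeatedly with a shrinking pixel stride, so a coarse image appears almost immediately and sharpens over time. The two callbacks are assumptions standing in for the actual renderer and input check.

```cpp
#include <functional>

// Render the current view at decreasing pixel strides: a stride of 8 fills
// only every 8th pixel in x and y (1/64 of the image), then 4 (1/16),
// then 2 (1/4), then every pixel (full quality).
void progressiveRender(const std::function<void(int)>& renderAtStride,
                       const std::function<bool()>& viewChanged) {
    for (int stride : {8, 4, 2, 1}) {
        renderAtStride(stride);     // stand-in for the voxel-projection pass
        if (viewChanged()) return;  // abort refinement when the user moves on
    }
}
```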

For our application, an image containing depth information is generated from each object. Using z-buffer-based overlap tests, the visual results of different objects can be combined into one output image in the composition stage, differentiated (for example) by colour or transparency.
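The composition itself is a per-pixel depth test. A minimal sketch, assuming each object delivers a depth map and a colour-coded shaded image of equal size:

```cpp
// Per pixel, the object whose depth value is closest to the viewer wins.
#include <cstdint>
#include <limits>
#include <vector>

struct ObjectImage {
    std::vector<float>    depth;   // z-buffer from voxel projection
    std::vector<uint32_t> colour;  // illuminated pixels, tinted per object
};

void composite(const std::vector<ObjectImage>& objects,
               std::vector<uint32_t>& out, size_t numPixels) {
    std::vector<float> zbuf(numPixels, std::numeric_limits<float>::max());
    out.assign(numPixels, 0);
    for (const ObjectImage& obj : objects)
        for (size_t p = 0; p < numPixels; ++p)
            if (obj.depth[p] < zbuf[p]) {   // nearer surface overwrites
                zbuf[p] = obj.depth[p];
                out[p]  = obj.colour[p];
            }
}
```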

3D Cephalometry

During the cephalometrical analysis, the angles and distances between chosen anatomical marks are measured and related to standardised values. The conventional analysis is based on a standard profile x-ray of the patient. Compared to this 2D x-ray image, the spatial visualisation of the patient's CT data delivers the information required for a 3D cephalometry (Fig. 4a).

Fig. 3. Progressive refinement for very fast but initially low-quality images; resolution: a. 1/64, < 0.05 s; b. 1/16, < 0.2 s; c. 1/4, < 0.5 s; d. high quality, < 4 s.


Fig. 4a. 3D cephalometry; b. Skin surface; c. Overlay of original data slices in the ROI.

For example, asymmetries of the face can be depicted very well with these data.

After interactively defining a lateral reference plane in the CT data, the user sets the exact positions of the cephalometrical marks, which had been given default values by the system. The best viewing direction and the best data visualisation method can be selected for detecting the desired anatomical marks. Some marks are best viewed in skin-surface images (Fig. 4b) or in the original data slices, shown directly as an overlay inside a region of interest (ROI) (Fig. 4c). The angles and distances between the given profile marks can then be evaluated. Taken together, these marks provide a complete 3D profile analysis and can be used for an accurate diagnosis.
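The measurements themselves are elementary vector geometry. A minimal sketch, assuming the marks are available as 3D points in the CT coordinate system:

```cpp
// Distance between two landmarks, and the angle at vertex b of a-b-c.
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double distance(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Angle in degrees between the rays b->a and b->c.
double angleAt(const Vec3& a, const Vec3& b, const Vec3& c) {
    const double kPi = 3.14159265358979323846;
    Vec3 u{a.x - b.x, a.y - b.y, a.z - b.z};
    Vec3 v{c.x - b.x, c.y - b.y, c.z - b.z};
    double dot = u.x*v.x + u.y*v.y + u.z*v.z;
    double cosA = dot / (distance(a, b) * distance(c, b));
    cosA = std::max(-1.0, std::min(1.0, cosA));  // guard acos domain
    return std::acos(cosA) * 180.0 / kPi;
}
```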

3D Segmentation

During the segmentation process, a hierarchical object tree is generated with the patient's skull bone as root object. To resect a bone segment, two new sub-objects are derived from their parent (Fig. 5): one for the new bone segment and one containing all remaining voxels from the original bone. Initially, the first object is empty whereas the second object is simply a copy of the original. Within this object hierarchy it is always possible to undo planning steps or to compare different planning alternatives.
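The paper does not publish its classes, but the hierarchy could be shaped as in the sketch below, which reuses the BitVolume type sketched in the Visualisation section; all names are assumptions.

```cpp
// Hierarchical object tree: resecting a segment derives two children from a
// parent bone object, so planning steps can be undone by returning to the parent.
#include <memory>
#include <string>
#include <vector>

struct VolumeObject {
    std::string name;
    std::shared_ptr<BitVolume> voxels;   // bit-cube of this object
    std::vector<std::unique_ptr<VolumeObject>> children;
};

// Derive the two sub-objects for a resection: an initially empty segment and
// a working copy of the parent that voxels are moved out of during growing.
void beginResection(VolumeObject& parent, int nx, int ny, int nz) {
    auto segment = std::make_unique<VolumeObject>();
    segment->name = parent.name + "/segment";
    segment->voxels = std::make_shared<BitVolume>(nx, ny, nz);   // empty

    auto rest = std::make_unique<VolumeObject>();
    rest->name = parent.name + "/rest";
    rest->voxels = std::make_shared<BitVolume>(*parent.voxels);  // copy

    parent.children.push_back(std::move(segment));
    parent.children.push_back(std::move(rest));
}
```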

After choosing a seed voxel (Fig. 6a), the segmentation is carried out by a volume-growing process [9] based on a 26-voxel neighbourhood.

Fig. 5. Object segmentation and force-feedback integration scheme in the cutting process.


Fig. 6. Volume-growing controlled by a cut: a, b. Between the jaw bones; c. Inside a region of interest.

If there is a path of neighbouring voxels in the original bit-volume, the corresponding voxel bits are moved from the object copy to the new segment. The growth can be visualised in real time by partial rendering, since only one voxel changes in each volume-growing step and only a few surface voxels affect the resulting object image.
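The growing step is in essence a flood fill over the 26-neighbourhood. The sketch below, again reusing the BitVolume type sketched earlier, moves reachable voxel bits from the parent copy into the new segment while respecting a mask of cut voxels; the signature is illustrative.

```cpp
// Volume growing from a seed voxel over the 26-neighbourhood.
#include <queue>
#include <tuple>

void growSegment(BitVolume& rest,        // working copy of the parent bone
                 BitVolume& segment,     // new, initially empty segment
                 const BitVolume& cuts,  // voxels blocked by user-drawn cuts
                 int nx, int ny, int nz,
                 int sx, int sy, int sz) // seed voxel
{
    std::queue<std::tuple<int,int,int>> frontier;
    frontier.push({sx, sy, sz});
    while (!frontier.empty()) {
        auto [x, y, z] = frontier.front();
        frontier.pop();
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) continue;
        if (!rest.test(x, y, z) || cuts.test(x, y, z)) continue;
        rest.set(x, y, z, false);    // move the bit ...
        segment.set(x, y, z, true);  // ... from the copy into the segment
        for (int dz = -1; dz <= 1; ++dz)         // enqueue all 26 neighbours
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dx || dy || dz)
                        frontier.push({x + dx, y + dy, z + dz});
    }
}
```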

To control the volume-growing process, the bone-segment borders can be interactively defined by placing cuts. A cut is defined by drawing a line in free-hand mode with the visual pointer of the input device (Fig. 6a, b).

To keep the user interface simple, every cut is projected onto the bone surface orthogonally to the viewing plane. The cutting direction can be changed arbitrarily by simply choosing a different viewing direction. The depth of the cut and the cutting speed can be controlled by the force-feedback input device. User interaction triggers the processing of force parameters, which are transferred to the force-feedback device via an I/O interface (see Fig. 5).
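A simplified sketch of this projection, assuming for clarity that the viewing direction coincides with the volume's z axis; in the real system the ray would be walked along an arbitrary viewing direction. It again reuses the BitVolume sketch.

```cpp
// Project a free-hand cut line onto the bone surface and mark barrier voxels.
#include <vector>

struct Pixel { int x, y; };  // rasterised screen positions of the drawn line,
                             // assumed to lie inside the volume bounds

void projectCut(const std::vector<Pixel>& line,
                const BitVolume& bone,
                BitVolume& cuts,
                int nz, int cutDepth)   // depth driven by the input device
{
    for (const Pixel& p : line) {
        // Walk into the volume along the viewing direction until bone is hit.
        int z = 0;
        while (z < nz && !bone.test(p.x, p.y, z)) ++z;
        // Mark barrier voxels from the surface down to the cutting depth.
        for (int d = 0; d < cutDepth && z + d < nz; ++d)
            cuts.set(p.x, p.y, z + d, true);
    }
}
```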

An additional way to delimit objects whose borders are hidden by bone from other objects is to restrict the volume-growing process to 2D slices of the original data volume. The user can navigate through the 2D data slices inside a region of interest, shown directly as an overlay on the 3D view (Fig. 6c). The segmented object voxels in these slices are distinguished by colour from non-segmented voxels, and the volume-growing process is visualised. As with conventional segmentation techniques in 2D slices, the segment is delimited by drawing a line at the object borders where the growing process is leaking.

Force Feedback

As the technology improves and costs decrease, it is worthwhile to consider medical applications of force-feedback devices from the gaming industry. Given these advances, it is possible to construct input devices that can be controlled via a high-level interface and give the surgeon haptic feedback during the planning process.

Our interactive segmentation technique uses force-feedback input devices from companies such as Microsoft and Logitech that cost less than $150. Such low-cost devices can be part of a standard computer configuration; they are not restricted to special applications on dedicated planning workstations. The new features of these input devices can be used to provide additional depth information, encoded as force, alongside the visualisation on a 2D output screen. The fast and powerful visualisation kernel described above supports the segmentation process and continuously displays its progress. This method of performing a manual 3D segmentation enables the surgeon to define the desired bone segments quickly and exactly.

We have installed a testbed to evaluate both the usability of force-feedback devices and the parameters of the effects we used. Our virtual planning station comprises the visualisation engine and a driver for I/O devices such as force-feedback joysticks. The device driver has a high-level interface protocol that operates independently of the device's particular hardware implementation. The high-level interface provides commands for the force-feedback effects used in the surgical planning procedure (such as 'saw', 'drill' and 'chisel'), together with their corresponding sets of parameters. A device that supports force-feedback capabilities is free to decode these commands and generate the appropriate effects. A device without such capabilities may ignore the commands and function as a standard input device.
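The paper does not publish the wire format, but a protocol in this spirit might look like the following sketch: each command names an effect and carries one scaled parameter byte, and a device without force hardware simply discards the packet.

```cpp
// Hypothetical 2-byte command format for the high-level force protocol.
#include <cstdint>

enum class Effect : uint8_t { Saw = 1, Drill = 2, Chisel = 3 };

struct ForceCommand {
    Effect  effect;
    uint8_t magnitude;   // scaled force parameter, 0..255
};

// A 2-byte packet keeps the stream well inside the ~30 Hz update rate the
// paper cites as sufficient for haptic realism.
void encode(const ForceCommand& cmd, uint8_t out[2]) {
    out[0] = static_cast<uint8_t>(cmd.effect);
    out[1] = cmd.magnitude;
}

// A force-capable driver decodes and plays the effect; a plain input device
// returns false and keeps acting as an ordinary positioning device.
bool decode(const uint8_t in[2], bool deviceHasForceFeedback, ForceCommand& cmd) {
    if (!deviceHasForceFeedback) return false;
    cmd.effect    = static_cast<Effect>(in[0]);
    cmd.magnitude = in[1];
    return true;
}
```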

We connected a low-performance host computer, which functions as the device driver, to the planning station via a serial link. This host computer drives the Microsoft® SideWinder™ force-feedback joystick using the DirectX™ library. Additionally, professional force-feedback systems such as the PHANTOM™ can be connected with an adapted device driver that supports absolute positioning.

To emulate the surgeon's working procedure we have implemented force-feedback sensations for sawing and drilling. During user interaction the force parameters are continuously streamed to the input device [10]. The input device driver must translate the force parameters into a force effect transmitted to the user by the joystick. The streamed force parameters always represent bone thickness: in drilling mode, the force is proportional to the bone density directly in front of the drill; in sawing mode, the force parameter is proportional to the sum of all voxel densities the user is cutting. The DirectX™ library offers several methods, such as the 'constant force' effect, that can be modulated to generate the desired force effects.
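Both mappings reduce to simple proportionalities. A minimal sketch, with gain factors standing in for the empirically tuned parameters (all names are assumptions):

```cpp
#include <vector>

// Drilling: force proportional to the bone density directly ahead of the drill.
double drillForce(double densityAhead, double gain) {
    return gain * densityAhead;
}

// Sawing: force proportional to the summed densities of all voxels
// currently being cut by the blade.
double sawForce(const std::vector<double>& densitiesUnderBlade, double gain) {
    double sum = 0.0;
    for (double d : densitiesUnderBlade) sum += d;
    return gain * sum;
}
```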

To shield the user from heavy jolts or jerks of the stick, our driver is equipped with force ramps that increase the force smoothly around the joystick centre. We have developed a force-adjustment panel to configure these force ramps and to enable the surgeon to adjust the force-feedback sensation of the joystick.
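Such a ramp can be as simple as scaling the commanded force by the stick's deflection inside a configurable radius; the sketch below is one plausible implementation, not the driver's actual code.

```cpp
// Scale the commanded force smoothly in from zero near the joystick centre.
double rampedForce(double force, double stickDeflection, double rampRadius) {
    if (stickDeflection >= rampRadius) return force;  // outside the ramp zone
    double scale = stickDeflection / rampRadius;      // 0 at centre -> 1 at radius
    return force * scale;
}
```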

As a principal requirement for the transmission speed, the overall bandwidth of the force-feedback system must exceed that of the human perception system to achieve sufficiently realistic results. Kalawsky [11] considers a bandwidth of 30 Hz sufficient. The structure of our interfacing protocol is therefore compact and allows a command to be sent within a few bytes.

Intra-operative Planning Control

An important aim of the project is the visual control of the planning during the intra-operative procedure. Precise registration of the spatial situation during the operation is essential for making use of the planning data in the operating room. Natural speech recognition is currently being tested as the primary user interface for the surgeon, to ensure the free-hand usability of the intra-operative visualisation.

Navigation

A calibration frame has been designed (Fig. 7a) to register the CT data with the patient's current position. Pre-operatively, a volume dataset is acquired while the patient carries an artificial splint in his mouth, onto which the calibration frame is fixed. Our newest frame version features four special measurement points on the front side and six each on the right and left sides that serve as external fiducial markers. These markers are visible in all planning modalities.

Fig. 7a. Calibration frame with fixed sensor; b. Polhemus Fastrak electromagnetic sensor system.


Fig. 8a. Prototype of the calibration frame reconstructed from CT; b. Intra-operative use of the calibration frame and integration of the sensor system affixed to the surgical microscope.

Immediately prior to surgery, the patient's position is captured by an electromagnetic tracking system with 6 DOF (Fig. 7b), and the frame markers in the reconstructed CT data are identified (Fig. 8a). Thus, the patient's anatomy and CT data are correlated.

At the beginning of the operation a second sensor, called the head sensor, is fixed to the patient's head. The frame is used once more to transfer the existing calibration to the head sensor (Fig. 8b); after this procedure the frame is no longer needed during the operation. By affixing additional sensors to mobilised bone segments, all shifts relative to the position of the rest of the skull (given by the head sensor) can be captured. Comparing the actual position with the planning results provides navigational information by which the surgeon can judge the correct positioning of the mobile part of the patient's anatomy before it is fixated.
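The underlying arithmetic is a chain of rigid transforms: with both sensor poses expressed in tracker coordinates, the pose of a bone segment relative to the skull is the inverse of the head-sensor pose multiplied by the segment-sensor pose. A sketch with illustrative names:

```cpp
#include <array>

using Mat4 = std::array<double, 16>;  // 4x4 homogeneous transform, row-major

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i*4 + j] += a[i*4 + k] * b[k*4 + j];
    return r;
}

// Inverse of a rigid transform: transpose the rotation, back-rotate the translation.
Mat4 rigidInverse(const Mat4& t) {
    Mat4 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i*4 + j] = t[j*4 + i];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i*4 + 3] -= r[i*4 + j] * t[j*4 + 3];
    r[15] = 1.0;
    return r;
}

// Pose of a mobilised bone segment in head-sensor (skull) coordinates,
// given both sensor poses in tracker coordinates.
Mat4 segmentInSkull(const Mat4& trackerToHeadSensor,
                    const Mat4& trackerToSegmentSensor) {
    return multiply(rigidInverse(trackerToHeadSensor), trackerToSegmentSensor);
}
```

Comparing segmentInSkull() with the displacement recorded in the planning object then indicates how far the segment still is from its planned position.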

Speech Control

Speech control can be a helpful user interface for the surgeon in sterilised environments. We can use the high-level interface described earlier for the speech input device in the operating room. Speech recognition software from Dragon Systems™ runs on our PC host computer with a standard Sound Blaster 16V audio adapter. A device driver translates the recognised speech commands into specific interactions of the visualisation application. The surgeon wears a headset microphone under his mask. About 25 speech commands are implemented, covering all intra-operative interaction tasks. Besides commands to control the application, there are four main modes in the visualisation:

• view selection mode for selecting the best viewing direction to the scene

• region of interest mode for diving into the original data slices

• pointer mode for collaborative work, documentation and discussion

• object mode for selecting or hiding objects and choosing alternative plans.

The speech command list contains, for all modes, commands such as 'faster' and 'slower' for velocity control; 'up', 'down', etc. to specify the moving direction; and 'reset', 'speech on/off', 'view mode', 'region mode', etc. to define the overall status of the application. Movements are started and stopped with a single command in a particular mode, and the velocity of continuous movements can be varied in several steps. Because of the speech recognition delay, some training by the user is required in order not to overrun desired targets.
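One plausible way to wire the recognised commands to the four modes and the movement controls is a simple dispatch table, sketched below with illustrative handler names:

```cpp
// Map recognised speech commands onto visualisation actions.
#include <functional>
#include <map>
#include <string>

enum class Mode { View, Region, Pointer, Object };

struct SpeechDispatcher {
    Mode mode = Mode::View;
    double velocity = 1.0;
    std::map<std::string, std::function<void()>> handlers;

    SpeechDispatcher() {
        handlers["view mode"]   = [this] { mode = Mode::View; };
        handlers["region mode"] = [this] { mode = Mode::Region; };
        handlers["faster"]      = [this] { velocity *= 1.5; };
        handlers["slower"]      = [this] { velocity /= 1.5; };
        // ... 'up', 'down', 'reset', 'speech on/off', etc.
    }

    void onRecognised(const std::string& command) {
        auto it = handlers.find(command);
        if (it != handlers.end()) it->second();  // unknown commands are ignored
    }
};
```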


Results

Operation planning usually starts with the segmentation of the calibration frame. At the end of the planning process, this segment is used for correlating the planning data with the patient; apart from that, it is not needed and would only hide underlying bone structures (Fig. 9a). The frame cannot be extracted in the preprocessing stage using the threshold segmentation because our frame material covers a wide range of Hounsfield units, including those of bone and soft tissue. The frame can be segmented using the techniques explained earlier simply by setting one initial seedpoint on the frame and placing a small cut directly in front of the teeth in a profile view of the patient.

The segmentation of the upper jaw for a standard surgical procedure (e.g. a Le Fort I osteotomy) requires one initial seedpoint, one straight cut at the osteotomy plane, and a few cuts at the contact points to the lower jaw (Fig. 9b). The extraction of the lower jaw presents no greater difficulty and needs only one initial seedpoint and a few small cuts at the joints and at the contact points to the upper jaw (Fig. 9c). The surgeon can extract these segments in less than two minutes, which is significantly faster than the production of a standard plaster cast.

For the sagittal correction of the mandible, a resection line has to be defined between the ramus and the rest of the lower jaw (Fig. 9d). One difficulty posed by this procedure in real surgery is to mobilise the mandible without cutting the main nerve to the teeth.

Fig. 9a. Frontal view of a segmented calibration frame; b. Segmented upper jaw; c. Segmented lower jaw; d. Resected ramus from the lower jaw.


In the virtual environment, where such physiological constraints do not exist, complex cuts can be approximated by simple planar cuts. Even so, this simplified process delivers the planning values necessary for surgery; if desired, the surgeon can also simulate the real cut exactly.

Figure 10 shows the virtual planning results for the maxillofacial surgical correction of jaw malformations. To date, 12 operations on patients with dysgnathia have been supported with the techniques described above. We have found that our virtual cutting instruments greatly simplify the resection of bone segments for this area of surgery planning.

The low-cost force-feedback devices used in our approach are quite different from real surgical tools. Nevertheless, the force-feedback joystick enhances the virtual 3D planning process on a 2D output screen: for example, the surgeon's hand on the joystick feels a sudden thrust after drilling or sawing through the jaw bones. The parameters of the force effects have been optimised in empirical tests involving our medical partners. We have found that individual fine-tuning of the force effects within the user interface improves their acceptance by the surgeons.

An important medical demand is high accuracy throughout the whole procedure. This requires a very accurate 3D sensor system in combination with precise patient data acquisition and processing. The accuracy of the presented segmentation is limited by the resolution and accuracy of the underlying dataset [12]. The dataset was acquired by a modern SIEMENS spiral CT scanner and has a voxel resolution of 0.7 × 0.7 × 1.4 mm. Although an electromagnetic sensor system can theoretically deliver the required accuracy of ±0.5 mm,

Fig. 10. Results from a dysgnathia correction planning: a, b. Before correction; c, d. After correction.


these systems are susceptible to multifaceted disturbances in a real operation [13]. In response to Birkfellner's concerns [14], we have developed an intra-operative procedure to minimise errors and increase precision. The main aspects of this procedure are as follows:

• reproducible conditions through fixed position of magnetic source during measurements

• reduction of head sensor and patient's head movements to a minimum

• small measurement volume to avoid magnetic field inhomogeneities

• placement of source and sensor as far away as possible from metal to avoid magnetic distortions.

Conclusion and Future Work

Our system satisfies the requirements for an image-guided virtual surgery planning system defined by Cleynenbreugel et al [15]. The fast rotation of the volume, the simple cutting interaction with a force-feedback device, and the visualisation of the volume-growing process ensure the usability and acceptance of our planning interface in the medical domain. We view our system as a maxillofacial surgery planning tool rather than a true simulation system.

Further efforts will be directed toward achieving a sufficient stereoscopic 3D representation through a 'see-through' 3D display. The intra-operative augmented display of the operation planning results can also be used for teaching purposes. The use of a 3D output device should give students a full representation of the planning and the real operation. As our project nears completion, we will evaluate the possibility of representing the soft-tissue changes corresponding to bone shifts.

Acknowledgements

Our project 'Intra-operative Navigation Support' has been funded by the Deutsche Forschungsgemeinschaft (DFG) and the University Hospital Benjamin Franklin (UKBF). We would like to thank the Department of Oral, Maxillofacial and Facial Plastic Surgery of the UKBF for its cooperation. The authors are grateful to Jean Pietrowicz for proofreading the manuscript.

References

1. Neumann P, Faulkner G, Krauss M, Haarbeck K, Tolxdorff T. MeVisTo-Jaw: a visualization-based maxillofacial surgical planning tool. In: Proceedings of SPIE Medical Imaging 3335, 1998; 110-118

2. Neumann P, Siebert D, Faulkner G, Krauss M, Schulz A, Lwowsky C, Tolxdorff T. Virtual 3D cutting for bone segment extraction in maxillofacial surgery planning. In: Proceedings of the 7th International Conference Medicine Meets Virtual Reality, San Francisco, 20-23 January 1999; 235-241

3. Bill JS, Reuther JF, Dittmann W, Kübler N, Meier JL, Pistner H, Wittenberg G. Stereolithography in oral and maxillofacial operation planning. International Journal of Oral and Maxillofacial Surgery 1995; 24(1): 98-101

4. Zeilhofer H-F, Sader R, Horch H-H, Deppe H. Preoperative visualization of aesthetic changes in orthognathic surgery. In: Proceedings of the International Symposium CAR'95; 1369-1374

5. Delingette H, Subsol G, Cotin S, Pignon J. A craniofacial surgery simulation testbed. In: Proceedings of SPIE Third International Conference on Visualization in Biomedical Computing 2359, 1994; 607-618

6. Wood C, Ling C, Lee CY. Real time 3D rendering of volumes on a 64bit architecture. In: SPIE Mathematical Methods in Medical Imaging 2707, 1996; 152-158

7. Phong BT. Illumination for computer generated pictures. Communications of the ACM 1975; 18(6): 311-317

8. Sloan KR Jr, Tanimoto SL. Progressive refinement of raster images. IEEE Transactions on Computers 1979; C-28(11): 871-875

9. Toennies KD, Derz C. Volume rendering for interactive 3-D segmentation. In: Proceedings of SPIE Medical Imaging 3031, 1997; 602-609

10. Rosenberg LB. A force feedback programming primer. San Jose, CA: Immersion Corporation, 1997

11. Kalawsky RS. The science of virtual reality and virtual environments. Wokingham: Addison-Wesley, 1993

12. Lueth T, Heissler E, Bier J. Evaluierung von Navigations- und Robotersystemen für den Einsatz in der Chirurgie. In: Tele- und computergestützte Chirurgie. Schlag PM, ed. Berlin: Springer Verlag, 1998

13. Becker J, Krauss M, Faulkner G. The suitability of magnetic tracking devices in maxillofacial surgery. In: Proceedings of the International Symposium CAR'96, 1996; 1043

14. Birkfellner W, Watzinger F, Wanschitz F, Enislidis G, Kollmann C, Rafolt D, Nowotny R, Ewers R, Bergmann H. Systematic distortions in magnetic position digitizers. Medical Physics 1998; 25(11): 2242-2248

15. Cleynenbreugel JV, Verstreken K, Marchal G, Suetens P. A flexible environment for image guided virtual surgery planning. In: Visualization in Biomedical Computing. Höhne KH, Kikinis R, eds. 1996; 501-510

Correspondence and offprint requests to: P. Neumann, Institute of Medical Informatics, Biostatistics and Epidemiology, University Hospital Benjamin Franklin, Free University of Berlin, Hindenburgdamm 30, D-12200 Berlin, Germany
