Virtual Reality Training Embedded in Neurosurgical Microscope

Alessandro De Mauro¹, Jörg Raczkowsky¹, Marc Eric Halatsch², Heinz Wörn¹

¹Institute for Process Control and Robotics, University of Karlsruhe (TH), Germany

²Neurosurgical Department, University Hospital of Heidelberg, Germany

ABSTRACT

In this paper, we present the first virtual reality training system for neurosurgical interventions based on a real surgical microscope and a haptic interface, for improved visual and ergonomic realism. Its main purpose is the realistic simulation of the palpation of low-grade gliomas. The ability of a surgeon to feel the difference in consistency between tumor cells and normal brain parenchyma requires considerable experience and is a key factor for a successful intervention. The simulation takes advantage of accurate tissue modeling, a force feedback device, and rendering of the virtual scene directly into the oculars of the operating microscope.

Keywords: virtual reality, physical modeling, haptic feedback, neurosurgery.

1 INTRODUCTION

Nowadays, the success of any intervention is closely related to the surgeon's skills. Intra-operative false movements can be devastating, leaving patients paralyzed, comatose, or dead. Traditional techniques for surgical training include the use of animals, phantoms, cadavers, and real patients.

Low-grade gliomas are intrinsic brain tumors that typically occur in younger adults. One of the obstacles associated with the surgical resection of these tumors is that the pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. Because of intra-operative brain shift, neuronavigation is unreliable for guiding the extent of resection; a better guide is the slightly increased consistency of the tumor compared to normal brain tissue. Appreciating this difference in consistency requires considerable experience on the part of the neurosurgeon. The aim of our current development is to provide neurosurgeons in training with a training device for this particular task of haptic tissue differentiation.

It has been shown [1] that virtual reality simulators improve surgical proficiency and speed up the learning process. Surgical simulators offer sophisticated task training programs, record errors, and provide a way of measuring operative efficiency and performance, working both as an educational tool and as a skills validation instrument. This work focuses on developing a neurosurgical simulator for both educational and preoperative purposes, based on a virtual environment built from human organs reconstructed from real patient images and on a real operating microscope (Carl Zeiss).

2 BACKGROUND IN SURGICAL SIMULATION

There is relevant previous work in the field of surgical training systems: the LapSim system (Surgical Science) [2], the Endoscopy AccuTouch System (Immersion) [3], and KISMET (Forschungszentrum Karlsruhe) [4].

These examples all relate to different endoscopic techniques (laparoscopy, bronchoscopy, gynecology, etc.). They simulate common procedures of minimally invasive surgery, carried out through natural body openings or small incisions. In contrast, the state of the art in neurosurgical simulation [5] shows only a few examples of VR-based systems that use force feedback [6, 7, 8]. Since a neurosurgical microscope is used in a large percentage of interventions, the realism of these simulators is limited by their architecture: they use monitors or head-mounted displays rather than a real surgical microscope. In addition, an operating microscope is a very expensive resource, normally reserved for real operations rather than training. In previous work, a prototype stereoscopic augmented reality (AR) microscope for the ergonomic intra-operative presentation of complex preoperative three-dimensional data in neurosurgical interventions was realized at our institute [9].

3 METHOD AND TOOLS

In neurosurgical interventions, both monitors and operating microscopes are used. To understand the tumor position in relation to the preoperative images (CT, MRI), the surgeon's eyes are normally on the microscope oculars, with only occasional glances at a larger screen. This second view is needed to understand the position of the surgical tools inside the patient's brain. A complete simulation system for neurosurgical training (see Fig. 1) is therefore required to simulate the virtual view inside the microscope, to provide the user's hand with force feedback, and to simulate the navigation software actually used in the OR (e.g., BrainLab or Stryker).

Figure 1. Simulator concept. (a) Surgical microscope. (b) Haptic interface. (c) Intra-operative navigation software (3DSlicer). (d) Different ocular views: deformations and a 3D patient model reconstructed from CT. (e) Simulator.



We have built a virtual environment from the medical image data of real patients affected by low-grade glioma. Human organs are accurately reconstructed from the patient images using the tool 3DSlicer [10].

A region growing algorithm has been used for segmentation and organ classification. The resulting 3D organs are imported directly into our application, and the registration step between the two virtual environments is carried out. 3D models of the surgical tools are acquired with a Laser ScanArm (FARO) [11].
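
To illustrate the segmentation step, the core of a seeded region-growing algorithm can be sketched in a few lines of C++. This is a sketch only: the paper relies on 3DSlicer's implementation, and the volume layout, tolerance criterion, and all names below are our own illustrative assumptions.

    #include <array>
    #include <cmath>
    #include <cstdint>
    #include <queue>
    #include <vector>

    // Seeded region growing on a 3D scalar volume (e.g., CT or MRI).
    // A voxel joins the region if its intensity lies within `tolerance`
    // of the seed intensity; the frontier is expanded breadth-first.
    struct Volume {
        int nx, ny, nz;
        std::vector<float> data;  // nx * ny * nz voxel intensities
        int idx(int x, int y, int z) const { return (z * ny + y) * nx + x; }
    };

    std::vector<std::uint8_t> regionGrow(const Volume& v, int sx, int sy, int sz,
                                         float tolerance) {
        std::vector<std::uint8_t> mask(v.data.size(), 0);  // 1 = organ voxel
        const float seed = v.data[v.idx(sx, sy, sz)];
        std::queue<std::array<int, 3>> frontier;
        mask[v.idx(sx, sy, sz)] = 1;
        frontier.push({sx, sy, sz});
        const int nbr[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        while (!frontier.empty()) {
            const auto [x, y, z] = frontier.front();
            frontier.pop();
            for (const auto& d : nbr) {  // 6-connected neighborhood
                const int px = x + d[0], py = y + d[1], pz = z + d[2];
                if (px < 0 || py < 0 || pz < 0 ||
                    px >= v.nx || py >= v.ny || pz >= v.nz)
                    continue;
                const int i = v.idx(px, py, pz);
                if (!mask[i] && std::fabs(v.data[i] - seed) <= tolerance) {
                    mask[i] = 1;
                    frontier.push({px, py, pz});
                }
            }
        }
        return mask;  // binary segmentation mask for one organ class
    }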

The rendering software, developed in C++, is built on the open-source, GPL-licensed, cross-platform H3D API [12], a scene-graph API based on OpenGL for graphics rendering, OpenHaptics for haptic rendering, and X3D for the description of the 3D environment.

Collision detection modules based on the physical model for simulation are developed using OpenGL to obtain high performance and realistic brain deformations. Hierarchies of bounding volumes are used to perform fast collision detection between complex models. The method is based on model partitioning [13]: a set of objects is subdivided into geometrically coherent subsets, and a bounding volume is computed for each subset.

The physical modeling method is based on the mass-spring-damper (MSD) approach and consists of a mesh of point masses connected by elastic links, mapped onto the geometric representation of the virtual object. This is the method employed in our prototype to describe the mechanical properties of the virtual bodies, computing both the force feedback and the organ deformations. It is a discrete method characterized by low computational load and simplicity, but also by low accuracy and a risk of instability; it uses Newtonian dynamics to update the point-mass positions and creates deformations while accounting for volume conservation. Brain tissue properties are modeled with MSD on top of the OpenHaptics library (SensAble) [14]. The MSD parameters are evaluated together with our medical partner (Department of Neurosurgery, University Hospital of Heidelberg) through different training sessions and the processing of empirical data.

To provide a complete training platform, a video navigation system containing different 3D volume views is required. To achieve this, we have connected our system to the image-guided therapy module of 3DSlicer. We use a haptic device (Phantom Desktop) to provide the surgeon with an immersive experience during the interaction between the surgical tools and the brain or skull structures of the virtual patients. Its force feedback workspace and other important properties (nominal position resolution and stiffness range) make it suitable for ergonomic use in conjunction with the microscope.
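
To make the MSD method described above concrete, the following is a minimal sketch of one simulation step: force accumulation over the elastic links, followed by explicit Euler integration. The stiffness, damping, mass, and topology values are illustrative assumptions, not the parameters tuned with our medical partner.

    #include <cmath>
    #include <vector>

    // Minimal mass-spring-damper (MSD) update: point masses connected by
    // elastic links, advanced with explicit Euler integration. Constants
    // are placeholders; a small time step is needed because explicit
    // integration of stiff springs is exactly the instability risk
    // mentioned above.
    struct Vec3 { float x = 0, y = 0, z = 0; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 operator*(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Mass   { Vec3 pos, vel, force; float m = 1e-3f; };
    struct Spring { int i, j; float rest; float k = 50.0f, c = 0.5f; };

    void stepMSD(std::vector<Mass>& nodes, const std::vector<Spring>& springs,
                 float dt) {
        for (auto& n : nodes) n.force = {};            // reset accumulators
        for (const auto& s : springs) {
            Vec3 d = nodes[s.j].pos - nodes[s.i].pos;
            float len = std::sqrt(dot(d, d));
            if (len < 1e-8f) continue;                 // degenerate link
            Vec3 dir = (1.0f / len) * d;
            // Hooke term for stretch plus damping along the link direction.
            float f = s.k * (len - s.rest)
                    + s.c * dot(nodes[s.j].vel - nodes[s.i].vel, dir);
            nodes[s.i].force = nodes[s.i].force + f * dir;
            nodes[s.j].force = nodes[s.j].force - f * dir;
        }
        for (auto& n : nodes) {                        // explicit Euler step
            n.vel = n.vel + (dt / n.m) * n.force;
            n.pos = n.pos + dt * n.vel;
        }
    }

Because the haptic loop in our system runs at nearly 1 kHz, the time step available to such an update is small, which helps contain the instability risk noted above.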

The real-time 3D positions of the two viewpoints (one for each microscope ocular) in the virtual environment are obtained using passive markers affixed to the microscope, tracked by an infrared optical system (Polaris, NDI). To speed up the simulation, we decided to modify the tracking library originally developed in our laboratory. In this way, data is collected from the Polaris and sent over the local area network to several given IP addresses and ports using a modified version of the open-source OpenIGTLink library of 3DSlicer. This separates the rendering PC (graphical and haptic) from the tracking PC, with advantages in terms of computational load and realism (the average frame rate is 31 fps for graphical rendering and 998 fps for haptic rendering). The collisions between organs and surgical tools produce forces, which have to be replicated by the haptic interface, and organ deformations, which have to be graphically rendered.
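
For reference, streaming one tracked pose with the standard (unmodified) OpenIGTLink C++ API looks roughly like the sketch below. The host address, port, and device name are placeholders, and our actual system uses a modified version of the library.

    #include "igtlClientSocket.h"
    #include "igtlMath.h"
    #include "igtlTransformMessage.h"

    // Send one 4x4 tracking matrix (e.g., a pose from the Polaris) to a
    // rendering PC. Host, port, and device name are illustrative only.
    int main() {
        igtl::ClientSocket::Pointer socket = igtl::ClientSocket::New();
        if (socket->ConnectToServer("192.168.0.10", 18944) != 0) return 1;

        igtl::Matrix4x4 pose;
        igtl::IdentityMatrix(pose);       // replace with the measured pose

        igtl::TransformMessage::Pointer msg = igtl::TransformMessage::New();
        msg->SetDeviceName("MicroscopeOcularLeft");
        msg->SetMatrix(pose);
        msg->Pack();                      // serialize header and body

        socket->Send(msg->GetPackPointer(), msg->GetPackSize());
        socket->CloseSocket();
        return 0;
    }

On the receiving side, a corresponding server socket unpacks each message and updates the virtual viewpoint or tool pose before the next rendering pass.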

All basic surgical operations on the brain are suitable for simulation, allowing surgical skills to be improved quickly and safely using virtual reality. The main operating task simulated is the visual and tactile sensation of brain palpation (healthy tissue or tissue affected by low-grade glioma), pushing aside the human tissue with a neurosurgical spatula. Supported by our medical partner, we are evaluating further improvements: providing more specific simulation tasks, and extending this work into a distributed training platform using web technologies (the use of X3D to describe the environment permits this in a natural way).

4 CONCLUSION

This paper presents the development of the first prototype of a neurosurgical simulator embedded in a real operating microscope. Its main feature is the advanced simulation of brain tissue palpation, enabling the surgeon to distinguish between tissue affected by low-grade glioma and normal tissue. The architecture and the main features were defined in close collaboration with surgical staff. Force feedback from soft and hard tissues is provided to the surgeon's hands to complete the immersive experience. The virtual environment (accurate and updated in real time) is rendered by stereoscopic image injection directly inside the operating microscope. 3DSlicer is directly connected to the NDI tracking system to provide navigation inside the real patient's images on the screen. The software architecture described above guarantees performance and portability. We are working to improve the realism of deformations and force feedback by integrating ad hoc collision detection and physical modeling suited to brain tissue simulation. All components are open source or GPL-licensed.

ACKNOWLEDGEMENTS

This research is part of the European project “CompuSurge”, funded by the “Marie Curie” research network.

REFERENCES

[1] C.Y. Ro et al., “The LapSim. A learning environment for both experts and novices”, Department of Surgery, St. Luke's-Roosevelt Hospital Center and Columbia University, New York, USA, 2005.

[2] Surgical Science, URL: http://www.surgical-science.com/ [status January 2008].

[3] Immersion, URL: http://www.immersion.com/ [status January 2008].

[4] U. Kühnapfel, K. Çakmak, H. Maass, S. Waldhausen, “Models for simulating instrument-tissue interactions”, Proceedings of 9th MMVR 2001, Newport Beach, CA, USA, 2001.

[5] K.Y.C. Goh, “Virtual reality application in neurosurgery”, Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 2005.

[6] P. Litwinowicz and L. Williams, “Animating images with drawings”, in Andrew Glassner, editor, ACM Press, Proceedings of Computer Graphics, annual conference series, pages 409-412, New York, USA, July 1994.

[7] D. Sato et al., “Soft tissue pushing operation using a haptic interface for simulation of brain tumor resection”, Journal of Robotics and Mechatronics, vol. 18, pages 634-642, 2006.

[8] C. Luciano, P. Banerjee, M.G. Lemole, F. Charbel, “Second generation haptic ventriculostomy simulator using the ImmersiveTouch™ system”, Proceedings of MMVR 14, Long Beach, CA, USA, 2008.

[9] M. Aschke, C.R. Wirtz, J. Raczkowsky, H. Wörn, S. Kunze, “Augmented reality in operating microscopes for neurosurgical interventions”, in Wolf and Strock, editors, Proceedings of the 1st International IEEE EMBS Conference on Neural Engineering, pages 652-655, 2003.

[10] 3DSlicer, URL: http://www.slicer.org/ [status January 2008].

[11] FARO, URL: http://www.faro.com [status January 2008].

[12] H3D, URL: http://www.h3dapi.org/ [status January 2008].

[13] G. van den Bergen, Collision Detection in Interactive 3D Environments, Elsevier Morgan Kaufmann, San Francisco, USA, 2004.

[14] OpenHaptics, URL: http://www.sensable.com/products-openhaptics-toolkit.htm [status October 2008].
