
Medical Robotics, Navigation and Visualization

(MRNV 2004)

Remagen, March 11 - 12, 2004

BOOK OF ABSTRACTS

in cooperation with:

center of advanced european studies and research
Europäische Akademie


Medical Robotics, Navigation and Visualization (MRNV 2004)
Book of Abstracts
Edited by Thorsten M. Buzug, Department of Mathematics and Technology, RheinAhrCampus Remagen
Compilation: Dr. Anke Hülster and Michael Böttcher, Bureau of Technology Transfer, RheinAhrCampus Remagen
RheinAhrCampus Remagen, Südallee 2, 53424 Remagen, Germany
[email protected], www.rheinahrcampus.de/mrnv2004
Publisher: Kreartive Konzepte, Beate Surek, Remagen
Print: Druckhaus optiprint GmbH
ISBN: 3-9807690-5-4
© 2004 RheinAhrCampus Remagen


Scope and Aim

The application of computer-aided planning, navigation and robotics in surgery provides significant advantages, combining today's sophisticated techniques of patient-data visualization with the flexibility and precision of novel robots. Robotic surgery is going to revolutionize surgical procedures. Augmented with 3D image-guidance technology, these tools give finer control over sensitive movements in diseased areas and therefore allow more surgical procedures than ever before to be performed using minimally invasive techniques. MRNV 2004 is a scientific workshop on new robotic procedures in all medical application areas. The workshop brings together scientific, medical and application experts from university, clinical and commercial sites.

Organization

Thorsten M. Buzug (Workshop Chair)
RheinAhrCampus Remagen, Südallee 2, D-53424 Remagen, Germany
Tel.: +49 (0) 26 42 / 932 318, E-mail: [email protected]

Tim C. Lueth (Program Chair)
Charité Campus Virchow, Humboldt University, Augustenburger Platz 1, 13353 Berlin, Germany
Tel.: +49 (0) 30 / 450 555 131, E-mail: [email protected]

Program Committee

Nicholas Ayache (INRIA Sophia Antipolis), Thorsten Buzug (RheinAhrCampus Remagen), Thomas Christaller (FhG AIS, St. Augustin), Olaf Dössel (University Karlsruhe), Rudolf Fahlbusch (University Erlangen-Nürnberg), Toshio Fukuda (University Nagoya), Heinz Handels (UKE University Hamburg), Ulrich Hartmann (RheinAhrCampus Remagen), Stefan Haßfeld (University Heidelberg), David Hawkes (King's College University London), Peter Hering (University Düsseldorf), Gerd Hirzinger (DLR Oberpfaffenhofen), Dietrich Holz (RheinAhrCampus Remagen), Erwin Keeve (Caesar Bonn), Ron Kikinis (Harvard Medical School Boston), Frithjof Kruggel (University Leipzig), Heinz U. Lemke (Technical University Berlin), Steffen Leonhardt (Helmholtz Institute RWTH Aachen), Sven Loncaric (University Zagreb), Tim Lueth (Humboldt University Berlin), Seong K. Mun (University Georgetown), Wolfgang Niederlag (Dresden-Friedrichstadt General Hospital), Frank Pasemann (FhG AIS, St. Augustin), Wolf Rathgeber (Europäische Akademie Bad Neuenahr), Torsten Reichert (University Mainz), Georg Schmitz (RheinAhrCampus Remagen), Jocelyne Troccaz (University Grenoble, CNRS, La Tronche), Max Viergever (University Utrecht), Heinz Wörn (University Karlsruhe), Gerhard Wahl (University Bonn).

Local Organization Committee

Tobias Bildhauer, Michael Böttcher, Holger Dörle, Dieter Gruschinski, Anke Hülster, Elvira Kluge, Marie-Sophie Lafontaine, Birgit Lentz, Kerstin Lüdtke-Buzug, Volker Luy, Gisela Niedzwetzki, Waltraud Ott and Dirk Thomsen.


Preface

As the district governor, I would like to welcome you to Ahrweiler county and to thank you for accepting the invitation of the RheinAhrCampus Remagen and the Charité of the Humboldt University of Berlin to our "Health and Fitness Region". I am very proud to host this large and top-class scientific event in Ahrweiler county. Experts expect medicine to change more in the next 20 years than it has over the last 2,000. The British author Aldous Huxley caricatured this progress: "Medical research has made such progress that there is hardly a healthy human being left."

The Fraunhofer Institute for Systems and Innovation Research in Karlsruhe has pointed out that medical technology is one of the key technologies with the largest potential worldwide. Germany is a leader in this field: the results of our scientists and engineers are at the forefront internationally.

New imaging methods deliver data without burdening the patient or causing pain. Physicians can plan an operation precisely in a virtual environment, lasers and endoscopes permit exactly targeted interventions, and surgeons can navigate their instruments reliably with technical assistance. Although some mockers call it "Nintendo surgery", computers and robots in operating rooms will soon be as normal as scalpel, needle and thread. They cannot really substitute for the human surgeon, but they can help him operate more accurately and safely.

Apparatus for testing and diagnostics, from ultrasound, lasers and magnetic resonance tomography to radiation-therapy equipment and surgical instruments, is a domain of German industry. Digital medicine, with its modern imaging methods, carries a high level of innovative power, and with it Germany can open up new economic potential. That is why I am convinced that Ahrweiler county has backed the right horse with "Medical Technology" as a field of study at the university of applied sciences here in Remagen.

The RheinAhrCampus offers the best conditions for innovative university education and research at a very high level: state-of-the-art equipment, from computer and magnetic resonance tomographs, ultrasound and endoscope technology to thermography, is all available. The laser laboratories are internationally renowned. The young and highly motivated team of professors is characterised by outstanding flexibility and unconventional approaches to management. That is why the RheinAhrCampus has an excellent standing in industry.

Our university of applied sciences is the "flagship" of the health sector in Ahrweiler county. Scientific events like "Medical Robotics, Navigation and Visualization" underline our growing significance as an important health and fitness region with numerous institutions and enormous potential.

With our "Innovation Park Rhineland" in Grafschaft and the "Innovation and Start-up Center" in Sinzig we are reaching out to the industrial segment of medical technology, together with further institutions also located in Sinzig: the "Scientific Institute of the Pharmaceutical Manufacturers Research Association" and the "Central Institute for Pharmaceutical Research", a joint establishment for technology transfer and advanced training for medium-sized companies.

Three spa towns of the most modern standard and famous medicinal mineral springs, such as Apollinaris, "The Queen of Table Waters", are further attractions. Alongside excellent restaurants and hotels, the delicious red wine from the Ahr, Germany's largest red-wine-growing region, is an international figurehead of our area. Last but not least, I wish you an interesting conference with many new ideas for your daily work. I hope you will enjoy our beautiful "Ahrweiler Health and Fitness Region" and come back again soon!

Landrat Dr. Jürgen Pföhler


Cooperating Societies

Many thanks go to the following cooperating societies:
DGBMT (Deutsche Gesellschaft für Biomedizinische Technik) of the VDE
CURAC (Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie)
FhG – AIS (Fraunhofer Society – Institute of Autonomous Intelligent Systems)
CAESAR (Center of Advanced European Studies and Research)
European Academy Bad Neuenahr/Ahrweiler

Acknowledgments

The organizers thank the German Federal Government for supporting the MRNV 2004 workshop.



Time Schedule

Thursday Morning Sessions (March 11, 2004)

08:00 - 08:30  Welcome Coffee, Plenary Foyer
08:30 - 08:45  Opening Remarks: T. Buzug and T. Lueth
08:45 - 09:15  Key Note I (K1): Image Guidance of Radiological and Surgical Interventions. Wiro J. Niessen, L. Wilbert Bartels, Max A. Viergever, University Medical Center Utrecht, The Netherlands. Chairs: Heinz Handels (University Hospital Hamburg-Eppendorf), Ulrich Hartmann (RheinAhrCampus Remagen)
09:15 - 10:15  Registration I (RE1 - RE4). Chairs: Heinz Handels, Ulrich Hartmann
10:15 - 10:45  Coffee Break, Plenary Foyer
10:45 - 11:45  Advanced Navigation and Motion Tracking I (N1 - N4). Chairs: Peter Hering (University of Düsseldorf), Alexandra Branzan Albu (Laval University, Canada)
11:45 - 12:15  Quick Plenary Introduction to Poster Contributions I (P01 - P15, 15 posters, 2 min. each). Chairs: Peter Hering, Alexandra Branzan Albu


Thursday Afternoon Sessions (March 11, 2004)

12:15 - 13:00  Lunch Break, Cafeteria
13:00 - 14:00  Calibration and Accuracy Analysis (A1 - A4). Chairs: Frithjof Kruggel (University of Leipzig), Hiroshi Iseki (Tokyo Women's Medical University)
14:00 - 14:30  Quick Plenary Introduction to Poster Contributions II (P16 - P30, 15 posters, 2 min. each). Chairs: Frithjof Kruggel, Hiroshi Iseki
14:30 - 14:45  Coffee Break, Plenary Foyer
14:45 - 15:45  Poster Session (P01 - P30), Plenary Foyer
15:45 - 17:00  Clinical Case Studies (C1 - C5). Chairs: Erwin Keeve (caesar Bonn), Olaf Dössel (University of Karlsruhe)
17:00 - 18:00  Advanced Navigation and Motion Tracking II (N5 - N8). Chairs: Wolf Rathgeber (Europäische Akademie Bad Neuenahr), Dietrich Holz (RheinAhrCampus Remagen)
19:30          Workshop Dinner, Boat Tour


Friday Morning Sessions (March 12, 2004)

08:00 - 08:45  Wake-Up Coffee, Plenary Foyer
08:45 - 09:15  Key Note II (K2): Modelling and Registration in Image Guided Interventions. David Hawkes, GKT School of Medicine, King's College London, UK. Chairs: Heinz U. Lemke (Technical University of Berlin), Jens Bongartz (RheinAhrCampus Remagen)
09:15 - 10:00  Registration II (RE5 - RE7). Chairs: Heinz U. Lemke, Jens Bongartz
10:00 - 10:30  Coffee Break, Plenary Foyer
10:30 - 12:00  Simulation and Modelling (S1 - S6). Chairs: Lutz-P. Nolte (University of Bern), Luc Soler (University Clinic of Strasbourg)


Friday Afternoon Sessions (March 12, 2004)

12:00 - 12:45  Lunch Break, Cafeteria
12:45 - 13:15  Key Note III (K3): Future Visions of Modern Image Guided Procedures. Lutz-P. Nolte, MEM Research Center for Orthopaedic Surgery, Institute for Surgical Technology and Biomechanics, University of Bern, Switzerland. Chairs: Thorsten Buzug (RheinAhrCampus Remagen), Georg Schmitz (RheinAhrCampus Remagen)
13:15 - 14:15  Robotic Interventions (RO1 - RO4). Chairs: Thorsten Buzug, Georg Schmitz
14:15 - 14:45  Coffee Break, Plenary Foyer
14:45 - 15:45  Sensor-Feedback Systems and Augmented Reality (AR1 - AR4). Chairs: Tim Lueth (Humboldt University of Berlin), Joerg Raczkowsky (University of Karlsruhe)
15:45 - 16:00  Resume: T. Buzug and T. Lueth


Contents

Registration I
RE1  Stereotactic Treatment Planning Using Fused Multi-Modality Imaging
     K. Hamm, G. Surber, M. Schmücking, R. Aschenbach, G. Kleinert and R. P. Baum ..... 1
RE2  Non-Rigid Registration of Intraoperatively Acquired 3D Ultrasound Images of Brain Tumors
     M. Letteboer, P. Hellier, D. Rueckert, P. Willems, J. W. Berkelbach and W. Niessen ..... 2
RE3  Comparison of Different Registration Methods for Navigation in Craniomaxillofacial Surgery
     M. Zinser, R. A. Mischkowski, M. Siessegger, A. Kübler and J. E. Zöller ..... 3
RE4  Localisation of Moving Targets for Navigated Radiotherapy
     L. Vences, O. Sauer, M. Roth, K. Berlinger, M. Doetter and A. Schweikard ..... 4

Advanced Navigation and Motion Tracking I
N1   Clinical Relevance of Preoperative CT Based Computer Aided 3D Planning in Hepatobiliary Surgery and Living Related Liver Transplantation
     J. Harms, H. Bourquain, K. Oldhafer, T. Kahn, J. Fangmann, H.-O. Peitgen and J. Hauss ..... 5
N2   Analysis of Drill Sound in Spine Surgery
     I. Boesnach, M. Hahn, J. Moldenhauer, Th. Beth and U. Spetzger ..... 6
N3   Experimental Setup for Navigation for Coronary Interventions
     J. Borgert, H. Timinger, S. Krueger and R. Grewer ..... 7
N4   Beating Heart Tracking in Robotic Surgery Using 500 Hz Visual Servoing, Model Predictive Control and an Adaptive Observer
     R. Ginhoux, J. A. Gangloff, M. F. de Mathelin, L. Soler, M. M. Arenas Sanchez and J. Marescaux ..... 8

Calibration and Accuracy Analysis
A1   Non-Invasive Intraoperative Imaging Using Laser Radar System in Hip-Joint Replacement Surgery
     G. Kamucha and G. Kompa ..... 9
A2   Accuracy in Computer Assisted Implant Dentistry: Image Guided Template Production vs. Burr Tracking
     G. Widmann, R. Widmann, E. Widmann and R. J. Bale ..... 10
A3   3D-Accuracy Analysis of Fluoroscopic Planning and Navigation of Bone-Drilling Procedures
     J. A. K. Ohnsorge, E. Schkommodau, D. C. Wirtz, J. E. Wildberger, A. Prescher and C. H. Siebert ..... 11
A4   Accuracy of Fluoroscope and Navigated Controlled Hind- and Midfoot Correction of Deformities
     J. Geerling, S. Zech, D. Kendoff, T. Hüfner, M. Richter and C. Krettek ..... 12

Clinical Case Studies
C1   Resection of Bony Tumors within the Pelvis – Hemipelvectomy Using Navigation
     J. Geerling, D. Kendoff, L. Bastian, E. Mössinger, M. Richter, T. Hüfner and C. Krettek ..... 13
C2   f-MRI Integrated Neuronavigation – Lesion Proximity to Eloquent Cortex as Predictor for Motor Deficits
     R. Krishnan, H. Yahya, A. Szelényi, E. Hattingen, A. Raabe and V. Seifert ..... 14
C3   Trends and Perspectives in Computer-Assisted Dental Implantology
     K. Schicho, G. Wittwer, A. Wagner, R. Seemann and R. Ewers ..... 15
C4   The Experience of the Working Group for Computer Assisted Surgery at the University of Cologne
     R. A. Mischkowski, M. Zinser, M. Siessegger, A. Kübler and J. E. Zöller ..... 16
C5   Fluoroscopic Navigation of the Dynamic Hip Screw (DHS): an Experimental Study
     D. Kendoff, M. Kfuri Jr., J. Geerling, T. Gösling, M. Citak and C. Krettek ..... 17

Advanced Navigation and Motion Tracking II
N5   Occlusion-Robust, Low-Latency Optical Tracking Using a Modular Scalable System Architecture
     A. Köpfle, M. Schill, M. Rautmann, M. L. R. Schwarz, P. P. Pott, A. Wagner, R. Männer, E. Badreddin and P. Weiser ..... 18
N6   Development of Autoclavable Reflective Optical Markers for Navigation Based Surgery
     D. Schauer, T. Krüger and T. Lueth ..... 19
N7   Experimental Application of Ultrasound-Guided Navigation in Head and Neck
     I. Arapakis, J. Schipper and R. Laszig ..... 20
N8   Iso-C 3D Navigated Drilling of Osteochondral Defects of the Talus (A Cadaver Study)
     M. Citak, J. Geerling, D. Kendoff, T. Hüfner, M. Richter, M. Kfuri and C. Krettek ..... 21

Registration II
RE5  Laser Surface Scanning for Registration
     R. Krishnan, A. Raabe and V. Seifert ..... 22
RE6  Using the AWIGS System for Preparation of Computer Aided Surgery
     H. Knoop, J. Raczkowsky, U. Wyslucha, T. Fiegele and H. Wörn ..... 23
RE7  Ultra-Fast Holographic Recording and Automatic 3D Scan Matching of Living Human Faces
     D. Giel, S. Frey, A. Thelen, J. Bongartz, P. Hering, A. Nüchter, H. Surmann, K. Lingemann and J. Hertzberg ..... 24

Simulation and Modelling
S1   Realistic Haptic Interaction for Computer Simulation of Dental Surgery
     A. Petersik, B. Pflesser, U. Tiede, K. H. Höhne, M. Heiland and H. Handels ..... 25
S2   Robot Simulation System for Precise Positioning in Medical Applications
     E. Freund, F. Heinze and J. Roßmann ..... 26
S3   Computer-Aided Suturing in Laparoscopic Surgery
     F. Nageotte, C. Doignon, M. de Mathelin, L. Soler, J. Leroy and J. Marescaux ..... 27
S4   Experimental Validation of a Force Prediction Algorithm for Robot Assisted Bone-Milling
     C. Plaskos, A. J. Hodgson and P. Cinquin ..... 28
S5   SKALPEL-ICT: Simulation Kernel Applied to the Planning and Evaluation of Image-Guided Cryotherapy
     A. Branzan Albu, D. Laurendeau, C. Moisan and D. Rancourt ..... 29
S6   Simulation of Radio-Frequency Ablation Using Composite Finite Element Methods
     T. Preusser, F. Liehr, U. Weikard, M. Rumpf, S. Sauter and H.-O. Peitgen ..... 30

Robotic Interventions
RO1  Principles of Navigation in Surgical Robotics
     D. Henrich and P. Stolka ..... 31
RO2  Robotic Surgery in Neurosurgical Field
     H. Iseki, Y. Muragaki, S. Oomori, K. Nishizawa, M. Hayashi, R. Nakamura and I. Sakuma ..... 32
RO3  From the Laboratory to the Operating Room: Usability Testing of LER, the Light Endoscope Robot
     P. Berkelman, E. Boidard, P. Cinquin and J. Troccaz ..... 33
RO4  Safety of Surgical Robots in Clinical Trials
     W. Korb, D. Engel, R. Boesecke, G. Eggers, B. Kotrikova, H. Knoop, R. Marmulla, J. Raczkowsky, N. O'Sullivan, H. Wörn, J. Mühling and S. Hassfeld ..... 34

Sensor-Feedback Systems and Augmented Reality
AR1  Virtual Reality, Augmented Reality and Robotics in Digestive Surgery
     L. Soler, N. Ayache, S. Nicolau, X. Pennec, C. Forest, H. Delingette, D. Mutter and J. Marescaux ..... 35
AR2  Palpation Imaging and Guidance Using a Haptic Sensor Actuator System for Medical Applications
     W. Khaled, S. Reichling, O. T. Bruhns, S. Egersdörfer, G. Monkman, H. Böse, M. Baumann, A. Tunayar, H. Freimuth, A. Lorenz, A. Pesavento and H. Ermert ..... 36
AR3  In Vivo Study of Forces During Needle Insertions
     B. Maurin, L. Barbe, B. Bayle, P. Zanne, J. Gangloff, M. de Mathelin, A. Gangi, L. Soler and A. Forgione ..... 37
AR4  Teaching Bone Drilling: 3D Graphical and Haptic Simulation of a Bone Drilling Operation
     H. Esen, K. Yano and M. Buss ..... 38

Poster Contributions
P1   Optimisation of the Robot Placement in the Operating Room
     P. Maillet, P. Poignet and E. Dombre ..... 39
P2   Development of a Navigation System for TMS
     A. Wechsler, S. Woessner and J. Stallkamp ..... 40
P3   Robotics in Health Care: An Interdisciplinary Technology Assessment Including Ethical Reflection
     M. Decker ..... 41
P4   State of the Art of Surgical Robotics
     P. P. Pott, A. Köpfle, A. Wagner, E. Badreddin, R. Männer, P. Weiser, H.-P. Scharf and M. L. R. Schwarz ..... 42
P5   Automatic Coarse Registration of 3D Surface Data in Oral and Maxillofacial Surgery
     T. Maier, N. Schön, M. Benz, E. Nkenke, F. W. Neukam, F. Vogt and G. Häusler ..... 43
P6   Iso-C 3D Accuracy-Control and Usefulness at Calcaneus Osteosynthesis
     D. Kendoff, M. Kfuri Jr., J. Geerling, M. Richter, T. Hüfner and C. Krettek ..... 44
P7   Calibration of a Stereo See-Through Head-Mounted Display
     S. Ghanai, T. Salb, G. Eggers, R. Dillmann, J. Mühling, R. Marmulla and S. Hassfeld ..... 45
P8   Sensor-Based Intraoperative Navigation for the Robot-Assisted MIC
     J. Stallkamp ..... 46
P9   SURGICOBOT: Surgical Gesture Assistance COBOT for Maxillo-Facial Interventions
     E. Bonneau, F. Taha, P. Gravez and S. Lamy ..... 47
P10  Micromachined Silicon 2-Axis Force Sensor for Teleoperated Surgery
     F. Van Meer and D. Esteve ..... 48
P11  MEDARPA – Implantation of Brachytherapy Catheters Using Augmented Reality
     S. Röddiger, D. Baltas, G. Sakas, M. Schnaider, S. Wesarg, P. Zogal, B. Schwald, H. Seibert, R. Kurek, T. Martin and N. Zamboglou ..... 49
P12  Robotic and Laser Aided Navigation for Dental Implants
     T. M. Buzug, U. Hartmann, D. Holz, G. Schmitz, P. Hering, J. Bongartz, M. Ivanenko, G. Wahl and Y. Pohl ..... 50
P13  Functional Mapping of the Cortex – Surface Based Visualization of Functional MRI
     R. Krishnan, A. Raabe, M. Zimmermann and V. Seifert ..... 51
P14  Automated Marker Detection for Patient Registration in Image Guided Neurosurgery
     R. Krishnan, E. Herrmann, R. Wolff, A. Raabe and V. Seifert ..... 52
P15  Navigation Error of Update Navigation System Based on Intraoperative MRI
     Y. Muragaki, T. Maruyama, H. Iseki, M. Sugiura, K. Suzukawa, K. Nambu, O. Kubo, K. Takakura and H. Tomokatsu ..... 53
P16  Improvement of Computer and Robot-Assisted Surgery at the Lateral Skull Base by Sensory Feedback
     D. Malthan, J. Stallkamp, F. Dammann, E. Schwaderer and M. M. Maassen ..... 54
P17  An Interactive Planning and Simulation Tool for Maxillo-Facial Surgery
     G. Berti, J. Fingberg, T. Hierl and J. G. Schmidt ..... 55
P18  Robotized Distraction Device for Soft Tissue Monitoring in Knee Replacement Surgery
     C. Marmignon, A. Leimnei and P. Cinquin ..... 56
P19  Accuracy Analysis of Vessel Segmentation for a LITT Dosimetry Planning System
     J. Drexl, V. Knappe, K. S. Lehmann, B. Frericks and H.-O. Peitgen ..... 57
P20  3D-Reconstruction and Visualization of Bone Mineral Density for the Ethmoid Bone
     C. Kober, R. Sader and H.-F. Zeilhofer ..... 59
P21  Craniofacial Endosseus Implant Positioning with Image-Guided Surgical Navigation
     J. Hoffmann, D. Troitzsch, C. Westendorff, F. Dammann and S. Reinert ..... 60
P22  Image-Guided Navigation for Interstitial Laser Treatment in Vascular Malformations in the Head
     J. Hoffmann, D. Troitzsch, C. Westendorff, U. Ernemann and S. Reinert ..... 61
P23  The Hybrid Approach to Minimally Invasive Craniomaxillofacial Surgery: Videoendoscopic-Assisted Intervention
     J. Hoffmann, D. Troitzsch, F. Dammann and S. Reinert ..... 62
P24  Minimally Invasive Navigation-Assisted Excision of Bone Tumor in the Temporo-Parietal Skull Base
     J. Hoffmann, D. Troitzsch, C. Westendorff, F. Dammann and S. Reinert ..... 63
P25  A Surgical Mechatronic Assistance System with Haptic Interface
     S. Pieck, P. Knappe, I. Gross and J. Wahrburg ..... 64
P26  Fluoroscopy Based Navigated Drilling of Four Osteonecrosis Lesions in One Patient
     M. Citak, J. Geerling, D. Kendoff, H. Wübben, C. Krettek and T. Hüfner ..... 65
P27  Geometrical Control Approaches for Minimally Invasive Surgery
     M. Michelin, P. Poignet and E. Dombre ..... 66
P28  Combined Tracking System for Augmented Reality Assisted Treatment Device
     Wu Ruoyun, Md Irwan Kassim, Wee Siew Bock and Ng Wan Sing ..... 67
P29  Generation of Well-Formed Image by Mosaicing Split Images: An Approach Based on Quad-tree Technique
     R. Babu, G. H. Kumar and P. Nagabhushan ..... 68
P30  Prospective Head Motion Compensation by Updating the Gradients of the MRT
     C. Dold, M. Zaitsev and B. Schwald ..... 69


RE1: Registration I, Thursday, 9:15

Stereotactic Treatment Planning Using Fused Multi-Modality Imaging

K. Hamm1), G. Surber1), M. Schmücking3), R. Aschenbach2), G. Kleinert1), A. Niesen3) and R. P. Baum3)

1) Department for Stereotactic Neurosurgery and Radiosurgery, Helios Klinikum Erfurt, Nordhäuser Straße 74, D-99089 Erfurt, Germany
2) Institute for Diagnostic Imaging, Helios Klinikum Erfurt, Nordhäuser Straße 74, D-99089 Erfurt, Germany
3) Clinic for Nuclear Medicine / PET Centre, Zentralklinik Bad Berka, Robert-Koch-Allee 9, D-99437 Bad Berka, Germany

Purpose: An important prerequisite for minimally invasive stereotactic or neuronavigated surgery and for radiosurgery (RS) or stereotactic radiotherapy (SRT) is exact target definition, even for very small target volumes. Together with high-resolution imaging modalities, image fusion, 3D treatment planning and accurate patient positioning, this ensures the required high precision of the entire surgical or radiation treatment process. Based on innovative software solutions, image fusion can deliver the desired data superposition within a short time. Metabolic images from PET data sets and, in the case of arteriovenous malformations, stereotactic angiography projections (DSA) can also be integrated. Special conditions and advantages of BrainLAB's fully automatic image fusion are discussed.

Methods: Within the last 3.5 years, 535 patients have been treated with stereotactically guided biopsy/puncture or RS/SRT. For each treatment planning procedure, a fully automatic image fusion of all relevant image modalities was performed, visually controlled and, if necessary, corrected. The planning CT (slice thickness 1.25 mm) and, for arteriovenous malformations, a stereotactic DSA were acquired using head fixation with a stereotactic arc or, in the case of stereotactic radiotherapy, a relocatable stereotactic mask. Different MRI sequences (slice thickness 1-2 mm) and, in 17 cases, F-18-FDG or FET PET (slice thickness 3.4 mm) were acquired without head fixation.

Results: The fully automatic image fusion of the different MRI, CT and PET series could be realized for each patient. Only in a few cases did it seem necessary to correct the fusion manually after visual evaluation, and this did not generally improve fusion quality. The precision of the automatic fusion depended above all on the image quality of the modalities used, especially the selected slice thickness and, for MRI, the field homogeneity, but also on the degree of patient movement during data acquisition. Fusing thin slices of a region of interest with a complete head data set was also possible with good quality. In general, target volumes could be outlined more exactly by using all the image information of the fused data sets.

Conclusions: The additional information provided by PET, CT and different MRI scans enables us to improve target definition for stereotactic or neuronavigated treatment planning. The automatic image fusion used here is an efficient tool that allows fast (approx. 1-2 min) and precise fusion of all available image data sets, depending on their acquisition quality.
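The abstract does not state which similarity measure BrainLAB's automatic fusion optimizes; a measure commonly used for multi-modality rigid registration of CT, MRI and PET is mutual information. The following is a minimal sketch of the measure itself (not of the full registration); the bin count and the synthetic test images are arbitrary choices for illustration.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram estimate of mutual information between two images.

    Higher values indicate that one image's intensities predict the
    other's well, which is what a fusion algorithm maximizes while
    varying the spatial transformation.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)               # marginal of image A
    py = pxy.sum(axis=0)               # marginal of image B
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# Synthetic check: a slice paired with a correlated "other modality"
# shares far more information than with an unrelated image.
rng = np.random.default_rng(0)
ct = rng.random((64, 64))                       # stand-in for a CT slice
mri = 0.5 * ct + 0.1 * rng.random((64, 64))     # correlated stand-in "MRI"
unrelated = rng.random((64, 64))
assert mutual_information(ct, mri) > mutual_information(ct, unrelated)
```

In a registration loop, one image would be resampled under a candidate transformation before evaluating this measure, and the transformation maximizing it would be kept.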


RE2: Registration I, Thursday, 9:30

Non-Rigid Registration of Intraoperatively Acquired 3D Ultrasound Images of Brain Tumors

M. Letteboer1), P. Hellier2), D. Rueckert3), P. Willems4), J. W. Berkelbach4) and W. Niessen1)

1) Image Sciences Institute, University Medical Center, Heidelberglaan 100, NL-3584 CX Utrecht, The Netherlands
2) Projet Vista, IRISA/INRIA-CNRS, Campus Universitaire de Beaulieu, F-35042 Rennes Cedex, France
3) Department of Computing, Imperial College, South Kensington Campus, 180 Queen's Gate, SW7 2AZ London, UK
4) Department of Neurosurgery, University Medical Center, Heidelberglaan 100, NL-3584 CX Utrecht, The Netherlands

Introduction: In image-guided neurosurgical interventions the position of the tumor is determined by navigation based on preoperatively acquired MR data. The image-guided surgery systems currently available assume that no brain deformations occur during the intervention; however, brain deformations of 10 mm and more have been reported. As a consequence, during surgery the tumor location and shape with respect to the preoperative MR are uncertain. To correct for these deformations of the tumor and surrounding brain tissue, 3D ultrasound data acquired at different stages of surgery can be used in combination with the preoperatively acquired MR data. To calculate the deformations between subsequent ultrasound volumes, non-rigid registration techniques are necessary.

Materials and Methods: During the image-guided neurosurgical procedures, the ultrasound probe was tracked using a Polaris camera, which is part of the neuronavigation system. The relative positions of the 2D scans were used to reconstruct a 3D ultrasound volume. For four patients, at least two ultrasound volumes were acquired: one prior to opening the dura and one after opening the dura but prior to tumor removal.
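Reconstructing a 3D volume from tracked 2D scans amounts to mapping each pixel through an image-to-probe calibration and the tracked probe pose. The abstract gives no calibration details, so the function name, the 4x4 `T_probe_image` calibration matrix and the pixel spacings below are illustrative assumptions; only the coordinate chain itself is the standard freehand-3D-ultrasound construction.

```python
import numpy as np

def pixel_to_world(T_tracker_probe, T_probe_image, px, py, sx, sy):
    """Map pixel (px, py) of a tracked 2D ultrasound frame into 3D tracker space.

    T_tracker_probe: 4x4 probe pose reported by the tracking camera.
    T_probe_image:   4x4 image-to-probe calibration (assumed known).
    sx, sy:          pixel spacing in mm.
    """
    p_image = np.array([px * sx, py * sy, 0.0, 1.0])  # image plane lies at z = 0
    return (T_tracker_probe @ T_probe_image @ p_image)[:3]

# With identity pose and calibration, the 3D point is just the scaled pixel.
p = pixel_to_world(np.eye(4), np.eye(4), 10, 20, 0.1, 0.1)
assert np.allclose(p, [1.0, 2.0, 0.0])
```

Repeating this for every pixel of every tracked frame, and binning the resulting points into a voxel grid, yields the reconstructed 3D volume.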

Non-Rigid Registration Methods: The goal of the registration process is to find the optimal transformation mapping all points of the ultrasound volume acquired after opening the dura to the ultrasound volume acquired prior to opening the dura. A comparison is made between two non-rigid registration methods: the first is a spline-based free-form deformation algorithm described by Rueckert et al. [1]; the second is an optical-flow-based algorithm described by Hellier et al. [2]. For both methods the optimal parameter settings for this specific task were determined.

Results and Discussion: For the four patients evaluated, the overlap of the segmented tumor tissue was used as a quality measure. The overlap after registration based on the image-guided surgery system was on average 75%, which implies a considerable brain shift due to opening of the dura. After rigid registration the overlap increased to 85% on average. After non-rigid registration it increased to 93% with the free-form deformation algorithm and to 91% with the optical flow algorithm.

The correlation between the ultrasound volumes and the average distance between the tumor surfaces after registration (from about 2 mm before registration to 0.61 mm for free-form deformation and 0.63 mm for optical flow) also indicate that the free-form deformation algorithm performs slightly better than the optical flow algorithm. Both, however, perform much better than registration based on the image-guided surgery system alone or rigid registration. An important note is that the optical flow method is, in our current implementation, more than 100 times faster than the free-form deformation method. Both methods are currently too slow for intraoperative use, so speed-up techniques are required.

References:
[1] D. Rueckert et al., "Nonrigid Registration Using Free-Form Deformations: Application to Breast MR Images", IEEE Transactions on Medical Imaging, Vol. 18(8), pp. 712-721, 1999.
[2] P. Hellier et al., "Hierarchical Estimation of a Dense Deformation Field for 3D Robust Registration", IEEE Transactions on Medical Imaging, Vol. 20(5), pp. 388-402, 2001.


RE3: Registration I, Thursday, 9:45

Comparison of Different Registration Methods for Navigation in Craniomaxillofacial Surgery

M. Zinser, R. A. Mischkowski, M. Siessegger, A. Kübler and J. E. Zöller

Department of Craniomaxillofacial and Plastic Surgery, University of Cologne, Kerpener Str. 62, D-50937 Köln, Germany

Introduction: The correlation between the surgical site and the corresponding image data set in the operating room is the most time-consuming non-operative process for the surgeon.

Recent innovations in laser scanning technology provide a potentially useful tool for three-dimensional surface registration in image-guided surgery. The purpose of this study is to evaluate the clinical reliability of this technique in comparison with the conventional registration tools, headset and skin markers, in craniomaxillofacial procedures using image-guided navigation.

Methods: In an experimental setting, a stable anthropomorphic skull model with prelabeled markers was scanned and registered with laser surface scanning (Z-touch, BrainLAB) and marker-based algo-rithms (skin markers and head-set). The registration protocol was then repeated 60 times.

Registration error as well as accuracy were calculated.

In a clinical setting, totally seventy-two patients with different indications for oral and craniomaxillo-facial surgery were planned for image-guided surgery using the same passive infrared surgical naviga-tion system (VectorVision, BrainLAB) and marker based algorithms (skin-markers and head-set).

Registration errors were noted. The clinical application accuracy was determined from the localization deviation of anatomical landmarks (teeth).

Results: In the experimental protocol, registration with the headset showed the most reliable results, with a deviation of less than 1 mm in 74% of cases, versus 42% for the skin markers and 40% for the laser scanning (Z-touch).

During various clinical procedures involving oral and craniomaxillofacial surgery, the best results were obtained when registration was performed with the headset, which showed a deviation of less than 2 mm in 94% of cases, versus 80% for skin markers and 68% for the laser scanner (Z-touch).

Conclusion: The results show a significant difference between the external registration tools (headset and skin markers) and the laser scanning technique used in this study. The three-dimensional laser surface scanning technique may nevertheless be an interesting and useful approach to register the patient for


RE4

Registration I Thursday, 10:00

Localisation of Moving Targets for Navigated Radiotherapy

L. Vences, O. Sauer, M. Roth, K. Berlinger, M. Doetter and A. Schweikard

Klinik und Poliklinik für Strahlentherapie, Universität Würzburg, Josef-Schneider-Str. 11, D-97080 Würzburg, Germany

For an effective radiotherapy treatment with external beams it is essential to know the tumour's position. Tumours lying in the lung or abdomen can be displaced by several centimetres during the treatment, e.g. due to weight loss, different filling of the bladder or rectum, or respiratory movement. Therefore, precise tracking of the tumour's position during treatment is an important issue in radiotherapy.

A radiotherapy treatment is split into fractions; a treatment course generally takes several weeks. Therefore, both interfractional and intrafractional organ movement have to be taken into account. Prior to a treatment course, the target is delineated in a 3D patient model acquired with CT. The beams and their isocentre are planned on the basis of this planning CT. Prior to each treatment session, a CT in the vicinity of the target is performed in order to capture the actual patient geometry. This examination is done with the patient on the treatment table in the treatment position. With a global registration of the planning and the treatment CT data, the target volume and the actual isocentre may be found in the treatment data set. However, this approach is not precise enough if the tumour tissue moves relative to the patient's bony structure. To account for this effect, a local registration of a volume of interest (VOI) is necessary in a second step. Only a small correction of the target position is expected, which allows for a small search space around the VOI and results in a fast registration calculation.

3D registration was performed with a mutual-information registration algorithm, as implemented by the Medical Application Research Group at the TU Munich. At the Clinic for Radiotherapy of the University of Würzburg, several tools were programmed in order to obtain software for clinical use, among them: reading images from different DICOM servers, a GUI for the VOI definition, and the local registration capability with output of the translation and rotation relative to an external fiducial system.
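The mutual-information similarity measure underlying such CT-CT registration can be estimated from a joint intensity histogram. The following Python sketch is purely illustrative (not the TU Munich implementation); the test images and bin count are hypothetical:

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Mutual information between two equally sized intensity lists,
    estimated from a joint histogram -- a common similarity measure
    for intensity-based image registration."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    # Quantize intensities into `bins` levels (assumes values in [0, 256)).
    qa = [min(v * bins // 256, bins - 1) for v in img_a]
    qb = [min(v * bins // 256, bins - 1) for v in img_b]
    joint = Counter(zip(qa, qb))
    pa = Counter(qa)
    pb = Counter(qb)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab * n * n / (pa * pb) = p_ab / (p_a * p_b)
        mi += p_ab * math.log(p_ab * n * n / (pa[a] * pb[b]))
    return mi

# Identical images carry more mutual information than scrambled ones.
ref = [(i * 7) % 256 for i in range(256)]
print(mutual_information(ref, ref) > mutual_information(ref, ref[::-1]))
```

A registration loop would maximize this score over candidate translations and rotations of the VOI.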

We performed 180 tests of our application with eleven pairs of CT-CT volumes. We found that in 75% of the cases the local registration covers the region of interest better than (62%) or at least as well as (13%) the global registration. The remaining 25% correspond to body parts with higher flexibility, such as the spine; in these cases both local and global registration performed poorly. If the result is not satisfactory, the user can mark the isocentre in the treatment volume with the mouse and the program calculates its coordinates. In our tests, the local registration needed 20 to 60 seconds on a 2.4 GHz Pentium 4 PC with 512 MB RAM.

The developed tools form the basis for future developments treating intrafractional organ movement. The trajectory of the target within the respiratory cycle may be found by means of local registration of dynamic CT scans. By correlating these data with a real-time signal of the respiratory cycle, control of a treatment machine will become possible.


Advanced Navigation and Motion Tracking I Thursday, 10:45 N1

Clinical Relevance of Preoperative CT Based Computer Aided 3D Planning in Hepatobiliary Surgery and Living Related Liver Transplantation

J. Harms1), H. Bourquain2), K. Oldhafer2,4), T. Kahn3), J. Fangmann1), H.-O. Peitgen2) and J. Hauss1)

1) Chirurgische Klinik II, Klinik für Abdominal-, Transplantations- und Gefäßchirurgie, Universität Leipzig, Liebigstr. 20a, D-04103 Leipzig, Germany
2) MeVis – Centrum für Medizinische Diagnosesysteme und Visualisierung, Universitätsallee 29, D-28359 Bremen, Germany
3) Klinik für Diagnostische und Interventionelle Radiologie, Universität Leipzig, Liebigstr. 20a, D-04103 Leipzig, Germany
4) Chirurgische Klinik, Allgemeines Krankenhaus Celle, Siemensplatz 4, D-29223 Celle, Germany

Introduction: In the last few years, hepatobiliary and liver transplantation surgery has shown notable developments. Despite its general application for the assessment of resectability of hepatobiliary tumors and for patient evaluation for living donor liver transplantation (LDLT), the use of conventional CT technology remains problematic. Developments applying mathematical methods to digital image data have enabled CT-based 3D visualizations, with the result that preoperative planning becomes more reliable and reproducible.

Material and methods: The initial experience with 3D CT visualizations was assessed in bile duct (n=4) and pancreatic tumors (n=12), living related liver donors (n=8) and in selected cases of liver transplant recipients (n=3). CT scans were performed with a 4-slice helical scanner (Siemens Volume Zoom®, Siemens, Erlangen, Germany). 3D modelling of the CT data set was performed by the IT research institute MeVis, Bremen, Germany. Image processing included segmentation of the anatomical and pathological structures, i.e. their extraction from the raw data. The centre lines of the vascular structures of interest were calculated. For the calculation of the individual vascular territories a hierarchical mathematical model was created. The data were implemented in a computerized operation planning system that defines resection planes depending on safety margins and vascular trees. Results were displayed one by one or in arbitrary combinations, both in 3D and overlaid on the original CT data.
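In its simplest form, the vascular-territory calculation mentioned above can be approximated by assigning each parenchyma voxel to the nearest vessel-branch centreline; the hierarchical model actually used is more elaborate. A toy Python sketch with hypothetical 2D coordinates:

```python
import math

def assign_territories(voxels, branches):
    """Approximate vascular territories by assigning each voxel to the
    branch whose centreline passes closest to it (a common
    nearest-neighbour approximation of territory models)."""
    territories = {name: [] for name in branches}
    for v in voxels:
        name = min(
            branches,
            key=lambda b: min(math.dist(v, c) for c in branches[b]),
        )
        territories[name].append(v)
    return territories

# Toy 2-D example: two branch centrelines and four parenchyma "voxels".
branches = {
    "left": [(0.0, 0.0), (0.0, 1.0)],
    "right": [(4.0, 0.0), (4.0, 1.0)],
}
voxels = [(0.5, 0.5), (1.0, 0.0), (3.5, 0.5), (4.0, 2.0)]
print({k: len(v) for k, v in assign_territories(voxels, branches).items()})
```

On real data the centrelines come from the vessel segmentation and the voxels from the segmented parenchyma, and the branch hierarchy is respected when merging territories.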

Results: In pancreatic tumors, involvement of the regional vascular supply was noted in 5/12 patients. Relevant arterial variants were detected in 2/12 cases. In central bile duct tumors (3/4), 3D visualization provided significant additional information: the spatial relationship of the tumor to crucial vascular structures could be clarified, allowing optimized assessment of resectability. In LDLT evaluation, arterial variants were detected in 3/8 cases; the results were confirmed by angiography. Significant variants of the hepatovenous vessels were registered as well (portal vein 2/8, hepatic veins 6/8, anatomic confirmation of the confluens venae 5/8). 3D representations were superior to angiography (portal vein) and contrast-enhanced 2D CT examination (hepatic veins). Besides the result of the histologic examination of the liver, 3D CT visualizations had a major impact on donor selection. As shown in a pediatric LDLT recipient with various anatomic anomalies, intraoperative display of the 3D visualization enabled "image guided surgery".

Conclusion: Multiple imaging approaches are used for diagnosis and treatment planning in hepatobiliary and pancreatic tumors and in LDLTs. CT-based 3D visualization not only enables clarification of the spatial anatomy but also offers an integrated preoperative view of delicate structures within the liver hilum and the hepatoduodenal ligament. In elective patients such as LDLTs, preoperative 3D planning is mandatory, as the technique is non-invasive, objective in anatomic and volumetric assessment, and allows exact prediction of the risk to the donor.


N2

Advanced Navigation and Motion Tracking I Thursday, 11:00

Analysis of Drill Sound in Spine Surgery

I. Boesnach1), M. Hahn1), J. Moldenhauer1), Th. Beth1) and U. Spetzger2)

1) Universität Karlsruhe, Institut für Algorithmen und Kognitive Systeme, Am Fasanengarten 5, D-76131 Karlsruhe, Germany
2) Neurochirurgische Klinik, Klinikum Karlsruhe, Moltkestraße 90, D-76133 Karlsruhe, Germany

One challenging task in spine surgery is to drill into a vertebra with high accuracy, e.g. to place pedicle screws. In particular, the vertebral arteries and the spinal cord must not be touched. During the drilling process, blood, surgical instruments and the small access to the spine worsen the surgeon's view. To maintain accuracy, however, the surgeon needs a good 3D imagination based on anatomical knowledge from pre- and intraoperative image data. Additionally, the surgeon receives haptic and acoustic feedback from the drilling device during the operation. In particular, the sound generated by the drill provides significant information about the tissue. Transitions between areas of different bone densities are highly correlated with changes in the drill sound. Especially the transition from the cancellous bone of the inner vertebra to the high-density outer bone structures comes along with a change in sound.

Though the latest computer systems for surgical planning and navigation try to assist the surgeon in the operating theatre, the feedback information described above is not used. However, sound information is independent of the current navigation data. Thus, it is a powerful add-on which provides information in case the navigation system is improperly set up or even fails; such problems arise from bad initial calibration or shifts of vertebrae in situ. The consideration of drill sound has not been studied by other research groups yet, so we could not rely on existing results. Thus, in a first step we had to find adequate positions for suitable microphones in a surgical test environment. We then recorded the sound during several drillings in vertebrae of a human torso. The recorded audio data was manually classified into "idle speed", "drill starting at the bone surface", "cancellous bone", and "break-through to the vertebral canal". Based on these classifications, we performed first studies with automatic recognition methods (hidden Markov models, neural networks and support vector machines).
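Typical inputs for such classifiers are short-time features of the drill sound. The following Python sketch is illustrative, not the authors' pipeline: it computes short-time energy and zero-crossing rate and shows, on a synthetic signal, how an energy jump could flag a bone transition:

```python
import math

def frame_features(signal, frame_len=256):
    """Short-time energy and zero-crossing rate per frame -- two basic
    features a drill-sound classifier (HMM, SVM, ...) could be fed with."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if a * b < 0
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

# Synthetic drill sound: quiet "cancellous bone" phase followed by a
# louder, higher-pitched "break-through" phase (purely illustrative).
quiet = [0.1 * math.sin(2 * math.pi * 5 * t / 256) for t in range(512)]
loud = [0.8 * math.sin(2 * math.pi * 20 * t / 256) for t in range(512)]
features = frame_features(quiet + loud)
energies = [e for e, _ in features]
print(energies[-1] > 10 * energies[0])  # energy jump marks the transition
```

A real system would feed sequences of such feature vectors into the HMM, neural network or SVM classifiers named above rather than thresholding a single feature.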

The objective of our work is the development of automatic real-time methods to analyse drill sounds in spine surgery. Such methods are very important to augment the surgeon's view in minimally invasive surgery. We expect that physicians will profit from warning mechanisms based on the sound analysis. Additionally, the methods can be used for the training of unskilled surgeons. Only little effort is needed to integrate the system into the operating theatre, and the purchase costs of a final system will be low.


Advanced Navigation and Motion Tracking I Thursday, 11:15 N3

Experimental Setup for Navigation for Coronary Interventions

J. Borgert1), H. Timinger1,2), S. Krüger1) and R. Grewer1)

1) Philips Research Laboratories, Division Technical Systems, Röntgenstrasse 24-26, D-22315 Hamburg, Germany
2) Department of Measurement, Control, and Microtechnology, University of Ulm, Albert-Einstein-Allee 41, D-89081 Ulm, Germany

Introduction and Purpose: We will present an experimental setup for the development and benchmarking of algorithms and applications for coronary interventions, using non-line-of-sight, motion-compensated navigation on coronary 3D roadmaps. The setup comprises a pneumatically driven dynamic heart phantom, whose design is based on a cooperation with Prof. O. Dössel [1], to simulate the motion of the coronaries due to heartbeat and respiration. It allows different heartbeat rates and respiratory cycle lengths to be selected, as well as the shape of the individual cycles. The setup can furthermore produce an artificial ECG signal and is equipped with a respiratory sensor. Roadmaps of the coronaries can be extracted from volumetric image data acquired using different modalities, like 3D-RX, CT, or MR. These modalities are interfaced to the experimental setup to allow for the import of 3D image data and, in the case of an interventional C-arm system, of 2D real-time image information. The setup furthermore comprises a magnetic tracking system (Northern Digital Aurora [2]) to perform spatial measurements of the position and orientation of interventional devices without line-of-sight restriction. It includes software to compensate the position and orientation measurements for motion due to heartbeat and respiration as well as for metal influence, which occurs due to the instrumentation in a usual catheter laboratory, e.g. C-arm systems. Furthermore, software to visualize the individual data present in an interventional application, e.g. real-time fluoroscopic images or a 3D visualization of the coronary roadmap with the position and orientation of the interventional device, can be selected and arranged specifically for a given interventional application. As a first step, we analyzed the reproducibility of the motion of the heart phantom due to heartbeat and respiration. This raw reproducibility of the heart phantom's motion is a key factor in assessing and benchmarking algorithms for motion compensation, as the compensated result can never perform better than the underlying setup.

Method: For a given heart rate and respiratory cycle length, the trajectory of a tracked catheter, introduced at a given position in the artificial coronaries, was recorded for several cycles, divided into subphases, and related using the artificial ECG signal and the respiratory sensor reading. The resulting reproducibility is then given by the standard deviation averaged over the individual subphases.
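The described measure can be sketched in a few lines of Python; the cycle data below are hypothetical 1-D catheter coordinates, not measurements from the phantom:

```python
import math

def reproducibility(cycles, n_subphases):
    """Average over subphases of the across-cycle standard deviation.

    `cycles` is a list of position traces (one list of scalar samples per
    heartbeat or respiratory cycle), each resampled to `n_subphases` points
    via the ECG / respiratory-sensor phase alignment.
    """
    stds = []
    for phase in range(n_subphases):
        samples = [cycle[phase] for cycle in cycles]
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        stds.append(math.sqrt(var))
    return sum(stds) / n_subphases

# Three recorded cycles of a hypothetical 1-D catheter coordinate in mm,
# already phase-aligned into 4 subphases.
cycles = [
    [0.0, 1.0, 2.0, 1.0],
    [0.2, 1.2, 2.2, 1.2],
    [0.1, 1.1, 2.1, 1.1],
]
print(round(reproducibility(cycles, 4), 3))
```

For 3-D positions the same computation would be applied per coordinate, or to the distance from the per-phase mean position.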

Results: The reproducibility of the motion due to heartbeat ranged from 0.44 mm at 60 bpm to 0.74 mm at 120 bpm. The reproducibility of the motion due to respiration ranged from 0.15 mm at a 6 s cycle length to 0.17 mm at a 3 s cycle length.

Conclusions: The measured reproducibility of the motion of the heart phantom is well below the diameter of coronaries that are subject to interventions (1 mm). According to Dodge et al. [3], only the distal part of the LAD, the L4d, has a diameter below 1 mm (data for a normal, right-dominant male). Thus the setup is suitable to simulate the motion of the heart and to develop and benchmark navigation applications for coronary interventions in a realistic scenario. First benchmarks of algorithms for motion compensation showed results of the same magnitude as the presented reproducibility of the motion of the heart phantom.

References:
[1] W. Sediono, O. Dössel, "Heart Phantom: A Simple Elastomechanical Model of Ventricle", Proceedings CARS 2002, p. 1112, 2002
[2] Northern Digital Inc., Waterloo, Ontario, http://www.ndigital.com
[3] J. T. Dodge Jr, B. G. Brown, E. L. Bolson, H. T. Dodge, "Lumen Diameter of Normal Human Coronary Arteries", Circulation, Vol. 86, No. 1, 1992


N4

Advanced Navigation and Motion Tracking I Thursday, 11:30

Beating Heart Tracking in Robotic Surgery Using 500 Hz Visual Servoing, Model Predictive Control and an Adaptive Observer

R. Ginhoux, J. A. Gangloff, M. F. de Mathelin, L. Soler, M. M. Arenas Sanchez and J. Marescaux

LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France

Motion tracking and compensation devices for computer-assisted surgery are attracting growing interest in the research community as well as in everyday clinical use. They appear to be the key to a safer interaction of robots with both patients and surgeons. Specific mechanical systems (e.g., [1]) and computer algorithms (e.g., [2]) have been presented for the analysis of the heart motion in thoracic surgery. Beating heart surgery, particularly coronary artery bypass grafting (CABG), is maybe the most challenging task for robots today. The motion of the heart limits the use of traditional approaches: it is complex and fast, created by the influence of both the respiratory motion and the electro-mechanical activity of the heart pump. The difficulty is actually twofold. On the one hand, an appropriate sensor or measurement device is needed to precisely estimate the motion with its full dynamics. On the other hand, tracking the motion with a robot requires an adequate control law that takes the robot dynamics into account.

This paper presents the first in-vivo results of beating heart tracking with a surgical robot arm in off-pump cardiac surgery. The tracking is performed in a high-speed 2D visual servoing scheme using a 500 frames per second video camera. The heart motion is measured by means of active optical markers that are put onto the heart surface. The amplitude of the motion is evaluated along the two axes of the image reference frame. It is a complex and fast motion that mainly reflects the influence of both the respiratory motion and the electro-mechanical activity of the myocardium. A model predictive controller is set up to track the two degrees of freedom of the observed motion by computing velocities for two of the robot joints. Predictive control requires a model of the robot dynamics. It is preferred to other strategies for its ability to filter random disturbances and to anticipate future references, provided they are known or can be predicted. An adaptive observer is defined along with a simple cardiac model made up of a truncated Fourier series to compute predictions for the two components of the heart motion. The predictions are then fed into the controller references, which is shown to improve the tracking behaviour.
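The cardiac model can be illustrated by fitting a truncated Fourier series to one period of uniformly sampled motion and evaluating it ahead of the current phase. This Python sketch uses direct projection onto the harmonics (valid for uniform sampling over a full period) and a hypothetical motion signal; it is not the adaptive observer from the paper, which updates the coefficients online:

```python
import math

def fit_truncated_fourier(samples, harmonics):
    """Fit a truncated Fourier series to one full period of uniformly
    sampled data via direct projection onto each harmonic."""
    n = len(samples)
    a0 = sum(samples) / n
    coeffs = []
    for k in range(1, harmonics + 1):
        ak = 2.0 / n * sum(x * math.cos(2 * math.pi * k * i / n)
                           for i, x in enumerate(samples))
        bk = 2.0 / n * sum(x * math.sin(2 * math.pi * k * i / n)
                           for i, x in enumerate(samples))
        coeffs.append((ak, bk))
    return a0, coeffs

def predict(a0, coeffs, phase):
    """Evaluate the series at `phase` (in fractions of one period)."""
    value = a0
    for k, (ak, bk) in enumerate(coeffs, start=1):
        value += ak * math.cos(2 * math.pi * k * phase)
        value += bk * math.sin(2 * math.pi * k * phase)
    return value

# One period of a hypothetical heart-motion component (mm): a fundamental
# plus a second harmonic, exactly what a truncated series can model.
n = 64
motion = [3.0 * math.sin(2 * math.pi * i / n)
          + 1.0 * math.cos(2 * math.pi * 2 * i / n) for i in range(n)]
a0, coeffs = fit_truncated_fourier(motion, harmonics=2)
# Predict a quarter period ahead of phase 0: 3*sin(pi/2) + cos(pi) = 2.0.
print(round(predict(a0, coeffs, 0.25), 3))  # prints 2.0
```

Such predictions, evaluated one control horizon ahead, are what get fed into the model predictive controller's reference trajectory.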

An experimental setup is built with a prototype medical robot. Results on a simulation setup and in real conditions on the beating heart of a living pig are reported in the paper.

References:
[1] Y. Nakamura, K. Kishi and H. Kawakami, "Heartbeat Synchronization for Robotic Cardiac Surgery", Proc. of the 2001 Int. Conf. on Robotics and Automation, May 2001
[2] T. J. Ortmaier, "Motion Compensation in Minimally Invasive Robotic Surgery", TU Muenchen, Germany, 2003


Calibration and Accuracy Analysis Thursday, 13:00 A1

Non-Invasive Intraoperative Imaging Using Laser Radar System in Hip-Joint Replacement Surgery

G. Kamucha1) and G. Kompa2)

1) Department of Electrical & Communications Engineering, Moi University, P.O. Box 3900, Eldoret, Kenya
2) Department of High Frequency Engineering (HFT), Kassel University, Wilhelmshöher Allee 73, D-34121 Kassel, Germany

This paper explores the possibility of employing a non-invasive registration technique using a high-resolution pulsed laser radar system as an intraoperative imaging system in computer-assisted hip-joint replacement surgery. The method involves intraoperatively acquiring 3D laser surface points of the anatomical part to be operated on, which are then registered to a 3D surface model from preoperative magnetic resonance imaging (MRI) using a surface-based registration program that utilizes an Iterative Closest Point (ICP) algorithm. The described registration analysis was carried out with respect to the hip-joint socket (acetabulum).
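The core ICP loop can be sketched as follows. To stay short, this Python toy estimates a translation only (the full algorithm also estimates a rotation, typically via an SVD of the cross-covariance matrix); the point sets are hypothetical:

```python
import math

def icp_translation(source, target, iters=20):
    """Toy ICP restricted to translation: repeatedly match each source
    point to its nearest target point, then shift the source points by
    the mean residual (the optimal translation for fixed matches)."""
    src = [list(p) for p in source]
    for _ in range(iters):
        # Nearest-neighbour correspondences.
        pairs = [(p, min(target, key=lambda q: math.dist(p, q))) for p in src]
        # Mean residual over all correspondences.
        shift = [sum(q[d] - p[d] for p, q in pairs) / len(pairs)
                 for d in range(len(src[0]))]
        for p in src:
            for d in range(len(p)):
                p[d] += shift[d]
    return src

# Hypothetical intraoperative laser points = MRI surface shifted slightly.
mri_surface = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
laser_points = [(x - 0.3, y + 0.2) for x, y in mri_surface]
aligned = icp_translation(laser_points, mri_surface)
print([(round(x, 3), round(y, 3)) for x, y in aligned])
```

With a good initial pose, as in this example, the correspondences are correct from the first iteration and the loop converges immediately; the outlier rejection mentioned below would drop correspondences whose residual is far above the median.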

The performance of the laser radar system for the given task was tested in a clinical environment, where it was used to scan the acetabulum of a female osteoarthritic patient who was undergoing total hip replacement surgery at the Orthopaedic Clinic, Kassel, Germany. The preoperative data of the affected hip joint were acquired one day before the operation using a 1.5 Tesla high-resolution MR imager (MRT-Symphony Quantum, Siemens, Erlangen, Germany) at the Clinical Centre, Kassel, Germany. In order to clearly capture any remaining cartilage on the acetabulum, a fat-suppressed T1-weighted 3D gradient echo pulse sequence was applied in the acquisition of the MR images. Axial slices of the affected hip joint were acquired with an in-plane resolution of 1 mm × 1 mm and an inter-slice thickness of 1 mm. The size of the MR images was 256 × 256 × 87 voxels. Other acquisition parameter settings were as follows: 50 degrees flip angle, 34 ms TR, 4.8 ms TE, and 250 mm field of view. During the operation, once part of the pelvis was exposed, the laser radar system was used to obtain 3D points of the acetabulum. Laser points were acquired from a distance of about 85 cm from the scanning mirrors at a resolution of 1 mm in both the X and Y axes of an XY coordinate system. The scanned area was 40 mm × 40 mm and was selected such that it covered the entire laser-beam-accessible region of the acetabulum.

The laser surface points were registered to a 3D surface extracted from the MRI data using the surface-based registration program. The results obtained show that the laser radar system is capable of acquiring surface points of the acetabulum within an accuracy of 1 mm. Sufficient coverage of the acetabulum by the registration points was obtained during the clinical trial, which ensured that the registration procedure converged. It is demonstrated that by simple selection of registration points, the effect of outliers on the parameters of the registration transform can be eliminated. This makes the registration technique robust in the presence of spurious signals from the surgical site.


A2

Calibration and Accuracy Analysis Thursday, 13:15

Accuracy in Computer Assisted Implant Dentistry: Image Guided Template Production vs. Burr Tracking

G. Widmann1), R. Widmann2), E. Widmann3) and R. J. Bale1)

1) Interdisciplinary Stereotactic Intervention- and Planning Laboratory (SIP-Lab), Department of Radiology I, University of Innsbruck, Anichstr. 35, A-6020 Innsbruck, Austria
2) Zahntechnisches Labor Czech, Kreuzstr. 20, A-6067 Absam, Austria
3) Ordination Dr. Widmann, Unterauweg 7a, A-6280 Zell/Ziller, Austria

Background: The literature contains only sparse data on the accuracy of image guided template production (IGTP) for oral implantology. We present an accuracy analysis of our SIP-LAB IGTP technique and a comparison with image guided burr tracking.

Methods: A complete standard set (maxilla, mandible) of class III dental stone casts was prepared with 56 lead pellets and mounted on a dental articulator. Guided by the articulator, a Vogele-Bale-Hohner (VBH) mouthpiece was supplied with maxillary and mandibular impressions. With the mouthpiece and an external reference frame in place, axial computed tomography of the dental casts was performed. Registration of the planning setup was done via the VBH mouthpiece and the reference frame. Using the Treon optical navigation system (Medtronic Inc., USA), a surgical path to the target pellets was planned and an aiming device was adjusted. Burr tubes were positioned by a metal rod advanced through the aiming device and fixed into prefabricated templates with blue-light-curing resin.

Drillings were made through the templates on the dental casts with lead pellets as well as on a second set of duplicated casts without lead pellets. On the CT-data of the drilled dental casts, the accuracy was determined as normal deviation to the defined target.

As the duplicated dental casts lack lead pellets, image fusion to the CT-data of the dental casts with lead pellets was necessary.

In order to evaluate the accuracy in the z-axis, 50 point measurements to a predefined target were performed through the aiming device.

Results: In this study a total of 56 drillings and 50 point measurements have been evaluated.

The mean accuracy of 28 drillings on the dental casts with lead pellets was M(p)[xy] 0.48 mm and SD(p)[xy] 0.35 mm. The maximum deviation was Max(p)[xy] 1.2 mm.

The following 28 drillings on the duplicated dental casts without lead pellets showed a mean accuracy of M(d)[xy] 0.5 mm and SD(d)[xy] 0.34 mm. The maximum deviation was Max(d)[xy] 1.2 mm.

Evaluating the accuracy in the z-axis, 50 point measurements had a mean of M[z] 0.25 mm and SD[z] 0.12 mm. The maximum deviation was Max[z] 0.6 mm.

Conclusions: Using the SIP-LAB IGTP technique, accuracies similar to burr tracking could be achieved. However, a safety distance of 1.5 mm is necessary to reliably avoid damaging critical anatomical structures like the mandibular nerve or the maxillary sinus. The SIP-LAB IGTP technique allows for robot-assisted positioning of burr tubes, which may lead to even better results. Regarding costs and effort, the SIP-LAB IGTP technique, realized at hospital-based planning centres, is an attractive alternative to burr tracking systems, especially for dentists and oral implantologists working in private practice.


Calibration and Accuracy Analysis Thursday, 13:30 A3

3D-Accuracy Analysis of Fluoroscopic Planning and Navigation of Bone-Drilling Procedures

J. A. K. Ohnsorge1,2), E. Schkommodau2), D. C. Wirtz1), J. E. Wildberger3),

A. Prescher4) and C. H. Siebert1)

Rheinisch-Westfälische Technische Hochschule, RWTH Aachen, D-52074 Aachen, Germany
1) Orthopädische Universitätsklinik, UK Aachen, Pauwelsstrasse 30
2) Institut für Biomedizinische Technologien, UK Aachen, Pauwelsstrasse 20
3) Klinik für Radiologische Diagnostik, UK Aachen, Pauwelsstrasse 30
4) Institut für Anatomie, UK Aachen, Wendlingweg 2

Objectives / Background: Accurate drilling in bone is an essential part of many orthopaedic operations. The geometric accuracy of this usually free-hand surgical step is likely to be improved by fluoroscopic navigation. Reliability, accuracy and intraoperative applicability of this method need to be analysed, since only an evidence-based benefit can justify additional expenditure of whatever kind. Radiation exposure and total operating time are expected to be reduced, but this must be quantified before clinical application of the method. Besides, the empirical revelation of defective details can possibly lead to decisive improvements.

Design / Methods: In a standardised in vitro trial, drilling onto a 5 mm spherical target implanted in an artificial femoral head was performed using a navigated drill guide, a navigated drill and the FluoroNav™ system. In view of the primarily intended minimally invasive character of the procedure, a new dynamic reference base (DRB) was developed that avoids additional trauma through its fixation to the bone. In one group (A), the surgeon was asked to place the drill in the middle of the BaSO4-augmented plaster bullet. In the other group (B), he was additionally asked to follow a defined direction. For that purpose, a retrograde canal of 15 mm length was drilled with a 2 mm Ø K-wire from the target towards the lateral aspect of the femur and then filled with the same plaster mixture. For both groups, the distance of the tip of the drill to the centre of the lesion was analysed in a 3D CT-generated model and in macroscopic cross-section. In group B, the direction of the actual drilling canal was measured relative to the preformed one.

Results: The mean distance in group A was measured to be 1 mm, with all results ranging between 0 and 2.5 mm and the lesion not being missed once. In group B, the planned direction of the canal was reproduced with a deviation of 0° to 7°, the centre of the target (Ø 5 mm) only being missed by a mean distance of 2.5 mm and a maximum of 3.5 mm. The mean distance fault of both groups together was calculated to be 1.9 mm, representing a proportional error of 2 % with respect to the 8.7 mm average length of the drilling canal. Compared to the macroscopic and 3D CT findings, the data calculated by the navigation system were accurate up to a maximum difference of 4° or 2 mm in single cases. Due to the manual performance, the virtual image of the effected drilling differed from the previously planned trajectory by 0 to 2 mm and 3°, respectively. Radiation exposure was reduced to just a few x-ray images, compared to an up to ten times higher amount caused by conventional permanent C-arm control, whereas there was no difference in the total operating time.
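The two reported error measures, tip-to-target distance and angular deviation of the drilling canal, can be computed as below. The coordinates are hypothetical and serve only to illustrate the geometry, not to reproduce the study's data:

```python
import math

def tip_deviation(tip, target):
    """Euclidean distance (mm) from the drill tip to the target centre."""
    return math.dist(tip, target)

def canal_angle(planned_dir, drilled_dir):
    """Angle in degrees between planned and actual drilling directions."""
    dot = sum(a * b for a, b in zip(planned_dir, drilled_dir))
    norm = math.hypot(*planned_dir) * math.hypot(*drilled_dir)
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical coordinates (mm) in the CT model's frame.
print(round(tip_deviation((10.0, 8.0, 5.0), (10.0, 10.0, 5.0)), 1))  # 2.0
drilled = (0.0, math.sin(math.radians(3)), math.cos(math.radians(3)))
print(round(canal_angle((0.0, 0.0, 1.0), drilled), 1))  # 3.0
```

In the study these quantities were read off the 3D CT model and the macroscopic cross-sections; the sketch merely shows the underlying vector geometry.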

Conclusion: Fluoroscopically assisted freehand navigation provides high three-dimensional accuracy while reducing radiation exposure to a minimum. With the new DRB, computer assistance also makes sense for percutaneous procedures and represents a promising and efficient application for a greater variety of orthopaedic operations.

Keywords: Fluoroscopy – navigation – accuracy analysis – computer-assisted surgery – femoral head – C-arm – 3D – DISOS – FluoroNav – StealthStation


A4

Calibration and Accuracy Analysis Thursday, 13:45

Accuracy of Fluoroscope and Navigated Controlled Hind- and Midfoot Correction of Deformities

J. Geerling, S. Zech, D. Kendoff, M. Citak, T. Hüfner, M. Richter and C. Krettek

Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany

Introduction: The accuracy of the intraoperative correction in a correction operation correlates with the clinical result. Achieving a precise intraoperative correction under fluoroscopy control is challenging. In other body regions, computer assisted surgery (CAS) has been shown to be a valuable tool for correction operations and may also be helpful in correction operations of foot deformities.

The aim of the study was the comparison of CT-based navigated correction operations of hind- and midfoot deformities with fluoroscopy-controlled correction.

Material and Methods: Plastic models with defined deformities (Sawbone, Pacific Research Laboratories, Vashon, WA, USA) were used. The aim of the correction was to transform the shape of the specimen with deformity into the shape of a normal plastic bone model.

For the accuracy analysis two methods were used: 1. conventional C-arm based correction, 2. navigated CT-based correction operation (Surgigate, Medivision, Oberdorf, Switzerland).

Of each deformity (equinus deformity, calcaneus malunion, equinovarus deformity), five models were corrected with each method. Standardized osteotomies were performed before the correction when necessary (calcaneus malunion and equinovarus). The only visualization was provided by the image of the fluoroscope or the navigation system; the direct view of the surgeon was blocked by drapes. The retention was performed using K-wires. The following parameters were registered: 1. time for the entire procedure, 2. fluoroscopy time, 3. difference between the corrected specimen and the normal model in foot length, length and height of the longitudinal arch, calcaneus inclination and hindfoot angle for all models (n=30), and additionally Boehler's angle and calcaneus length for the calcaneus malunion models (n=10).

The shapes of the corrected models were graded as normal, nearly normal, abnormal or severely abnormal.

Results: The shape was graded normal in all specimens of the CAS group (n=15) and in eight models of the fluoroscopy group. The other grades in the C-arm group were six nearly normal and one abnormal (p=0.05, Chi²-test).

The other parameters were compared using the t-test.

Parameter               CAOS          Fluoroscopy   Significance level
Time of procedure [s]   782           410           p<0.001
Fluoroscopy time [s]    0             11            p<0.001
Foot length             -1.7±1.9 mm   -4.1±3.8 mm   p=0.03
Length longit. arch     -0.9±0.9 mm   -5.6±4.9 mm   p=0.001
Height longit. arch     -0.1±0.5 mm   1.7±4.3 mm    p=0.14
Calcaneus inclination   0.1±1.4       2.7±4.8       p=0.05
Calcaneus length        -0.5±0.4      -2.8±1.3      p=0.005
Boehler's angle         0.4±1.1       4.1±8.6       p=0.37

Conclusion: In this experimental setting, the accuracy of the correction operation of hind- and midfoot deformities with computer assisted navigation was higher than with conventional fluoroscopy. No additional intraoperative fluoroscopy was needed using the CT-based navigation, compared with 11 seconds on average using fluoroscopy-controlled correction. However, the whole procedure took almost twice as long in the CAOS group.

CAS allows a higher accuracy of the correction and reduction and might be a valuable tool for correction operations in foot surgery. Clinical studies have to show whether this higher accuracy will be achieved in real operations as well and whether it leads to better clinical results.


Clinical Case Studies Thursday, 15:45 C1

Resection of Bony Tumors within the Pelvis – Hemipelvectomy Using Navigation and Implantation of a New Custom Made Prosthesis

J. Geerling, D. Kendoff, L. Bastian, E. Mössinger, M. Richter, T. Hüfner and C. Krettek

Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany

Introduction: Treatment for malignant bony tumors includes large resections [3, 4]. This represents a complex problem in clinical practice due to the adjacent important organs and structures. The biomechanical function of transferring load from the spine to the lower extremities should be restored. Computer assisted surgery (CAS) based on CT data could increase the accuracy of tumor resections. Currently, commercially available navigation systems use precise preoperative CT data, allowing navigated pedicle screw application and cup placement for hip arthroplasty [1, 2]. Further applications within the pelvis have been reported, such as CAS-supported posttraumatic pelvic osteotomies and the placement of sacroiliac screws or the long lag screws required in the osteosynthesis of acetabular fractures.

Clinical case: In a 54-year-old active patient a chondrosarcoma was diagnosed. It appeared as a 7x7x5 cm tumor in the region of the left acetabulum with a large soft-tissue component. Resection was the only treatment option; the femur had to be separated from the body because the femoral head had to be removed as well.

Solution: The question of filling the gap left by the resected bone was solved with a custom-made prosthesis. Based on the CT dataset, a plastic model of the patient's pelvis was produced. The resection lines were planned and the fragment with the tumor was resected. A custom-made prosthesis was produced, including a cup for a hip prosthesis, based on the fragment's dimensions. The prosthesis was built of cpTi with a Bonit-coated surface carrying antibiotics. With this procedure a prosthesis was built that fit exactly into the resected plastic pelvis model. With a navigation system (Medivision, Switzerland) an exact intraoperative resection was performed. A CT of the pelvis model was acquired. Due to the manufacturing method the model is 3% larger than the patient's pelvis; the dataset was therefore scaled back to 100% for the navigation. The resection was performed using navigated chisels in combination with the "spine" module.

The implantation of the prosthesis was done 10 days after resection of the tumor, demonstrating an exact fit of the prosthesis. Pathological investigation showed tumor-free margins.

Conclusion: This challenging operation could be performed with a satisfying result for the patient by combining modern technologies: computer-based production of a prosthesis for the pelvis and innovative resection using computer assisted surgery.

References:
[1] Berlemann, U. et al. [Computer-assisted orthopedic surgery. From pedicle screw insertion to further applications]. Orthopäde, 1997, 26(5): 463-9.
[2] Jaramaz, B. et al. Computer assisted measurement of cup placement in total hip replacement. Clin Orthop, 1998(354): 70-81.
[3] Phang, P.T. et al. Effects of positive resection margin and tumor distance from anus on rectal cancer treatment outcomes. Am J Surg, 2002, 183(5): 504-8.
[4] York, J.E. et al. Sacral chordoma: 40-year experience at a major cancer center. Neurosurgery, 1999, 44(1): 74-9.


C2

Clinical Case Studies Thursday, 16:00

f-MRI integrated Neuronavigation – Lesion Proximity to Eloquent Cortex as Predictor for Motor Deficits

R. Krishnan1), H. Yahya1), A. Szelényi1), E. Hattingen2), A. Raabe1) and V. Seifert1)

Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany

1) Department of Neurosurgery 2) Institute of Neuroradiology

Objective: Since the introduction of f-MRI in the early 1990s, this technique has become important to the neurosurgeon at four key stages in the clinical management of patients: 1) assessing the feasibility of radical surgical resection, 2) surgical planning, 3) selecting patients for invasive functional mapping procedures and 4) intraoperative visualization of functional areas. In this prospective study we examined the occurrence of new postoperative motor deficits as a function of a lesion's distance to the functional areas provided by functional magnetic resonance imaging.

Methods: 54 patients with lesions in close proximity to the motor cortex were included in the protocol. Preoperative EPI T2* BOLD imaging was performed during standardized paradigms for hand, foot and tongue movement. Data analysis was done with BrainVoyager software (Brain Innovation, Maastricht, NL). For functional neuronavigation we used the VectorVision2 system (BrainLAB, Heimstetten, Germany). Outcome was analyzed as a function of the lesion's distance to functional motor areas, resection grade, lesion size, patient age and histology. According to the lesion-to-activation distance (LAD), patients were graded into four risk groups.

Results: Gross total resection was achieved in 47 patients; 9 patients (low grade glioma = 3, glioblastoma = 6) had a subtotal resection. The neurological outcome improved in 16 patients (30%), was unchanged in 29 patients (53%) and deteriorated in 9 patients (17%). Significant predictors of a new neurological deficit were a lesion-to-activation distance < 5 mm (p<0.009) and subtotal resection (p<0.014).

Conclusion: The determination of a lesion's proximity to the primary motor cortex, based on preoperative functional MRI, may be key for predicting the risk of postoperative deterioration. Our data suggest that a lesion-to-activation distance < 5 mm is associated with a higher risk of neurological deterioration. Within a 10 mm range, cortical stimulation should be performed. For a lesion-to-activation distance > 10 mm, a complete resection can be achieved safely.
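The distance thresholds in this conclusion amount to a simple decision rule. A sketch follows; the band labels are paraphrased, and the study's actual four-group LAD grading is not detailed in the abstract:

```python
def lad_risk_band(lad_mm):
    """Map a lesion-to-activation distance (mm) to the management
    bands suggested by the conclusion: < 5 mm carries a higher risk
    of a new deficit, 5-10 mm warrants cortical stimulation, and
    > 10 mm permits safe complete resection."""
    if lad_mm < 5:
        return "high risk of postoperative motor deficit"
    if lad_mm <= 10:
        return "intraoperative cortical stimulation recommended"
    return "complete resection considered safe"
```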


Clinical Case Studies Thursday, 16:15 C3

Trends and Perspectives in Computer-Assisted Dental Implantology

K. Schicho, G. Wittwer, A. Wagner, R. Seemann and R. Ewers

University Hospital of Cranio-Maxillofacial and Oral Surgery, Medical School, University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna, Austria

Since the mid-1990s, the application of computer-assisted navigation technology has been gaining significance in dental implantology, allowing accurate intraoperative realization of preoperative planning according to prosthetic, functional and aesthetic criteria. Basic research has been followed by the development of commercially distributed systems. They are based on similar technical principles as navigation systems used in other applications, but provide software that is optimized for implantology and therefore easy to use [1-4]. Computer-assisted navigation contributes to improved intraoperative safety and helps the surgeon avoid damage to anatomical structures (e.g. nerves).

Preoperative evaluation of the available bone using computer-generated 2D and 3D renderings from the CT scan is the basis for the definition of implant positions and therefore for an optimization of the mechanical load situation. Several studies have proven sufficient accuracy of navigated implant placement, also in difficult implantological cases, e.g. in the proximity of the mental foramen [5-8].

Since 1995, 72 computer-assisted dental implantations have been completed successfully at our clinic. A total of 395 implants has been positioned by means of this technology, including 68 transmucosal interforaminal implants. Due to improved software (developed especially for dental implantology, e.g. the "Virtual Implant™" by Artma, Vienna), the time necessary to prepare a surgical intervention with navigation has decreased significantly since the initial phase. Positioning of fiducial markers on the patient, CT scanning, preoperative planning, sterilization of instruments (trackers, ...) and setup of the navigation system required about 2-3 days in the initial phase of routine clinical application and is now reduced to approximately half a day [9]. This lecture will summarize our clinical and technical experiences with navigation-supported implantation. The presentation will also report on the initial application of the Medtronic Treon™ StealthStation™ for transmucosal interforaminal implantation. The transmucosal implantation method has the potential to reduce postoperative pain and soft tissue swelling and can improve patient comfort.

References:
[1] Ewers R, Schicho K, Seemann R, Reichwein A, Figl M, Wagner A. Computer aided navigation in dental implantology: 7 years of clinical experience. J Oral Maxillofac Surg, in press.
[2] Schicho K, Ewers R. Teleconsultation and 3D visualization in computer-assisted implantology: the current state of development (German). Impl J 5: 86-88, 2001.
[3] Watzinger F, Birkfellner W, Wanschitz F, Millesi W, Schopper C, Sinko K, Huber K, Bergmann H, Ewers R. Positioning of dental implants using computer-aided navigation and an optical tracking system: case report and presentation of a new method. J Craniomaxillofac Surg 27: 77-81, 1999.
[4] Ploder O, Wagner A, Enislidis G, Ewers R. Computer-assisted intraoperative visualization of dental implants. Augmented reality in medicine. Radiologe 35: 569-572, 1995.
[5] Siessegger M, Schneider BT, Mischkowski RA, Lazar F, Krug B, Klesper B, Zoller JE. Use of an image-guided navigation system in dental implant surgery in anatomically complex operation sites. J Craniomaxillofac Surg 29(5): 276-281, 2001.
[6] Wanschitz F, Birkfellner W, Watzinger F, Schopper C, Patruta S, Kainberger F, Figl M, Kettenbach J, Bergmann H, Ewers R. Evaluation of accuracy of computer-aided intraoperative positioning of endosseous oral implants in the edentulous mandible. Clin Oral Implants Res 13: 59-64, 2002.
[7] Wagner A, Wanschitz F, Birkfellner W, Zauza K, Watzinger F, Schicho K, Kainberger F, Czerny C, Bergmann H, Ewers R. Computer-aided placement of endosseous oral implants in patients after ablative tumor surgery: assessment of accuracy. Clin Oral Implants Res 14: 340-348, 2003.
[8] Cavalcanti MG, Ruprecht A, Vannier MW. 3D volume rendering using multislice CT for dental implants. Dentomaxillofac Radiol 31: 218-223, 2002.
[9] Ewers R, Schicho K, Truppe M, Seemann R, Reichwein A, Figl M, Wagner A. Computer aided navigation in dental implantology: 7 years of clinical experience. J Oral Maxillofac Surg, in press.


C4 Clinical Case Studies

Thursday, 16:30

The Experience of the Working Group for Computer Assisted Surgery at the University of Cologne

R. A. Mischkowski, M. Zinser, M. Siessegger, A. Kübler and J. E. Zöller

Department of Cranio-Maxillofacial and Plastic Surgery, University of Cologne, Kerpener Str. 62, D-50937 Köln, Germany

With the introduction of reliable image guided navigation systems, an interdisciplinary working group for computer assisted surgery (Arbeitskreis Computer Assistierte Chirurgie = ACAS) has been established at the University of Cologne. The participating specialties are Craniomaxillofacial Surgery, Neurosurgery, ENT, Ophthalmology and Trauma Surgery. In monthly conferences, complex multidisciplinary cases are presented and their management discussed. In addition to clinical issues, scientific investigations are planned and technical innovations demonstrated.

For intraoperative navigation, the infrared based system "Vector Vision 2"® (BrainLAB, Heimstetten, Germany) was used. Image guided insertion of dental implants was performed with the infrared based RoboDent system (RoboDent, Berlin). Between 8/1999 and 11/2003, 354 image guided navigation procedures were carried out on 235 patients; 65 operations were performed interdisciplinarily. The most frequent indication was removal of tumors (173 cases), followed by biopsies in areas difficult to access (43 cases), insertion of dental implants (23 cases) and removal of foreign bodies (19 cases). For referencing purposes, several methods such as skin-fixed fiducials, a head set and laser surface scanning (Z-Touch) were used. A study evaluating the precision of the employed methods showed the best results for head set referencing, followed by skin fiducials and laser scanning. Innovative techniques such as image guided microscope support and intraoperative ultrasound visualisation were successfully implemented and used.

The experience of the Working Group for Computer Assisted Surgery at the University of Cologne confirms the benefits of image guided navigation for surgical procedures in the head and neck area. The current evolution level of the systems allows for reliable and precise intraoperative guidance, mainly in oncologic procedures, implant placement and removal of foreign bodies. Recent development aims at the utilization of image guided navigation in new indication areas such as minimally invasive treatment of mandibular fractures and orthognathic surgery.


Clinical Case Studies Thursday, 16:45 C5

Fluoroscopic Navigation of the Dynamic Hip Screw (DHS): an Experimental Study.

D. Kendoff 1), M. Kfuri Jr.2), J. Geerling1), T. Gösling1), M. Citak1) and C. Krettek1)

1) Unfallchirurgische Klinik, Medizinische Hochschule Hannover (MHH), Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
2) Alexander-von-Humboldt-Stiftung, Ribeirao Preto Medical School, Sao Paulo, Brazil

Objective: Trochanteric fractures are common injuries associated with high morbidity and mortality [1,2]. The dynamic hip screw has been the favored treatment in such cases [3]. Although excellent results have been described with this technique, the mechanical failure rate of sliding screws has been reported to vary from 8% to 23% [4]. The most predictive factor for failure is the screw position in the femoral head [5]. With the traditional technique, optimal placement of the screw may require extended radiation time. Computer assisted surgery could be an alternative for achieving the ideal screw position under low doses of radiation. An experimental comparative study was developed to address this question.

Material and Methods: The dynamic hip screw (Synthes®) was inserted in 80 proximal foam femora, divided into four equal groups. In Group 1 the screw was inserted under fluoroscopic control. Navigation (Surgigate®, C-arm module, Medivision) was used to guide screw placement in the other three groups. Based on the argument that the drill diameter could interfere with navigation accuracy, we modified the usual technique by replacing the Kirschner guide wire with a 3.2 mm drill (Groups 2 and 3) or a 4.5 mm drill (Group 4). Groups 2 and 3 differed in that a fluoroscopic control was added in Group 2 to verify the position of the guide drill before the definitive screw hole was opened. Additionally, an anti-rotational 6.5 mm screw (Synthes®) was inserted under fluoroscopic control (Group 1) or navigated control (Groups 2, 3 and 4). Operative time and radiation time were recorded. Anterior-posterior and lateral radiographs were taken postoperatively, enabling analysis of screw position by the tip-apex distance (TAD) index [5]. The angle between each screw and the diaphysis was assessed in order to determine the parallelism between the sliding screw and the anti-rotational screw. A t-test was used to interpret the results.
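The TAD index of Baumgaertner et al. [5] sums the tip-to-apex distances measured on the AP and lateral films, each rescaled by the known true screw diameter to correct for radiographic magnification. A minimal sketch (the 8 mm default diameter is illustrative, not taken from the abstract):

```python
def tip_apex_distance(x_ap, x_lat, d_ap, d_lat, true_diameter=8.0):
    """Tip-apex distance in mm.

    x_ap, x_lat: tip-to-apex distances measured on the AP and
                 lateral radiographs (mm, as printed on film).
    d_ap, d_lat: screw diameter as measured on each film (mm),
                 used to estimate the magnification factor.
    """
    return x_ap * (true_diameter / d_ap) + x_lat * (true_diameter / d_lat)

# With no magnification (measured diameter equals true diameter) the
# index is simply the sum of the two measured distances:
tad = tip_apex_distance(12.0, 13.0, 8.0, 8.0)  # -> 25.0 mm, the accepted limit
```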

Results: The mean operative time was 14.8 min in the fluoroscopic group and 16.1 min in the navigated groups (p<0.01). The mean radiation time in the fluoroscopic group was 55.3 sec, contrasting with 5.4 sec in the navigated groups (p<0.0001). The mean TAD for all groups was below the accepted limit of 25 mm. The use of fluoroscopic control, however, favored significantly (p<0.001) smaller TAD values of 13.53 (Group 1) and 13.35 (Group 2) in comparison to 15.83 (Group 3) and 17.01 (Group 4). The mean angle between the sliding screw and the femoral diaphysis varied from 135.9 (navigated groups) to 136.7 (Group 1). No significant divergence between the sliding screw and the anti-rotational screw was evident either.

Conclusions: The navigated hip screw depends on planning and on the functional set-up of the operating theater, which normally adds time to the operation. Navigation, however, enables precise insertion of the screw with a dramatic reduction of radiation exposure. Further clinical studies are needed to complement these findings.

References:
[1] Bannister GC, Gibson AGF, et al. The fixation and prognosis of trochanteric fractures - a randomized prospective controlled trial. Clin Orthop 254: 242-246, 1990.
[2] Kyle RF, Gustilo RB, Premer RF. Analysis of six hundred and twenty-two intertrochanteric hip fractures. J Bone Joint Surg Am 61(2): 216-221.
[3] Cole JD, Ansel LJ. Intramedullary nail and lag-screw fixation of proximal femur fractures - operative technique and preliminary results. Orthop Rev 23: 35-44, 1994.
[4] Simpson AH, Varty K, Dodd CA. Sliding hip screws: modes of failure. Injury 20: 227-231, 1989.
[5] Baumgaertner MR, Curtin SL, Lindskog BA, Keggi JM. The value of the tip-apex distance in predicting failure of fixation of peritrochanteric fractures of the hip. J Bone Joint Surg Am 77(7): 1058-1064, 1995.


N5

Advanced Navigation and Motion Tracking II Thursday, 17:00

Occlusion-Robust, Low-Latency Optical Tracking Using a Modular Scalable System Architecture

A. Köpfle1), M. Schill2), M. Rautmann3), M. L. R. Schwarz4), P. P. Pott4), A. Wagner5), R. Männer1),

E. Badreddin5), P. Weiser6) and H. P. Scharf 3)

1) Lehrstuhl für Informatik V, Universität Mannheim, B6, 23-29 C, D-68161 Mannheim, Germany
2) VRmagic GmbH, B6, 23-29 C, D-68032 Mannheim, Germany
3) Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
4) Labor für Biomechanik und experimentelle Orthopädie, Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
5) Lehrstuhl für Automation, Universität Mannheim, B6, 23-29 C, D-68131 Mannheim, Germany
6) Institut für CAE, Fachhochschule Mannheim, Windeckstr. 110, D-68163 Mannheim, Germany

Objective: This paper describes the development of an advanced optical tracking system for use in image guided surgery (IGS). Current tracking systems, while in broad and successful use, have a number of limitations: they block the usual place of the surgeon's assistant at the operating table, they are sensitive to occlusion of the line of sight, their latency is too high for demanding applications like robot control or tremor compensation, and their accuracy drifts with operating time and age. We present a new approach to tracking systems, MOSCOT (MOdular SCalable Optical Tracking), that tries to eliminate these disadvantages.

Material and Methods: The MOSCOT tracking system consists of a central PC and, in principle, any number of camera modules. This modular system architecture allows easy scalability and adaptation of the tracking setup to more complex tracking demands. The individual camera modules can be placed at arbitrary positions, e.g. directly on an operating lamp and moving with it, at fixed positions on the ceiling of the operating theater, or temporarily at other suitable positions.

Each camera module is composed of a commercially available camera with attached proprietary image-processing hardware. This dedicated hardware uses an FPGA chip to extract the marker positions on the tracked object in real time, including all steps of filtering, segmentation and classification, and transmits only the detected marker positions to the central PC, thus reducing data bandwidth and the processing resources needed in the PC. The PC combines the data of the individual camera modules, weighting them with a quality factor and ignoring modules with no data or with distorted or implausible data, and reconstructs the 3D object positions. It also handles the movement of a single camera and performs a dynamic recalibration of the system online.
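The fusion step can be illustrated as a weighted least-squares intersection of the viewing rays reported by the camera modules; this is a generic sketch, not the actual MOSCOT implementation. A module that is occluded or reports implausible data simply enters with weight zero:

```python
import numpy as np

def fuse_rays(origins, directions, weights):
    """Return the 3D point minimizing the weighted sum of squared
    distances to a set of camera rays (origin + direction).
    Each ray contributes the projector onto its normal plane."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d, w in zip(origins, directions, weights):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to the ray
        A += w * P
        b += w * P @ o
    return np.linalg.solve(A, b)

# Two rays through (1, 2, 3); a third, occluded module is weighted to zero
origins = [np.array([0., 2., 3.]), np.array([1., 0., 3.]), np.array([9., 9., 9.])]
dirs = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]
marker = fuse_rays(origins, dirs, [1.0, 1.0, 0.0])  # -> [1., 2., 3.]
```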

Results: After integration of the major hardware and software modules, first tests with a prototype system were performed.

Latencies below 10 ms between image readout from the camera and availability of reconstructed 3D data were achieved. A system setup with 3 cameras proved the robustness of tracked object positions against occlusion of any one camera. First measurements showed RMS errors of 1-1.5 mm at a tracking distance of approx. 0.5 m.

Discussion/Future Work: Results show that the approach to develop an occlusion-robust, low-latency tracking system was successful. The accuracy of tracked marker positions in this first setup is impaired by the currently used off-the-shelf interlaced video cameras and the sensitivity of the passive color marker detection to environmental lighting changes.

Having shown the principal feasibility of our modular approach to tracking, we intend to improve the tracking accuracy in the next steps. New CMOS cameras will give higher-resolution source images, and advanced hardware with extended resources will allow better image preprocessing and therefore more accurate marker detection. A combination of the currently used colored tracking markers with retro-reflective, IR-illuminated tracking markers will provide higher accuracy.

Further improvements are intended by using a dynamic online recalibration of the camera-system compensating drift effects caused by heating and aging.


Advanced Navigation and Motion Tracking II Thursday, 17:15 N6

Development of Autoclavable Reflective Optical Markers for Navigation Based Surgery

D. Schauer, T. Krüger and T. Lueth

Department of Maxillofacial Surgery, Clinical Navigation and Robotics, Medical Faculty Charité, Humboldt University of Berlin, D-13353 Berlin, Germany

Optical position measurement has become generally accepted as the gold standard in computer-assisted surgery. Light-emitting diodes mounted on the instrument require a cable-based current supply. The physical bridge from the non-sterile area to the operating field and ergonomic complications argue against the use of active markers. Passive markers consist of a mainly spherical carrier coated with a diffusely reflecting material. They are mostly disposable, which raises medical costs substantially. The diffusely reflecting surfaces of the recently introduced autoclavable markers by Precision Implants AG, like common passive reflectors, are sensitive to contamination by wound fluids and tissue remainders. These reflectors cannot be cleaned intraoperatively and must be replaced frequently.

The goal of the presented work was the development of reflective optical markers which can be sterilised in saturated wet steam and therefore allow immediate intraoperative cleaning as well as multiple reuse. Based on the glass ball technology presented by Rohwedder Visotech GmbH and Northern Digital Inc., different technological variants and their influence on the optical and technical characteristics were evaluated. The coating materials as well as the assembly of the markers used so far did not permit sterilisation in saturated wet steam: the coatings degraded with heating and were easily damaged mechanically, and the adhesive joints between the glass balls and their carriers were destroyed by thermally induced tensions.

In a first step the materials of the reflection and anti-reflection coatings were varied. By coating the glass balls with suitable reflection and anti-reflection materials, we succeeded in manufacturing mechanically and thermally highly resistant optical markers which permit unlimited sterilisation in saturated wet steam at an ambient pressure of 3 bar and a duration of 3 min. An adhesive for the connection of the glass balls with a carrier must compensate thermally induced mechanical stresses; it must have low water absorption and must be certified according to USP 23 class VI. Different designs of the bonding slit were examined. A suitable certified adhesive was found, which must be applied with a bonding slit of approx. 0.2 mm thickness between the glass ball and the carrier surface. The pan-shaped spherical carrier has a radius slightly larger than that of the glass balls. The contact-free gluing of glass ball and carrier guarantees the compensation of high mechanical stresses. The entire influence matrix was verified by repeated sterilisation in saturated wet steam up to the failure of the samples. The optical markers survived more than 50 sterilisation cycles without any effect on the optical quality.

The presented work describes the successful development of autoclavable passive optical markers which are compatible with the common position measurement equipment in computer-assisted and navigated surgery. The optical markers offer unusually positive physical characteristics, are easy to clean and fit optimally to the surgical instruments. The passive glass ball markers are very resistant to mechanical strains.


N7

Advanced Navigation and Motion Tracking II Thursday, 17:30

Experimental Application of Ultrasound-Guided Navigation in Head and Neck

I. Arapakis, J. Schipper and R. Laszig

Department of Otorhinolaryngology, Albert-Ludwigs-University Freiburg, Killianstr. 5, D-79106 Freiburg, Germany

Ultrasound-guided navigation systems have been introduced to optimize neurosurgical strategies by minimizing damage to healthy brain tissue. During neurosurgical interventions, tracked ultrasound is used for the recognition of changes (brain shift); a comparable approach has been missing in head and neck surgery. Ultrasound-guided imaging (UGI) can offer a precise spatial differentiation of soft tissue in the head and neck.

As possible indications for UGI we see parotid neoplasms, including recurrences, and soft tissue tumors near the lateral skull base.

In a clinical study we performed accuracy examinations (n = 5) on different control points of the head. In our surgical procedures, the BrainLAB VectorVision2 navigation system and the Siemens Sonoline G50 ultrasound device with a 5.0 MHz probe were used.

The average standard deviation of the ultrasound-guided navigation was about 1 cm. Our accuracy studies have thus shown considerable differences in accuracy during surgery.

The intraoperative use of ultrasound-guided imaging is a helpful method for determining the size, shape and localization of lesions in the head. However, its accuracy varies. Furthermore, it is very time-consuming, rarely indicated and not yet developed for the ENT clinical routine.


Advanced Navigation and Motion Tracking II Thursday, 17:45 N8

Iso-C 3D Navigated Drilling of Osteochondral Defects of the Talus (A Cadaver Study)

M. Citak1), J. Geerling1), D. Kendoff 1), T. Hüfner1), M. Richter1), M. Kfuri Jr.2) and C. Krettek1)

1) Unfallchirurgische Klinik, Medizinische Hochschule Hannover (MHH), Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
2) Alexander-von-Humboldt-Stiftung, Ribeirao Preto Medical School, Sao Paulo, Brazil

Introduction: The aim of operative therapy of osteochondral defects of the talus is revascularisation of the defect areas. Proper intraoperative visualization of the defect with arthroscopy or X-ray imaging is not guaranteed, depending on the localisation of the defect area. Exact retrograde drilling of these lesions can be problematic, and drilling failures may occur. An alternative to open therapy is computer-assisted navigated retrograde drilling. This method has been in use for a short period of time and has shown precision improvements in drilling procedures. The accuracy and the operative setup of Iso-C 3D based computer-assisted drilling and resection of osteochondral lesions were determined in a cadaver study.

Material and Methods: In 7 human cadaver feet, an osteochondral-like lesion at the posteromedial facet was created via a medial malleolus osteotomy. The dynamic reference base was positioned in the head of the talus with a newly developed rotationally stable single screw. Iso-C 3D three-dimensional image data were acquired by a C-arm scan and sent to a Surgigate navigation system (Medivision). The defects were visualised in multiplanar views. Defined trajectories were used to set the entry point and depth of the planned drilling. Under permanent navigation control, drilling with a 2.5 mm drill was performed along the preplanned trajectories. To check the results, an Iso-C 3D scan was performed at the end of the protocol. An additional conventional computed tomography scan of every cadaver was done to confirm the results before anatomic control with opening of the malleolus was performed.

Results: The results showed exact retrograde drilling of all lesions with the 2.5 mm drill. No drilling failures occurred and the planned trajectories were confirmed.

The accuracy was confirmed with immediate intraoperative Iso-C 3D and postoperative CT scans. Both modalities showed the same results and were congruent. Dissection via the medial malleolus osteotomy showed no anatomic perforation of the drill in the talus.

Discussion: Computer-assisted navigated retrograde drilling of osteochondral lesions has been described as a new technique with promising results. Currently used computed tomography (CT) and fluoroscopy based navigation are limited in their flexibility and in their intraoperative image data. The advantage of three-dimensional navigation is the direct visual control of the drilling procedure in multiplanar reconstructions, which allows exact drilling even in anatomically difficult regions. So far, Iso-C 3D navigation still requires additional equipment, with extra cost and training of personnel. Whether there will be a significant advantage compared with conventional methods under operative conditions has to be shown in further clinical studies.


RE5

Registration II Friday, 9:15

Accuracy and Practicability of Laser Surface Scanning for Registration in Image Guided Neurosurgery

R. Krishnan, A. Raabe and V. Seifert

Department of Neurosurgery, Johann Wolfgang Goethe-University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany

Objective: Placing multiple external fiducial markers for patient registration in image guided neurosurgery has some major disadvantages. A technique avoiding these markers would be attractive.

We report our clinical experience with a new laser scanning-based technique of surface registration. The purpose of this study was to prospectively measure both the calculated registration error and the application accuracy using laser surface registration for intracranial image guided surgery in a routine clinical setting.

Methods: 180 consecutive patients with different intracranial pathologies were scheduled for intracranial image guided surgery utilizing a passive infrared surgical navigation system (z-touch, BrainLAB, Heimstetten, Germany). The first 34 consecutive patients were registered with both laser and marker based techniques. Surface registration was performed using a class I laser device that emits a visible laser beam. The Polaris camera system detects the skin reflections of the laser, from which the software generates a virtual 3D point set of the individual anatomy of the patient. An advanced surface-matching algorithm then matches this point set to the three-dimensional MRI data set. Application accuracy was assessed using the localization error for three distant anatomical landmarks.
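Such a surface-matching step can be sketched as an iterative-closest-point loop around a rigid Kabsch/SVD fit; this is a generic illustration rather than the z-touch system's actual algorithm, and the brute-force nearest-neighbour search is for clarity only:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with R @ P[i] + t ≈ Q[i]."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def surface_match(scan, surface, iters=20):
    """ICP: repeatedly pair each laser point with its nearest surface
    point and re-solve for the rigid transform on those pairs."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = scan @ R.T + t
        d2 = ((moved[:, None, :] - surface[None, :, :]) ** 2).sum(axis=-1)
        R, t = kabsch(scan, surface[d2.argmin(axis=1)])
    return R, t
```

With known correspondences the Kabsch step alone recovers the exact transform; the ICP loop converges when the initial pose is close enough for nearest-neighbour pairing to be mostly correct.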

Results: Laser surface registration was successful in 174 patients. Registration failed for 6 patients due to mismatch of the registered and calculated surfaces (n=4) and technical problems (n=2). In the 34 patients registered with both techniques, the application accuracy for the surgical field was 2.4 ± 1.7 mm (range 1-9 mm). Application accuracy was higher for frontally located lesions (mean 1.8 ± 0.8 mm, n=13) than for temporal, parietal, occipital or infratentorial lesions (mean 2.8 ± 2.1 mm, n=21). The true application accuracy was not correlated with the calculated accuracy returned by the system after registration.

Conclusion: In this clinical study laser scanning for surface registration was an accurate, robust and easy to use method of patient registration for image guided surgery. We now use this registration method in our daily routine.

23

Registration II Friday, 9:30 RE6

Using the AWIGS System for Preparation of Computer Aided Surgery

H. Knoop1), J. Raczkowsky1), U. Wyslucha2), T. Fiegele3) and H. Wörn1)

1) Universität Karlsruhe (TH), Institut für Prozessrechentechnik, Automation und Robotik (IPR), Gebäude 40.28, Engler-Bunte-Ring 8, D-76131 Karlsruhe, Germany
2) MAQUET GmbH & Co. KG, Kehler Straße 31, D-76437 Rastatt, Germany
3) Leopold-Franzens-Universität Innsbruck, Universitätsklinik für Neurochirurgie, Anichstraße 35, A-6020 Innsbruck, Austria

Introduction: The Advanced Workplace for Image Guided Surgery (AWIGS) system of MAQUET GmbH & Co. KG, Rastatt, Germany is a synthesis of an operating table, a radiolucent patient transfer board and a computed tomography (CT) scanner. The table rests on two columns that travel on rails in the floor of the OR. Some setups use an additional CT table; others contain a sliding gantry system. The use of intraoperative imaging requires registration to the patient's location on the transfer board when it is moved back out of the CT for the intervention.

Material and Methods: Our cooperation project aims to develop a procedure for the automatic registration of the intraoperative image data from the tomograph using the AWIGS transfer board. For the surgeon's convenience, a scan reference frame (SRF) of 50 x 50 x 70 mm (prototype I) or 30 x 50 x 70 mm (prototype II) is used. The SRF can easily be fixed in the scan ROI. Inside the SRF, six to seven titanium fiducial rods are located in a POM plastic cover and geometrically arranged according to stereotactic head-frame approaches. An automatic algorithm searches for the fiducials in the thresholded and binarized image data and calculates the rigid-body transformation. This transformation can be used by navigation systems, a mechanical 3D pointing device or robot-assisted surgery. A universal holder system at the SRF is compatible with a rigid-body adapter for external navigation using the relative translation and rotation of the holder adapter.

Results: Low fiducial registration errors (FREs) are possible even though only a part of the SRF is visible in the CT image data; 30 slices can be processed in less than five seconds on an AMD Athlon XP 1900+ with 1 GB RAM and an ATI Radeon 9700 graphics adapter with 128 MB of texture memory. Fiducial registration of an image data volume with over 200 slices is not significantly better, but processing time increases to half a minute. After an exact 3D measurement of the coordinates for the two prototypes, an FRE of about 0.2 mm can be obtained in a laboratory setup (e.g. pixel spacing < 0.3 mm). Intraoperative imaging, however, challenges the algorithm because of its coarser pixel and slice spacings (e.g. pixel spacing > 0.5 mm, slice spacing = 1.5 mm). The FREs acquired in this case are nevertheless below 0.5 mm without adapting the algorithm parameters.
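The core computation, a least-squares rigid-body fit to the detected fiducials and the resulting FRE, can be sketched as follows; the SVD-based (Kabsch) method shown is a standard choice and not necessarily the authors' implementation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform dst ~ R @ src + t from matched fiducial
    positions (Kabsch/SVD method). src, dst: (n, 3) arrays of corresponding points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                    # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after applying (R, t)."""
    residual = src @ R.T + t - dst
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

With exact correspondences the FRE is near zero; with measurement noise it grows, which is why the abstract reports FREs for different pixel spacings.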

Discussion: The universal holder system is being evaluated under intraoperative conditions. The algorithm already provides transfer of the 4 x 4 transformation matrix to external systems. Especially the second prototype will be tested in standard neurosurgery scenarios. Scenarios in maxillo-facial or spine surgery might require a partial redesign of the prototypes.

The first intraoperative tests were performed at the Department of Neurosurgery, University Hospital Innsbruck, Austria.

24

RE7

Registration II Friday, 9:45

Ultra-Fast Holographic Recording and Automatic 3D Scan Matching of Living Human Faces

D. Giel1), S. Frey1), A. Thelen1), J. Bongartz1), P. Hering1), A. Nüchter2), H. Surmann2), K. Lingemann2) and J. Hertzberg2)

1) Forschungszentrum caesar, Ludwig-Erhard-Allee 2, D-53175 Bonn, Germany
2) Fraunhofer Institute for Autonomous Intelligent Systems (AIS), Schloss Birlinghoven, D-53754 Sankt Augustin, Germany

For the treatment of diseases, injuries and congenital or acquired deformities of the head and neck, maxillo-facial surgeons deal with complex surgery. For example, the correction of disfiguring facial birth defects requires the repositioning of skull bones, which must be performed with great care and precision. The pre-operative simulation of such surgical procedures requires a high-resolution 3D computer model of the patient's face. We describe a novel approach to creating such a 3D patient model by ultra-fast holographic recording and automatic scan matching of synchronously captured perspectives.

With a pulsed laser (pulse duration 35 ns), a hologram of the patient is recorded. In a second step, the so-called holographic real image is reconstructed by means of a continuous-wave laser. The real image is static and thus free of motion artifacts caused by breathing, heartbeat and the mimic muscular system of the patient. The 3D surface information of the facial profile is extracted from the real image by so-called hologram tomography [1], resulting in a depth map for a particular perspective. With planar mirrors, multiple perspectives of the patient are recorded synchronously into the same hologram. They produce multiple depth images of the patient's surface, which are registered to yield a complete facial model by automatic shape matching.

Given two independently acquired sets of 3D points which correspond to a single shape, we find the transformation, consisting of a rotation and a translation, that minimizes a cost function containing the Euclidean distances between point pairs [2]. The ICP (Iterative Closest Point) algorithm [3] is used to compute a minimum. In each iteration step, the algorithm selects the closest points as correspondences and calculates the transformation for minimum cost. The assumption is that in the last iteration the point pairs are correct.

The ICP algorithm spends most of its time creating the closest point pairs. We propose a new fast approximation based on kD-trees [4] for this problem. The key idea is to return, as an approximate nearest neighbour for a point p, the median point of the bucket region in which p lies. This value is determined from the depth-first search, so expensive ball-within-bounds tests and backtracking are not needed. We demonstrate that this modified ICP registers quickly, reliably and correctly two surfaces which were synchronously recorded and reconstructed by hologram tomography.

References:
[1] D. M. Giel. Hologram tomography for surface topometry. PhD thesis, Mathematisch-Naturwissenschaftliche Fakultät der Heinrich-Heine-Universität Düsseldorf, http://deposit.ddb.de/cgi-bin/dokserv?idn=968530842, 2003.
[2] A. Nüchter, H. Surmann, K. Lingemann, and J. Hertzberg. Consistent 3D Model Construction with Autonomous Mobile Robots. In Proceedings of KI 2003: Advances in Artificial Intelligence, 26th Annual German Conference on AI, Springer LNAI vol. 2821, pages 550-564, Hamburg, Germany, September 2003.
[3] P. Besl and N. McKay. A Method for Registration of 3-D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, February 1992.
[4] J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509-517, September 1975.

25

Simulation and Modelling Friday, 10:30 S1

Realistic Haptic Interaction for Computer Simulation of Dental Surgery

A. Petersik1), B. Pflesser1), U. Tiede1), K. H. Höhne1), M. Heiland2) and H. Handels1)

University Hospital Hamburg-Eppendorf, Martinistr. 52, D-20246 Hamburg, Germany

1) Institute of Medical Informatics, House S 14
2) Klinik und Poliklinik für Zahn-, Mund-, Kiefer- und Gesichtschirurgie (Nordwestdeutsche Kieferklinik)

Background: Defined reduction of bone with a drill without injuring underlying structures is an essential part of surgical techniques, especially in dental surgery. A computer-based simulator with haptic feedback could reduce cost and offer new training possibilities. However, due to extremely demanding computational requirements, haptic rendering of highly detailed anatomic models together with interactive techniques for the reduction of bone is still a big challenge. In the following, we present new algorithms which, on the one hand, allow for realistic haptic rendering of even very small anatomic structures and, on the other hand, are capable of simulating the reduction of bone with realistic haptic sensations. As an application, a system for simulating apicoectomies is shown.

Material and Methods: From CT data, a virtual three-dimensional model of a skull was created. Both inferior alveolar nerves and apical inflammations of teeth 23, 25, 36 and 35 were modelled. The visualization is done by a ray-casting algorithm which renders iso-surfaces directly from the segmented volume data at subvoxel resolution. To get haptic feedback while touching the bone with a virtual tool, we developed a multi-point collision detection approach which adequately considers the shape and extent of the tool and, most importantly, is not limited to the resolution of the underlying voxel data. The resolution enhancement is achieved by using the same subvoxel algorithm as for the visualization. This leads to a congruent graphic and haptic display. Since collisions cannot be detected for regions of the model which are currently being modified, we developed a "look-forward" approach which is used while drilling. The "look-forward" approach detects collisions in front of the virtual tool to determine material distribution and properties. This information, together with the drilling direction, is used to calculate the resulting drilling force. Additionally, vibrations are modulated onto the force to further enhance the sensation of drilling. All algorithms work at update rates of 3000 Hz, which enables stable haptic rendering of hard surfaces like bone. Spatial 3D perception is possible with the help of shutter glasses.
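The look-forward force computation might be sketched along these lines; the sampling scheme, the 1/distance weighting and the gain k are illustrative assumptions, since the abstract does not give the actual force law:

```python
import numpy as np

def look_forward_force(tip, direction, density_at, probe_len=1.0, n=8, k=1.0):
    """Hypothetical sketch of the 'look-forward' idea: sample the material
    ahead of the virtual drill tip, weight nearer samples more strongly, and
    return a force opposing the drilling direction. density_at is a caller-
    supplied material lookup; k and the weighting are assumptions."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    offsets = np.linspace(probe_len / n, probe_len, n)       # sample depths ahead of the tip
    samples = np.array([density_at(np.asarray(tip, float) + s * d) for s in offsets])
    weights = 1.0 / offsets                                  # nearer material dominates
    resistance = float((weights * samples).sum() / weights.sum())
    return -k * resistance * d                               # force opposes the drilling direction
```

In a haptic loop running at 3000 Hz, the returned force would additionally be modulated with a vibration term, as the abstract describes.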

Results: The presented simulation system allows the visual and haptic observation of complex volume-based models and virtual interaction with them. The performed drilling routes can be assessed both on the 3D model, which unveils the route, and on transverse, coronal and sagittal CT reconstructions after the procedure. More than 30 dental students performed virtual apicoectomies and evaluated the simulation by filling in questionnaires. The evaluation showed that offering virtual training possibilities to dental students is a very valuable addition to existing teaching modalities in dental surgery. The resolution, force-feedback simulation and haptic behaviour of the presented computer model were also reported to be realistic. The most frequently mentioned criticisms were the lack of soft-tissue simulation and the wish for the inclusion of further procedures.

Conclusion: With the example of apicoectomy, we have shown that realistic simulation of dental surgical procedures, even in complex anatomical models, is possible. In principle, it is possible to add virtual pathologies to data sets and/or to use further patient data sets to extend the range of simulated surgical procedures.

26

S2

Simulation and Modelling Friday, 10:45

Robot Simulation System for Precise Positioning in Medical Applications

E. Freund, F. Heinze and J. Roßmann

Institute of Robotics Research (IRF), University of Dortmund, Otto-Hahn-Str. 8, D-44227 Dortmund, Germany

Precise positioning and repositioning of patients is still an unsolved problem in medical technology. It is especially relevant in radiation therapy (radiotherapy), where the therapy is usually fractionated into 30 to 50 single sessions and the therapist has to position the patient before each session. The success of the radiotherapy highly depends on the precision of the positioning procedure. Thus, position verification becomes the key technology for improving radiotherapy. As current sensors cannot directly measure the location of the treatment region during the therapy session, detection of body-inherent landmarks such as the skeleton is a possible solution. As radiotherapists need a system that provides improved localization capabilities independent of the location of the treatment region inside the body, we focused on a localization algorithm independent of special geometrical properties.

We propose an integrated system approach based on the 3D simulation software COSIMIR® by transferring results from the simulation of robotic and automation systems in projective virtual reality into a medical environment. The idea of using a 3D simulation system for setup verification is to supply the therapist with an easy-to-use and easily expandable system that can serve multiple different purposes, such as treatment simulation, treatment setup verification and treatment visualization. COSIMIR® has a model of the treatment room and the treatment equipment. As inputs it takes the CT data of the patient and at least two orthogonal X-ray images of the patient after the treatment setup. The CT data is used to derive a 3D surface model of the patient's skeleton for visual feedback to the user, and to register X-ray images of the patient during the treatment verification phase. As the aim is to derive a system that is independent of the location of the region of interest inside the body, pixel-based algorithms are used for registration. The registration process assumes a rigid transformation with three translational and three rotational degrees of freedom (DOF). It is based on the calculation of digitally reconstructed radiographs (DRRs) using a ray-casting approach. The system matches the calculated DRRs with real X-ray images and derives a similarity measure based on normalized cross-correlation or mutual information. An optimization process maximizes the similarity measure and provides a difference vector between the actual and the desired position of the patient. COSIMIR® visualizes the results and returns the similarity measure for the actual position.
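One of the named similarity measures, normalized cross-correlation between a computed DRR and a real X-ray image, can be sketched as follows (a textbook formulation, not the COSIMIR® code):

```python
import numpy as np

def normalized_cross_correlation(drr, xray):
    """Normalized cross-correlation between a computed DRR and a real X-ray
    image, both given as 2D arrays of the same shape; +1 means perfect
    positive linear agreement, -1 perfect inversion."""
    a = drr - drr.mean()
    b = xray - xray.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))
```

An outer optimization loop would perturb the six rigid DOFs, recompute the DRR, and maximize this score to obtain the patient-position difference vector.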

This paper provides first results based on synthetic and real X-ray images of a human skull phantom. The results demonstrate that two X-ray images contain sufficient information to solve the positioning problem for all six DOFs. Compared to other approaches, our system is independent of the patient's region of interest and uses more than one X-ray image to improve the registration in all six DOFs. While the system is undergoing final software tests, our cooperation partners at the Clinic and Polyclinic of the University of Essen, Germany, are setting up the required hardware to prepare for the planned test of the developed approach in clinical practice.

27

Simulation and Modelling Friday, 11:00 S3

Computer-Aided Suturing in Laparoscopic Surgery

F. Nageotte1), C. Doignon1), M. de Mathelin1), L. Soler2), J. Leroy2) and J. Marescaux2)

1) LSIIT, Equipe Automatique Vision et Robotique, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
2) Institut de Recherche sur les Cancers de l'Appareil Digestif (IRCAD), Hôpital Civil, BP 426, F-67091 Strasbourg Cedex, France

This paper addresses the problem of the suturing task in laparoscopic surgery using a circular needle and a usual needle-holder. Suturing is probably one of the most awkward tasks for surgeons in laparoscopy [Cao96]. It can be decomposed into two stages: stitching, i.e. the movement that makes the needle go through the tissues, and knot tying. Most previous work has focused on knot tying [Kang02] but, to our knowledge, few techniques to assist the surgeon during stitching have been published yet.

When driving the needle from an incision point to a desired exit point on the surface of the tissue, large motions and deformations of the tissue are often involved. The main reasons are a bad position of the needle in the needle-holder and the difficulty for the surgeon of figuring out which movements of the needle are possible, due to the trocar constraint that limits the motions of the needle-holder to only four degrees of freedom. Therefore, in order to help the surgeon, we propose a system that provides information during the stitching stage.

In this paper, a geometric and kinematic model of the task with usual instruments (4 DOFs) is presented. Based on this model, theoretical conditions are defined that allow for good stitching, that is, with limited longitudinal deformation of the tissue and good configurations for driving the needle through the tissue. The influence of the handling of the needle in the needle-holder, of the position of the incision points and of the position of the trocar has been studied, and requirements on these parameters are deduced to guarantee the feasibility of the task. For a given set of parameters, we also present a method to plan the movements of the instrument so as to make the needle follow an acceptable path through the tissue.

Based on this analysis, we are developing a computer-aided system for laparoscopic suturing. This system can be used by the surgeon before stitching to choose the position of the trocar. During the operation, the planning software indicates to the surgeon how the stitching can occur with the given configuration and suggests possible modifications of the needle position in the needle-holder to decrease the movements of the tissue. The surgeon can then tune the parameters to reach a configuration that enables successful stitching.

This scheme can also be used in the framework of robot-assisted laparoscopic suturing. After being validated by the surgeon, the planned trajectory of the instrument can be followed by a visual-servoing system to perform the stitching task in a semi-automatic mode.

In practice, the assistance software needs knowledge of several positions and orientations. This information can be estimated using a vision system. Our developments are based on previous work by Krupa et al. [Krupa03] on assisting the surgeon in positioning the surgical instrument. The potential of our method is demonstrated in a series of simulations, and we are currently validating the technique by means of the Computer Motion AESOP surgical robot and two endoscopic color cameras.

28

S4

Simulation and Modelling Friday, 11:15

Experimental Validation of a Force Prediction Algorithm for Robot Assisted Bone-Milling

C. Plaskos1), A. J. Hodgson2) and P. Cinquin1)

1) Laboratoire TIMC-IMAG, Groupe GMCAO, Faculté de Médecine de Grenoble, Joseph Fourier University, F-38706 La Tronche Cedex, France
2) Neuromotor Control Laboratory, Department of Mechanical Engineering, University of British Columbia, 2324 Main Mall, Vancouver, B.C., Canada V6T 1Z4

Background: Quantitative prediction of bone milling forces in robotic surgery is useful for simulating different surgical techniques, for evaluating safety protocols, and for analysing and optimising milling parameters to improve cutting accuracy, efficiency and temperatures. Knowledge of milling forces is also useful for designing and optimising miniature bone-mounted robots, which must be sufficiently light and stiff. It appears, however, that few models exist for predicting bone milling forces, despite the significant developments that milling models have enabled in the metal-cutting industry.

Model formulation: The tangential and radial cutting-force components acting on each milling tooth element are calculated as a function of the instantaneous milling tool kinematics. For a particular set of cutting conditions (feed rate, RPM, axial and radial cutting depth, number of cutting teeth, and tooth inclination or helix angle), we calculate the instantaneous geometry and velocity of the cutting chip. The non-linear relationships between these kinematic parameters and the cutting force components were determined previously in a series of orthogonal and oblique bovine bone-cutting experiments. Forces were measured on single cutting-tooth edges for a range of chip thicknesses, tooth rake and obliquity (helix) angles, and cutting orientations relative to the bone anisotropy [1][2]. The instantaneous cutting force components in milling are then calculated for each cutter tooth from these relationships. Mean and maximum milling forces and torques are then determined by integrating the instantaneous forces over a number of tool revolutions.
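A drastically simplified stand-in for this model, using a linear chip-load law in place of the experimentally determined non-linear relations, illustrates the per-tooth integration over a revolution (Kt and Kr are assumed constant cutting-pressure coefficients, not the authors' data):

```python
import numpy as np

def slot_milling_forces(feed_per_tooth, axial_depth, n_teeth, Kt, Kr, n_steps=720):
    """Illustrative chip-load model for full slot milling: chip thickness
    h = f_t * sin(phi) while a tooth is engaged (0 < phi < pi), with a linear
    force law Ft = Kt * a * h and Fr = Kr * Ft. Returns (mean, max) resultant
    force over one revolution."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    fx = np.zeros(n_steps)
    fy = np.zeros(n_steps)
    for j in range(n_teeth):
        phi = phis + 2.0 * np.pi * j / n_teeth        # angular position of tooth j
        engaged = np.sin(phi) > 0.0                   # tooth in cut for half a revolution
        h = feed_per_tooth * np.sin(phi) * engaged    # instantaneous chip thickness
        Ft = Kt * axial_depth * h                     # tangential force on this tooth
        Fr = Kr * Ft                                  # radial force on this tooth
        fx += -Ft * np.cos(phi) - Fr * np.sin(phi)    # projection onto feed direction
        fy += Ft * np.sin(phi) - Fr * np.cos(phi)     # projection onto normal direction
    resultant = np.hypot(fx, fy)
    return resultant.mean(), resultant.max()
```

The periodicity of the resultant at the tooth-passing frequency, noted in the Results below, falls out of this summation directly.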

Model validation: We conducted a series of bone milling experiments (over 200 measurements) on an industrial milling machine and measured force components in the feed, normal and axial directions with a Kistler dynamometer. Cortical bone specimens taken from the mid-diaphysis of fresh bovine femurs were milled under the following conditions:
• feed rate [0.4-200 mm/min]
• angular velocity [40-5000 RPM]
• tool geometry [5 milling cutters: 1-4 teeth, 0-30° helix, 0-20° rake]
• radial immersion depth [full slot milling, half up- and half down-milling]
• cutting direction [parallel and transverse to the long bone axis]

These cutting conditions were then simulated using our force prediction algorithm, and the results were compared with the experimental milling-force measurements.

Results: As predicted by our algorithm and demonstrated by the force measurements, bone-milling force components are periodic at the tooth-passing frequency, with the shape of the force curve changing dramatically with milling tool geometry and kinematics. In general, the differences between the predicted and measured mean and maximum forces were within ~10 and ~20 percent, respectively.

Conclusions: The simple chip load model proved effective in quantifying milling forces for a wide range of cutting conditions. A full description of the development and validation of the method will be presented at the conference.

References:
[1] Plaskos C, Hodgson A, Cinquin P. An orthogonal bone-cutting database for modeling high-speed machining operations. CAOS 2003, Marbella, Spain.
[2] Plaskos C, Hodgson A, Cinquin P. Modeling and optimisation of bone cutting forces in orthopaedic surgery. MICCAI 2003, Montreal, Canada.

29

Simulation and Modelling Friday, 11:30 S5

SKALPEL-ICT: Simulation Kernel Applied to the Planning and Evaluation of Image-Guided Cryotherapy

A. Branzan Albu1), D. Laurendeau1), C. Moisan2) and D. Rancourt3)

1) Computer Vision and Systems Laboratory, Dept. of Electrical and Computer Engineering, Laval University, Cité Universitaire, Sainte-Foy (QC), G1K 7P4, Canada
2) Dept. of Radiology, Laval University, Cité Universitaire, Sainte-Foy (QC), G1K 7P4, Canada
3) Dept. of Mechanical Engineering, Sherbrooke University, 2500 boul. de l'Université, Sherbrooke (QC), J1K 2R1, Canada

The SKALPEL-ICT project aims at developing a Simulation Kernel Applied to the Planning and Evaluation of Image-Guided Cryotherapy. Input data for the design of this virtual environment consist of sequences of MR images, image acquisition parameters, and measurements of mechanical tissue properties. The prototype system developed during this project will offer three operating modes.

The surgery planning mode computes the optimal configuration of the interventional cryoprobes using geometric models of the liver, the intra-hepatic vascular system and hepatic tumours. Predicting the spatio-temporal expansion of the iceball is also relevant for this mode.

The intra-operative assistance mode offers an augmented-reality environment, providing additional information such as the dynamic temperature map inside the growing iceball. This information is essential for a successful intervention, because cellular death occurs below 0 °C, where no magnetic resonance signal exists.

The training mode implements a virtual environment containing entities such as a virtual patient, virtual cryoprobes, etc. This mode offers an efficient way to improve the learning curve of the trainees and to accurately assess their skills.

Since the design of SKALPEL-ICT involves multidisciplinary challenges, we opted for a modular approach, involving a parallel and independent development of the geometric, mechanical and thermal modules. Brief descriptions of each module follow. The final phase of the project deals with the integration of these three modules using a custom-made simulation framework.

Geometric modelling comprises two steps: segmentation and reconstruction. Avoiding breathing artefacts imposes thick slices during MR acquisition; therefore, 3D segmentation techniques are not reliable. We consider 2D segmentation and concentrate on specific features of MRI data: the partial volume effect and the inhomogeneous texture of advanced liver cancer. The segmentation results are input data for the proposed 3D reconstruction algorithm, which combines shape-based interpolation and contour-based extrapolation. A new surface-rendering algorithm generates a triangular mesh using contour parameterizations.

The mechanical model is based on a tensor-mass approach, which computes forces from a combination of local stiffness tensors attached to every mesh element. These tensors depend on the mesh geometry at rest and on the mechanical tissue properties. They can therefore be pre-computed, while the real-time computation implements a linear combination of stiffness matrices and displacement vectors.
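The real-time step can be sketched as follows; the block layout and data structure are our assumptions, and only the linear form (precomputed stiffness tensors times current displacements) comes from the text:

```python
import numpy as np

def elastic_forces(stiffness_blocks, u):
    """Real-time step of a tensor-mass model as described: nodal forces are a
    linear combination of precomputed 3x3 stiffness tensors and the current
    nodal displacements, f_i = -sum_j K_ij @ u_j.

    stiffness_blocks: dict mapping node pair (i, j) -> 3x3 ndarray K_ij
                      (precomputed from the rest geometry and tissue properties)
    u: (n, 3) array of nodal displacements
    """
    f = np.zeros_like(u)
    for (i, j), K in stiffness_blocks.items():
        f[i] -= K @ u[j]          # accumulate the contribution of node j on node i
    return f
```

Because the expensive part (assembling the K_ij blocks) happens offline, only this sparse matrix-vector product runs in the haptic/visual loop.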

The thermal model uses the correlation between the MR signal intensity and the liquid-to-solid phase transition. The iceball expansion is estimated from time series of MR images. Regressions over the volume in the temporal and thermal domains allow cryogenic temperatures below 0 °C to be predicted. Our method was successfully validated using samples of ex vivo pig liver and thermosensor readings.

The final integration phase of the project uses a task-oriented simulation framework, namely the Actor-Property-Interaction Architecture (APIA). APIA implements a new paradigm named Interaction Centric Modeling that allows a flexible design of virtual entities and their physical behaviours, such as visco-elasticity, dry friction during the needle insertion, etc. APIA is able to accept input from external peripherals, thus it can be easily interfaced with the MR image acquisition system, as well as with haptic devices.

30

S6

Simulation and Modelling Friday, 11:45

Simulation of Radio-Frequency Ablation Using Composite-Finite-Element Methods

T. Preusser1), F. Liehr2), U. Weikard2), M. Rumpf 2), S. Sauter3) and H.-O. Peitgen1)

1) CeVis, University of Bremen, Universitätsallee 29, D-28359 Bremen, Germany
2) University of Duisburg-Essen, Lotharstrasse 65, D-47048 Duisburg, Germany
3) University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland

Radio-frequency (RF) ablation of primary and metastatic liver tumors has become a promising treatment as an alternative to chemotherapy, radiotherapy and surgical resection. Together with appropriate mathematical, physical and biochemical models which describe the ablation process, the success of the treatment can be estimated or even optimized. The goal is to reduce the recurrence rate by ensuring complete destruction of the malignant tissue.

During the last decade, RF ablation using monopolar probes has been modeled and simulated by several researchers. Such models consist of a system of partial differential equations which describe the distribution of the electric potential in the vicinity of the RF probe, the evolution of heat in the tissue and, finally, the damage inflicted on the tissue. The models are discretized using finite-difference (FD) or finite-element (FE) methods on uniform rectilinear grids or on tetrahedral meshes. Unfortunately, a lot of effort has to be put into building a computational mesh which resolves the complicated geometry consisting of parenchyma and malignant tissue, vessels, probes, etc.
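For orientation, one explicit finite-difference step of the heat part of such a system might look like this on a uniform 2D grid, exactly the kind of simple discretization whose meshing burden CFE is meant to avoid (all parameters are illustrative):

```python
import numpy as np

def heat_step(T, source, alpha, dt, dx, t_body=37.0):
    """One explicit finite-difference step of the heat equation with an RF
    source term on a uniform 2D grid. T: temperature field (°C), source:
    volumetric heating term, alpha: thermal diffusivity. Dirichlet boundary
    held at body temperature; stable for dt * alpha / dx**2 <= 0.25."""
    lap = (np.roll(T, 1, axis=0) + np.roll(T, -1, axis=0) +
           np.roll(T, 1, axis=1) + np.roll(T, -1, axis=1) - 4.0 * T) / dx**2
    T_new = T + dt * (alpha * lap + source)
    T_new[0, :] = T_new[-1, :] = T_new[:, 0] = T_new[:, -1] = t_body
    return T_new
```

A CFE discretization keeps such a simple grid but adapts the basis functions, rather than the mesh, to vessels and probe geometry.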

This paper presents a simulation model, based on previous work by Stein [1], which is discretized using composite-finite-element (CFE) methods to resolve the fine-scale structure of the domain with its variety of different vessels. CFE methods are characterized by their capability of resolving complicated geometries while keeping the computational mesh simple, in particular by adapting not the mesh but the basis functions to the underlying domain. In this way, the approximation quality of the numerical model can be enhanced significantly while the computational effort is kept optimal. A multigrid solver, which respects the structure of the CFE basis functions and thus the domain, further enhances the performance of the method. The application of the simulation to different cases is shown.

[1] T. Stein. Untersuchungen zur Dosimetrie der hochfrequenzstrominduzierten interstitiellen Thermotherapie in bipolarer Technik. Fortschritte in der Lasermedizin, LMTB, 2000.

31

Robotic Interventions Friday, 13:15 RO1

Principles of Navigation in Surgical Robotics

D. Henrich and P. Stolka

Lehrstuhl für Angewandte Informatik III (Robotik und Eingebettete Systeme) Universität Bayreuth, D-95440 Bayreuth, Germany

We propose a framework for different modes of navigation in surgical robotics. Using robots in medicine and especially in surgery requires a representation of a changing environment. This is achieved by modeling at different abstraction levels, from comparatively straightforward 3D imaging modalities to appropriate control parameters for processing.

Between global navigation and control, we introduce the concept of local navigation into surgical robotics, i.e. the concurrent creation and maintenance of a local environment map used for navigation. This intermediate level of sensory feedback allows the system to react to changes in the environment, e.g. through additional features like safety-ensuring robot path changes. Furthermore, in comparison to global navigation based on a-priori information, local navigation permits sampling of information which may be unattainable before process execution, or attainable only with reduced precision. We illustrate this idea of nested control loops on the basis of a specific surgical application – robot-based milling at the lateral skull base.

Navigation Principles: For global navigation with a preoperative map, a data set of the intervention region is required which serves as a global map. This map is typically acquired preoperatively and is used for planning purposes. Locations and paths can be described within this map in a global fashion.

Global navigation based on an intraoperatively acquired map is conceptually similar to the previous principle. Here as well, one has knowledge of the complete environment via a global map. However, acquisition may take place only shortly before process execution begins, or even occasionally during the intervention. The assumption of a non-changed environment becomes more plausible.

In contrast to the above navigation principles, local navigation does not require a map of the environment before the process starts. In fact, execution starts without any prior knowledge. The robot is positioned in the proper execution area by an operator, e.g. through some kind of force-following control. A local map is then continually filled with information sampled during execution, effecting an iteratively enhanced environment representation. The added information has two important properties: it is local in nature, and it may provide more precise knowledge of the environment than global sensors could.

Control encompasses the data cycle of measuring data elements from the process (sampling), computing a reaction that is fed to an actuating element in the process, and a data feedback path to the controller for closed-loop control. For effective control, a tight temporal coupling between these steps is paramount in order to follow process changes. Pure control does not require any kind of spatial information to work; it serves as an entirely reactive navigation principle without any persistent mapping functionality.
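Stripped to its essentials, the described cycle can be sketched as a purely reactive loop; the proportional law and all names are illustrative assumptions, not the RONAF controller:

```python
def control_loop(read_sensor, actuate, setpoint, kp, steps):
    """Minimal closed-loop control cycle as described in the text: sample a
    measurement, compute a reaction, feed it to the actuator. No map and no
    persistent spatial state are kept (purely reactive)."""
    for _ in range(steps):
        measurement = read_sensor()        # sampling from the process
        error = setpoint - measurement     # compute the reaction
        actuate(kp * error)                # feed it to the actuating element
```

For a plant that integrates its input, the loop drives the measured state toward the setpoint; the tight per-step coupling of sensing and actuation is what the text calls control, as opposed to navigation.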

These four sensory feedback cycles are implemented in the RONAF system and described in the full paper together with details for specific sensors for every navigation principle.

Conclusion: Complementary to global navigation, we introduce the term local navigation to describe online data sampling during an intervention, allowing e.g. more precise and/or more current information than is achievable with global sensors. By integrating the four described navigation principles, we define a framework into which sensors can be accommodated in a modular fashion. The surgical robot system RONAF serves as a demonstration platform for these navigation principles, aimed at creating a safe and fast tool for milling interventions.

Acknowledgements: This work is a result of the project “Robotergestützte Navigation zum Fräsen an der lateralen Schädelbasis (RONAF)” of the special research cluster “Medizinische Navigation und Robotik” (SPP 1124) funded by the Deutsche Forschungsgemeinschaft (DFG). It was created in cooperation with the “Zentrum für Schädelbasis-Chirurgie der Universitätskliniken des Saarlandes” in Homburg/Germany. Further information can be found at http://ai3.inf.uni-bayreuth.de/projects/ronaf/.


RO2

Robotic Interventions Friday, 13:30

Robotic Surgery in Neurosurgical Field

H. Iseki1), Y. Muragaki1), S. Oomori1), K. Nishizawa1), M. Hayashi1), R. Nakamura1) and I. Sakuma2)

1) Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical

University, 8-1 Kawada-cho, Shinjuku-ku, Tokyo 162-8666, Japan
2) The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan

Information technology (IT), visualization and manipulation will be the key words for next-generation neurosurgery. To let surgical manipulation evolve from the craftwork level to expert manipulation, it is essential to establish a strategy desk based on a target-optimizing management system, which prepares a precise preoperative surgical plan and surgical course from digital images (a so-called road map) and leads the manipulator to the given destination. The robotic surgery system under development combines a microlaser ablation unit, consisting of a 300-fold microendoscope and a semiconductor microlaser with a 2.8 µm wavelength, with an open-MRI-compatible micromanipulator. Upon completion of this system, even more microscopic surgery becomes possible. The microlaser, in combination with the surgical strategy system and intraoperative imaging, is the tool of choice for a “pin-point attack”, because its penetration depth is as short as 100 µm or less. The Gamma Knife is an instrument that radically cures brain lesions using γ-rays, as if a conventional surgical knife were cutting out the lesion, without disturbing the surrounding normal brain tissue. With the old Model B, the minimum accuracy was 0.5 mm, because putting the helmet on the patient's head and positioning each target were carried out manually. In the Model C-APS, however, small motor drives are installed on the left and right inner sides of the helmet, so that all targets can be positioned automatically with 0.1 mm accuracy once the patient wears the helmet. This system therefore enables less invasive and safer treatment for patients by reducing temporal and physical strain irrespective of the number of shots, and it reduces these strains considerably for the medical staff as well.
The ultimate goal is to achieve total resection of residual brain tumor located in or adjacent to functional brain regions, using an operating system in which the micromanipulator, microlaser and microradiosurgery system (C-APS) are employed, aided by visualized anatomical, physiological and metabolic information.


Robotic Interventions Friday, 13:45 RO3

From the Laboratory to the Operating Room: Usability Testing of LER, the Light Endoscope Robot

P. Berkelman, E. Boidard, P. Cinquin and J. Troccaz

TIMC-IMAG, Institut d'Ingénierie de l'Information de Santé, F-38706 La Tronche Cedex, France

We have developed a surgical assistant robot to hold and manipulate an endoscope during minimally invasive surgery. The novel features of this endoscope robot are its simplicity and small size, which confer significant advantages in ease of use, safety, and versatility compared to current endoscope manipulators which hold the endoscope with a robotic arm extended over the patient and have a large, heavy base resting on the floor or mounted to the operating table.

The LER consists of an annular base placed on the abdomen, a clamp to hold the endoscope trocar, and two joints which enable azimuth rotation and inclination of the endoscope about a pivot point at the incision. A compression spring around the endoscope shaft and a cable wrapped around a spool control the insertion depth of the endoscope. Small brushless motors actuate each motion degree of freedom. Control of the robot is simple and straightforward, as the motion of each motor directly corresponds to horizontal and vertical motion and the zoom of the endoscope camera image. No kinematic calculation, initialization procedure, or homing sequence is necessary for operation of the robot. The latest prototype of our endoscope robot is sterilizable so that plastic draping for sterility is unnecessary and it may be autoclaved with other surgical equipment.

In order to prepare for clinical trials on patients and improve its integration and ease of use in an operating room environment, the LER has been used on a regular basis by surgeons during minimally invasive surgical training procedures on cadavers and animals. Two aspects in particular that have been evaluated during testing are the means of fixation of the LER on the patient and the user command interface. To fix the location of the robot on the abdomen it was found that a small articulated arm clamped to the table was preferable to attachment by flexible straps or adhesive sheets, although suturing the robot to the abdomen was also found to be adequate in certain cases. Most users preferred a voice recognition command interface to buttons or pedals, even if the response was slightly delayed. A miniature keypad attached to a surgical instrument performed well, but adds the inconvenience of having to change the keypad attachment whenever the instrument is changed.

The motion speed of the LER in different directions was adjusted according to the preferences of surgeons who have used it. Based on feedback from these surgeons, a hook attachment was added to the insertion cable to simplify the removal of the endoscope to clean the lens, and a command was added to the voice recognition interface to switch off the motors and enable manual repositioning of the LER. Current plans for further modifications include adding watertight seals for the motor shafts and simplifying disassembly to improve the convenience of cleaning and sterilization of the LER.


RO4

Robotic Interventions Friday, 14:00

Safety of Surgical Robots in Clinical Trials

W. Korb1), D. Engel2), R. Boesecke1), G. Eggers1), B. Kotrikova1), H. Knoop1), R. Marmulla1), J. Raczkowsky2), N. O'Sullivan1), H. Wörn2), J. Mühling1) and S. Hassfeld1)

1) MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany 2) Institute for Process Control and Robotics (IPR), University of Karlsruhe (TH), Building 40.28, Engler-Bunte-Ring 8,

D-76128 Karlsruhe, Germany

Surgical robots are complex mechatronic systems, and it is important to apply systematic approaches to quality and safety management. For medical device manufacturers who want to develop and distribute robots, it is natural to implement a quality management system and to consider the special requirements of such complex systems. In universities and research institutions it is often not possible to apply the same measures for software design and error analysis. Nevertheless, these considerations are very important for clinical trials.

This paper reviews publications in the area of risk management for surgical robots (Davies 2000; Ng 1996; Varley 1999) and summarizes the results and know-how of the RobaCKa project, in which a craniotomy robot was developed and applied in a patient trial for the first time in April 2003.

The main safety-critical items in surgical robotics are:

1. Accuracy and precision of planning and execution: The main problem in robotic surgery is not the precise execution (robots work very accurately) but the precise planning. Arising problems are quantitatively valid segmentation and 3D modelling.

2. Registration: So far, landmark-based methods have often been used. These cause pain and discomfort for patients if artificial landmarks are used, and often poor precision if anatomical landmarks are used. Surface-based methods are available in some research projects, but the problem of verifying registration accuracy during the surgical intervention has only been partly solved so far.

3. Collision prevention and path planning: Robots have to move slowly (this is especially important for former industrial robots), and kinematics with an “elbow” can endanger the clinical personnel.

4. Reliability of control software: Quality management for safe and reliable software has to be introduced from the very beginning of the development (also in research projects!), cf. Varley 1999.

5. Vigilance: A protocol for behavior in emergency cases is essential before the first patient trial is conducted.

6. Hygienic considerations: The large robots have to be covered with sterile drapes, which is often not easy to do.

7. Clinical workflow: This includes hygiene as well as other safety-related steps, such as fixation of the patient, fixation of mechanical parts, patient and robot supervision, etc.

Conclusion: Many surgical robot projects have not led to patient trials so far. One reason is probably the lack of know-how in quality and risk management of technically complex systems at research institutions and university clinics. The methods shown here can easily be applied to other research projects in surgical robotics and navigation.

References: B. Davies. A review of robotics in surgery. P I Mech Eng H 214: 129-139, 2000.

W.S. Ng and C.K. Tan. On safety enhancements for medical robots. Reliab Eng Syst Safe 54: 35-45, 1996.

P. Varley. Techniques for development of safety-related software for surgical robots. IEEE Trans Inf Technol Biomed 3(4): 261-267, 1999.


Sensor-Feedback Systems and Augmented Reality Friday, 14:45 AR1

Virtual Reality, Augmented Reality and Robotics in Digestive Surgery

L. Soler1), N. Ayache2), S. Nicolau1,2), X. Pennec2), C. Forest1,2), H. Delingette2), D. Mutter1), J. Marescaux1)

1) IRCAD/EITS/VIRTUALIS, 1, Place de l’Hôpital, F-67091 Strasbourg Cedex, France

2) EPIDAURE Project, INRIA, 2004 Route des Lucioles, F-06902 Sophia Antipolis Cedex, France

Although modern medical imaging offers a precise representation of the internal anatomy of patients, its interpretation often remains a difficult task. Indeed, since the patient is represented on a set of 2D films (MRI or CT scan), a mental 3-dimensional reconstruction of the patient's anatomy must be created. Interactive delineation of organs and the volume rendering offered on the latest generation of scanners remain unsuitable for routine use due to the time-consuming delineation process and limited portability to personal computers. In answer to these limitations, we have developed an automated computer-assisted planning system that includes the automatic 3D reconstruction of patient anatomy from the conventional scanner image, together with interactive visualization on a standard PC or laptop fitted with a conventional 3D graphics card. From a CT scan or an MRI, the software fully automatically detects, recognizes and delineates the anatomical and pathological structures visible in the image. These 3D reconstructions are then visualized through a user-friendly interface that makes it possible to see organs in transparency, to interact with them, to navigate anywhere, to simulate any kind of cœlioscopy (laparoscopy, fibroscopy, colonoscopy, cholangioscopy, etc.), to perform virtual resection, and to obtain automatically the volume of any visualized 3D structure. The network version of this software also permits a real-time long-distance connection between practitioners, allowing them to share the same 3D reconstructed patient and to interact with it virtually before the intervention. Such planning systems, though useful in making the best therapeutic choice, are not realistic enough to be used for surgical learning.
By adding physical properties to the 3D model of the patient and using force-feedback devices, we have developed a surgical simulator that can be used to cut organs or clamp vascular structures virtually. All these tools are useful for surgical training, but can also be used pre-operatively to plan a surgical procedure and intra-operatively to improve, control and supervise the procedure. To improve intra-operative use, we have also developed an augmented reality system that superimposes the pre-operative 3D reconstruction on a live video view of the patient. It also permits real-time tracking of surgical tools and display of their target direction. This augmented reality thus offers a real-time virtual transparency that makes it possible to know the location of organs and pathologies exactly and accurately. Our validation, performed on an abdominal model, shows that the system allows a 1 cm target located on a liver model to be reached with 2 mm precision in less than one minute. Combined with a robotic system, in the near future these works will permit long-distance and automated surgical procedures. Our encouraging experiments, including the first long-distance surgery between New York and Strasbourg and automated visual control of surgical robots, show the potential of such improved procedures, which should represent one possible future for surgery.


AR2

Sensor-Feedback Systems and Augmented Reality Friday, 15:00

Palpation Imaging and Guidance Using a Haptic Sensor Actuator System for Medical Applications

W. Khaled1), S. Reichling2), O. T. Bruhns2); S. Egersdörfer3), G. Monkman3), H. Böse4), M. Baumann4),

A. Tunayar5), H. Freimuth5), A. Lorenz6), A. Pesavento6) and H. Ermert1)

1) Institute of High Frequency Engineering, Ruhr-University Bochum, D-44780 Bochum, Germany

2) Institute of Mechanical Engineering, Ruhr-University Bochum, D-44780 Bochum, Germany 3) Fraunhofer Institute for Silicate Research, Neunerplatz 2, D-97082 Würzburg, Germany

4) University of Applied Sciences Regensburg, Prüfeninger Straße 58, D-93049 Regensburg, Germany 5) Institute for Micro Technology Mainz GmbH, Carl-Zeiss-Strasse 18-20, D-55129 Mainz, Germany

6) LP-IT Innovative Technologies GmbH, Huestr. 5, D-44787 Bochum, Germany

Background/Problem: In the field of medical diagnosis, there is a strong need to determine the mechanical properties of biological tissue, which are of histological and pathological relevance. One of the established diagnostic procedures for locating changes in the pathological state of body organs and tissue is palpation, which is limited with respect to sensitivity and quantitative documentation. Current medical practice routinely uses sophisticated diagnostic tests based on magnetic resonance imaging, computed tomography and ultrasound imaging; however, these cannot provide a direct measure of tissue elasticity. On the one hand, a suitable sensor is required for the quantitative detection of mechanical tissue properties; on the other, a realistic mechanical actuator is needed to display these properties.

Methods/Tools: A new haptic sensor actuator system has been designed to visualize and reconstruct mechanical properties of tissue for palpation imaging. The sensor system is based on ultrasonic elastography and the actuator is a controllable haptic display using electrorheological fluid.

Real-time ultrasound elastography has recently been developed based on high-efficiency signal-processing approaches. In order to calculate mechanical properties, finite element simulations were performed for a number of soft biological tissue models. The results obtained from the finite element analysis were confirmed by ultrasonic experiments on a set of tissue-mimicking phantoms with known acoustical and mechanical properties. Finally, using numerical solution models and solving the inverse problem, we deduce relative mechanical properties. These properties are transferred to the actuator system, which consists of a 2D piston array of elements filled with an electrorheological fluid. The electrically controlled array reconstructs the corresponding resistance forces through the viscosity increase of the electrorheological fluid.
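The elastographic principle underlying this pipeline can be illustrated with a hedged, much-simplified sketch (not the authors' signal-processing chain): under quasi-static compression, axial strain is the spatial gradient of displacement, and under roughly uniform stress, low-strain regions correspond to stiff tissue. All names and sample data below are illustrative assumptions:

```python
# Toy strain-elastography sketch: stiffness is inferred as inverse strain,
# normalized so the stiffest segment maps to 1.0.
def relative_stiffness(displacement, dz=1.0, eps=1e-9):
    """displacement: axial displacements sampled along depth (arbitrary units).
    Returns per-segment relative stiffness (normalized inverse strain)."""
    strain = [(displacement[i + 1] - displacement[i]) / dz
              for i in range(len(displacement) - 1)]
    inv = [1.0 / max(abs(s), eps) for s in strain]
    top = max(inv)
    return [v / top for v in inv]

# A hard inclusion deforms less: smaller displacement increments mid-sample.
disp = [0.0, 0.4, 0.8, 0.9, 1.0, 1.4, 1.8]
print(relative_stiffness(disp))  # low-strain (stiff) segments normalize to 1.0
```

A real system would estimate displacement from RF echo correlation and solve a regularized inverse problem, but the hard-lump contrast the abstract describes appears already in this one-dimensional caricature.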

Results: The results show that our real-time ultrasound elastography system is able to differentiate hard lumps from soft tissue, such as malignant and benign tissue areas in the prostate or breast, with a high degree of accuracy.

The actuator array achieves a spatial resolution of up to 2 mm x 4 mm, comparable to the receptor resolution of the fingertip, which is sufficient for medical diagnosis. The final results show good agreement between analytical predictions, visual results and experimental data on the haptic actuator display.

Conclusions: The sensor actuator system can be used in various medical applications, including virtual palpation to detect and localize subsurface hard tumors, virtual training simulation for medical education, and intraoperative navigation to locate subsurface organs, especially in remote surgery, where the surgeon cannot palpate the tissues of interest before they are cut. The system has the potential to improve cancer detection and allow more reliable diagnosis.


Sensor-Feedback Systems and Augmented Reality Friday, 15:00 AR3

In Vivo Study of Forces During Needle Insertions

B. Maurin1), L. Barbe1), B. Bayle1), P. Zanne1), J. Gangloff1), M. de Mathelin1), A. Gangi2), L. Soler3) and A. Forgione3)

1) LSIIT, Parc d'Innovation, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France

2) Hôpital Civil, 1, place de l'Hôpital, BP 426, F-67091 Strasbourg Cedex, France 3) IRCAD/EITS/VIRTUALIS, 1, Place de l’Hôpital, F-67091 Strasbourg Cedex, France

Needle insertions are used in a wide range of traditional medical treatments, such as injections or punctures, which are among the least invasive procedures. They are also used in minimally invasive techniques such as radiofrequency cancer treatment and vertebroplasty. The number of percutaneous treatments can clearly be expected to increase in the coming years.

During these procedures, the practitioner's perception is both visual (insertion point, needle penetration) and haptic. In the case of deep needle insertions (e.g., epidural anesthesia), the haptic perception may become dominant. Depending on the complexity of the operation, intra-operative medical imaging (CT, fluoroscopy, echography) may be needed to reinforce the haptic perception. Hence, these treatments presently require a lengthy procedure with successive image-taking and needle-insertion steps to allow precise positioning of the needle in the targeted anatomical structures. As the effectiveness of the treatment strongly depends on the position of the needle tip, the practitioner's skill and experience, as well as the number of per-operative image-taking steps, are critical for the outcome of the percutaneous procedure. Furthermore, in some cases the procedures also expose the medical staff to dangerous X-rays due to the intra-operative imaging. For all these reasons, we are currently designing a robotic system for needle insertions. The robot will be teleoperated with force feedback, providing at the same time good protection of the practitioner and a haptic perception that current robotic systems lack. The haptic interface will also be used for simulation and training.

The nature of the interaction between the needle and the tissues makes the understanding of deformation, cutting and friction mechanisms quite difficult, and it deserves further detailed examination. This paper deals specifically with the forces and torques involved in percutaneous procedures. From in vivo measurements, we characterize the efforts involved in several needle insertion procedures. We present a comparative study for different operative techniques (insertion with or without incision, with celioscopic assistance, open surgery) and different target organs. This work is distinguished from similar studies of needle-tissue interaction mechanics in that it considers realistic insertion conditions on living tissues, which behave noticeably differently from dead tissues from a mechanical point of view.

Two studies are detailed:

• The first details the range of forces for a large set of percutaneous procedures. It aims notably to define the design constraints for a tele-operated robotic system, and it underlines both the sensitivity necessary to render organ transitions and the significant force required to pierce the tissues.

• The second analyses the temporal evolution of the efforts during insertion and their link with the successive anatomical layers. It makes it possible to discuss and refine models recently proposed in the literature, which is particularly important for modelling the force feedback when the dynamic interaction between the robotic system and the operated body is simulated.
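The kind of layer-transition analysis described in the second study can be illustrated with a toy sketch (the threshold and force profile are illustrative assumptions, not measured values): during insertion the axial force rises while a tissue layer resists, then drops sharply when the needle punctures it, so transitions appear as peak-then-drop events.

```python
# Detect tissue-layer transitions in an axial force profile as sharp
# drops from the running force peak.
def find_punctures(force, drop=0.3):
    """Return sample indices where force falls by more than `drop`
    from the running peak, marking a candidate layer transition."""
    events, peak = [], force[0]
    for i, f in enumerate(force):
        if f > peak:
            peak = f
        elif peak - f > drop:
            events.append(i)
            peak = f  # re-arm the detector for the next layer
    return events

profile = [0.1, 0.4, 0.9, 0.5, 0.7, 1.3, 0.8, 1.0]
print(find_punctures(profile))  # [3, 6]
```

Real data would need filtering and a threshold calibrated per organ, but the peak-then-drop signature is the feature that links the force record to the anatomical layer sequence.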


AR4

Sensor-Feedback Systems and Augmented Reality Friday, 15:30

Teaching Bone Drilling: 3D Graphical and Haptic Simulation of a Bone Drilling Operation

H. Esen1), K. Yano2), M. Buss1)

1) Munich University of Technology, Institute of Automatic Control Engineering, D-80290 Munich, Germany
2) Toyohashi University of Technology, Department of Production Systems Engineering, Hibarigaoka 1-1, Tempaku, 441-8580 Toyohashi, Japan

Haptic devices have recently found a place in many different application areas. Applications in the medical sciences are among the most popular and attractive for researchers because of the interesting and varied problems the field introduces. Compared with research on soft-tissue interaction and on simulators for minimally invasive surgery (MIS), rather few works concern the cutting, burring and drilling of bone.

Bone drilling is needed prior to many orthopedic operations, such as pin or screw insertion into the bone, and it requires a high level of surgical skill. The main problem in a bone drilling operation is heat generation: the friction that occurs during drilling produces high temperatures, which may cause irreparable damage to the bone. The longer the drilling time, the higher the produced temperature, i.e. for ideal drilling the feed velocity of the drill should be as high as possible. The feed velocity is directly related to the applied thrust force: high feed (axial) velocities require large thrust (axial) forces. The other main factors influencing the axial drilling force are the cutting speed and the drill tip angle, where the effect of the cutting speed is negligibly small.

Therefore, the core skills for bone drilling procedures can be stated as: recognizing the drilling end point; applying a constant, sufficient but non-excessive thrust force; and applying a constant, sufficient but non-excessive feed velocity.
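A hedged sketch of how a trainer might score the two force/velocity skills follows; all threshold and sample values are illustrative assumptions, not parameters from the paper:

```python
# Score one drilling trial: thrust force and feed velocity must stay
# sufficient but non-excessive throughout. Thresholds are hypothetical.
def evaluate_drilling(samples, f_min=10.0, f_max=40.0, v_min=1.0, v_max=5.0):
    """samples: list of (thrust_N, feed_mm_per_s) pairs from one trial.
    Returns the fraction of samples inside the acceptable window."""
    ok = sum(1 for f, v in samples
             if f_min <= f <= f_max and v_min <= v <= v_max)
    return ok / len(samples)

trial = [(25.0, 3.0), (30.0, 3.5), (55.0, 6.0), (28.0, 2.9)]
print(evaluate_drilling(trial))  # 0.75: one excessive-force sample of four
```

The abstract's objective evaluation additionally uses the total operation time; that would amount to one more scalar check alongside this per-sample score.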

In this work, a skill training system for bone drilling in a 3D virtual environment with haptic feedback is proposed. The system has three main elements: a 3-DOF haptic display, a 3D visual display, and the controllers working behind the displays.

Apart from previous works in the "bone drilling simulation" field, our work concentrates on teaching young surgeon candidates the core skills of bone drilling. For this purpose, a novel control algorithm is developed and presented in this paper. The algorithm enables medical students and novice surgeons to train their surgical skill in drilling into bone; the core skills can be trained simultaneously using this algorithm.

The haptic device was developed at the Technical University of Berlin, while all authors were in Berlin. It normally has 3 active DOF, but for this project an end effector in the shape of a medical drill was designed, adding 3 passive DOF.

For visual feedback, a 3D skull model was developed using real CT data, and a medical drill model is placed in the graphical user interface (GUI). When the trainee moves the end effector of the haptic display, the medical drill model in the GUI moves as well; if there is a collision with the skull model, the trainee feels it through the haptic display. For teaching purposes, the GUI is enriched with indicators for the applied force, velocity and acceleration, as well as an end-position warning and a comment box informing the trainee about his or her trial online.

Preliminary user test results suggest that the proposed medical training system is a powerful tool for training the core skills of a bone drilling procedure. The applied thrust force and the timing of the operation are used for an objective evaluation of the test results.


Poster Contributions P1

Optimisation of the Robot Placement in the Operating Room

P. Maillet, P. Poignet and E. Dombre

LIRMM UMR 5506 CNRS / Université Montpellier II, 161 rue Ada, F-34392 Montpellier Cedex 5, France

Placement is a recurrent problem in robotics that influences robot performance and may have dramatic consequences for safety. In the operating room (OR) in particular, very quick procedures should be provided to facilitate robot installation and removal.

Practically, when installing a robot, the surgical staff has to deal with several constraints due to the surgical procedure at hand, but also due to regulatory issues and environmental conditions (the OR is cluttered with many people and several apparatuses, regarded as obstacles by the robot). Moreover, it is well known that the dexterity of a robot is not constant throughout its operational workspace: fine motions are better performed along or around certain directions than others. This strengthens the need to optimise the robot placement with respect to the environment, the surgical staff and the patient.

However, we face an overconstrained optimisation problem, and trade-offs have to be found. Several solutions have already been proposed for industrial robots (see for instance [Abdel-Malek] or [Barral] for comprehensive reviews), but few of them apply to surgical robots [Coste-Manière].

We present in this paper an original approach based on interval analysis, which offers several advantages over the aforementioned conventional algorithms: it is well suited to the class of constraint satisfaction problems (CSP) to which the robot placement problem belongs; it is easy to implement; it yields a solution space (a set of solutions instead of a single one), which makes it possible to assess the robustness of a chosen robot placement in terms of safety; and it may be time-efficient.

As in CSP problems, there are constraints and criteria. The constraints may be classified as robotic (robot kinematics, joint limits, reachable workspace, various obstacles in the OR, etc.) and surgical (type of procedure, surgical targets, accuracy requirements, etc.). The criteria, so far, are dexterity, reachability and distance to obstacles. All constraints and criteria are defined in terms of intervals, which are rather easy to specify.
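The interval formulation can be illustrated with a toy sketch (the names, bounds, and one-parameter box are illustrative assumptions, not the authors' implementation): a candidate placement box either satisfies a constraint entirely, violates it entirely, or is undecided and would be bisected by a branch-and-prune solver.

```python
# Classify a candidate placement box against one interval constraint.
def classify(box, allowed):
    """box, allowed: (lo, hi) intervals for one placement parameter."""
    lo, hi = box
    alo, ahi = allowed
    if alo <= lo and hi <= ahi:
        return "inside"       # every placement in the box satisfies it
    if hi < alo or lo > ahi:
        return "outside"      # no placement in the box satisfies it
    return "undecided"        # a full solver would bisect and recurse

def bisect(box):
    """Split an undecided box in half for further examination."""
    lo, hi = box
    mid = (lo + hi) / 2
    return (lo, mid), (mid, hi)

print(classify((0.2, 0.4), (0.0, 1.0)))   # inside
print(classify((1.5, 2.0), (0.0, 1.0)))   # outside
print(classify((0.8, 1.2), (0.0, 1.0)))   # undecided
```

Collecting all "inside" boxes yields the solution *space* the abstract mentions, whose extent indicates how robust a chosen placement is to installation error.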

Simulations have been run on a planar robot. They demonstrate the efficiency and performance of the proposed algorithm for surgical robot placement in the OR.

References: [Abdel-Malek] K. Abdel-Malek and W. Yu. On the placement of serial manipulators. In Proc. of DETC'00, 2000 ASME Design Engineering Technical Conferences, September 10-13, 2000, Baltimore, Maryland.

[Barral] D. Barral, J. Perrin, E. Dombre and A. Liégeois. Development of optimisation tools in the context of an industrial robotic CAD software product. International Journal of Advanced Manufacturing Technology 15: 822-831, 1999.

[Coste-Manière] E. Coste-Manière, L. Adhami, R. Severac-Bastide, A. Lobontiu et al. Optimized port placement for the totally endoscopic coronary artery bypass grafting using the da Vinci robotic system. In D. Rus and S. Singh, editors, Experimental Robotics VII, Lecture Notes in Control and Information Sciences, volume 271. Springer.


P2 Poster Contributions

Development of a Navigation System for TMS

A. Wechsler, S. Woessner and J. Stallkamp

Fraunhofer Institute for Manufacturing Engineering and Automation (IPA), Nobelstr. 12, D-70569 Stuttgart, Germany

Transcranial Magnetic Stimulation (TMS) uses magnetic field pulses to non-invasively stimulate currents on the cortex. The magnetic field pulses are generated by a coil located near the patient's head. The neurological effects of the currents can be monitored using different techniques, such as EEG.

In neurology, TMS is used for the exploration of functional regions on the cortex. In psychiatry, the method is intended to be used for the clinical treatment of certain psychological disorders such as depression. Currently, this application is being evaluated in clinical studies. In both cases, the precise location and orientation of the stimulating currents are crucial for the interpretation of the results.

In close collaboration with the University Hospital Ulm, the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) has developed a navigation system for the visualisation and documentation of the stimulation coil position.

In order to measure the position and orientation of the stimulation coil, an optical tracking system is used. Since TMS is a non-invasive procedure, registration is based on patient-specific distinctive features (landmarks). The coil position is measured relative to the patient's head, which allows for head movement during the examination. Through the evaluation of different types of user interaction, it was found that visualising the position and orientation of the stimulation coil in a 3D view using a virtual-reality model gives the examiner the most intuitive feedback while positioning the coil. The display of a surface model of the cortex and coloured functional datasets also improved visual navigation. To simplify coil positioning for the stimulation of a given cortex region, another feature was added: the 3D visualisation tracks the focus of the coil by automatically displaying the three slices of the dataset that intersect at this particular point. In addition to the pre-surgical planning capabilities that navigation systems for surgery can offer, in TMS the documentation of the actual process is important, as TMS is applied repeatedly to one patient. This documentation includes the positions and orientations of the coil and the stimulation times. To reposition the coil according to recorded position and orientation data, a virtual coil marker is displayed in the 3D view, with which the current coil representation can be aligned. Since the coil is commonly positioned by hand, it cannot be kept perfectly still; therefore, automatically recording the coil position synchronously with the stimulation pulses is vital for precise documentation. This is achieved using a TTL signal from the stimulation hardware to trigger the recording.

From a first clinical evaluation of the navigation system, we can conclude that the direction of the stimulation current is of great importance for the effect of TMS on the patient. The improved precision compared with previous navigation systems also adds to the significance of the TMS results.


Poster Contributions P3

Robotics in Health Care. An Interdisciplinary Technology Assessment Including Ethical Reflection

M. Decker

Institut für Technikfolgenabschätzung und Systemanalyse (ITAS), Forschungszentrum Karlsruhe,

Postfach 3640, D-76021 Karlsruhe, Germany

Technology Assessment (TA), as a problem-oriented research area, usually begins with a detailed description of the problem to be solved. In the case of robotics applications in health care, such a description starts with technical questions: is a certain robot able to fulfil the technical tasks it was built for? This is typically answered by defining technical criteria in advance and evaluating prototypes against them. In addition, economic aspects must be taken into consideration by asking whether the robot is cheaper than the work of a human being, typically by means of a context-related cost-benefit analysis. Legal aspects must be considered as well, for instance the liability for malfunction of the robot. This becomes crucial if learning algorithms are used in the robot's control system, which may lead to conflicts between product liability and owner liability. Last but not least, ethical reflection is necessary in order to identify areas in which we do not want robots acting instead of human beings. The whole sector of nursing is of particular interest here.

Given this problem description, an interdisciplinary TA approach including all the above-mentioned scientific disciplines is needed. Ethics is one discipline in the "concert" of this interdisciplinary endeavour. On closer inspection, however, the role of ethics differs from that of the other disciplines; this becomes obvious if we describe robots in a "means-ends" context. While the technical, economic and even legal perspectives refer to the "means", ethical reflection contributes to the discussion about the "ends" to be reached by introducing robots into health care.

This contribution reports on the project "Robotics. Options of the Substitutability of Human Beings" carried out by the European Academy GmbH. It was organised as an interdisciplinary discussion, including experts from robotics, artificial intelligence, philosophy, medicine, (health) economics and jurisprudence, to develop concrete recommendations for action by policy makers and scientists. Special attention was devoted to quality control of the interdisciplinary research, realised through several accompanying evaluation processes. Robotic applications in health care served as case studies in this project.


P4 Poster Contributions

State of the Art of Surgical Robotics

P. P. Pott1), A. Köpfle2), A. Wagner3), E. Badreddin3), R. Männer2), P. Weiser4), H.-P. Scharf 5) and M. L. R. Schwarz1)

1) Labor für Biomechanik und experimentelle Orthopädie, Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-

Ufer 1-3; D-68167 Mannheim, Germany 2) Lehrstuhl für Informatik V, Universität Mannheim, B6, 23-29, Bauteil C, EG; D-68131 Mannheim, Germany 3) Lehrstuhl für Automation, Universität Mannheim, B6, 23-29, Bauteil C, EG; D-68131 Mannheim, Germany 4) Institut für CAE, Fachhochschule Mannheim, Windeckstr. 110, D-68163 Mannheim, Germany 5) Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-Ufer 1-3; D-68167 Mannheim, Germany

Objective: This work describes the worldwide state of the art in the development of robot-aided operative medicine.

Material and Methods: The review is based on a literature search (PubMed, IEEE Xplore, CiteSeer) and on the proceedings of MICCAI 2002 and 2003, CARS 2003, CAOS 2003 and CURAC 2003. Additionally, the search engine Google was used.

Results: Divided into the different surgical disciplines (imaging (12 systems), general surgery (16), ear-nose-throat/oral-maxillofacial surgery (5), radiotherapy (2), neurosurgery (13), orthopaedics (18), trauma surgery (3) and urology (4 systems)), 73 robotic systems are presented briefly by describing their functions, applications and mechanical setup. A quarter of these were developed for orthopaedic applications. About one fifth of the systems are used for general surgery and for neurosurgery, respectively. 16% of the presented systems are used for imaging tasks such as the guiding of endoscopes.

These developments are promoted by public research facilities and also by commercial companies.

Conclusions: The examination of international research and development in the field of medical robotics shows a wide distribution across different areas of medicine, ranging from the care of burns via the milling of cavities in bone to radiosurgery and the application of pedicle screws. From a technical standpoint, the strong specialisation of the mechanical setups is evident:

On the one hand, industrial robots adapted to the surgical field are used; on the other hand, kinematics especially developed for their surgical purpose come into operation.

Only a few systems can be used clinically.


Poster Contributions P5

Automatic Coarse Registration of 3D Surface Data in Oral and Maxillofacial Surgery

T. Maier1), N. Schön1), M. Benz1), E. Nkenke2), F. W. Neukam2), F. Vogt3) and G. Häusler1)

1) Chair for Optics, Institute for Information and Photonics, University Erlangen-Nuremberg, Staudtstr. 7/B2, D-91058

Erlangen, Germany 2) Department of Oral and Maxillofacial Surgery, University Erlangen-Nuremberg, Glückstr. 11, D-91054 Erlangen, Germany 3) Chair for Pattern Recognition, University Erlangen-Nuremberg, Martensstr. 3, D-91058 Erlangen, Germany

The zygomatic fracture associated with a dislocation of the eyeball is one of the most frequent traumata to the facial skeleton. We have developed a system that supports the surgeon in adjusting the correct eye position. The system's core is an optical range sensor according to [1] that allows fast, highly accurate, non-contact 3D data acquisition. We preoperatively compute 3D target data of the patient's face and eye position [2]. During the operation the surgeon can acquire actual data at any time. The system compares the actual data with the target data in order to give practical feedback to the surgeon. The data have to be registered for comparison. The registration process is done in two steps: coarse registration and fine registration. The fine registration is done by default via the ICP algorithm, whereas the necessary coarse registration is often accomplished by user interaction [3]. Despite several approaches, reliable real-time automation of coarse registration remains a challenge. Moreover, we need an algorithm that deals with data subject to form changes caused by the operation.

Our approach exploits the Gaussian image of the face, which is built from the surface normals that are computed for visualisation anyway. The Gaussian image looks like a dotted unit-sphere surface. Its application to matching has so far been limited to convex or closed objects.

Now an approach for human faces (non-convex and non-closed) has been introduced for the first time [4]. The algorithm rapidly discretises the Gaussian image by an overlaid cubic lattice; the cubic cells that intersect the sphere surface are used for the discretisation. One feature, the cell density, is calculated for each cell by counting the enclosed normals (normalised by the area of intersection). Searching for the densest cell among the non-empty cells leads to a corresponding subset of vertices of the facial data that represents a part of the patient's cheek, a region that remains unaffected by surgery. The subsets of different data from the same person have similar patchy shapes that are suited for registration by principal-axes transformation.
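The densest-cell search described above can be sketched compactly. This is a simplified illustration, not the authors' implementation: the normalisation by the cell/sphere intersection area and the neighbouring-cell extension are omitted, and all names and the cell size are assumptions.

```python
from collections import Counter
import numpy as np

def densest_normal_patch(normals, cell_size=0.25):
    """Discretise the Gaussian image (unit normals on the sphere) with a
    cubic lattice and return the indices of the vertices whose normals
    fall into the most densely populated cell."""
    cells = [tuple(c) for c in np.floor(np.asarray(normals) / cell_size).astype(int)]
    counts = Counter(cells)                     # occupancy per lattice cell
    densest = max(counts, key=counts.get)       # the densest non-empty cell
    return np.array([i for i, c in enumerate(cells) if c == densest])

# Synthetic data: ten normals clustered near +z (the "cheek" patch)
# plus a few scattered outliers.
normals = np.array([[0.0, 0.0, 1.0]] * 10 + [[1, 0, 0], [0, 1, 0], [-1, 0, 0]], float)
patch = densest_normal_patch(normals)
```

The returned vertex subset is what the principal-axes transformation would then be applied to.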

Although sufficient in many cases, there is room for improvement. In this paper we achieve higher robustness by modifying the feature-search strategy: first, the cubic cell size has been reduced to make the discretisation finer; secondly, the search for the densest cell has been extended from single cells to the combination of one cell and its neighbouring cells on the unit sphere. This procedure increases the computational cost, but it allows a more uniform feature sampling.

We tested the method on a number of healthy persons. In order to simulate form changes in the face, we injected saline solution into the malar region and registered the 3D data without and with injection. The results showed that (i) the computing time is still within our requirements (~1 s) and (ii) the similarity of the subset shapes is higher, leading to better registration results.

References:
[1] M. Gruber, G. Häusler: Simple, robust and accurate phase-measuring triangulation. Optik 89(3), pp. 118-122, 1992, www.3d-shape.com
[2] M. Benz, X. Laboureux, T. Maier, E. Nkenke, S. Seeger, F. W. Neukam, G. Häusler: The Symmetry of Faces. Procs VMV'01, pp. 43-50, 2002
[3] T. Maier, M. Benz: Interactive coarse registration of triangle meshes. Lehrstuhl für Optik, Annual Report, p. 30, 2002
[4] T. Maier, M. Benz, G. Häusler, E. Nkenke, F. W. Neukam, F. Vogt: Automatische Grobregistrierung intraoperativ akquirierter 3D-Daten von Gesichtsoberflächen anhand ihrer Gauß'schen Abbilder. Procs BVM'03, pp. 11-15, 2003


P6 Poster Contributions

Iso-C 3D Accuracy Control and Usefulness in Calcaneus Osteosynthesis

D. Kendoff 1), M. Kfuri Jr.2), J. Geerling1), M. Richter1), T. Hüfner1) and C. Krettek1)

1) Unfallchirurgische Klinik, Medizinische Hochschule Hannover (MHH), Carl-Neuberg-Str. 1, D-30625 Hannover, Germany 2) Alexander-von-Humboldt-Stiftung, Ribeirao Preto Medical School, Sao Paulo, Brazil

Objectives: The calcaneus is a tarsal bone often injured by high-energy trauma [1]. Two-dimensional imaging methods are usually too limited to show details of its complex shape; therefore computed tomography has become the standard method for decision-making and postoperative analysis [2]. The problem remains in the operating theatre, where the judgment must be made acutely and is usually assisted by two-dimensional imaging techniques. Intraoperative tomography is conceivable but costly. The introduction of the mobile Iso-C 3D could be a reasonable solution for achieving three-dimensional operative control [3]. We designed an experimental study to evaluate the precision of the Iso-C 3D in detecting articular steps and implant misplacements in the calcaneus.

Material and Methods: Cadaveric feet were prepared with calcaneus osteotomies addressing the articular surface of the lateral joint fragment, according to the Sanders classification [4]. The fracture was then fixed with a Sanders plate (Synthes®). We created four different simulations. In Group 1 the fracture was anatomically reduced with normal screw placement. In Group 2 we simulated articular steps of 0.4 mm, 1.46 mm, 1.98 mm and 3.5 mm; the screws, however, were normally placed. We measured the steps with a caliper (CD-15CP, Mitutoyo Inc., Aurora, Illinois, USA) with an accuracy of 1 mm according to the manufacturer. Anatomical articular reduction combined with screw misplacement constituted Group 3. Finally, in Group 4, articular steps of 0.5 mm and 1.5 mm were simulated with associated screw misplacement. All groups underwent traditional radiographic control including AP view, lateral view and Broden 10°, 20° and 30° views, as well as tomographic control. The Iso-C control was done with the slow and quick protocols and an arc of movement of 180°. In order to check whether the foot decubitus could interfere with image quality, the Iso-C slow protocol was performed with the foot in lateral decubitus, at 30° external rotation and at 60° external rotation. A point in the middle of the articular fragment was taken as a reference for comparing the real step with the virtual step shown on the monitor. The known length of the screws was compared with the virtual length shown on the monitor.

Results: The Iso-C enabled the diagnosis of screw misplacement and articular step in all cases. The virtual and real distance measurements were comparable. The CT data did not reveal any detail missed in the Iso-C analysis.

Conclusions: The Iso-C is a valuable tool enabling precise three-dimensional analysis of calcaneus fracture reduction and fixation. Clinical studies should be conducted to complement these data.

References:
[1] Zwipp H, Tscherne H, Thermann H, et al. Osteosynthesis of displaced intra-articular fractures of the calcaneus. Results in 123 cases. Clin Orthop 290: 76-86, 1993.
[2] Eastwood DM, Phipp L. Intra-articular fractures of the calcaneum: why such controversy? Injury 28(4): 247-259, 1997.
[3] Kotsianos D, Rock C, Euler E, Wirth S, Linsenmaier U, Brandl R, Mutschler W, Pfeifer KJ. 3-D imaging with a mobile surgical image enhancement equipment (ISO-C-3D). Initial examples of fracture diagnosis of peripheral joints in comparison with spiral CT and conventional radiography. Unfallchirurg 104(9): 834-8, 2001.
[4] Sanders R, Fortin P, DiPasquale T, et al. Operative treatment in 120 displaced intra-articular calcaneal fractures. Results using a prognostic computed tomography scan classification. Clin Orthop 290: 87-95, 1993.


Poster Contributions P7

Calibration of a Stereo See-Through Head-Mounted Display

S. Ghanai1), T. Salb2), G. Eggers1), R. Dillmann2), J. Mühling1), R. Marmulla1) and S. Haßfeld1)

1) MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany 2) Institute for Process Control and Robotics (IPR), University of Karlsruhe (TH), Building 40.28, Engler-Bunte-Ring 8, D-76128 Karlsruhe, Germany

Abstract: This paper presents developments and pre-clinical validation results of a recent approach to augmented reality (AR) in craniofacial surgery. The system uses a commercial Sony Glasstron display that is tracked by a standard NDI Polaris navigation system. The software system INPRES (intra-operative presentation of surgical planning and simulation results) is used to control the navigation system and to project the AR overlay into the glasses. First evaluations of the calibration process in the laboratory show promising results; a further study in the operating room (OR) revealed opportunities for improvement.

Introduction: Craniofacial surgery is a highly complex medical discipline with various risks for the patient and the operating surgeon. Preoperative planning of the intervention demands a high degree of spatial sense from the surgeon. The goal of the INPRES system is to support this spatial cognition by projecting the patient's data onto the glasses worn by the surgeon. A realistic overlay of the CT model on the patient requires a very precise calibration. In order to achieve this without using a separate camera, various parameters have to be determined and entered into the system.

Fuhrmann et al. [1] introduced the "fast calibration" method, which was adopted for the INPRES system. It displays four virtual crosses for each eye, which have to be matched one by one with a tracked handheld reference cross. Once the size and orientation of both the real and the virtual cross are identical, a button is pressed to record the position of the reference cross.

Methods: Through the INPRES system, the surgeon can visualise the patient's data. Risk areas marked during the planning phase are accentuated during the operation, helping to minimise the risks of the intervention.

Applying the idea of Fuhrmann et al., the calibration process uses eight virtual crosses for each eye to collect the needed parameters. The eight crosses are divided into two groups of four, which are placed on two planes of the viewing pyramid. The real cross is placed on a tracked reference object. A button on a pointer device is used to mark each position at which the real and virtual crosses coincide. The needed parameters are calculated by trigonometric conversion of the resulting point set, and a least-squares method is used to refine the marked positions.
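The abstract does not give the exact parameterisation of the least-squares step. As a generic illustration only, the sketch below fits an affine map from the eight calibration points to the recorded reference-cross positions; the synthetic data and all names are assumptions, not the authors' method.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map X (4x3) such that [src | 1] @ X ~ dst."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous coordinates
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X

def apply_affine(X, pts):
    """Apply the fitted affine map to an (N, 3) array of points."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ X

# Eight calibration points (two planes of four, as in the text) and their
# observed counterparts, here shifted by a known offset for illustration.
src = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
dst = src + np.array([1.0, 2.0, 3.0])
X = fit_affine(src, dst)
```

With noisy real measurements, the least-squares fit averages out individual marking errors, which is the role the abstract assigns to this step.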

Results: To evaluate the system, engineers and surgeons were asked to rate its complexity, functionality and quality of superposition. The laboratory results made it clear that the testers fell into two groups according to their experience with AR systems. Testers with AR experience had a translatory error of about 1.94 mm and no rotational error; the inexperienced group had a translatory error of 5.88 mm and a rotational error of 2.02 mm.

The first test in the operating room suggested minor improvements, e.g. in terms of better wearability.

[1] A. Fuhrmann, D. Schmalstieg and W. Purgathofer: Fast calibration for augmented reality. Proceedings of ACM Virtual Reality Software & Technology '99 (VRST'99), pp. 166-167, London, December 20-22, 1999.


P8 Poster Contributions

Sensor-Based Intraoperative Navigation for the Robot-Assisted MIC

J. Stallkamp

Fraunhofer Institut für Produktionstechnik und Automatisierung IPA, Nobelstrasse 12, D-70569 Stuttgart, Germany

Introduction: Today, manipulators in minimally invasive surgery (MIC) are mostly used to hold endoscopes in the operating field or to control the effector's movement according to the master-slave principle. Navigation and control of these systems by the surgeon is based on the interpretation of the video-endoscope image. High-precision surgery with automated movement control, precise measurements of the operating field or an undistorted view is therefore impossible. Alternatively, US, CT or MRT cannot provide current and/or accurate data, especially for invasive procedures in soft tissue. This paper describes a new system based on a 3D sensor, which makes precise intraoperative/local planning and robot-assisted treatment of single steps possible.

Materials and Methods: A new concept for a miniaturised 3D laser-triangulation sensor has been developed that can be integrated at the tip of an endoscope. During the operation, the sensor measures, if necessary continuously, the spatial coordinates of single points on the surfaces visible in the video image. The sensor data are exchanged directly with a navigation system, which analyses the data and displays a distortion-free image of the surgical environment. The graphical user interface (GUI) guides the surgeon step by step through the required planning tasks. During the operation, the surgeon teaches the robot by drawing the path or marking the points in the video image separately for each task. Because the pixels of the video image are correlated with the coordinates of the surface points, the planning data in the video image can be transformed into paths for the robot control.
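The pixel-to-path transformation described above can be sketched as a lookup from drawn image pixels into a per-pixel 3D coordinate map. This is a minimal illustration under assumed data structures (the names, the NaN convention for gaps and the toy map are not from the paper):

```python
import numpy as np

def robot_path_from_drawing(drawn_pixels, coord_map):
    """Turn a path drawn in the video image into 3D robot waypoints using
    the per-pixel surface coordinates measured by the triangulation
    sensor. coord_map[v, u] holds (x, y, z) in mm; NaN entries mark gaps
    caused by undercuts and are skipped."""
    pts = np.array([coord_map[v, u] for (u, v) in drawn_pixels], float)
    return pts[~np.isnan(pts).any(axis=1)]  # drop waypoints that fall in gaps

# Toy 2x2-pixel coordinate map with one gap at pixel (u=1, v=0).
coord_map = np.array([[[0.0, 0.0, 10.0], [np.nan, np.nan, np.nan]],
                      [[0.0, 1.0, 10.5], [1.0, 1.0, 11.0]]])
path = robot_path_from_drawing([(0, 0), (1, 0), (1, 1)], coord_map)
```

Marking the gaps explicitly, rather than interpolating across them, matches the abstract's point that undercut regions are flagged for the surgeon instead of being silently filled in.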

Results: A prototype of the endoscope has been developed for first experiments. The results show that an accuracy of better than 0.5 mm and a resolution of <0.7 mm is possible. Accuracy and resolution are nearly independent of the sensor size, the measurement distance and the various tissue surfaces. Measuring errors can be detected and compensated through the correlation of neighbouring points. Hidden parts or undercuts caused by highly complex surface geometries produce gaps in the reconstructed contour lines, which are clearly marked for the surgeon in the display image. The behaviour of the system in the surgical anatomical environment has been investigated and adaptations to the specific conditions have been realised. The GUI makes the control and programming of the system very intuitive for future use in the operating room. From the technical point of view, the main challenge is to increase the measurement cycle rate, currently more than 2 minutes per frame, which is too slow for clinical use. For the next prototype the rate can be increased by using high-speed frame grabbers and cameras together with oscillating micro-mirror arrays.

Conclusion: The concept of sensors for combined online and local data acquisition is a new approach for an optimised utilisation of robots or manipulators in minimally invasive surgery. The system now has to be discussed with clinical users and tested in realistic scenarios.


Poster Contributions P9

SURGICOBOT: Surgical Gesture Assistance COBOT for Maxillo-Facial Interventions

E. Bonneau1), F. Taha2), P. Gravez1) and S. Lamy1)

1) CEA-List, Service Robotique et Systèmes Interactifs, Centre de Fontenay-aux-Roses, BP 6 92265 Fontenay-aux-Roses

Cedex, France 2) CHU Amiens, Service de Chirurgie Maxillo-Faciale, CHU Nord, 80054 Amiens, France

This paper describes a COBOT (collaborative robot) demonstrator developed for applications in maxillo-facial surgery. Its basic purpose is to help surgeons avoid harming sensitive anatomical structures that are only visible in pre-operative imagery. The kernel of the system is a haptic robotic arm featuring both excellent transparency and the ability to generate forces in the hand of its operator. A standard surgical instrument, a maxillo-facial drill in our case, is fixed to the extremity of the haptic arm but is held by the surgeon in the same manner as in a manual surgery configuration. The other essential components of the system are an advanced haptic controller that implements the desired behaviours of the haptic arm, and a virtual-reality engine able to compute force constraints based on a simulation of the interactions between the surgical instrument and the modelled patient anatomy. The system must also embody a calibration function to match pre-operative models with the per-operative real anatomical features.

Using the COBOT, the surgeon can freely handle his surgical instrument except in the volumes defined beforehand as protected anatomical zones. The COBOT restrains the surgeon from penetrating these zones and thus assists the safe performance of delicate surgical operations by actively protecting anatomical structures such as the spinal cord in the rachis or the dental nerve in the jaw. This concept combines several technologies, mainly force-feedback robotics, virtual reality and the exploitation of 3D models constructed from medical imagery of the patient. It also possesses a large development potential: by merging repulsive and attractive guidance forces and by trimming their values, a COBOT may become a multifunctional tool able to assist surgeons in a wide range of applications.
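The abstract does not specify the force law implemented by the haptic controller. One common virtual-fixture scheme is a spring-like repulsion near the protected volume, sketched here under the simplifying assumption of a spherical zone (all parameters and names illustrative):

```python
import numpy as np

def repulsive_force(tool_pos, zone_center, zone_radius, margin=5.0, k=2.0):
    """Spring-like repulsive force (N) that pushes the tool away from a
    spherical protected zone; zero beyond zone_radius + margin.
    Lengths in mm, stiffness k in N/mm."""
    d = np.asarray(tool_pos, float) - np.asarray(zone_center, float)
    dist = np.linalg.norm(d)
    if dist == 0.0 or dist >= zone_radius + margin:
        return np.zeros(3)
    penetration = zone_radius + margin - dist
    return k * penetration * d / dist  # directed away from the zone centre

# Tool 14 mm from the centre of a 10 mm protected zone with a 5 mm margin.
f = repulsive_force([14.0, 0.0, 0.0], [0.0, 0.0, 0.0], 10.0)
```

Attractive guidance forces, mentioned at the end of the paragraph, would simply flip the sign of the same spring term toward a target rather than away from a hazard.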
In order to investigate this potential, the CEA and CHU Amiens have turned the concept into a full-scale demonstrator used to perform an osteotomy on a resin jaw. Surgical and robotics experts were closely involved in these experiments, which aimed at assessing the efficiency of the COBOT in actively protecting the dental nerve of the jaw. The paper describes in some detail the main steps of the demonstrator evaluation: the selection of a relevant surgical intervention, the adaptation of a commercially available Virtuose™ haptic arm initially designed to interact with virtual-reality applications, its interfacing with a virtual-reality software engine, the processing of the medical imaging data and the various test phases. It also deals with fundamental difficulties raised during this experiment, namely the specification in the virtual 3D model of the anatomical zone to be protected, the mapping between the simulated and the real surgical gestures, the real-time repositioning of the digital model to match the movements of the patient, and the combined tuning of the force, visual and audio feedback that provides the best assistance to the surgeon.


P10 Poster Contributions

Micromachined Silicon 2-Axis Force Sensor for Teleoperated Surgery

F. Van Meer and D. Esteve

LAAS/CNRS, 7, avenue du Colonel Roche, F-31077 Toulouse, France

This paper reports a double-wafer micromachined capacitive in-plane silicon force sensor with high sensitivity, designed for surgical applications using telemanipulation, 2-axis macro-force detection and bulk micromachining technologies. Due to the softness of live tissue, detecting the forces applied to it is difficult. Force sensors need to be positioned close to the end effectors because the length of endoscopic surgical instruments often perturbs the measured strains. Force-feedback systems are also desirable to increase the accuracy and dexterity of surgical robots for minimally invasive surgery.

The force sensors commonly available today cannot be used in surgical applications because of several limitations: a size incompatible with the instrument, low sensitivity to small strains, high cost, and only 1-axis detection in the miniature models.

This paper presents a new type of 2-dimensional mechanical sensor designed for direct implementation inside the jaws of surgical instruments. The architecture of this sensor is close to that of micro-accelerometers using capacitive detection and silicon springs, and the design can easily be changed for 3-dimensional force and torque detection. Our first prototype was meant to be mounted in the jaws of a 5 mm forceps, with the possibility of measuring forces in two directions. Three constraints had to be fulfilled: the force-detection range (0.1 N to 4 N for soft-tissue manipulation), the capacitive detection sensitivity (<20 fF) and the sensor size (length: 10 mm, width: 3.5 mm, height: 1 mm). We chose a double-wafer assembly using Flip Chip (FC) bonding. On the first layer of the sensor, four main springs are attached to a large platform, which is connected by bonding to the second layer according to FC. Applied forces are then estimated with several comb-shaped electrodes located on the first and second layers. A strain applied to the second layer yields a deformation of the main springs; consequently, the electrodes of the second layer move, inducing a change in the capacitances corresponding to the orthogonal strain directions between the two layers. Since the spring rigidity is well known, it is possible to estimate the force vector in two orthogonal directions in the wafer plane. In our first developments the sensors measure forces from 0 to 15 N with a sensitivity of 10 g. The full paper will describe the design process of the silicon sensor, its calibration system, the first experimental measurements, and its integration in an articulated instrument for the French surgical robotics project EndoxiroB.
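The capacitance-to-force conversion described above can be illustrated with a deliberately simplified parallel-plate model (fringing fields and the real comb geometry are ignored; all numbers are illustrative, not from the paper):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    """Parallel-plate approximation for one electrode pair (fringing ignored)."""
    return EPS0 * area_m2 / gap_m

def force_from_capacitance(c_measured, area_m2, gap0_m, k_spring):
    """Infer the spring deflection from the measured capacitance and
    convert it to a force with Hooke's law, F = k * x, using the known
    spring rigidity k (N/m)."""
    gap = EPS0 * area_m2 / c_measured  # invert the plate model
    x = gap0_m - gap                   # deflection from the rest gap
    return k_spring * x

# Illustrative numbers: 1 mm^2 electrode area, 2 um rest gap, 1000 N/m
# spring; a 0.5 um deflection corresponds to 0.5 mN.
c = plate_capacitance(1e-6, 1.5e-6)
force = force_from_capacitance(c, 1e-6, 2e-6, 1000.0)
```

The same inversion done per comb axis is what yields the two orthogonal in-plane force components the sensor is designed to report.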


Poster Contributions P11

MEDARPA – Implantation of Brachytherapy-Catheters Using Augmented Reality

S. Röddiger1), D. Baltas1), G. Sakas2), M. Schnaider3), S. Wesarg3), P. Zogal2), B. Schwald3),

H. Seibert3), R. Kurek1), T. Martin1) and N. Zamboglou1)

1) Klinikum Offenbach, Department of Radiation Oncology, Starkenburgring 66, D-63069 Offenbach, Germany 2) MedCom Gesellschaft für medizinische Bildverarbeitung mbH, Rundeturmstraße 12, D-64283 Darmstadt, Germany 3) Fraunhofer ZGDV, Visual Computing, Fraunhoferstr. 5, D-64283 Darmstadt, Germany

Purpose: To evaluate a prototype consisting of a flexible semi-transparent display, a computer and two tracking systems (infrared and electromagnetic). This prototype was designed to facilitate the navigation of brachytherapy catheters during implantation.

Material & Methods: A cubic phantom was filled with stearin gel and objects resembling a tumour and organs at risk. 10 CT markers were fixed on the surface and a CT scan (2 mm slice distance) was performed. After transfer of the CT data to the computer, the phantom and the catheter handle were registered using the electromagnetic tracking system; physician and display were registered using the infrared tracking system. The display was placed above the phantom and catheters were implanted guided by the 3D model of the phantom shown on the display, overlaid with the image of the catheter and handle. The virtual positions of the catheters were recorded and the catheters were left in place. After another CT scan, the virtual and real positions of the catheter tips, as well as equidistant points along the catheters, were reconstructed and compared.

Results: The system is easy to handle and requires 6 minutes (range 3.5-10.0) for set-up. The median deviation was 5.5 mm (2.7-8.7) for the catheter tip and 10.1 mm (9.6-11.3) for the equidistant points.

Conclusion: The MEDARPA prototype enables the physician to implant brachytherapy catheters guided by augmented reality, so that the puncture of organs at risk can be avoided. The accuracy of the system will be further improved and first tests with patients will be presented.


P12 Poster Contributions

Robotic and Laser Aided Navigation for Dental Implants

T. M. Buzug1), U. Hartmann1), D. Holz1), G. Schmitz1), P. Hering2), J. Bongartz1), M. Ivanenko2), G. Wahl3) and Y. Pohl3)

1) Department of Mathematics and Technology, RheinAhrCampus Remagen, Südallee 2, D-53424 Remagen, Germany 2) Holography and Laser Technology, Stiftung caesar, Ludwig-Erhard-Allee 2, D-53175 Bonn, Germany 3) Poliklinik für Chirurgische Zahn-, Mund- und Kieferheilkunde, University Dental Clinic Bonn, Welschnonnenstr. 17,

53111 Bonn, Germany

This paper introduces a research network between RheinAhrCampus Remagen, Caesar Bonn and the University Clinic Bonn that is currently being built up in the field of robot-assisted surgical laser interventions. The main project focus is an accuracy study for laser surgery in dental implantology. We will outline the main ideas and some preliminary results of the project "Robotic and Laser Aided Navigation for Dental Implants" (ROLANDI). The project is embedded in the Center of Expertise for Medical Imaging, Computing and Robotics (CeMicro), a research centre for medical image-acquisition technology, medical signal processing and computation, and the improvement of clinical procedures in image-guided interventions with robotic assistance.

The application of computer-aided planning, navigation and robotics in dental surgery provides significant advantages – compared to conventional practice – due to today’s sophisticated techniques of patient-data visualization in combination with the flexibility and precision of novel robots. However, an implementation of navigation and robot assistance in a field where only local anesthesia is applied is a challenging task because of unavoidable patient movements during the intervention. In this paper we propose the combination of image-guided navigation, robotic assistance and laser surgery to improve the conventional surgical procedure.

A critical point for the success of the surgical intervention is the registration of the planned operation trajectory with the real trajectory in the OR. In the course of the project this point will be investigated thoroughly. As a preliminary study we have focussed on the registration error in point-based registration, which is used in a large number of systems and is therefore chosen as the standard in our project. Different matching strategies, such as surface-based or palpation-based registration, will be evaluated with respect to the point-based case.
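Point-based rigid registration is a well-documented procedure (the SVD method of Arun et al. is a common choice). A compact sketch of the alignment and of the fiducial registration error that such an accuracy study measures, on illustrative data:

```python
import numpy as np

def point_register(src, dst):
    """Rigid point-based registration (SVD method): returns R, t
    minimising the sum of squared distances ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    res = src @ R.T + t - dst
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))

# Five synthetic fiducials, rotated 90 degrees about z and translated.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = point_register(src, dst)
```

With real fiducials the FRE is nonzero, and relating it to the clinically relevant target registration error is exactly the kind of error-propagation question the following paragraph addresses.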

To evaluate the overall accuracy of the entire registration chain, we have to estimate the localisation error in the underlying image modality, the localisation error of the fiducial or anatomical markers in the OR, and the positioning error of the surgical tool. In our case this means calculating the error propagation from the CT via the optical navigator to the robotic laser holder. We start the investigation with an anatomical phantom equipped with fiducial landmarks. To obtain ground truth, a holographic image of the phantom is taken, which allows marker localisation with very high precision.

A second work package in the project deals with the laser intervention. Laser surgery is a gentle, tissue-preserving intervention, which is important whenever holes have to be prepared in small ridge-like bone structures. However, our main goal is to reduce the effort of patient fixation. Laser "drilling" is a contactless process, and therefore, in contrast to conventional drilling, the jaw is not subjected to forces. As a consequence, the online tracking and re-registration of the patient only has to cope with small residual movements, even in the case of non-invasive patient fixation. On the other hand, a major drawback of surgical laser drilling is the loss of tactile feedback, the disadvantageous consequence of applying no forces: no direct information about the drilling depth is available. This is a critical issue, and we will discuss how it is treated in the project, because vital neighbouring structures such as the canalis mandibulae with the alveolar nerve must not be damaged during the intervention.


Poster Contributions P13

Functional Mapping of the Cortex – Surface Based Visualization of Functional MRI

R. Krishnan, A. Raabe, M. Zimmermann and V. Seifert

Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany

Objective: The interindividual variability and complexity of cortical geography often impede studies of functional specialization. Displaying functional data in stereotactic space may lead to an underestimation of the distance between activated areas, as two points close to each other in stereotactic space can be far apart along the cortical surface. In this study we discuss compensation strategies involving explicit reconstruction and display of cortical surfaces (such as the 3D native configuration, 2D slices, extensively smoothed surfaces, spherical representations and cortical flat maps).

Methods: Functional MRI studies of 5 patients were performed during standardized paradigms for hand, foot and tongue movements, and EPI T2*-weighted images were acquired. Data analysis was done with the BrainVoyager software (Brain Innovation, Maastricht, NL) on a standard PC. Statistical maps of the time course of activated voxels and corresponding 3D surfaces were rendered.

Results: The functional motor areas were clearly visible in all reconstructions. In native 3D reconstructions, activation within the sulci was not always fully visible, whereas these areas can clearly be seen on the morphed surfaces. Inflated models and flat maps were superior in demonstrating the spatial extent of an activated area. For interindividual comparison, mapping the data onto well-defined geometrical objects is an alternative solution.

Conclusion: In contrast to slice representations, surface-based warping allows functional data to be displayed while respecting surface topology. Surface-based morphing of functional and anatomical data is a new perspective in the visualization and 3D reconstruction of functional MRI data.


P14 Poster Contributions

Automated Marker Detection for Patient Registration in Image Guided Neurosurgery

R. Krishnan, E. Herrmann, R. Wolff, A. Raabe and V. Seifert

Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany

Objective: Fiducial marker registration within preoperative data is often left to the surgeon, who has to identify and tag the center of all markers, which is time consuming and a potential source of error. For this reason an automated procedure was developed. In this study, we investigated the accuracy of this algorithm in detecting markers within navigation data.

Methods: We used the BrainLAB planning software VVplanning 1.3 (BrainLAB, Heimstetten, Germany) to detect a total of 591 applied fiducial markers in 100 consecutive MP-RAGE MRI data sets of patients scheduled for image guided surgery. The software requires the user to adjust parameters for marker size ("accuracy") and grey level ("threshold"), where "threshold" describes the grey level above which the algorithm starts searching for pixel clusters representing markers. This parameter was changed stepwise while keeping the "accuracy" value constant. The marker size ("accuracy") in the y-direction was then changed stepwise on the basis of the threshold value at which all markers were detected correctly.

Results: The time for marker detection varied between 12 and 25 seconds. An optimum "accuracy" value was found at 1.1 mm, with 8 (1.35%) undetected markers and 7 (1.18%) additionally detected structures. The average grey level ("threshold") for correct marker detection was 248.9. For a high "threshold" the rate of missed markers was already low (1.86%). Starting the algorithm at lower grey levels decreased the rate of missed markers further (0.17%), but the rate of additionally detected structures rose to 27.92%.

Conclusion: The automatic marker detection algorithm is a robust, fast and objective instrument for reliable fiducial marker registration. Specificity and sensitivity are high when optimum settings for "threshold" and "accuracy" are used.
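The interplay of a grey-level threshold and a size criterion can be sketched with connected-component labelling. The fragment below is a generic illustration, not BrainLAB's actual implementation; the volume, intensities and size bounds are invented:

```python
import numpy as np
from scipy import ndimage

def detect_markers(volume, threshold, min_size, max_size):
    """Label bright clusters above `threshold`; keep those whose voxel count
    lies in [min_size, max_size] and return their centroids."""
    labels, n = ndimage.label(volume > threshold)
    centroids = []
    for i in range(1, n + 1):
        cluster = labels == i
        if min_size <= int(cluster.sum()) <= max_size:
            centroids.append(ndimage.center_of_mass(cluster))
    return centroids

# Synthetic volume: two marker-sized bright clusters and one oversized artifact.
vol = np.zeros((40, 40, 40))
vol[5:8, 5:8, 5:8] = 300        # 27 voxels -> accepted
vol[20:23, 20:23, 20:23] = 300  # 27 voxels -> accepted
vol[30:40, 30:40, 30:40] = 300  # 1000 voxels -> rejected as too large
markers = detect_markers(vol, threshold=250, min_size=10, max_size=50)
print(len(markers))  # → 2
```

Lowering `threshold` admits more candidate clusters (fewer missed markers, more false positives), which mirrors the trade-off reported above.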


Poster Contributions P15

Navigation Error of Update Navigation System Based on Intraoperative MRI

Y. Muragaki1), T. Maruyama2), H. Iseki1), M. Sugiura1), K. Suzukawa1), K. Nambu1), O. Kubo2), K. Takakura1) and H. Tomokatsu2)

Tokyo Women's Medical University, 8-1 Kawada-cho, Shinjuku-ku, Tokyo 162-8666, Japan
1) Faculty of Advanced Techno-Surgery (FATS), Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine
2) Department of Neurosurgery

A navigation system based on intraoperative MR images (iMRI) is more accurate than a conventional navigation system based on preoperative images, because errors caused by intraoperative brain deformation, such as brain shift, can be eliminated in the new system. We present a study of the accuracy of updated navigation (UN) based on iMRI. The UN system was installed in the navigation system for intraoperative irradiation (PRS navigator). The accuracy was estimated retrospectively in 50 cases by assessing the errors with a least-squares method. The maximum errors were more than 5 mm in three cases whose iMRIs were distorted. Navigational errors were also compared between two types of newly developed fiducial markers in 47 cases (94 measurements). UN with the new markers, in which the markers for iMRI and the markers for registration were separate, showed significantly smaller errors (max 2.36 mm, median 1.16 mm) than UN with the previous markers (max 4.65 mm, median 2.07 mm). In conclusion, the accuracy of UN depends not only on the internal error, but also on image quality, including distortion, and on the fiducial markers.


P16 Poster Contributions

Improvement of Computer- and Robot-Assisted Surgery at the Lateral Skull Base by Sensory Feedback

D. Malthan1), J. Stallkamp1), F. Dammann3), E. Schwaderer3) and M. M. Maassen2)

1) Fraunhofer-Institute for Manufacturing Engineering and Automation IPA, Nobelstraße 12, D-70569 Stuttgart, Germany
2) Department of Otolaryngology – Head & Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Straße 5, D-72076 Tübingen, Germany
3) Department of Diagnostic Radiology, University Hospital of Tübingen, Otfried-Müller-Str. 10, D-72076 Tübingen, Germany

Introduction: Lately, surgeons performing lateral skull base surgery have been paying more attention to the development and advancement of computer-aided surgical instruments. In particular, the implantation of active auditory amplifier implants of the highest medical device class demands a highly reproducible and accurate fixation technique. Here, the interconnection between the amplifier's effector and the receptor tissue, and the fixation of so-called mounting brackets, requires accurate milling of the skull cavities. In order to avoid distortion of the transmitted acoustic signals, the implantation angle has to be defined very carefully. With conventional techniques, an optimal anatomical connection between the hearing aid and the receptor tissue cannot be realized. This results in a disproportionate resection of bone tissue and a consequent loosening of the implant.

Material and Methods: Within the scope of a federally funded joint project, a PC-based computer- and robot-assisted system for the precise and efficient planning and execution of surgical procedures at the lateral skull base has been developed. The robot system is equipped with multimodal sensory feedback consisting of a load cell, a laser range sensor and fluorescence spectroscopy sensors. In combination with the robot kinematics, this realizes a laser scanner that enables the robot to analyze the geometry of the surgical area with a resolution of a few µm. Employing the obtained geometric information, a novel automated registration algorithm based on local landmark identification has been realized.

Results: The integration of sensory feedback provides additional and essential information about the actual surgical conditions in situ. Thus, the robot system can be operated in closed-loop control, in which a continual nominal/actual value comparison of the surgical process takes place. The nominal values are calculated by the developed software system through a real-time simulation of the surgical process; the actual values are obtained directly from the integrated sensor modules. As a result of the sensory feedback and the realized process surveillance, both the precision and the safety of robot-assisted surgery have been improved. It has been shown that the geometric information of the surgical area and the forces exerted by the robot system on the skull base in particular are crucial factors for an optimized surgery. The fluorescence sensor allows a differentiation between tissue types, further improving the safety of the setup. Additionally, the algorithm for automatic surface-based registration has proven its functionality under laboratory conditions with an overall precision of 0.5 mm.
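The nominal/actual comparison in such a closed loop can be sketched in a few lines. The force values, tolerance band and gain below are invented for illustration; the real system derives its nominal values from the surgical-process simulation:

```python
def closed_loop_step(nominal, actual, tolerance, gain=0.5):
    """One control cycle: compare the nominal (simulated) value against the
    actual (sensed) value; return a correction, or None to halt the robot."""
    error = nominal - actual
    if abs(error) > tolerance:
        return None  # deviation outside the safety band -> stop
    return gain * error

# Simulated milling: the measured force drifts away from the planned profile.
nominal_forces = [2.0, 2.0, 2.0, 2.0]
actual_forces = [2.1, 1.9, 2.4, 3.2]  # last sample violates the 1.0 N band

corrections = []
for nom, act in zip(nominal_forces, actual_forces):
    c = closed_loop_step(nom, act, tolerance=1.0)
    if c is None:
        print("safety stop")
        break
    corrections.append(c)
print(corrections)
```

The essential safety property is that the loop refuses to continue once the sensed process deviates too far from the simulation, rather than silently correcting.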

Conclusion: Laboratory testing demonstrated that adding sensory feedback to a navigation and control system can improve robot-assisted skull base surgery. Both the reliability and the efficiency of the performed surgical process steps were augmented. In addition, the acquired data can be used to implement exhaustive intraoperative procedural documentation as needed for quality assurance and quality management.

Acknowledgement: This study was made possible by a grant awarded by the German Research Foundation (DFG No. MA 1458/2).


Poster Contributions P17

An Interactive Planning and Simulation Tool for Maxillo-Facial Surgery

G. Berti1), J. Fingberg1), T. Hierl2) and J. G. Schmidt1)

1) C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10, D-53757 Sankt Augustin, Germany
2) Klinik und Poliklinik für Mund-, Kiefer- und Plastische Gesichtschirurgie, Universitätsklinikum Leipzig, Nürnberger Str. 57, D-04103 Leipzig, Germany

Severe malformations of the midface such as maxillary retrognathia or hypoplasia can be treated by distraction osteogenesis. During an operation the appropriate bony part of the midface is separated from the rest of the skull (osteotomy) and slowly moved into the ideal position by means of a distraction device. In this way even large displacements of over 20 mm can be treated effectively.

A critical point in this procedure is the osteotomy design and the resulting outcome with respect to aesthetics. In current clinical practice, planning is based on CT scans and the surgeon's experience. Our tool chain allows the surgeon to specify arbitrary cuts of the facial bones and to simulate the displacements of bones and soft tissue. It thus makes it possible to predict and compare the outcome of different surgical treatments in silico.

An important component of the planning chain is a tool for virtual bone cutting. For this purpose, a CT image of the patient's head is segmented and a surface mesh of the bone is generated. Onto this surface mesh the surgeon can interactively draw closed curves corresponding to bone cuts. Moreover, the distraction of individual bone components can be specified. The geometric information describing the cuts is handed to an image manipulation tool, which transforms it into a 3D cut volume of user-controllable thickness and modifies the image data accordingly. This image and the distraction specification are further processed by a volume mesh generator, producing a model suitable for a finite element (FEM) simulation of the distraction process.

A number of visual aids for the user are implemented or currently under development: coloring strategies visualize the connectivity of components and detect bridges resulting from insufficient cuts; mapping of a head template allows predefined boundary conditions to be reused; clipping and adaptable surface opacity help to deal with complicated geometries. Finally, interpolating the simulation results back onto the original image data allows state-of-the-art volume rendering to be used for visualizing the predicted outcome.
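The bridge-detection idea above amounts to connected-component labelling: a cut has fully severed a fragment only if the bone mask splits into more components once the cut voxels are removed. A minimal 2D sketch on synthetic masks (not the actual OpenDX modules):

```python
import numpy as np
from scipy import ndimage

def cut_separates(bone, cut_mask):
    """True if removing the cut voxels splits the bone mask into more
    connected components than before (i.e. no bridge remains)."""
    _, before = ndimage.label(bone)
    _, after = ndimage.label(bone & ~cut_mask)
    return after > before

# Toy 2D bone: a solid bar, with a complete cut vs. one leaving a bridge.
bone = np.ones((5, 11), dtype=bool)
full_cut = np.zeros_like(bone)
full_cut[:, 5] = True        # severs the bar completely
partial_cut = np.zeros_like(bone)
partial_cut[1:, 5] = True    # top row left intact -> bridge
print(cut_separates(bone, full_cut))     # True
print(cut_separates(bone, partial_cut))  # False
```

Coloring each labelled component differently, as described above, makes such remaining bridges immediately visible to the surgeon.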

The bone cutting tool is built on top of OpenDX, for which a number of additional modules have been developed. OpenDX is a general and powerful data visualization engine featuring a visual programming language that allows easy integration of additional functionality; it is also used to visualize the simulation results. These developments are carried out in the context of the GEMSS project (Grid Enabled Medical Simulation Services, http://www.gemss.de). Compute-intensive tasks like the FEM simulation are made available as Grid services and can be run remotely on HPC platforms in a transparent way. Thus, advanced simulation is brought to the desktop of the medical practitioner.


P18 Poster Contributions

Robotized Distraction Device for Soft Tissue Monitoring in Knee Replacement Surgery

C. Marmignon, A. Leimnei and P. Cinquin

Laboratoire TIMC, Equipe GMCAO, Institut d'Ingénierie de l'Information de Santé, Faculté de Médecine, F-38706 La Tronche Cedex, France

Navigation devices have proven their efficiency for correct alignment of the lower limb in Total Knee Arthroplasty (TKA) procedures, but the question of ligament balance remains a key issue. Today the surgeon uses a tensor to distract the bone pieces, which tightens the ligament system, and measures the distances between the bone extremities (Insall, Freeman, Cores). These systems are used after the tibial cut, and their shapes require the patella to be everted. As those tools seem insufficient, two new tendencies have appeared. The tensor of Muratsu is used with the patella in place, which reproduces the normal condition of the knee and improves the measurements, especially in flexion. Measuring the force (Balansys, Centerpulse) adds dynamic control to the ligament balancing. Our system is used with the patella in place, the forces are measured, and the ligament balancing is monitored on a navigation station. The tension is applied in the frontal plane by two forces, an internal and an external one, both parallel to the mechanical axis of the tibia.

The prototype has a height range of 6 to 21 millimeters for a force range of 100 N. The Robotized Distraction Device (RDD) has a base plate carrying two independent and parallel superior trays. A jack maintains the parallelism. In place of a worm and crank, a sheathed cable applies a force between the central axes. The tension of the cable is exerted by a rack-and-pinion drive. The motor on the pinion is position-controlled by a PID controller. An analog I/O card connected to a PCI port of a computer supplies the input voltage. On the RDD, force and height sensors measure the distraction force and the height of each superior plate. The device is connected to a computer and presently interfaced with the generic TKA application of the SURGETICS station of the PRAXIM company. The clinical protocol consists of the acquisition of the articular centers, of the articular surfaces (with matching to a statistical model) and of the ligament insertions, then the tibial cut, the positioning of the RDD, and finally the distraction step.
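The position control of the pinion motor can be illustrated with a textbook discrete PID loop. The gains, time step and the toy integrator plant below are arbitrary illustrative choices, not the RDD's actual parameters:

```python
class PID:
    """Minimal discrete PID controller (parallel form)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy integrator plant (standing in for the motor/pinion position,
# arbitrary units) toward a setpoint of 10.0.
pid = PID(kp=2.0, ki=0.05, kd=0.1, dt=0.01)
position = 0.0
for _ in range(2000):
    voltage = pid.update(setpoint=10.0, measured=position)
    position += voltage * 0.01  # simplistic plant response
print(f"final position: {position:.2f}")
```

In the real device the controller output would be written to the analog I/O card as the motor voltage, with the height sensors closing the loop.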

A surgeon performed a feasibility validation on two cadavers. The approach was validated, but a greater force range would have been preferable. A more powerful prototype has been designed, leading to a new hydraulic prototype. In conclusion, the RDD provides the surgeon with the possibility to simulate the consequences of his choices concerning the prosthesis and its positioning, and to quantify the effect of a possible ligament release. The RDD thus helps the surgeon validate his way of implanting the prosthesis. This in vivo information is useful for a better understanding of a complex joint like the knee.

We would like to thank R. Julliard, P. Merloz, S. Plasweski and D. Saragaglia for valuable advice and surgical expertise. This work is developed in partnership with PRAXIM and ALPES-INSTRUMENT.


Poster Contributions P19

Accuracy Analysis of Vessel Segmentation for a LITT Dosimetry Planning System

J. Drexl, V. Knappe, K.S. Lehmann, B. Frericks, Hoen-oh Shin and H.-O. Peitgen

Center for Medical Diagnostic Systems and Visualization (MeVis) GmbH, Universitätsallee 29, D-28359 Bremen, Germany

Calculating laser-induced thermal tissue reactions [1] is a complex task, which requires combining different physical processes while considering variable parameters. A dosimetry model (LITCIT for Windows) was developed for the thermal laser treatment of biological tissue and applied to laser-induced thermotherapy (LITT) of organ tumors such as liver metastases [2]. However, for a clinically relevant simulation the vascular situation must be taken into consideration. This point is very important because it is known that the blood flow in larger vessels acts as a cooling source, and consequently the risk of insufficiently treated regions cannot be neglected. These regions are potential sources of tumor recurrence and therefore must be prevented in any case. To improve this situation, we integrated fast multislice CT imaging, image segmentation and radiation planning. The segmentation part is handled by our software tool HepaVision, which allows for the segmentation of intrahepatic structures such as parenchyma, vessels and tumors.

We have already presented qualitative results of our integrated planning system in [3]. The purpose of this work is to complement the promising results published in [3] with quantitative data on the performance of our system, from in-vivo measurements in a pig animal model as well as from computer simulations of the CT imaging process.

We compared the vessel segmentation obtained by a region-growing algorithm [4] on an in-vivo CT scan (acquired on a Siemens Somatom Sensation 16) to an acrylic resin cast made of the same animal's portal vein, especially considering the vessel radii resulting from the segmentation. The radii were expected to significantly influence subsequent calculations through Poiseuille's law and the r^4 dependency of vascular resistance.
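The r^4 sensitivity is easy to quantify: by the Hagen-Poiseuille law, R = 8*mu*L/(pi*r^4), so a modest radius error from segmentation translates into a large resistance error. A small sketch (the viscosity value is a typical assumption for blood, not a measured one):

```python
import math

def vascular_resistance(radius_mm, length_mm, viscosity_mpas=3.5):
    """Hagen-Poiseuille resistance R = 8*mu*L / (pi * r^4) in SI units."""
    mu = viscosity_mpas * 1e-3  # Pa*s (assumed blood viscosity)
    r = radius_mm * 1e-3        # m
    L = length_mm * 1e-3        # m
    return 8 * mu * L / (math.pi * r ** 4)

# A 10% underestimation of the radius inflates the computed resistance
# by (1/0.9)^4 - 1, i.e. roughly 52%.
r_true = vascular_resistance(2.0, 50.0)
r_biased = vascular_resistance(1.8, 50.0)
print(round(r_biased / r_true, 2))  # → 1.52
```

This is why the accuracy of the segmented vessel radii, rather than mere vessel detection, is the quantity of interest here.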

We augmented our animal model results with computer simulations on digital phantoms [5]. In contrast to animal experiments, this strategy has the advantage of a 100% accurate gold standard for validation and 100% reproducibility. Furthermore, ground truth and segmentation results are a priori aligned, so that no image registration must be performed that might introduce an additional error of unknown magnitude into the evaluation. Imaging parameters can be varied over a broad range of possible conditions, which also makes it possible to assess an algorithm's robustness. We constructed digital phantoms using a database of expert-segmented human intrahepatic portal veins. Using the known attenuation coefficients for liver tissue, blood and contrast agent, and with the help of a commercial CT simulator software (SNARK93 [6]), we rendered the digital phantoms into simulated CT scans. By segmenting these simulated scans with the vessel segmentation algorithm and comparing the results to the digital phantoms as gold standard, we were able to assess the segmentation's performance with respect to precision and robustness under varying imaging conditions, partial volume effects and image noise.
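With a digital phantom as gold standard, segmentation quality can be scored voxel-wise. The abstract does not name a specific metric; the Dice coefficient below is one common choice, shown on synthetic masks:

```python
import numpy as np

def dice(seg, gold):
    """Dice similarity coefficient between two binary masks."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    inter = np.logical_and(seg, gold).sum()
    return 2.0 * inter / (seg.sum() + gold.sum())

# Digital phantom (gold standard) vs. a segmentation missing one slice,
# e.g. due to partial-volume loss at the boundary.
gold = np.zeros((10, 10, 10), dtype=bool)
gold[2:8, 2:8, 2:8] = True
seg = gold.copy()
seg[7] = False
print(round(dice(seg, gold), 3))  # → 0.909
```

Because phantom and simulated scan share one coordinate grid, such overlap scores can be computed directly, without the registration step that an animal experiment would require.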

The second topic of this work is the assessment of the predictive power of our planning system. We studied the ability of the system to predict the coagulated volume of a LITT intervention when taking vessel perfusion into account. After the intervention, the liver was explanted and the coagulated volume was calculated from planimetric measurement of the thermolesion. To study to what extent segmentation errors affect the predicted coagulated volume, we again used our set of simulated scans. We obtained coagulated volumes starting from the digital phantoms and coagulated volumes starting from the segmentation of the simulated scans. We quantified the deviation of the coagulated volume due to segmentation error and were thus able to demonstrate the reliability and robustness of the planning system.



References:
[1] D. Albrecht et al.: Die laserinduzierte Thermotherapie als neues Behandlungskonzept für maligne Lebertumoren – Ergebnisse einer klinischen Studie. Endoskopie heute 10:145-146.
[2] Roggan et al.: Radiation planning for thermal laser treatment. Med. Laser Appl. 16:65-72, 2001.
[3] Lehmann et al.: Fusionierung von 3D-Bildgebung der Leber mit einem Planungs-System für die In-situ-Ablation maligner Lebertumoren. Presented at CURAC 2003.
[4] Selle et al.: Analysis of vasculature for liver surgery planning. IEEE Transactions on Medical Imaging, Vol. 21, No. 11, Nov. 2002.
[5] Bezy-Wendling et al.: Toward a better understanding of texture in vascular CT scan simulated images. IEEE Transactions on Biomedical Engineering, Vol. 48, No. 1, January 2001.
[6] SNARK93: http://www.cs.gc.cuny.edu/~gherman/snark2001.html


Poster Contributions P20

3D-Reconstruction and Visualization of Bone Mineral Density for the Ethmoid Bone

C. Kober1,3), R. Sader2,3) and H.-F. Zeilhofer2,3)

1) Univ. of Appl. Sc. Osnabrück, Albrechtstr. 30, P.O. Box 19 40, D-49009 Osnabrück, Germany
2) University Hospital Basel, Spitalstrasse 21, CH-4031 Basel, Switzerland
3) HFZ, TU Munich, Ismaningerstr. 22, D-81675 Munich, Germany

Introduction: Three-dimensional surface reconstructions based on computed tomography or magnetic resonance imaging are increasingly valued for surgery and treatment planning. However, a reliable analysis of very fine and at the same time very inhomogeneous structures is still a topic of research.

The ethmoid bone is situated at the anterior part of the base of the cranium, between the two orbits, at the roof of the nose, and contributes to each of these cavities. It is exceedingly light and spongy. The two lateral labyrinths consist of a number of thin-walled cellular cavities, the ethmoidal cells.

Methods: The input to our analysis was CT data without diagnostic findings in the ethmoid bone but with overall reduced bone quality; the slice thickness was 2 mm. By a combined application of a Lanczos filter and the original data, the thin bony structures were recovered as dark lines of nearly homogeneous grey value. The anatomical details could then be captured mainly by threshold segmentation. Additionally, by a combination of computer graphics modules as presented in [2], a three-dimensional volumetric visualization of bone mineral density could be generated. For all programming and image processing steps we used the visualization package Amira 3.0, developed at ZIB Berlin [1].
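The filter-then-threshold step can be sketched as follows. A Gaussian low-pass stands in for the Lanczos kernel here, and the synthetic image and all parameters are illustrative only:

```python
import numpy as np
from scipy import ndimage

def enhance_and_segment(image, sigma=2.0, weight=0.6, threshold=0.5):
    """Blend a low-pass filtered copy with the original image (a Gaussian
    stands in for the Lanczos kernel) and apply a global threshold."""
    smoothed = ndimage.gaussian_filter(image, sigma)
    blended = weight * image + (1 - weight) * smoothed
    return blended > threshold

# Synthetic slice: one thin bright line of "bone" in a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.05, (64, 64))
img[32, :] = 1.0
mask = enhance_and_segment(img)
print(int(mask.sum()))  # → 64 (exactly the one-pixel-wide line)
```

The blend preserves the thin structure at nearly homogeneous intensity while the smoothing suppresses noise, so that a simple global threshold becomes sufficient.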

Results: We present a three-dimensional reconstruction of the entire ethmoid bone including the labyrinths, the nasal conchae, and the bony part of the nasal septum. The macroscopic density profile brought out the highly differentiated inner structure of this part of the skull. We focused on the labyrinths and the conchae. An additional analysis revealed the refined inner structure of the nasal septum.

As a further successful application of our approach, we report a reconstruction of the floor of the orbit, which is often hardly visible in standard CT data.

Conclusion and outlook: The presented visualizations give suggestive impressions of the inner structure of the respective part of the organ under consideration, for instance the nasal septum or the conchae. The approach turned out to be applicable to data sets stemming from clinical routine. Due to the preparatory image processing of the input data, the segmentation effort was acceptable.

Therefore, the next step will be the application to pathological cases. Furthermore, we aim to contribute to the validation of the reconstruction of various thin structures. A special concern of the group is the analysis of tumours in this area and the minimally invasive approach to the skull base via the ethmoid bone.

Acknowledgement: The first author wants to thank H.-C. Hege and his team at ZIB Berlin for providing her with an Amira license.

References:
[1] Amira User's Guide and Amira Reference Manual, http://www.amiravis.com.
[2] C. Kober, R. Sader, H.-F. Zeilhofer: Segmentation and visualization of the inner structure of craniofacial hard tissue. In Proc. Comp. Ass. Rad. Surg. (CARS), Vol. 1256, London, 2003, 1257-62.


P21 Poster Contributions

Craniofacial Endosseus Implant Positioning with Image-Guided Surgical Navigation

J. Hoffmann1), D. Troitzsch1), C. Westendorff1), F. Dammann2) and S. Reinert1)

1) Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany
2) Abteilung für Radiologische Diagnostik, Radiologische Universitätsklinik, Universitätsklinikum, Eberhard-Karls-Universität Tübingen, Hoppe-Seyler-Strasse 3, D-72076 Tübingen, Germany

Purpose: Craniofacial implants have been shown to provide excellent stability and retention for auricular prosthetic rehabilitation. The locations of implant placement are critical to achieving optimal prosthetic results. We used image-guided navigation for implant positioning to test its feasibility and practical impact.

Material and Methods: All patients undergoing navigation-assisted craniofacial implant treatment were included. Image-guided surgery was performed using a passive infrared surgical navigation system (VectorVision™, BrainLAB). The preoperative computed tomography (CT) data were obtained with a Somatom Sensation 16 multislice scanner (Siemens). After attachment of the skull reference, patient-to-image registration was performed using a surface laser scanning technique.

Results: A total of 8 implants were placed in the mastoid area and various other craniofacial locations. The implant positions were planned conventionally and updated after image-guided measurements in conjunction with the bone thickness. After registration, axial, coronal and sagittal reconstructions of the pointer tip position in the regions of interest were displayed in real time. The location and positioning of implants in the craniofacial area could thus be markedly improved and controlled.

Conclusion: Navigation-assisted surgical technique can assist in proper positioning of craniofacial implants, which in turn can complement the prosthetic result.


Poster Contributions P22

Image-Guided Navigation for Interstitial Laser Treatment in Vascular Malformations in the Head and Neck

J. Hoffmann1), C. Westendorff1), D. Troitzsch1), U. Ernemann2) and S. Reinert1)

1) Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany
2) Abteilung für Radiologische Diagnostik, Radiologische Universitätsklinik, Universitätsklinikum, Eberhard-Karls-Universität Tübingen, Hoppe-Seyler-Strasse 3, D-72076 Tübingen, Germany

Purpose: Laser-induced interstitial thermal therapy (LITT) is a minimally invasive surgical technique for the treatment of haemangioma and vascular malformations. As the technique is interstitial, the laser fibre is placed remotely from the operator/surgeon and is not visible, as it would be in an open surgical procedure or in percutaneous laser application. Image-guided, navigation-controlled LITT offers a safe, minimally invasive treatment option.

Material and Methods: Navigation-guided LITT based on multimodal image data was performed in patients with giant venous malformations of the maxillofacial area (five procedures). The system consisted of a specially developed Nd:YAG laser fibre introducer set in conjunction with surgical navigation based on fused computed tomography and magnetic resonance data.

Results: Owing to the 3D reconstruction for laser surgical planning and the defined target areas for laser probe navigation, the interstitial laser treatment could be applied precisely. In all cases, control examinations clearly showed a diminished tumour volume, and all patients reported subjective amelioration.

Conclusion: The results suggest that navigation-guided LITT can be performed safely while preserving vital structures and can be effective in the treatment of complex vascular malformations.


P23 Poster Contributions

The Hybrid Approach to Minimally Invasive Craniomaxillofacial Surgery: Videoendoscopic-Assisted Intervention with Image-Guided Navigation

J. Hoffmann1), D. Troitzsch1), F. Dammann2) and S. Reinert1)

1) Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany
2) Abteilung für Radiologische Diagnostik, Radiologische Universitätsklinik, Universitätsklinikum, Eberhard-Karls-Universität Tübingen, Hoppe-Seyler-Strasse 3, D-72076 Tübingen, Germany

Introduction: The application of minimally invasive approaches to complex craniomaxillofacial procedures requires new technologies involving image-data-based surgical navigation. Increased experience with endoscopic-assisted and navigation-guided techniques, as well as the development of specialized instrumentation, has made minimally invasive procedures possible even in craniofacial regions. One such option is the hybrid approach to craniomaxillofacial and plastic surgery, which combines minimally invasive endoscopic techniques with image-guided surgical navigation technology. The purpose of this study was to evaluate the performance of surgical navigation for different treatment indications.

Material and Methods: Thirty-eight patients covering a broad range of treatment indications were scheduled for image-guided surgery using a wireless passive infrared surgical navigation system (VectorVision™, BrainLAB). The software contains various tools for virtual surgical path planning and linear measurement. The preoperative computed tomography (CT) data were obtained before surgery using a latest-generation Somatom Sensation 16 multislice scanner (Siemens). After attaching the skull reference to the patient's head, patient-to-image registration was performed using surface scanning. After registration, axial, coronal and sagittal reconstructions of the pointer tip position were displayed in real time on the video display. The registration accuracy was expressed by a calculated value, the root mean square (RMS) error, while for system validation the intraoperative accuracy was checked visually through identification of anatomical landmarks.

Results: The system has been used for endoscopically assisted surgery, for the insertion of dental and craniofacial implants, for the reduction of complex midfacial trauma, for skull bone tumor removal and for miscellaneous further interventions. Intraoperative navigation and endoscopic-assisted procedures were successful in all patients. All surgical approaches healed uneventfully with barely visible incisions. Important features were preoperative 3D visualization and image-guided, target-fixed surgery with the ability to indicate at any given time the topographic relationship between the instrument and anatomical structures. The overall mean accuracy (RMS) was 1.24 mm (SD: 0.58) when referencing was based on laser registration. The overall time expenditure for using the system was 20 to 30 minutes. Laser surface registration is a recent technique for registering the patient without fiducial markers in a very short time. A short hospital stay and less invasive surgery are possible when patients are carefully selected.

Conclusion: Preliminary outcomes of patients receiving the hybrid approach have been strikingly positive. Image guidance enabled a correct and smooth approach even in patients with complicated and abnormal anatomical structures. Despite a few limitations, we believe that, with an expanding role and rapid change in minimally invasive approaches, many common problems seen in oral and craniomaxillofacial surgical patients can be solved safely and effectively. Coupling surgical navigation with various minimally invasive techniques, such as endoscopic or endoscopically assisted surgery, will open new avenues of treatment for craniomaxillofacial disorders with decreased scarring.

63

Poster Contributions P24

Minimally Invasive Navigation-Assisted Excision of Bone Tumor in the Temporo-Parietal Skull Base

J. Hoffmann1), D. Troitzsch1), C. Westendorff1), F. Dammann2) and S. Reinert1)

1) Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany
2) Abteilung für Radiologische Diagnostik, Radiologische Universitätsklinik, Universitätsklinikum, Eberhard-Karls-Universität Tübingen, Hoppe-Seyler-Strasse 3, D-72076 Tübingen, Germany

Purpose: Skull bone tumours are highly complicated to resect, especially those originating between the external and internal tabula. The primary treatment is surgical excision. Traditional surgical approaches are associated with significant bone deformity. We introduce a navigation-assisted approach to lateral skull bone tumours using image-guided surgery and outline its clinical advantages.

Material and Methods: Our experience includes one patient with a bone tumour. Computed tomography of the skull revealed a bone lesion in the right temporo-parietal region, indicating a destructive structure involving the external and internal bone layers. The patient was scheduled for image-guided surgery using a wireless passive infrared surgical navigation system (VectorVisionTM, BrainLAB). Preoperative computed tomography (CT) data were acquired using a latest-generation Somatom Sensation 16 multislice scanner (Siemens). After attaching the skull reference to the patient's head, patient-to-image registration was performed using laser surface scanning. The registration accuracy was expressed as a calculated root mean square (RMS) value; for system validation, the intraoperative accuracy was visually checked by identification of anatomical landmarks.

Results: The procedure was successful and the tumour was removed in a minimally invasive fashion, with no peri- or postoperative complications. The patient was ready for discharge 2 days after surgery. Postoperative imaging showed no recurrent tumour.

Conclusion: Traditional surgical approaches to skull bone tumours may result in significant cranial deformity and morbidity. Image-guided excision using surgical navigation is a safe and effective minimally invasive surgical treatment.

64

P25 Poster Contributions

A Surgical Mechatronic Assistance System with Haptic Interface

S. Pieck, P. Knappe, I. Gross and J. Wahrburg

University of Siegen, Institute of Automatic Control Engineering, ZESS – Centrum for Sensor-Systems, Paul-Bonatz-Str. 9-11, D-57068 Siegen, Germany

Purpose: We present a versatile navigated mechatronic system to assist in surgical interventions. It is characterized by the integration of an optical navigation system and a mechatronic arm. In contrast to known autonomous solutions, our highly interactive design allows the surgeon to intervene in the system's actions at any time.

The interactive faculties of the mechatronic system are achieved mainly by 1) optical position control of the mechatronic arm and 2) a haptic interface to the mechatronic arm.

Material and Methods: Unique to this system, and resulting from 1), is that the mechatronic arm tracks the patient's position even while the patient moves. To measure the patient's position permanently, the reference frame of the optical navigation system is fixed to the bone of interest. A patient registration process is then performed, as with conventional use of a navigation system. This online tracking capability eliminates the need for rigid patient fixation. As the mechatronic arm is equipped with an additional reference frame, its position is permanently known to the system as well. From these position data, the arm-movement commands needed to keep the guided instruments adjusted to the patient's position in real time can be computed.
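The computation described above amounts to composing rigid transforms: both the patient frame and the arm frame are measured in the coordinate system of the optical tracker, so a planned tool pose given in patient coordinates can be re-expressed in arm coordinates. A minimal sketch follows; the 4x4 homogeneous-transform formulation is our assumption, not taken from the abstract:

```python
import numpy as np

def pose(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def tool_target_in_arm_frame(T_cam_patient, T_cam_arm, T_patient_tool):
    """Re-express a planned tool pose (given in patient coordinates) in the
    arm's frame, using the tracking camera as the common reference."""
    T_arm_patient = np.linalg.inv(T_cam_arm) @ T_cam_patient
    return T_arm_patient @ T_patient_tool
```

Re-evaluating this product each time the tracker reports fresh frames is what lets the arm follow patient motion without rigid fixation.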

As this approach integrates a conventional navigation system, the surgeon can use it in the accustomed way for additional support, if desired.

The haptic interface 2), based on a force-torque sensor, permits the surgeon to move the mechatronic arm from its home position to its working position. No sophisticated collision-avoidance algorithms have to be implemented: the surgeon simply guides the arm by grabbing a grip and pulling it to the desired position. This integrates the mechatronic assistance system seamlessly into the operating procedure, because there is no need for an input device such as a mouse, touch screen or keyboard.
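Hands-on guidance of this kind is commonly realized with an admittance law: the force measured at the grip is mapped to a commanded velocity, so the arm yields to the surgeon's pull. The gain value and the purely translational treatment below are illustrative assumptions, not the authors' actual controller:

```python
def admittance_step(position, force, dt, compliance=0.01):
    """One admittance-control step: map the force measured at the grip (N)
    to a velocity via a compliance gain and integrate over the period dt."""
    return [p + compliance * f * dt for p, f in zip(position, force)]

# Pulling with 10 N along x for one 10 ms control cycle nudges the arm along x.
pos = admittance_step([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], dt=0.01)
```

Calling this in a loop at the controller rate produces the smooth "drag the arm by its handle" behavior described above.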

Before the real-time patient-tracking mode can be activated, the tip of the surgical tool must be moved haptically into its working space. Only within this predefined working space is the mechatronic assistance system allowed to move automatically; the surgeon controls all other movements via the haptic interface.

Results and Conclusion: A prototype system has been built. It combines a lightweight robot (35 kg), type "PA10" from Mitsubishi Heavy Industries, Japan, a "Polaris" digitizing system from NDI Inc., Canada, and a self-designed real-time control computer system.

This prototype has been tested successfully several times at the "Orthopädische Universitätsklinik Frankfurt". During a total hip replacement, the cup prosthesis was implanted with mechatronic assistance for the first time worldwide.

The presented navigated mechatronic assistance system can be considered an intelligent instrument that supports surgeons in achieving more precise and reproducible surgery. It combines the advantages of navigation and robotics by using the digitizing system for registration and the mechatronic arm for precise tool guidance. The fully modular system design allows easy extension to other applications, e.g. spinal surgery.

65

Poster Contributions P26

Fluoroscopy Based Navigated Drilling of Four Osteonecrosis Lesions in One Patient

M. Citak, D. Kendoff, J. Geerling, C. Krettek and T. Hüfner

Unfallchirurgische Klinik, Medizinische Hochschule Hannover, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany

Introduction: Precise drilling of osteonecrotic bone areas is challenging under conventional fluoroscopic control.

Accurate three-dimensional data exist preoperatively but are usually not available during surgery; intraoperatively, only fluoroscopic images are available. During drilling the fluoroscope has to be moved repeatedly, so drilling can be imprecise. Fluoroscopy-based navigation allows precise drilling in up to four images, with high accuracy and reduced radiation exposure time. As an example of fluoroscopy-based navigation, we present navigated drilling of local osteonecrosis near the articular surfaces of all four extremities in one patient during a single operation.

Clinical case: A 27-year-old female patient with a non-Hodgkin lymphoma was treated with cortisone. Subsequently, bilateral femoral and humeral head necrosis was diagnosed; MRI showed FICAT stages 3 and 4. Given the patient's age, an operation with drilling of all four extremities was indicated. Several drillings of 3.5 and 4.0 mm were made at the four osteonecrosis sites with the aid of a fluoroscopy-based navigation system (Medivision, Surgigate, Switzerland). The larger drill diameter was used to minimise drill flexion. The dynamic reference base was positioned close to each defect. Two X-ray images were taken in two planes, trajectories were planned, and the drilling was performed percutaneously. Finally, two X-ray images were taken of the virtual end position of the drill.

Results: Navigated drilling of all four extremities with osteonecrosis within one operation posed no problem. Because of the time-consuming draping and repositioning of the patient and the complex setup of the navigation system, the operation took 180 minutes. The total X-ray exposure time, including all adjustments and the intraoperative navigation of all four extremities, was 48 s. Only two or three images were needed for registration; all other images were used for setup and adjustment. The postoperative images confirmed the correct position of all drillings.

Conclusion: Fluoroscopy-based navigated drilling of osteonecrotic lesions made it possible to increase accuracy and to minimize radiation time. The permanent visualization of the drill axis helps the surgeon avoid unnecessary traumatisation of the bone. By resampling the image contrast it is possible to demarcate the area of interest better than in conventional fluoroscopic images. In the future it might be possible to increase accuracy even further by combining preoperative MRI and fluoroscopy images.

66

P27 Poster Contributions

Geometrical Control Approaches for Minimally Invasive Surgery

M. Michelin, P. Poignet and E. Dombre

LIRMM – UMR 5506 CNRS / Université Montpellier II, 161 rue Ada, F-34392 Montpellier Cedex 5, France

Minimally invasive surgery (MIS) consists in performing operations through small incisions in the body. The surgeon uses dedicated instruments consisting of a long tube (30-40 cm) equipped with a tool at one end and a handle at the other. MIS adds several difficulties to the surgical procedure. Two of them are related to the manipulation of tools: the penetration point reduces the tool orientation capabilities and the amplitude of motion, and the distance between the tool and the handle is longer than in open surgery, which increases the complexity.

After training, the surgeon can overcome these difficulties in most abdominal operations. However, for microsurgery or cardiac surgery, the solution could be to let a robot do the job in a teleoperated mode. The surgeon can then focus on the tool motion rather than on the complex motion of the arm imposed by the constraint. Telerobotics can bring many advantages for the surgeon, such as immersion in the intra-cavity field, more intuitive gestures and motion scaling.

In most cases, the existing robotic systems that manipulate the surgical instruments have a kinematic structure designed to respect the entry point (trocar): for any robot motion, the instrument is constrained to pass through the trocar. The Zeus system makes use of a passive universal joint [1]. The da Vinci system and other prototypes such as the FZK Artemis and UCB/UCSF RTW systems are designed as remote-center devices [2, 3, 4, 5]. All these devices are dedicated to MIS.

To respect the entry point, we propose two different geometrical approaches in which the constraint is taken into account by the control algorithm itself rather than by a dedicated kinematic design.

The first approach is based on the study of the reachable spaces of one joint outside the body and one inside the body, with the two joints aligned with the trocar and a fixed distance between them. These four constraints are resolved by an optimisation procedure.

The second approach is based on the capability to geometrically orient the endoscopic instrument so as to guarantee the trocar constraint, given the desired tool-tip position and the trocar position.
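The geometric core of this second approach can be sketched in a few lines: the instrument shaft must lie on the line joining the trocar to the desired tool tip, so its axis and insertion depth follow directly from those two points. This is a simplification of the full orientation computation, which the abstract does not detail:

```python
import numpy as np

def shaft_from_trocar(tip, trocar):
    """Unit vector along the instrument shaft (trocar -> tool tip) and the
    insertion depth; keeping the shaft on this line enforces the trocar
    constraint for any commanded tool-tip position."""
    d = np.asarray(tip, float) - np.asarray(trocar, float)
    depth = np.linalg.norm(d)
    return d / depth, depth
```

Feeding a desired tip trajectory (line, circle, helix) through this function yields the corresponding shaft orientations, which an inverse-kinematics solver can then realize.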

The first control algorithm has been validated on a seven degree-of-freedom (dof) anthropomorphic robot (PA10 from Mitsubishi): translational motions of the tool tip are achieved while passing the forearm through a fixed penetration point (5 dof may be regarded as external, 2 dof as internal). For a desired tool-tip position, the procedure yields the elbow and wrist positions that allow the forearm to pass through the fixed point. Experimental results will be presented in which the tool tip tracks various trajectories (straight line, circle, helix and sewing trajectory).

The second algorithm will also be illustrated with simulation results on a model of the PA10 robot.

References:
[1] http://www.computermotion.com.
[2] G. Guthart and J. K. Salisbury, "The Intuitive Telesurgery System: Overview and Applications," Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, 2000, pp. 618-621.
[3] A. J. Madhani, G. Niemeyer, and J. K. Salisbury, "The Black Falcon: A Teleoperated Surgical Instrument for Minimally Invasive Surgery," Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Victoria B.C., Canada, October 1998, pp. 936-944.
[4] A. J. Madhani, "Design of Teleoperated Surgical Instruments for Minimally Invasive Surgery," PhD Thesis, Massachusetts Institute of Technology, 1998.
[5] H. Rininsland, "ARTEMIS. A telemanipulator for cardiac surgery," European Journal of Cardio-Thoracic Surgery, 16: S106-S111, Suppl. 2, 1999.

67

Poster Contributions P28

Combined Tracking System for Augmented Reality Assisted Treatment Device

R. Wu1), I. Kassim1), W. S. Bock2) and N. W. Sing3)

1) Clinical Research Unit, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433

2) General Surgery Department, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433
3) Computer Integrated Medical Intervention Lab, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798

The most common surgical procedure for removing breast cancer is lumpectomy, in which the surgeon removes the tumor plus a 1 cm margin in open surgery. Our research project aims to use a minimally invasive breast biopsy device, such as the Mammotome, to carry out lumpectomy, transforming a biopsy device into a treatment device. By using minimally invasive devices, the healing time of the breast and the risk of infection associated with open surgery could be reduced.

Since the minimally invasive approach prevents the surgeon from directly seeing the malignant tissue, computer visualization and augmented reality are introduced to help the surgeon position the needle correctly. We have developed an experimental system that achieves this by providing 3D visualization using a virtual scene and augmented video. This paper focuses on tracking the position of objects, one of the kernels of computer-assisted surgery.

The system needs to know the position of at least two objects: the tumor and the Mammotome needle. On a common main mechanical arm, the system has two sub-arms holding the Mammotome and the ultrasound probe, respectively. The ultrasound probe scans the patient and sends the image to the system via the network. As the ultrasound image has a known position with respect to the ultrasound probe, tracking the tumor within the image is equivalent to tracking the ultrasound probe.

The system's tracking module consists of two parts. The first part is encoder-based arm posture tracking, which works by reading the encoders embedded in every arm joint and computing the arm posture with kinematics. The second part is stereo-camera-based optical tracking, which tracks an optical marker pattern mounted on the sub-arm holding the ultrasound probe. These two parts are complementary in our augmented reality context. Camera tracking maps the position of a tracked object into camera coordinates, but it is inefficient at tracking multiple objects and computationally heavy; furthermore, the optical marker must face the camera at all times with good illumination, and nothing may block the line of sight between camera and marker. Encoder-based tracking, on the other hand, is much faster and able to track multiple objects (the two arms in our case) simultaneously; the encoders are inside the probe holder, and no marker is required.
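Encoder-based posture tracking boils down to forward kinematics: accumulate the encoder angles joint by joint and sum the link vectors. A minimal planar sketch follows; the real arm is of course three-dimensional, and the two-link geometry here is purely illustrative:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics: accumulate the encoder angles along the
    chain and sum the link vectors to obtain the end-effector pose."""
    x = y = theta = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y, theta
```

Because this only reads joint encoders, it runs at the full control rate and needs no marker or line of sight, which is exactly the advantage noted above.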

Augmented reality is implemented in our system as follows. First, the camera tracks the marker and registers the physical position of the main arm. Second, the camera and the main arm are fixed, so that the mapping from arm to camera is fixed. Third, encoder-based tracking takes over to track the positions of the ultrasound probe and the Mammotome probe, and the optical marker can be removed. This combination does not support a head-mounted display, in which case the camera would be constantly moving; however, it is easy and cost-efficient to implement.

Using encoder-based tracking alone, without camera tracking, also provides a good virtual visualization of the whole arm system, plus the ultrasound probe, the Mammotome probe, the ultrasound image, and the segmented tumor, all in one 3D scene.

68

P29 Poster Contributions

Generation of Well-Formed Image by Mosaicing Split Images: An Approach Based on Quad-Tree Technique

R. Babu1), G. H. Kumar2) and P.Nagabhushan2)

1) Pune Institute of Computer Technology, Pune University, Sr. No 27, Pune-Satara Road, Dhankawadi, Pune-411 043, Maharashtra, India
2) DOS in Computer Science, University of Mysore, Karnataka, India

Abstract: In real-life situations, the field of view of an imaging device is usually smaller than the scene to be imaged, due to the inherent limitations of imaging devices. Even if the whole scene is captured in a single exposure, the result may be an image of poor resolution. In such circumstances it is not possible to capture the scene as a single image for further processing, human inspection or secondary storage. The solution is to capture the scene as multiple, adjacently placed frames. This warrants a post-imaging operation in which these small images are cohered to create a single image corresponding to the source scene; this process is referred to in the literature as image mosaicing. So far, the cohering of split images has been done manually, but carrying out the process interactively on a computer, or completely automating it, is an active research area in image processing, pattern recognition and computer vision. Research in image mosaicing started as early as 1970, and there is continued interest today in investigating its suitability for different applications. In this paper, a new method for identifying the overlap region in the split images to be mosaiced is presented. The method uses a divide-and-conquer strategy to decompose the images into sub-images for finding the overlap region. The time complexity of the method is of the order log4(3n-1).

Keywords: Image Mosaicing, Computer Vision, Registration, Quad-tree.
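The divide-and-conquer decomposition underlying the method can be sketched as a plain quad-tree split. The overlap search itself and the stopping criterion are not specified in the abstract, so the block below only illustrates the recursive quadrant decomposition on a square image:

```python
def quadtree_decompose(img, min_size=1):
    """Recursively split a square image (2D list) into quadrants and return
    the leaf blocks. The recursion depth grows logarithmically in the image
    size, which is where the method's logarithmic complexity comes from."""
    n = len(img)
    if n <= min_size:
        return [img]
    h = n // 2
    quads = [[row[:h] for row in img[:h]],   # top-left
             [row[h:] for row in img[:h]],   # top-right
             [row[:h] for row in img[h:]],   # bottom-left
             [row[h:] for row in img[h:]]]   # bottom-right
    leaves = []
    for q in quads:
        leaves.extend(quadtree_decompose(q, min_size))
    return leaves
```

An overlap finder would compare such blocks between the two split images, descending only into quadrants whose summaries match.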

69

Poster Contributions P30

Prospective Head Motion Compensation by Updating the Gradients of the MRT

C. Dold1), M. Zaitsev2) and B. Schwald3)

1) Institute for Computer Graphics, Fraunhofer Gesellschaft, Fraunhoferstr. 5, D-64283 Darmstadt, Germany
2) University Hospital of Freiburg, Department of Radiology, Hugstetter Str. 55, D-79106 Freiburg, Germany
3) Computer Graphics Center, Department Visual Computing, Fraunhoferstr. 5, D-64283 Darmstadt, Germany

Introduction: Functional MRI (fMRI) is a non-invasive imaging technique used to investigate cerebral function. Patient motion remains a significant problem in many MRI applications, including fMRI, cardiac and abdominal imaging, and conventional long-TR acquisitions. Many techniques are available to reduce or compensate for bulk motion effects, such as physiological gating, phase-encode reordering, fiducial markers, fast acquisitions, image volume registration, or alternative data acquisition strategies such as projection reconstruction, spiral and PROPELLER. Navigator echoes are used to measure motion with one or more degrees of freedom; the motion is then compensated for either retrospectively or prospectively. An orbital navigator (ONAV) echo captures data in a circle in some plane of k-space, centered at the origin [1]. These data can be used to detect rotational and translational motion in this plane, and to correct for it. However, multiple orthogonal ONAVs are required for general 3D motion determination, and the accuracy of a given ONAV is adversely affected by motion out of its plane. Methods capable of correcting for head motion in all six degrees of freedom have been proposed for human positron emission tomography (PET) brain imaging [2]. These methods rely on the accurate measurement of head motion in relation to the reconstruction coordinate frame.

Implementing a similar technique in MRI presents additional challenges. Foremost, the tracking system and the MRI system have to be compatible. The high magnetic fields (>= 1.5 Tesla) of magnetic resonance imaging systems require that the tracking camera system be positioned at a sufficient distance from the MRI system to ensure proper function and safety. Functional MRI is also challenging because of the high spatial accuracy (RMS < 0.3 mm) required of the complete measurement chain together with the small latency (< 30 ms) required of the tracking system. The coordinate system of an MRI scanner is controlled via the magnetic field gradients and frequencies in the sequence; determining a precise relationship between the spatially varying magnetic field gradients and the spatial tracking information is necessary to compensate for motion artifacts.

Materials and Method: The tracking system reports the position and orientation of rigid targets fitted with "passive" retro-reflective markers in six degrees of freedom (DOF), using two progressive-scan cameras synchronized by a frame grabber card in a standard PC [3, 4]. Several targets can be tracked simultaneously at a sampling rate of up to 25 Hz, with a quoted positional accuracy of less than 0.3 mm (RMS). Communication with the MRI host computer takes place over a TCP/IP connection. Passive targets consist of at least three coplanar reflective markers. All retro-reflective markers are filled with doped water so that they are detectable by both MR and the tracking system. The point of origin of the tracking system is transformed to the physical center of the gradient system to overlap both coordinate systems. The tracking system returns, in the overlapped coordinate frame, the target orientation as a unit quaternion and the target position as a three-dimensional vector (6 DOF). Every translation and rotation value is stored in a log file. The rotation accuracy is measured by rotating a phantom 10 times by one degree. MR images are acquired with and without motion correction. The motion correction is performed prospectively by updating the slice positions and orientations based on the tracking data.
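Updating the slice orientation from the tracker's unit-quaternion output amounts to converting the quaternion to a rotation matrix and applying it to the slice axes. A minimal sketch of that standard conversion follows (the scanner-side gradient update itself is not described in the abstract and is not shown):

```python
def quat_to_matrix(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z), as reported by the tracking
    system, into the 3x3 rotation matrix used to update slice orientation."""
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ]
```

Note the scalar-first (w, x, y, z) ordering assumed here; tracking systems differ on this convention, so it must be checked against the device's documentation.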

Results and Discussion: The determined positional accuracy was less than 0.1 mm (RMS). The mean measured reproducibility of the rotation and translation was 0.008/0.067. No artifacts originating from interactions between the MR system and the tracking system were detectable in the MR images. The latency of the whole measurement chain, including the image reconstruction PC, was less than 25 ms. In the near future this method may therefore be used to investigate non-cooperative patients such as children or pediatric patients. Christian Dold, "Reduzierung von Bewegungsartefakten bei Kernspinresonanzmessungen" (Reduction of motion artifacts in magnetic resonance measurements), German Patent Application 103 60 677.7, Dec. 2003.


Incorporation of the real-time 3D prospective motion correction demonstrated here into fMRI studies will make it possible to use all data from non-cooperative subjects, even during periods of fast head motion within the acquisition of one volume.

Keywords: MRI, Infrared Tracking, Motion Artifact Compensation

References
[1] Fu ZW et al. Orbital navigator echoes for motion measurements in magnetic resonance imaging. Magn Reson Med 1995;34:746-753.
[2] Fulton RR et al. Correction for Head Movements in Positron Emission Tomography Using an Optical Motion-Tracking System. IEEE Trans Nucl Sci 49, 2002.
[3] Schwald B, Malerczyk C. Controlling Virtual Worlds Using Interaction Spheres. In: Vidal CA (Ed.) et al.; Brazilian Computer Society (SBC), 5th Symposium on Virtual Reality, Proceedings 2002, Fortaleza, CE, Brazil, 2002, pp. 3-14.
[4] Schwald B, Seibert H, Weller T. A Flexible Tracking Concept Applied to Medical Scenarios Using an AR Window. Proceedings of the International Symposium on Mixed and Augmented Reality, ISMAR 2002.

71

Medical Robotics, Navigation and Visualization (MRNV 2004) Book of Abstracts

List of Authors

Arapakis, I. Department of Otorhinolaryngology, Albert-Ludwigs-University Freiburg, Killianstr. 5, D-79106 Freiburg, Germany
Arenas Sanchez, M. M. LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Aschenbach, R. Institute for Diagnostic Imaging, Helios Klinikum Erfurt, Nordhäuser Straße 74, D-99089 Erfurt, Germany
Ayache, N. EPIDAURE Project, INRIA, 2004 Route des Lucioles, F-06902 Sophia Antipolis Cedex, France
Babu, R. Pune Institute of Computer Technology, Pune University, Sr. No 27, Pune-Satara Road, Dhankawadi, Pune-411 043, Maharashtra, India
Badreddin, E. Lehrstuhl für Automation, Universität Mannheim, B6, 23-29 C, D-68131 Mannheim, Germany
Bale, R. J. Interdisciplinary Stereotactic Intervention- and Planning Laboratory (SIP-Lab), Department of Radiology I, University Innsbruck, Anichstr. 35, A-6020 Innsbruck, Austria
Baltas, D. Klinikum Offenbach, Department of Radiation Oncology, Starkenburgring 66, D-63069 Offenbach, Germany
Barbe, L. LSIIT, Parc d'Innovation, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Bastian, L. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Baum, R. P. Clinic for Nuclear Medicine / PET Centre, Zentralklinik Bad Berka, Robert-Koch-Allee 9, D-99437 Bad Berka, Germany
Baumann, M. University of Applied Sciences Regensburg, Prüfeninger Straße 58, D-93049 Regensburg, Germany
Bayle, B. LSIIT, Parc d'Innovation, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Benz, M. Chair for Optics, Institute for Information and Photonics, University Erlangen-Nuremberg, Staudtstr. 7/B2, D-91058 Erlangen, Germany
Berkelbach, J. W. Department of Neurosurgery, University Medical Center, Heidelberglaan 100, NL-3584 CX Utrecht, the Netherlands
Berkelmann, P. TIMC-IMAG, Institut d'Ingénierie de l'Information de Santé, F-38706 La Tronche Cedex, France
Berlinger, K. Klinik und Poliklinik für Strahlentherapie, Universität Würzburg, Josef-Schneider-Str. 11, D-97080 Würzburg, Germany
Berti, G. C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10, D-53757 Sankt Augustin, Germany
Beth, Th. Universität Karlsruhe, Institut für Algorithmen und Kognitive Systeme, Am Fasanengarten 5, D-76131 Karlsruhe, Germany
Bock, W. S. General Surgery Department, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433
Boesecke, R. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Boesnach, I. Universität Karlsruhe, Institut für Algorithmen und Kognitive Systeme, Am Fasanengarten 5, D-76131 Karlsruhe, Germany
Boidard, E. TIMC-IMAG, Institut d'Ingénierie de l'Information de Santé, F-38706 La Tronche Cedex, France
Bongartz, J. Department of Mathematics and Technology, RheinAhrCampus Remagen, Südallee 2, D-53424 Remagen, Germany
Bonneau, E. CEA-List, Service Robotique et Systèmes Interactifs, Centre de Fontenay-aux-Roses, BP 6, 92265 Fontenay-aux-Roses Cedex, France
Borgert, J. Philips Research Laboratories, Division Technical Systems, Röntgenstrasse 24-26, D-22315 Hamburg, Germany
Böse, H. University of Applied Sciences Regensburg, Prüfeninger Straße 58, D-93049 Regensburg, Germany
Bourquain, H. MeVis – Centrum für Medizinische Diagnosesysteme und Visualisierung, Universitätsallee 29, D-28359 Bremen, Germany
Branzan Albu, A. Computer Vision and Systems Laboratory, Dept. of Electrical and Computer Engineering, Laval University, Cité Universitaire, Sainte-Foy (QC), G1K 7P4, Canada
Bruhns, O. T. Institute of Mechanical Engineering, Ruhr-University Bochum, D-44780 Bochum, Germany
Buss, M. Munich University of Technology, Institute of Automatic Control Engineering, D-80290 Munich, Germany

72


Buzug, T. M. Department of Mathematics and Technology, RheinAhrCampus Remagen, Südallee 2, D-53424 Remagen, Germany
Citak, M. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Dammann, F. Abteilung für Radiologische Diagnostik, Radiologische Universitätsklinik, Universitätsklinikum, Eberhard-Karls-Universität Tübingen, Hoppe-Seyler-Strasse 3, D-72076 Tübingen, Germany
Decker, M. Institut für Technikfolgenabschätzung und Systemanalyse (ITAS), Forschungszentrum Karlsruhe, Postfach 3640, D-76021 Karlsruhe, Germany
Delingette, H. EPIDAURE Project, INRIA, 2004 Route des Lucioles, F-06902 Sophia Antipolis Cedex, France
Dillmann, R. Institute for Process Control and Robotics (IPR), University of Karlsruhe (TH), Building 40.28, Engler-Bunte-Ring 8, D-76128 Karlsruhe, Germany
Doetter, M. Klinik und Poliklinik für Strahlentherapie, Universität Würzburg, Josef-Schneider-Str. 11, D-97080 Würzburg, Germany
Doignon, C. LSIIT, Equipe Automatique Vision et Robotique, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Dold, C. Institute for Computer Graphics, Fraunhofer Gesellschaft, Fraunhoferstr. 5, D-64283 Darmstadt, Germany
Dombre, E. LIRMM – UMR 5506 CNRS / Université Montpellier II, 161 rue Ada, F-34392 Montpellier Cedex 5, France
Drexl, J. Center for Medical Diagnostic Systems and Visualization (MeVis) GmbH, Universitätsallee 29, D-28359 Bremen, Germany
Egersdörfer, S. Fraunhofer Institute for Silicate Research, Neunerplatz 2, D-97082 Würzburg, Germany
Eggers, G. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Engel, D. Institute for Process Control and Robotics (IPR), University of Karlsruhe (TH), Building 40.28, Engler-Bunte-Ring 8, D-76128 Karlsruhe, Germany
Ermert, H. Institute of High Frequency Engineering, Ruhr-University Bochum, D-44780 Bochum, Germany
Ernemann, U. Abteilung für Radiologische Diagnostik, Radiologische Universitätsklinik, Universitätsklinikum, Eberhard-Karls-Universität Tübingen, Hoppe-Seyler-Strasse 3, D-72076 Tübingen, Germany
Esen, H. Munich University of Technology, Institute of Automatic Control Engineering, D-80290 Munich, Germany
Esteve, D. LAAS/CNRS, 7, avenue du Colonel Roche, F-31077 Toulouse, France
Ewers, R. University Hospital of Cranio-Maxillofacial and Oral Surgery, Medical School, University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna, Austria
Fangmann, J. Chirurgische Klinik II, Klinik für Abdominal-, Transplantations- und Gefäßchirurgie, Universität Leipzig, Liebigstr. 20a, D-04103 Leipzig, Germany
Fiegele, T. Leopold-Franzens-Universität Innsbruck, Universitätsklinik für Neurochirurgie, Anichstraße 35, A-6020 Innsbruck, Austria
Fingber, J. C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10, D-53757 Sankt Augustin, Germany
Forest, C. IRCAD/EITS/VIRTUALIS, 1, Place de l'Hôpital, F-67091 Strasbourg Cedex, France
Forgione, A. LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Freimuth, H. Institute for Micro Technology Mainz GmbH, Carl-Zeiss-Strasse 18-20, D-55129 Mainz, Germany
Frericks, B. Center for Medical Diagnostic Systems and Visualization (MeVis) GmbH, Universitätsallee 29, D-28359 Bremen, Germany
Freund, E. Institute of Robotics Research (IRF), University of Dortmund, Otto-Hahn-Str. 8, D-44227 Dortmund, Germany
Frey, S. Forschungszentrum caesar, Ludwig-Erhard-Allee 2, D-53175 Bonn, Germany
Gangi, A. Hôpital Civil, 1, place de l'Hôpital, BP 426, F-67091 Strasbourg Cedex, France
Gangloff, J. A. LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Geerling, J. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Ghanai, S. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany


Medical Robotics, Navigation and Visualization (MRNV 2004) Book of Abstracts

Giel, D. Forschungszentrum caesar, Ludwig-Erhard-Allee 2, D-53175 Bonn, Germany
Ginhoux, R. LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Gösling, M. Unfallchirurgische Klinik, Medizinische Hochschule Hannover (MHH), Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Gravez, P. CEA-List, Service Robotique et Systèmes Interactifs, Centre de Fontenay-aux-Roses, BP 6, 92265 Fontenay-aux-Roses Cedex, France
Grewer, R. Philips Research Laboratories, Division Technical Systems, Röntgenstrasse 24-26, D-22315 Hamburg, Germany
Gross, I. University of Siegen, Institute of Automatic Control Engineering, ZESS – Centrum for Sensor-Systems, Paul-Bonatz-Str. 9-11, D-57068 Siegen, Germany
Hahn, M. Universität Karlsruhe, Institut für Algorithmen und Kognitive Systeme, Am Fasanengarten 5, D-76131 Karlsruhe, Germany
Hamm, K. Dept. for Stereotactic Neurosurgery and Radiosurgery, Helios Klinikum Erfurt, Nordhäuser Straße 74, D-99089 Erfurt, Germany
Handels, H. University Hospital Hamburg-Eppendorf, Institute of Medical Informatics, House S 14, Martinistr. 52, D-20246 Hamburg, Germany
Harms, J. Chirurgische Klinik II, Klinik für Abdominal-, Transplantations- und Gefäßchirurgie, Universität Leipzig, Liebigstr. 20a, D-04103 Leipzig, Germany
Hartmann, U. Department of Mathematics and Technology, RheinAhrCampus Remagen, Südallee 2, D-53424 Remagen, Germany
Hassfeld, S. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Hattingen, E. Johann Wolfgang Goethe University, Institute of Neuroradiology, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Häusler, G. Chair for Optics, Institute for Information und Photonics, University Erlangen-Nuremberg, Staudtstr. 7/B2, D-91058 Erlangen, Germany
Hauss, J. Chirurgische Klinik II, Klinik für Abdominal-, Transplantations- und Gefäßchirurgie, Universität Leipzig, Liebigstr. 20a, D-04103 Leipzig, Germany
Hayashi, M. Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Heiland, M. University Hospital Hamburg-Eppendorf, Klinik und Poliklinik für Zahn-, Mund-, Kiefer- und Gesichtschirurgie (Nordwestdeutsche Kieferklinik), Martinistr. 52, D-20246 Hamburg, Germany
Heinze, F. Institute of Robotics Research (IRF), University of Dortmund, Otto-Hahn-Str. 8, D-44227 Dortmund, Germany
Hellier, P. Projet Vista, IRISA/INRIA-CNRS, Campus Universitaire de Beaulieu, F-35042 Rennes Cedex, France
Henrich, D. Lehrstuhl für Angewandte Informatik III (Robotik und Eingebettete Systeme), Universität Bayreuth, D-95440 Bayreuth, Germany
Hering, P. Holography and Laser Technology, Stiftung caesar, Ludwig-Erhard-Allee 2, D-53175 Bonn, Germany
Herrmann, E. Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Hertzberg, J. Fraunhofer Institute for Autonomous Intelligent Systems (AIS), Schloss Birlinghoven, D-53754 Sankt Augustin, Germany
Hierl, T. Klinik und Poliklinik für Mund-, Kiefer- und Plastische Gesichtschirurgie, Universitätsklinikum Leipzig, Nürnberger Str. 57, D-04103 Leipzig, Germany
Hodgson, A. J. Neuromotor Control Laboratory, Department of Mechanical Engineering, University of British Columbia, 2324 Main Mall, Vancouver, B.C., Canada V6T 1Z4
Hoffmann, J. Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany
Höhne, K. H. University Hospital Hamburg-Eppendorf, Institute of Medical Informatics, House S 14, Martinistr. 52, D-20246 Hamburg, Germany
Holz, D. Department of Mathematics and Technology, RheinAhrCampus Remagen, Südallee 2, D-53424 Remagen, Germany
Hüfner, T. Unfallchirurgische Klinik, Medizinische Hochschule Hannover (MHH), Carl-Neuberg-Str. 1, D-30625 Hannover, Germany


Iseki, H. Faculty of Advanced Techno-Surgery (FATS), Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Ivanenko, M. Holography and Laser Technology, Stiftung caesar, Ludwig-Erhard-Allee 2, D-53175 Bonn, Germany
Kahn, T. Klinik für Diagnostische und Interventionelle Radiologie, Universität Leipzig, Liebigstr. 20a, D-04103 Leipzig, Germany
Kamucha, G. Department of Electrical & Communications Engineering, Moi University, P.O. Box 3900, Eldoret, Kenya
Kassim, I. Clinical Research Unit, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433
Kendoff, D. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Kfuri, M., Jr. Alexander-von-Humboldt-Stiftung, Ribeirao Preto Medical School, Sao Paulo, Brazil
Khaled, W. Institute of High Frequency Engineering, Ruhr-University Bochum, D-44780 Bochum, Germany
Kleinert, G. Dept. for Stereotactic Neurosurgery and Radiosurgery, Helios Klinikum Erfurt, Nordhäuser Straße 74, D-99089 Erfurt, Germany
Knappe, P. University of Siegen, Institute of Automatic Control Engineering, ZESS – Centrum for Sensor-Systems, Paul-Bonatz-Str. 9-11, D-57068 Siegen, Germany
Knappe, V. Center for Medical Diagnostic Systems and Visualization (MeVis) GmbH, Universitätsallee 29, D-28359 Bremen, Germany
Knoop, H. Universität Karlsruhe (TH), Institut für Prozessrechentechnik, Automation und Robotik (IPR), Gebäude 40.28, Engler-Bunte-Ring 8, D-76131 Karlsruhe, Germany
Kober, C. Univ. of Appl. Sc. Osnabrück, Albrechtstr. 30, P.O. Box 19 40, D-49009 Osnabrück, Germany
Kompa, G. Department of High Frequency Engineering (HFT), Kassel University, Wilhelmshöher Allee 73, D-34121 Kassel, Germany
Köpfle, A. Lehrstuhl für Informatik V, Universität Mannheim, B6, 23-29 C, D-68161 Mannheim, Germany
Korb, W. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Kotrikova, B. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Krettek, C. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Krishnan, R. Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Krüger, S. Philips Research Laboratories, Division Technical Systems, Röntgenstrasse 24-26, D-22315 Hamburg, Germany
Krüger, T. Department of Maxillofacial Surgery, Clinical Navigation and Robotics, Medical Faculty Charité, Humboldt University at Berlin, D-13353 Berlin, Germany
Kübler, A. Department of Cranio-Maxillofacial and Plastic Surgery, University of Cologne, Kerpener Str. 62, D-50937 Köln, Germany
Kubo, O. Department of Neurosurgery, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Kumar, G. H. DOS in Computer Science, University of Mysore, Karnataka, India
Kurek, R. Klinikum Offenbach, Strahlenklinik, Starkenburgring 66, D-63069 Offenbach, Germany
Lamy, S. CEA-List, Service Robotique et Systèmes Interactifs, Centre de Fontenay-aux-Roses, BP 6, 92265 Fontenay-aux-Roses Cedex, France
Laszig, R. Department of Otorhinolaryngology, Albert-Ludwigs-University Freiburg, Killianstr. 5, D-79106 Freiburg, Germany
Laurendeau, D. Computer Vision and Systems Laboratory, Dept. of Electrical and Computer Engineering, Laval University, Cité Universitaire, Sainte-Foy, (QC), G1K 7P4, Canada
Lehmann, K. S. Center for Medical Diagnostic Systems and Visualization (MeVis) GmbH, Universitätsallee 29, D-28359 Bremen, Germany
Leimnei, A. Laboratoire TIMC, Equipe GMCAO, Institut d'Ingénierie de l'Information de Santé, Faculté de Médecine, F-38706 La Tronche Cedex, France


Leroy, J. Institut de Recherche sur les Cancers de l'Appareil Digestif (IRCAD), Hôpital Civil, BP 426, F-67091 Strasbourg Cedex, France
Letteboer, M. Image Sciences Institute, University Medical Center, Heidelberglaan 100, NL-3584 CX, Utrecht, the Netherlands
Liehr, F. University of Duisburg-Essen, Lotharstrasse 65, D-47048 Duisburg, Germany
Lingemann, K. Fraunhofer Institute for Autonomous Intelligent Systems (AIS), Schloss Birlinghoven, D-53754 Sankt Augustin, Germany
Lorenz, A. LP-IT Innovative Technologies GmbH, Huestr. 5, D-44787 Bochum, Germany
Lueth, T. Department of Maxillofacial Surgery, Clinical Navigation and Robotics, Medical Faculty Charité, Humboldt University at Berlin, D-13353 Berlin, Germany
Maassen, M. M. Department of Otolaryngology – Head & Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Straße 5, D-72076 Tübingen, Germany
Maier, T. Chair for Optics, Institute for Information und Photonics, University Erlangen-Nuremberg, Staudtstr. 7/B2, D-91058 Erlangen, Germany
Maillet, P. LIRMM UMR 5506 CNRS / Université Montpellier II, 161 rue Ada, F-34392 Montpellier Cedex 5, France
Malthan, D. Fraunhofer-Institute for Manufacturing Engineering and Automation IPA, Nobelstraße 12, D-70569 Stuttgart, Germany
Männer, R. Lehrstuhl für Informatik V, Universität Mannheim, B6, 23-29 C, D-68161 Mannheim, Germany
Marescaux, J. LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Marmignon, C. Laboratoire TIMC, Equipe GMCAO, Institut d'Ingénierie de l'Information de Santé, Faculté de Médecine, F-38706 La Tronche Cedex, France
Marmulla, R. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Martin, T. Klinikum Offenbach, Strahlenklinik, Starkenburgring 66, D-63069 Offenbach, Germany
Maruyama, T. Department of Neurosurgery, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Mathelin de, M. F. LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Maurin, B. LSIIT, Parc d'Innovation, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Meer, F. van LAAS/CNRS, 7, avenue du Colonel Roche, F-31077 Toulouse, France
Michelin, M. LIRMM – UMR 5506 CNRS / Université Montpellier II, 161 rue Ada, F-34392 Montpellier Cedex 5, France
Mischkowski, R. A. Department of Cranio-Maxillofacial and Plastic Surgery, University of Cologne, Kerpener Str. 62, D-50937 Köln, Germany
Moisan, C. Dept. of Radiology, Laval University, Cité Universitaire, Sainte-Foy, (QC), G1K 7P4, Canada
Moldenhauer, J. Universität Karlsruhe, Institut für Algorithmen und Kognitive Systeme, Am Fasanengarten 5, D-76131 Karlsruhe, Germany
Monkman, G. Fraunhofer Institute for Silicate Research, Neunerplatz 2, D-97082 Würzburg, Germany
Mössinger, E. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Mühling, J. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Muragaki, Y. Faculty of Advanced Techno-Surgery (FATS), Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Mutter, D. IRCAD/EITS/VIRTUALIS, 1, Place de l’Hôpital, F-67091 Strasbourg Cedex, France
Nagabhushan, P. DOS in Computer Science, University of Mysore, Karnataka, India
Nageotte, F. LSIIT, Equipe Automatique Vision et Robotique, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Nakamura, R. Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan


Nambu, K. Faculty of Advanced Techno-Surgery (FATS), Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Neukam, F. W. Department of Oral and Maxillofacial Surgery, University Erlangen-Nuremberg, Glückstr. 11, D-91054 Erlangen, Germany
Nicolau, S. IRCAD/EITS/VIRTUALIS, 1, Place de l’Hôpital, F-67091 Strasbourg Cedex, France
Niesen, A. Clinic for Nuclear Medicine / PET Centre, Zentralklinik Bad Berka, Robert-Koch-Allee 9, D-99437 Bad Berka, Germany
Niessen, W. Image Sciences Institute, University Medical Center, Heidelberglaan 100, NL-3584 CX, Utrecht, the Netherlands
Nishizawa, K. Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Nkenke, E. Department of Oral and Maxillofacial Surgery, University Erlangen-Nuremberg, Glückstr. 11, D-91054 Erlangen, Germany
Nüchter, A. Fraunhofer Institute for Autonomous Intelligent Systems (AIS), Schloss Birlinghoven, D-53754 Sankt Augustin, Germany
Ohnsorge, J. A. K. Rheinisch-Westfälische Technische Hochschule, RWTH Aachen, Orthopädische Universitätsklinik, UK Aachen, Pauwelsstrasse 30, D-52074 Aachen, Germany
Oldhafer, K. MeVis – Centrum für Medizinische Diagnosesysteme und Visualisierung, Universitätsallee 29, D-28359 Bremen, Germany
Oomori, S. Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
O'Sullivan, N. MKG-Chirurgie, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany
Peitgen, H.-O. Center for Medical Diagnostic Systems and Visualization (MeVis) GmbH, Universitätsallee 29, D-28359 Bremen, Germany
Pennec, X. EPIDAURE Project, INRIA, 2004 Route des Lucioles, F-06902 Sophia Antipolis Cedex, France
Pesavento, A. LP-IT Innovative Technologies GmbH, Huestr. 5, D-44787 Bochum, Germany
Petersik, A. University Hospital Hamburg-Eppendorf, Institute of Medical Informatics, House S 14, Martinistr. 52, D-20246 Hamburg, Germany
Pflesser, B. University Hospital Hamburg-Eppendorf, Institute of Medical Informatics, House S 14, Martinistr. 52, D-20246 Hamburg, Germany
Pieck, S. University of Siegen, Institute of Automatic Control Engineering, ZESS – Centrum for Sensor-Systems, Paul-Bonatz-Str. 9-11, D-57068 Siegen, Germany
Plaskos, C. Laboratoire TIMC-IMAG, Groupe GMCAO, Faculté de Médecine de Grenoble, Joseph Fourier University, F-38706 La Tronche Cedex, France
Pohl, Y. Poliklinik für Chirurgische Zahn-, Mund- und Kieferheilkunde, University Dental Clinic Bonn, Welschnonnenstr. 17, D-53111 Bonn, Germany
Poignet, P. LIRMM UMR 5506 CNRS / Université Montpellier II, 161 rue Ada, F-34392 Montpellier Cedex 5, France
Pott, P. P. Labor für Biomechanik und experimentelle Orthopädie, Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
Prescher, A. Rheinisch-Westfälische Technische Hochschule, RWTH Aachen, Institut für Anatomie, UK Aachen, Wendlingweg 2, D-52074 Aachen, Germany
Preusser, T. CeVis, University of Bremen, Universitätsallee 29, D-28359 Bremen, Germany
Raabe, A. Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Raczkowsky, J. Institute for Process Control and Robotics (IPR), University of Karlsruhe (TH), Building 40.28, Engler-Bunte-Ring 8, D-76128 Karlsruhe, Germany
Rancourt, D. Dept. of Mechanical Engineering, Sherbrooke University, 2500 boul. de l'Université, Sherbrooke (QC), J1K 2R1, Canada
Rautmann, M. Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
Reichling, S. Institute of Mechanical Engineering, Ruhr-University Bochum, D-44780 Bochum, Germany
Reinert, S. Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany


Richter, M. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Röddiger, S. Klinikum Offenbach, Strahlenklinik, Starkenburgring 66, D-63069 Offenbach, Germany
Roßmann, J. Institute of Robotics Research (IRF), University of Dortmund, Otto-Hahn-Str. 8, D-44227 Dortmund, Germany
Roth, M. Klinik und Poliklinik für Strahlentherapie, Universität Würzburg, Josef-Schneider-Str. 11, D-97080 Würzburg, Germany
Rueckert, D. Department of Computing, Imperial College, South Kensington Campus, 180 Queen's Gate, London SW7 2AZ, UK
Rumpf, M. University of Duisburg-Essen, Lotharstrasse 65, D-47048 Duisburg, Germany
Sader, R. University Hospital Basel, Spitalstrasse 21, CH-4031 Basel, Switzerland; HFZ, TU Munich, Ismaningerstr. 22, D-81675 Munich, Germany
Sakas, G. MedCom Gesellschaft für medizinische Bildverarbeitung mbH, Rundeturmstraße 12, D-64283 Darmstadt, Germany
Sakuma, I. The University of Tokyo, 7-3-1 Hongo, Bunkyou-ku, Tokyo 113-8654, Japan
Salb, T. Institute for Process Control and Robotics (IPR), University of Karlsruhe (TH), Building 40.28, Engler-Bunte-Ring 8, D-76128 Karlsruhe, Germany
Sauer, O. Klinik und Poliklinik für Strahlentherapie, Universität Würzburg, Josef-Schneider-Str. 11, D-97080 Würzburg, Germany
Sauter, S. University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
Scharf, H. P. Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
Schauer, D. Department of Maxillofacial Surgery, Clinical Navigation and Robotics, Medical Faculty Charité, Humboldt University at Berlin, D-13353 Berlin, Germany
Schicho, K. University Hospital of Cranio-Maxillofacial and Oral Surgery, Medical School, University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna, Austria
Schill, M. VRmagic GmbH, B6, 23-29 C, D-68032 Mannheim, Germany
Schipper, J. Department of Otorhinolaryngology, Albert-Ludwigs-University Freiburg, Killianstr. 5, D-79106 Freiburg, Germany
Schkommodau, E. Rheinisch-Westfälische Technische Hochschule, RWTH Aachen, Institut für Biomedizinische Technologien, UK Aachen, Pauwelsstrasse 20, D-52074 Aachen, Germany
Schmidt, J. G. C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10, D-53757 Sankt Augustin, Germany
Schmitz, G. Department of Mathematics and Technology, RheinAhrCampus Remagen, Südallee 2, D-53424 Remagen, Germany
Schmücking, M. Clinic for Nuclear Medicine / PET Centre, Zentralklinik Bad Berka, Robert-Koch-Allee 9, D-99437 Bad Berka, Germany
Schnaider, M. ZGDV e.V., Abt. Z2 – Visual Computing, Fraunhoferstr. 5, D-64283 Darmstadt, Germany
Schön, N. Chair for Optics, Institute for Information und Photonics, University Erlangen-Nuremberg, Staudtstr. 7/B2, D-91058 Erlangen, Germany
Schwaderer, E. Department of Diagnostic Radiology, University Hospital of Tübingen, Otfried-Müller-Str. 10, D-72076 Tübingen, Germany
Schwald, B. Computer Graphics Center, Department Visual Computing, Fraunhoferstr. 5, D-64283 Darmstadt, Germany
Schwarz, M. L. R. Labor für Biomechanik und experimentelle Orthopädie, Orthopädische Universitätsklinik Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
Schweikard, A. Klinik und Poliklinik für Strahlentherapie, Universität Würzburg, Josef-Schneider-Str. 11, D-97080 Würzburg, Germany
Seemann, R. University Hospital of Cranio-Maxillofacial and Oral Surgery, Medical School, University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna, Austria
Seibert, H. Fraunhofer ZGDV, Visual Computing, Fraunhoferstr. 5, D-64283 Darmstadt, Germany
Seifert, V. Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Shin, Hoen-oh Center for Medical Diagnostic Systems and Visualization (MeVis) GmbH, Universitätsallee 29, D-28359 Bremen, Germany


Siebert, C. H. Rheinisch-Westfälische Technische Hochschule, RWTH Aachen, Orthopädische Universitätsklinik, UK Aachen, Pauwelsstrasse 30, D-52074 Aachen, Germany
Siessegger, M. Department of Cranio-Maxillofacial and Plastic Surgery, University of Cologne, Kerpener Str. 62, D-50937 Köln, Germany
Sing, N. W. Computer Integrated Medical Intervention Lab, Nanyang Technological University, 50 Nanyang Ave 6, Singapore 39798
Soler, L. LSIIT, Louis Pasteur University, Pôle API, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Spetzger, U. Neurochirurgische Klinik, Klinikum Karlsruhe, Moltkestraße 90, D-76133 Karlsruhe, Germany
Stallkamp, J. Fraunhofer Institute for Manufacturing Engineering and Automation (IPA), Nobelstr. 12, D-70569 Stuttgart, Germany
Stolka, P. Lehrstuhl für Angewandte Informatik III (Robotik und Eingebettete Systeme), Universität Bayreuth, D-95440 Bayreuth, Germany
Sugiura, M. Faculty of Advanced Techno-Surgery (FATS), Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Surber, G. Dept. for Stereotactic Neurosurgery and Radiosurgery, Helios Klinikum Erfurt, Nordhäuser Straße 74, D-99089 Erfurt, Germany
Surmann, H. Fraunhofer Institute for Autonomous Intelligent Systems (AIS), Schloss Birlinghoven, D-53754 Sankt Augustin, Germany
Suzukawa, K. Faculty of Advanced Techno-Surgery (FATS), Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Szelényi, A. Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Taha, F. CHU Amiens, Service de Chirurgie Maxillo-Faciale, CHU Nord, F-80054 Amiens, France
Takakura, K. Faculty of Advanced Techno-Surgery (FATS), Institute of Advanced Biomedical Engineering & Science, Graduate School of Medicine, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Thelen, A. Forschungszentrum caesar, Ludwig-Erhard-Allee 2, D-53175 Bonn, Germany
Tiede, U. University Hospital Hamburg-Eppendorf, Institute of Medical Informatics, House S 14, Martinistr. 52, D-20246 Hamburg, Germany
Timminger, H. Department of Measurement, Control, and Microtechnology, University Ulm, Albert-Einstein-Allee 41, D-89081 Ulm, Germany
Tomokatsu, H. Department of Neurosurgery, Tokyo Women's Medical University, 8-1 Kawada-cho Shinjuku-ku, Tokyo 162-8666, Japan
Troccaz, J. TIMC-IMAG, Institut d'Ingénierie de l'Information de Santé, F-38706 La Tronche Cedex, France
Troitzsch, D. Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany
Tunayar, A. Institute for Micro Technology Mainz GmbH, Carl-Zeiss-Strasse 18-20, D-55129 Mainz, Germany
Vences, L. Klinik und Poliklinik für Strahlentherapie, Universität Würzburg, Josef-Schneider-Str. 11, D-97080 Würzburg, Germany
Vogt, F. Chair for Pattern Recognition, University Erlangen-Nuremberg, Martensstr. 3, D-91058 Erlangen, Germany
Wagner, A. Lehrstuhl für Automation, Universität Mannheim, B6, 23-29 C, D-68131 Mannheim, Germany
Wagner, R. University Hospital of Cranio-Maxillofacial and Oral Surgery, Medical School, University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna, Austria
Wahl, G. Poliklinik für Chirurgische Zahn-, Mund- und Kieferheilkunde, University Dental Clinic Bonn, Welschnonnenstr. 17, D-53111 Bonn, Germany
Wahrburg, J. University of Siegen, Institute of Automatic Control Engineering, ZESS – Centrum for Sensor-Systems, Paul-Bonatz-Str. 9-11, D-57068 Siegen, Germany
Wechsler, A. Fraunhofer Institute for Manufacturing Engineering and Automation (IPA), Nobelstr. 12, D-70569 Stuttgart, Germany
Weikard, U. University of Duisburg-Essen, Lotharstrasse 65, D-47048 Duisburg, Germany
Weiser, P. Institut für CAE, Fachhochschule Mannheim, Windeckstr. 110, D-68163 Mannheim, Germany
Wesarg, S. Fraunhofer ZGDV, Visual Computing, Fraunhoferstr. 5, D-64283 Darmstadt, Germany


Westendorff, C. Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum Tübingen, Osianderstrasse 2-8, D-72076 Tübingen, Germany
Widmann, E. Ordination Dr. Widmann, Unterauweg 7a, A-6280 Zell/Ziller, Austria
Widmann, G. Interdisciplinary Stereotactic Intervention- and Planning Laboratory (SIP-Lab), Department of Radiology I, University Innsbruck, Anichstr. 35, A-6020 Innsbruck, Austria
Widmann, R. Zahntechnisches Labor Czech, Kreuzstr. 20, A-6067 Absam, Austria
Wildberger, J. E. Rheinisch-Westfälische Technische Hochschule, RWTH Aachen, Klinik für Radiologische Diagnostik, UK Aachen, Pauwelsstrasse 30, D-52074 Aachen, Germany
Willems, P. Department of Neurosurgery, University Medical Center, Heidelberglaan 100, NL-3584 CX, Utrecht, the Netherlands
Wirtz, D. C. Rheinisch-Westfälische Technische Hochschule, RWTH Aachen, Orthopädische Universitätsklinik, UK Aachen, Pauwelsstrasse 30, D-52074 Aachen, Germany
Wittwer, G. University Hospital of Cranio-Maxillofacial and Oral Surgery, Medical School, University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna, Austria
Woessner, S. Fraunhofer Institute for Manufacturing Engineering and Automation (IPA), Nobelstr. 12, D-70569 Stuttgart, Germany
Wolff, R. Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Wörn, H. Institute for Process Control and Robotics (IPR), University of Karlsruhe (TH), Building 40.28, Engler-Bunte-Ring 8, D-76128 Karlsruhe, Germany
Wu, R. Clinical Research Unit, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433
Wyslucha, U. MAQUET GmbH & Co. KG, Kehler Straße 31, D-76437 Rastatt, Germany
Yahya, H. Johann Wolfgang Goethe University, Institute of Neuroradiology, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Yano, K. Toyohashi University of Technology, Department of Production Systems Engineering, Hibarigaoka 1-1, Tempaku, 441-8580 Toyohashi, Japan
Zaitsev, M. University Hospital of Freiburg, Department Radiology, Hugstetter Str. 55, D-79106 Freiburg, Germany
Zamboglou, N. Klinikum Offenbach, Strahlenklinik, Starkenburgring 66, D-63069 Offenbach, Germany
Zanne, P. LSIIT, Parc d'Innovation, Boulevard Sébastien Brant, BP 10413, F-67412 Illkirch Cedex, France
Zech, S. Trauma Department, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
Zeilhofer, H.-F. University Hospital Basel, Spitalstrasse 21, CH-4031 Basel, Switzerland; HFZ, TU Munich, Ismaningerstr. 22, D-81675 Munich, Germany
Zimmermann, M. Department of Neurosurgery, Johann Wolfgang Goethe University, Schleusenweg 2-16, D-60528 Frankfurt/Main, Germany
Zinser, M. Department of Cranio-Maxillofacial and Plastic Surgery, University of Cologne, Kerpener Str. 62, D-50937 Köln, Germany
Zogal, P. MedCom Gesellschaft für medizinische Bildverarbeitung mbH, Rundeturmstraße 12, D-64283 Darmstadt, Germany
Zöller, J. E. Department of Cranio-Maxillofacial and Plastic Surgery, University of Cologne, Kerpener Str. 62, D-50937 Köln, Germany

ISBN: 3-9807690-5-4 © 2004 RheinAhrCampus Remagen
