
Biomedical Paper

Multimodal Image Fusion in Ultrasound-Based Neuronavigation: Improving Overview and Interpretation by Integrating Preoperative MRI with Intraoperative 3D Ultrasound

Frank Lindseth, M.Sc., Jon Harald Kaspersen, Ph.D., Steinar Ommedal, B.Sc., Thomas Langø, Ph.D., Jon Bang, Ph.D., Jørn Hokland, Ph.D., Geirmund Unsgaard, M.D., Ph.D., and Toril A. Nagelhus Hernes, Ph.D.

SINTEF Unimed, Ultrasound (F.L., J.H.K., S.O., T.L., J.B., T.A.N.H.); Department of Computer and Information Science, The Norwegian University of Science and Technology (F.L., J.H.); and Department of Neurosurgery, University Hospital in Trondheim, The Norwegian University of Science and Technology (G.U.), Trondheim, Norway

ABSTRACT

Objective: We have investigated alternative ways to integrate intraoperative 3D ultrasound images and preoperative MR images in the same 3D scene for visualizing brain shift and improving overview and interpretation in ultrasound-based neuronavigation.

Materials and Methods: A Multi-Modal Volume Visualizer (MMVV) was developed that can read data exported from the SonoWand® neuronavigation system and reconstruct the spatial relationship between the volumes available at any given time during an operation, thus enabling the exploration of new ways to fuse pre- and intraoperative data for planning, guidance and therapy control. In addition, the mismatch between MRI volumes registered to the patient and intraoperative ultrasound acquired from the dura was quantified.

Results: The results show that image fusion of intraoperative ultrasound images in combination with preoperative MRI makes perception of the available information easier by providing updated (real-time) image information and an extended overview of the operating field during surgery. This approach makes it possible to assess the degree of anatomical change during surgery, and gives the surgeon an understanding of how identical structures are imaged by the different imaging modalities. The present study showed that in 50% of the cases there were indications of brain shift even before the surgical procedure had started.

Conclusions: We believe that image fusion between intraoperative 3D ultrasound and preoperative MRI might improve the quality of the surgical procedure and hence also improve the patient outcome. Comp Aid Surg 8:49–69 (2003). ©2003 CAS Journal, LLC

Key words: multimodal visualization, image fusion, neuronavigation, 3D ultrasound, intraoperative imaging, brain shift, image guided neurosurgery, ultrasound-based neuronavigation, 3D display, computer assisted surgery, minimally invasive surgery

Key link: www.us.unimed.sintef.no

Received July 16, 2002; accepted July 1, 2003

Address correspondence/reprint requests to: Frank Lindseth, SINTEF Unimed, Ultrasound, Elgeseter gt. 10, Olav Kyrres gt, 7465 Trondheim, Norway. Telephone: +47 73 59 03 53; Fax: +47 73 59 78 73; E-mail: [email protected]


INTRODUCTION

Image-Guided Neurosurgery: A Brief Overview

Modern neurosurgery has seen a dramatic change in the use of image information over the last decade. Image data from modalities like Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are increasingly being used for preoperative planning, intraoperative guidance and postoperative control, not just for diagnostics. Computer-aided systems are used to fully take advantage of the increasing amount of information available for any given patient.

Stereotactic systems try to bridge the gap between preoperative image data (CT, MRI) and the physical object in the operating room (OR). The first systems were referred to as frame-based systems because they used specially designed frames that were attached to the patient's head during both the preoperative image scan and the surgery itself.1–3 Although these systems were highly accurate, they had, and still have, several disadvantages: the frames are invasive and bulky, they interfere with the surgical procedure, the approach is time-consuming, and there is no real-time feedback on current patient anatomy.3 With advances in sensing and computer technology, a new generation of frameless stereotactic systems (i.e., neuronavigation systems) has been developed that tries to overcome these problems without sacrificing accuracy.4–11

Neuronavigation systems differ in the way they integrate preoperative image data with physical space and in the kind of tracking system they use to follow the surgical tools (e.g., optical, magnetic, ultrasonic or mechanical). In addition, these systems vary in the way image information is controlled by various tools and displayed to the surgeon for interpretation. Although conventional navigation systems have proven quite useful over the last decade, they suffer from the fact that they only use preoperative images, making them unable to adapt to changes that occur during surgery. Thus, if the brain shifts or deforms due to drainage or surgical manipulation,12–14 surgery guided by these images will become inaccurate.

The brain-shift problem can only be solved adequately by integrating intraoperative imaging with navigation technology. Several intraoperative imaging modalities have been proposed. These include CT, MRI and ultrasound (US). Open CT-15,16 and MRI-17–23 based systems, where the patient is transported into and out of the scanner, have obvious logistic drawbacks that limit the practical number of 3D scans acquired during surgery. Also, repeated use of intraoperative CT exposes both patient and medical personnel to considerable radiation doses. Thus, the most promising alternatives in the foreseeable future are interventional MRI24–28 and intraoperative US.29–33 In an interventional MRI system, the surgeon operates inside the limited working space of the magnet. Choosing speed over quality, it is possible to obtain near-real-time 2D images defined by the position of various surgical tools, in addition to updated 3D maps in minutes, without moving the patient. However, these systems require high investment, high running costs, and a special OR, as well as special surgical equipment.

Ultrasound, although used by some groups for several years, has only recently gained broader acceptance in neurosurgery,34 mainly due to improved image quality and its relatively low cost. The image quality and user-friendliness of US have partly been achieved by optimizing and adjusting the surgical set-up,35 as well as technical scan parameters, in addition to integration with navigation technology.31 The additional real-time 2D and freehand 3D capabilities, as well as real-time 3D possibilities, may establish intraoperative US as the main intraoperative imaging modality for future neuronavigation. Ultrasound may be used indirectly in neuronavigation to track the anatomical changes that occur, using these changes to elastically modify preoperative data and navigate according to the manipulated MRI/CT scans,32,36 or the US images themselves may be used directly as maps for navigation.29–31,33,37,38

Even though the direct approach was adopted and demonstrated in the present study, this should not exclude the use of preoperative MRI data during surgery, or automatic deformation of preoperative data to match the intraoperative anatomy detected by US when an accurate and robust method for achieving this exists. The less reliable preoperative images may be useful for obtaining information on surrounding anatomy and as an aid to interpretation of the US images (especially for inexperienced users of this modality). To make essential information available to the surgeon, both preoperative MRI and intraoperative US images should be displayed simultaneously. This results, however, in a vast amount of multimodal image information that must be handled appropriately. By combining the available data at any given time using modern medical visualization techniques, various possibilities are available for presenting an optimal integrated 3D scene to the surgeon. To achieve this, however, it is necessary to overcome obstacles at several critical steps in image-guided surgery.

Critical Steps in Image-Guided Surgery

Patient treatment using image-guided surgery systems involves several important steps, some of which are more critical than others for obtaining optimal therapy for the patient. These steps are shown in Figure 1 and involve: 1) preoperative image acquisition, data processing, and preoperative image visualization for optimal diagnostics, as well as satisfying preoperative therapy decisions and planning; 2) accurate registration of preoperative image data and visualization in the OR for accurate and optimal planning just prior to surgery; 3) intraoperative imaging for updating images for guidance, as well as intraoperative visualization and navigation for safe, efficient and accurate image-guided surgery in the OR; and 4) postoperative imaging and visualization for adequate evaluation of patient treatment. The following sections give a more theoretical description of some of these steps, because they are, together with intraoperative imaging, of the utmost importance for understanding how optimal image-guided surgery with satisfactory precision may be obtained. Figures and images from our laboratory are used to better illustrate and explain the theoretical content.

Registration

The objective of registration is to establish a geometric transformation that relates two representations (e.g., images or corresponding points) of the same physical object.39 It is common to distinguish between image-to-image (I2I) registration and image-to-patient (physical space, reference frame, tracking system) (I2P) registration (Fig. 2). Multimodal I2I registration makes it possible to combine structural (MRI, CT) and functional (fMRI, PET, SPECT) information for diagnosis and surgical planning from various imaging modalities. By comparing images acquired at different times (usually from the same imaging modality), I2I registration is further used to monitor the progress of a disease and in postoperative follow-up. I2P registration is a required step in any neuronavigation system based on preoperative images.

Fig. 1. Important steps in image-guided surgery. 1) Preoperative data acquisition and planning. 2) Patient registration and planning in the OR. 3) Intraoperative data acquisition and navigation. 4) Postoperative control of treatment.

Most registration methods can be characterized as point-based, surface-based, or voxel/volume-based.39 Point-based methods optimize the alignment of corresponding points in two images (I2I) or in one image and in physical space (I2P), and are the underlying methods for patient registration based on skull fiducials, skin fiducials, or anatomical landmarks. Surface-based methods try to match corresponding surfaces. For I2I registration the two surfaces are extracted from the image data, and for I2P registration the physical space surface is generated by either sweeping over the skin with a tracked pointer or using a 3D laser camera.40 Voxel-based methods are used for I2I registration and match two volumes by optimizing their similarity (correlation or mutual information41 is often used). It should be mentioned that if an intraoperative imaging modality is available, the preoperative images could be registered to physical space by using a volume-based I2I registration between the pre- and intraoperative data.
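To make the point-based approach concrete, the sketch below fits a rigid transformation to corresponding point pairs with the standard SVD-based least-squares solution (Arun's method). The function name and interface are ours; the paper does not state which algorithm the navigation system actually uses.

# Minimal sketch of point-based rigid registration: find R, t minimizing
# sum ||R p_i + t - q_i||^2 over corresponding point pairs (illustrative
# only; not the navigation system's actual implementation).
import numpy as np

def fit_rigid_transform(image_pts, patient_pts):
    """image_pts, patient_pts: (N, 3) arrays of corresponding points."""
    p_mean = image_pts.mean(axis=0)
    q_mean = patient_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (image_pts - p_mean).T @ (patient_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the optimal orthogonal matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t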

Accuracy

The overall clinical accuracy in image-guided surgery is the difference between the apparent location of a surgical tool as indicated in the image information presented to the surgeon and the actual physical location of the tool tip in the patient. This accuracy determines the delicacy of the work that can be performed, and is a direct result of a chain of error sources.42 For navigation based on preoperative images, the main contributors to this navigation inaccuracy are the registration process and the fact that preoperatively acquired images do not reflect intraoperative changes. Navigation based on intraoperative 3D US is associated with a similar but independent error chain, where US probe calibration and varying speed of sound are the main contributors. No patient registration is needed for US-based guidance, which makes this kind of navigation comparable to, or even better than, conventional navigation in terms of accuracy, even before the dura is opened.42 In addition, US-based navigation will retain this accuracy throughout the operation if guidance is based on recently acquired US volumes.

Fig. 2. Registration of preoperative images to each other (I2I reg.) for diagnostics and planning in the office, and to the patient (I2P reg.) for intraoperative planning and guidance in the OR. Acquisition and reconstruction of US volumes are performed relative to the reference frame of the tracking system so that registration is not required. The navigation triangle symbolizes the fact that the accuracies involved in navigation based on preoperative MRI and intraoperative ultrasound are independent, and that an observed mismatch between the two modalities does not necessarily imply brain shift. Visualizations at different stages in the operation can be simulated by experimenting with the data available at those stages.

A mismatch between image information displayed on the computer screen and what is physically going on inside the patient's head (see the navigation triangle in Figure 2) can only be evaluated using well-defined physical reference points in the patient, a necessity that is not always available during image-guided surgery. An observed mismatch between preoperative MRI and intraoperative US images could be a direct result of the independent navigation inaccuracies of the two modalities. If a mismatch exceeds a threshold defined by the navigation inaccuracies, we can conclude that a shift has occurred. A measure of the accuracy given by most navigation systems using preoperative MR images is often referred to as the Fiducial Registration Error (FRE), which gives the mean difference between corresponding image points and patient points. The FRE should, however, always be verified by physically touching the patient's head with a tracked pointer and making sure that the physical space corresponds to the image space. If the two spaces match, we have an estimate of the accuracy on the surface of the head, which is probably close to the accuracy inside the head (i.e., the Target Registration Error), but this is only valid before the operation actually commences. On the other hand, if navigation is based on 3D US scans reconstructed with a sound velocity matching that of the imaged objects, the accuracy will be close to what can be found in a thorough laboratory evaluation, and this navigation accuracy will be maintained throughout the operation as long as the 3D map is updated.42
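The FRE as defined above is straightforward to compute once a registration has been fitted; a minimal sketch, reusing the R, t convention from the registration example in the previous section:

import numpy as np

def fiducial_registration_error(image_pts, patient_pts, R, t):
    """Mean residual distance (mm) between the registered image fiducials
    and the corresponding patient fiducials."""
    registered = image_pts @ R.T + t
    return float(np.linalg.norm(registered - patient_pts, axis=1).mean())

Note that a small FRE does not by itself guarantee a small Target Registration Error inside the head, which is why the text recommends verifying the match with a tracked pointer.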

Visualization of Preoperative and Intraoperative Image Information

There are various ways to classify the different visualization techniques that exist.43 For medical visualization of 3D data from modalities like CT, MRI and US, it is common to refer to three different approaches: slicing, volume rendering, and geometric (surface/polygon/triangle) rendering. Slicing methods can be further sub-classified according to how the 2D slice data is generated and how this information is displayed. The sequence of slices acquired by the modality and used to generate a regular image volume is often referred to as the raw or natural slices. From the reconstructed volume we can extract both orthogonal (Fig. 3A–C) and oblique (Fig. 3D–F) slices. Orthogonal slicing is often used in systems for pre- and postoperative visualization, as well as in intraoperative navigation systems, where the tip of the tracked instrument determines the three extracted slices (Fig. 3A). The slices can also be orthogonal relative to the tracked instrument (Fig. 3D) or the surgeon's view (i.e., oblique slicing relative to the volume axis or patient), and this is becoming an increasingly popular option in navigation systems.44

Fig. 3. Generating and displaying slice data. Orthogonal (A) or oblique (D) slices relative to the volume axis can be controlled by a surgical tool (intraoperatively) or by a mouse (pre- and postoperatively). The extracted slices can be displayed directly in a window (B, E), or texture-mapped on polygons in a 3D scene and rendered into a window (C, F).

Volume- and geometric-rendering techniques are not easily distinguished. Often the two different approaches can produce similar results, and in some cases one approach may be considered both a volume-rendering and a geometric-rendering technique.43 Still, the term volume rendering is used to describe a direct rendering process applied to 3D data where information exists throughout a 3D space, instead of simply on 2D surfaces defined in (and often extracted from) such a 3D space. The two most common approaches to volume rendering are volumetric ray casting and 2D texture mapping. In ray casting, each pixel in the image is determined by sending a ray into the volume and evaluating the voxel data encountered along the ray using a specified ray function (maximum, isovalue, compositing). Using 2D texture mapping, polygons are generated along the axis of the volume that is most closely aligned with the viewing direction. The data is then mapped onto these quads and projected into a picture using standard graphics hardware. The technique used to render the texture-mapped quads is essentially the same technique that is used to render geometric surface representations of relevant structures. However, the geometric representations must first be extracted from the image information. While it is possible in some cases to extract a structure and generate a 3D model of it by directly using an isosurface extraction algorithm,45 the generation of an accurate geometric model from medical data often requires a segmentation step first. The most common surface representation uses many simple geometric primitives (e.g., triangles), though other possibilities exist.
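The ray functions mentioned above are easy to make concrete. The toy sketch below casts one axis-aligned ray per pixel through a scalar volume normalized to [0, 1] and applies either front-to-back compositing or maximum-intensity projection; it is purely illustrative and unrelated to the MMVV implementation.

import numpy as np

def composite_raycast(volume, opacity_scale=0.05):
    """Cast one ray per (y, x) pixel along the z axis of `volume` and
    composite the samples front to back; returns a 2D intensity image."""
    color = np.zeros(volume.shape[1:])   # accumulated intensity per ray
    alpha = np.zeros(volume.shape[1:])   # accumulated opacity per ray
    for z in range(volume.shape[0]):
        sample = volume[z]
        a = np.clip(sample * opacity_scale, 0.0, 1.0)  # simple opacity transfer function
        color += (1.0 - alpha) * a * sample            # front-to-back blending
        alpha += (1.0 - alpha) * a
    return color

def mip(volume):
    """The 'maximum' ray function: maximum intensity projection along z."""
    return volume.max(axis=0)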

Finally, image fusion techniques might be beneficial when using the best of both MRI and US, because it is easier to perceive an integration of two or more volumes in the same scene than to mentally fuse those same volumes when presented in their separate display windows. This also offers an opportunity to pick relevant and necessary information from the most appropriate of the available datasets. Ideally, relevant information should include not only anatomical structures for reference and pathological structures to be targeted (MRI and US tissue), but also important structures to be avoided (MRA, fMRI and US Doppler).

The Present Study

Three-dimensional display techniques are considered to be more user friendly and convenient than 2D display, and have shown potential for improving the planning and outcome of surgery.46–50 Rendered 3D medical image data and virtual-reality visualizations have already been reported to be beneficial in diagnosis of cerebral aneurysms, as well as in preoperative evaluation, planning and rehearsal of various surgical approaches.51–61 However, only a few studies have been reported where 3D visualizations were brought into the OR and used interactively for navigating surgical tools to the lesion.44,62,63 Additionally, the 3D scene should be continuously updated using intraoperative imaging techniques so as to always represent the true patient anatomy for safe and efficient surgery.

In the present study, we have developed a Multi-Modal Volume Visualizer (MMVV) for investigating alternative ways of displaying the image information that is available at different stages of an operation. The module was tested using various data sets generated during the treatment of patients with brain tumors and cerebrovascular lesions in our clinic. The neuronavigation system applied during surgery uses both preoperative MRI/CT and intraoperative 3D US. The MMVV scenes were generated after surgery in order to have time to try different visualization approaches. Nevertheless, the application reconstructs the spatial relationship between all the available volumes as seen in the OR, and makes it possible to explore the optimal integration of preoperative MRI data with intraoperative 3D US data.

MATERIALS AND METHODS

3D Image Acquisition

Preoperative 3D MRI Acquisition and Patient Registration

Prior to surgery, patients included in the study were scanned by a 1.5T MRI scanner (Picker or Siemens) that acquired one or more 3D data sets (Fig. 4A) with an in-plane resolution of 1.0 mm (0.78 mm for MRA) and a slice thickness of 1.5 mm (1.0 mm for MRA). The MR images were transferred to the US-based neuronavigation system SonoWand® (MISON AS, Norway; www.mison.no) (Fig. 4B) described elsewhere.31 In the OR, the images were registered to the patient to allow conventional planning and navigation based on preoperatively acquired MR images (Fig. 4C). The registration algorithm used is based on pinpointing five corresponding skin fiducials in the image data as well as on the patient using a pre-calibrated pointer.

Fig. 4. 3D image acquisition. A) Prior to surgery the patient is scanned, and one or more MRI datasets are generated. B) Preoperative MRI data are transferred to the navigation system in the OR and registered to the patient (C). High-quality US images are acquired when needed (D), and the tracked digital images are reconstructed into a regular volume (E) that is automatically registered and can be treated the same way as the MRI volumes in the navigation system.
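The five-fiducial registration step can be exercised with the SVD sketch from the Introduction; the snippet below uses made-up fiducial coordinates and a known transform, merely to show the round trip from point pairs to a recovered transform and its FRE.

import numpy as np

# Hypothetical check of the registration sketch: five synthetic skin
# fiducials (mm), displaced by a known rotation and translation that play
# the role of "patient space".
rng = np.random.default_rng(0)
image_pts = rng.uniform(0.0, 100.0, size=(5, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
patient_pts = image_pts @ R_true.T + np.array([5.0, -2.0, 8.0])

R, t = fit_rigid_transform(image_pts, patient_pts)
print(fiducial_registration_error(image_pts, patient_pts, R, t))  # ~0 mm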

Intraoperative 3D Ultrasound Acquisition

After making the craniotomy, updated high-quality 3D US maps were acquired several times during surgery using the integrated US scanner of the navigation system (Fig. 4D). The sensor frame mounted on the US probe (a 5-MHz FPA probe optimized for brain surgery applications) was tracked using an optical positioning system (Polaris, Northern Digital Inc., Ontario, Canada) during freehand probe movement. The vendor determined the rigid-body transformation from the sensor frame to the US scan plane so that the position and orientation of every 2D US image could be recorded. A pyramid-shaped volume of the brain was acquired by tilting the probe approximately 80° in 15 s. The digital images were reconstructed into a regular volume with a resolution of approximately 0.6 mm in all three directions and treated the same way as the MRI volumes (Fig. 4E). The process of US acquisition, data transfer, reconstruction and display takes less than 45 s for a typical US volume. Repeated 3D scans were performed when needed (as indicated by real-time 2D US, for example). The accuracy of US-based neuronavigation using the SonoWand® system has been evaluated to be 1.4 mm on average,42 and this will be valid throughout the operation as long as the dataset used for navigation is frequently updated. However, most of the datasets used in the present study were acquired by a pre-release version of the system, with an inaccuracy not exceeding 2 mm (from our own laboratory results).
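The paper does not detail the reconstruction algorithm, but a common freehand strategy is to bin the pixels of each tracked 2D frame into the nearest voxel of the target grid and average. The sketch below, with our own naming, assumes each frame comes with a 4x4 matrix mapping homogeneous pixel coordinates (column, row, 0, 1) to reference-space millimeters, pixel scaling included.

import numpy as np

def reconstruct_freehand(frames, transforms, volume_shape, voxel_mm=0.6):
    """frames: list of 2D US images; transforms: matching 4x4 arrays.
    Returns the averaged voxel volume (zeros where no pixel landed)."""
    acc = np.zeros(volume_shape)
    cnt = np.zeros(volume_shape, dtype=np.int64)
    for img, T in zip(frames, transforms):
        rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        pix = np.stack([cols.ravel(), rows.ravel(),
                        np.zeros(cols.size), np.ones(cols.size)])
        xyz = (T @ pix)[:3]                          # pixel positions in mm
        idx = np.round(xyz / voxel_mm).astype(int)   # nearest voxel index
        ok = np.all((idx >= 0) & (idx < np.array(volume_shape)[:, None]), axis=0)
        i, j, k = idx[:, ok]
        np.add.at(acc, (i, j, k), img.ravel()[ok])
        np.add.at(cnt, (i, j, k), 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)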

Integrating the Different Datasets into a Common Coordinate System

The method used to register preoperative images to the patient (I2P) in the navigation system is a point-based method that uses skin fiducials (Fig. 4C). For a given patient, therefore, all the available MRI volumes would contain fiducials. These points were used to integrate all the preoperative data (MRA in addition to T1 or T2, for example) into the common coordinate system of the "master" volume using a point-based I2I registration method (Fig. 2). This made it possible to simulate preoperative planning after surgery, using the data available at this particular stage in the operation. Intraoperatively, the tracked patient reference frame was used as the common coordinate system for both pre- and intraoperative data. Preoperative data was moved into the physical space by registering the "master" volume to the patient. The US volumes were acquired in the coordinate space of the tracking system and were accordingly placed correctly relative to the reference frame in the operating room. Postoperative data was not used. The visualization module (MMVV) supports different registration methods, but in the present study the registration matrices exported by the navigation system were used, as the aim was to fuse data with the same spatial relations as seen in the OR.
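In terms of the exported matrices, the chain amounts to composing two homogeneous transforms. The sketch below (our naming, not the system's API) maps points from a secondary MRI volume via the "master" volume into the tracked patient reference frame; US volumes already live in that frame.

import numpy as np

def to_patient_space(points, i2i, i2p):
    """points: (N, 3) coordinates in a secondary MRI volume; i2i maps them
    into the "master" volume, i2p maps the master volume to the patient
    reference frame; both are 4x4 homogeneous matrices."""
    points_h = np.column_stack([points, np.ones(len(points))])
    return (points_h @ (i2p @ i2i).T)[:, :3]  # US volumes need no transform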

Medical Image Fusion Using the Multi-Modal Volume Visualizer (MMVV)

The multimodal image fusion application was developed and used on a 500-MHz PowerBook G3 computer with 384 MB RAM (ATI Rage Mobility 128 graphics card with 8 MB RAM, Mac OS X, Apple Inc., CA). The software was built around a set of classes from Atamai (www.atamai.com), which in turn were built on top of the visualization toolkit VTK (public.kitware.com/VTK) and the OpenGL API, using the Python programming language. No preprocessing was done to improve the quality of the MRI and US volumes presented in this paper.

Slicing

The slicer object implemented in the visualization module supports both orthogonal and oblique slicing relative to the volume axis. The extracted slices can be displayed in a variety of ways, where the main difference is between directly displaying the slices in a window on the screen (Fig. 3B, E) and texture-mapping the slices on polygons that are placed (together with other objects) in a 3D scene that is rendered into a window (Fig. 3C, F). Figures 3B and 3E show images from the two modalities in separate windows, while Figures 3C and 3F show the MRI and US slices in a fused fashion. Each of the three planes can be turned on and off, and on each plane we can place slice data from any of the available volumes. It is also possible to fuse slice data from different volumes and map the resulting image onto one of the planes using compositing techniques. The user can easily cycle through the available volumes and map corresponding slice data to a given plane, or rotate a volume from one of the orthogonal planes to the next. Instead of using sliders to interact with the slices, they are directly positioned by pointing, grabbing and pushing them with a mouse in the 3D scene. Figure 5A shows three different MRI volumes mapped to the same slicer object: the bottom axial slice is taken from a T2 volume and the top-right coronal slice from a T1 volume, while the top-left sagittal slice is a fusion between the T2 volume and an MRA volume. As can be seen from the red MRA data, every volume has its own color table, so that color as well as brightness and contrast can be adjusted individually for the slices belonging to a given volume.
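The compositing mentioned above reduces, in its simplest form, to an alpha blend of two co-registered slices. The sketch below assumes both volumes are resampled on the same grid and normalized to [0, 1], which glosses over the per-volume color tables of the real module.

import numpy as np

def fused_axial_slice(mri, us, z, alpha=0.5):
    """Blend axial slice z of two aligned, normalized volumes into one
    image; alpha weights the US contribution."""
    return (1.0 - alpha) * mri[z] + alpha * us[z]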

Volume Rendering

The volume-rendered object developed in the present study is 2D texture-based. This made it possible to generate pleasant visualizations at interactive speeds, even on a notebook computer. The operator determined what to see by choosing from a set of predefined transfer functions for color and opacity that were easily adjusted to a particular dataset (fuzzy classification). Several structures within a volume can, in theory, be visualized using a single volume-rendered object. However, unsegmented medical data often contain different structures occupying nearly the same gray values, making it very difficult to isolate and render just the structure of interest. The volume-rendered object therefore makes it possible to interactively cut the volume with six clipping planes. Also, many volumes only contain a single structure (e.g., the vascular tree in MRA or US flow volumes).
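Both interactions described here, opacity classification and clipping, can be sketched independently of any rendering library; the transfer-function knots and the plane parameters below are made up for illustration.

import numpy as np

def classify_opacity(volume, knots=((0.0, 0.0), (0.3, 0.0), (0.6, 0.8), (1.0, 0.8))):
    """Piecewise-linear opacity transfer function over normalized gray values."""
    x, y = zip(*knots)
    return np.interp(volume, x, y)

def clip_half_space(opacity, normal, offset_mm, voxel_mm=1.0):
    """Zero the opacity on one side of the plane n.x = offset, i.e., one of
    the six interactive clipping planes."""
    grid = np.stack(np.indices(opacity.shape), axis=-1) * voxel_mm
    keep = grid @ np.asarray(normal, dtype=float) <= offset_mm
    return np.where(keep, opacity, 0.0)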

Geometric Rendering

To render a geometric object, the surface of the structure must first be given a geometric representation. The algorithm used to extract the MRI tumors in the present study was based on the manual part of a semi-automatic method developed to segment structures in US volumes.64 In an arbitrary number of the slices that went through the structure of interest, a number of control points on the tumor border were marked. A B-spline was run through these points, and from all these parallel splines a 3D model representing the tumor surface was created. The geometric object used to represent a surface model in the 3D scene could be assigned an arbitrary color, and it was possible to see inside the object, either by clipping it interactively using the bounding box of the structure (Fig. 5B), or by making the model transparent.
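The contour step can be sketched with SciPy's spline routines: a closed interpolating B-spline is run through the clicked border points in one slice, and stacking such resampled contours over the parallel slices yields the point grid from which a surface mesh can be built (the meshing itself is omitted; the function name is ours).

import numpy as np
from scipy.interpolate import splprep, splev

def contour_from_control_points(points_xy, n_samples=100):
    """points_xy: (N, 2) border points clicked in one slice, in order
    (N > 3 for a cubic spline). Returns a closed, resampled contour."""
    tck, _ = splprep(points_xy.T, s=0.0, per=True)  # closed interpolating B-spline
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x, y = splev(u, tck)
    return np.column_stack([x, y])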

Multimodal Image Fusion: Combining Different Visualization Techniques with Multimodal Images in the Same 3D Scene

The MMVV module can render into multiple windows, each with its own contents, and the viewpoints in the different 3D scenes can optionally be coupled. Each window offers the possibility of placing multiple slicer, volume-rendered (VR) and geometric-rendered (GR) objects together in a single 3D scene. A slicer object can show any of the available volumes, as well as different combinations of volumes on each of its three slice planes. Each VR object can only visualize a single volume. If two objects are used to visualize spatially overlapping structures from different volumes, artifacts may occur.43 Implementing a single VR object that is capable of visualizing attributed voxel data originating from multiple registered volumes can solve this. A GR object typically shows only one structure, though multiple structures extracted from a single volume could be visualized using the hierarchical option of the geometry class. Rendering semitransparent geometry together with VR structures might also produce artifacts.43 This can be fixed by sorting the geometry and the texture-mapped polygons before rendering. Three-dimensional scenes consisting of more than a single slicer, VR and GR object at a time were rarely used. All objects could easily be turned on and off. Direct interaction with the objects in the 3D scene was used instead of spreading numerous controls around the rendered images. Support for stereo is built into VTK and should therefore be easy to integrate into the visualization module. However, the computer used in the current study does not support OpenGL-based stereo in hardware, so the option to turn stereo on and off was not used. Figure 5C shows a slicer object, a VR object and a GR object in the same scene. All objects use data from the same MRI volume. The slicer object has its coronal and sagittal slices turned off, and the axial plane is positioned so that it slices through the tumor. The VR (grey) and GR (red) objects show the same MRI tumor. The grey VR tumor surrounds the red GR tumor, illustrating the greater detail often achieved by direct volume rendering as compared to segmented and smoothed GR objects. Figures 5D and 5E illustrate how multimodal information can be integrated using different combinations of slicing, volume rendering and geometric rendering. Table 1 summarizes alternative ways to fuse the available data at a given stage in the operation using the MMVV.

Clinical Feasibility Studies: Multimodal Visualization for Optimal Patient Treatment

Preoperative Planning and Intraoperative Image Guidance

In the following, we have chosen image data from typical clinical cases to demonstrate how multimodal imaging and visualization techniques may be used to explore essential information required for patient treatment. Preoperative MRI and intraoperative 3D US images have been integrated in various ways, according to practical requirements and the availability of information in the process. In particular, we have focused on how the various display and image fusion algorithms may most efficiently solve practical problems in the various steps of preoperative planning and image guidance. For preoperative planning we have chosen various MR image data from a tumor operation. Essential information to be explored in this case was tumor location in relation to the surrounding anatomy, tumor vascularization, and intraoperative US imaging early in the operation, before resection was initiated. In this case, it was also important for the surgeon to detect any vessels that might be present inside or near the tumor, the tumor border, and any shift that might be present initially. Using the MMVV software, we also evaluated various solutions for visualization of brain shift, as well as solutions for correction and compensation. In addition, we have focused on how to make satisfactory display alternatives for following the progress of an operation and for controlling the operation at the end of the procedure. Various techniques for controlling the procedure at the end of tumor resections were tested using US tissue imaging, and techniques for controlling aneurysm surgery performed by clipping were tested using US Doppler acquisition and visualization.

Clinical Study to Quantify the Mismatch Prior to Surgery

In addition to visualizing the mismatch, the MMVV module can be used to quantify the spatial mismatch between volumes. This feature was used to quantify the mismatch between preoperative MRI and intraoperative 3D US at the earliest possible stage in the operation. From a total of 120 patients undergoing US-based neuronavigation from January 2000 to December 2001, 12 were randomly chosen for inclusion in this study. The US volumes used in the comparison were acquired immediately after an acoustic window through the skull was established, but before the dura was opened. At this early stage, the two volumes should be very similar, the main difference being that the same structure could be imaged differently due to the underlying characteristics of MRI and US. For each of these patients, the two volumes (approximately aligned by the tracking system) were orthogonally sliced and fused using different splitting techniques (Fig. 6A–C). The desired axial, coronal and sagittal slices, as well as the splitting point, were adjusted using sliders. The MRI volume was then manually translated until the match with the US volume was as good as possible (Fig. 6D–F), using structures seen in both modalities as reference (e.g., a lesion, the falx, or the ventricles). The results were evaluated by a panel of three skilled operators and adjusted until consensus regarding the optimal match was achieved. The mismatch vector was then recorded. A conservative approach is to say that a mismatch greater than the sum of the navigation inaccuracies associated with MRI- and US-based guidance is most likely caused by brain shift.

Table 1. Datasets and Visualization Techniques in Various Combinations

Volumes originating from both pre- and intraoperative acquisitions, which can be visualized in a variety of ways, offer many options. However, only relevant structures for a given operation (type and phase) should be shown. These structures should be extracted from the most appropriate volume (considering factors like image quality and importance of updated information) and visualized in an optimal way (both individually and in relation to each other). The table summarizes the situation illustrated in Figure 7F, where the most interesting structures are the tumor (T) and the vessels (V), with the other objects being used for overview (O).

Available volumes: preoperative MRI (T1, T2, PD, Angio, fMRI, MRSI) and intraoperative ultrasound (Tissue and Doppler, acquired at Time1 ... Timen).

Window1 (3D scene1 / Viewpoint1):
  Ortho sliced: A, TC, TS, O
  Volume rendered (VR): V, V
  Geometry rendered (GR): T
... Windowm

Fig. 6. Manual quantification of mismatch. Three orthogonal slice planes are displayed. Each plane is split in the middle of an interesting structure, and the different regions of the plane are assigned data from the two modalities (A–C). The MRI volume is then translated until the match with the US volume is as good as can be achieved using the eye for adjustments (D–F). The length of the shift vector can then be calculated.
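The decision rule described above is simple enough to state in code. In the sketch below, the mismatch vector is the manual MRI translation, and the two inaccuracy figures are parameters: the 1.4 mm US figure is quoted earlier in the text, while any MRI figure passed in is a placeholder for the system at hand.

import numpy as np

def indicates_brain_shift(mismatch_mm, mri_inaccuracy_mm, us_inaccuracy_mm=1.4):
    """mismatch_mm: (3,) translation found by the manual matching.
    Returns the shift length and whether it exceeds the combined
    navigation inaccuracies (the conservative brain-shift criterion)."""
    shift = float(np.linalg.norm(mismatch_mm))
    return shift, shift > (mri_inaccuracy_mm + us_inaccuracy_mm)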

Fig. 5. Integrating different visualization techniques and modalities (D–F) in the same 3D scene. A) A slicer object where each of the three orthogonal planes shows a different MRI volume (T1, T2, and T2 fused with MRA). B) Volume-rendered (VR) arteries from an MRA volume (VR-MRA arteries in red) in the same 3D scene as a geometric-rendered (GR) tumor extracted from a T2-MRI volume (GR-T2 tumor in green). C) The same MRI tumor is geometric rendered (in red) as well as volume rendered (white fog), illustrating the greater detail often achieved with the latter technique. In addition, an axial MRI slice is displayed where the tumor is located and improves the overview. D) An intraoperative coronal US slice (as well as VR-US-Doppler vessels) integrated with axial and sagittal MRI slices. The mismatch between pre- and intraoperative data is visualized using a GR-MRI tumor together with a US slice through the same tumor (E), and an MRI slice together with a VR-US tumor (F).

Fig. 7. Preoperative planning in the office based on MRI (A–C), as well as in the OR based on both MRI and updated US (D–F). A) Localization of the target, i.e., the green GR T2-MRI tumor seen in front of the axial and coronal T1-MRI slices used for overview. B) Search for the optimal surgical approach that avoids important structures, e.g., the VR-MRA vessels shown in red. C) Infiltrating vessels complicating the surgical procedure (revealed by cutting into the tumor). The plan is updated with intraoperative US data to make sure that vessel locations (D shows VR-Power-Doppler-US vessels in red) and the tumor border (E shows an axial US slice) are correctly displayed. F) Use of updated US data when available (VR-US vessels in blue, as well as an axial US slice) with preoperative MRI data filled in for improved overview.


RESULTS

Multimodal Image Fusion in Preoperative Planning

Generally, multimodal image fusion has been shown to be beneficial for various surgical approaches in our clinic. We have illustrated how integrated visualization may be used for planning the surgical approach in tumor surgery (Fig. 7). The conventional planning process consists of localizing the target area (Fig. 7A), which in this case is a brain tumor to be resected. Important activities will be to choose an optimal surgical approach that avoids critical structures like blood vessels (Fig. 7B, C), as well as eloquent areas (which may be shown using fMRI [not shown here]). This can be done in the office after preoperative MRI data is acquired. When US is used as the intraoperative imaging modality, it is also important to plan an optimal acoustic window into the skull so that the relevant portion of the surgical field can be covered by 2D/3D US data.35 In the OR, after the patient is positioned according to the plan and the preoperative MRI images are registered to the patient, the preoperative plan in the computer is transferred to the patient by marking the entry point, and possibly a separate mini-craniotomy for the US probe, on the patient's skull. After the craniotomy is made, 3D US can be acquired and the preoperative plan can be updated to an intraoperative plan that corresponds to the true patient anatomy. Important updating features of US are blood detection (Fig. 7D, F) and tissue imaging (Fig. 7E, F). An alternative imaging modality like US may show different or additional characteristics of anatomy and pathology to those shown by MRI, e.g., regarding tumor borders65 and vascularization. In addition, MRI data with matching high-quality 3D US data acquired before surgery begins was found to be the best way for inexperienced neurosurgeons to become familiar with US, interpret essential information in the images, and discover how identical structures are imaged using the two modalities.

Identification, Correction and Quantification of Brain Shift

To be able to use medical images for guiding surgical procedures, it is essential that the images reflect the true patient anatomy. Brain shift should be monitored, and when the shift exceeds the acceptable amount for the operation at hand, preoperative MR images should no longer be relied upon for guidance. Navigation must then be based on updated 3D US data of the target area. We present (Fig. 8) various ways in which brain shift can be visualized so that the surgeon can interpret this information in an easy and intuitive way and use it safely and efficiently for optimal surgical guidance. As can be seen from Figure 8A, both modalities must be present to detect brain shift (i.e., a minimally invasive or closed procedure is performed, so the physical target area with surgical instruments will not be visible). Image fusion based on blending MRI and US together can, to a certain degree, reveal brain shift in the border zone (Fig. 8B). To clearly observe a mismatch, we must either split a slice in the middle of an interesting structure and put information from the two modalities on different sides (Fig. 6), or put updated US information on one slice and MRI on another and observe the intersection between the two slices (Fig. 8C, G). Alternatively, we can overlay some kind of data (e.g., a volume-rendered or segmented geometric object) from one modality onto a slice from another modality (Fig. 8C–F), or, based on data from one modality, volume render the same object as is segmented and surface rendered from another modality (Fig. 8I). As can be seen in Figure 8, a considerable mismatch is detected in the right-to-left direction.

Monitoring brain shift by visualizing the mismatch between pre- and intraoperative image data helps the surgeon to decide when unmodified MRI data should be used only for overview and interpretation. Correcting and repositioning the preoperative images so that they correspond to the true patient anatomy (as monitored by intraoperative US) will greatly increase the usefulness of the MRI data during surgery. Ideally, this should be done automatically in a robust and accurate manner. Until such a method exists, however, only US is to be trusted for guidance and control, and various slicing and rendering techniques are used to fuse preoperative MRI data, which might be manually translated into a more correct position, around the US data. Figure 8 shows the mismatch before (G, I) and after (H, J) manual correction.

Fig. 8. Identification and correction of brain shift using multimodal image fusion. A) Orthogonal MRI slices cut through the target area. B) Intraoperative US is shown transparent and overlaid on existing MRI slices. C) Axial MRI slice, sagittal US slice, and blended coronal slice, in addition to an MRI-segmented tumor that is given a geometric representation (GR-MRI tumor in red). Mismatch is seen between an MRI slice and a VR-US tumor in red (D); between a US slice and a VR-MRI tumor in red (E); between a US slice and a GR-MRI tumor in red (F); between an MRI slice and a US slice (G); and between a VR-US tumor in gray and a GR-MRI tumor in red (I). H and J are mismatch-corrected views of G and I, respectively.

Fig. 9. Multimodal imaging for guiding a tumor operation with resection control. A) A coronal MRI slice cuts through the target area (VR-US-flow in red). The coronal slice is replaced by the corresponding first US slice (B), and a slice extracted from one of the last 3D US volumes acquired (C). D, E and F are volume-rendered representations of A, B and C, respectively. As can be seen from C and F, there may be some tumor tissue left.

To quantify the mismatch between similar structures recognized in both modalities at the earliest possible stage, immediately after the first US volume became available (i.e., after the craniotomy but before the dura was opened), we used the manual method previously outlined (Fig. 6). Table 2 shows the quantitative data obtained in the present study based on a random sample of 12 patients undergoing surgical treatment in our clinic. A quantified mismatch greater than the sum of the two navigation inaccuracies is an indication of brain shift, as previously explained. In the present study, a mismatch indicating brain shift was detected in 50% of the cases even before the surgical procedure had started.

Surgery Guidance and Control

In addition to a safe approach to the surgical target area, recent reports have shown that radical resections are important for patient outcomes in the management of brain tumors.66 To achieve this, it is important to know where the tumor border is and how much tumor tissue is left. A minimally invasive procedure, where clear sight is not an option, will require some form of image guidance. Ideally, the entire resection should be monitored by a real-time 3D intraoperative modality and presented to the surgeon in the form of a 3D scene consisting of the true intraoperative positions of the surgical instruments in relation to structures to be avoided and removed. Still, much can be achieved by updating the region of interest with 3D US during the procedure and displaying preoperative data for an increased overview, as presented in Figure 9. Axial and sagittal MRI slices are used for overview purposes, while the coronal slice of interest cuts through the tumor. The coronal slice shows preoperative MRI data (Fig. 9A), an early US volume acquired before the resection started (Fig. 9B), and US data acquired towards the end of the operation for resection control (Fig. 9C). From a comparison of A) and B), or their volume-rendered representations in D) and E), respectively, it can clearly be seen that the same tumor is imaged differently by the two modalities and that a shift is present. Looking at C) and F), there might still be some tumor tissue left before a complete radical resection is achieved.

Direct volume rendering of MRA and 3D US Doppler data has proven quite useful for exploring complex anatomical and pathological vascular structures in the brain. High-quality renderings can be generated without the need for any filtering or segmentation. We have tested this display technique for surgical guidance of both aneurysms and arteriovenous malformations (AVMs). Figure 10 shows a 3D scene from a patient with an aneurysm, which is treated by microsurgical clipping. Preoperative MRA is important for exploring the extension and location of the lesion for optimal preoperative planning (Fig. 10A). As in the tumor case, it is important to plan the position of the craniotomy, not only to find the most optimal approach to the lesion, but also to obtain high-quality intraoperative images. After the craniotomy has been made, a 3D US Doppler scan is acquired and the target area is replaced with updated US data displayed in red (Fig. 10B), while MRA data is kept in the 3D scene to give an increased overview of the vascular tree. By using an axial MRA slice through the aneurysm instead of a 3D MRA rendering, the mismatch with the US Doppler angiography can easily be seen, indicating that a brain shift has occurred, as in the tumor case (Fig. 10C). Zooming in on the aneurysm, it can be seen what is to be removed, and a comparison of Figures 10D and 10E shows how identical structures are imaged and rendered using MRA and US Doppler, respectively. To confirm that the clipping of the aneurysm was performed according to the plan, we can compare volume renderings of the aneurysm based on US acquired before (Fig. 10E) and after (Fig. 10F) clipping. Here, 3D visualization was important both for locating the lesion and for controlling the vessel anatomy and blood flow before and after surgery.

Table 2. Results from Clinical Mismatch Analysis

If the mismatch between preoperative MRI and intraoperative US is greater than the sum of the independent navigation inaccuracies, we have an indication of brain shift. In the present study, this happened in 6 of the 12 cases where the mismatch was quantified. This means that in approximately 50% of the cases one can expect to find considerable shifts even in the early stage of an operation.

Fig. 10. 3D displays for improved aneurysm operation with clipping control. A) VR-MRA aneurysm for overview. B) The target area is replaced with updated US data (VR-US-flow aneurysm in red). C) The mismatch between the preoperative MRA slice through the aneurysm and the intraoperative VR-US-flow aneurysm in red is clearly visible. The other panels show magnifications of the VR-MRA aneurysm (D), and the VR-US-flow aneurysm before (E) and after (F) clipping.

Fig. 11. The US image in B can be replaced by C, where MRI data from A is filled around the US data without obscuring the updated map of the target area, at the same time as an improved overview is achieved. Use of overlay as an aid to interpretation as well as brain-shift assessment will partly hide the data and should therefore be easy to turn on and off (D). E) Traditional display of orthogonal slices coupled to the viewpoint in the 3D scene. When the 3D scene is rotated to simulate the surgeon looking at the patient from a different direction, the 2D display of the slices follows in discrete steps (each slice has two sides and each side can be rotated in steps of 90°) to approximately match the surgeon's view of the patient. F) Virtual navigation scene. Four objects can be seen: 1) the patient reference frame used by the tracking system; 2) a US probe with real-time US data (both tissue and flow) mapped onto the virtual scan sector; 3) the target, which could be extracted from preoperative data and given a geometric representation; and 4) a surgical tool with an attached tracking frame.

DISCUSSION

In this paper, we have demonstrated technology that integrates various imaging modalities, along with different 2D and 3D visualization techniques that may be used for improving image-guided surgery, as well as preoperative planning and postoperative evaluation. The advantages of 3D display technologies have been pointed out by other research groups, and include the increased overview and improved diagnostics and surgery planning.47,51–54 Although many commercially available surgical navigation systems offer integrated 3D display facilities for overview and planning of the procedure, few have integrated intraoperative 3D imaging that can cope with brain shift during surgery. At the same time, intraoperative imaging modalities like interventional MRI and intraoperative 2D and 3D ultrasound are increasingly being presented.13,17,32,36,66 Most available 3D display technology is, however, demonstrated on CT and MR image data, because the images are of high resolution with reasonably good contrast and a low noise level. Ultrasound images, which have generally been relatively inhomogeneous with a high noise level, are now improving in image quality and have shown promising results for 3D display using volume-rendering techniques.44 Also, 3D visualization of intraoperative images encounters other challenges due to increased image artifacts, as well as decreased image quality throughout the operation. The various approaches used to obtain intraoperative 3D imaging, as well as the fact that preoperative images may also be useful during navigation and not only for planning, reveal a demand for 3D display technology that can cope with the various imaging modalities used both pre- and intraoperatively.

Advantages and Challenges Using Multimodal 3D Visualization in the Clinic

The results from the feasibility studies presented in this paper are promising. 3D visualization seems to offer many advantages due to improved perception of complex 3D anatomy and easy access to more detailed information inside the 3D volume, especially in combination with 2D display techniques.

Slicing versus 3D Display

Displaying slices in separate windows (Fig. 3B, E), with crosshairs overlaid to indicate the current tool-tip position, makes it possible to display many slices without obscuring others. The drawback is that it might be difficult to handle all this information distributed over many separate windows. We have shown that the slices (one to three) from one to several volumes (acquired pre-, intra- or postoperatively) may be displayed together in one window, making it easier to interpret the information. Furthermore, the US image in Figure 11B can be replaced by the one in Figure 11C, where MRI data from Figure 11A is filled in around the target area for an improved overview but does not obscure the updated US data. On the other hand, overlays used to aid US interpretation often hide information (Fig. 11D), and should hence be easy to turn on and off. Still, it is difficult to mentally match images presented in this way with the current patient anatomy as seen from the surgeon's viewpoint, or to understand the orientation of the surgical tools relative to the orthogonal volume slicing.

The orientation problem can be solved by integrating the 2D planes in a 3D scene (Fig. 3C, F) and manually rotating the scene until the viewpoint corresponds to the surgeon's view. This may also be controlled automatically by tracking the head movements of the surgeon in addition to tracking the surgical tools and probes. Potential problems with this approach are that some slices will be partly obscured (Fig. 3C), and that slices almost parallel to the viewing direction will be difficult to see (Fig. 3F). To minimize these problems, only relevant information should be displayed, and it should be easy to turn the different objects in the scene on and off. Furthermore, the orientation of the 3D scene can be tied to the traditional 2D display of the slices so that the information is presented as close to the surgeon's view as possible (Fig. 11E). Thus, when a surgeon moves the instrument to the left relative to himself, the tracked instrument will also move (approximately) left in the axial and coronal slices, while the sagittal window will display new data, extracted further to the left as seen from the surgeon's perspective.

Surface versus Volume Rendering

Although 2D slicing is essential for detailed interpretation of the information, complex 3D structures must be mentally generated from a stack of 2D slices. This requires years of experience, and is one of the reasons why research groups have introduced various 3D display techniques for planning as well as in the OR for surgical guidance. Modern 3D rendering techniques are particularly useful for assessing complex 3D structures like the vascular tree, and for obtaining an overview of important relations between relevant structures (e.g., infiltrating vessels in a tumor). In principle, both volume-rendering and geometric extraction techniques can be applied directly to volume data, as well as to segmented data. For practical visualization of 3D MRI and US data, we often found that informative views could be generated by isolating the interesting parts, i.e., by clipping the volume and opacity-classifying the content of the resulting sub-volume. Another important advantage of volume rendering is that both the surface of the object of interest and its inner content (e.g., for a tumor with cysts) may be displayed.
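As a concrete sketch of this clip-and-classify approach, the following uses the Visualization Toolkit (cited as ref. 43) in Python to crop a sub-volume and opacity-classify its content. The file name, intensity breakpoints and cropping planes are placeholders; this is not presented as the MMVV code itself.

```python
import vtk

# Load a reconstructed 3D volume (file name is a placeholder).
reader = vtk.vtkMetaImageReader()
reader.SetFileName("volume.mha")

# Opacity classification: suppress low (noisy) intensities and make
# the structure of interest increasingly opaque (breakpoints illustrative).
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(60, 0.0)
opacity.AddPoint(120, 0.3)
opacity.AddPoint(255, 0.8)

color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(255, 1.0, 0.9, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()

# Ray cast only a clipped sub-volume around the target, so that
# surrounding tissue does not obscure the region of interest.
mapper = vtk.vtkFixedPointVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())
mapper.CroppingOn()
mapper.SetCroppingRegionPlanes(40, 160, 40, 160, 20, 100)  # data coords, illustrative

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
```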

Geometric rendering of clinically interesting structures is most successful if an intermediate segmentation step is performed first, so that an accurate surface representation can be generated. Although advanced methods for automatic or semi-automatic segmentation exist, manual methods must often be used, especially for US data. In many cases, it is also necessary to verify the tumor border, e.g., in a low-grade tumor where it is hard even for an experienced radiologist to delineate the border. Still, promising segmentation methods exist. One example is the deformable models approach,67 where a template (taken from an anatomical atlas, for example) is deformed to fit new US volumes acquired during the operation. In summary, it is our experience that volume rendering is the most appropriate 3D visualization method for US data, since the generation of a surface representation requires a segmentation step, which is generally a more demanding task for US than for MRI data. In addition, the time available for this additional segmentation step is more limited in the OR than pre- or postoperatively.
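For the geometric-rendering path, the following is a minimal sketch of how a surface representation might be generated once a (binary) segmentation exists, using the Marching Cubes algorithm cited as ref. 45. The label file and smoothing parameters are assumptions for illustration only.

```python
import vtk

# Binary segmentation volume (0 = background, 1 = tumor); placeholder file.
reader = vtk.vtkMetaImageReader()
reader.SetFileName("tumor_labels.mha")

# Extract the iso-surface at the boundary between background and object.
surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 0.5)

# Smooth the stair-stepped surface that Marching Cubes produces
# from binary data before rendering it.
smoother = vtk.vtkWindowedSincPolyDataFilter()
smoother.SetInputConnection(surface.GetOutputPort())
smoother.SetNumberOfIterations(20)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(smoother.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetOpacity(0.6)  # semi-transparent, so interior slices stay visible
```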

Future Prospects

Multimodal Imaging in Neuronavigation

As previously stated, the Multi-Modal Volume Visualizer is currently being used to explore different ways to integrate the available image information. We plan to integrate the MMVV module with tracking technology, making it a suitable tool for direct image-guided surgery in the OR. This means that the 3D scene will be controlled by surgical instruments and not only by the mouse. Virtual representations of the tracked pointers and surgical instruments, as well as the US probe with the real-time 2D scan plane attached, will also be integrated in the 3D scene (Fig. 11F). By fusing the different datasets in a common scene, we can compare real-time 2D US to corresponding slices from MRI and US volumes to detect brain shift.
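The kind of scene update this integration implies can be sketched as follows: each frame, a 4 x 4 pose matrix from the tracking system is applied to the virtual instrument. The tracker interface (`tracker.get_pose`) is hypothetical; only the VTK side reflects a known API.

```python
import vtk

def update_tool_actor(tool_actor: vtk.vtkActor, pose_rows) -> None:
    """Apply a tracked 4x4 pose (tool space -> reference/patient space)
    to the virtual instrument in the 3D scene."""
    m = vtk.vtkMatrix4x4()
    for i in range(4):
        for j in range(4):
            m.SetElement(i, j, pose_rows[i][j])
    tool_actor.SetUserMatrix(m)

# Hypothetical per-frame update loop:
# while navigating:
#     update_tool_actor(pointer_actor, tracker.get_pose("pointer"))
#     render_window.Render()
```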

Real-Time 3D Ultrasound Imaging

Real-time monitoring of the position of surgical instruments in relation to the patient's current anatomy is a prerequisite for safe performance of completely image-guided resections. A limitation of the real-time 2D US technique is that it is difficult to obtain a longitudinal view of the surgical instrument at all times.35 This can only be solved by real-time 3D US: instead of extracting slices from a recently acquired 3D ultrasound volume, the displayed 2D slices would be taken from a continuously updated 3D volume and would therefore show the instrument in the image itself. In addition, it will be possible to render and integrate the real-time image data in exciting new ways. Real-time 3D visualization requires real-time 3D acquisition, transfer and rendering.
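Until real-time 3D acquisition is available, the slice-from-volume display described above can be sketched with VTK's reslice filter: an oblique cut plane, which in navigation would be defined by the tracked tool, is extracted from the most recent 3D US volume. The axes and origin below are placeholder values.

```python
import vtk

# Most recent freehand 3D US reconstruction (file name is a placeholder).
reader = vtk.vtkMetaImageReader()
reader.SetFileName("us_volume.mha")

reslice = vtk.vtkImageReslice()
reslice.SetInputConnection(reader.GetOutputPort())
reslice.SetOutputDimensionality(2)

# The first three columns of the reslice axes give the cut plane's
# direction cosines, the fourth its origin; in use these would come
# from the tracked instrument's orientation and tip position.
axes = vtk.vtkMatrix4x4()
axes.DeepCopy((1, 0, 0, 30.0,
               0, 0, -1, 40.0,
               0, 1, 0, 25.0,
               0, 0, 0, 1))
reslice.SetResliceAxes(axes)
reslice.SetInterpolationModeToLinear()
reslice.Update()  # the output 2D image can then be displayed or overlaid
```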


Automatic Registration and Real-Time Updating of Preoperative Data

The current study makes use of preoperative MRI for surgical planning as well as for overview and interpretation during surgery. Preoperative data are registered directly to physical space, and are not modified during surgery. Though challenging, multimodal image-to-image (I2I) registration68 between MRI and the first intraoperative US volume would allow us to move preoperative data to physical space indirectly (Fig. 2). This has accuracy implications, because placing the MRI volumes in the patient this way depends on the error chain associated with US-based navigation, in addition to the errors of the multimodal registration process itself. Still, the present paper and earlier work42 show that this could be favorable in terms of accuracy, since the first US volume acquired is more accurately placed in the patient than MRI data registered directly to physical space. However, there is still a need for image-to-patient (I2P) registration to allow conventional MRI-based planning in the OR before the craniotomy for the US probe is made. A simple point-based method that uses anatomical landmarks will probably be sufficient until the acoustic window into the brain is opened. We are currently searching for optimal ways to do both I2P and multimodal I2I registration in an efficient, robust and user-friendly way. Surgical manipulation and resection will alter the anatomy, which may be continuously monitored using real-time 2D and freehand 3D US. Preoperative MR images may be repeatedly aligned69 (elastically) with new US data (Fig. 2) or, alternatively, the differences between consecutive US volumes70 may be measured and used to update the MRI volume in sequence. The second approach will probably be the easier one, as it implies a mono-modal registration between relatively similar US volumes, provided the time gap between the acquisitions is not too long.
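The transform chain this implies can be written out explicitly. The sketch below, with illustrative homogeneous 4 x 4 matrices, shows how an MRI volume would be moved indirectly into physical space by composing the I2I result with the US-to-patient chain; in practice both matrices come from the registration steps above, and their errors compose as the text describes.

```python
import numpy as np

# T_us_to_patient: from the US-based image-to-patient (I2P) navigation chain.
# T_mri_to_us: from multimodal image-to-image (I2I) registration of MRI
#              to the first intraoperative US volume.
# Both matrices here are illustrative stand-ins.
T_us_to_patient = np.eye(4)
T_mri_to_us = np.eye(4)
T_mri_to_us[:3, 3] = [2.0, -1.5, 0.5]   # e.g., a small translational correction (mm)

# MRI is placed in physical space indirectly by composing the two:
T_mri_to_patient = T_us_to_patient @ T_mri_to_us

# A point in MRI coordinates (homogeneous) mapped to patient space:
p_mri = np.array([10.0, 20.0, 30.0, 1.0])
p_patient = T_mri_to_patient @ p_mri
```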

Multimodal Real-Time Image Guidance

When real-time 3D US and real-time non-rigid registration of preoperative image data to current patient anatomy are available, and integration with navigation technology has been accomplished, complete multimodal image-guided neuronavigation may be performed. Until such methods are developed, our approach is to obtain high-quality intraoperative 2D and 3D US data in the target area. Preoperative MRI data may be filled in to give an increased overview of the surgical field, as well as being overlaid on the ultrasound data in various ways for enhanced interpretation and assessment of brain shift.

CONCLUSION

We have developed and demonstrated clinical use of a multimodal visualization module that integrates various imaging modalities, like ultrasound and MRI, along with different 2D and 3D visualization techniques that may be used for improving image-guided surgery, as well as preoperative planning and postoperative evaluation. The results show that fusion of intraoperative 2D and 3D ultrasound images with MRI makes it easier to perceive the available information, both by providing updated (real-time) image information and by giving an extended overview of the operating field during surgery. This approach makes it possible to assess the degree of anatomical change that occurs during surgery, and provides the surgeon with an understanding of how identical structures are imaged using the different imaging modalities. We believe that this may improve the quality of the surgical procedure and hence the outcome for the patient.

ACKNOWLEDGMENTS

This work was supported by The Norwegian University of Science and Technology, through the Strategic University Program for Medical Technology, and by SINTEF Unimed, through the Strategic Institute Program from the Research Council of Norway, the Ministry of Health and Social Affairs of Norway, and GE Vingmed Ultrasound (Horten, Norway).

REFERENCES

1. Kelly PJ, Alker GJ, Goerss S. Computer-assisted stereotactic microsurgery for the treatment of intracranial neoplasms. Neurosurgery 1982;10:324–331.
2. Kitchen ND, Lemieux L, Thomas DGT. Accuracy in frame-based and frameless stereotaxy. Stereotactic Functional Neurosurgery 1993;61:195–206.
3. Smith KR, Frank KJ, Bucholz RD. The Neurostation—A highly accurate, minimally invasive solution to frameless stereotactic neurosurgery. Comput Med Imaging Graphics 1994;18:247–256.
4. Roberts DW, Strohbehn JW, Hatch JF, Murray W, Kettenberger H. A frameless stereotactic integration of computerized tomographic imaging and the operating microscope. J Neurosurg 1986;65:545–549.
5. Watanabe E, Watanabe T, Manaka S, Mayanagi Y, Takakura K. Three-dimensional digitizer (neuronavigator). Surg Neurol 1987;27:543–547.
6. Kato A, Yoshimine T, Hayakawa T, Tomita Y, Ikeda T, Mitomo M, Harada K, Mogami H. A frameless, armless navigational system for computer-assisted neurosurgery. J Neurosurg 1991;74:845–849.
7. Barnett GH, Kormos DW, Steiner CP, Weisenberger J. Use of a frameless, armless stereotactic wand for brain tumor localization with two-dimensional and three-dimensional neuroimaging. Neurosurgery 1993;33:674–678.
8. Reinhardt HF, Horstmann GA, Gratzl O. Sonic stereometry in microsurgical procedures for deep-seated brain tumors and vascular malformations. Neurosurgery 1993;32:51–57.
9. Reinhardt H, Trippel M, Westermann B, Gratzl O. Computer aided surgery with special focus on neuronavigation. Comput Med Imaging Graph 1999;23:237–244.
10. Gumprecht HK, Widenka DC, Lumenta CB. BrainLab VectorVision neuronavigation system: technology and clinical experiences in 131 cases. Neurosurgery 1999;44:97–105.
11. Dorward NL, Alberti O, Palmer JD, Kitchen ND, Thomas DGT. Accuracy of true frameless stereotaxy: in vivo measurement and laboratory phantom studies. J Neurosurg 1999;90:160–168.
12. Roberts DW, Hartov A, Kennedy FE, Miga MI, Paulsen KD. Intraoperative brain shift and deformation: A quantitative analysis of cortical displacement in 28 cases. Neurosurgery 1998;43:749–760.
13. Nimsky C, Ganslandt O, Cerny P, Hastreiter P, Greiner G, Fahlbush R. Quantification of, visualization of, and compensation for brain shift using intraoperative magnetic resonance imaging. Neurosurgery 2000;47:1070–1080.
14. Nimsky C, Ganslandt O, Hastreiter P, Fahlbusch R. Intraoperative compensation for brain shift. Surg Neurol 2001;56:357–364.
15. Grunert P, Muller-Forell W, Darabi K, Reisch R, Busert C, Hopf N, Perneczky A. Basic principles and clinical applications of neuronavigation and intraoperative computed tomography. Comp Aid Surg 1998;3:166–173.
16. Matula C, Rossler K, Reddy M, Schindler E, Koos WT. Intraoperative computed tomography guided neuronavigation: Concepts, efficiency, and work flow. Comp Aid Surg 1998;3:174–182.
17. Tronnier V, Wirtz CR, Knauth M, Lenz G, Pastyr O, Bonsanto MM, Albert FK, Kuth R, Staubert A, Schlegel W, Sartor K, Kunze S. Intraoperative diagnostic and interventional magnetic resonance imaging in neurosurgery. Neurosurgery 1997;40:891–902.
18. Rubino GJ, Farahani K, McGill D, Wiele B, Villablanca JP, Wang-Mathieson A. Magnetic resonance imaging-guided neurosurgery in the magnetic fringe fields: The next step in neuronavigation. Neurosurgery 2000;46:643–654.
19. Martin AJ, Hall WA, Liu H, Pozza CH, Michel E, Casey SO, Maxwell RE, Truwit CL. Brain tumor resection: Intraoperative monitoring with high-field-strength MR imaging—initial results. Radiology 2000;215:221–228.
20. Kaibara T, Saunders JK, Sutherland GR. Advances in mobile intraoperative magnetic resonance imaging. Neurosurgery 2000;47:131–138.
21. Hall WA, Martin AJ, Liu H, Nussbaum ES, Maxwell RE, Truwit CL. Brain biopsy using high-field strength interventional magnetic resonance imaging. Neurosurgery 1999;44:807–814.
22. Steinmeier R, Fahlbusch R, Ganslandt O, Nimsky C, Buchfelder M, Kaus M, Heigl T, Gerald L, Kuth R, Huk W. Intraoperative magnetic resonance imaging with the Magnetom open scanner: concepts, neurosurgical indications, and procedures: a preliminary report. Neurosurgery 1998;43:739–748.
23. Yrjana SK, Katisko JP, Ojala RO, Tervonen O, Schiffbauer H, Koivukangas J. Versatile intraoperative MRI in neurosurgery and radiology. Acta Neurochir (Wien) 2002;144:271–278.
24. Black PM, Moriarty T, Alexander E, Stieg P, Woodard EJ, Gleason PL, Martin CH, Kikinis R, Schwartz RB, Jolesz FA. Development and implementation of intraoperative magnetic resonance imaging and its neurosurgical applications. Neurosurgery 1997;41:831–845.
25. Black PM, Alexander E, Martin C, Moriarty T, Nabavi A, Wong TZ, Schwartz RB, Jolesz F. Craniotomy for tumor treatment in an intraoperative magnetic resonance imaging unit. Neurosurgery 1999;45:423–433.
26. Kettenbach J, Wong T, Kacher DF, Hata N, Schwartz RB, Black PM, Kikinis R, Jolesz FA. Computer-based imaging and interventional MRI: applications for neurosurgery. Comput Med Imaging Graph 1999;23:245–258.
27. Seifert V, Zimmermann M, Trantakis C, Vitzthum H-E, Kuhnel K, Raabe A, Bootz F, Schneider JP, Schmidt F, Dietrich J. Open MRI-guided neurosurgery. Acta Neurochir (Wien) 1999;141:455–464.
28. Moriarty TM, Quinones-Hinojosa A, Larson PS, Alexander E, Gleason PL, Schwartz RB, Jolesz FA, Black PM. Frameless stereotactic neurosurgery using intraoperative magnetic resonance imaging: stereotactic brain biopsy. Neurosurgery 2000;47:1138–1146.
29. Bonsanto MM, Staubert A, Wirtz CR, Tronnier V, Kunze S. Initial experience with an ultrasound-integrated single-rack neuronavigation system. Acta Neurochirurgica 2001;143:1127–1132.
30. Woydt M, Krone A, Soerensen N, Roosen K. Ultrasound-guided neuronavigation of deep-seated cavernous haemangiomas: clinical results and navigation techniques. Br J Neurosurg 2001;15:485–495.
31. Gronningsaeter A, Kleven A, Ommedal S, Aarseth TE, Lie T, Lindseth F, Langø T, Unsgård G. SonoWand, an ultrasound-based neuronavigation system. Neurosurgery 2000;47:1373–1380.
32. Hata N, Dohi T, Iseki H, Takakura K. Development of a frameless and armless stereotactic neuronavigation system with ultrasonographic registration. Neurosurgery 1997;41:608–614.
33. Koivukangas J, Louhisalmi Y, Alakuijala J, Oikarinen J. Ultrasound-controlled neuronavigator-guided brain surgery. J Neurosurg 1993;79:36–42.
34. Kelly PJ. Comments to: Neuronavigation by intraoperative three-dimensional ultrasound: initial experience during brain tumor resection. Neurosurgery 2002;50:812.
35. Unsgaard G, Gronningsaeter A, Ommedal S, Hernes TAN. Brain operations guided by real-time 2D ultrasound: New possibilities as a result of improved image quality–surgical approaches. Neurosurgery 2002;51:402–411.
36. Bucholz RD, Yeh DD, Trobaugh J, McDurmont LL, Sturm CD, Baumann C, Henderson JM, Levy A, Kessman P. The correction of stereotactic inaccuracy caused by brain shift using an intraoperative ultrasound device. In: Troccaz J, Grimson E, Mosges R, editors: Proceedings of the First Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery (CVRMed-MRCAS '97), Grenoble, France, March 1997. Lecture Notes in Computer Science 1205. Berlin: Springer, 1997. p 459–466.
37. Regelsberger J, Lohmann F, Helmke K, Westphal M. Ultrasound-guided surgery of deep seated brain lesions. Eur J Ultrasound 2000;12:115–121.
38. Strowitzki M, Moringlane JR, Steudel WI. Ultrasound-based navigation during intracranial burr hole procedures: experience in a series of 100 cases. Surg Neurol 2000;54:134–144.
39. Maintz JBA, Viergever MA. A survey of medical image registration. Med Image Anal 1998;2:1–36.
40. Maurer CR Jr, Fitzpatrick JM, Wang MY, Galloway RL Jr, Maciunas RJ, Allen GS. Registration of head volume images using implantable fiducial markers. IEEE Trans Med Imaging 1997;16:447–462.
41. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 1997;16:187–198.
42. Lindseth F, Langø T, Bang J, Hernes TAN. Accuracy evaluation of a 3D ultrasound-based neuronavigation system. Comp Aid Surg 2002;7:197–222.
43. Schroeder W, Martin K, Lorensen B. The Visualization Toolkit: An Object-Oriented Approach To 3D Graphics. Second Edition. Prentice Hall, 1997.
44. Hernes TAN, Ommedal S, Lie T, Lindseth F, Langø T, Unsgaard G. Stereoscopic navigation-controlled display of preoperative MRI and intraoperative 3D ultrasound in planning and guidance of neurosurgery: New technology for minimally invasive image guided surgery approaches. Minimally Invasive Neurosurgery 2003;46(3):129–137.
45. Lorensen WE, Cline HE. Marching Cubes: A high resolution 3D surface construction algorithm. Comp Graphics 1987;21:163–169.
46. Gronningsaeter A, Lie T, Kleven A, Mørland T, Langø T, Unsgård G, Myhre HO, Mårvik R. Initial experience with stereoscopic visualisation of three-dimensional ultrasound data in surgery. Surgical Endoscopy 2000;14:1074–1078.
47. Hayashi N, Endo S, Shibata T, Ikeda H, Takaku A. Neurosurgical simulation and navigation with three-dimensional computer graphics. Neurosurgical Res 1999;21.
48. Psiani LJ, Comeau R, Davey BLK, Peters TM. Incorporation of stereoscopic video into an image-guided neurosurgery environment. IEEE-EMBC 1995:365–366.
49. Peters TM, Comeau RM, Psiani L, Bakar M, Munger P, Davey BLK. Visualization for image guided neurosurgery. IEEE-EMBC 1995:399–400.
50. Glombitza G, Lamade W, Demiris AM, Gopfert MR, Mayer A, Bahner ML, Meinzer HP, Richter G, Lehnert T, Herfarth C. Virtual planning of liver resection: image processing, visualization and volumetric evaluation. Int J Med Informatics 1999;53:225–237.
51. Harbaugh RE, Schlusselberg DS, Jeffrey R, Hayden S, Cromwell LD, Pluta D, English RA. Three-dimensional computer tomographic angiography in the preoperative evaluation of cerebrovascular lesions. Neurosurgery 1995;36:320–326.
52. Hope JK, Wilson JL, Thomson FJ. Three-dimensional CT angiography in the detection and characterization of intracranial berry aneurysms. Am J Neuroradiol 1996;17:437–445.
53. Nakajima S, Atsumi H, Bhalerao AH, Jolesz FA, Kikinis R, Yoshimine T, Moriarty TM, Stieg PE. Computer-assisted surgical planning for cerebrovascular neurosurgery. Neurosurgery 1997;41:403–409.
54. Muacevic A, Steiger HJ. Computer-assisted resection of cerebral arteriovenous malformation. Neurosurgery 1999;45:1164–1171.
55. Pflesser B, Leuwer R, Tiede U, Hohne KH. Planning and rehearsal of surgical interventions in the volume model. In: Westwood JD, Hoffman HM, Mogel GT, Robb RA, Stredney D, editors: Medicine Meets Virtual Reality 2000. Amsterdam: IOS Press, 2000.
56. Soler L, Delingette H, Malandain G, Ayache N, Koehl C, Clemet JM, Dourthe O, Marescaux J. An automatic virtual patient reconstruction from CT scans for hepatic surgical planning. In: Westwood JD, Hoffman HM, Mogel GT, Robb RA, Stredney D, editors: Medicine Meets Virtual Reality 2000. Amsterdam: IOS Press, 2000.
57. Hans P, Grant AJ, Laitt RD, Ramsden RT, Kassner A, Jackson A. Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high resolution MR imaging. Am J Neuroradiol 1999;20:1197–1206.
58. Kato Y, Sano H, Katada K, Ogura Y, Hayakawa M, Kanaoka N, Kanno T. Application of three-dimensional CT angiography (3D-CTA) to cerebral aneurysms. Surg Neurol 1999;52:113–122.
59. Masutani Y, Dohi T, Yamane F, Iseki H, Takakura K. Augmented reality visualization system for intravascular neurosurgery. Comp Aid Surg 1998;3:239–247.
60. Vannier MW. Evaluation of 3D imaging. Critical Reviews in Diagnostic Imaging 2000;41:315–378.
61. Anxionnat R, Bracard S, Ducrocq X, Trousset Y, Launay L, Kerrien E, Braun M, Vaillant R, Scomazzoni F, Lebedinsky A, Picard L. Intracranial aneurysms: Clinical value of 3D digital subtraction angiography in therapeutic decision and endovascular treatment. Radiology 2001;218:799–808.
62. Walker DG, Kapur T, Kikinis R, Yezzi A, Zollei L, Bramley MD, Ma F, Black P. Automatic segmentation and its use with an electromagnetic neuronavigation system. Abstracts from the 12th World Congress of Neurosurgery, Sydney, Australia, 2001. p 16–20.
63. Wilkinson EP, Shahidi R, Wang B, Martin DP, Adler JR, Steinberg GK. Remote-rendered 3D CT angiography (3DCTA) as an intraoperative aid in cerebrovascular neurosurgery. Comp Aid Surg 1999;4:256–263.
64. Kaspersen JH, Langø T, Lindseth F. Wavelet-based edge detection in ultrasound images. Ultrasound Med Biol 2001;27:89–99.
65. Selbekk T, Unsgård G, Ommedal S, Muller T, Torp S, Myhr G, Bang J, Hernes TAN. Neurosurgical biopsies guided by 3D ultrasound—comparison of image evaluations and histopathological results. In: Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, Reiber JHC, editors: Computer Assisted Radiology and Surgery. Proceedings of the 16th International Congress and Exhibition (CARS 2002), Paris, France, June 2002. Amsterdam: Elsevier, 2002. p 133–138.
66. Wirtz CR, Knauth M, Staubert A, Bonsanto MM, Sartor K, Kunze S, Tronnier VM. Clinical evaluation and follow-up results for intraoperative magnetic resonance imaging in neurosurgery. Neurosurgery 2000;46:1112–1122.
67. Xu C, Prince JL. Snakes, shapes, and gradient vector flow. IEEE Trans Image Processing 1998;7:359–369.
68. Roche A, Pennec X, Malandain G, Ayache N. Rigid registration of 3-D ultrasound with MR images: a new approach combining intensity and gradient information. IEEE Trans Med Imaging 2001;20:1038–1049.
69. Pennec X, Ayache N, Roche A, Cachier P. Non-rigid MR/US registration for tracking brain deformations. In: Proceedings of the International Workshop on Medical Imaging and Augmented Reality, Sophia Antipolis, France, 2001. p 79–86.
70. Shekhar R, Zagrodsky V. Mutual information-based rigid and nonrigid registration of ultrasound volumes. IEEE Trans Med Imaging 2002;21:9–22.