
A haptic-enabled multimodal interface for the planning of hip arthroplasty

Tsagarakis, NG, Gray, JO, Caldwell, DG, Zannoni, C, Petrone, M, Testi, D and Viceconti, M

http://dx.doi.org/10.1109/MMUL.2006.55

Title A haptic-enabled multimodal interface for the planning of hip arthroplasty

Authors Tsagarakis, NG, Gray, JO, Caldwell, DG, Zannoni, C, Petrone, M, Testi, D and Viceconti, M

Type Article

URL This version is available at: http://usir.salford.ac.uk/id/eprint/936/

Published Date 2006

USIR is a digital collection of the research output of the University of Salford. Where copyright permits, full text material held in the repository is made freely available online and can be read, downloaded and copied for non-commercial private study or research purposes. Please check the manuscript for any further copyright restrictions.

For more information, including our policy and submission procedure, please contact the Repository Team at: [email protected].

A Haptic-Enabled Multimodal Interface for the Planning of Hip Arthroplasty

Nikolaos G. Tsagarakis, John O. Gray, and Darwin G. Caldwell, University of Salford, UK
Cinzia Zannoni and Marco Petrone, Biocomputing Competence Center (CINECA)
Debora Testi and Marco Viceconti, Institute of Orthopedics Rizzoli

Haptic User Interfaces for Multimedia Systems

Multimodal environments help fuse a diverse range of sensory modalities, which is particularly important when integrating the complex data involved in surgical preoperative planning. The authors apply a multimodal interface for preoperative planning of hip arthroplasty with a user interface that integrates immersive stereo displays and haptic modalities. This article overviews this multimodal application framework and discusses the benefits of incorporating the haptic modality in this area.

Multimodal environments seek to create computational scenarios that fuse sensory data (sight, sound, touch, and perhaps smell) to form an advanced, realistic, and intuitive user interface. This can be particularly compelling in medical applications, where surgeons use a range of sensory motor cues.1-4 Sample applications include simulators, education and training, surgical planning, and scientifically analyzing and evaluating new procedures.

Developing such a multimodal environment is a complex task that involves integrating numerous algorithms and technologies. Increasingly, researchers are developing open source libraries and toolkits applicable to this field, such as the Visualization Tool Kit (VTK) for visualization, the Insight Toolkit (ITK) for segmentation and registration, and the Numerical Library (VNL) for numerical algorithms. Single libraries from these toolkits form a good starting point for efficiently developing a complex application. However, this usually requires extending the core implementation with new library modules. In addition, integrating new modules can quickly become confusing in the absence of a good software architecture.

To address this, researchers have developed semicomplete application frameworks that can run independently, hiding the core implementation's complexity. As such, they can be dedicated to producing custom applications.5 However, these systems form frameworks that aren't multimodal because they don't let us integrate different visual representations or other modalities such as haptics and speech. This has motivated research in developing truly multimodal frameworks,6 but the benefits of such integration are still largely unexplored. For the haptic modality in particular, hardware and software that can provide effective touch feedback can enhance the growth of innovative medical applications.

From this rationale, the Multisense project aims to combine different sensory devices (haptics, speech, visualization, and tracking) in a unique virtual reality environment for orthopedic surgery. We developed the Multisense demonstrator on top of a multimodal application framework (MAF)7 that supports multimodal visualization, interaction, and improved synchronization of multiple cues.

This article focuses on applying this multimodal interaction environment to total hip replacement (THR) surgery and, in particular, to the surgical-access phase of preoperative planning.8 After validation, this approach will be highly relevant to other orthopedic and medical applications.

Hip arthroplasty planner

Hip arthroplasty is a procedure in which diseased hip joints are removed and replaced with artificial parts: the socket and prosthesis. Researchers have developed different systems for THR preoperative planning,4,9 operating in 2D using a mouse and flat screen to produce pseudo-3D interaction. This approach makes little or no use of multisensory inputs, which leads to problems because the graphics interface strongly affects implant positioning accuracy.10

A team of orthopedic surgeons defined four specific tasks that form the basis for our multimodal hip arthroplasty planning environment:

❚ Preparing the subject-specific musculoskeletal model. Effective planning requires a complete and accurate musculoskeletal model, usually only available from magnetic resonance imaging (MRI) data. Related work shows how we can map patient computerized tomography (CT) scans to data and models from the Visual Human to provide complete bone and soft tissue models of the hip and thigh muscles.10

❚ Surgical-access planning. The critical surgical-access phase consists of three main surgical tasks: determining the initial incision location and size, retracting the muscles, and dislocating the femur.

❚ Component positioning. Here the surgeon positions the prosthesis with respect to the femur. During this process, the surgeon can check functional indicators: feasibility of the planned position, primary component stability, and range of joint motion.

❚ Surgical simulation. After determining the prostheses' pose, the surgeon can interactively position the neck resection plane to verify the reamer's insertion path. Once the surgeon accepts that position, the system generates a model of the postoperative anatomy for final verifications and inspections.

The medical users drew on these surgical activities to identify the benefits that integrating the haptic modality into the preoperative planning application could bring to each task. Based on this study, we defined the haptic requirements of this specific application.

Haptic requirements

From this series of procedures, the medical users selected scenarios in which they felt haptic feedback would be of the greatest benefit. These included the ability to locate and size the incision, evaluate the surgical access they can achieve through that incision, and identify the functional impairment produced by any damage to the soft tissues (muscle or skin).

In addition, haptic feedback can help position and orient the implant while preventing the surgeon from positioning the component in a nonfeasible location. Based on the position and orientation selected, the surgeon can evaluate this specific location using a number of haptic-enabled indicators, including the thigh joint's range of motion after the simulation and the component's stability. The benefits include more accurate implant positioning and reduced execution time.

Considering these surgical activities, the medical users defined the following haptic tasks:

❚ Force feedback for evaluating surgical access. Force (or touch) feedback can help surgeons accurately locate and size an incision. During the retraction, it can help surgeons estimate the relationship between visibility and muscle damage. Force feedback can also help them evaluate the incision aperture size while dislocating the femur.

❚ Force feedback for evaluating the planned-position feasibility. Reaction forces generated by contact with the surrounding tissues let the user refine the planned position, check the feasibility of the planned position, and evaluate the component's primary stability in this position.

We identified the multimodal interface's requirements using the characteristics of these haptic tasks. These requirements let us determine the necessary features of the multimodal system's software and hardware modules.

Multimodal system requirements

Any multimodal system must interact with complex data and incorporate several features:

❚ integration of multiple I/O devices and modalities;

❚ seamless synchronization of the different update loops running at very different rates;

❚ a distributed architecture that copes with the computational load and simulation loops;

❚ support for complex multimodal visualization with multiple representations of the data;

❚ support for dynamically exchangeable haptic rendering algorithms; and

❚ modularity and extensibility, with simple application-level modules hiding the system architecture's complexity and synchronization problems.

We developed a multimodal application framework to address these requirements and a suitable haptic software and hardware device to provide the haptic modality within the framework.

Multimodal application framework

The multimodal application framework (MAF) is a software library for rapidly developing innovative multimodal environments for medical applications. It supports the Multimodal Display and Interaction (MDI) paradigm with multimodal visualization and interaction, haptics, and synchronization of the multiple cues. An MAF consists of components that control system resources, which are organized as data entities and application services. A data entity is a Virtual Medical Entity (VME). We distinguish four application services: views, operations (Op), GUIs, and devices.

Every MAF application is an instance of a logic component. The logic component's main role is to control communication. Figure 1a shows the MAF architecture with the logic, managers, and all MAF resources. Figure 1b gives an example of the MAF multidisplay paradigm we used in this application.

Figure 1. Multimodal application framework: (a) architecture diagram and (b) multiple display paradigm. (VME = Virtual Medical Entity.)
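To make this organization concrete, here is a deliberately minimal C++ sketch of a logic object wiring the managers together and routing events; the class and method names are hypothetical and do not reproduce the openMAF API.

```cpp
// Hypothetical sketch of the organization described above: a logic object
// owning the resource managers and routing events between them. Names are
// illustrative only, not the openMAF API.
#include <iostream>
#include <string>

struct Event { std::string source; std::string type; };

class VMEManager         { public: void store(const std::string& name) { std::cout << "VME: " << name << "\n"; } };
class ViewManager        { public: void update() { /* refresh the active views */ } };
class OperationManager   { public: void run(const std::string& op) { std::cout << "Op: " << op << "\n"; } };
class InteractionManager { public: void dispatch(const Event& e) { std::cout << "device event: " << e.type << "\n"; } };

// Every application is an instance of a logic component whose main role is
// to control communication among the managers.
class Logic {
public:
    void onEvent(const Event& e) {
        if (e.source == "gui") ops_.run(e.type);          // GUI events go to application components
        else                   interaction_.dispatch(e);  // device events go to the interactors
        views_.update();                                  // keep the visualization consistent
    }
private:
    VMEManager vmes_;
    ViewManager views_;
    OperationManager ops_;
    InteractionManager interaction_;
};

int main() {
    Logic app;
    app.onEvent({"gui", "LoadVME"});
    app.onEvent({"haptic", "move"});
}
```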

Interaction and synchronization model

User interaction involves the I/O devices, views subsystem, and operation subsystem. The MDI paradigm requires gathering, synchronizing, and integrating inputs coming from multiple I/O devices. When users interact with the application, a stream of events is sent to the framework: discrete events (low-frequency events causing a change in the application state) and continuous events (high-frequency user interactions).

Handling input events is complex because the user might perform any set of interactive and dynamic actions. Thus, managing the interaction with a single, monolithic component is impractical. MAF involves collaboration among many components. GUI events are processed directly by application components (for example, operations or logic), and events coming from I/O devices are typically processed by specific objects named interactors. The general MAF interaction model for I/O devices implies three elements: a semiotic unit (I/O device), a semantic unit (interactor), and an application component.
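A minimal sketch of this chain, with hypothetical types standing in for the device sample, the interactor, and the application component, might look as follows; continuous motion samples arrive at device rate, while only discrete state changes are forwarded as commands.

```cpp
// Illustrative interactor chain: semiotic unit (device sample) -> semantic
// unit (interactor) -> application component. All names are hypothetical.
#include <functional>
#include <iostream>

struct DeviceSample { double x, y, z; bool buttonDown; };   // continuous, high-frequency input

// The application component only sees discrete, meaningful events.
using AppCommand = std::function<void(const char*)>;

class Interactor {
public:
    explicit Interactor(AppCommand cmd) : cmd_(std::move(cmd)) {}
    void process(const DeviceSample& s) {
        if (s.buttonDown && !wasDown_) cmd_("pick");      // discrete event: state change
        if (!s.buttonDown && wasDown_) cmd_("release");
        wasDown_ = s.buttonDown;                          // continuous motion would update a VME pose here
    }
private:
    AppCommand cmd_;
    bool wasDown_ = false;
};

int main() {
    Interactor picker([](const char* what) { std::cout << "command: " << what << "\n"; });
    picker.process({0.10, 0.20, 0.00, true});   // generates "pick"
    picker.process({0.10, 0.20, 0.00, false});  // generates "release"
}
```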

MAF manages interactions with multiple I/O devices through routing, locking, and fusion mechanisms within the Interaction Manager. This synchronizes inputs from different devices using the Device Manager subcomponent, which is responsible for keeping the list of connected devices and for synchronizing their inputs with the application's main visualization loop (see Figure 2). For haptic devices, which require high and decoupled update rates, high-speed loops run inside the haptic subsystem, and only events sent at visualization rates pass to and from the MAF. MAF ensures synchronization by sending events to them: for example, each time a haptic surface rendering is started, an event containing the rendered surface is sent to the haptic device and, hence, to the haptic rendering server/library. This data synchronization is rare, so it has minimal overhead.

During continuous interaction, the visualization and haptic loops are synchronized by the haptic device sending events (at the graphics rate) to the visualization loop. Hence, we can compute haptic and graphical models in a decoupled but synchronized fashion.
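A deliberately simplified sketch of this decoupling is shown below: a fast haptic thread publishes a state snapshot that a slower visualization loop reads at its own rate. The loop rates, data layout, and thread structure are assumptions for illustration, not the framework's implementation.

```cpp
// Simplified sketch of decoupled haptic (fast) and visualization (slow) loops
// synchronized through a shared snapshot. Rates and structure are illustrative.
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

struct HapticState { double force[3] = {0, 0, 0}; double proxy[3] = {0, 0, 0}; };

std::mutex stateMutex;
HapticState sharedState;            // last state published by the haptic loop
std::atomic<bool> running{true};

void hapticLoop() {                 // ~1 kHz servo loop
    while (running) {
        HapticState s;
        s.force[2] = 1.0;           // placeholder: a real loop would run the haptic renderer here
        { std::lock_guard<std::mutex> lock(stateMutex); sharedState = s; }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

void visualizationLoop() {          // ~60 Hz graphics loop
    for (int frame = 0; frame < 120; ++frame) {
        HapticState s;
        { std::lock_guard<std::mutex> lock(stateMutex); s = sharedState; }
        std::cout << "frame " << frame << " force z " << s.force[2] << "\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
    running = false;
}

int main() {
    std::thread haptic(hapticLoop), vis(visualizationLoop);
    vis.join();
    haptic.join();
}
```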

System hardware architecture

To address the intensive computation needs, and particularly to accommodate the different update rates (for example, visualization systems update at 50 to 100 Hz while haptic devices update at more than 1,000 Hz), the multimodal system architecture uses multiple networked dedicated machines (see Figure 3).

We use a dedicated graphics server to perform the graphics rendering, and the haptic server manages the haptic subsystem via a TCP/IP interface. The advantage of this approach over a single-machine, multithreaded approach is that it minimizes the coupling between local rates running on different machines. Also, the rates for critical processes such as the haptic servo input and feedback control process are more consistent, enabling stable haptic rendering even for complex environments. This approach also gives us separate, extensible, and reconfigurable control of the different input feedback subsystems. The disadvantage is that it increases synchronization requirements between the various subsystems, which the MAF directly addresses.
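As a purely illustrative sketch of such a networked link, the code below sends a length-prefixed payload (for example, a serialized surface) to a remote haptic server over TCP/IP; the port, framing, and payload layout are assumptions, not the Multisense protocol.

```cpp
// Hypothetical length-prefixed message sender, illustrating one way the
// visualization host could ship data to the haptic server over TCP/IP.
// The port, framing, and payload layout are assumptions for illustration.
#include <arpa/inet.h>
#include <cstdint>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <vector>

bool sendMessage(const std::string& host, uint16_t port, const std::vector<char>& payload) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, host.c_str(), &addr.sin_addr) != 1 ||
        connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        close(fd);
        return false;
    }

    uint32_t len = htonl(static_cast<uint32_t>(payload.size()));   // 4-byte length prefix
    bool ok = send(fd, &len, sizeof(len), 0) == static_cast<ssize_t>(sizeof(len)) &&
              send(fd, payload.data(), payload.size(), 0) == static_cast<ssize_t>(payload.size());
    close(fd);
    return ok;
}

int main() {
    std::vector<char> surface(1024, 0);          // placeholder for serialized surface data
    sendMessage("127.0.0.1", 5555, surface);     // hypothetical haptic server address
}
```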

Haptic subsystem implementation

We designed a haptic device to support either one- or two-handed operation and fabricated it to suit the application workspace and interaction requirements we defined earlier. The device consists of two three-degrees-of-freedom (DOF) closed-chain mechanisms, each forming a classic five-bar planar device that can also be rotated around the axis along its base (see Figure 4a). We selected the inner and outer link lengths to provide a workspace that satisfies the motion-range requirements of both the surgical-access and the component-position-feasibility tasks.

To support two-handed interactions, we can configure the device to work in two modes. The double-independent mode provides two mechanisms (6-DOF input and 3-DOF feedback) with two separate haptic interface points; the coupled mode configuration provides a single linked mechanism (6-DOF input and 5-DOF feedback).11

Figure 2. (a) Synchronizing the device's events, and (b) multimodal application framework (MAF) static and positional event routing (S.E.R. and P.E.R.).

Figure 3. (a) Multimodal system architecture and (b) the system hardware setup showing the integration of the immersive stereo display and the haptic device.

The haptic rendering library coordinates the input and haptic feedback signals. We developed this library to support the haptic modality within the MAF (see Figure 4b). We use a multithreaded approach that includes four core processes: device control, haptic rendering, event and command handling, and communication. Four respective managers manage these process threads.

We provide a haptic tool as a mechanism for force feedback and couple it to the haptic device within the haptic manager object. The haptic rendering process, managed by the haptic manager, uses the current tool to gather force requests within the haptic world space and asks the device to supply the user with the computed haptic feedback. The haptic subsystem runs on a dedicated haptic machine, and communication between the haptic module and the visualization station is performed using the haptic subsystem API.
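A heavily simplified sketch of this tool-centred rendering step is shown below; the world, device, and manager interfaces are hypothetical stand-ins for the library's actual objects.

```cpp
// Illustrative haptic rendering step: each servo iteration the manager reads
// the device pose, gathers force requests from the haptic world, and sends
// the resulting force back to the device. All interfaces are hypothetical.
#include <array>
#include <functional>
#include <iostream>
#include <vector>

using Vec3 = std::array<double, 3>;
using ForceFn = std::function<Vec3(const Vec3&)>;   // one haptic renderer = one force function

struct HapticWorld {
    std::vector<ForceFn> renderers;
    Vec3 totalForce(const Vec3& p) const {
        Vec3 f{0, 0, 0};
        for (const auto& r : renderers) { Vec3 fr = r(p); for (int i = 0; i < 3; ++i) f[i] += fr[i]; }
        return f;
    }
};

struct Device {                       // stands in for the device-controller thread's interface
    Vec3 position() const { return {0.0, 0.0, -0.002}; }
    void applyForce(const Vec3& f) { std::cout << "Fz = " << f[2] << " N\n"; }
};

// One servo step performed by the haptic manager.
void hapticStep(const HapticWorld& world, Device& device) {
    device.applyForce(world.totalForce(device.position()));
}

int main() {
    HapticWorld world;
    world.renderers.push_back([](const Vec3& p) {          // toy renderer pushing back to z = 0
        return Vec3{0.0, 0.0, p[2] < 0.0 ? -800.0 * p[2] : 0.0};
    });
    Device device;
    hapticStep(world, device);                              // one servo iteration
}
```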

Surgical-access haptic modules

Surgical-access planning consists of a skin incision, muscle retraction, and femur head dislocation. The initial incision is defined by two reference points under force-feedback control. With the incision defined, the surgeon controls the aperture size using two additional reference points automatically created when the incision line is defined (see Figure 5).

We implemented the incision haptic renderer as a standard surface renderer, letting surgeons accurately locate the reference points while providing them with feedback on the constraints imposed by the skin surface.
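A minimal penalty-style version of such a surface constraint, assuming a locally planar skin patch and a single stiffness value (neither of which is specified in the article), could look like this:

```cpp
// Minimal penalty-based surface constraint in the spirit of a standard
// surface renderer: when the tool penetrates the skin surface, a spring
// force pushes it back toward the surface. Plane and stiffness are assumed.
#include <array>
#include <iostream>

using Vec3 = std::array<double, 3>;

// Treat the skin locally as the plane z = 0 with outward normal +z.
Vec3 surfaceForce(const Vec3& toolPos, double stiffness /* N/m, assumed */) {
    double penetration = -toolPos[2];                 // > 0 when below the surface
    if (penetration <= 0.0) return {0.0, 0.0, 0.0};   // free motion above the skin
    return {0.0, 0.0, stiffness * penetration};       // push back along the normal
}

int main() {
    Vec3 tool{0.01, 0.02, -0.003};                    // tool 3 mm into the skin
    Vec3 f = surfaceForce(tool, 500.0);               // assumed stiffness value
    std::cout << "restoring force: " << f[2] << " N\n";
}
```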

Figure 4. (a) Prototype haptic desktop device. (b) Haptic rendering library architecture and interaction among the library modules.

Figure 5. (a) The surgeon selects the skin incision size and (b) views the incision aperture.


The skin incision is followed by the much more complex task of muscle retraction, which has a higher probability of damaging muscles and other tissues. During this procedure, the surgeon introduces the retractor between the muscles and retracts one toward the edge of the skin incision. The retracted muscle is held in position while the next muscle is retracted, and so forth, until the head of the femur and the acetabulum are visible.

To simulate this, the haptic and visual subsystems must cooperate within the MAF to provide the correct level of synchronization. We implemented the muscle retraction haptic renderer, which lets the user estimate the trade-off between visibility and muscle damage during the retraction, as a combined haptic node formed from two haptic renderers (see Figure 6a). The first node represents the surface of the muscle to be retracted and is implemented as a surface haptic renderer permitting interaction with the muscle surface. The second node implements the retraction haptic model, realized using a two-spring (200 to 400 newtons per meter [N/m]) model (see Figure 6b).

We tuned the spring parameters using a Finite Element (FE) analysis of muscle and actual patient data. The state of the MAF operation controls switching between the surface and retraction models. When the femur and acetabulum are visible, the femur head can be dislocated from the socket to allow access through the aperture (see Figure 7). To let the user assess the difficulty in operating through the aperture, we generate force feedback during the dislocation.
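As an illustration of the renderer switching and the two-spring model, the sketch below uses stiffness values from the quoted 200 to 400 N/m range; the anchor points and the switching condition are simplified assumptions rather than the implemented model.

```cpp
// Sketch of the surface/retraction renderer switch and the two-spring
// retraction force. Geometry and switching logic are simplified assumptions;
// only the 200-400 N/m stiffness range comes from the article.
#include <array>
#include <iostream>

using Vec3 = std::array<double, 3>;

Vec3 springForce(const Vec3& from, const Vec3& to, double k) {
    return {k * (to[0] - from[0]), k * (to[1] - from[1]), k * (to[2] - from[2])};
}

struct RetractionRenderer {
    Vec3 retractionPoint, splitPoint;     // anchor points of the two springs (assumed)
    double k1 = 200.0, k2 = 400.0;        // N/m, within the range quoted in the text
    Vec3 force(const Vec3& tool) const {
        Vec3 f1 = springForce(tool, retractionPoint, k1);
        Vec3 f2 = springForce(tool, splitPoint, k2);
        return {f1[0] + f2[0], f1[1] + f2[1], f1[2] + f2[2]};
    }
};

int main() {
    bool muscleGrasped = true;            // the MAF operation state would drive this switch
    RetractionRenderer retraction{{0.0, 0.05, 0.0}, {0.0, 0.0, 0.02}};
    Vec3 tool{0.0, 0.02, 0.01};
    Vec3 f = muscleGrasped ? retraction.force(tool)
                           : Vec3{0.0, 0.0, 0.0};   // otherwise the surface renderer would apply
    std::cout << "retraction force y: " << f[1] << " N\n";
}
```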

This feedback is modeled when a collision occurs between the femur head and the surrounding soft tissue objects (muscles and skin). To simulate the resistance caused by muscle elongation, we generate additional feedback forces from the muscles connecting the femur and ilium. We modeled each muscle as a spring/damper:

\[
\mathbf{F}_m = K_M \, d_m \cdot \mathbf{a}_m + B_m \,(\mathbf{u}_f \times \mathbf{a}_m), \qquad
\mathbf{F}_{\mathrm{elongation}} = \sum_{i=0}^{N} \mathbf{F}_m^{\,i}
\tag{1}
\]

where K_M and B_m are the specific muscle stiffness and damping parameters, d_m is the muscle elongation, a_m is the unit vector of the muscle line of axes, and u_f is the femur bone velocity vector. This is adequate because the rendering of the force generated by the muscle elongations is a simulation feature we added to improve realism in this operation and doesn't affect the actual planning. To permit interactive operation during the dislocation, the system visualizes the muscle lines of axes with coloration dependent on the strain (see Figure 8a).
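Translated directly into code, the reconstructed equation 1 can be evaluated as in the sketch below; the muscle parameters and the femur velocity are placeholder values, and the per-muscle loop simply accumulates the spring and damping terms.

```cpp
// Direct evaluation of equation 1 as reconstructed above: each muscle
// contributes a spring term along its line of axes plus a damping term,
// and the contributions are summed. Parameter values are placeholders.
#include <array>
#include <iostream>
#include <vector>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]};
}

struct Muscle {
    double K;      // stiffness K_M (N/m)
    double B;      // damping B_m
    double d;      // current elongation d_m (m)
    Vec3   axis;   // unit vector a_m along the muscle line of axes
};

Vec3 elongationForce(const std::vector<Muscle>& muscles, const Vec3& femurVelocity /* u_f */) {
    Vec3 total{0.0, 0.0, 0.0};
    for (const auto& m : muscles) {
        Vec3 damping = cross(femurVelocity, m.axis);                 // B_m (u_f x a_m) term
        for (int i = 0; i < 3; ++i)
            total[i] += m.K * m.d * m.axis[i] + m.B * damping[i];    // K_M d_m a_m + damping term
    }
    return total;
}

int main() {
    std::vector<Muscle> muscles = {{300.0, 5.0, 0.01, {0.0, 1.0, 0.0}}};   // one toy muscle
    Vec3 f = elongationForce(muscles, {0.0, 0.0, 0.05});
    std::cout << "F_elongation = (" << f[0] << ", " << f[1] << ", " << f[2] << ") N\n";
}
```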

Figure 6. (a) The muscle retraction haptic object with the integration of the surface and retraction haptic nodes, and (b) the two-spring retraction haptic renderer showing the two spring elements' initiation and termination points.

Figure 7. (a) A visualization of the muscle retraction, and (b) an example of surgical access with a retracted muscle.


To improve the realism of a femur head dislocation, we implemented a pop-up effect by connecting a strong (1,000 N/m) spring, active only in close proximity to the socket's center, between the femur head and socket centers:

\[
\mathbf{F}_{\mathrm{popup}} =
\begin{cases}
K_E \cdot \mathbf{p}_d + B_E \cdot \mathbf{u}_f, & \lVert \mathbf{p}_d \rVert < r_d \\
\mathbf{0}, & \lVert \mathbf{p}_d \rVert > r_d
\end{cases}
\tag{2}
\]

where K_E and B_E are the pop-up stiffness and damping parameters, p_d is the femur head dislocation distance vector, r_d is the pop-up spring's active sphere radius, and u_f is the femur bone velocity vector. When the femur head dislocation distance becomes greater than this radius, the force drops to 0 newtons (N) for 40 milliseconds (see Figure 8b), creating a discontinuity that the user perceives as the femur head popping out.
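A rough prototype of this pop-up behaviour is sketched below; the 1,000 N/m stiffness and the 40 ms zero-force window come from the text, while the damping value, the active radius, and the vector handling are assumptions.

```cpp
// Rough prototype of the pop-up effect of equation 2: a stiff spring acts
// while the femur head is within the active radius r_d; once it leaves,
// the force is held at zero for 40 ms to create the perceived "pop".
// Stiffness (1,000 N/m) and the 40 ms window follow the text; the rest is assumed.
#include <array>
#include <chrono>
#include <cmath>
#include <iostream>

using Vec3  = std::array<double, 3>;
using Clock = std::chrono::steady_clock;

struct PopupSpring {
    double K = 1000.0;                 // K_E, N/m (from the text)
    double B = 2.0;                    // B_E, assumed damping
    double activeRadius = 0.01;        // r_d, assumed 10 mm
    Clock::time_point popTime{};
    bool popped = false;

    Vec3 force(const Vec3& dislocation /* p_d */, const Vec3& femurVelocity /* u_f */) {
        double dist = 0.0;
        for (double c : dislocation) dist += c * c;
        dist = std::sqrt(dist);

        if (dist > activeRadius) {                       // outside the active sphere
            if (!popped) { popped = true; popTime = Clock::now(); }
            return {0.0, 0.0, 0.0};                      // force drops to zero
        }
        if (popped && Clock::now() - popTime < std::chrono::milliseconds(40))
            return {0.0, 0.0, 0.0};                      // still inside the 40 ms drop-out window
        popped = false;

        Vec3 f;
        for (int i = 0; i < 3; ++i) f[i] = K * dislocation[i] + B * femurVelocity[i];
        return f;                                        // K_E p_d + B_E u_f, per equation 2
    }
};

int main() {
    PopupSpring spring;
    Vec3 f = spring.force({0.002, 0.0, 0.0}, {0.0, 0.0, 0.0});
    std::cout << "spring force x: " << f[0] << " N\n";
}
```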

Preliminary experimental results

We performed two preliminary validation experiments involving five subjects to evaluate the multimodal benefits and effectiveness using the system shown in Figure 3. Two subjects involved in the system development were well-trained users, and three subjects had no previous system experience.

We gave each user an explanatory test sheet. Experiment 1 evaluated the benefits of force feedback on the accuracy of defining the incision aperture. A reference aperture was defined, and the subjects were asked to execute a skin incision using the immersive multimodal interface with the haptic feedback modality active in the first case. Each subject tried to replicate the reference aperture. The users then repeated the process with the haptic modality disabled.

In the second case, with the haptic modality disabled, the system provided visual feedback (color changes) indicating contact between the device avatar and the skin. The users repeated the test five times for each case. We recorded the time required to carry out the positioning. Figure 9 gives the distance error between the position of the incision points and the position of the points of the reference incision with the haptic multimodal interface enabled and disabled.

Figure 9. Experimental results for distance error (mm) per user with (a) the haptic modality enabled and (b) the force feedback disabled.

The users obtained significantly higher accuracy using force feedback in this experiment. Their execution time was also considerably reduced. The average execution times we recorded for the haptic- and nonhaptic-enabled executions were 31 and 57 seconds, respectively. This shows that integrating the haptic cues provides benefits in terms of accuracy and execution time.

Another important observation is that there was no significant difference in performance among the five subjects. This initial indication shows that the user effect wasn't significant in the multimodal environment, and we achieved a good level of accuracy and usability even with minimal prior experience.

Figure 8. (a) A visualization of a femur dislocation, and (b) the force profile recorded during the dislocation: force (newtons) versus femur head dislocation distance (mm), showing the muscle elongation force slope, the femur head pop-up force slope, and the pop-up force effect.

Experiment 2 demonstrated the benefits, in terms of execution time, of using the two-handed interaction paradigm. We configured the haptic device to work in the two-handed operation mode, where the left part of the device was for tool manipulation and the right for manipulating the camera of the visual scene. We asked the subjects to position and orient the retractor tool close to a predefined location between two muscles without actually performing the retraction. This task required complex manipulations of the retractor tool while simultaneously manipulating the camera. The subjects performed this operation five times using the haptic device, and then they repeated the same process using a standard mouse.

We recorded the execution times in both cases. The time required to execute the task using two-handed interaction was considerably reduced (on average 43 seconds) compared to that achieved with the mouse (on average 105 seconds). These results show the effectiveness of this type of interface when complex manipulation is necessary.

Conclusions

We're currently evaluating the complete multimodal system. Our initial experiments have helped us validate the multimodal interface in the surgical-access task. We might also see benefits from using this multimodal interface in other aspects of medical planning. We plan to extensively evaluate these avenues in the future to assess the usefulness of the planning procedures' various modules. These tests will involve a broader selection of clinical users.

We'll also work on enhancing the system's ability to address the haptic task requirements of other surgical procedures. This will include modifications to both the system hardware (mechanical structures) and software, with the incorporation of other surgical haptic renderers to form a library of surgical haptic procedures.

Acknowledgments

This work was supported by the Multisense European project (IST-2001-34121).

References

1. H.D. Dobson et al., "Virtual Reality: New Method of Teaching Anorectal and Pelvic Floor Anatomy," Dis. Colon Rectum, vol. 46, no. 3, 2003, pp. 349-352.

2. M.A. Spicer and M.L. Apuzzo, "Virtual Reality Surgery: Neurosurgery and the Contemporary Landscape," Neurosurgery, vol. 52, no. 3, 2003, pp. 489-497.

3. S. Hassfeld and J. Muhling, "Computer Assisted Oral and Maxillofacial Surgery: A Review and an Assessment of Technology," Int'l J. Oral Maxillofacial Surgery, vol. 30, no. 1, 2001, pp. 2-13.

4. H. Handels et al., "Computer-Assisted Planning and Simulation of Hip Operations Using Virtual Three-Dimensional Models," Studies in Health Technology and Informatics, vol. 68, 1999, pp. 686-689.

5. M. Fayad and D.C. Schmidt, "Object-Oriented Application Frameworks," Comm. ACM, vol. 40, no. 10, 1997, pp. 32-38.

6. F. Flippo, A. Krebs, and I. Marsic, "A Framework for Rapid Development of Multimodal Interfaces," Proc. 5th Int'l Conf. Multimodal Interfaces, ACM Press, 2003, pp. 109-116.

7. M. Krokos et al., "Real-Time Visualisation within the Multimodal Application Framework," Proc. 8th Int'l Conf. Information Visualization (IV04), ACM Press, 2004, pp. 21-26.


8. C.G. Schizas, B. Parker, and P.-F. Leyvraz, "A Study of Pre-Operative Planning in CLS Total Hip Arthroplasty," Hip Int'l, vol. 6, 1996, pp. 75-81.

9. S. Nishihara et al., "Comparison of the Fit and Fill between the Anatomic Hip Femoral Component and the VerSys Taper Femoral Component Using Virtual Implantation on the ORTHODOC Workstation," J. Orthopaedic Science, vol. 8, no. 3, 2003, pp. 352-360.

10. M. Viceconti et al., "CT-Based Surgical Planning Software Improves the Accuracy of THR Preoperative Planning," Medical Eng. & Physics, vol. 25, no. 5, 2003, pp. 371-377.

11. N.G. Tsagarakis and D.G. Caldwell, "Pre-Operative Planning for Total Hip Replacement Using a Desktop Haptic Interface," Proc. IMAACA 2004, DIP Univ. of Genoa, 2005, pp. 209-216.

Nikolaos G. Tsagarakis is a research fellow at the University of Salford, UK. He works in rehabilitation, medical robotics, and haptic systems. Other research interests include novel actuators, dextrous hands, tactile sensing, and humanoid robots. Tsagarakis received his PhD in robotics and haptic technology from the University of Salford. He is a member of the IEEE Robotics and Automation Society.

Marco Petrone is a staff member with the High Performance Systems, Visualization Group (VISIT) at the Biocomputing Competence Center (CINECA). His research interests include scientific visualization, biomedical applications, multimodal interaction, and the multimodal application framework (openMAF). Petrone received a degree in computer engineering from the University of Padova, Italy.

Debora Testi is a researcher at the Laboratorio di Tecnologia Medica of the Institute of Orthopedics, Rizzoli. Her research interests include bone remodeling, osteoporosis, femoral neck fractures, and software for computer-aided medicine. Testi has a PhD in bioengineering from the University of Bologna. She is a member of the European Society of Biomechanics.

Cinzia Zannoni is a project manager with the High Performance Systems group at the Biocomputing Competence Center (CINECA) and is the coordinator of VISIT, which runs activities in scientific visualization and the development of IT services for the support of scientific communities. Zannoni received a PhD in bioengineering at the University of Bologna.

John O. Gray is a professor of advanced robotics at the University of Salford. His research interests include medical robotics, nonlinear control systems, precision electromagnetic instrumentation, and robotic systems for the food industry. Gray received a PhD from the University of Manchester in control engineering.

Marco G. Viceconti is the technical director of the Laboratorio di Tecnologia Medica of the Institute of Orthopedics, Rizzoli. His research interests are in developing and validating medical technology for orthopedics and traumatology. Viceconti received a PhD in bioengineering from the University of Florence. He is currently the secretary general of the European Society of Biomechanics and a member of the Council of the European Alliance for Medical and Biological Engineering and Science (EAMBES).

Darwin G. Caldwell is the Chair of Advanced Robotics in the Center for Robotics and Automation at the University of Salford. His research interests include innovative actuators and sensors, haptic feedback, force-augmentation exoskeletons, dexterous manipulators, humanoid robotics, biomimetic systems, rehabilitation robotics, and robotic systems for the food industry. Caldwell received a PhD in robotics from the University of Hull. He is chair of the United Kingdom and Republic of Ireland (UKRI) region of the IEEE Robotics and Automation Society.

Readers may contact Nikolaos Tsagarakis at the School of Computer Science and Engineering, University of Salford; [email protected].
