
  • Advances in Experimental Medicine and Biology 1093

    Guoyan Zheng · Wei Tian · Xiahai Zhuang Editors

    Intelligent Orthopaedics: Artificial Intelligence and Smart Image-guided Technology for Orthopaedics

  • Advances in Experimental Medicine and Biology

    Volume 1093

    Editorial Board
    IRUN R. COHEN, The Weizmann Institute of Science, Rehovot, Israel
    ABEL LAJTHA, N.S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
    JOHN D. LAMBRIS, University of Pennsylvania, Philadelphia, PA, USA
    RODOLFO PAOLETTI, University of Milan, Milan, Italy
    NIMA REZAEI, Tehran University of Medical Sciences, Children's Medical Center Hospital, Tehran, Iran

  • More information about this series at http://www.springer.com/series/5584


  • Guoyan Zheng · Wei Tian · Xiahai Zhuang, Editors

    Intelligent Orthopaedics: Artificial Intelligence and Smart Image-guided Technology for Orthopaedics

  • Editors
    Guoyan Zheng, University of Bern, Bern, Switzerland

    Xiahai Zhuang, Fudan University, Shanghai, China

    Wei Tian, Beijing Jishuitan Hospital, Peking University, Beijing, China

    ISSN 0065-2598          ISSN 2214-8019 (electronic)
    Advances in Experimental Medicine and Biology
    ISBN 978-981-13-1395-0          ISBN 978-981-13-1396-7 (eBook)
    https://doi.org/10.1007/978-981-13-1396-7

    Library of Congress Control Number: 2018958708

    © Springer Nature Singapore Pte Ltd. 2018
    This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
    The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
    The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

    This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore


  • Contents

    1 Computer-Aided Orthopaedic Surgery: State-of-the-Art and Future Perspectives ... 1
      Guoyan Zheng and Lutz-P. Nolte

    2 Computer-Aided Orthopedic Surgery: Incremental Shift or Paradigm Change? ... 21
      Leo Joskowicz and Eric J. Hazan

    3 CAMISS Concept and Its Clinical Application ... 31
      Wei Tian, Yajun Liu, Mingxing Fan, Jingwei Zhao, Peihao Jin, and Cheng Zeng

    4 Surgical Navigation in Orthopedics: Workflow and System Review ... 47
      Chidozie H. Ewurum, Yingying Guo, Seang Pagnha, Zhao Feng, and Xiongbiao Luo

    5 Multi-object Model-Based Multi-atlas Segmentation Constrained Grid Cut for Automatic Segmentation of Lumbar Vertebrae from CT Images ... 65
      Weimin Yu, Wenyong Liu, Liwen Tan, Shaoxiang Zhang, and Guoyan Zheng

    6 Deep Learning-Based Automatic Segmentation of the Proximal Femur from MR Images ... 73
      Guodong Zeng and Guoyan Zheng

    7 Muscle Segmentation for Orthopedic Interventions ... 81
      Naoki Kamiya

    8 3X-Knee: A Novel Technology for 3D Preoperative Planning and Postoperative Evaluation of TKA Based on 2D X-Rays ... 93
      Guoyan Zheng, Alper Alcoltekin, Benedikt Thelen, and Lutz-P. Nolte

    9 Atlas-Based 3D Intensity Volume Reconstruction from 2D Long Leg Standing X-Rays: Application to Hard and Soft Tissues in Lower Extremity ... 105
      Weimin Yu and Guoyan Zheng

    10 3D Ultrasound for Orthopedic Interventions ... 113
      Ilker Hacihaliloglu

    11 A Novel Ultrasound-Based Lower Extremity Motion Tracking System ... 131
      Kenan Niu, Victor Sluiter, Jasper Homminga, André Sprengers, and Nico Verdonschot

    12 Computer-Assisted Planning, Simulation, and Navigation System for Periacetabular Osteotomy ... 143
      Li Liu, Klaus Siebenrock, Lutz-P. Nolte, and Guoyan Zheng

    13 Biomechanical Optimization-Based Planning of Periacetabular Osteotomy ... 157
      Li Liu, Klaus Siebenrock, Lutz-P. Nolte, and Guoyan Zheng

    14 Biomechanical Guidance System for Periacetabular Osteotomy ... 169
      Mehran Armand, Robert Grupp, Ryan Murphy, Rachel Hegman, Robert Armiger, Russell Taylor, Benjamin McArthur, and Jyri Lepisto

    15 Gravity-Assisted Navigation System for Total Hip Arthroplasty ... 181
      Guoyan Zheng

    16 3D Visualization and Augmented Reality for Orthopedics ... 193
      Longfei Ma, Zhencheng Fan, Guochen Ning, Xinran Zhang, and Hongen Liao

    17 Intelligent HMI in Orthopedic Navigation ... 207
      Guangzhi Wang, Liang Li, Shuwei Xing, and Hui Ding

    18 Patient-Specific Surgical Guidance System for Intelligent Orthopaedics ... 225
      Manuela Kunz and John F. Rudan

    19 Intelligent Control for Human-Robot Cooperation in Orthopedics Surgery ... 245
      Shaolong Kuang, Yucun Tang, Andi Lin, Shumei Yu, and Lining Sun

    20 Multilevel Fuzzy Control Based on Force Information in Robot-Assisted Decompressive Laminectomy ... 263
      Xiaozhi Qi, Yu Sun, Xiaohang Ma, Ying Hu, Jianwei Zhang, and Wei Tian

    21 Potential Risk of Intelligent Technologies in Clinical Orthopedics ... 281
      Yajun Liu

    22 Clinical Application of Navigation in the Surgical Treatment of a Pelvic Ring Injury and Acetabular Fracture ... 289
      Masaki Takao, Hidetoshi Hamada, Takashi Sakai, and Nobuhiko Sugano

    23 Patient-Specific Surgical Guide for Total Hip Arthroplasty ... 307
      Takashi Sakai

    24 Computer Navigation in Orthopaedic Tumour Surgery ... 315
      Kwok-Chuen Wong, Xiaohui Niu, Hairong Xu, Yuan Li, and Shekhar Kumta

    25 Sensor-Based Soft Tissue Balancing in Total Knee Arthroplasty ... 327
      Jimmy Chow, Tsun Yee Law, and Martin Roche

    26 Implant Orientation Measurement After THA Using the EOS X-Ray Image Acquisition System ... 335
      Kunihiko Tokunaga, Masashi Okamoto, and Kenji Watanabe

    27 3D Printing in Spine Surgery ... 345
      Hong Cai, Zhongjun Liu, Feng Wei, Miao Yu, Nanfang Xu, and Zihe Li

  • 1 Computer-Aided Orthopaedic Surgery: State-of-the-Art and Future Perspectives

    Guoyan Zheng and Lutz-P. Nolte

    Abstract

    Introduced more than two decades ago, computer-aided orthopaedic surgery (CAOS) has emerged as a new and independent area, due to the importance of treatment of musculoskeletal diseases in orthopaedics and traumatology, increasing availability of different imaging modalities and advances in analytics and navigation tools. The aim of this chapter is to present the basic elements of CAOS devices and to review state-of-the-art examples of different imaging modalities used to create the virtual representations, of different position tracking devices for navigation systems, of different surgical robots, of different methods for registration and referencing, and of CAOS modules that have been realized for different surgical procedures. Future perspectives will be outlined. It is expected that the recent advancement on smart instrumentation, medical robotics, artificial intelligence, machine learning, and deep learning techniques, in combination with big data analytics, may lead to smart CAOS systems and intelligent orthopaedics in the near future.

    G. Zheng (corresponding author) · L.-P. Nolte
    Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland
    e-mail: [email protected]

    Keywords

    Computer-aided orthopaedic surgery (CAOS) · Smart instrumentation · Medical robotics · Artificial intelligence · Machine learning · Deep learning · Big data analytics · Intelligent orthopaedics

    1.1 Introduction

    The human musculoskeletal system is an organ system that includes the bones of the skeleton and the cartilages, ligaments, and other connective tissues that bind tissues and organs together. The main functions of this system are to provide form, support, stability, and movement to the body. Bones, besides supporting the weight of the body, work together with muscles to maintain body position and to produce controlled, precise movements. Musculoskeletal disease is among the most common causes of severe long-term disability and physical pain in industrialized societies [1]. The impact and importance of musculoskeletal diseases are critical not only for individual health and mobility but also for social functioning and productivity and economic growth on a larger scale, reflected by the proclamation of the Bone and Joint Decade 2000–2010 [1].



    Both traumatology and orthopaedic surgery aim at the treatment of musculoskeletal tissues. Surgical steps such as the placement of an implant component, the reduction and fixation of a fracture, ligament reconstruction, osteotomy, tumour resection, and the cutting or drilling of bone should ideally be carried out as precisely as possible. Not only will optimal precision improve the post-operative outcome of the treatment, but it will also minimize the risk factors for intra- and post-operative complications. To this end, a large number of pure mechanical guides have been developed for various clinical applications. The pure mechanical guides, though easy to use and easy to handle, do not respect the individual patient's morphology. Thus, their general benefit has been questioned (see for example [2]). Additionally, surgeons often encounter the challenge of limited visibility of the surgical situs, which makes it difficult to achieve the intended procedure as accurately as desired. Moreover, the recent trend towards increased minimally invasive surgery makes it more and more important to gain feedback about surgical actions that take place subcutaneously. Just as a Global Positioning System (GPS)-based car navigation provides visual instruction to a driver by displaying the location of the car on a map, a computer-aided orthopaedic surgery (CAOS) module allows the surgeon to get real-time feedback about the performed surgical actions using information conveyed through a virtual scene of the situs presented on a display device [3, 4]. Parallel to the CAOS module to potentially improve surgical outcome is the employment of surgical robots that actively or semi-actively participate in the surgery [5].

    Introduced more than two decades ago [3–5], CAOS has emerged as a new and independent area and stands for approaches that use computer-enabled tracking systems or robotic devices to improve visibility to the surgical field and increase application accuracy in a variety of surgical procedures. Although CAOS modules use numerous technical methods to realize individual aspects of a procedure, their basic conceptual design is very similar. They all involve three major components: a therapeutic object (TO in abbreviation, which is the target of the treatment), a virtual object (VO in abbreviation, which is the virtual representation in the planning and navigation computer), and a so-called navigator that links both objects. For reasons of simplicity, the term "CAOS system" will be used within this article to refer to both navigation systems and robotic devices.

    The central element of each CAOS system is the navigator. It is a device that establishes a global, three-dimensional (3-D) coordinate system (COS) in which the target is to be treated and the current location and orientation of the utilized end effectors (EE) are mathematically described. End effectors are usually passive surgical instruments but can also be semi-active or active devices. One of the main functions of the navigator is to enable the transmission of positional information between the end effectors, the TO, and the VO. For robotic devices, the robot itself plays the role of the navigator, while for surgical navigation a position tracking device is used.

    For the purpose of establishment of a CAOS system through coactions of these three entities, three key procedural requirements have to be fulfilled. The first is the calibration of the end effectors, which means to describe the end effectors' geometry and shape in the coordinate system of the navigator. For this purpose, it is required to establish physically a local coordinate system at the end effectors. When an optical tracker is used, this is done via rigid attachment of three or more optical markers onto each end effector. The second is registration, which aims to provide a geometrical transformation between the TO and the VO in order to display the end effector's localization with respect to the virtual representation, just like the display of the location of a car on a map in a GPS-based navigation system. The geometrical transformation could be rigid or non-rigid. In literature, a wide variety of registration concepts and associated algorithms exist (see the next section for more details). The third key ingredient to a CAOS system is referencing, which is necessary to compensate for possible motion of the navigator and/or the TO during the surgical actions to be controlled. This is done by either attaching a so-called dynamic reference base (DRB) holding three or more optical markers to the TO or immobilizing the TO with respect to the navigator.
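    A common way to realize the first requirement for a pointer-like end effector is pivot calibration: the tool tip is held in a fixed divot while the tracker records a series of poses of the tool's marker shield. The following is a minimal, illustrative least-squares sketch in Python/NumPy; the variable names and synthetic test data are ours and are not taken from any particular CAOS product.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares pivot calibration.

    While the tool tip rests in a fixed divot, every tracked pose satisfies
        R_i @ p_tip + t_i = p_pivot.
    Stacking all poses gives A @ [p_tip, p_pivot] = b, solved in one shot.
    """
    A = np.vstack([np.hstack([R, -np.eye(3)]) for R in rotations])
    b = np.hstack([-t for t in translations])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]   # tip offset in the tool frame, pivot point in the tracker frame

# Synthetic check: a known tip offset and 20 random marker poses touching the same pivot.
rng = np.random.default_rng(0)
tip_true = np.array([0.0, 0.0, 150.0])      # e.g. a 150 mm pointer (made-up value)
pivot_true = np.array([100.0, 50.0, 20.0])
Rs, ts = [], []
for _ in range(20):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
    Q *= np.sign(np.linalg.det(Q))                 # force a proper rotation (det = +1)
    Rs.append(Q)
    ts.append(pivot_true - Q @ tip_true)
tip_est, pivot_est = pivot_calibration(Rs, ts)
print(np.allclose(tip_est, tip_true), np.allclose(pivot_est, pivot_true))  # True True
```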

    The rest of the chapter is organized as follows. Section 1.2 will review the state-of-the-art examples of basic elements of CAOS systems. Section 1.3 will present clinical fields of applications. In Sect. 1.4, future perspectives will be outlined, followed by conclusion in Sect. 1.5.

    1.2 Basic Elements of CAOS Systems

    1.2.1 Virtual Object

    The VO in each CAOS system is defined as a sufficiently realistic representation of the musculoskeletal structures that allows the surgeon to plan the intended intervention, as exemplified in Fig. 1.1a. Intra-operatively, it also serves as the "background" into which the measured position of a surgical instrument can be visualized (see Fig. 1.1b for an example). Though most of the time the VO is derived from image data of the patient, it can also be created directly from intra-operative digitization without using any medical image data. Below, detailed examples of different forms of VOs will be reviewed.

    Fig. 1.1 Example of CT-based navigational feedback. These screenshots show a CT-based CAOS system during pre-operative planning (a) and intra-operative navigation (b) of pedicle screw placement. (Courtesy of Brainlab AG, Munich, Germany)

    When the VO is derived from medical image data, these data may be acquired at two points in time: either pre-operatively or intra-operatively. Two decades ago, the VOs of the majority of CAOS systems were derived from pre-operatively acquired CT scans, and a few groups also tried to use magnetic resonance imaging (MRI) [6, 7]. In comparison with MRI, CT has clear advantages of excellent bone-soft tissue contrast and no geometrical distortion, despite its acquisition inducing radiation exposure to the patient. Soon after the introduction of the first CAOS systems, the limitations of pre-operative VOs were observed, which led to the introduction of intra-operative imaging modalities. More specifically, the bony morphology may have changed between the time of image acquisition and the actual surgical procedure. As a consequence, the VO may not necessarily correspond to the TO any more, leading to unpredictable inaccuracies during navigation or robotic procedures. This effect can be particularly adverse for traumatology in the presence of unstable fractures. To overcome this problem in the field of surgical navigation, the use of intra-operative CT scanning has been proposed [8], but the infrastructural changes that are required for the realization of this approach are tremendous, often requiring considerable reconstruction of a hospital's facilities. This has motivated the development of navigation systems based on fluoroscopic images [9–11]. The image intensifier is a well-established device during orthopaedic and trauma procedures but has the limitations that the images generated with a fluoroscope are usually distorted and that one-dimensional information gets lost due to image projection. To use these images as VOs therefore requires the calibration of the fluoroscope, which aims to compute the image projection model and to compensate for the image distortion [9–11]. The resultant systems are therefore known as "fluoroscopy-based navigation systems" in literature [9–11]. An additional feature offered by a fluoroscopy-based navigation system is that multiple images acquired from different positions are co-registered to a common coordinate system established on the target structure via the DRB technique. Such a system can thus provide visual feedback just like the use of multiple fluoroscopes placed at different positions in constant mode but without the associated radiation exposure, which is a clear advantage (see Fig. 1.2 for an example). This technique is therefore also known as "virtual fluoroscopy" [11]. Despite the fact that in such a system only two-dimensional (2-D) projected images with low contrast are available, the advantages offered by a fluoroscopy-based navigation system preponderate for a number of clinical applications in orthopaedics and traumatology.

    Fig. 1.2 Example of fluoroscopy-based navigation. This screenshot shows the fluoroscopy-based navigation for distal locking of an intramedullary nail. (Courtesy of Brainlab AG, Munich, Germany)
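    In its simplest form, calibrating the projection model of a fluoroscope amounts to estimating a projection matrix from phantom fiducials visible in the image (distortion correction is a separate step and is omitted here). The Python sketch below uses the classic direct linear transform (DLT) on an invented synthetic camera and fiducial set; it illustrates the principle only and is not the calibration procedure of any cited system.

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate the 3x4 projection matrix P (up to scale) with the direct linear
    transform, from >= 6 non-coplanar 3-D fiducials and their 2-D image points."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)          # right null-space vector

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: a made-up intrinsic matrix, a pure translation, and 8 phantom fiducials.
rng = np.random.default_rng(1)
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [500.0]])])
pts3d = rng.uniform(-50, 50, size=(8, 3))
pts2d = np.array([project(P_true, X) for X in pts3d])
P_est = dlt_projection_matrix(pts3d, pts2d)
print(max(np.linalg.norm(project(P_est, X) - x) for X, x in zip(pts3d, pts2d)))  # ~1e-9
```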

    In order to address the 2-D projection limitation of a fluoroscopy-based navigation system, a new imaging device was introduced [12] that enables the intra-operative generation of 3-D fluoroscopic image data. It consists of a motorized, isocentric C-arm that acquires series of 50–100 2-D projections and reconstructs from them 13 × 13 × 13 cm³ volumetric datasets which are comparable to CT scans. Being initially advocated primarily for surgery at the extremities, this "fluoro-CT" has been adopted for usage with a navigation system and has been applied to several anatomical areas already [13, 14]. As a major advantage, the device combines the availability of 3-D imaging with intra-operative data acquisition. "Fluoro-CT" technology is under continuous development involving smaller and non-isocentric C-arms, "closed" C-arm (i.e. O-arm™) design [15, 16], faster acquisition speeds, larger field of view, and also flat panel technology.

    Fig. 1.3 Navigation using the surgeon-defined anatomy approach. This virtual model of a patient's knee is generated intra-operatively by digitizing relevant structures. Although a very abstract representation, it provides sufficient information to enable navigated high tibial osteotomy

    A last category of navigation systems functions without any radiological images as VOs. Instead, the tracking capabilities of the system are used to acquire a graphical representation of the patient's anatomy by intra-operative digitization. By sliding the tip of a tracked instrument on the surface of a surgical object, the spatial location of points on the surface can be recorded. Surfaces can then be generated from the recorded sparse point clouds and used as the virtual representation of the surgical object. Because this model is generated by the operator, the technique is known as "surgeon-defined anatomy" (SDA) (Fig. 1.3). It is particularly useful when soft tissue structures such as ligaments or cartilage boundaries are to be considered that are difficult to identify on CTs or fluoroscopic images [17]. Moreover, with SDA-based systems, some landmarks can be acquired even without direct access to the anatomy. For instance, the centre of the femoral head, which is an important landmark during total hip and knee replacement, can be calculated from a recorded passive rotation of the leg about the acetabulum. It should be noted that the generated representations are often rather abstract and not easy to interpret, as exemplified in Fig. 1.3. This has motivated the development of the so-called "bone morphing" techniques [18, 19], which aim to derive a patient-specific model from a generic statistical form of the target anatomical structure and a set of sparse points that are acquired with the SDA technique [20]. As a result, a realistic virtual model of the target structure can be presented and used as a VO without any conventional image acquisition (Fig. 1.4).

    Fig. 1.4 An example of bone morphing. Screenshots of different stages of an intra-operative bone morphing process. (a) Point acquisition; (b) calculation of morphed model; and (c) verification of final result. (Courtesy of Brainlab AG, Munich, Germany)
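    Conceptually, bone morphing fits a statistical shape model (mean shape plus principal modes of variation) to the sparsely digitized points. The Python sketch below shows only this fitting step on synthetic data and assumes that point-to-vertex correspondences and the rigid pose are already known; real systems estimate these as well, so treat it as an illustration rather than a description of any cited method.

```python
import numpy as np

def morph_to_points(mean_shape, modes, targets, vertex_idx, reg=1e-3):
    """Fit shape-model weights b so that (mean + modes @ b) passes close to the
    digitized points; Tikhonov regularization keeps the instance near the mean shape."""
    rows = np.concatenate([(3 * i, 3 * i + 1, 3 * i + 2) for i in vertex_idx])
    A = modes[rows]                                  # model restricted to digitized vertices
    r = targets.reshape(-1) - mean_shape[rows]
    b = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ r)
    return mean_shape + modes @ b                    # dense patient-specific estimate

# Toy model: 100 vertices, 5 modes; 8 digitized points taken from a known instance.
rng = np.random.default_rng(2)
n_vertices, n_modes = 100, 5
mean_shape = rng.normal(size=3 * n_vertices)
modes = rng.normal(size=(3 * n_vertices, n_modes))
truth = mean_shape + modes @ rng.normal(size=n_modes)
vertex_idx = rng.choice(n_vertices, size=8, replace=False)
targets = truth.reshape(n_vertices, 3)[vertex_idx]
recon = morph_to_points(mean_shape, modes, targets, vertex_idx)
print(np.linalg.norm(recon - truth) / np.linalg.norm(truth))   # small relative error
```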

    1.2.2 Registration

    Position data that is used intra-operatively to display the current tool location (navigation system) or to perform automated actions according to a pre-operative plan (robot) are expressed in the local coordinate system of the VO. In general, this coordinate system differs from the one in which the navigator operates intra-operatively. In order to bridge this gap, the mathematical relationships between both coordinate spaces need to be determined. When pre-operative images are used as VOs, this step is performed interactively by the surgeon during the registration, also known as matching. A wide variety of different approaches have been developed and realized following numerous methodologies [21].

    Early CAOS systems implemented paired-point matching and surface matching [22]. The operational procedure for paired-point matching is simple. Pairs of distinct points are defined pre-operatively in the VO and intra-operatively in the TO. The points on the VO are usually identified pre-operatively using the computer mouse, while the corresponding points on the TO are usually digitized intra-operatively with a tracked probe. In the case of a navigation system, the probe is tracked by the navigator, and for a robotic surgery, it is mounted onto the robot's actuator [23]. Although paired-point matching is easy to solve mathematically, the accuracy of the resultant registration is low. This is due to the fact that the accuracy of paired-point matching depends on an optimal selection of the registration points and the exact identification of the associated pairs, which is error prone. One obvious solution to this problem is to implant artificial objects to create easily and exactly identifiable fiducials for an accurate paired-point matching [23]. However, the requirement of implanting these objects before the intervention causes an extra operation as well as associated discomfort and infection risk for the patient [24]. Consequently, none of these methods have gained wide clinical acceptance. The other alternative that has been widely adopted in early CAOS systems is to complement the paired-point matching with surface matching [25, 26], which does not require implanting any artificial object and only uses the surfaces of the VO as a basis for registration.
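    The mathematical core of paired-point matching is a small least-squares problem with a closed-form SVD solution. Below is a minimal NumPy sketch on synthetic landmark pairs; it illustrates the computation, not the workflow of any particular navigation system.

```python
import numpy as np

def paired_point_registration(src, dst):
    """Closed-form rigid registration (SVD method): find R, t minimizing
    sum_i ||R @ src[i] + t - dst[i]||^2 for corresponding landmark pairs."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Toy check: four landmark pairs related by a known rotation and translation.
rng = np.random.default_rng(3)
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([5.0, -2.0, 10.0])
vo_landmarks = rng.uniform(-40, 40, size=(4, 3))     # picked in the virtual object
to_landmarks = vo_landmarks @ R_true.T + t_true      # digitized on the patient
R_est, t_est = paired_point_registration(vo_landmarks, to_landmarks)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```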


    Other methods to compute the registration transformation without the need for extensive pre-operative preparation utilize intra-operative imaging such as calibrated fluoroscopic images or calibrated ultrasound images. As described above, a limited number of fluoroscopic images (e.g. two) acquired at different positions are calibrated and co-registered to a common coordinate system established on the target structure. A so-called "2-D-3-D registration" procedure can then be used to find the geometrical transformation between the common coordinate system and a pre-operatively acquired 3-D CT dataset by maximizing a similarity measurement between the 2-D projective representations and the associated digitally reconstructed radiographs (DRRs) that are created by simulating X-ray projections (see Fig. 1.5 for an example). Intensity-based as well as feature-based approaches have been proposed before. For a comprehensive review of different 2-D-3-D registration techniques, we refer to [21].

    Fig. 1.5 An example of CT-fluoro matching. Screenshots of different stages of a CT-fluoro matching process. (a) Preregistration for CT-fluoro matching and (b) results of CT-fluoro matching. (Courtesy of Brainlab AG, Munich, Germany)
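    The intensity-based variant can be pictured as an optimization loop: generate a DRR for a candidate pose, compare it with the acquired fluoroscopic image via a similarity metric, and adjust the pose. The toy Python sketch below uses a parallel projection of a synthetic ellipsoid, normalized cross-correlation, and a derivative-free optimizer; real systems use calibrated perspective projectors, better metrics, and multi-resolution schemes, so this is purely illustrative.

```python
import numpy as np
from scipy import ndimage, optimize

def drr(volume, angles_deg):
    """Toy DRR: rotate the volume about two axes, then integrate along one axis
    (a parallel projection stands in for the calibrated perspective projection)."""
    rot = ndimage.rotate(volume, angles_deg[0], axes=(1, 2), reshape=False, order=1)
    rot = ndimage.rotate(rot, angles_deg[1], axes=(0, 2), reshape=False, order=1)
    return rot.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation used as the similarity measurement."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

# Synthetic "CT": an asymmetric bright ellipsoid stands in for a bone.
zz, yy, xx = np.mgrid[:48, :48, :48]
ct = ((zz - 24) ** 2 / 400 + (yy - 24) ** 2 / 144 + (xx - 24) ** 2 / 36 < 1).astype(float)

pose_true = np.array([7.0, -4.0])            # "unknown" intra-operative orientation (deg)
fluoro = drr(ct, pose_true)                  # stands in for the calibrated fluoro image

res = optimize.minimize(lambda p: -ncc(drr(ct, p), fluoro),
                        x0=np.zeros(2), method="Nelder-Mead",
                        options={"xatol": 0.05})
print(res.x)                                 # should approach [7, -4] on this toy problem
```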

    Another alternative is the employment of intra-operative ultrasonography. If an ultrasound probe is tracked by a navigator and its measurements are calibrated, it may serve as a spatial digitizer with which points or landmarks on the surfaces of certain subcutaneous bony structures may be acquired. This is different from the touch-based digitization done with a conventional probe, which usually requires an invasive exposure of the surfaces of the target structures. Two different types of tracked ultrasound probes are available. A (amplitude)-mode ultrasound probes can measure the depth along the acoustic axis of the device. Placed on the patient's skin, they can measure percutaneously the distance to tissue borders, and the resulting point coordinates can be used as inputs to any feature-based registration algorithm. The applicability of this technique has been demonstrated previously but with certain limitations which prevent its wide usage [27, 28]. More specifically, the accuracy of the A-mode ultrasound probe-based digitization depends on how well the probe can be placed perpendicularly to the surfaces of the target bony structures, which is not an easy task when the subcutaneous soft tissues are thick. Moreover, the velocity of sound during the probe calibration is usually different from the velocity of sound when the probe is used for digitization, as the latter depends on the properties of the traversed tissues. Such a velocity difference will lead to unpredictable inaccuracies when the probe is used to digitize deeply located structures. As a consequence, the successful application of this technique remains limited to a narrow field of application. In contrast to an A-mode probe, a B (brightness)-mode ultrasound probe scans a fan-shaped area. It is therefore able to detect also surfaces that are examined from an oblique direction, though the errors caused by the velocity difference still persist. In order to extract the relevant information for the registration of pre-operative CT scans, the resulting, usually noisy images need to be processed [29]. As for the intra-operative processing of fluoroscopic images, the use of B-mode ultrasound for registration is not reliable in every case and consequently remains the subject of CAOS research [30, 31].
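    Once such percutaneously digitized surface points are available, they are typically matched to the VO surface with an iterative closest point (ICP)-style surface matching. The sketch below is a plain point-to-point ICP in Python on synthetic data; it assumes a reasonable initial alignment (e.g. from paired-point matching) and is not the specific method of the cited systems.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(model_pts, digitized_pts, iterations=30):
    """Plain point-to-point ICP: repeatedly match each digitized point to the nearest
    point of the (densely sampled) model surface, then update the rigid transform
    with the closed-form SVD solution. Needs a reasonable initial alignment."""
    tree = cKDTree(model_pts)
    R, t = np.eye(3), np.zeros(3)
    src = np.asarray(digitized_pts, dtype=float).copy()
    for _ in range(iterations):
        _, nn = tree.query(src)                       # nearest-neighbour correspondences
        matched = model_pts[nn]
        src_c, dst_c = src - src.mean(0), matched - matched.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = matched.mean(0) - R_step @ src.mean(0)
        src = src @ R_step.T + t_step                 # apply the incremental update
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Toy data: an ellipsoidal "bone surface", 60 digitized points slightly rotated and shifted.
rng = np.random.default_rng(4)
d = rng.normal(size=(5000, 3))
model = d / np.linalg.norm(d, axis=1, keepdims=True) * np.array([40.0, 25.0, 15.0])
picks = model[rng.choice(len(model), 60, replace=False)]
a = np.deg2rad(4.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
digitized = picks @ Rz.T + np.array([1.5, -1.0, 2.0])
R_est, t_est = icp(model, digitized)
aligned = digitized @ R_est.T + t_est
print(np.abs(aligned - picks).mean())      # residual should drop well below the applied offset
```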

    It is worth pointing out that if the VO is generated intra-operatively, registration is an inherent process [21]. This is due to the fact that, since the imaging device is tracked during data acquisition, the position of any acquired image is recorded with respect to the local coordinate system established on the TO. The recorded device position, together with the additional image calibration process, automatically establishes the spatial relationship between the VO and the TO during image acquisition, which is a clear advantage over the interactive registration in the case of pre-operative images serving as VOs. Therefore, registration is not an issue when using intra-operative CT, 2-D or 3-D fluoroscopy, the O-arm, or the SDA concept.
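    In transform terms, this "inherent" registration is just a composition of poses that are all known at acquisition time: the tracked pose of the imaging device, the device-to-image calibration, and the tracked pose of the DRB on the target structure. The short Python sketch below spells this out with hypothetical numeric values chosen only for illustration.

```python
import numpy as np

def rt(rz_deg=0.0, t=(0.0, 0.0, 0.0)):
    """Build a 4x4 rigid transform from a rotation about z (degrees) and a translation."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

def inv(T):
    """Invert a 4x4 rigid transform."""
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

# All three transforms are known at acquisition time (hypothetical example values):
T_tracker_drb    = rt(10.0, (200.0, 50.0, 0.0))    # DRB on the target structure, tracked
T_tracker_device = rt(-30.0, (600.0, 0.0, 300.0))  # imaging device, tracked
T_device_image   = rt(0.0, (0.0, 0.0, 120.0))      # from offline image calibration

# Image-to-anatomy registration falls out as a composition -- no interactive matching:
T_drb_image = inv(T_tracker_drb) @ T_tracker_device @ T_device_image
print(T_drb_image[:3, 3])   # image origin expressed in the DRB (anatomy) frame
```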

    Radermacher et al. [32] introduced an alternative way to match pre-operative planning with the intra-operative situation using individual templates. The principle of individualized templates is to create customized templates based on patient-specific 3-D bone models that are normally segmented from pre-operative 3-D data such as a CT or MRI scan. One feature of the individual templates is that small reference areas of the bone structures are integrated into the templates as the contact faces. By this means, the planned position and orientation of the template in spatial relation to the bone are stored in a structural way and can be reproduced intra-operatively by adjusting the contact faces of the template until an exact fit to the bone is achieved. By integrating holes and/or slots, individualized templates function as tool guides, e.g. for the preparation of pedicle screw holes [32], or as cutting jigs used in total knee and hip replacement surgery [33–35].

    1.2.3 Navigator

    Registration closes the gap between VO and TO. The navigator enables this connection by providing a global coordinate space. In addition, it links the surgical end effectors, with which a procedure is carried out, to the TO that they act upon. From a theoretical standpoint, it is the only element in which surgical navigation systems and surgical robotic systems differ.

    1.2.3.1 Robots
    For this type of CAOS technology, the robot itself is the navigator. Intra-operatively, it has to be registered to the VO in order to realize the plan that is defined in the pre-operative image dataset. The end effectors of a robot are usually designed to carry out specific tasks as part of the therapeutic treatment. Depending on how the end effectors of a robot act on the patient, two different types of robots can be found in literature. The so-called active robots conduct a specific task autonomously without additional support by the surgeon. Such a system has been applied for total joint replacement [5], but its clinical benefit has been strongly questioned [36]. For traumatology applications, the use of active robots has only been explored in the laboratory setting [37, 38]. One possible explanation is that the nature of fracture treatment is an individualized process that does not include many steps that an active robot can repetitively carry out.

    In contrast to active robotic devices, passive or semi-active robots do not carry out a part of the intervention autonomously but rather guide or assist the surgeon in positioning the surgical tools. At present there are two representatives of this class, both for bone resection during total knee arthroplasty (TKA). The Navio system (Blue Belt Technologies Inc., Pittsburgh, PA, USA) [39] is a hand-held semi-active robotic technology for bone shaping that allows a surgeon to move freely in order to resect the bone as long as this motion stays within a pre-operatively defined safety volume. The Mako system [40] is a passive robotic arm system providing orientational and tactile guidance. Both the Navio and the Mako systems require additional tracking technology as described in the next sub-section. During the surgical procedure, the system is under direct surgeon control and gives real-time tactile feedback to the surgeon. Other semi-active robots such as SpineAssist (Mazor Robotics Ltd., Israel) can be seen as intelligent gauges that place, for example, cutting jigs or drilling guides automatically [41, 42].

    1.2.3.2 Tracker
    The navigator of a surgical navigation system is a spatial position tracking device. It determines the location and orientation of objects and provides these data as 3-D coordinates or 3-D rigid transformations. Although a number of tracking methods based on various physical media, e.g. acoustic, magnetic, optical, and mechanical methods, have been used in the early surgical navigation systems, most of today's products rely upon optical tracking of objects using operating room (OR) compatible infrared light that is either actively emitted or passively reflected from the tracked objects. To track surgical end effectors with this technology then requires the tools to be adapted with reference bases holding either light-emitting diodes (LED, active) or light-reflecting spheres or plates (passive). Tracking patterns with known geometry by means of video images has been suggested [43, 44] as an inexpensive alternative to an infrared-light optical tracker.

    Optical tracking of surgical end effectors requires a direct line of sight between the tracker and the observed objects. This can be a critical issue in the OR setting. The use of electromagnetic tracking systems has been proposed to overcome this problem. This technology involves a homogeneous magnetic field generator that is usually placed near to the surgical situs and the attachment of receiver coils to each of the instruments, allowing measurement of their position and orientation within the magnetic field. This technique senses positions even if objects such as the surgeon's hand are in between the emitter coil and the tracked instrument. However, the homogeneity of the magnetic field can be easily disturbed by the presence of certain metallic objects, causing measurement artefacts that may decrease the achievable accuracy considerably [45, 46]. Therefore, magnetic tracking has been employed only in very few commercial navigation systems and with limited success.

    Recently, inertial measurement unit (IMU)-based navigation devices have attracted more and more interest [47–51]. These devices attempt to combine the accuracy of large-console CAOS systems with the familiarity of conventional alignment methods and have been successfully applied to applications including TKA [47, 48], pedicle screw placement [49], and periacetabular osteotomy (PAO) surgery [50, 51]. With such devices, the line-of-sight issues in optical surgical navigation systems can be completely eliminated. Technical limitations of such devices include (a) relatively lower accuracy in comparison with optical tracking techniques and (b) difficulty in measuring translations.
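    These limitations can be made concrete with a small example: from gyroscope and accelerometer data alone, one can estimate inclination (the gyroscope drifts, the accelerometer is noisy but drift-free, and the two are commonly fused with a complementary filter), while heading and translation remain essentially unobservable without additional references. The Python sketch below, on synthetic data, only illustrates that general principle and does not describe any of the cited devices.

```python
import numpy as np

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (rad) of a static sensor from the measured gravity direction."""
    return np.arctan2(-ax, np.hypot(ay, az)), np.arctan2(ay, az)

def complementary_filter(gyro, accel, dt=0.01, alpha=0.98):
    """Fuse gyro integration (smooth but drifting) with accelerometer tilt (noisy but
    drift-free). Only pitch and roll are observable; heading and translation are not."""
    pitch = roll = 0.0
    history = []
    for (gx, gy, gz), (ax, ay, az) in zip(gyro, accel):
        a_pitch, a_roll = tilt_from_accel(ax, ay, az)
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * a_pitch
        roll = alpha * (roll + gx * dt) + (1.0 - alpha) * a_roll
        history.append((pitch, roll))
    return history

# Toy data: device at rest, gravity along +z, a small gyro bias that would otherwise drift.
n = 500
rng = np.random.default_rng(5)
gyro = rng.normal(0.002, 0.01, size=(n, 3))          # rad/s, slightly biased
accel = np.tile([0.0, 0.0, 9.81], (n, 1))            # m/s^2
print(np.degrees(complementary_filter(gyro, accel)[-1]))   # stays within a fraction of a degree
```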

    1.2.4 Referencing

    Intra-operatively, it is unavoidable that there will be relative motions between the TO and the navigator due to surgical actions. Such motions need to be detected and compensated to secure surgical precision. For this purpose, the operated anatomy is linked to the navigator. For robotic surgery this connection is established as a physical linkage. Large active robots, such as the early machines used for total joint replacement, come with a bone clamp that tightly grips the treated structure or involve an additional multi-link arm, while smaller active and semi-active devices are mounted directly onto the bone. For all other tracker types, bone motion is determined by the attachment of a DRB to the TO [52], which is designed to house infrared LEDs, reflecting markers, acoustic sensors, or electromagnetic coils, depending on the employed tracking technology. Figure 1.6 shows an example of a DRB for an active optical tracking system that is attached to the spinous process of a lumbar vertebra. Since the DRB is used as an indicator to inform the tracker precisely about movements of the operated bone, a stable fixation throughout the entire duration of the procedure is essential.


    Fig. 1.6 Dynamic reference base. A dynamic reference base allows a navigation system to track the anatomical structure that the surgeon is operating on. In the case of spinal surgery, this DRB is usually attached to the processus spinosus with the help of a clamping mechanism. It is essential that it remains rigidly affixed during the entire usage of the navigation system on that vertebra
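    The compensation itself is simple transform algebra: as long as tool poses are expressed relative to the DRB frame rather than the camera frame, any common motion of patient and camera cancels out. The following NumPy sketch, with arbitrary example poses, illustrates that invariance.

```python
import numpy as np

def inv(T):
    """Invert a 4x4 rigid transform."""
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

def tool_in_bone(T_tracker_drb, T_tracker_tool):
    """Express the tool pose in the DRB (bone) frame, cancelling any common motion."""
    return inv(T_tracker_drb) @ T_tracker_tool

def translation(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

# Example poses (arbitrary numbers): the tool sits 50 mm from the DRB along y.
T_drb, T_tool = translation([10.0, 0.0, 0.0]), translation([10.0, 50.0, 0.0])
before = tool_in_bone(T_drb, T_tool)

# The patient (or the camera) moves: both tracked poses change by the same transform.
motion = translation([0.0, 0.0, 30.0])
after = tool_in_bone(motion @ T_drb, motion @ T_tool)

print(np.allclose(before, after))   # True: the navigated, bone-relative pose is unaffected
```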

    1.3 Clinical Fields of Applications

    Since the mid-1990s, when the first CAOS systems were successfully utilized for the insertion of pedicle screws at the lumbar and thoracic spine and for total hip replacement procedures [3, 4], a large number of modules covering a wide range of traumatological and orthopaedic applications have been developed and validated in the laboratory and in clinical trials. Some of them needed to be abandoned, because the anticipated benefit failed to be achieved or the technology proved to be unreliable or too complex to be used intra-operatively. Discussing all these applications would go beyond the focus of this article. Thus, here we focus on a review of the most important applications with the most original technological approaches.

    While there was clearly one pioneering example of robot-assisted orthopaedic surgery, ROBODOC [5], the first spinal navigation systems were realized independently by several research groups, almost in parallel [3, 4, 52–56]. These systems used pre-operative CT scans as the VO, relied upon paired-point and surface matching techniques for registration, and were based on optical or electromagnetic trackers. Their initial clinical success [57–59] boosted the development of new CAOS systems and modules. While some groups tried to use the existing pedicle screw placement systems for other clinical applications, others aimed to apply the underlying technical principle to new clinical challenges by developing highly specialized navigation systems [60, 61]. With the advent of alternative imaging methods for the generation of VOs, the indication for the use of one or the other method was evaluated more critically. For instance, it became evident that lumbar pedicle screw insertion in the standard degenerative case could be carried out with fluoroscopy-based navigation sufficiently accurately, thus avoiding the need for a pre-operative CT.

    A similar development took place for total knee replacement. Initially, this procedure was supported by active [36, 62] and semi-active or passive [39, 40] robots, as well as navigation systems using pre-operative CTs [63], but with a few exceptions, the SDA approach [64] is today's method of choice.

    Fluoroscopy-based navigation still seems to have a large potential to explore new fields of application. The technology has been mainly used in spinal surgery [65]. Efforts to apply it to total hip arthroplasty (THA) [66] and the treatment of long-bone fractures [67] have been commercially less successful. Intra-operative 3-D fluoroscopy or the O-arm has been explored intensively [13–16]. It is expected that with the advent of flat panel technology, the use of fluoro-CT as a virtual object generator will significantly grow [16].

    Recently, computer-assisted surgery using individual templates has gained increasing attention. Initially developed for pedicle screw fixation [32], such a technique has been successfully reintroduced to the market for total knee arthroplasty [33, 68, 69], hip resurfacing [34, 70], total hip arthroplasty [35], and pelvic tumour resection [71, 72] (see Fig. 1.7 for an example). It should be noted that most of the individual templates are produced using additive manufacturing techniques, while most of the associated implants are produced conventionally.

    Fig. 1.7 Patient-specific instrumentation for pelvic tumour resection surgery. These images show the application of patient-specific instrumentation for pelvic tumour treatment. Implant and template manufactured by Mobelife NV, Leuven, Belgium. (a) A pre-operative X-ray radiograph; (b) the implant; (c) the patient-specific guide; (d) a post-operative X-ray radiograph. (Courtesy of Prof. Dr. K. Siebenrock, Inselspital, University of Bern, Switzerland)

    1.4 Future Perspectives

    Despite its touted advantages, such as decreased radiation exposure to the patient and the surgical team for certain surgical procedures and increased accuracy in most situations, surgical navigation has yet to gain general acceptance among orthopaedic surgeons. Although issues related to training, technical difficulty, and learning curve are commonly presumed to be major barriers to the acceptance of surgical navigation, a recent study [73] suggested that surgeons did not select them as major weaknesses. It has been indicated that barriers to adoption of surgical navigation are neither due to a difficult learning curve nor to a lack of training opportunities. The barriers to adoption of navigation are more intrinsic to the technology itself, including intra-operative glitches, unreliable accuracy, frustration with intra-operative registration, and line-of-sight issues. These findings suggest that significant improvements in the technology will be required to improve the adoption rate of surgical navigation. Addressing these issues from the following perspectives may provide solutions in the continuing effort to implement surgical navigation in everyday clinical practice.

  • 2-D or 3-D image stitching. Long-bone fracture reduction and spinal deformity correction are two typical clinical applications that frequently use the C-arm in their operation. Such a surgery usually involves corrective manoeuvres to improve the sagittal or coronal profile. However, intra-operative estimation of the amount of correction is difficult, especially in longer instrumentation. Mostly, anteroposterior (AP) and lateral fluoroscopic images are used but have the disadvantage of depicting only a small portion of the target structure in a single C-arm image due to the limited field of view of a C-arm machine. As such, orthopaedic surgeons nowadays are missing an effective tool to image the entire anatomical structure, such as the spine or long bones, during surgery for assessing the extent of correction. Although radiographs obtained either by using a large field detector or by image stitching can be used to image the entire structure, they are usually not available for intra-operative interventions. One alternative is to develop methods to stitch multiple intra-operatively acquired small fluoroscopic images to be able to display the entire structure at once [74, 75]. Figure 1.8 shows an image stitching example for spinal intervention. The same idea can be extended to 3-D imaging to create a panoramic cone beam computed tomography [76]. At this moment, fast and easy-to-use 2-D or 3-D image stitching systems are still under development, and as the technology evolves, surgical benefits and improved clinical outcomes are expected.

    Fig. 1.8 Image stitching for spinal interventions. Several small field-of-view C-arm images are stitched into one big image to depict the entire spine

  • Image fusion. Fusion of multimodality pre-operative images such as various MRI or CT datasets with intra-operative images would allow for visualization of critical structures such as nerve roots or vascular structures during surgical navigation. Different imaging modalities provide complementary information regarding both anatomy and physiology. The evidence supporting this complementarity has been gained over the last few years with increased interest in the development of platform hardware for multimodality imaging. Because multimodality images by definition contain information obtained using different imaging methods, they introduce new degrees of freedom, raising questions beyond those related to exploiting each single modality separately. Processing multimodality images is then all about enabling modalities to fully interact and inform each other. It is important to choose an analytical model that faithfully represents the link between the modalities without imposing phantom connections or suppressing existing ones. Hence it is important to be as data driven as possible. In practice, this means making the fewest assumptions and using the simplest model, both within and across modalities. Example models include linear relationships between underlying latent variables; use of model-independent priors such as sparsity, non-negativity, statistical independence, low rank, and smoothness; or both. Such a principle has been successfully applied to solving challenging problems in a variety of applications [77]. Despite the evident potential benefit, the knowledge of how to actually exploit the additional diversity that multimodality images offer is currently at its preliminary stage and remains open for exploration.

  • Statistical shape and deformation analysis. Statistical shape and deformation analysis [78] has been shown to be useful for predicting 3-D anatomical shape and structures from sparse point sets that are acquired with the SDA technique. Such a technique is heavily employed in so-called "image-free" navigation systems that are commercially available in the market, mainly for knee and hip surgery. However, with the availability of statistical shape models of other anatomical regions, the technique could be applied to any part of the skeleton. Such approaches bear significant potential for future development of computer navigation technology since they are not at all bound to the classical pointer-based acquisition of bony features. In principle, the reconstruction algorithms can be tuned to any type of patient-specific input, e.g. intra-operatively acquired fluoroscopic images [79] or tracked ultrasound [30], thereby potentially enabling new minimally invasive procedures. Figure 1.9 shows an example of bone surface reconstruction from calibrated fluoroscopic images and a statistical shape model. Moreover, prediction from statistical shape models is possible not only for the geometric shape of an object. Given statistical shape and intensity models, "synthetic CT scans" could be predicted from intra-operatively recorded data after a time-consuming computation. With more and more computations shifted from CPUs to graphics processing units (GPUs), it is expected that statistical shape and deformation analysis-based techniques will be used in more and more CAOS systems [80].

    Fig. 1.9 An example of statistical shape model-based 2-D-3-D reconstruction. Reconstruction of bone surface from two calibrated fluoroscopic images and a statistical shape model using deformable registration

  • Biomechanical modelling. Numerical models of human anatomical structures may help the surgeon during the planning, simulation, and intra-operative phases with the final goal to optimize the outcome of orthopaedic surgical interventions. The terms "physical" or "biomechanical" are often used. While most of the existing biomechanical models serve for the basic understanding of physical phenomena, only a few have been validated for the general prediction of consequences of surgical interventions.

    The situation for patient-specific models is even more complex. To be used in clinical practice, ideally the exact knowledge of the underlying geometrical tissue configuration and associated mechanical properties as well as the loading regime is required as input for appropriate mathematical frameworks. In addition, these models will not only be used pre-operatively but need to function also in near real time in the operating theatre.

    First attempts have been made to incorporate biomechanical simulation and modelling into the surgical decision-making process for orthopaedic interventions. For example, a large spectrum of medical devices exists for correcting deformities associated with spinal disorders. Driscoll et al. [81] developed a detailed volumetric finite element model of the spine to simulate surgical correction of spinal deformities and to assess, compare, and optimize spinal devices. Another example was presented in [82], where the authors showed that with biomechanical modelling the instrumentation configuration can be optimized based on clinical objectives. Murphy et al. [83] presented the development of a biomechanical guidance system (BGS) for periacetabular osteotomy. The BGS aims to provide not only real-time feedback of the joint repositioning but also the simulated joint contact pressures.

    Another approach is the combined use of intra-operative sensing devices with simplified biomechanical models. Crottet et al. [84] introduced a device that intra-operatively measures knee joint forces and moments and evaluated its performance and surgical advantages on cadaveric specimens using a knee joint loading apparatus. Large variation among specimens reflected the difficulty of ligament release and the need for intra-operative force monitoring. A commercial version of such a device (e-LIBRA Dynamic Knee Balancing System, Synvasive Technology, El Dorado Hills, CA, USA) became available in recent years and is clinically used (see, e.g. [85]). It is expected that incorporation of patient-specific biomechanical modelling into CAOS systems, with or without the use of intra-operative sensing devices, may eventually increase the quality of surgical outcomes [86]. Research activities must focus on existing technology limitations and on models of the musculoskeletal apparatus that are not only anatomically but also functionally correct and accurate.

  • Musculoskeletal imaging. Musculoskeletal imaging is defined as the imaging of bones, joints, and connected soft tissues with an extensive array of modalities such as X-ray radiography, CT, ultrasonography, and MRI. For the past two decades, rapid but cumulative advances can be observed in this field, not only for improving diagnostic capabilities with the recent advancement on low-dose X-ray imaging, cartilage imaging, diffusion tensor imaging, MR arthrography, and high-resolution ultrasound but also for enabling image-guided interventions with the introduction of real-time MRI or CT fluoroscopy, molecular imaging with PET/CT, and optical imaging into the operating room [87].

    One recent advancement that has found a lot of clinical applications is the EOS 2-D/3-D imaging system (EOS imaging, Paris, France), which was introduced to the market in 2007. The EOS 2-D/3-D imaging system [88] is based on the Nobel Prize-winning work of French physicist Georges Charpak on the multiwire proportional chamber, which is placed between the X-rays emerging from the radiographed object and the distal detectors. Each of the emerging X-rays generates a secondary flow of photons within the chamber, which in turn stimulate the distal detectors that give rise to the digital image. This electronic avalanche effect explains why a low dose of primary X-ray beam is sufficient to generate a high-quality 2-D digital radiograph, making it possible to cover a field of view of 175 cm by 45 cm in a single acquisition of about 20 s duration [89]. With orthogonally co-linked, vertically movable, slot-scanning X-ray tube/detector pairs, EOS has the benefit that it can take a pair of calibrated posteroanterior (PA) and lateral (LAT) images simultaneously [90]. EOS allows the acquisition of images while the patient is in an upright, weight-bearing (standing, seated, or squatting) position and can image the full length of the body, removing the need for digital stitching/manual joining of multiple images [91].

    The quality and nature of the image generated by the EOS system are comparable to or even better than computed radiography (CR) and digital radiography (DR) but with much lower radiation dosage [90]. It was reported by Illés et al. [90] that the absorbed radiation dose by various organs during a full-body EOS 2-D/3-D examination required to perform a surface 3-D reconstruction was 800–1000 times less than the amount of radiation during a typical CT scan required for a volumetric 3-D reconstruction. When compared with conventional or digitalized radiographs [92], the EOS system allows a reduction of the X-ray dose of an order of 80–90%. The unique feature of simultaneously capturing a pair of calibrated PA and LAT images of the patient allows a full 3-D reconstruction of the subject's skeleton [90, 93, 94]. This in turn provides over 100 clinical parameters for pre- and post-operative surgical planning [90]. With a phantom study, Glaser et al. [95] assessed the accuracy of EOS 3-D reconstruction by comparing it with 3-D CT. They reported a mean shape reconstruction accuracy of 1.1 ± 0.2 mm (maximum 4.7 mm) with a 95% confidence interval of 1.7 mm. They also found that there was no significant difference in each of their analysed parameters (p > 0.05) when the phantom was placed in different orientations in the EOS machine. The reconstruction of 3-D bone models allows analysis of subject-specific morphology in a weight-bearing situation for different applications to a level of accuracy which was not previously possible. For example, Lazennec et al. [96] used the EOS system to measure pelvis and acetabular component orientations in sitting and standing positions. Further applications of the EOS system in planning total hip arthroplasty include accurate evaluation of femoral offset [97] and rotational alignment [98]. The low dose and biplanar information of the EOS 2-D/3-D imaging system introduce key benefits in contemporary radiology and open numerous and important perspectives in CAOS research.

    Another novel technology on 2-D/3-D imaging was introduced in [99], which had the advantage of being integrated with any conventional X-ray machine. A mean reconstruction accuracy of 1.06 ± 0.20 mm was reported. This technology has been used for conducting 3-D pre-operative planning and post-operative treatment evaluation of TKA based on only 2-D long leg standing X-ray radiographs [100].

  • Artificial intelligence, machine learning, and deep learning. Recently, artificial intelligence and machine learning-based methods have gained increasing interest in many different fields, including musculoskeletal imaging and surgical navigation. Most of these methods are based on ensemble learning principles that can aggregate predictions of multiple classifiers and demonstrate superior performance in various challenging problems [77, 101, 102]. A crucial step in the design of such systems is the extraction of discriminant features from the images [103]. In contrast, many deep learning algorithms that have been proposed recently, which are based on models (networks) composed of many layers that transform input data (e.g. images) to outputs (e.g. segmentation), let computers learn the features that optimally represent the data for the problem at hand. The most successful type of models for image analysis to date are convolutional neural networks (CNN) [104], which contain many layers that transform their input with convolution filters of a small extent. Deep learning-based methods have been successfully used to solve many challenging problems in computer-aided orthopaedic surgery [105–108]. Figure 1.10 shows an example of the application of cascaded fully convolutional networks (FCN) for automatic segmentation of lumbar vertebrae from CT images [108]; a minimal illustrative sketch of such a cascade is given after Fig. 1.10. It is expected that more and more solutions will be developed based on different types of deep learning techniques.


Fig. 1.10 A schematic view of using cascaded fully convolutional networks (FCN), which consist of a localization net and a segmentation net, for automatic segmentation of lumbar vertebrae from CT images

    1.5 Conclusions

More than two decades have passed since the first robot and navigation systems for CAOS were introduced. Today this technology has emerged from the laboratory, is being routinely used in the operating theatre, and might be about to become state of the art for certain orthopaedic procedures.

Still, we are at the beginning of a rapid process of evolution. Existing techniques are being systematically optimized, and new techniques will constantly be integrated into existing systems. Hybrid CAOS systems are under development, which will allow the surgeon to use any combination of the above-described concepts to establish virtual object information. New generations of mobile imaging systems, inherently registered, will soon be available. However, research focus should particularly be on alternative tracking technologies that remove the drawbacks of the currently available optical tracking and magnetic devices. This in turn will stimulate the development of less invasive or even non-invasive registration methods and referencing tools. Force-sensing devices and real-time computational models may allow establishing a new generation of CAOS systems by going beyond pure kinematic control of the surgical actions. For keyhole procedures there is a distinct need for smart end effectors to complement the surgeon's ability to perform a surgical action. The recent advances in smart instrumentation, medical robotics, artificial intelligence, machine learning, and deep learning techniques, in combination with big data analytics, may lead to smart CAOS systems and intelligent orthopaedics in the near future.

Acknowledgements This chapter was modified from the paper published by our group in Frontiers in Surgery (Zheng and Nolte 2016; 2:66). The related contents were reused with permission.

    References

1. WHO (2003) The burden of musculoskeletal conditions at the start of the new millennium. Report of a WHO Scientific Group. WHO Technical Report Series, 919, Geneva, 2003, pp 218. ISBN: 92-4-120919-4

2. Digioia AM 3rd, Jaramaz B, Plakseychuk AY, Moody JE Jr, Nikou C, Labarca RS, Levison TJ, Picard F (2002) Comparison of a mechanical acetabular alignment guide with computer placement of the socket. J Arthroplast 17:359–364

3. Amiot LP, Labelle H, DeGuise JA, Sati M, Brodeur P, Rivard CH (1995) Image-guided pedicle screw fixation – a feasibility study. Spine 20(10):1208–1212

4. Nolte LP, Zamorano LJ, Jiang Z, Wang Q, Langlotz F, Berlemann U (1995) Image-guided insertion of transpedicular screws. A laboratory set-up. Spine (Phila Pa 1976) 20(4):497–500

5. Mittelstadt B, Kazanzides P, Zuhars J, Williamson B, Cain P, Smith F, Bargar WL (1996) The evolution of a surgical robot from prototype to human clinical use. In: Taylor RH, Lavallée S, Burdea GC, Mösges R (eds) Computer integrated surgery. The MIT Press, Cambridge, pp 397–407

6. Martel AL, Heid O, Slomczykowski M, Kerslake R, Nolte LP (1998) Assessment of 3-dimensional magnetic resonance imaging fast low angle shot images for computer assisted spinal surgery. Comput Aided Surg 3:40–44

7. Cho HS, Park IH, Jeon IH, Kim YG, Han I, Kim HS (2011) Direct application of MR images to computer-assisted bone tumor surgery. J Orthop Sci 16:190–195

8. Jacob AL, Messmer P, Kaim A, Suhm N, Regazzoni P, Baumann B (2000) A whole-body registration-free navigation system for image-guided surgery and interventional radiology. Invest Radiol 35:279–288


9. Hofstetter R, Slomczykowski M, Bourquin Y, Nolte LP (1997) Fluoroscopy based surgical navigation: concept and clinical applications. In: Lemke HU, Vannier MW, Inamura K (eds) Computer assisted radiology and surgery. Elsevier Science, Amsterdam, pp 956–960

10. Joskowicz L, Milgrom C, Simkin A, Tockus L, Yaniv Z (1998) FRACAS: a system for computer-aided image-guided long bone fracture surgery. Comput Aided Surg 3(6):271–288

11. Foley KT, Simon DA, Rampersaud YR (2001) Virtual fluoroscopy: image-guided fluoroscopic navigation. Spine 26:347–351

12. Ritter D, Mitschke M, Graumann R (2002) Markerless navigation with the intra-operative imaging modality SIREMOBIL Iso-C3D. Electromedica 70:47–52

13. Grützner PA, Waelti H, Vock B, Hebecker A, Nolte L-P, Wentzensen A (2004) Navigation using fluoro-CT technology. Eur J Trauma 30:161–170

14. Rajasekaran S, Karthik K, Chandra VR, Rajkumar N, Dheenadhayalan J (2010) Role of intraoperative 3D C-arm-based navigation in percutaneous excision of osteoid osteoma of long bones in children. J Pediatr Orthop 19:195–200

15. Lin EL, Park DK, Whang PG, An HS, Phillips FM (2008) O-Arm surgical imaging system. Semin Spine Surg 20:209–213

16. Qureshi S, Lu Y, McAnany S, Baird E (2014) Three-dimensional intraoperative imaging modalities in orthopaedic surgery: a narrative review. J Am Acad Orthop Surg 22(12):800–809

17. Sati M, Stäubli HU, Bourquin Y, Kunz M, Nolte LP (2002) Real-time computerized in situ guidance system for ACL graft placement. Comput Aided Surg 7:25–40

18. Fleute M, Lavallée S, Julliard R (1999) Incorporating a statistically based shape model into a system for computer assisted anterior cruciate ligament surgery. Med Image Anal 3:209–222

19. Stindel E, Briard JL, Merloz P, Plaweski S, Dubrana F, Lefevre C, Troccaz J (2002) Bone morphing: 3D morphological data for total knee arthroplasty. Comput Aided Surg 7:156–168

20. Zheng G, Dong X, Rajamani KT, Zhang X, Styner M, Thoranaghatte RU, Nolte L-P, Ballester MA (2007) Accurate and robust reconstruction of a surface model of the proximal femur from sparse-point data and a dense-point distribution model for surgical navigation. IEEE Trans Biomed Eng 54:2109–2122

21. Zheng G, Kowal J, Gonzalez Ballester MA, Caversaccio M, Nolte L-P (2007) Registration technique for computer navigation. Curr Orthop 21:170–179

22. Lavallée S (1996) Registration for computer-integrated surgery: methodology, state of the art. In: Taylor RH, Lavallée S, Burdea GC, Mösges R (eds) Computer integrated surgery. The MIT Press, Cambridge, pp 77–97

23. Bargar WL, Bauer A, Börner M (1998) Primary and revision total hip replacement using the Robodoc system. Clin Orthop 354:82–91

24. Nogler M, Maurer H, Wimmer C, Gegenhuber C, Bach C, Krismer M (2001) Knee pain caused by a fiducial marker in the medial femoral condyle: a clinical and anatomic study of 20 cases. Acta Orthop Scand 72:477–480

25. Besl PJ, McKay ND (1992) A method for registration of 3-D shapes. IEEE Trans Pattern Anal 14(2):239–256

26. Baechler R, Bunke H, Nolte L-P (2001) Restricted surface matching – numerical optimization and technical evaluation. Comput Aid Surg 6:143–152

27. Maurer CR, Gaston RP, Hill DLG, Gleeson MJ, Taylor MG, Fenlon MR, Edwards PJ, Hawkes DJ (1999) AcouStick: a tracked A-mode ultrasonography system for registration in image-guided surgery. In: Taylor C, Colchester A (eds) Medical image computing and image-guided intervention – MICCAI'99. Springer, Berlin, pp 953–962

28. Oszwald M, Citak M, Kendoff D, Kowal J, Amstutz C, Kirchhoff T, Nolte L-P, Krettek C, Hüfner T (2008) Accuracy of navigated surgery of the pelvis after surface matching with an A-mode ultrasound probe. J Orthop Res 26:860–864

29. Kowal J, Amstutz C, Langlotz F, Talib H, Gonzalez Ballester MA (2007) Automated bone contour detection in ultrasound B-mode images for minimally invasive registration in image-guided surgery – an in vitro evaluation. Int J Med Rob Comput Assisted Surg 3:341–348

30. Schumann S, Nolte L-P, Zheng G (2012) Compensation of sound speed deviations in 3D B-mode ultrasound for intraoperative determination of the anterior pelvic plane. IEEE Trans Inf Technol Biomed 16(1):88–97

31. Wein W, Karamalis A, Baumgarthner A, Navab N (2015) Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery. Int J Comput Assist Radiol Surg 10(6):971–979

32. Radermacher K, Portheine F, Anton M et al (1998) Computer assisted orthopaedic surgery with image based individual templates. Clin Orthop Relat Res 354:28–38

33. Hafez MA, Chelule KL, Seedhom BB, Sherman KP (2006) Computer-assisted total knee arthroplasty using patient-specific templating. Clin Orthop Relat Res 444:184–192

34. Kunz M, Rudan JF, Xenoyannis GL, Ellis RE (2010) Computer-assisted hip resurfacing using individualized drill templates. J Arthroplast 25:600–606

35. Shandiz MA, MacKenzie JR, Hunt S, Anglin C (2014 Sept) Accuracy of an adjustable patient-specific guide for acetabular alignment in hip replacement surgery (Optihip). Proc Inst Mech Eng H 228(9):876–889


36. Honl M, Dierk O, Gauck C, Carrero V, Lampe F, Dries S, Quante M, Schwieger K, Hille E, Morlock MM (2003) Comparison of robotic-assisted and manual implantation of a primary total hip replacement. A prospective study. J Bone Joint Surg 85-A(8):1470–1478

37. Oszwald M, Ruan Z, Westphal R, O'Loughlin PF, Kendoff D, Hüfner T, Wahl F, Krettek C, Gosling T (2008) A rat model for evaluating physiological responses to femoral shaft fracture reduction using a surgical robot. J Orthop Res 26:1656–1659

38. Oszwald M, Westphal R, Bredow J, Calafi A, Hüfner T, Wahl F, Krettek C, Gosling T (2010) Robot-assisted fracture reduction using three-dimensional intraoperative fracture visualization: an experimental study on human cadaver femora. J Orthop Res 28:1240–1244

39. Jaramaz B, Nikou C (2012) Precision freehand sculpting for unicondylar knee replacement: design and experimental validation. Biomed Tech 57(4):293–299

40. Conditt MA, Roche MW (2009) Minimally invasive robotic-arm-guided unicompartmental knee arthroplasty. J Bone Joint Surg 91(Suppl 1):63–68

41. Ritschl P, Machacek F, Fuiko R (2003) Computer assisted ligament balancing in TKR using the Galileo system. In: Langlotz F, Davies BL, Bauer A (eds) Computer assisted orthopaedic surgery – 3rd annual meeting of CAOS-International (Proceedings). Steinkopff, Darmstadt, pp 304–305

42. Shoham M, Burman M, Zehavi E, Joskowicz L, Batkilin E, Kunicher Y (2003) Bone-mounted miniature robot for surgical procedures: concept and clinical applications. IEEE Trans Rob Autom 19:893–901

43. de Siebenthal J, Gruetzner PA, Zimolong A, Rohrer U, Langlotz F (2004) Assessment of video tracking usability for training simulators. Comput Aided Surg 9:59–69

44. Clarke JV, Deakin AH, Nicol AC, Picard F (2010) Measuring the positional accuracy of computer assisted surgical tracking systems. Comput Aided Surg 15:13–18

45. Meskers CG, Fraterman H, van der Helm FC, Vermeulen HM, Rozing PM (1999) Calibration of the "Flock of Birds" electromagnetic tracking device and its application in shoulder motion studies. J Biomech 32:629–633

46. Wagner A, Schicho K, Birkfellner W, Figl M, Seemann R, Konig F, Kainberger F, Ewers R (2002) Quantitative analysis of factors affecting intraoperative precision and stability of optoelectronic and electromagnetic tracking systems. Med Phys 29:905–912

47. Nam D, Cody EA, Nguyen JT, Figgie MP, Mayman DJ (2014) Extramedullary guides versus portable, accelerometer-based navigation for tibial alignment in total knee arthroplasty: a randomized, controlled trial: winner of the 2013 Hap Paul Award. J Arthroplast 29(2):288–294

48. Huang EH, Copp SN, Bugbee WD (2015) Accuracy of a handheld accelerometer-based navigation system for femoral and tibial resection in total knee arthroplasty. J Arthroplast 30(11):1906–1910

49. Walti J, Jost GF, Cattin PC (2014) A new cost-effective approach to pedicular screw placement. In: AE-CAI 2014, LNCS 8678. Springer, Heidelberg, pp 90–97

50. Pflugi S, Liu L, Ecker TM, Schumann S, Cullmann JL, Siebenrock K, Zheng G (2016) A cost-effective surgical navigation solution for periacetabular osteotomy (PAO) surgery. Int J Comput Assist Radiol Surg 11(2):271–280

51. Pflugi S, Vasireddy R, Lerch T, Ecker TM, Tannast T, Boemake N, Siebenrock K, Zheng G (2018) A cost-effective surgical navigation solution for periacetabular osteotomy (PAO) surgery. Int J Comput Assist Radiol Surg 13(2):291–304

52. Nolte LP, Visarius H, Arm E, Langlotz F, Schwarzenbach O, Zamorano L (1995) Computer-aided fixation of spinal implants. J Imag Guid Surg 1:88–93

53. Foley KT, Smith MM (1996) Image-guided spine surgery. Neurosurg Clin N Am 7:171–186

54. Glossop ND, Hu RW, Randle JA (1996) Computer-aided pedicle screw placement using frameless stereotaxis. Spine 21:2026–2034

55. Kalfas IH, Kormos DW, Murphy MA, McKenzie RL, Barnett GH, Bell GR, Steiner CP, Trimble MB, Weisenberger JP (1995) Application of frameless stereotaxy to pedicle screw fixation of the spine. J Neurosurg 83:641–647

56. Merloz P, Tonetti J, Pittet L, Coulomb M, Lavallée S, Sautot P (1998) Pedicle screw placement using image guided techniques. Clin Orthop 354:39–48

57. Amiot LP, Lang K, Putzier M, Zippel H, Labelle H (2000) Comparative results between conventional and image-guided pedicle screw installation in the thoracic, lumbar, and sacral spine. Spine 25:606–614

58. Laine T, Lund T, Ylikoski M, Lohikoski J, Schlenzka D (2000) Accuracy of pedicle screw insertion with and without computer assistance: a randomised controlled clinical study in 100 consecutive patients. Eur Spine J 9:235–240

59. Schwarzenbach O, Berlemann U, Jost B, Visarius H, Arm E, Langlotz F, Nolte LP, Ozdoba C (1997) Accuracy of image-guided pedicle screw placement. An in vivo computed tomography analysis. Spine 22:452–458

60. Digioia AM 3rd, Simon DA, Jaramaz B et al (1999) HipNav: pre-operative planning and intra-operative navigational guidance for acetabular implant placement in total hip replacement surgery. In: Nolte LP, Ganz E (eds) Computer Assisted Orthopaedic Surgery (CAOS). Hogrefe & Huber, Seattle, pp 134–140

61. Croitoru H, Ellis RE, Prihar R, Small CF, Pichora DR (2001) Fixation based surgery: a new technique for distal radius osteotomy. Comput Aided Surg 6:160–169


62. Siebert W, Mai S, Kober R, Heeckt PF (2002) Technique and first clinical results of robot-assisted total knee replacement. Knee 9:173–180

63. Delp SL, Stulberg SD, Davies B, Picard F, Leitner F (1998) Computer assisted knee replacement. Clin Orthop 354:49–56

64. Dessenne V, Lavallée S, Julliard R, Orti R, Martelli S, Cinquin P (1995) Computer assisted knee anterior cruciate ligament reconstruction: first clinical tests. J Image Guid Surg 1:59–64

65. Nolte LP, Slomczykowski MA, Berlemann U, Strauss MJ, Hofstetter R, Schlenzka D, Laine T, Lund T (2000) A new approach to computer-aided spine surgery: fluoroscopy-based surgical navigation. Eur Spine J 9:S78–S88

66. Zheng G, Marx A, Langlotz U, Widmer KH, Buttaro M, Nolte LP (2002) A hybrid CT-free navigation system for total hip arthroplasty. Comput Aided Surg 7:129–145

67. Suhm N, Jacob AL, Nolte LP, Regazzoni P, Messmer P (2000) Surgical navigation based on fluoroscopy – clinical application for image-guided distal locking of intramedullary implants. Comput Aided Surg 5:391–400

68. Sadoghi P (2015) Current concepts in total knee arthroplasty: patient specific instrumentation. World J Orthop 6(6):446–448

69. Camarda L, D'Arienzo A, Morello S, Peri G, Valentino B, D'Arienzo M (2015 Apr) Patient-specific instrumentation for total knee arthroplasty: a literature review. Musculoskelet Surg 99(1):11–18

70. Olsen M, Naudie DD, Edwards MR, Sellan ME, McCalden RW, Schemitsch EH (2014 Mar) Evaluation of a patient specific femoral alignment guide for hip resurfacing. J Arthroplasty 29(3):590–595

71. Cartiaux O, Paul L, Francq BG, Banse X, Docquier PL (2014) Improved accuracy with 3D planning and patient-specific instruments during simulated pelvic bone tumor surgery. Ann Biomed Eng 42(1):205–213

72. Personal communication with Prof. Dr. K. Siebenrock, Inselspital, University of Bern

73. Rahmathulla G, Nottmeier E, Pirris S, Deen H, Pichelmann M (2014) Intraoperative image-guided spinal navigation: technical pitfalls and their avoidance. Neurosurg Focus 36(3):E3

74. Wang L, Traub J, Weidert S, Heining SM, Euler E, Navab N (2010) Parallax-free intra-operative x-ray image stitching. Med Image Anal 14(5):674–686

75. Chen C, Kojcev R, Haschtmann D, Fekete T, Nolte L, Zheng G (2015) Ruler based automatic C-arm image stitching without overlapping constraint. J Digit Imaging 28(4):474–480

76. Chang J, Zhou L, Wang S, Clifford Chao KS (2012) Panoramic cone beam computed tomography. Med Phys 39(5):2930–2946

77. Chen C, Belavy D, Yu W, Chu C, Armbrecht G, Bansmann M, Felsenberg D, Zheng G (2015 Aug) Localization and segmentation of 3D intervertebral discs in MR images by data driven estimation. IEEE Trans Med Imaging 34(8):1719–1729

78. Zheng G, Li S, Székely G (2017) Statistical shape and deformation analysis: methods, implementation and applications. Elsevier, London

79. Zheng G, Gollmer S, Schumann S, Dong X, Feilkas T, González Ballester MA (2009 Dec) A 2D/3D correspondence building method for reconstruction of a patient-specific 3D bone surface model using point distribution models and calibrated X-ray images. Med Image Anal 13(6):883–899

80. Yu W, Tannast M, Zheng G (2017) Non-rigid free-form 2D-3D registration using B-spline-based statistical deformation model. Pattern Recogn 63:689–699

81. Driscoll M, Mac-Thiong JM, Labelle H, Parent S (2013) Development of a detailed volumetric finite element model of the spine to simulate surgical correction of spinal deformities. Biomed Res Int 2013:931741

82. Majdouline Y, Aubin CE, Wang X, Sangole A, Labelle H (2012 Nov 26) Preoperative assessment and evaluation of instrumentation strategies for the treatment of adolescent idiopathic scoliosis: computer simulation and optimization. Scoliosis 7(1):21

83. Murphy RJ, Armiger RS, Lepistö J, Mears SC, Taylor RH, Armand M (2015 Apr) Development of a biomechanical guidance system for periacetabular osteotomy. Int J Comput Assist Radiol Surg 10(4):497–508

84. Crottet D, Kowal J, Sarfert SA, Maeder T, Bleuler H, Nolte LP, Dürselen L (2007) Ligament balancing in TKA: evaluation of a force-sensing device and the influence of patellar eversion and ligament release. J Biomech 40(8):1709–1715

85. De Keyser W, Beckers L (2010 Dec) Influence of patellar subluxation on ligament balancing in total knee arthroplasty through a subvastus approach. An in vivo study. Acta Orthop Belg 76(6):799–805

86. de Steiger RN, Liu YL, Graves SE (2015 Apr 15) Computer navigation for total knee arthroplasty reduces revision rate for patients less than sixty-five years of age. J Bone Joint Surg Am 97(8):635–642

87. Jolesz FA (2014) Introduction. In: Jolesz FA (ed) Intraoperative imaging and image-guided therapy. Springer, London, pp 1–23

88. Dubousset J, Charpak G, Skalli W, Deguise J, Kalifa G (2010) EOS: a new imaging system with low dose radiation in standing position for spine and bone & joint disorders. J Musculoskelet Res 13:1–12

89. Wybier M, Bossard P (2013 May) Musculoskeletal imaging in progress: the EOS imaging system. Joint Bone Spine 80(3):238–243

90. Illés T, Somoskeöy S (2012) The EOS imaging system and its use in daily orthopaedic practice. Int Orthop 36:1325–1331


91. Wade R, Yang H, McKenna C et al (2013) A systematic review of the clinical effectiveness of EOS 2D/3D x-ray imaging system. Eur Spine J 22:296–304

92. Deschenes S, Charron G, Beaudoin G et al (2010) Diagnostic imaging of spinal deformities – reducing patients radiation dose with a new slot-scanning x-ray imager. Spine 35:989–994

93. Langlois K, Pillet H, Lavaste F, Rochcongar G, Rouch P, Thoreux P, Skalli W (2015 Oct) Assessing the accuracy and precision of manual registration of both femur and tibia using EOS imaging system with multiple views. Comput Methods Biomech Biomed Eng 18(Suppl 1):1972–1973

94. Ferrero E, Lafage R, Challier V, Diebo B, Guigui P, Mazda K, Schwab F, Skalli W, Lafage V (2015 Sept) Clinical and stereoradiographic analysis of adult spinal deformity with and without rotatory subluxation. Orthop Traumatol Surg Res 101(5):613–618

95. Glaser DA, Doan J, Newton PO (2012) Comparison of 3-dimensional spinal reconstruction accuracy. Spine 37:1391–1397

96. Lazennec JY, Rousseau MA, Rangel A, Gorin M, Belicourt C, Brusson A, Catonne Y (2011) Pelvis and total hip arthroplasty acetabular component orientation in sitting and standing positions: measurements reproductibility with EOS imaging system versus conventional radiographies. Orthop Traumatol Surg Res 97:373–380

97. Lazennec JY, Brusson A, Dominique F, Rousseau MA, Pour AE (2015) Offset and anteversion reconstruction after cemented and uncemented total hip arthroplasty: an evaluation with the low-dose EOS system comparing two- and three-dimensional imaging. Int Orthop 39(7):1259–1267

98. Folinais D, Thelen P, Delin C, Radier C, Catonne Y, Lazennec JY (2011) Measuring femoral and rotational alignment: EOS system versus computed tomography. Orthop Traumatol Surg Res 99:509–516

99. Zheng G, Schumann S, Alcoltekin A, Jaramaz B, Nolte L-P (2016) Patient-specific 3D reconstruction of a complete lower extremity from 2D X-rays. In: Proceedings of MIAR 2016, LNCS 9805. Springer, Heidelberg, pp 404–414

100. Hommel H, Alcoltekin A, Thelen B, Stifter J, Schwägli T, Zheng G (2017) 3X-Plan: a novel technology for 3D prosthesis planning using 2D X-ray radiographs. Proc CAOS 2017:93–95

101. Glocker B, Feulner J, Criminisi A, Haynor DR, Konukoglu E (2012) Automatic localization and identification of vertebrae in arbitrary field-of-view CT scans. In: Proceedings of MICCAI 2012; 15(Pt 3). Springer, Heidelberg, pp 590–598

102. Liu Q, Wang Q, Zhang L, Gao Y, Sheng D (2015) Multi-atlas context forests for knee MR image segmentation. MLMI@MICCAI 2015:186–193

103. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60–88

104. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, Curran Associates, Inc., 2012, 1097–1105

105. Prasoon A, Petersen K, Igel C, Lauze F, Dam E, Nielsen M (2013) Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. MICCAI 2013 16(Pt 2):246–253

106. Zeng G, Yang X, Li J, Yu L, Heng P-A, Zheng G (2017) 3D U-net with multi-level deep supervision: fully automatic segmentation of proximal femur in 3D MR images. MLMI@MICCAI 2017:274–282

107. Li X, Dou Q, Chen H, Fu CW, Qi X, Belavý DL, Armbrecht G, Felsenberg D, Zheng G, Heng PA (2018) 3D multi-scale FCN with random modality voxel dropout learning for intervertebral disc localization and segmentation from multi-modality MR images. Med Image Anal 45:41–54

108. Janssens R, Zeng G, Zheng G (2017) Fully automatic segmentation of lumbar vertebrae from CT images using cascaded 3D fully convolutional networks. arXiv:1712.01509

2 Computer-Aided Orthopedic Surgery: Incremental Shift or Paradigm Change?

    Leo Joskowicz and Eric J. Hazan

    Abstract

Computer-aided orthopedic surgery (CAOS) is now about 25 years old. Unlike neurosurgery, computer-aided surgery has not become the standard of care in orthopedic surgery. In this paper, we provide the technical and clinical context raised by this observation in an attempt to elucidate the reasons for this state of affairs. We start with a brief outline of the history of CAOS, review the main CAOS technologies, and describe how they are evaluated. We then identify some of the current publications in the field and present the opposing views on their clinical impact and their acceptance by the orthopedic community worldwide. We focus on total knee replacement surgery as a case study and present current clinical results and contrasting opinions on CAOS technologies. We then discuss the challenges and opportunities for research in medical image analysis in CAOS and in musculoskeletal radiology. We conclude with a suggestion that while CAOS acceptance may be more moderate than that of other fields in surgery, it still has a place in the arsenal of useful tools available to orthopedic surgeons.

L. Joskowicz
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
e-mail: [email protected]

E. J. Hazan
Traumatology and Emergency Departments, Instituto Nacional de Rehabilitacion, Mexico City, Mexico

    Keywords

Computer-aided orthopedic surgery · Image-guided surgery · Medical robotics

    2.1 Introduction

Computer-based technologies, including both software and hardware, are playing an increasingly larger and more important role in defining how surgery is performed today. Orthopedic surgery was, together with neurosurgery, the first clinical specialty for which image-guided navigation and robotic systems were developed. Computer-aided orthopedic surgery (CAOS) is now about 25 years old. During this time, a wide variety of novel and ingenious systems have been proposed, prototyped, and commercialized for most of the main orthopedic surgery procedures, including knee and hip joint replacement, cruciate ligament surgery,
spine surgery, corrective osteotomy, bone tumor surgery, and trauma surgery, among others.

While CAOS technologies are nowadays visible and known to many orthopedic surgeons worldwide, their adoption has been relatively slow, especially when compared to other technologies such as robotic minimally invasive surgery (daVinci Surgical System, Intuitive Surgical). This raises a number of questions, e.g., What are the known clinical benefits of CAOS technologies? Why has CAOS been a progressive technology and not a disruptive one? Has CAOS led to a paradigm change in some of the orthopedic surgery procedures? What is the future of CAOS? What role has medical image analysis played in CAOS and what is its future?

In this paper, we present a personal perspective on the key aspects of CAOS in an attempt to answer these questions. We start with a brief history of CAOS from its beginnings, emergence, expansion, and steady progress phases. We then outline the main CAOS technologies and describe how they are evaluated. Next, we summarize the current views on their clinical impact and their acceptance by the orthopedic community worldwide. We focus on total knee replacement surgery as a case study and present the clinical results and contrasting opinions on CAOS technologies. We then discuss the challenges and opportunities for research in medical image analysis in CAOS and in musculoskeletal radiology and conclude with an observation: while CAOS acceptance may be more moderate than that of other fields in surgery, it still has a place in the arsenal of useful tools available to orthopedic surgeons.

    2.2 A Brief History of CAOS

CAOS started over 25 years ago, with the introduction of four key technologies: 3D bone surface modeling from CT scans, surgical robotics, real-time surgical navigation, and, later, patient-specific templates. The main CAOS concepts and technical elements emerged in the mid- to late 1990s; the first clinical results started to appear in the clinical literature in the late 1990s. The International Society for Computer Assisted Orthopaedic Surgery was established in 2000 and has held yearly meetings since. The early and mid-2000s witnessed a rise in the introduction of commercial systems and the publication of small- and medium-sized clinical studies. The late 2000s to date featured a slow consolidation period, with larger and more specific comparative clinical studies, multicenter studies, and meta-studies. It also featured mature image processing and surgical planning software, image-based navigation systems, robotic systems, and routine patient-specific guide design and related production services.

Bone modeling from CT scans stemmed from 3D segmentation and surface mesh construction methods such as the Marching Cubes algorithm introduced by Lorensen and Cline [1] in the late 1980s. A variety of segmentation methods and mesh smoothing and simplification methods were developed in the early 1990s. These patient-specific anatomical models are essential for preoperative planning, intraoperative registration, visualization, navigation, and postoperative evaluation.
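As a concrete illustration of this modeling step, the short Python sketch below extracts a triangulated bone surface from a CT volume using the Marching Cubes implementation available in scikit-image; the input file name and the 300 HU bone threshold are illustrative assumptions, not values taken from the cited work:

import numpy as np
from skimage import measure

# Load a CT volume as a 3-D array of Hounsfield units (z, y, x); the file name is a placeholder.
ct = np.load("ct_volume.npy")

# A simple global threshold stands in for a real bone segmentation step;
# cortical bone typically lies well above ~300 HU.
level = 300.0

# Marching Cubes returns vertices, triangular faces, per-vertex normals and values.
verts, faces, normals, values = measure.marching_cubes(
    ct, level=level, spacing=(1.0, 1.0, 1.0))  # spacing = voxel size in mm

print(f"Surface mesh: {verts.shape[0]} vertices, {faces.shape[0]} triangles")

The resulting vertex and face arrays can then be smoothed, simplified, and passed to the planning, registration, and visualization components mentioned above.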

The first robotic system in orthopedics was ROBODOC, a customized industrial active robot designed for total hip replacement (THR) to optimize the bone/implant interface by machining the implant cavity [2]. ROBODOC development started in the late 1980s at the IBM T.J. Watson Research Center and at the University of California at Davis; it was first used for human surgery in 1992 and became a commercial product in 1995 (developed by Integrated Surgical Systems and owned since 2008 by Curexo Technology Corp.). The system includes a preoperative planning module that allows surgeons to select the size and position of the acetabular cup and femoral stem based on automatically built 3D surface models of the pelvis and hip joint bone from a preoperative CT scan. Based on this plan, it automatically generates a specific machining plan for the femoral stem cavity, which is then executed during surgery after pin-based contact registration between the patient and the plan. The