
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

IEEE SYSTEMS JOURNAL

A Virtual Reality Enhanced Cyber-Human Framework for Orthopedic Surgical Training

Avinash Gupta, Student Member, IEEE, J. Cecil, Senior Member, IEEE, Miguel Pirela-Cruz, and Parmesh Ramanathan, Fellow, IEEE

Abstract—This paper discusses the adoption of information-centric systems engineering (ICSE) principles to design a cyber-human systems-based simulator framework to train orthopedic surgery medical residents using haptic and immersive virtual reality platforms; the surgical procedure of interest is a less invasive stabilization system plating surgery that is used to treat fractures of the femur. Developing such training systems is a complex task involving multiple systems, technologies, and human experts. The information-centric approach proposed provides a structured foundation to plan, design, and build the simulators using the ICSE approach; in addition, the information models of the surgical processes were built to capture the surgical complexities and relationships between the various systems/components in the simulator framework, along with the controlling factors, performing mechanisms, and decision outcomes at various levels of abstraction. The simulator platforms include a haptic-based training system and a fully immersive training system for six training environments. Next-generation networking principles were adopted to support the collaborative training activities within this framework. As part of the proposed approach, expert surgeons played an important role in the design of the training environments. The outcomes of the learning assessment conducted demonstrate the effectiveness of using such simulator-based cyber-human training frameworks.

Index Terms—Information modeling, next-generation internet technologies, orthopedic surgery, simulation-based training, systems engineering (SE), virtual reality (VR).

I. INTRODUCTION

IN RECENT years, the use of virtual reality (VR)-based simulators for training surgeons and residents in medical universities is becoming more widespread. Traditional methods of surgical training include residents training using cadavers, animals, and synthetic mockups [17]. These traditional methods have some major drawbacks [25], [26].

Other current approaches involve residents observing the surgery performed by an expert surgeon and then slowly progressing to assisting in surgeries.

Manuscript received March 23, 2018; revised May 22, 2018 and December 31, 2018; accepted January 13, 2019. This work was supported by the National Science Foundation (NSF) under Grant CNS 1257803. One of the undergraduate students who contributed to the creation of virtual training environments was funded through NSF Research Experiences for Undergraduates Site Grant NSF 1359297. (Corresponding author: J. Cecil.)

A. Gupta and J. Cecil are with the Center for Cyber Physical Systems, Computer Science Department, Oklahoma State University, Stillwater, OK 74078 USA (e-mail: [email protected]; [email protected]).

M. Pirela-Cruz is with the Department of Orthopedic Surgery, Texas Tech University Health Sciences Center, El Paso, TX 79905 USA (e-mail: [email protected]).

P. Ramanathan is with the University of Wisconsin-Madison, Madison, WI 53706 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/JSYST.2019.2896061

VR-based simulation environments can help address these drawbacks while serving as a platform for supplementing these current training approaches. Moreover, network-based simulators can provide on-demand (24/7) access to training, which can be beneficial for medical students and residents. Interest in training surgeons with alternative methods such as VR-based environments has been increasing over recent decades [38], [41]. The American Board of Orthopedic Surgery has also approved the use of simulation-based training to improve the surgical skills of residents [42].

This paper discusses the adoption of information-centric systems engineering (ICSE) principles to design a cyber-human systems-based simulator framework to train orthopedic surgery medical residents using haptic and immersive VR platforms. In general, systems engineering (SE) emphasizes an interdisciplinary process [5] to ensure that customer and stakeholder needs are satisfied in a high-quality and schedule-compliant manner throughout a system's entire life cycle [69]. SE also focuses on the formulation, analysis, and interpretation of the various phases of a system's life cycle [80]. SE can be applied to any system, complex or otherwise, using a systems thinking approach [81]. SE has also been used to validate systems and interactions within systems [2]–[4]. The SE approach can be top down, bottom up, or middle out [74]. In the top-down approach, a top-level process is decomposed into various levels of abstraction down to the implementation level [74]. The SE approach can be tailored to fit the scope and complexity of a given project. Some of the principles of SE are as follows [8].

1) Meet the needs of the customer and involve the customer in the design process.

2) A multidisciplinary team is involved in a complex project; SE serves to tie the teams together.

3) Form the requirements and plan accordingly.

4) Use-case- and requirements-driven design of the system.

5) Focus on quality in the design phase.

6) Establish continuous improvements.

This paper focuses on, but is not limited to, the practice of the aforementioned principles. The customer is placed at the center of the project from the beginning. Software engineering principles are applied to bring together the multidisciplinary team consisting of engineers, computer scientists, and expert surgeons through the information models presented in this paper. Information-centric modeling has also been used to understand the requirements and to plan the building of the simulator accordingly. Both requirements-driven design and use-case-driven design were utilized. Requirements-driven design provided a general outline for the simulator, whereas use-case-driven design provided process-level design, which is essential to ensure that the details of the process were reflected in the simulator. This is a unique approach, and only a few researchers have tried to bridge the gap between requirements-driven design and use-case-driven design [83]. Moreover, validation and interactions have been performed throughout the project with expert surgeons, residents, and medical students to improve the simulator continuously.

A review of related research in areas such as VR simulation for medical applications, collaborative simulators, SE, and information-centric engineering follows.

A. VR Simulation for Medical Applications

Various VR-based simulators have been reported in a range of surgical fields such as laparoscopic surgery and heart surgery, among others [32]–[40]; VR simulation-based approaches have also been reported for orthopedic surgery [1]–[3], [5], [7], [16]. Haptic-based technologies allow a user to experience the sense of touch when interacting with a simulation environment; this has been investigated by several researchers in the context of medical surgical training [4], [6], [53], [56]. Researchers have also compared the impact of learning on VR simulators with other techniques such as video-based training [54], [55]. Immersive systems such as the Vive and Oculus Rift have been used in medical training in recent years [49]–[51]. Collaborative virtual environments enable distributed users to interact with each other through the internet [17]–[19], [43].

B. Assessment of Learning Interactions

Researchers have studied the impact of using VR simulators for medical training by conducting pretests and posttests with participants [52], [62], [63].

C. Systems and Information-Centric Engineering

SE has been utilized in diverse fields such as the food industry [8], aeronautics [15], and healthcare [47]. In the field of healthcare and medicine, an event-based model of acute care hospitals has been presented in [47] and [60] with the objective of analyzing decision policies for patient transfer and discharge, bed allocation, and staff scheduling using Petri net models. Other researchers have used the ICSE approach in the design of complex simulation environments and software systems for domains such as manufacturing [10], [71], [77], surgery [78], and fixture design [11], [79]–[81], among others.

Some researchers have investigated model-based systems engineering (MBSE) approaches [76] for domains such as naval ship design [72] and complex aerospace systems [73]; there has been no reported adoption of advanced model-based approaches for medical simulator design and development activities. Some of the preliminary research includes the study of information modeling using a workflow integration matrix in the design of surgical information systems [45]. In [46], Jannin highlighted the need for process models in computer-assisted surgery, including the definition of a surgical ontology and the development of methods for automatic recognition of a surgeon's activities. In [44], a hierarchical task analysis (HTA)-based approach was proposed to analyze an endoscopic surgical process; however, the HTA and other approaches do not model or capture key attributes such as information or physical inputs and constraints; modeling such attributes is necessary to obtain a better understanding of functional and process relationships, which in turn enables a stronger foundation that is necessary to build a simulator-based training environment or system.

The overall approach adopted in the creation of this less invasive stabilization system (LISS) plating surgical simulator outlined in this paper can be described as an ICSE approach. An information-centric process model has been developed to plan, build, and validate the surgical simulator developed.

Our prior work in modeling other process design activities underscored the benefits of adopting the ICSE approach to developing software-based complex systems [13]. Our interest in the SE-based approach was primarily to reduce the overall costs and time involved in developing complex VR-based medical simulators while providing a structured foundation and basis to develop such simulators. The use of information modeling approaches as well as SE principles to build complex simulation environments has been explored in various engineering applications [12], [13]. In [61], an information model of fixture design activities was first built and subsequently used as a basis to design and build an automated fixture design system.

The engineering enterprise modeling language (eEML) was used to support the adoption of this information-centric approach. For information modeling, there are a variety of modeling languages and methodologies, including the integrated computer aided manufacturing (ICAM) DEFinition (IDEF) suite of methods [9], the systems modeling language (SysML) [15], and the unified modeling language (UML) [22], [23], among others. Any of these modeling languages that can portray the temporal and functional relationships between the entities of a process can be used to develop the information-centric models. eEML was chosen for designing the simulators as it was developed primarily to help facilitate a model-based approach to the complex process of developing software systems for simulation applications [48], [82].

Based on the literature survey, the following voids were identified.

1) Past research has not focused on the ICSE approach in the development of medical simulators.

2) There has been less emphasis on the adoption of next-generation technologies to support collaborative simulation activities in medical simulations.

Based on these discussions as well as the identified voids, the following issues are addressed in this paper.

1) Investigate the adoption of ICSE-based approaches to designing a surgical simulator-based framework for orthopedic surgery. Our ICSE-based approach recognizes the importance of understanding the surgical processes through close interactions with expert surgeons prior to the design or development of such systems; such a perspective enables a better understanding of the complex relationships between the various systems involved in the training process. By developing information models and focusing on key inputs, outcomes, resources, and decision outcomes, a structured base for the design and development of the simulators is achieved.

2) Investigate the potential of next-generation internet technologies for supporting collaborative training using medical simulators. The potential of adopting Global Environment for Network Innovations (GENI) and software-defined networking (SDN) principles to support such collaborative training using simulators is also addressed in this research.

3) Assess the impact of using such medical simulators in surgical training (both haptic and immersive technologies) as well as compare their effectiveness (haptic versus immersive simulators) in helping medical residents understand the target orthopedic process (LISS plating surgery).


The simulator framework developed has explored two technology-based platforms and systems involving a) haptic-based technology and b) immersive VR-based technologies. The haptic interactive training system (HITS) is composed of two training platforms: the first supports a stand-alone training mode, where a resident can access and use the haptic training system individually, and the second is a collaborative training mode, where an expert surgeon can interact with several medical residents and help them learn the surgical steps. The fully immersive training system (FITS) uses immersive VR technology. FITS was developed using the HTC Vive platform.

The preliminary versions of some of the components of this simulator framework have been discussed in prior publications [66]–[68]. In [67], the focus was on the functioning of the stand-alone haptic-based simulator without support for collaborative interactions between distributed users. In [68], the emphasis of the discussion was the simulator's capabilities using haptic technologies (involving nonimmersive simulation environments); these earlier implementations have been expanded in this paper to the design of more advanced and fully immersive VR simulation environments.

In Section II, a discussion of the process of designing and building the surgical environments is provided. The role of the proposed ICSE approach is also elaborated in this section. In Section III, the architecture of the surgical environments is detailed, including descriptions of the stand-alone and collaborative simulators. In Section IV, a detailed description of the learning assessment process, system, and outcomes is presented.

II. ICSE APPROACH TO DESIGN, PLAN, AND BUILD THE TRAINING SIMULATORS

Fig. 1 shows the top-level view of the eEML model used to support the ICSE approach, which was used to plan, design, and build the surgical simulator. A related eEML model was discussed in [48], which focused solely on the development of the haptic-based simulation system for the same LISS plating surgical process. As the scope of the simulator discussed in this paper includes building both the haptic and immersive simulators, the ICSE approach dealt with the activities involving both these simulators. The process of designing and building this simulator was divided into six major phases (as shown in Fig. 1).

Several decompositions of each of these phases were also created (only the top-level model diagram is shown in this paper for brevity). The team involved in the development of the simulators was multidisciplinary, comprising engineers, IT experts, and surgeons. The ICSE model consisted of functional entities and associated attributes relevant to the identified phases. Each entity Ei corresponds to a functional phase; e.g., activity E1 is "Understand user requirements." As indicated earlier (a–d), for each Ei, there are four categories of information attributes: the influencing criteria (IC), the performing agents (PAs), the decision outcomes (DOs), and the task effectors. Using these attributes, a target set of phases can be modeled, studied, and analyzed at various levels of abstraction along with key relationships among these phases. IC can be categorized as information inputs (IIs) and constraints (CO), which directly impact the accomplishment of the target phase (being modeled). Constraints can be viewed as controlling factors influencing the phase being modeled. The IIs are the information attributes that are required (and can be viewed as process drivers) to accomplish the target phase being modeled. The PAs refer to the software, personnel, and/or machine/tool agents that perform the identified phases. DOs can be grouped under information and physical objects and encompass the information or physical outcomes (respectively) of the phases performed (or modeled using eEML). The task effectors indicate the flow of activity accomplishment in either a synchronous or asynchronous manner (using logical AND/OR relationships). A brief description of the six phases involved in building the simulators follows.
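To make the eEML attribute categories concrete, the following is a minimal, illustrative C# sketch of how an activity entity and its attributes could be represented in software; the type and member names are ours and are not part of the eEML specification or the actual simulator code.

```csharp
using System.Collections.Generic;

// Flow relationship used by a task effector (AND = all successors, OR = alternative paths).
public enum JunctionType { And, Or, AsynchronousOr }

// One eEML activity/phase (e.g., E1 "Understand user requirements").
public class EemlActivity
{
    public string Id;    // e.g., "E1"
    public string Name;  // e.g., "Understand user requirements"

    // Influencing criteria (IC) split into information inputs and constraints.
    public List<string> InformationInputs = new List<string>();  // IIs: process drivers
    public List<string> Constraints = new List<string>();        // COs: controlling factors

    // Performing agents (PAs): software, personnel, or machine/tool agents.
    public List<string> PerformingAgents = new List<string>();

    // Decision outcomes (DOs): information or physical outputs of the phase.
    public List<string> DecisionOutcomes = new List<string>();

    // Task effector: how control flows to successor activities.
    public JunctionType Junction = JunctionType.And;
    public List<EemlActivity> Successors = new List<EemlActivity>();
}
```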

Phase 1: Understand and Identify Simulator Requirements (E1): Interactions With Customers and Stakeholders: When developing any software system (the VR simulator, in our case), it is important to understand the expectations of the customers (which in our case included surgeons and residents) and their objectives in creating the target system.

In this phase, we focused on understanding and identifying the simulation system and surgical training requirements. The project team was composed of several two-member groups: a requirements group, a simulator design group, a simulator building group, and an assessment group. Two expert surgeons served as knowledge sources for this project. The requirements group members led the discussions and interactions with the expert surgeons to gain a better understanding of the overall simulator requirements, including desired platforms, user interface preferences, and specifications. The eEML model reflects the main attributes of these activities; the IIs include the surgical training objectives. The main output (or DO) of this activity was the identification of the software system and surgical training requirements that need to be addressed when building the simulator.

To support this activity, several meetings and discussions were conducted involving project team members and expert surgeons at the Texas Tech University Health Sciences Center (TTUHSC), El Paso, Texas; additional Skype-based meetings as well as a review of videos of physical surgical procedures were also conducted to gain a better grasp of the surgical contexts involved. This set of functional requirements provided the basis for the design and development of the simulator and is summarized below. The focus was on two types of requirements, namely, system requirements and training requirements.

A. Environment and System Requirements

1) Simulator Content and Objects: The various objects and entities within each simulator scene (such as bones, implants, and surgical tools), along with the surgical training procedures, need to be modeled and represented accurately.

2) Computer Platform: The computer platform for training will be Windows based with appropriate interfaces to support the two training modes: haptic and immersion.

3) Interacting Interfaces: The immersion environment will be expected to be built on a VR-based platform that will allow residents to gain a better understanding of the surgical intricacies through interactive training interfaces. A second mode of interaction during training will support haptic (touch) capabilities for residents to enable them to virtually interact with the surgical components.

4) User Support Interface: The user interfaces for training are expected to support three modes: avatar, voice based, and text. The avatar-based interface should include a humanoid model capable of guiding a resident through the training activities with the help of voice and text cues.

Fig. 1. Top-level view of the eEML model to support the ICSE approach to develop the simulator.

5) Distributed Training Support: The simulator should allow residents to access the simulator from multiple remote locations in both stand-alone and collaborative training sessions.

6) Networking Requirements: The communication during the distributed interactions will be based on SDN-based networking principles.


Fig. 2. Top-level eEML model showing the steps involved in the LISS plating surgical procedure.

B. Training Requirements

1) The simulator will be capable of providing education and training in the surgical steps of the LISS surgical procedure.

2) The simulator modules will be expected to support assessment of residents' learning of the content through a pretest and posttest; the expert surgeons will design and supervise the pretests and posttests.

3) The project team is expected to collect data to measure latency and bandwidth during the distributed interactions conducted as part of the posttests.

Phase 2: Understand the Surgery (Process) Domain (E2): The output of Phase 1 was the set of surgical training requirements, which included identifying the type of surgical procedures (the focus of the training activities), the computing platform and software to be used, as well as the skills to be acquired as outcomes of this training procedure, among others. Based on these requirements, it was important for the design team to understand the intricacies of the various tasks within the steps of the surgical procedure (LISS plating surgery). The project team addressed this by reading books, reports, and papers, studying videos of LISS plating surgery, and interviewing and conducting discussions with expert surgeons, among other activities. The primary outcome of this important phase is the understanding obtained of the LISS plating surgical process, which provided a strong foundation for the remaining phases in this overall process. This understanding was captured through the creation of eEML-based models that provided sufficient detail on the intricacies of each surgical step (shown in Fig. 2).

Phase 3: Design the Simulator (E3): After the development of the eEML models, the next phase was to create software design models based on the eEML process or information-centric models. These software design models were created to serve as a basis for the development of the simulator. The emphasis in this phase was on designing the simulator, including developing the modular architecture and identifying the various functional modules (including the training modules and their scope, the user interface modules, etc.). The main outcomes of this phase are software design models reflecting the overall design of the haptic-based and immersive simulators; these include diagrams such as sequence, communication, and class diagrams based on the UML. These software design diagrams were used as the basis for the next activities (E4 and E5, which involved building the simulators). Adopting the SE approach enabled the project team to realize the importance of planning this crucial phase, which included designing the software elements of the simulator using sequence diagrams and class diagrams (described in Section III).

Phase 4/Phase 5: Build the Haptic-Based and Immersive Simulators (E4 and E5): These phases involved building the haptic-based and immersive simulators using various software tools and VR equipment. Adopting the ICSE approach enabled the project team to identify the tools to be used in building this environment, including the Unity 3D engine (which was used for the main simulation environment building activity), Blender, and Solidworks (a CAD modeling tool used to create the various 3-D models for the various simulation scenes). The primary outcome (DO) after completion of these phases is the completed simulator-based training environments.

Phase 6: Perform Learning Interactions (E6): It was important to verify the correctness of the simulator before it could be used as a training tool. The role of the expert surgeons and the medical residents, along with the overall process steps, was identified early in the ICSE process. In general, the expert surgeons first interacted with the various modules of the simulators to ensure their correctness, scope of training, and level of detail. As the simulators' development progressed, timely feedback was provided by expert surgeons to make changes to the simulators' contents to ensure correctness as well as to initiate modifications to add or reduce detail or incorporate specific training nuances into various surgical training tasks; these were then implemented by the project design and building teams; subsequently, assessment of learning using these simulators was performed involving medical residents and students. Additional details of the learning interactions involving medical residents and students are presented in Section IV.

As a part of the ICSE approach, it was important to capture the intricacies of the surgical procedure in order to clearly understand the requirements for the design and development of such complex simulation environments. eEML models for the LISS plating surgical procedure were built, which served as use case diagrams in the context of software engineering. Involvement of customers in the design process is an important facet of the SE approach [95], [96]. In our ICSE approach, the surgery team consisting of expert surgeons was involved in the creation of the eEML model presenting the steps involved in the LISS plating surgical procedure. The training systems developed were based on the eEML model shown in Fig. 2, which shows the top-level view of the activities involved in the surgical procedure. Moreover, additional decomposition models were developed for each of the six training systems (TS-1 to TS-6). For brevity, only one decomposition model is shown in Fig. 9.

III. ICSE-BASED TRAINING FRAMEWORK

In this section, the architecture of the training environments and their implementations are discussed. A software-based training process manager (component) coordinates the training activities involving the HITS and FITS training simulators (elaborated in Section III-A). The avatar-based training component guides the human (resident) in the loop during the simulation-based training activities; the process manager maintains overall supervisory control of the training activities involving the six training simulator systems (discussed in Section III-E); a tracking component monitors the progress of the simulation-based training activities and updates the training status as the medical residents progress through the various training sessions using the six training systems; after this training, the assessment of the learning activities is conducted using the online learning system (discussed in Section IV). A 3-D avatar (a human-like model or representation that can interact with the users) guides the training interactions and provides voice commands to help the residents with the training activities, along with text instructions visible on the screen (shown in Fig. 4). Both HITS and FITS have two modes of training involving the avatar system: a) automated and b) manual. The automated mode involves the avatar providing an overview of the surgical simulation in a step-by-step manner with minimal user interaction. The manual mode allows the user to manually practice the various surgery steps using the haptic device for the HITS system and the controllers for the FITS system.

Fig. 3. Interactions between components for HITS.
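As an illustration of how the automated and manual avatar-guided modes could be organized in a Unity-based environment, the following is a minimal C# sketch; the class, field, and method names are ours (hypothetical) and simplify the actual avatar and training manager components.

```csharp
using UnityEngine;

public enum TrainingMode { Automated, Manual }

// Minimal sketch of an avatar-guided training step controller.
public class AvatarTrainingController : MonoBehaviour
{
    public TrainingMode mode = TrainingMode.Automated;
    public AudioSource voice;          // plays the avatar's voice cue for a step
    public string[] stepInstructions;  // text cues shown on screen, one per surgical step
    private int currentStep = 0;

    void Start()
    {
        PresentStep(currentStep);
    }

    // Called when the current step finishes: automatically in automated mode,
    // or by the user (via haptic device or controllers) in manual mode.
    public void OnStepCompleted()
    {
        if (currentStep < stepInstructions.Length - 1)
        {
            currentStep++;
            PresentStep(currentStep);
        }
    }

    void PresentStep(int step)
    {
        Debug.Log("Avatar instruction: " + stepInstructions[step]); // stand-in for on-screen text
        if (voice != null) voice.Play();                            // voice cue accompanying the text
        // In automated mode the avatar demonstrates the step with minimal user interaction;
        // in manual mode the environment waits for the user to perform the step.
    }
}
```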

Prior to a discussion of the various elements of the HITS and FITS simulators, an overview of the components of the training framework is provided.

The training environments for both the FITS and HITS systems were built primarily using C#, JavaScript, and the Unity game engine running on Windows platforms.

A. Data Exchange and Communication Among Training Modules and Software Components

In a software engineering context, the simulator's design is represented as models and is driven by requirements modeling, which uses a combination of text and diagrammatic forms to depict requirements; this is part of the software design process. Two types of modeling perspectives have been provided as a part of this software engineering process. These perspectives resulted in use case diagrams (based on eEML) and class and sequence diagrams (based on UML). Each modeling perspective of the software design presents the design from a different point of view. The scenario-based perspective is reflected in the eEML-based use case diagrams (discussed in Section III-D). The scenario-based modeling depicts how the user interacts with the simulator when accomplishing various training activities. The class-based perspective is depicted in the class and sequence diagrams discussed later in this section.

Fig. 4. Close up of the avatar used in the simulator.

In general, UML-based sequence diagrams are design diagrams used for dynamic modeling. They focus on identifying the behavior of the entities within the system [22].

The various training components are coordinated by software entities named "managers"; for example, the Avatar Manager coordinates interactions between users and training environments with the help of the avatar system or component. The key components of the cyber-human systems framework for the haptic-based training activities are shown in Fig. 3. The data exchange between the distributed components is supported through software-defined networking (SDN) principles, which are described in the next section (Section III-B). Various functions were identified, designed, and implemented to support the interactions and training activities. The interface functions of the training environments (including keyboard/mouse control and haptic control) are modeled as software classes. Class diagrams can be used to provide an overview of a software system by describing the classes and objects inside the system and the relationships among them [23]. Class diagrams are not discussed in this paper for brevity.
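The following C# sketch illustrates one way such interface functions could be modeled as software classes behind a common abstraction, so a training environment can be driven by either keyboard/mouse or haptic input; the interface and class names are ours (hypothetical) and are not taken from the actual implementation.

```csharp
using UnityEngine;

// Common abstraction for the input devices used to manipulate surgical tools.
public interface IToolController
{
    Vector3 GetToolPosition();   // current position of the virtual tool tip
    bool IsGrabbing();           // whether the user is grasping an object
}

// Keyboard/mouse-driven control, useful for stand-alone desktop training.
public class KeyboardMouseController : IToolController
{
    public Vector3 GetToolPosition()
    {
        // Project the mouse position into the scene at a fixed depth from the camera.
        Vector3 mouse = Input.mousePosition;
        return Camera.main.ScreenToWorldPoint(new Vector3(mouse.x, mouse.y, 0.5f));
    }

    public bool IsGrabbing() { return Input.GetMouseButton(0); }
}

// Haptic-device-driven control; the fields below are placeholders that would be
// updated each frame from the haptic plugin, since the concrete calls depend on it.
public class HapticController : IToolController
{
    public Vector3 hapticStylusPosition;
    public bool hapticButtonPressed;

    public Vector3 GetToolPosition() { return hapticStylusPosition; }
    public bool IsGrabbing() { return hapticButtonPressed; }
}
```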

The primary components of the cyber-human framework underlying both the haptic (HITS) and immersive (FITS) training systems are the same in the context of the adopted ICSE approach; the primary difference between them was in the interaction and immersion technology. While the HITS system allowed users to interact with a haptic interface, the FITS system provided a fully immersive user experience. A description of the haptic training system and the FITS follows.

Fig. 5. Resident interacting with the HITS system.

B. HITS

The haptic training system uses the Geomagic Touch haptic device, which allows users to touch, grasp, and interact with various surgical tools during the simulation activities. The primary function of the haptic interface is to give an intuitive "feel" for various tasks (such as picking up various plates or tools, placing them accurately in a certain location, etc.). Fig. 5 shows a resident interacting with the HITS system as part of the learning assessment.
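A minimal Unity C# sketch of the kind of pick-and-place interaction described above is shown below; the force-feedback step is only a commented placeholder because the concrete call depends on the OpenHaptics Unity plugin, and all names here are illustrative rather than taken from the actual simulator.

```csharp
using UnityEngine;

// Illustrative pick-and-place logic for a surgical tool driven by the haptic stylus.
public class HapticGrabbable : MonoBehaviour
{
    public Transform hapticProxy;      // object that follows the haptic stylus position
    public Transform targetLocation;   // where the plate/tool must be placed
    public float snapDistance = 0.01f; // metres; tolerance for a correct placement
    private bool grabbed = false;

    public void OnStylusButtonDown()
    {
        // Grab only if the stylus proxy is touching this object.
        if (Vector3.Distance(hapticProxy.position, transform.position) < snapDistance * 5f)
            grabbed = true;
    }

    public void OnStylusButtonUp()
    {
        grabbed = false;
        if (Vector3.Distance(transform.position, targetLocation.position) < snapDistance)
        {
            transform.position = targetLocation.position;  // snap into place when close enough
            // Here the training system would mark the step complete and, via the haptic
            // plugin, render a small force cue to confirm the placement.
        }
    }

    void Update()
    {
        if (grabbed)
            transform.position = hapticProxy.position;  // tool follows the stylus while grabbed
    }
}
```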

The HITS supports two types of training modes: the first is a stand-alone training mode, where a resident can access and use the haptic training system individually from a remote location, and the second is a collaborative training mode, where an expert surgeon can interact with several medical residents in different locations simultaneously as part of the learning process. A detailed description of the collaborative training mode and system follows.

The collaborative training system is made "network" compatible to support remote access from different locations using next-generation internet technologies. While the current internet has become ubiquitous in supporting novel societal and commercial usages, it was never originally designed for such a wide range of applications [27]. Owing to the unprecedented range of applications currently supported on the Web, new protocols, patches, and extensions have been added; however, this cannot be sustained in the long term. This situation and context has posed demanding technological and policy challenges in security, mobility, and heterogeneity [27]. For these reasons, several initiatives have started the development of the next generation of internets in the U.S., Europe [28], Japan, and other countries [28]. In the U.S., GENI [27], [29] is an initiative led by the National Science Foundation (NSF) that focuses on the design of next-generation internet frameworks. GENI's primary networking characteristics, which are an advantage over the current internet, include multigigabit bandwidth, low latency, software-defined networking (SDN), and/or control over the geographic location of resources; as part of the long-term U.S. Ignite initiative, six areas of application have been identified, including healthcare, education, advanced manufacturing, transportation, energy, and public safety; this simulator project is one of several projects overlapping both GENI and U.S. Ignite themes in the areas of healthcare and education.

Fig. 6 shows the SDN-integrated architecture of our surgical application. There are r redundant telemedicine servers (TMSs) in this architecture.

Fig. 6. Architecture of the telemedicine surgical application.

Fig. 7. Resident interacting with the Vive-based FITS simulator.

Failure to connect to up to r−1 TMSs can be seamlessly tolerated in this network architecture. To achieve this, the telemedicine clients (TMCs) do not connect directly to a TMS. Instead, each TMC connects to the TMSs through proxies implemented by SDN switches (realized through OpenFlow); OpenFlow is an SDN standard that allows network controllers to decide the network path of packets across the network of switches. If there are m OpenFlow proxies (OFPs), then the TMCs are partitioned into m groups, and each group connects to the TMSs through one of the OFPs. The OFPs play a crucial role in providing failure resiliency without introducing much latency.
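To illustrate the failover idea from the client's point of view, the following C# sketch cycles through a list of redundant server endpoints until a connection succeeds, so up to r−1 server failures are tolerated; in the actual architecture the redirection is handled transparently by the OpenFlow proxies, and the endpoint names here are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;

// Client-side view of connecting to one of r redundant telemedicine servers (TMSs).
public static class TmcConnector
{
    public static TcpClient ConnectToAnyServer(IEnumerable<(string host, int port)> servers)
    {
        foreach (var (host, port) in servers)
        {
            try
            {
                var client = new TcpClient();
                client.Connect(host, port);   // try the next redundant TMS
                return client;                // first reachable server wins
            }
            catch (SocketException)
            {
                // Server unreachable; tolerate the failure and try the next one.
            }
        }
        throw new InvalidOperationException("All redundant TMSs are unreachable.");
    }
}

// Example (hypothetical endpoints):
// var tms = TmcConnector.ConnectToAnyServer(
//     new[] { ("tms1.example.org", 9000), ("tms2.example.org", 9000) });
```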

C. FITS

The Vive-based immersive training system does not support a haptic interface, but its portability and low cost are important features. The Vive platform is equipped with a fully immersive VR headset providing a 110° field of view [40]. It has two wireless handheld controllers that can be used to explore and interact with the simulation environments in order to understand and practice the surgical tasks as part of the training activities. A resident is seen performing the training activities with the FITS system in Fig. 7.

D. Creating Information-Centric Models for the Training Simulators

As the scope of the training activities supported by this ICSE approach is fairly complex, information-centric models using eEML were created to help provide a structured basis for creating use case diagrams for the surgery-related training activities. Fig. 8 provides a partial view of this top-level model covering the first three systems, including the LISS assembly training system, the LISS insertion and position training system, and the fracture reduction training system. Further, the decompositions of each of these activities served as use case models for understanding the training complexities, which enabled the project team to design and build these training simulator systems. As explained in Section II, the eEML model consists of three categories of essential information for each modeled activity: the IC, the PAs, and the DOs.


Fig. 8. Laying the foundation for supporting complex interactions within and between the training simulator systems.

Fig. 9. ICSE approach-based eEML decomposition model created for the LISS Assembly procedure, which served as a basis to develop the LISS plating training system (TS-1).

For E1, these resources include the TS-1 simulator in both the FITS/HITS platforms; the teams involved in all the training systems are the residents who participate in the training and the expert surgeons who supervise the residents during training. There is an overlap between the HITS and FITS systems, but the physical resources (PRs) and software resources (SRs) differ for the two systems. A decomposition of the PRs/SRs is shown in Fig. 8 for both systems. As FITS is a fully immersive system, the PRs include the VR headset, handheld controllers, and tracking cameras that are part of the Vive system. The SRs (objects) for FITS are the Unity game engine and the SteamVR toolkit for building the application, as shown within the red dotted lines as A in Fig. 8. For the HITS system, the PRs are the haptic devices, and the SRs needed are the Unity 3D engine and the OpenHaptics plugin (shown as B in Fig. 8). The input (II1) is the training request submitted by residents and surgeons; the constraints are the training schedules of residents and expert surgeons. Interactions between the training systems are highlighted using the red dotted lines and arrows in Fig. 8. Training outcomes are represented as DOs (which include updating training status and providing feedback). For example, the successful outcome from E1 (the LISS plate assembly training activity, TS-1) becomes the information input (II2) for training activity E2, which is the plate insertion training (TS-2). Similar interactions occur within and between the remaining training systems (TS-3, TS-4, TS-5, and TS-6), which are not shown for the sake of brevity. The junction box OAS is an asynchronous OR junction (e.g., between E2 and E3), which indicates that after LISS insertion training, the next training can progress, or, if the training was unsatisfactory, the training is repeated. As the surgical training process is complex, decompositions of each of the surgical activities were developed, which enabled the teams to focus on the fidelity and correctness of the surgical simulation scenarios; it also enabled the expert surgeons to verify the simulation details prior to the creation of the simulation environments. The content and scope of the training systems are discussed in Section III-E.
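The chaining of training systems through DOs, IIs, and the asynchronous OR junction can be sketched in code roughly as follows; this is an illustrative C# fragment with hypothetical names, not the simulator's actual process manager.

```csharp
using System;
using System.Collections.Generic;

// One training system (TS-1 ... TS-6) viewed as an eEML activity.
public class TrainingSystem
{
    public string Name;
    public Func<bool> RunSession;   // returns true when the trainee's outcome is satisfactory
    public TrainingSystem(string name, Func<bool> runSession) { Name = name; RunSession = runSession; }
}

public static class TrainingProcessManager
{
    // Runs the training systems in sequence. The decision outcome (DO) of one system becomes
    // the information input (II) of the next; the asynchronous OR junction means an
    // unsatisfactory session is simply repeated before training progresses.
    public static void RunCurriculum(IList<TrainingSystem> systems)
    {
        foreach (var ts in systems)
        {
            bool satisfactory = false;
            while (!satisfactory)
            {
                Console.WriteLine("Starting " + ts.Name);
                satisfactory = ts.RunSession();
                if (!satisfactory)
                    Console.WriteLine(ts.Name + " unsatisfactory; repeating training.");
            }
            Console.WriteLine(ts.Name + " completed; outcome passed to the next system.");
        }
    }
}
```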


E. LISS Plating Surgical Training Systems

For both HITS and FITS, six training systems have been developed for the LISS plating surgical process. The systems are the LISS Assembly Training System (TS-1), LISS Insertion Training System (TS-2), Position Training System (TS-3), Fracture Reduction Training System (TS-4), Screw Insertion Training System (TS-5), and Guide Removal Training System (TS-6). For brevity, only TS-1 is discussed in detail in this paper.

Fig. 10. ICSE-based eEML decomposition model created to design the validation and learning interaction tasks.

1) LISS Plate Assembly Training System (TS-1): The training objective of this system is to enable residents or budding surgeons to become skilled at assembling the LISS surgical plate. In this process, the LISS insertion guide and the LISS plate are attached using the fixation bolt through the distal hole. This is followed by the insertion of the stabilization bolt along with the insertion sleeve to stabilize the LISS plate. During the design of the training systems, these key steps were identified as process entity elements in the eEML model (the decomposition of the top-level model shown in Fig. 2), shown in Fig. 9 as a part of the ICSE approach.
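A simple way to enforce the assembly order described above in a training scene is a small step state machine; the following C# sketch is illustrative only (the enum values mirror the steps named in the text, while the class and method names are ours).

```csharp
using UnityEngine;

// Ordered steps of the LISS plate assembly task (TS-1).
public enum AssemblyStep
{
    AttachInsertionGuideWithFixationBolt,   // fixation bolt through the distal hole
    InsertStabilizationBoltWithSleeve,      // stabilization bolt plus insertion sleeve
    Completed
}

public class LissAssemblyTrainer : MonoBehaviour
{
    public AssemblyStep currentStep = AssemblyStep.AttachInsertionGuideWithFixationBolt;

    // Called by the interaction code when the trainee finishes a step.
    public void ReportStepDone(AssemblyStep step)
    {
        if (step != currentStep)
        {
            Debug.Log("Out-of-order action; the avatar prompts the trainee to redo the current step.");
            return;
        }

        currentStep = (AssemblyStep)((int)currentStep + 1);
        if (currentStep == AssemblyStep.Completed)
            Debug.Log("TS-1 complete; outcome recorded as the input for TS-2 (plate insertion).");
    }
}
```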

IV. VALIDATION AND LEARNING ASSESSMENT

A. Validation of Training Simulators

The design of the simulator's content and learning interactions was created based on the information model (elided eEML model) shown in Fig. 10, including the decomposition of task E6 (shown in Fig. 1). Before the learning assessment activities were conducted, the project team, including the collaborating surgeons, conducted several reviews to ensure that the system as well as the training requirements identified earlier were satisfied (a summary of these requirements is outlined in Section II).

Three expert surgeons reviewed all the simulation scenarios created to support the training activities to ensure that the surgical scenarios and the training steps simulated were satisfactory in terms of accuracy and level of detail. This was completed in several cycles as the various training modules were completed; feedback was provided after review of each simulation module, including modifications to the training process as well as incorporation of additional nuances to provide for a better training process. This resulted in various changes to the surgical simulation content and training aspects (referred to as E6-1 in Fig. 10). In some environments, additional help facilities were introduced along with cues to help a resident gain a better grasp of the surgical steps. Other modifications focused on adding surgical complexities and on making the user interface friendlier to ensure that a medical resident is able to complete a specific surgical simulation task.

After these changes were implemented and verified, assessment of the learning activities was undertaken.

B. Learning Assessment Process and System

The usefulness and impact of using the developed training environments for education and training were assessed through interactions with surgeons, residents, and medical students at the Paul Foster School of Medicine, Texas Tech University Health Sciences Center, in El Paso, Texas. This assessment activity was categorized into three thrusts: i) residents and students remotely accessing the stand-alone HITS training environments; ii) residents and students collaborating with the expert surgeon using the collaborative HITS system; and iii) comparative assessment of learning using FITS and HITS.

A pretest/posttest method was used to evaluate the improvements in the participants. A total of 64 users participated in the assessment, consisting of 40 medical students and 24 residents. The average age of the participants was 26. In these assessment tasks, for each of the participants, a pretest (referred to as E6-2 in Fig. 10) was first conducted to test the knowledge/skills of the participants in the domain of LISS surgery. After the pretest, the participants interacted with the training environments to learn more about the various LISS plating surgical processes (referred to as E6-3/E6-4/E6-5 in Fig. 10). Subsequently, the participants were evaluated through a posttest (referred to as E6-6 in Fig. 10). A total of 30 min was provided to the participants to complete the pretest and the posttest. The time allotted to interact with the simulator was limited to an hour, and the participants had the choice of performing the training as many times as they needed. Each of the activities was conducted on the same day. There was a 5-min interval after the pretest was completed and also after the interactions with the simulator were completed. A total of ten questions were asked in the pretest and the posttest, each worth ten points. There was no negative scoring in the tests. The questions asked in the pretest were repeated in the posttest.

For the stand-alone HITS-based training (referred to as E6-3 in Fig. 10), 27 out of 32 participants demonstrated improvement in their understanding of the LISS plating surgical process. The lead surgeon involved in the learning studies termed the learning improvement "significant" if a participant showed an improvement of 40 points or above. A range of improvement between 10 and 40 points was termed "moderate"; a lesser score difference was categorized as "minimal." Based on this categorization, 12 out of the 27 students showed significant learning improvement; the learning improvement of the other 15 students was classified as moderate. These results underscore the impact of using VR-based simulators to improve the understanding of orthopedic surgery procedures. Feedback was also obtained regarding the usefulness of these simulators and the teaching avatar used in the training environments.
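A minimal C# helper mirroring the scoring rule described above (ten questions worth ten points each; improvements of 40 or more points counted as significant, 10 to 40 as moderate, and anything less as minimal) could look like the following; it is included only to make the categorization explicit, and the class name is ours.

```csharp
public static class LearningAssessment
{
    // Pretest/posttest scores are each out of 100 (ten questions, ten points each).
    public static string CategorizeImprovement(int pretestScore, int posttestScore)
    {
        int improvement = posttestScore - pretestScore;
        if (improvement >= 40) return "significant";
        if (improvement >= 10) return "moderate";
        return "minimal";
    }
}

// Example: CategorizeImprovement(30, 80) returns "significant" (an improvement of 50 points).
```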

In the second thrust, the focus was on collaborative learning using HITS (referred to as E6-4 in Fig. 10); eight participants, comprising medical students and residents, interacted with an expert surgeon who was in a remote location; the collaborative training was conducted through the GENI network. The eight participants showed improvements in their understanding of the LISS plating surgical process. Feedback obtained from them indicated that all participants were satisfied with the performance of the training environments, with minimal latency issues. Two out of the eight participants showed significant improvement; six participants showed moderate improvement.

The performance of the GENI-based network with respect to latency during the learning interactions was also studied. Network latency between the locations was measured using Internet Control Message Protocol (ICMP) ping.
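For reference, the kind of round-trip measurement described here can be reproduced with the .NET Ping class, as in the short C# sketch below; the host name is a placeholder, and this is not the project's actual measurement script.

```csharp
using System;
using System.Net.NetworkInformation;

public static class LatencyProbe
{
    // Sends a burst of ICMP echo requests and reports the average round-trip time.
    public static void Measure(string host, int samples = 10)
    {
        var ping = new Ping();
        long totalMs = 0;
        int successes = 0;

        for (int i = 0; i < samples; i++)
        {
            PingReply reply = ping.Send(host, 1000);   // 1000 ms timeout per probe
            if (reply.Status == IPStatus.Success)
            {
                totalMs += reply.RoundtripTime;
                successes++;
            }
        }

        if (successes > 0)
            Console.WriteLine($"{host}: average RTT {totalMs / successes} ms over {successes} replies");
        else
            Console.WriteLine($"{host}: no ICMP replies received");
    }
}

// Example (placeholder host): LatencyProbe.Measure("remote-site.example.org");
```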


Latency data for three samples involving collaborative interactions between an expert surgeon and a medical resident (in two different locations) were collected over a 2-h period; the latency was stable at around 48 ms.

In the third thrust (referred to as E6-5 in Fig. 10), the assessment was conducted involving two groups (A and B) of medical students/residents; group A was exposed to training activities using HITS relating to LISS assembly training (TS-1); subsequently, they were trained using FITS relating to fracture reduction (TS-4). Similarly, group B was trained using HITS relating to fracture reduction (TS-4) and then trained using FITS for surgical activities relating to LISS assembly (TS-1). The primary objective was to study the effectiveness of using a haptic interface versus a nonhaptic but fully immersive environment. Each group consisted of 12 participants who had not interacted with any of the simulators before. Both groups participated in a pretest; subsequently, one group was trained using the HITS and the other group was trained using the FITS; posttests were subsequently conducted.

All the participants in both groups showed learning improvements in the posttest. The average improvement in the group trained using FITS was 48 points, and that in the group trained using HITS was 43.33 points. The results show a slightly better improvement in scores for the group that interacted with FITS compared with the group that interacted with HITS. A survey of all participants indicated that a majority of participants (ten out of twelve) preferred the FITS-based training system.

C. Qualitative Feedback Surveys (QFSs)

QFSs were also conducted in which all 24 participants (12 using the HITS and 12 using the FITS) provided responses regarding the user friendliness and the effectiveness of instruction support during the simulator-based training activities. Each survey response involved assigning a rating from 1 to 10 (with 1 being the lowest rating and 10 the highest). Some of these criteria were based on the NASA Task Load Index (TLX) [57]. NASA TLX is considered one of the leading standards for measuring subjective workload across a wide range of applications; it has been used to collect user feedback for environments such as simulators, aircraft cockpits, command, control, and communication (C3) workstations, and laboratory environments.
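
As a rough illustration of how such survey responses could be aggregated, the sketch below (hypothetical; the paper does not specify the exact scoring procedure, and the criterion names are placeholders) averages the 1-10 ratings per criterion across participants.

```python
from statistics import mean


def summarize_survey(responses: list[dict[str, int]]) -> dict[str, float]:
    """Average 1-10 ratings per criterion across all participants.

    `responses` is a list of per-participant dictionaries, e.g.
    {"user_friendliness": 9, "instruction_support": 8}.
    Criterion names here are illustrative, not the study's exact items.
    """
    criteria = responses[0].keys()
    return {c: mean(r[c] for r in responses) for c in criteria}


# Example with two hypothetical participants
ratings = [
    {"user_friendliness": 9, "instruction_support": 8},
    {"user_friendliness": 7, "instruction_support": 9},
]
print(summarize_survey(ratings))  # {'user_friendliness': 8, 'instruction_support': 8.5}
```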

The creation of the cyber-human framework discussed in this paper was conducted as part of a US Ignite initiative aimed at exploring Future Internet architectures and networking principles; the surgical simulators were developed under one of the US Ignite projects focused on creating innovative network-based applications in the national priority areas of health and education. The scope of the training environments is currently being expanded to include the condylar plating surgical process along with exploring other VR technologies such as the HoloLens.

The network-based simulation framework facilitates 24/7 on-demand access for its users. Multiple users from different locations can learn from an expert surgeon remotely, which supplements the current training methods. The users can practice the steps of the surgery as many times as they need in the VR simulator. When practicing with synthetic bones, bones of different sizes need to be manufactured and obtained to support practice on femurs of different sizes; in the simulator-based approach, the bone sizes can be modified in the CAD model used as part of the training environment. In the VR simulator environments, the residents and users can learn interactively in a safe environment (where mistakes can be identified and corrected with the help of supervising expert surgeons). Future work will also focus on developing ways to monitor and record such training interactions to provide additional assessment of surgical skills. Other researchers have conducted transfer validity studies [64], [65] to study whether the skills learned in a simulator are transferable to another system. This aspect of transfer validation has not been addressed in our paper, but plans are underway to incorporate such an assessment in the next phase of our research.

V. CONCLUSION

This paper discussed the adoption of information-centric systems engineering (ICSE) principles to design a cyber-human systems-based simulator framework to train orthopedic surgery medical residents using haptic and immersive VR platforms. This ICSE-based approach provided a structured foundation to design and build a software-based training system; it enabled addressing two major thrusts underlying this implementation: a) plan, design, and build the training system (or simulators) and b) create information-centric use case diagrams to better understand the functional and temporal relationships within and between the six surgical training systems and activities. Our approach also emphasizes the adoption of emerging next-generation networking principles; to the best of our knowledge, our cyber training approach is the first reported use of such next-generation Internet technologies, as well as the Vive VR platform, in orthopedic medical training activities.

Expert surgeons played an important role in the design of the cyber-human information-centric framework discussed in this paper; use cases of the target surgical processes were modeled as information-intensive process models using a modeling language called the eEML. The impact of using these simulators was assessed through interactions with surgical residents at the Texas Tech Health Sciences Center (El Paso, Texas); the majority of participants showed significant improvements in their understanding of the LISS plating surgical process after interacting and learning with the training simulators.

ACKNOWLEDGMENT

The authors would like to thank the surgeons, residents, students, and other staff at the Paul L. Foster School of Medicine and the Texas Tech Health Sciences Center, El Paso, who participated in this project.

REFERENCES

[1] M. D. Tsai, C. S. Liu, H. Y. Liu, M. S. Hsieh, and F. C. Tsai, “Virtual reality facial contouring surgery simulator based on CT transversal slices,” in Proc. 5th Int. Conf. Bioinf. Biomed. Eng. (iCBBE), IEEE, May 2011, pp. 1–4.

[2] A. P. Sage and S. R. Olson, “Modeling and simulation in systems engineering: Whither simulation based acquisition?,” Simulation, vol. 76, no. 2, pp. 90–91, 2001.

[3] E. C. Smith, “Simulation in systems engineering,” IBM Syst. J., vol. 1, no. 1, pp. 33–50, 1962.

[4] D. Gianni, A. D’Ambrogio, and A. Tolk, Eds., Modeling and Simulation-Based Systems Engineering Handbook. Boca Raton, FL, USA: CRC Press, 2014.

[5] S. Ramo and R. St. Clair, “The systems approach: Fresh solutions to complex problems through combining science and practical common sense,” Anaheim, CA, USA: TRW, Inc., 1998.

[6] Systems Engineering Fundamentals. Fort Belvoir, VA, USA: Defense Acquisition University Press, Jan. 2001. [Online]. Available: https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-885j-aircraft-systems-engineering-fall-2005/readings/sefguide_01_01.pdf

[7] S. Cavalieri and G. Pezzotta, “Product–service systems engineering: State of the art and research challenges,” Comput. Ind., vol. 63, no. 4, pp. 278–288, 2012.

[8] W. R. Edwards, A Systems Engineering Primer for Every Engineer and Scientist. Berkeley, CA, USA: Lawrence Berkeley Nat. Lab., 2001.

[9] IDEF methods. [Online]. Available: http://www.idef.com/

[10] R. Gunda, J. Cecil, P. Calyam, and S. Kak, “Information centric frameworks for micro assembly,” in Proc. Int. Workshop Enterprise Integration, Interoperability Netw. + NSF Inf. Centric Eng. (Lecture Notes in Computer Science, vol. 7046). Berlin, Germany: Springer-Verlag, 2011, pp. 93–101.

[11] J. Cecil, “A functional model of fixture design to aid in the design and development of automated fixture design systems,” J. Manuf. Syst., vol. 21, no. 1, pp. 58–72, Aug. 2002.

[12] J. Cecil and M. Pirela-Cruz, “An information model for designing virtual environments for orthopedic surgery,” in Proc. EI2N Workshop, OTM Workshops, Graz, Austria (Lecture Notes in Computer Science, vol. 8186). Berlin, Germany: Springer-Verlag, 2013, pp. 218–227.

[13] J. Cecil, “Modeling the process of creating virtual prototypes,” Comput.-Aided Des. Appl., vol. 6, no. 1–4, pp. 1–4, 2015.

[14] A. D. Hall, “Three-dimensional morphology of systems engineering,” IEEE Trans. Syst. Sci. Cybern., vol. 5, no. 2, pp. 156–160, Apr. 1969.

[15] F. Mhenni, N. Nguyen, and J. Y. Choley, “SafeSysE: A safety analysis integration in systems engineering approach,” IEEE Syst. J., vol. 12, no. 1, pp. 161–172, Mar. 2016.

[16] M. D. Tsai, M. S. Hsieh, and C. H. Tsai, “Bone drilling haptic interaction for orthopedic surgical simulator,” Comput. Biol. Med., vol. 37, no. 12, pp. 1709–1718, 2007.

[17] P. Youngblood, P. M. Harter, S. Srivastava, S. Moffett, W. L. Heinrichs, and P. Dev, “Design, development, and evaluation of an online virtual emergency department for training trauma teams,” Simul. Healthcare, vol. 3, no. 3, pp. 146–153, 2008.

[18] B. R. A. Sales, L. S. Machado, and R. M. Moraes, “Interactive collaboration for virtual reality systems related to medical education and training,” Technol. Med. Sci., pp. 157–162, 2011.

[19] J. C. De Oliveira and N. D. Georganas, “VELVET: An adaptive hybrid architecture for very large virtual environments,” Presence: Teleoperators Virtual Environ., vol. 12, no. 6, pp. 555–580, 2003.

[20] R. Caceres and A. Friday, “Ubicomp systems at 20: Progress, opportunities, and challenges,” IEEE Pervasive Comput., vol. 11, no. 1, pp. 14–21, Jan./Mar. 2012.

[21] P. Winzer, “Generic system description and problem solving in systems engineering,” IEEE Syst. J., vol. 11, no. 4, pp. 2052–2061, Dec. 2017.

[22] Sequence diagrams. [Online]. Available: http://www.agilemodeling.com/artifacts/sequenceDiagram.html

[23] Class diagrams. [Online]. Available: https://www.visualparadigm.com/VPGallery/diagrams/Class.html

[24] J. Cecil and M. Pirela-Cruz, “Development of an information model for a virtual surgical environment,” presented at the TMCE 2010, Ancona, Italy, Apr. 12–16, 2010.

[25] K. Kunkler, “The role of medical simulation: An overview,” Int. J. Med. Robot. Comput. Assisted Surgery, vol. 2, no. 3, pp. 203–210, 2006.

[26] P. H. Cosman, P. C. Cregan, C. J. Martin, and J. A. Cartmill, “Virtual reality simulators: Current status in acquisition and assessment of surgical skills,” ANZ J. Surgery, vol. 72, no. 1, pp. 30–34, 2002.

[27] GENI. [Online]. Available: www.geni.net

[28] FIRE, 2016. [Online]. Available: https://www.ict-fire.eu/

[29] M. Berman, “GENI: A federated testbed for innovative network experiments,” Comput. Netw., vol. 61, pp. 5–23, 2014.

[30] J. A. Yang, V. Jaganathan, and R. Du, “A new dynamic model for drilling and reaming processes,” Int. J. Mach. Tools Manuf., vol. 42, no. 2, pp. 299–311, 2002.

[31] A. Langella, L. Nele, and A. Maio, “A torque and thrust prediction model for drilling of composite materials,” Composites Part A: Appl. Sci. Manuf., vol. 36, no. 1, pp. 83–93, 2005.

[32] B. Tolsdorff et al., “Virtual reality: A new paranasal sinus surgery simulator,” Laryngoscope, vol. 120, no. 2, pp. 420–426, 2010.

[33] K. S. Choi, S. Soo, and F. L. Chung, “A virtual training simulator for learning cataract surgery with phacoemulsification,” Comput. Biol. Med., vol. 39, no. 11, pp. 1020–1031, 2009.

[34] G. Echegaray, I. Herrera, I. Aguinaga, C. Buchart, and D. Borro, “A brain surgery simulator,” IEEE Comput. Graphics Appl., vol. 34, no. 3, pp. 12–18, May/Jun. 2014.

[35] C. Luciano, P. Banerjee, and T. DeFanti, “Haptics-based virtual reality periodontal training simulator,” Virtual Reality, vol. 13, no. 2, pp. 69–85, 2009.

[36] Y. Shi, Y. Xiong, X. Hua, K. Tan, and X. Pan, “Key techniques of haptic related computation in virtual liver surgery,” in Proc. 8th Int. Conf. Biomed. Eng. Inf. (BMEI), Oct. 2015, pp. 355–359.

[37] L. Yu, T. Wang, W. Wang, Z. Wang, and B. Zhang, “A geometric modeling method based on OpenGL in virtual gall bladder surgery,” presented at the Int. Conf. Comput. Sci. Electron. Eng., Mar. 2013.

[38] T. M. Peters, C. A. Linte, J. Moore, D. Bainbridge, D. L. Jones, and G. M. Guiraudon, “Towards a medical virtual reality environment for minimally invasive cardiac surgery,” in Proc. Int. Workshop Med. Imag. Virtual Reality, Aug. 2008, pp. 1–11.

[39] T. S. Sørensen, S. V. Therkildsen, P. Makowski, J. L. Knudsen, and E. M. Pedersen, “A new virtual reality approach for planning of cardiac interventions,” Artif. Intell. Med., vol. 22, no. 3, pp. 193–214, 2001.

[40] J. S. Strenkowski, C. C. Hsieh, and A. J. Shih, “An analytical finite element technique for predicting thrust force and torque in drilling,” Int. J. Mach. Tools Manuf., vol. 44, no. 12, pp. 1413–1421, 2004.

[41] J. Qin, W.-M. Pang, Y.-P. Chui, T.-T. Wong, and P.-A. Heng, “A novel modeling framework for multilayered soft tissue deformation in virtual orthopedic surgery,” J. Med. Syst., vol. 34, pp. 261–271, 2010.

[42] [Online]. Available: https://www.abos.org/abos-surgical-skills-modules-for-pgy-1-residents.aspx

[43] K. Kirkpatrick, “Software-defined networking,” Commun. ACM, vol. 56, no. 9, pp. 16–19, 2013.

[44] A. Nemani, G. Sankaranarayan, K. Roberts, L. Panait, M. Cao, and C. S. De, “Hierarchical task analysis of hybrid rigid scope natural orifice translumenal endoscopic surgery (NOTES) cholecystectomy procedures,” in Proc. Medicine Meets Virtual Reality Conf. (NEXTMED/MMVR20), San Diego, CA, USA, Feb. 20–23, 2013, pp. 293–297.

[45] A. Jalote-Parmar and P. Badke-Schaub, “Workflow integration matrix: A framework to support the development of surgical information systems,” Des. Stud., vol. 29, no. 4, pp. 338–368, 2008.

[46] P. Jannin, “Surgical process modeling: Methods and applications,” presented at the Medicine Meets Virtual Reality Conf. (NEXTMED/MMVR20), San Diego, CA, USA, Feb. 20–23, 2013.

[47] R. Kopach-Konrad et al., “Applying systems engineering principles in improving health care delivery,” J. Gen. Internal Med., vol. 22, no. 3, pp. 431–437, 2007.

[48] J. Cecil, A. Gupta, P. Ramanathan, and M. Pirela-Cruz, “A distributed collaborative simulation environment for orthopedic surgical training,” in Proc. Annu. IEEE Int. Syst. Conf. (SysCon), Apr. 2017, pp. 1–8.

[49] T. Huber, M. Paschold, C. Hansen, T. Wunderling, H. Lang, and W. Kneist, “New dimensions in surgical training: Immersive virtual reality laparoscopic simulation exhilarates surgical staff,” Surgical Endoscopy, vol. 31, pp. 4472–4477, 2017.

[50] P. Suresh and J. P. Schulze, “Oculus Rift with stereo camera for augmented reality medical intubation training,” Electron. Imag., vol. 2017, no. 3, pp. 5–10, 2017.

[51] J. Egger et al., “Integration of the HTC Vive into the medical platform MeVisLab,” SPIE Med. Imag. Int. Soc. Opt. Photon., vol. 10138, Art. no. 1013817, Mar. 2017.

[52] S. Wijewickrema et al., “Design and evaluation of a virtual reality simulation module for training advanced temporal bone surgery,” in Proc. IEEE 30th Int. Symp. Comput.-Based Med. Syst. (CBMS), Jun. 2017, pp. 7–12.

[53] L. Panait, E. Akkary, R. L. Bell, K. E. Roberts, S. J. Dudrick, and A. J. Duffy, “The role of haptic feedback in laparoscopic simulation training,” J. Surgical Res., vol. 156, no. 2, pp. 312–316, 2009.

[54] A. J. Debes, R. Aggarwal, I. Balasundaram, and M. B. Jacobsen, “A tale of two trainers: Virtual reality versus a video trainer for acquisition of basic laparoscopic skills,” Am. J. Surgery, vol. 199, no. 6, pp. 840–84, 2010.

[55] E. C. Hamilton et al., “Comparison of video trainer and virtual reality training systems on acquisition of laparoscopic skills,” Surgical Endoscopy Other Int. Tech., vol. 16, no. 3, pp. 406–411, 2002.

[56] D. Morris, H. Tan, F. Barbagli, T. Chang, and K. Salisbury, “Haptic feedback enhances force skill learning,” in Proc. Second Joint EuroHaptics Conf. Symp. Haptic Interfaces Virtual Environ. Teleoperator Syst. (WHC’07), Mar. 2007, pp. 21–26.

[57] S. G. Hart and L. E. Staveland, “Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research,” Adv. Psychol., vol. 52, pp. 139–183, 1988.

[58] O. Alagoz, L. Maillart, A. Schaefer, and M. Roberts, “Determining the acceptance of cadaveric livers using an implicit model of the waiting list,” Oper. Res., vol. 55, no. 1, pp. 24–36, 2007.


[59] A. Jeffrey, X. Xia, and I. Craig, “When to initiate HIV therapy: A control theoretic approach,” IEEE Trans. Biomed. Eng., vol. 50, no. 11, pp. 1213–1220, Nov. 2003.

[60] J. Peterson, Petri Net Theory and the Modeling of Systems. Upper Saddle River, NJ, USA: Prentice-Hall, 1981.

[61] J. Cecil, “A functional model of fixture design to aid in the design and development of automated fixture design systems,” J. Manuf. Syst., vol. 21, no. 1, pp. 58–72, Aug. 2002.

[62] J. D. Watterson, D. T. Beiko, J. K. Kuan, and J. D. Denstedt, “A randomized prospective blinded study validating acquisition of ureteroscopy skills using a computer based virtual reality endourological simulator,” J. Urology, vol. 168, no. 5, pp. 1928–1932, 2002.

[63] M. K. Schlickum, L. Hedman, L. Enochsson, A. Kjellin, and L. Fellander-Tsai, “Systematic video game training in surgical novices improves performance in virtual reality endoscopic surgical simulators: A prospective randomized study,” World J. Surgery, vol. 33, no. 11, pp. 2360–2367, 2009.

[64] E. G. G. Verdaasdonk, J. Dankelman, J. F. Lange, and L. P. S. Stassen, “Transfer validity of laparoscopic knot-tying training on a VR simulator to a realistic environment: A randomized controlled trial,” Surgical Endoscopy, vol. 22, no. 7, pp. 1636–1642, 2008.

[65] N. R. Howells, H. S. Gill, A. J. Carr, A. J. Price, and J. L. Rees, “Transferring simulated arthroscopic skills to the operating theatre,” Bone Joint J., vol. 90, no. 4, pp. 494–499, 2008.

[66] J. Cecil, M. B. R. Kumar, A. Gupta, M. Pirela-Cruz, E. Chan-Tin, and J. Yu, “Development of a virtual reality based simulation environment for orthopedic surgical training,” in On the Move to Meaningful Internet Systems: OTM 2016 Workshops. Cham, Switzerland: Springer, 2016, pp. 206–214.

[67] J. Cecil, A. Gupta, and M. Pirela-Cruz, “An advanced simulator for orthopedic surgical training,” Int. J. Comput. Assisted Radiol. Surgery, vol. 13, no. 2, pp. 305–319, 2018.

[68] J. Cecil, A. Gupta, M. Pirela-Cruz, and P. Ramanathan, “A cyber training framework for orthopedic surgery,” Cogent Medicine, vol. 4, no. 1, pp. 1–13, 2017.

[69] Systems Engineering. [Online]. Available: https://www.incose.org/AboutSE/WhatIsSE

[70] R. J. Mayer, M. K. Painter, and P. S. de Witte, IDEF Family of Methods for Concurrent Engineering and Business Re-engineering Applications. College Station, TX, USA: Knowledge Based Systems, 1994.

[71] J. Cecil, S. Albuhamood, A. Cecil-Xavier, and P. Ramanathan, “An advanced cyber physical framework for micro devices assembly,” IEEE Trans. Syst., Man, Cybern.: Syst., vol. 49, no. 1, pp. 92–106, Jan. 2019.

[72] N. A. Tepper, “Exploring the use of model-based systems engineering (MBSE) to develop systems architectures in naval ship design,” Master’s thesis, Dept. Mech. Eng., MIT, Cambridge, MA, USA, 2010.

[73] S. Uckun, T. Kurtoglu, P. Bunus, I. Tumer, C. Hoyle, and D. Musliner, “Model-based systems engineering for the design and development of complex aerospace systems,” SAE Tech. Paper, Warrendale, PA, USA, Rep. No. 2011-01-2664, 2011.

[74] V. Crespi, A. Galstyan, and K. Lerman, “Top-down vs bottom-up methodologies in multi-agent system design,” Auton. Robots, vol. 24, no. 3, pp. 303–313, 2008.

[75] S. C. Cook, A. P. Campbell, and Q. V. Do, “A ‘middle-out’ systems engineering approach to the development of a systems integration sandpit,” in Proc. Syst. Eng. Test Eval. Conf., 2010, pp. 1–13.

[76] [Online]. Available: https://www.scaledagileframework.com/model-based-systems-engineering/

[77] Y. Lu and J. Cecil, “An internet of things (IoT)-based collaborative framework for advanced manufacturing,” Int. J. Adv. Manuf. Technol., vol. 84, no. 5, pp. 1141–1152, May 2016.

[78] J. Cecil and M. Cruz, “An information centric framework for creating virtual environments to support micro surgery,” Int. J. Virtual Reality, vol. 15, no. 2, pp. 3–18, Nov. 2015.

[79] J. Cecil, “TAMIL: An integrated fixture design system for prismatic parts,” Int. J. Comput. Integr. Manuf., vol. 17, no. 5, pp. 421–435, 2004.

[80] A. P. Sage, Systems Engineering. Piscataway, NJ, USA: Wiley-IEEE, 1992.

[81] R. Adcock, “Principles and practices of systems engineering,” INCOSE, UK Chapter Library, Nov. 2001. [Online]. Available: www.incose.org.uk/libraryhtm

[82] J. Cecil and M. Punal, “Using an information modeling approach in the design and development of VE based systems,” in Proc. IIE Annu. Res. Conf., Portland, OR, USA, May 17–21, 2003.

[83] S. Sarkar, V. S. Sharma, and R. Agarwal, “Creating design from requirements and use cases: Bridging the gap between requirement and detailed design,” in Proc. 5th India Softw. Eng. Conf., Feb. 2012, pp. 3–12.

Avinash Gupta (S’17) received the bachelor’s degree in industrial engineering from Tribhuvan University, Kirtipur, Nepal, and the M.S. degree in product design and manufacturing from Visvesvaraya Technological University, Belgaum, Karnataka, India. He is currently working toward the Ph.D. degree in computer science with the Center for Cyber Physical Systems, Oklahoma State University, Stillwater, OK, USA.

He is currently a researcher with the Center for Cyber Physical Systems at Oklahoma State University. His research interests include virtual reality (VR)-based prototyping, information modeling, and Internet-of-Things collaborative frameworks. He has authored/coauthored several papers in journals and presented his research at various international conferences, including the IEEE International Conference on Automation Science and Engineering (CASE) and IEEE SysCon.

J. Cecil (SM’17) received the B.E. degree in mechanical engineering from the College of Engineering Guindy, Chennai (Madras), India, in 1988, the M.S. degree in industrial engineering from the State University of New York, Binghamton, NY, USA, in 1990, and the Ph.D. degree in industrial engineering from Texas A&M University, College Station, TX, USA, in 1995.

He is currently a Professor and Co-Director with the Center for Cyber Physical Systems, Department of Computer Science, Oklahoma State University, Stillwater, OK, USA. His research interests include the adoption of information-centric engineering principles spanning three core facets: modeling, simulation, and exchange of information in a range of process domains including advanced manufacturing, space systems (including deep space habitats), and telemedicine (including the design of surgical simulators). He is a pioneer in the design of agile frameworks supporting Internet-of-Things and cyber physical systems-based collaboration as well as the design of advanced virtual/mixed reality-based prototyping approaches in the context of simulation-based design. He is also active in the design of virtual reality (VR)-based virtual learning environments and cyber-based learning approaches for engineering and K-12 education. He has authored/coauthored many papers in leading journals. He has also presented papers and organized sessions at IEEE, American Society of Mechanical Engineers (ASME), and other conferences.

Dr. Cecil is the recipient of the Institute of Industrial Engineers Award for Technical Innovation and Oklahoma State University’s Outstanding Faculty Award in support of its Land Grant Mission. He is a Fellow of the American Society of Mechanical Engineers.

Miguel Pirela-Cruz received the medical degree from Temple University, Philadelphia, PA, USA. He completed his residency in orthopedic surgery at the Kingsbrook Jewish Medical Center and the State University of New York (SUNY) Downstate Medical Center, Brooklyn, NY, USA. He completed a fellowship in hand surgery at Thomas Jefferson University, Philadelphia, PA, USA.

He is a board-certified orthopedic surgeon with extensive experience in surgery of the hand and wrist. He was a surgeon for the military for 30 years, all over the country. He is also an accomplished medical illustrator and Professor. His surgical and research interests include hand, wrist, and upper extremity problems.

Parmesh Ramanathan (F’09) received the B.Tech. degree in electrical engineering from the Indian Institute of Technology Bombay, Mumbai, India, in 1984, and the M.S.E. degree in computer engineering and the Ph.D. degree in computer science and engineering, both from the University of Michigan, Ann Arbor, MI, USA, in 1986 and 1989, respectively.

He is currently a Professor with the Department of Electrical and Computer Engineering, University of Wisconsin–Madison, Madison, WI, USA, and the Associate Dean of Graduate Education with the Graduate School, University of Wisconsin–Madison, where he was the Chair of the Electrical and Computer Engineering Department from 2005 to 2009. His current research interests include wired and wireless networks, sensor networks, real-time systems, and fault-tolerant computing.