
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 22, NO. 2, MARCH 2014 363

Real-Time Control of an Interactive Impulsive Virtual Prosthesis

Nathan E. Bunderson

Abstract—An interactive virtual dynamic environment for testing control strategies for neural machine interfacing with artificial limbs offers several advantages. The virtual environment is low-cost, easily configured, and offers a wealth of data for post-hoc analysis compared with real physical prostheses and robots. For use with prosthetics and research involving amputee subjects it allows the valuable time with the subject to be spent in experiments rather than fixing hardware issues. The usefulness of the virtual environment increases as the realism of the environment increases. Most tasks performed with limbs require interactions with objects in the environment. To simulate these tasks, the dynamics of frictional contact, in addition to inertial limb dynamics, must be modeled. Here, subjects demonstrate real-time control of an eight degree-of-freedom virtual prosthesis while performing an interactive box-and-blocks task. With practice, four nonamputee subjects and one shoulder disarticulation subject were able to successfully transfer blocks in the virtual environment at an average rate of just under two blocks per minute. The virtual environment is configurable in terms of the virtual arm design, control strategy, and task.

Index Terms—Myoelectric control, neural machine interface, prosthesis, targeted muscle reinnervation, virtual system.

I. INTRODUCTION

THE GOAL of prosthetic devices is to replace a missing body part in order to restore function. While there have been great strides in achieving this goal more completely for upper limb prostheses, there is still much to be done. Upper limb amputees who rejected their prostheses reported that one of the two most important factors was that they were "just as or more functional without it" [1], and 74% of upper limb amputees who rejected their prosthesis would reconsider if function were improved at a reasonable cost [1]. It is often assumed that artificial hands and arms with many degrees-of-freedom more or less matched to those of biological hands and arms will be the best at restoring function. However, an artificial arm or hand that can act like a biological one will only be useful if it can easily be controlled by the user.

One difficulty in designing more user-friendly control methods for high-degree-of-freedom prostheses is the co-dependency between the hardware and the control method. It is difficult to design a control system unless there is hardware to test it with, and designing, building, testing, debugging, and ensuring the safety of the hardware is extremely costly and time consuming. It is not until the hardware has been constructed that the important user feedback on the design and control method is obtained. Once the feedback has been obtained, revisions in the design of the hardware can be implemented. Because of the length of time and amount of money involved in engineering the hardware, the design process is extremely long and difficult.

Manuscript received January 25, 2013; revised May 20, 2013; accepted July 01, 2013. Date of publication August 26, 2013; date of current version March 05, 2014. This work was supported by the Globe Foundation and McCormick Foundation. The author is with the School of Applied Physiology, Georgia Institute of Technology, Atlanta, GA 30332 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TNSRE.2013.2274599

To overcome this difficulty, virtual reality systems simulating the prosthetic or intact limbs have been developed where control methods can be assessed and redesign iterations can be performed faster and at a lower cost. Virtual systems have been developed for brain machine interfacing [2], [3] and prosthetic [4]–[11] applications. Dupont and Morin [9] developed a virtual reality system to train for myoelectric control of a single degree-of-freedom. The paradigm was extended to a multiple degree-of-freedom virtual hand to test pattern recognition of hand movements from electromyography (EMG) signals [8], [12]. To better assess efficacy of prosthetic control methods, virtual testing paradigms were developed [5], [6], [13] that added more elaborate goals or tasks in the virtual environment. Additional flexibility in the design of tasks, control methods, and the virtual arm itself was introduced with configurable environments [14]–[16]. These configurable environments have been demonstrated in several applications. Two nonamputee subjects controlled a nine degree-of-freedom system through a Flock of Birds motion tracking system (for eight of the degrees-of-freedom) and EMG (for the final degree-of-freedom) [17]. A tetraplegic subject was able to control a dynamic two degree-of-freedom system through six muscle actuators [10]. The virtual system designed by Bishop et al. was used to decode two degree-of-freedom center-out reaching movements of a Rhesus macaque monkey [18].

A key assumption is that the lessons learned in the virtual system are transferable to physical prosthesis design and use. Since most of the tasks performed with biological limbs or prostheses are interactive (i.e., there is some contact with objects in the environment), the physical realism and transferability of a virtual prosthesis could be improved if it, too, were interactive. This requires a virtual environment that simulates frictional contact dynamics. Such an environment would provide a rich data set including, but not limited to, joint excursions, torques, and contact forces, which could provide valuable insight when assessing various control strategies in post-hoc analysis. The data could also be used for real-time feedback. For example, contact forces of the fingers can be used to provide haptic feedback to the user. The system designed by Hargrove et al. is meant to simulate an interactive task but does not actually include contact dynamics [6]. Applications of the Davoodi and



Loeb environment which incorporate some form of contact dynamics have been described, but user control of these has not been demonstrated [19], [20]. The interactive transhumeral virtual prosthesis developed by Lambrecht et al. [11] had four control classes (elbow flexion/extension, forearm supination, and hand open) under synchronous EMG control and was controllable by able-bodied subjects. As of yet, real-time, interactive, EMG control of a full (shoulder-to-hand) virtual arm has not been demonstrated.

Here, I present an impulse-based virtual environment with contact dynamics for evaluating prosthetic and neural machine interfacing control strategies. The goal of this paper is to both describe the environment and demonstrate its use, and therefore the paper has two primary components. First, Section II of this paper contains a description of the virtual limb and simulated environment, the control strategy, and the command extraction process. Second, Sections II and III contain a description of a test (box-and-blocks) involving four nonamputee subjects and one shoulder disarticulation subject to demonstrate controllability.

II. METHODS

A. Physics Engine

Neuromechanic is a physics engine designed for forward dynamic simulation of biomechanical systems [21]. The dynamics are formulated such that the rigid body state variables are the joint position and velocity vectors $q$ and $\dot q$, and the equations of motion are of the form

$$M\ddot q = \tau_g + \tau_v + \tau_m + \tau_c$$

where $M$ is the generalized mass matrix, $\tau_g$ are generalized torques due to gravity, $\tau_v$ are generalized Coriolis and other acceleration torques, $\tau_m$ are the generalized command (or motor) torques, and $\tau_c$ are generalized contact friction torques. Generally, the state-dependent torques and accelerations are calculated at a given point in time and the joint acceleration and velocity vectors are integrated forward in time to achieve a force-based simulation of the motion of the system. However, an impulse-based approximation of the system dynamics can be used to increase integration speed and stability at the cost of decreased accuracy. At each time step the velocity and acceleration are updated to

$$\dot q \leftarrow \dot q + \Delta t\, M^{-1}\left(\tau_g + \tau_v + \tau_m + \tau_c\right), \qquad \ddot q \leftarrow 0$$

where $\Delta t$ is the integration interval. Thus, the impulse approximation reduces the order of the dynamical system from second to first order. The velocity is integrated with a fixed-step fourth-order Runge–Kutta integrator. All elements of the equations of motion are calculated internally in the physics engine except for the command torques $\tau_m$, which are set at each point in time by a control module that is "plugged in" to the Neuromechanic physics engine. The steps of simulation for the physics engine are diagrammed in Fig. 1.
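As a rough illustration of this impulse-based update, the following sketch advances the state one step by converting the net generalized torque directly into a velocity change. It is a minimal sketch under stated assumptions: a single Euler step stands in for the engine's fixed-step RK4 velocity integration, and mass_matrix and torques are hypothetical callables, not Neuromechanic's actual API.

```python
import numpy as np

def impulse_step(q, qdot, dt, mass_matrix, torques):
    """One impulse-style update: the net generalized torque is mapped to a
    velocity change, so only velocity (first order) is integrated.
    `mass_matrix` and `torques` are placeholder callables for M(q) and
    tau_g + tau_v + tau_m + tau_c."""
    M = mass_matrix(q)                          # generalized mass matrix M(q)
    tau = torques(q, qdot)                      # net generalized torque
    qdot = qdot + dt * np.linalg.solve(M, tau)  # velocity impulse: dt * M^-1 * tau
    q = q + dt * qdot                           # advance joint positions
    return q, qdot
```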

Fig. 1. Command flow diagram. The command extraction process converted raw EMG voltages to a discrete 40 Hz desired command signal. The Neuromechanic physics engine is a separate process which sampled the desired command signal through shared memory resources. The control module in the physics engine converted the desired command signal to joint torques depending on the state of the virtual limb. Joint torques contributed to the limb, box, and frictional contact dynamics simulated by the physics engine in real time.

B. Command Extraction Process

The command extraction process converted raw EMG signals to the desired class command signal. Electrodes were connected to preamplifiers (Liberating Technologies, Inc., Holliston, MA, USA) and then to a custom-built amplifier system (total gain of 4500) and sampled at 1 kHz by a Measurement Computing model USB-1616FS 16-bit A/D converter. EMG data were digitally band-pass filtered at 20–450 Hz with a third-order Butterworth filter. Four time-domain EMG features (mean absolute value, waveform length, zero crossings, and slope changes) [22] were extracted from 250 ms windows of processed EMG from each channel. The features were calculated at 25 ms intervals (thus, 225 ms of overlapping data per window). Each feature vector was classified as one of the 13 desired classes using a linear discriminant analysis (LDA) classifier [23]. The LDA is efficient and has similar performance to more complex algorithms [24]. The result was a desired class signal which was passed from the command extraction process to the control module.
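These four time-domain features are computed per channel over each 250 ms window; a minimal sketch for one channel follows. The noise threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def td_features(x, thresh=0.01):
    """Mean absolute value, waveform length, zero crossings, and slope
    sign changes for one EMG window `x` (1-D array). `thresh` guards the
    crossing counts against noise; its value here is an assumption."""
    mav = np.mean(np.abs(x))                           # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                    # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) > thresh))     # thresholded zero crossings
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 (np.maximum(np.abs(d[:-1]), np.abs(d[1:])) > thresh))  # slope changes
    return np.array([mav, wl, zc, ssc])
```

Concatenating the four features across all channels yields the feature vector passed to the LDA every 25 ms.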

C. Control Module

The control module converts the desired class command signal to command torques. The desired class could be one of 13 classes corresponding to mixed joint/endpoint control of an eight degree-of-freedom arm model (Fig. 2). Six of the classes corresponded to control of the hand (or "endpoint") location in Cartesian inertial coordinates (Endpoint In, Endpoint Out, Endpoint Up, Endpoint Down, Endpoint Forward, and Endpoint Backward). Six classes corresponded to individual joint control of the wrist and hand (Wrist Flexion, Wrist Extension, Forearm Pronation, Forearm Supination, Hand Open, and Hand Close). Finally, one class corresponded to no motion (Fig. 2). Two additional classes for wrist ulnar deviation (Wrist Ulnar) and wrist radial deviation (Wrist Radial) also existed in the control module but are not returned by the command extraction process. These two classes were activated automatically in the control module when the wrist deviation angle became large. This was done to prevent gimbal lock.

BUNDERSON: REAL-TIME CONTROL OF AN INTERACTIVE IMPULSIVE VIRTUAL PROSTHESIS 365

Fig. 2. Virtual arm with eight degrees-of-freedom (spherical joints at the shoulder and wrist, hinge joints at the elbow and index finger). Motion of the shoulder, elbow, and wrist joints was controlled implicitly through six endpoint classes (endpoint up [EU], endpoint down [ED], endpoint out [EO], endpoint in [EI], endpoint backward [EB], and endpoint forward [EF]). The "endpoint" was at the center of the hand and the directions corresponded to the Cartesian directions of the inertial frame of reference. Two degrees-of-freedom of the wrist and the index finger joint were also controlled explicitly by commands corresponding to motion of the joints of the wrist and hand (wrist flexion [WF], wrist extension [WE], forearm pronation [FP], forearm supination [FS], hand open [HO], hand close [HC]).

TABLE I
CONTROLLER AND NEUROMECHANIC ENGINE PARAMETERS

The controller applied a velocity ramp [25] to the "activation" $a_i$ of each output class:

$$a_i \leftarrow \begin{cases} \min\left(a_i + \Delta a,\; a_{\max,i}\right) & \text{if class } i \text{ is the selected class} \\ \max\left(a_i - \Delta a,\; 0\right) & \text{otherwise} \end{cases}$$

where $\Delta a$ is the ramp increment per command update. The result was that the activation of each command was continuously updated and varied from 0 to the maximum activation value $a_{\max,i}$ for that class. The values used in the experiment are given in Table I.

The activity of the currently selected class was used to compute a desired joint position $\hat q$ and velocity $\hat{\dot q}$. If the class was one of the six endpoint classes, then the class velocity $a$ was first used to set the velocity of the endpoint $\dot x$. For example, if the currently selected class was Endpoint Down, where "down" is in the negative $y$ direction in the inertial frame of reference, then $\dot x = [0, -a, 0]^{\mathrm T}$. The desired position and velocity were then

$$\hat{\dot q} = J^{+}\dot x, \qquad \hat q = q$$

where $J^{+}$ is the Moore–Penrose pseudo-inverse of the endpoint Jacobian. The result is a vector of desired joint velocities which are coordinated to move the hand in one of the six Cartesian directions. Motion of the wrist occurs during endpoint motion only to maintain the global orientation of the hand. For example, if the hand is in front of the body with the palm down, then as the endpoint is moved up, wrist flexion occurs to maintain the palm-down orientation of the hand.
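A minimal sketch of this endpoint-to-joint mapping follows, assuming a hypothetical jacobian(q) function that returns the 3-by-8 endpoint Jacobian; the function name and signature are illustrative, not the paper's implementation.

```python
import numpy as np

def endpoint_class_to_desired(q, jacobian, direction, activation):
    """Map an endpoint class to desired joint velocities via the
    Moore-Penrose pseudo-inverse of the endpoint Jacobian.
    `direction` is a unit vector in inertial coordinates, e.g.
    [0, -1, 0] for Endpoint Down; `activation` is the class activity."""
    J = jacobian(q)                        # 3 x n endpoint Jacobian (n = 8 here)
    xdot = activation * np.asarray(direction, dtype=float)
    qdot_des = np.linalg.pinv(J) @ xdot    # minimum-norm joint velocities
    q_des = q.copy()                       # desired position = current position
    return q_des, qdot_des
```

The pseudo-inverse yields the minimum-norm joint velocity achieving the commanded endpoint velocity, which is why the wrist moves only as much as needed to keep the hand orientation fixed.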

If the current desired class was a wrist class,

$$\hat{\dot q}_k = \pm a, \qquad \hat q_k = q_k$$

where $k$ is the degree-of-freedom corresponding to the wrist (if the class was Wrist Flexion or Wrist Extension) or the forearm degree-of-freedom (if the class was Forearm Pronation or Forearm Supination), and the sign follows the direction of the class.

If the current desired class was Hand Open or Hand Close,

$$\hat{\dot q}_k = \pm a, \qquad \hat q_k = q_k + \hat{\dot q}_k\,\Delta t$$

where $k$ is the index finger degree-of-freedom.

If the class returned by the command extraction process was No Movement and the wrist radial-ulnar deviation position was greater than 20° from the straight wrist posture, then the current desired class was changed to Wrist Ulnar or Wrist Radial and

$$\hat{\dot q}_k = \pm a, \qquad \hat q_k = q_k$$

where $k$ is the wrist radial-ulnar deviation degree-of-freedom. This was done to keep the wrist posture within an acceptable range and avoid gimbal lock without adding more control classes. If the wrist was within 20° of the straight posture, then a No Movement class resulted in

$$\hat{\dot q} = 0, \qquad \hat q = q.$$

The next desired command torques $\tau_1$ were calculated from several dynamic variables imported from the physics engine, a virtual stiffness $k_v$, as well as the desired joint positions and velocities.
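The exact form of this torque law is not reproduced here; the sketch below is a plausible stiffness-based realization consistent with the virtual spring $k_v$ and the desired states above. The velocity gain k_d and the overall PD structure are assumptions for illustration, not the paper's equation.

```python
import numpy as np

def command_torque(q, qdot, q_des, qdot_des, k_v, k_d):
    """Stiffness-based command torque pulling each joint toward its desired
    position and velocity. k_v is the virtual stiffness (Table I); the
    velocity gain k_d is an assumed addition for illustration."""
    return k_v * (q_des - q) + k_d * (qdot_des - qdot)
```

Because the desired position equals the current position for all classes except Hand Open/Hand Close, such a law leaves every joint but the index finger compliant, which matches the behavior described in the Discussion.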

366 IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 22, NO. 2, MARCH 2014

Fig. 3. Electrode placement indicated (A) with stars for nonamputee subjects, and photographed (B) for the shoulder disarticulation subject. The arm and hand restraint used during the experiment for nonamputee subjects is also shown (A).

The command interval $T$ was the interval at which the desired command signal was received and desired command torques were computed. The command torque exported from the controller module to the physics engine at any given time $t$ is an interpolation of the current and next desired joint torques:

$$\tau_m(t) = \tau_0 + \frac{t - t_0}{T}\left(\tau_1 - \tau_0\right).$$

Thus, the joint torques vary linearly from $\tau_0$ at $t_0$ to $\tau_1$ at $t_0 + T$, at which point $t_0$ is set to $t_0 + T$, $\tau_0$ is set to $\tau_1$, and $\tau_1$ is recalculated using a new desired class returned from the command extraction process.
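A minimal sketch of this interpolation step follows; the variable names are hypothetical, and the buffer roll-over described above would be performed by the caller when t reaches t0 + T.

```python
def interpolated_torque(t, t0, T, tau0, tau1):
    """Command torque exported to the physics engine: a linear blend from
    tau0 (valid at t0) to tau1 (valid at t0 + T) over the command interval T."""
    alpha = (t - t0) / T          # fraction of the command interval elapsed
    return tau0 + alpha * (tau1 - tau0)
```

Linear blending lets the 40 Hz command stream drive a physics engine stepping at a much finer timestep without torque discontinuities.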

D. Subject Setup

Four nonamputee subjects free from orthopedic or neurological pathologies and one subject with shoulder disarticulation and targeted muscle reinnervation (TMR) surgery were recruited. TMR redirects healthy nerves which once innervated the arm and hand to alternative sites on now defunct muscles [26], [27]. Subjects were informed and gave consent to the procedures approved by the Northwestern University Office for the Protection of Research Subjects Institutional Review Board. Each nonamputee subject had a minimum of 20 h of prior experience with EMG pattern recognition. The shoulder disarticulation subject uses a prosthesis with conventional myoelectric control (control is based on EMG signal amplitude or rate of change, and one EMG site is usually associated with one function [28]) but had at least 20 h of prior experience with EMG pattern recognition.

Nine (for nonamputee subjects) and twelve (for the shoulder disarticulation subject) self-adhesive silver/silver chloride bipolar surface electrode pairs (Noraxon Dual Electrodes) were placed on the arm and shoulder [Fig. 3(a)] or trunk [Fig. 3(b)]. All electrode pairs had an inter-pole distance of 4 cm. For the nonamputee subjects, electrode pairs were located over the muscle bellies of the flexor digitorum superficialis and extensor carpi ulnaris and circumferentially opposite each other on the forearm midway between the flexor digitorum superficialis and extensor carpi ulnaris pairs. Electrode pairs were also placed over the muscle bellies of the biceps, triceps, anterior deltoid, posterior deltoid, and upper trapezius. A harness and hand splint were used to restrict motion during the experiment [Fig. 3(a)]. For the shoulder disarticulation subject, electrode pairs were placed over his reinnervated muscles and a few additional locations on the trunk [Fig. 3(b)].

Fig. 4. Virtual environment as viewed from above and behind presented to the subjects during the experiments. Here, block initial positions are shown in the four locations (labeled P1, P2, P3, and P4). During the experiment only one block was displayed per trial.

The process of training the LDA classifier was very similar to that described elsewhere [29]. Briefly, subjects were prompted to perform muscle contractions corresponding to each of the 12 motion classes and a rest class. Subjects sustained each contraction for 3 s. The 3-s training contraction was performed twice for each class. No specific instructions were given to the nonamputee subjects about how to perform the contractions for the individual classes. The shoulder disarticulation subject was instructed to use shoulder motion as a proxy for endpoint motion: shoulder adduction for class Endpoint In, shoulder abduction for class Endpoint Out, shoulder up for class Endpoint Up, shoulder down for class Endpoint Down, shoulder forward for class Endpoint Forward, and shoulder backward for class Endpoint Backward.
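Given feature vectors labeled by the prompted class, the training step amounts to fitting a standard LDA. The sketch below uses scikit-learn as an assumed implementation; the paper does not specify its software.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_classifier(X, y):
    """Fit the LDA mapping feature vectors to desired classes.
    X: (n_windows, 4 * n_channels) features from the two prompted 3-s
    contractions per class; y: integer class labels (13 classes)."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, y)
    return clf

# At run time, one prediction per 25 ms feature vector:
# desired_class = clf.predict(features.reshape(1, -1))[0]
```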

E. Experiment

The virtual environment task was designed to be similar to the "box-and-blocks" dexterity task [30] that has been used to assess prosthesis control [31]. Thus, in addition to the virtual limb, the virtual environment contained a block (3 cm per side for nonamputee subjects, 6 cm per side for the amputee subject) that was free to translate and rotate in any direction, a fixed ground surface (50 cm below the shoulder joint), and a fixed barrier (2 cm wide and 8 cm tall). Two 2-D views of the virtual prosthesis and environment were presented on a screen to the subject (Fig. 4). The subjects' goal was to move each block from one side of the barrier to the other as quickly as possible. Trials began with an initial block placement at one of four positions and the virtual prosthesis at its initial position with zero velocity in the configuration shown (Fig. 4). A trial was successful if the subject dropped the block in the target on the opposite side of the barrier. Subjects were given 3 min to complete each trial. If the subject failed a trial, the trial was repeated.

The experiment was accomplished over two consecutive days. On the first day subjects became familiar with the task and practiced moving blocks. On the second day nonamputee subjects performed three sets of four-block tests, for a total of 12 blocks moved per subject. The shoulder disarticulation subject was allowed additional practice on the second day prior to completing four sets. All subjects were given at least 2 min of rest between each set.

Fig. 5. Sample joint kinematics (top) for the wrist, index finger torque (middle), and contact forces (bottom) for a representative successful trial. The block contact forces are shown in inertial coordinates. However, since the orientation of the block is also known, these could be converted to pinch and shear forces for use as real-time feedback.

F. Kinematic Analysis

Pinch force, index finger torque, joint positions, and the desired class decision stream were recorded to demonstrate the kinetic and kinematic data available in the fully "instrumented" virtual limb. The time to completion was calculated and defined as the time from the first movement of the limb to the time that the block impacted the ground surface on the opposite side of the barrier.

In order to determine whether there was a relationship between control of certain joints and time to completion, a linear regression was computed for the time to completion and the total distance traversed by each of the eight limb degrees-of-freedom.
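Each of these is an ordinary least-squares regression of completion time on a single joint's total excursion. A sketch of the per-joint r-squared computation follows (function and variable names hypothetical):

```python
import numpy as np
from scipy import stats

def joint_distance_r_squared(joint_distances, completion_times):
    """r-squared of a simple linear regression of time to completion on
    the total angular distance traversed by one degree-of-freedom.
    Both arguments are per-trial arrays of equal length."""
    result = stats.linregress(joint_distances, completion_times)
    return result.rvalue ** 2
```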

III. RESULTS

All subjects were able to complete the task of transferring a block from one side of the barrier to the other within the dynamic virtual environment. Sample wrist joint kinematics, index finger joint torque, and index finger to box contact forces for a single trial are shown in Fig. 5. During the transport phase for this trial the average magnitude of the contact forces (grip force) was approximately 70 N. The subjects had no grip force feedback, and across all trials the average grip force for nonamputee subjects was 95 N. By comparison, the average maximum pinch force for humans is approximately 60 N [32]. The performance, measured by time to completion, of the

nonamputee subjects improved by 29% (38.5 ± 7.9 s to 27.3 ± 9.3 s) from the first to the third set on the second day (Fig. 6). The shoulder disarticulation subject had an average time to completion of 54.9 s in the first set and 37.8 s in the fourth set, an improvement of 31%.

Fig. 6. Average times to completion for nonamputees for the three sets are shown in the shaded region at the left of the figure. Time to completion was averaged across initial box position and subject. Standard deviations are also indicated. For the shoulder disarticulation subject each trial (including practice trials) of the experiment is shown (white background area). Each black dot represents a single trial and the trials are equally spaced along the x-axis. Missing dots and gaps in the solid black line represent a failed trial in which the block had to be reset to the initial position before continuing.

Fig. 7. Distribution of the desired class signal (the no movement class was excluded) for the final set of nonamputee trials.

The strategy subjects used to complete the task can, in part, be assessed by the distribution of classes used. On average, the most used class of the third set for nonamputee subjects was Endpoint In, which was selected more than 30% of the time (Fig. 7). The wrist classes were selected less often (<3% each for Forearm Supination, Forearm Pronation, Wrist Extension, and Wrist Flexion) than hand classes (∼13% for Hand Open and Hand Close) or endpoint classes (all >4%) (Fig. 7).

The r-squared values for the linear regression between the total distance traversed by each joint and time to completion are shown in Table II. The elbow flexion/extension distance had the strongest correlation with time to completion and the wrist flexion/extension distance had the weakest.

IV. DISCUSSION

This work provides a 1) description and 2) demonstration of an impulsive virtual environment with an eight degree-of-freedom prosthesis controlled entirely through EMG pattern recognition. Four nonamputee subjects and a TMR shoulder disarticulation subject performed a virtual box-and-blocks test, transferring each block in approximately 30 s after 2–3 h of practice with the environment.

TABLE II
R-SQUARED VALUES FOR LINEAR REGRESSION OF JOINT DISTANCES WITH TIME TO COMPLETION

To gain perspective and place this work in the context of the state of the art of physical upper limb prostheses and control, we can compare these results to the work of Miller et al. [31]. They recruited three shoulder disarticulation patients to perform the box-and-blocks test with their take-home myoelectric prosthesis. These subjects completed the task with an average of 24 s per block pre-TMR intervention and 7.5 s per block post-TMR intervention [31]. There are several key differences which may explain most of the large discrepancy between the time to completion with a physical prosthesis (7.5 s) and that for the virtual environment (28 s). First, the subjects in the Miller study controlled endpoint location of the physical prosthesis with only two myoelectric control classes (Elbow Flexion, Elbow Extension) and movement of the trunk. This is dramatically simpler than the six control classes which took approximately 68% of the movement time (sum of EI, EO, EF, EB, EU, ED in Fig. 7) in the virtual system. In addition, the orientation of the hand was controlled with only one control class in the Miller study (Unidirectional Wrist Rotation) compared with the four classes in the virtual system. A third potential reason for the discrepancy is that it is more difficult to visualize and plan movements in the virtual environment due, in part, to the two planar projections (Fig. 4) presented to the user. Testing whether any of these are the true reason for the longer

time to completion in the virtual environment would require an assessment with a physical prosthesis more closely matched to the virtual system. The true value of the virtual prosthesis will be determined when it is used to make specific predictions about prosthetic control strategies and these predictions are tested with a physical prosthesis. A key advantage of virtual environments is that they can be quickly modified. This experiment establishes a baseline for accomplishing the interactive box-and-blocks task with a virtual system. Future modifications to the virtual prosthesis and control strategy can be compared with these results to assess their efficacy. Future modifications to the task can be developed to broaden the applicability of the virtual environment to the real environment.

To make good predictions with the virtual prosthesis it is important that the physics of the virtual system match as nearly as possible the physics of the world. There are several ways in which the system does not correctly mimic the real physical environment. Two of these have already been discussed. First, the presentation of the environment as two two-dimensional projections places an unrealistic cognitive burden on the users. In the future the system will be presented to the user in three dimensions. Second, motion of the body does not affect motion of the virtual prosthesis. In the future we will incorporate motion capture (e.g., the Flock of Birds system used by Davoodi et al. [17]) to allow motion of the body or residual limb to drive motion of the base of the virtual prosthesis. This is especially important if transhumeral and transradial virtual prostheses are to be constructed. A third difference between a real physical environment and the virtual environment is that the full effect of inertial and interaction forces is not correctly accounted for. The effects of these forces on the dynamics of the virtual limb are accounted for, but the user does not directly perceive those forces. The lack of proprioceptive feedback from the socket of the prosthesis to the user may affect how EMG patterns are generated. Since the interaction forces between the prosthesis and a virtual user can be calculated in real time, it is feasible that future implementations could use a haptic device to apply those forces to the residual limb or body of the user.

Even though the current work does not provide specific predictions favoring a particular control strategy, there are several conclusions to be drawn from the results of the experiment. It can be concluded that once a certain level of proficiency is obtained, the box-and-blocks test will not be a good test for practicing wrist movements. By the third set nonamputee subjects rarely used wrist classes (Wrist Extension, Wrist Flexion, Forearm Pronation, Forearm Supination: <3% each on average, Fig. 7). This is due to the design of the task and the design of the endpoint control portion of the control strategy. The endpoint controller attempts to move the limb in the direction specified by the desired endpoint class while maintaining the hand orientation. If the hand orientation is set to pick up the block and then the hand is moved with an endpoint class, then the hand will stay in the appropriate orientation. Thus the endpoint controller requires few adjustments at the wrist. The virtual task can be modified to demand finer control of hand orientation. For example, a particularly shaped peg to be fit into a similarly shaped hole at various angles would presumably require finer control of hand orientation.

Modifications in the class speed and classifier could reduce time to completion. As the subjects became comfortable with the arm, some expressed a desire for faster velocities of the endpoint in particular. Moreover, the distribution of the desired class signals in the third set (Fig. 7) was very similar across subjects and block positions, as evidenced by the relatively small standard deviations. This suggests that the subjects may have been approaching the minimum number of class decisions required to complete the task. If this is the case, the time to completion could only be reduced by allowing multiple classes to be selected simultaneously (i.e., using parallel classifiers [33]) or by increasing the velocity of the limb. The maximum activation parameters $a_{\max,i}$ for each class are essentially maximum velocities and could be tuned quickly in the virtual environment.

Compared with strictly kinematic virtual environments (where only motion is meaningful), dynamic environments (where force and mass are meaningful) give greater insight into how to engineer the controller in a real device. With kinematically controlled virtual prostheses the command signal is often joint velocity. In the system presented here the command signal was first interpreted as a desired velocity and position, and then that desired motion had to be translated to a joint torque. This is necessary to achieve compliance (a much desired characteristic in any robotic device interacting with humans) in a prosthesis and is not possible in a kinematic environment. In our system, compliance at the joints was achieved with the virtual spring stiffness $k_v$. This compliance resulted in grip forces (Fig. 5) on the order of that reported for humans [32]. Since for all classes besides the Hand Open/Hand Close classes the desired position was set to the current position, all joints besides the index finger were extremely compliant. Finally, the inertial properties of the limb can be included in a dynamic virtual environment but are ignored in a kinematic virtual environment.

Since the dynamic data used internally in the Neuromechanic physics engine are available through an API, the environment could also be useful for testing real-time feedback strategies. For example, the pinch force (Fig. 5) could be redirected to a haptic or visual interface. In addition, the limb is essentially fully instrumented, which could assist in understanding how people use the prosthesis and how to improve controller or prosthetic design. For example, here it was found that the time to completion was more strongly correlated with control of proximal joints (the endpoint position control components, Table II) than with distal joints (the endpoint orientation control components). This suggests that control of endpoint position was more difficult than control of endpoint orientation. This could be due to the controller, the visualization, or the box-and-blocks task.

While the virtual environment was used here to assess a mixed joint/endpoint prosthetic control strategy, it could be used in a number of active areas of research in neural machine interfacing. Many aspects of the virtual prosthesis are configurable in the XML specification file, including arm dimensions, joints and grip patterns, the task, control algorithm, and actuation (e.g., motors or muscles). The control signal source can be modified (e.g., intra-cortical, EEG, or kinematic sources) and real-time feedback introduced (e.g., tactile or proprioceptive feedback) in a separate program which shares memory resources with the Neuromechanic physics engine.

ACKNOWLEDGMENT

The author would like to thank T. Kuiken for the use of laboratory space and valuable mentoring. The author would also like to thank A. Simon for assistance with experiments and for editing the manuscript.

REFERENCES

[1] E. Biddiss and T. Chau, "Upper-limb prosthetics—Critical factors in device abandonment," Am. J. Phys. Med. Rehabil., vol. 86, pp. 977–987, Dec. 2007.
[2] D. M. Taylor et al., "Information conveyed through brain-control: Cursor versus robot," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 11, no. 2, pp. 195–199, Jun. 2003.
[3] J. M. Carmena et al., "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biol., vol. 1, p. E42, Nov. 2003.
[4] R. Davoodi and G. E. Loeb, "Real-time animation software for customized training to use motor prosthetic systems," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 20, pp. 134–142, Mar. 2012.
[5] A. M. Simon et al., "Target achievement control test: Evaluating real-time myoelectric pattern-recognition control of multifunctional upper-limb prostheses," J. Rehabil. Res. Develop., vol. 48, pp. 619–627, 2011.
[6] L. Hargrove et al., "A real-time pattern recognition based myoelectric control usability study implemented in a virtual environment," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2007, pp. 4842–4845.
[7] H. Bouwsema et al., "Learning to control opening and closing a myoelectric hand," Arch. Phys. Med. Rehabil., vol. 91, pp. 1442–1446, Sep. 2010.
[8] L. Eriksson et al., "Neural control of a virtual prosthesis," in Perspectives in Neural Computing. New York: Springer-Verlag, 1998.
[9] A. C. Dupont and E. L. Morin, "A myoelectric control evaluation and trainer system," IEEE Trans. Rehabil. Eng., vol. 2, no. 2, pp. 100–107, Sep. 1994.
[10] E. K. Chadwick et al., "Continuous neuronal ensemble control of simulated arm reaching by a human with tetraplegia," J. Neural Eng., vol. 8, Jun. 2011.
[11] M. S. Lambrecht et al., "Virtual reality environment for simulating tasks with a myoelectric prosthesis: An assessment and training tool," J. Prosthet. Orthot., vol. 23, pp. 89–94, 2011.
[12] D. Nishikawa et al., "On-line learning method for EMG prosthetic hand control," Electron. Commun. Japan Part III: Fundam. Electron. Sci., vol. 84, pp. 35–46, 2001.
[13] T. A. Kuiken et al., "Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms," J. Am. Med. Assoc., vol. 301, pp. 619–628, Feb. 2009.
[14] M. Hauschild et al., "A virtual reality environment for designing and fitting neural prosthetic limbs," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 15, no. 1, pp. 9–15, Mar. 2007.
[15] E. K. Chadwick et al., "A real-time, 3-D musculoskeletal model for dynamic simulation of arm movements," IEEE Trans. Biomed. Eng., vol. 56, pp. 941–948, Apr. 2009.
[16] W. Bishop et al., "A real-time virtual integration environment for the design and development of neural prosthetic systems," in Proc. 30th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2008, pp. 615–619.
[17] R. Davoodi et al., "Model-based development of neural prostheses for movement," IEEE Trans. Biomed. Eng., vol. 54, no. 11, pp. 1909–1918, Nov. 2007.
[18] W. Bishop et al., "The use of a virtual integration environment for the real-time implementation of neural decode algorithms," in Proc. 30th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2008, pp. 628–633.
[19] R. Davoodi and G. E. Loeb, "Development of a physics-based target shooting game to train amputee users of multijoint upper limb prostheses," Presence: Teleoperators Virtual Environ., vol. 21, pp. 85–95, 2012.
[20] V. Aggarwal et al., "Cortical control of reach and grasp kinematics in a virtual environment using musculoskeletal modeling software," in Proc. 5th Int. IEEE/EMBS Conf. Neural Eng., 2011, pp. 388–391.
[21] N. E. Bunderson et al., "Neuromechanic: A computational platform for simulation and analysis of the neural control of movement," Int. J. Numer. Methods Biomed. Eng., vol. 28, pp. 1015–1027, Oct. 2012.
[22] B. Hudgins et al., "A new strategy for multifunction myoelectric control," IEEE Trans. Biomed. Eng., vol. 40, no. 1, pp. 82–94, Jan. 1993.
[23] R. O. Duda et al., Pattern Classification, 2nd ed. New York: Wiley, 2001.
[24] L. Hargrove et al., "A comparison of surface and intramuscular myoelectric signal classification," IEEE Trans. Biomed. Eng., vol. 54, no. 5, pp. 847–853, May 2007.
[25] A. M. Simon et al., "A decision-based velocity ramp for minimizing the effect of misclassifications during real-time pattern recognition control," IEEE Trans. Biomed. Eng., vol. 58, no. 8, pp. 2360–2368, Aug. 2011.
[26] G. A. Dumanian et al., "Targeted reinnervation for transhumeral amputees: Current surgical technique and update on results," Plast. Reconstr. Surg., vol. 124, pp. 863–869, Sep. 2009.
[27] T. A. Kuiken et al., "The use of targeted muscle reinnervation for improved myoelectric prosthesis control in a bilateral shoulder disarticulation amputee," Prosthet. Orthot. Int., vol. 28, pp. 245–253, Dec. 2004.
[28] P. A. Parker and R. N. Scott, "Myoelectric control of prostheses," Crit. Rev. Biomed. Eng., vol. 13, pp. 283–310, 1986.
[29] N. E. Bunderson and T. A. Kuiken, "Quantification of feature space changes with experience during electromyogram pattern recognition control," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 20, no. 3, pp. 239–246, May 2012.
[30] V. Mathiowetz et al., "Adult norms for the box and block test of manual dexterity," Am. J. Occup. Ther., vol. 39, pp. 386–391, Jun. 1985.
[31] L. A. Miller et al., "Improved myoelectric prosthesis control using targeted reinnervation surgery: A case series," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 16, no. 1, pp. 46–50, Feb. 2008.
[32] B. D. Lowe and A. Freivalds, "Effect of carpal tunnel syndrome on grip force coordination on hand tools," Ergonomics, vol. 42, pp. 550–564, Apr. 1999.
[33] J. J. Baker et al., "Continuous detection and decoding of dexterous finger flexions with implantable myoelectric sensors," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 18, no. 4, pp. 424–432, Aug. 2010.

Nathan E. Bunderson received the B.S. and M.S. degrees in mechanical engineering from Utah State University, Logan, UT, USA, in 2004, and the Ph.D. degree in bioengineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2008.

He received postdoctoral training in the Neural Engineering Center for Artificial Limbs from 2008 to 2010 and is currently a member of the Neurophysiology Laboratory of the School of Applied Physiology at the Georgia Institute of Technology, Atlanta, GA, USA.