

WEARABLE COMPUTING

Gestures as Input: Neuroelectric Joysticks and Keyboards

Kevin R. Wheeler and Charles C. Jorgensen, NASA Ames Research Center

IEEE Pervasive Computing, January–March 2003. © 2003 IEEE.

EMG technology helps capture gestures as input for virtual joysticks and keyboards and thus could lead to new applications in flight control, space, and the video game industry.

Today’s laptops and PDAs have more expansive screen sizes and decreasing physical dimensions, all the while becoming increasingly computational. Computers are now small enough to be worn, and—when combined with display goggles—provide a truly mobile computing experience. Their smaller size has opened an opportunity to explore new methods for inputting data to wearable devices (see the “Input Methods for Wearable Devices” sidebar).

We’ve developed an approach at the Neuroengineering lab at NASA for designing and using neuroelectric interfaces for controlling virtual devices. This approach uses hand gestures to interface with a computer rather than relying on mechanical devices such as joysticks and keyboards. Our system noninvasively senses electromyogram (EMG) signals from the muscles used to perform these gestures. It then interprets and translates these signals into useful computer commands. We’ve chosen to demonstrate virtual joysticks and keyboards because the average computer user has had some experience with them.

We’re certainly not the first group to suggest that gestures can be used as a form of computer input. Most gesture-recognition systems receive input in one of two forms:

• Through an external camera that requires sophisticated image processing and controlled lighting (such as Intel’s Open Source Computer Vision Library system)
• Through a sensing glove on the participant’s hand (such as Fifth Dimension Technologies’ Data Glove)

Both techniques have fundamental difficulties that we want to avoid. We need to recognize gestures in poor lighting conditions in extreme environments outside of the lab without encumbering our hands or using invasive procedures. Our hands must be free so we can use them for tasks such as grasping handholds and operating equipment other than the computer.

Sidebar: Input Methods for Wearable Devices

Commercially available input methods for wearable devices include a screen for drawing graffiti (such as the Palm OS), very small keyboards (such as RIM’s BlackBerry), alternative keyboards (such as Handykey Corporation’s Twiddler), and voice-recognition software (such as IBM’s ViaVoice). Other alternatives allow touch typing through light projection (such as Virtual Devices’ VKey, http://virtualdevices.net) or by wearing a motion-sensing device on the palms (such as Senseboard Technologies’ Virtual Keyboard, http://senseboard.com). However, the QWERTY keyboard remains the standard data-entry device. Voice-recognition software can facilitate some data entry, but activities such as programming, numeric entry, and scientific tasks require keyboard entry.

Each technology comes with problems. With interfaces that require memorizing a new syntax (such as graffiti), you might forget the syntax of a rarely used but necessary symbol. You also might have difficulty forming the graffiti strokes rapidly and consistently. Alternative keyboards remove the need to memorize a new writing style but often force users to hold their hands and manipulate their fingers in stressful ways that cause cramping.

Most work that we’ve seen using EMG electrodes to interface with a system has come out of the prosthesis community. A recent PhD thesis on this topic1 discusses many of the same issues that we’ve experienced while trying to use EMG technology in a computer interface. We haven’t seen a prosthesis using noninvasive technology that lets users type as if using a computer keyboard.

REFERENCE

1. N. Daisuke, “Studies on Electromyogram to Motion Classifier,” doctoral dissertation, Hokkaido Univ., Autonomous Systems Eng. Laboratory, 2001.

Have virtual joystick, will travel

Because our laboratory has done quite a bit of flight control work, we could readily access several high-fidelity flight simulators. This let us test a virtual joystick’s efficacy with simulators having a fidelity (class IV) sufficient for pilot training. We first had to decide how to sense the EMG signals. Several standard methods for noninvasive sensing exist. The typical approach uses medical-grade wet electrodes that stick to the skin. Although these give a very good signal-to-noise ratio, they aren’t appealing for a regular, long-term user interface. Another approach employs dry metal electrodes. For the joystick work, we fabricated a dry electrode sleeve as shown in Figure 1. We outfitted this sleeve with many different electrode positions to facilitate research, using only four pairs of electrodes.

Figure 1. Dry electrode sleeve for flying a modified F15 aircraft.

EMG electrodes work by detecting skin currents resulting from an action potential that travels in a muscle fiber from the innervation point to the end of the muscle. The electrodes must have a low-impedance connection with the skin, which can be disrupted by hair and dry skin. An emerging technology might change this finicky sensing by using new noncontact electrodes that don’t require a transfer of current (we’ll describe them later).

Methodology

Our methodology consisted of the following steps (a minimal code sketch of the pipeline appears after the list):

1. Select gestures
2. Apply electrodes (location and number)
3. Acquire signals
4. Filter and digitize data
5. Form features
6. Perform pattern-recognition model training and testing
7. Apply pattern recognition in interactive simulation
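To make the flow concrete, here is a minimal sketch of steps 4 through 7 in Python. It is not the authors’ code; all helper names (antialias_lowpass, segment_around_peak, emg_features, train_models, classify) are illustrative and are sketched later in this article.

```python
# A hedged sketch of the processing pipeline, assuming the helper
# functions sketched later in this article (illustrative names).

def build_gesture_recognizer(recordings_by_gesture):
    """recordings_by_gesture maps a gesture name to a list of raw
    multichannel EMG recordings of that gesture (steps 1-3)."""
    features_by_gesture = {
        gesture: [emg_features(                     # step 5: form features
                      segment_around_peak(
                          antialias_lowpass(rec)))  # step 4: filter
                  for rec in recordings]
        for gesture, recordings in recordings_by_gesture.items()
    }
    return train_models(features_by_gesture)        # step 6: train the models

def recognize(models, raw_segment):
    """Step 7: label one incoming raw EMG segment."""
    return classify(models, emg_features(antialias_lowpass(raw_segment)))
```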

Gesture selection. We used four basic coarse-grained gestures to mimic manipulating a joystick:1 up, down, left, and right, with varying degrees of force. We simplified gesture recognition by using only four gestures and four pairs of electrodes to provide reasonable separation between gestures.

Electrode application. We placed the electrodes in relationship to the gestures we wished to recognize according to individual physiological differences. We sewed the dry electrodes into a sleeve as Figure 1 shows. This sleeve helped reduce variation in electrode placement. Individual differences in personal physiology proved challenging. Differences in arm lengths and widths made it difficult to place the electrodes at proper positions across people without considerable effort. In addition, EMG signal strengths varied across people and with the amount of training they received.

Signal acquisition, filtering, and digitization. We followed standard operating procedures for obtaining EMG data by collecting it through differential pair electrodes (one pair is one EMG channel) with a preamplifier located near each electrode pair. We typically spaced the electrode pairs 2 to 2.5 centimeters apart. We used a Bortec Octopus unit to sample the EMG signal at 2,000 Hz and amplify it by a factor of approximately 2,000. We then used a 16-bit data-acquisition card to digitize it and subsequently processed it on an Intel Pentium-based PC. The PC first digitally filtered the signal (using antialiasing) and then transformed and recognized the patterns. It then transmitted the patterns to other machines for simulation display and control.
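The article doesn’t give the antialiasing filter’s design, so the following Python sketch is only a plausible reconstruction using SciPy; the 500-Hz cutoff and fourth-order Butterworth filter are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # sampling rate in Hz, from the paper

def antialias_lowpass(x, cutoff_hz=500.0, order=4):
    """Zero-phase low-pass filter over a (n_samples, n_channels) array.

    The cutoff and filter order are illustrative assumptions; the paper
    only says the PC "digitally filtered (using antialiasing)".
    """
    b, a = butter(order, cutoff_hz / (FS / 2.0), btype="low")
    return filtfilt(b, a, x, axis=0)
```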

Feature formation. This step separates the signals enough to let the pattern-recognition module distinguish between gestures. This transformation also creates a space smooth enough to be reliably modeled. We tried many common methods such as short-time Fourier transform, wavelets, moving averages, and autoregression coefficients. In the end, the simplest feature space seemed the best: overlapping moving averages of the rectified EMG.
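This feature is a few lines of NumPy: rectify the signal, then average it over overlapping windows. The window and hop sizes below are illustrative, since the article doesn’t state them.

```python
import numpy as np

def emg_features(x, win=128, hop=64):
    """Overlapping moving averages of the rectified EMG.

    x: (n_samples, n_channels) filtered EMG segment.
    Returns a (n_frames, n_channels) array: the mean absolute value per
    channel in each half-overlapping window. The window and hop lengths
    are assumptions, not the paper's values.
    """
    rectified = np.abs(x)
    frames = [rectified[i:i + win].mean(axis=0)
              for i in range(0, len(rectified) - win + 1, hop)]
    return np.asarray(frames)
```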

Pattern recognition. We chose a hidden Markov model (HMM) that the speech-recognition community developed to solve their pattern-recognition time-series problem.2 The history of speech recognition reveals a process that first attempted to recognize isolated words from a single speaker, then isolated words from multiple speakers, followed by continuous words from a single speaker, and finally, continuous words from multiple speakers. We followed a similar approach, developing isolated gesture recognition for both a single participant and multiple participants. (The work we describe here concerns continuous recognition for the joystick study and isolated recognition for a single typist in the keyboard study.)

We tried to minimize contributions to data variation. For example, electrode placement that drifts from day to day can vary signal statistics. Using a fixed electrode sleeve can help. You probably can’t remove day-to-day variations related to natural behavior; in fact, we would benefit from modeling them. For example, the way people gesture can vary slightly from day to day even though they intend to perform the gestures identically. In this case, we need enough data to represent the multimodal statistics as well as a way to adapt the system model daily. Our methodology doesn’t vary adaptively yet. Our best remedy is recognizing when day-to-day variation is too great for adequate model generalization. We can then train on less data, using only data similar to our current day’s setup (that is, electrode locations). This method succeeded in the technology’s initial demonstrations.

The HMMs we used were continuous, tied-mixture,3 left-to-right models. We also used standard Baum-Welch training.2 The models are called continuous when they use inputs that can take on a range of floating-point values; the alternative is to allow only the discrete values produced when quantization has transformed the input. Tied-mixture means that a fixed number of Gaussian mixtures are used throughout all states; thus, any state can use any mixture. In a left-to-right model, the HMM doesn’t revert to a previous state but remains in the current one or goes on to another.

The joystick work used nine discrete states with 27 total mixtures. We performed model initialization using k-means clustering, partitioning the states to equalize the amount of variance within each state. We segmented the training data sets to ensure that the variance’s peak was near the middle of each segment. This centered the bulk of the energy. We sampled segments at 2,000 Hz that contained 3,072 samples per channel, with a maximum of eight channels. We typically varied the HMM parameters: the number of discrete states, Gaussian mixtures, and maximum iterations to train; the method used to arrive at the state partitioning (uniform versus variance based); and the method used to initialize the mixtures’ parameters (for example, k-means clustering). We performed the real-time recall using the standard Viterbi algorithm.4
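As a rough illustration of such a model, here is how a left-to-right mixture HMM could be set up with the open-source hmmlearn package. This is not the authors’ code, and one substitution is worth naming plainly: hmmlearn’s GMMHMM gives each state its own mixture components rather than tying one pool of Gaussians across all states. With nine states and three mixtures per state it still totals 27 mixtures, and hmmlearn initializes the means with k-means clustering, as the article describes; the iteration count and covariance type below are assumptions.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM  # assumption: hmmlearn, not the authors' code

def make_left_to_right_hmm(n_states=9, n_mix=3):
    # Omit 's' (startprob) and 't' (transmat) from init_params so the
    # left-to-right structure set below survives fitting; Baum-Welch
    # re-estimation preserves the zero entries.
    model = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=20,
                   init_params="mcw", params="stmcw")
    model.startprob_ = np.zeros(n_states)
    model.startprob_[0] = 1.0  # always start in the leftmost state
    transmat = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        transmat[i, i] = transmat[i, i + 1] = 0.5  # stay or advance
    transmat[-1, -1] = 1.0  # final state absorbs
    model.transmat_ = transmat
    return model
```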

Experiments

After we fitted the four pairs of dry electrodes within a sleeve attached to the participant’s forearm, we asked the participant to pretend to move a joystick left, right, up, and down. The participant performed each gesture 50 times per day. We collected the data over several days. The sleeve position and skin condition naturally varied with each day. We separated the data by gesture and segmented it to have peaks in the center of 3,072-sample segments. We removed artifacts or incomplete gestures from the data sets through manual inspection. We then used these sets to train four HMMs, one for each gesture. These trained models then recognized gestures made on a day excluded from the training set, using the same dry electrode sleeve. We used the recognized joystick gestures to perform numerous real-time demonstrations of flying a simulated 757 transport aircraft to landing.1 We implemented a more continuous gesture-recognition process by decreasing the segment size and integrating the EMG energy as a relative force measure. Most consumers are familiar with joysticks that translate movement into control (such as pitch and bank-rate control for aircraft). However, some force-oriented commercial command sticks move very little but translate the amount of force into rate control. We combined these two ideas by recognizing the acting pilot’s EMG as a gesture with an associated force.

To test online pattern recognition, we trained on a previously acquired day of data. We then used these trained models in our real-time simulation for flying an aircraft.
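Under the same assumptions as the earlier sketches, training one HMM per gesture and recognizing a segment by maximum likelihood might look as follows. hmmlearn’s score computes a sequence’s log-likelihood via the forward algorithm; its decode method (not shown) performs the Viterbi recall mentioned above.

```python
import numpy as np

def train_models(features_by_gesture):
    """One left-to-right HMM per gesture, fit on that gesture's
    feature sequences (a list of (n_frames, n_channels) arrays)."""
    models = {}
    for gesture, feats in features_by_gesture.items():
        model = make_left_to_right_hmm()  # sketched earlier (illustrative)
        model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
        models[gesture] = model
    return models

def classify(models, feats):
    """Label one feature sequence with the highest-scoring gesture model."""
    return max(models, key=lambda g: models[g].score(feats))
```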

Results

The live-demonstration performance didn’t equal that of batch data sets, which are usually collected under static conditions. Our live demonstrations involved imperfect electrode placement and high stress, where a participant is bombarded with questions and distractions.

Error rates determined from batch test sets don’t necessarily indicate real-time performance. In particular, error rates varied across time depending on many factors such as sleeve position (rotation), sweating, dry skin, how long the electrodes were worn, and fatigue (resulting in tremors). By training on only one day’s data (with its associated sleeve position and skin condition), we could use the dry electrode sleeve for demonstrations on many different days. We selected the day that gave the best real-time reliability. The recognition was sufficiently accurate to perform public demonstrations of flying a 757 transport aircraft to landing at a simulation of San Francisco Airport. Accuracy was usually in the 90 percent range, depending on the pilot’s mood and the dry electrodes’ condition. The input command rate didn’t differ quantifiably from a standard joystick’s, but a hardware joystick’s resolution and precision are finer than the EMG-based system’s. After participants use our system for a couple of hours, they can reliably maneuver an aircraft with our virtual stick technology. The aircraft’s response to gesture commands also becomes a form of biofeedback the participants can use to modify their behavior. The biggest system limitation was the sensing technology’s unreliability, which is currently being resolved (as described later).

Typing on your knee

We wanted to extend the joystick’s coarse-grained gestures to the finer-grained motor control necessary for typing. This proved considerably more difficult. To ease the task, we typed on a numeric keypad like the one on workstation keyboards. We abandoned the dry electrode sleeve for wet electrodes to improve reliability and the signal-to-noise ratio. Figure 2 shows how the eight pairs of electrodes were typically placed. The electrode positions varied somewhat because we had to apply the electrodes each morning. This meant the HMMs had to adapt each day to account for position differences. Nonetheless, we demonstrated typing recognition using eight channels of electrodes with 11 HMMs. We trained each HMM for a particular keystroke, in this case “0” through “9” and “Enter.” Of course, this typing demonstration only works with touch-typing skills, which raises an open question about how much feedback is necessary for users to successfully navigate new interfaces.

Figure 2. Wet electrodes in two rings about the forearm of a typing participant. Four pairs of standard medical wet EMG electrodes circle the wrist.

We performed the typing-recognition task because people are very familiar with it. Ultimately, as wearable computers evolve, we’ll discover interfaces more natural than the QWERTY keyboard, mechanical joysticks, and mice. Interfaces will evolve to take advantage of gesture-recognition capabilities. This resembles the technology evolution that occurred when the computer mouse was introduced to dumb terminals and workstations: operating systems’ front ends evolved to take advantage of mouse-based cursor control through graphical windows and icons. Along with the ability to perform device input without a mechanical interface come numerous applications that could help reduce the size of many common devices such as cellular phones and PDAs.

Methodology

We used the same methodology employed for the joystick work except when otherwise noted.

Experiments

The typing experiment used one ring near the wrist and a second near the elbow. The participant touch-typed on a printed picture of a numeric keypad, striking the keys “0” through “9” and “Enter.” The participant typed these in order, separated by a one-second rest interval, for a total of 40 strokes on each key. We segmented this data and manually removed the artifacts. We collected data on several different days. We trained 11 HMMs, one for each gesture, each consisting of six states with 18 mixtures.
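The article specifies 3,072-sample segments with the energy peak near the center but not how the segments were cut; the following is one plausible way to do it, with an assumed smoothing window.

```python
import numpy as np

SEG_LEN = 3072  # samples per channel per segment, from the paper

def segment_around_peak(x, smooth=256):
    """Cut one SEG_LEN-sample window whose energy peak sits near the center.

    x: (n_samples, n_channels) recording containing one keystroke or
    gesture; assumed longer than SEG_LEN. The smoothing window length
    is an illustrative choice.
    """
    energy = np.convolve(np.abs(x).sum(axis=1),
                         np.ones(smooth) / smooth, mode="same")
    center = int(np.argmax(energy))
    start = int(np.clip(center - SEG_LEN // 2, 0, len(x) - SEG_LEN))
    return x[start:start + SEG_LEN]
```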

Results

The participant maintained a touch-typist hand position above the simulated number pad. If the hand position varied, distinguishing between hitting the top row of keys and the bottom row became greatly more difficult, requiring electrodes on the upper arm to sense the movement. The participant also had to be careful to maintain the wrist angle to avoid radically changing the sampled signals. Even with careful attention to position and maintenance of electrode placement from day to day, the data tended to vary. However, being able to detect a severe wrist angle that might lead to carpal tunnel injury was a positive outcome. The gesture interface itself can perform ergonomic assessment and advise the user on better posture.

Multitrial acquisition and testing. The keyboard-replication experiments had greater daily variation in electrode placement than the joystick experiments. We also had reliability difficulties because the participants didn’t maintain a consistent hand position from trial to trial. This included ensuring similar wrist angles and that the hand was consistently either not resting on the table during motion or partly supported by the table (bad form, but consistent bad form). Given all these difficulties, the resulting confusion matrix shown in Table 1 for multiple trials looks pretty good. A perfect result would list the number of keystrokes only along the diagonal. When the keystroke for “1” occurred, the system correctly recognized “1” 46 out of 51 times, then incorrectly recognized it as “4” four times and as “7” once. This resulted in correct classification 90.2 percent of the time. The data variability caused our models to generalize gestures, which caused more confusion. For live demonstrations, we needed to train on the same day that we were using the system and thus employed only one day’s data. However, the training took only a few seconds.

Table 1. Multitrial confusion matrix for typing data.

Keypad    Recognized keystrokes for keypads 1–9                Correct (%)
number      1     2     3     4     5     6     7     8     9
1          46     0     0     4     0     0     1     0     0     90.2
2           0    48     0     0     0     0     0     3     0     94.1
3           0     0    49     0     0     1     1     0     0     96.1
4          11     0     0    38     2     0     0     0     0     74.5
5           1     3     0     5    36     1     3     2     0     70.6
6           0     1     6     0     0    42     0     0     2     82.4
7           0     0     0     0     0     0    51     0     0    100.0
8           0     0     0     0     2     1     3    44     1     86.3
9           0     0     0     0     0     0     0     0    51    100.0
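Each percentage in Table 1’s right-hand column is just the diagonal entry divided by its row sum (each key was struck 51 times in this test). A quick check in Python, with the table hard-coded:

```python
import numpy as np

# Rows: actual keys 1-9; columns: recognized keys 1-9 (from Table 1).
confusion = np.array([
    [46,  0,  0,  4,  0,  0,  1,  0,  0],
    [ 0, 48,  0,  0,  0,  0,  0,  3,  0],
    [ 0,  0, 49,  0,  0,  1,  1,  0,  0],
    [11,  0,  0, 38,  2,  0,  0,  0,  0],
    [ 1,  3,  0,  5, 36,  1,  3,  2,  0],
    [ 0,  1,  6,  0,  0, 42,  0,  0,  2],
    [ 0,  0,  0,  0,  0,  0, 51,  0,  0],
    [ 0,  0,  0,  0,  2,  1,  3, 44,  1],
    [ 0,  0,  0,  0,  0,  0,  0,  0, 51],
])
per_class = 100.0 * confusion.diagonal() / confusion.sum(axis=1)
print(per_class.round(1))  # [ 90.2  94.1  96.1  74.5  70.6  82.4 100.  86.3 100. ]
```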

System usage testing. We haven’t used the virtual keypad very much for data entry. In our trials, it’s been convenient to type anywhere, including on one participant’s lap. Three major drawbacks delay the use of EMG-based typing as a regular interface. The first, as mentioned earlier, is the sensing technology’s reliability and ease of use. The second is processing time. In our implementation on a 400-MHz Pentium PC, the recognition delays are on the order of 700 milliseconds. However, because PCs are now at least six times faster and the technology continues to become more efficient, we believe this is a short-term problem. The final problem is the missing touch feedback that lets a typist recognize a keystroke’s completion as the key stops moving at the bottom of the stroke. Missing this feedback is a direct consequence of replicating a keyboard action rather than designing a more natural gesture interface. Keyboards inherently give users touch feedback that lets them modify system usage. However, as the system migrates to more natural gestures, this touch feedback won’t be necessary. For example, the science fiction movie Minority Report showed a glove-based optical system that used natural gestures to manipulate video streams. The operator employed distinct gestures for rewinding, fast-forwarding, copying, pasting, and freezing video frames. We’re exploring more natural gestures as an interface to a data-exploration environment we’re developing for manipulating data views. Suitable software was unavailable for exploring these interfaces, which inspired us to develop our own system.

Challenges

Using wet electrodes caused unintentional misplacement that greatly degraded our recognition performance. Standard EMG dry electrodes incorporated into a sleeve alleviated this problem but then raised significant reliability issues in signal sensing. We’re working with Quantum Applied Science and Research to develop a dry electrode sleeve with a cloth layer between the electrode and skin to let clothing sense EMGs. Figure 3 shows an initial prototype sensor.

Figure 3. Noncontact sensor from Quantum Applied Science and Research Corp. The sensor is 3 × 2.5 centimeters.

A second enhancement is a model-correcting adaptation now common in the speech-recognition community. This adaptation lets models tune to small variations throughout the day and also to differences between the training models and the current day’s configuration. We’re also working on a calibration stage so that when a participant gestures to issue a certain command, the computer adapts to understand that these signals are that command. This frees us from requiring a participant to learn a fixed set of gestures. The person will perform a natural gesture to accomplish a given task, and the computer will simply map those signals to the correct action.

A less mainstream application we’ve been considering is an interface to a wearable robotic exoskeleton. DARPA has funded several projects to develop an exoskeleton that lets soldiers carry heavier loads over greater distances. The human-interface issues for such a system are quite extreme. It would be ideal if the exoskeleton’s limbs moved in correspondence with the user’s limbs. One approach is to use the very signals generating the muscle movement to control the exoskeleton.

A more ambitious idea for reconfigurable airplanes and other transportation machinery is a virtual wearable cockpit or command center. The US Air Force and other military branches increasingly use unmanned vehicles for surveillance missions. One way to control these systems from the field is a wearable cockpit. You could use a wearable computer with a wireless link and display goggles, and then employ EMG-based gestures to manipulate the switches and control sticks necessary for flight. Noncontact EMG sensors sewn into the field uniform could then sense movements as the acting pilot pretended to manipulate control inputs.

A space-based application could let astronauts type into a computer despite being restricted by a spacesuit. If a depressurization accident occurred on a long-term space mission and astronauts needed to access onboard computers, they could use EMG electrodes within their spacesuits to replicate a computer interface.

In human-centered solutions such as a gesture-based interface, the system customarily compensates for individual differences between users to produce a consistent pattern-recognition rate no matter who is using the system. In the case of security, however, you can take advantage of user differences to prevent unauthorized use. You could do this by monitoring EMG signals corresponding to typical computer command sequences. The EMG signals have different signatures depending on age, muscle development, motor unit paths, skin-fat layer, and gesture style. The external appearances of two people’s gestures might look identical, but the characteristic EMG signals are different.

In terms of fun applications, the video game industry constantly needs quick, flexible interfaces. New input devices such as the Xbox controller push the limits by increasing the number of physical buttons and sticks manipulated simultaneously. We think it is possible to map multiple muscle groups to different actions and so distribute this complexity across the body. This would require training for proficiency, but the net result would be a whole new gaming experience.

We’re now prototyping an EMG-based mouse and speaker-recognition system. The mouse can act as both a mouse and an aid that monitors and alerts users to potential ergonomic injuries. When someone is notified of a potentially harmful movement, the system can substitute a different movement that performs the same computer input, simply by changing the mapping of EMG signals to computer commands.

We’ve shown how to use EMG technology to replicate traditional joystick and keyboard interfaces. We don’t advocate this as the next great interface technology—only that more natural fine-grained gestures can be enlisted to manipulate computer function and cursor position. A video depicting this work is available online at http://ic.arc.nasa.gov/projects/ne/videos/NECD320x240.3.mov.

REFERENCES

1. C. Jorgensen, K. Wheeler, and S. Stepniewski, “Bioelectric Flight Control of a 757 Class High Fidelity Aircraft Simulator,” Proc. World Automation Congress, 3rd Int’l Symp. Intelligent Automation and Control, TSI Press, Albuquerque, New Mexico, 2000.

2. L.R. Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,” Proc. IEEE, vol. 77, no. 2, Feb. 1989, pp. 257–286.

3. J.R. Bellegarda and D. Nahamoo, “Tied Mixture Continuous Parameter Modeling for Speech Recognition,” IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 38, no. 12, Dec. 1990, pp. 2033–2045.

4. G.D. Forney Jr., “The Viterbi Algorithm,” Proc. IEEE, vol. 61, no. 3, Mar. 1973, pp. 268–278.


THE AUTHORS

Kevin R. Wheeler is group lead of the Extension of the Human Senses group in the Computational Sciences Division at NASA Ames Research Center. His research interests include pattern recognition in nonstationary time series, Bayesian inference, and nonlinear dynamics. Wheeler received a PhD in electrical engineering from the University of Cincinnati. He is a member of the IEEE. Reach him at [email protected].

Charles C. Jorgensen is chief scientist of the Neuroengineering laboratory in the Computational Sciences Division at NASA Ames Research Center. His research interests include neural network damage, adaptive control of high-performance air and spacecraft, EMG/EEG human-interface technologies, and spiking neuron computing architectures. Jorgensen received a PhD in mathematical psychology from the University of Colorado, Boulder. Reach him at [email protected].