
From Brain Signals to Adaptive Interfaces: using fNIRS in HCI

Audrey Girouard*, Erin Treacy Solovey*, Leanne M. Hirshfield*, Evan M. Peck*, Krysta Chauncey*, Angelo Sassaroli+, Sergio Fantini+ and Robert J. K. Jacob*

*Computer Science Department +Biomedical Engineering Department

Tufts University

Medford, MA 02155, USA

{ audrey.girouard, erin.solovey, leanne.hirshfield, evan.peck, krysta.chauncey,

angelo.sassaroli, sergio.fantini, robert.jacob }@tufts.edu

Abstract   Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive, lightweight imaging tool that can measure blood oxygenation levels in the brain. In this chapter, we describe the fNIRS device and its potential within the realm of human-computer interaction (HCI). We discuss research that explores the kinds of states that can be measured with fNIRS, and we describe initial research and prototypes that can use this objective, real-time information about users’ states as input to adaptive user interfaces.

Introduction

The field of brain-computer interfaces (BCI) is beginning to expand beyond its original clinical focus. While traditional BCIs were designed to function using the brain as the sole means of communication for disabled patients (Millán, Renkens, Mouriño, & Gerstner, 2004), current research investigates the brain as an additional input to the interface, designed for a broader range of users (Grimes, Tan, Hudson, Shenoy, & Rao, 2008). Neural signals can act as a complementary source of information when combined with conventional computer inputs such as the mouse or the keyboard. We present work in this chapter that illustrates this direction in BCI and shows how to move from controlled experiments exploring task-specific brain activity to a more general framework using mental activity to guide interface response. Our work, grounded in the field of human-computer interaction (HCI), suggests the practicality and feasibility of using normal untrained brain activity to inform interfaces.

While most BCIs use the electroencephalogram (EEG) to measure brain activity, we adopt the relatively less-explored technique of functional near-infrared spectroscopy (fNIRS), a non-invasive measurement of changes in blood oxygenation, which can be used to extrapolate levels of brain activation. Ideally, for HCI research, the fNIRS signals would be robust enough to remain unaffected by other non-mental activities (such as typing) occurring during the participant’s task performance (we investigate some of these below). In fact, one of the main benefits of fNIRS is that the equipment imposes very few physical or behavioral restrictions on the participant (Hoshi, 2009).

We identify two research goals that shape this chapter: (1) What kinds of states can we measure using fNIRS? (2) How should we use this information as input to an adaptive user interface?

To address our first goal, we started by investigating the practicality and applicability of the technology in realistic, desktop environments (Solovey, et al., 2009). We then focused on what types of states we could measure with fNIRS, keeping in mind their potential application in HCI. Our studies suggest that we can obtain meaningful data related to mental workload, both with workload as an overall cognitive function (Girouard, et al., 2009; Hirshfield, et al., 2007), and with specific components of it (Hirshfield, et al., 2009b). Our studies progress from tightly controlled experiments with little direct HCI application, which help us identify centers of brain activity, to experiments using simple user interfaces, showing how this technique can be applied to realistic interfaces. Throughout all four studies in this chapter, we show the use of novel machine learning techniques applied to fNIRS in order to classify and use the brain activity information in real time.

Our second goal focuses on creating new interactive, real-time user interfaces, which can adapt their behavior based on brain measurements. The design challenge is to use this information in a subtle and judicious way, as an additional, lightweight input that could make a mouse- or keyboard-driven interface more intuitive or efficient. Specifically, we are exploring situations and interfaces that can be adapted slowly, in a manner that is subtle and unobtrusive to the user, which could increase productivity and decrease frustration. This work has just begun; we describe here two early prototypes of such user interfaces that can adapt to the user's workload profile or other brain activity in real time.

fNIRS Background

fNIRS provides a measure of blood oxygen concentration, indicative of brain activity when measured on the head (Villringer & Chance, 1997). Near-infrared light is pulsed into the forehead, where it penetrates the tissues of the cortex to depths of 1-3 cm. Oxygenated and deoxygenated hemoglobin are the main absorbers of light at these wavelengths, and thus the diffusely reflected light picked up by the detector correlates with the concentration of oxygen in the blood, as well as the overall amount of blood in the tissue. The basic technology is common to all systems, but the measured signal differs depending on the location of the probe and the amount of light received.
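The relationship between detected light and hemoglobin concentration is commonly modeled with the modified Beer-Lambert law. The sketch below shows how optical-density changes measured at two wavelengths can be converted into oxygenated and deoxygenated hemoglobin concentration changes; the extinction coefficients, pathlength, and differential pathlength factor are illustrative assumptions, not tabulated constants.

```python
import numpy as np

# Modified Beer-Lambert law: at each wavelength,
#   delta_OD = (eps_HbO * d[HbO] + eps_HbR * d[HbR]) * L * DPF
# where L is the source-detector distance and DPF is the differential
# pathlength factor. Measuring delta_OD at two wavelengths yields a
# 2x2 linear system in the two concentration changes.

# Extinction coefficients [eps_HbO, eps_HbR]; placeholder values only
eps = np.array([[1.5, 3.8],   # at ~690 nm
                [2.5, 1.8]])  # at ~830 nm
path_cm = 3.0  # assumed source-detector separation (cm)
dpf = 6.0      # assumed differential pathlength factor

def concentration_changes(delta_od):
    """Solve for (d[HbO], d[HbR]) given optical-density changes
    measured at the two wavelengths."""
    return np.linalg.solve(eps * path_cm * dpf, delta_od)

# Example with hypothetical optical-density changes
d_hbo, d_hbr = concentration_changes(np.array([0.01, 0.02]))
```

Solving the two-wavelength system is what lets a single pair of measurements separate oxygenated from deoxygenated hemoglobin, the quantity the rest of the chapter's analyses are built on.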

There are many possible placements of fNIRS probes, allowing the study of multiple brain regions. The most common placements are on the motor cortex (Sitaram et al., 2007) and the prefrontal cortex (PFC) (Ehlis et al., 2008; Mappus et al., 2009), although other regions have also been explored (Herrmann et al., 2008). We chose to study the anterior prefrontal cortex (aPFC), an active region that deals with high-level processing (Ramnani & Owen, 2004), such as working memory, planning, problem solving, memory retrieval, and attention. These processes tend to be of greater interest and potential usefulness in HCI than measurements at the motor or visual cortex. Thus, our considerations below are intended for researchers investigating the aPFC; methods that reflect activity in other parts of the brain will vary considerably. However, we expect our results to be valid for other experimental setups and contexts that use the prefrontal cortex area.

Exploring fNIRS for HCI

Because most brain imaging and sensing devices were developed for clinical settings, they often have characteristics that make them less suitable for use in realistic HCI settings. For example, although functional magnetic resonance imaging (fMRI) is beautifully effective at localizing brain activity, it is susceptible to motion artifacts, and even slight movement (more than 3 mm) will corrupt the image. In addition, because of the magnetic field, there can be no metal objects, making regular computer usage impractical. Current technology also mandates a complex support system for the magnet itself, so portability will not be achieved in the foreseeable future. The most common technology used for brain measurement in HCI is EEG, since it is non-invasive, portable, and relatively inexpensive compared with other brain imaging devices (Lee & Tan, 2006). EEG is not ideal for HCI, either: it is susceptible to artifacts from eye and facial movements, requires gel in the participant's hair, takes some time to set up properly, and is subject to noise from nearby electronic devices.

Recently, fNIRS has been used in HCI because it has many characteristics that make it suitable for use outside of clinical settings (Girouard, et al., 2009; Hirshfield, et al., 2009b; Mappus, Venkatesh, Shastry, Israeli, & Jackson, 2009). Benefits include ease of use, short setup time, and portability, making it a promising tool for HCI researchers. In addition, there are no technical restrictions on using EEG and fNIRS together (Hirshfield, et al., 2009a), and the two technologies could complement one another. (While EEG measures electricity, a direct result of neuronal activity, and is thus very fast, it is also spatially indeterminate; as previously mentioned, fNIRS measures blood oxygenation, an indirect measure of neuronal activity which is much slower to change than the EEG signal.)

The motivation for using fNIRS and other brain sensors in HCI research is to pick up cognitive state information that is difficult to detect otherwise. It should be noted that some changes in cognitive state may also have physical manifestations. For example, when someone is under stress, his or her breathing patterns may change. It may also be possible to make inferences based on the contents of the computer screen, or on the input to the computer. However, since these can be detected with other methods, we are less interested in picking them up using brain sensors. Instead, we are interested in using brain sensors to detect information that does not have obvious physical manifestations, and thus can only be sensed using tools such as fNIRS.

Our goal is to observe brain signals that can be used in a relatively ordinary computer task environment, so we do not expect the participant to be physically constrained while using the computer. However, in most studies using brain sensors, researchers expend great effort to reduce the noise picked up by the sensors. Typically, participants are asked to remain still, avoid head and facial movement, and restrict their movements when interacting with the computer. In addition, many factors cannot be controlled, so researchers sometimes throw out data that may have been contaminated by environmental or behavioral noise, or they develop complex algorithms for removing the noise from the data. By doing this, the researchers hope to achieve higher quality brain sensor data, and therefore better estimates of cognitive state information. However, it is not clear that all of these factors contribute to problems in fNIRS data, or that these restrictions improve the signal quality, so we conducted a series of experiments to observe directly the effect of such artifacts on the data.

fNIRS Considerations for HCI Research

Head Movement

Several fNIRS researchers have brought attention to motion artifacts in fNIRS sensor data, particularly those from head movement (Devaraj, Izzetoglu, Izzetoglu, & Onaral, 2004; Matthews, Pearlmutter, Ward, Soraghan, & Markham, 2008). They note that these issues are significant if the head is not restricted, and even more so in an entirely mobile situation. However, other researchers indicate that fNIRS systems can “monitor brain activity of freely moving subjects outside of laboratories” (Hoshi, 2009). While fNIRS data may be affected by head movements, they only become a problem when present at a fairly gross level; this should be contrasted with fMRI, where movement over 3 mm will blur the image.


Because of the lack of consensus in the community, we investigated the artifacts associated with head movements during typical computer usage to determine their effect on fNIRS sensor data in a typical HCI setting (Solovey, et al., 2009). From our experiments, we suggest that participants minimize major head movements, although we believe head movement artifacts may be corrected using filtering techniques.
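As a rough illustration of the kind of filtering correction alluded to above (not the method used in our studies), abrupt jumps in an fNIRS channel can be flagged by thresholding the sample-to-sample derivative and repaired by interpolation; the threshold value below is an arbitrary assumption.

```python
import numpy as np

def flag_motion_artifacts(signal, threshold=5.0):
    """Flag samples whose sample-to-sample change exceeds
    `threshold` standard deviations of the channel's derivative.
    A crude heuristic, shown only for illustration."""
    diff = np.abs(np.diff(signal, prepend=signal[0]))
    return diff > threshold * diff.std()

def interpolate_over(signal, mask):
    """Replace flagged samples by linear interpolation from the
    surrounding clean samples."""
    x = np.arange(len(signal))
    clean = signal.astype(float).copy()
    clean[mask] = np.interp(x[mask], x[~mask], signal[~mask])
    return clean

# A flat trace with one head-movement-like spike at sample 50
trace = np.zeros(100)
trace[50] = 10.0
mask = flag_motion_artifacts(trace)
repaired = interpolate_over(trace, mask)
```

A production pipeline would use validated filters rather than this heuristic, but the two-step structure (detect, then repair) is typical of artifact handling.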

Facial Movement

fNIRS sensors are often placed on the forehead, and as a result, it is possible that facial movements could interfere with accurate measurements. Coyle, Ward, and Markham (2004) point out that “slight movements of the optodes on the scalp can cause large changes in the optical signal, due to variations in optical path”. These forehead movements could be caused by talking, smiling, or frowning, or by emotional states such as surprise or anger, and many researchers have participants refrain from moving their face, including talking (Chenier & Sawan, 2007). However, as there is little empirical evidence of this phenomenon, we examined it further (Solovey, et al., 2009).

We found that frowning data could always be distinguished from non-frowning data. We also learned that if all the data include frowns, then machine-learning algorithms cannot differentiate the cognitive task from the rest condition. However, if we combine data that contain frowning with data that do not, we can then discriminate the cognitive task, which shows interesting potential for identifying which examples to reject because of frowns. These results clearly indicate that frowning is a problematic artifact, and it should be avoided as much as possible.

Eye movements and blinking are known to produce large artifacts in EEG data, leading to the rejection of trials; experimenters often ask their participants to refrain from blinking entirely, or to blink during a specific, non-critical period in each trial (Izzetoglu, Bunce, Onaral, Pourrezaei, & Chance, 2004). However, fNIRS is less sensitive to muscle tension, and researchers have reported that no artifact is produced in nearby areas of the brain (Izzetoglu, et al., 2004). It would also be unrealistic to prevent eye blinks and movement in HCI settings. Overall, we conclude that eye artifacts and blinks should not be problematic for fNIRS, and we do not constrain participants in our studies.

Ambient Light

Because fNIRS is an optical technique, light in the environment could contribute to noise in the data. Coyle, Ward, and Markham (2004) advise that stray light should be prevented from reaching the detector. Chenier and Sawan (2007) note that they use a black hat to cover the sensors, permitting the detector to receive light only from the fNIRS light sources.

While this is a concern for researchers currently using raw fNIRS sensors that are still under development, we feel that future fNIRS sensors will be embedded in a helmet or hat that properly isolates them from this source of noise. Therefore, we have not examined how the introduction of light can affect fNIRS data. Instead we caution that excess light should be kept to a minimum when using fNIRS, or the sensors should be properly covered to filter out the excess light.

Ambient Noise

During experiments and regular computer usage, one is subjected to different sounds in the environment. Many studies using brain sensors are conducted in soundproof rooms to prevent these sounds from affecting the sensor data (Morioka, Yamada, & Komori, 2008). However, this is not a realistic setting for most HCI research. Therefore, we conduct our studies in a setting similar to a normal office. It is mostly quiet, but the room is not soundproof, and there may be occasional noise in the hallway, or from climate control systems in the building. We have been successful in using the fNIRS system in these conditions.

Respiration and Heartbeat

By definition, the fNIRS signal picks up respiration and heartbeat, as it measures blood flow and oxygenation (Coyle, et al., 2004; Matthews, et al., 2008). These systemic noise sources can be removed using validated filtering techniques. For a discussion of the many filtering techniques, see Matthews et al. (2008) and Coyle et al. (2004).
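One common approach, sketched below with illustrative cutoff and sampling-rate choices rather than the parameters used in any cited study, is a zero-phase low-pass filter: cardiac pulsation (around 1 Hz) is suppressed while the much slower hemodynamic changes pass through.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_physiological_noise(signal, fs, cutoff_hz=0.5):
    """Zero-phase low-pass Butterworth filter. A cutoff near
    0.5 Hz (an assumed, illustrative choice) attenuates the
    ~1 Hz cardiac component while preserving the slow
    hemodynamic response."""
    b, a = butter(3, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, signal)

# Synthetic trace: a slow activation plus a cardiac oscillation
fs = 10.0                                    # Hz, sampling rate
t = np.arange(0, 30, 1 / fs)
slow = 0.5 * np.sin(2 * np.pi * 0.05 * t)    # hemodynamic-scale change
cardiac = 0.2 * np.sin(2 * np.pi * 1.2 * t)  # heartbeat component
filtered = remove_physiological_noise(slow + cardiac, fs)
```

After filtering, the trace closely follows the slow component alone; respiration, which sits between the two frequency bands, may require a notch or regression approach instead.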

Muscle Movement

In clinical settings, it is reasonable to have participants perform purely cognitive tasks while collecting brain sensor data. This allows researchers to learn about brain function without any interference from other factors such as muscle movement. However, to move this technology into HCI settings, this constraint would have to be relaxed, or methods for correcting the artifacts must be developed.

One of the main benefits of fNIRS is that the setup does not physically constrain participants, allowing them to use external devices such as a keyboard or mouse. In addition, motion artifacts are expected to have less of an effect on the resulting brain sensor data (Girouard, et al., 2009). We examined physical motions that are common in HCI settings, typing and mouse clicking, to determine whether they are problematic when using fNIRS (Solovey, et al., 2009).

Overall, while typing artifacts could be detected when a cognitive task was being performed, we could still distinguish the cognitive task itself from a rest state. This confirms our hypothesis and validates typing as an acceptable interaction when using fNIRS. From this, we can also assume that simple key presses (e.g. using arrow keys) would be acceptable with fNIRS, since they involve more limited movement than typing with both hands.

We found that mouse clicking might affect the fNIRS signal we are collecting. When the participant was at rest, we found a significant difference in the signal between the presence and absence of clicking. The difference in activation is not surprising, as we did not have a “random clicking” task, but one where subjects had to reach targets, which may have activated the area being probed (the anterior prefrontal cortex). However, because we were still able to distinguish the cognitive task from rest, number memorization may produce a different signal from clicking. Hence, the results indicate that when we want to observe a cognitive task that contains clicking, the rest task needs to contain clicking as well. Overall, we believe that clicking is acceptable if the experiment is controlled.

Slow Hemodynamic Response

The slow hemodynamic changes measured by fNIRS occur over a time span of 6-8 seconds (Bunce, Izzetoglu, Izzetoglu, Onaral, & Pourrezaei, 2006). This is important when designing interfaces based on fNIRS sensor data, as the interface would have to respond on this time scale. While the possibility of using event-related fNIRS has been explored (Herrmann, et al., 2008), most studies take advantage of the slow response to measure short-term cognitive states rather than instantaneous ones.
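The time course can be illustrated with a gamma-variate model of the hemodynamic response, a standard textbook form rather than a model fitted in this chapter; its peak several seconds after stimulus onset shows why fNIRS-driven interfaces must adapt on that time scale.

```python
import numpy as np

def gamma_hrf(t, shape=6.0, scale=1.2):
    """Gamma-variate hemodynamic response model; peaks at
    (shape - 1) * scale seconds after stimulus onset (6 s here).
    Parameter values are illustrative assumptions."""
    h = t ** (shape - 1) * np.exp(-t / scale)
    return h / h.max()

t = np.arange(0.0, 20.0, 0.1)   # seconds after stimulus onset
h = gamma_hrf(t)
peak_s = t[np.argmax(h)]        # falls in the 6-8 s window cited above
```

An interface reacting to fNIRS therefore sees a delayed, smoothed echo of the user's cognitive state rather than an instantaneous readout.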

Summary of Guidelines and Considerations

According to our research, clicking and typing are not problematic, but large-scale head and facial movements should be minimized; minor movements and heartbeat/respiration can probably be corrected using filtering techniques. Many limitations that are inherent to other brain sensing and imaging devices, such as long setup time, highly restricted position, and intolerance to movement, are not factors when using fNIRS. By using the guidelines described above, researchers can detect aspects of the user's cognitive state in realistic HCI laboratory conditions. In our work, we have focused on mental workload, and have conducted several experiments, described below, exploring the feasibility of recognizing mental workload states with fNIRS.

Measuring Mental Workload

Acquiring measurements about the mental state of a computer user is valuable in HCI, both for evaluation of interfaces and for real-time input to computer systems. Although we can accurately measure task completion time and accuracy, measuring factors such as mental workload, frustration, and distraction is typically limited to qualitatively observing users or administering subjective surveys. These surveys are often taken after the completion of a task, potentially missing valuable insight into the user's changing experiences throughout the task. They also fail to capture internal details of the operator's mental state. As HCI continues to move out of the workplace and into the real world, users' goals and uses for computers are changing, and the analysis involved in the evaluation and design of new interfaces is shifting from “usability analysis to user experience analysis” (Mandryk, Atkins, & Inkpen, 2006). New evaluation techniques that monitor user experiences while working with computers are increasingly necessary. To address these evaluation issues, much current research focuses on developing objective techniques to measure in real time user states such as workload, emotion, and fatigue (Gevins & Smith, 2003; John, Kobus, Morrison, & Schmorrow, 2004; Marshall, Pleydell-Pearce, & Dickson, 2003). Although this ongoing research has advanced user experience measurements in the HCI field, finding accurate, non-invasive tools to measure computer users' states in real working conditions remains a challenge.

This study presents our first experiment with the fNIRS tool and demonstrates its feasibility and potential for HCI settings (Hirshfield, et al., 2007). We distinguish several discrete levels of workload that users experienced while completing different tasks. Measuring user workload with objective measures such as fNIRS, galvanic skin response (John, et al., 2004), EEG (Gevins & Smith, 2003), ERP (Kok, 1997), pupil dilation (Shamsi, Xianjun Sam, & Brian, 2004), and facial EMG (Fuller, Sullivan, Essif, Personius, & Fregosi, 1995) has been a topic of much research. It is well understood that a reliable measure of user workload could have a positive impact on many real-life interactions (Guhe, et al., 2005; John, et al., 2004; Shamsi, et al., 2004).

In this experiment, four subjects completed thirty tasks in which they viewed all sides of a rotating three-dimensional (3D) shape composed of eight small cubes. Figure 1 illustrates an example of a rotating shape. In the experiment, cubes could be colored with two, three, or four colors, which we hypothesized would lead to different workload levels. During each task, subjects counted the number of squares of each color displayed on the rotating shape in front of them. A blank screen represented the baseline state (no colors).

Fig. 1 A cube made up of eight smaller cubes.

The main goal of this experiment was to determine whether fNIRS data are sufficient for identifying the workload level of users as they perform tasks. To accomplish this, a graphical interface displayed the rotating shapes.

At the completion of each task, the subject was prompted for his or her count for each color. Then, the subject rested for thirty seconds, allowing the brain to return to a baseline state. After completing the tasks, the subject was presented with an additional example of each workload level and asked to fill out a NASA Task Load Index (Hart & Staveland, 1988), administered to compare our results with an established measure of workload. The results of the NASA-TLX assessment validate our manipulation of workload levels: an increased number of colors led to a higher workload level.

We classified the data with a multilayer perceptron classifier using the sliding-windows approach (Dietterich, 2002). We tested distinguishing all four workload levels from each other, as well as comparisons of two, three, and four workload conditions of the graphical workload level. When we consider the results comparing workload levels 0, 2, and 4, classification accuracies range from 41.15% to 69.7%, depending on the subject. Considering that a random classifier would have 33.3% accuracy, the results were promising. We could predict, with relatively high confidence, whether the subject was experiencing no workload (level 0), low workload (level 2), or high workload (level 4). The NASA-TLX results supported our findings: subjects reported task difficulty increasing as the number of colors on the cube increased.
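A minimal sketch of this style of analysis, using synthetic data and scikit-learn's MLPClassifier in place of the exact pipeline above (window length, features, and network size are all assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(trial, win=20, step=10):
    """Slide a window over a (samples x channels) trial and extract
    per-channel mean and start-to-end slope -- illustrative features,
    not the study's actual feature set."""
    feats = []
    for start in range(0, len(trial) - win + 1, step):
        seg = trial[start:start + win]
        feats.append(np.concatenate([seg.mean(axis=0), seg[-1] - seg[0]]))
    return np.array(feats)

rng = np.random.default_rng(0)
# Synthetic "low" vs "high" workload trials, two channels each
low  = [rng.normal(0.0, 0.1, (100, 2)) for _ in range(10)]
high = [rng.normal(0.5, 0.1, (100, 2)) for _ in range(10)]

X = np.vstack([window_features(tr) for tr in low + high])
y = np.repeat([0, 1], X.shape[0] // 2)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                    random_state=0).fit(X, y)
```

Because each window yields its own prediction, a real-time system can update its workload estimate every few seconds as new windows arrive.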

Our goal was to test the ability of the fNIRS device to detect levels of workload in HCI, to develop classification techniques to interpret its data, and to demonstrate the use of fNIRS in HCI. Our experiment showed several workload comparisons with promising levels of classification accuracy.


Separating Semantic and Syntactic Workload in the Brain

In our initial work described above, we verified that fNIRS sensors provide researchers with a measure of the mental workload experienced by users working on a task with a given user interface (UI). However, detecting a high workload while a user works with a system is not necessarily a bad thing; it could indicate that the user is immersed in the task. How can UI evaluators and designers of adaptive systems know whether a high workload measurement is due to the UI or to the underlying task?

To address this question, we drew on Shneiderman's theory of the semantic and syntactic components of a user interface (Shneiderman & Plaisant, 2005). In this theory, the semantic component involves the effort expended by a user to complete a given task. The syntactic component involves the effort required to understand and work with the interface, which includes interpreting the interface's feedback, and formulating and inputting commands to the interface. We propose to conceptually separate mental workload into multiple components, where the total workload required to perform a task using a computer is composed of a portion attributable to the difficulty of the task itself plus a portion attributable to the complexity of operating the UI.

As an initial step in this direction, we designed an experiment to measure the syntactic workload of two specially constructed UIs that involve users traversing a hyperspace (a group of web pages that are linked together) while conducting an information retrieval task (Hirshfield, et al., 2009b). We kept the information retrieval task constant between the two conditions, while we varied the design of the UI in each condition. In the first UI, subjects were continually informed about their location within the hyperspace, whereas in the other UI condition, subjects were unaware of their current location in hyperspace. Research has shown that spatial working memory (WM) is taxed when users become disoriented in hyperspace (Boechler, 2001). We refer to these UI variations as the informed location hyperspace UI and the uninformed location hyperspace UI, and we hypothesized that the uninformed location hyperspace UI caused subjects to experience higher loads on their spatial WM than the informed location hyperspace UI.

We did not separate "semantic" and "syntactic" workload in the brain directly; rather, we constructed our UI and task so that the syntactic portion maps directly onto spatial WM, and the semantic portion maps onto verbal WM. We also developed an experimental protocol that merges low-level cognition experiments with high-level usability evaluation. Our experimental protocol and data analysis algorithms can help usability experts, or designers of adaptive systems, to acquire information about the cognitive load placed on users' various cognitive resources while working with a user interface.

The general protocol is as follows: Given a UI to evaluate and an underlying task, researchers conduct a task analysis on both the UI and the task. For each subtask, they determine the cognitive subsystems that one would use while conducting the subtask (i.e., spatial WM, visual search, etc.). Next, they gather benchmark exercises from cognitive psychology designed to elicit high and low levels of workload on the target cognitive resource(s) associated with the UI. Researchers then run an experiment where users complete the benchmark cognition exercises, providing a measure of their brain activity while they experience high and low workload levels in their various cognitive subsystems. Users also work with the UI that the researchers are attempting to evaluate. Lastly, researchers use fNIRS data analysis tools to find similarities and differences in the users' brain activity while working with the UI under evaluation and with the cognitive psychology exercises.

We used this protocol while conducting an experiment to shed light on the syntactic (interface) components of workload of the two interface variations described above. We chose two benchmark tasks from cognitive psychology experiments that involve both spatial WM and verbal WM. Both of these exercises had low verbal WM demands (mirroring the information retrieval task). However, one of the exercises had high spatial WM (HSWM) demands, and the other had low spatial WM (LSWM) demands. We also had a controlled rest (REST) condition in which subjects simply rested, with no workload, for a set period of time. We used these tasks to provide us with benchmark brain activity for each subject, where the spatial WM needed to complete the REST task was less than that needed to complete the LSWM task, which was less than that needed to complete the HSWM task.

In this experiment, we aimed to answer the following research question: Can benchmark tasks from the cognitive psychology literature on spatial WM be used to shed light on the syntactic workload involved in our higher-level user interface exercises?

A randomized block design with nine trials was used during the experiment. Ten subjects (6 female) completed the experiment. Results indicated that we could distinguish between the benchmark cognition tasks, the controlled rest, and the two user interface conditions using analysis of variance (ANOVA) with 95% confidence on one or both sides of subjects’ heads (Hirshfield et al., 2009b).
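The per-condition comparison can be sketched with a standard one-way ANOVA. The data below are synthetic stand-ins for per-trial hemoglobin measures, with invented means and variances; this is only an illustration of the statistical test, not the study's actual analysis pipeline.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic per-trial oxygenated-hemoglobin changes for three conditions
# (illustrative values only, not the study's data).
rest = rng.normal(0.0, 0.1, size=30)   # controlled rest
lswm = rng.normal(0.2, 0.1, size=30)   # low spatial WM benchmark
hswm = rng.normal(0.5, 0.1, size=30)   # high spatial WM benchmark

# One-way ANOVA across the three conditions; p < 0.05 corresponds to the
# 95% confidence level reported above.
f_stat, p_value = f_oneway(rest, lswm, hswm)
```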

We expected the uninformed hyperspace UI to cause higher spatial WM load than the informed hyperspace UI. We used hierarchical clustering with a Euclidean distance measure to cluster our data, which helped to validate this expectation. Clustering results showed that for 90% of subjects, the uninformed hyperspace UI was grouped closer to benchmark tasks of known higher spatial WM than the informed hyperspace UI (where the spatial WM of REST tasks < spatial WM of LSWM tasks < spatial WM of HSWM tasks) (Hirshfield et al., 2009b).
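The clustering step can be illustrated with SciPy's agglomerative clustering over hypothetical per-condition feature vectors. The feature values below are invented for the sketch; only the Euclidean metric and the expected grouping of UI conditions with benchmark tasks mirror the analysis described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical mean fNIRS feature vectors per condition (illustrative only).
features = {
    "REST":          np.array([0.00, 0.01]),
    "LSWM":          np.array([0.18, 0.20]),
    "HSWM":          np.array([0.48, 0.52]),
    "informed_UI":   np.array([0.20, 0.22]),
    "uninformed_UI": np.array([0.45, 0.50]),
}

names = list(features)
X = np.stack([features[n] for n in names])

# Agglomerative clustering with a Euclidean metric, cut into two clusters.
Z = linkage(X, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")

# With these toy values, the uninformed UI lands in the same cluster as the
# high-spatial-WM benchmark, and the informed UI with the low-spatial-WM one.
cluster_of = dict(zip(names, labels))
```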

Therefore, we established that the informed hyperspace UI caused a lower load on users’ spatial WM than the uninformed hyperspace UI. This is not surprising, as web site designers are well aware of the benefits of keeping users oriented in hyperspace. We expect this novel protocol to work in a similar manner with more realistic UIs, where the established cognitive psychology tasks may not parallel the UIs as closely as they did in this experiment. However, the fNIRS analysis algorithms will still show useful similarities and differences between brain activity induced by the UIs and by the cognition exercises. While separating semantic and syntactic workload may not be possible in more complex UIs, evaluators can make informed changes to UI designs based on the level of workload measured in users’ various cognitive resources while they work with a user interface.

Extension to Adaptive Interfaces

Using fNIRS in usability testing provides researchers with the luxury of analyzing all data offline, and of creating carefully constructed experiments that minimize noise. However, brain measurement can also be used as an input for adaptive systems, where there is a need to classify brain patterns on a single-trial basis, allowing the system to adapt in real time. With this goal in mind, we used a k-nearest-neighbor classifier (k = 3) and a distance metric computed via dynamic time warping to make comparisons between brain activity induced by the experiment conditions on a single-trial basis. Results were promising. It was possible to distinguish between the benchmark LSWM tasks and the HSWM tasks with nearly 70% average accuracy, between each condition and the REST condition with nearly 80% average accuracy, and between the two UI conditions with 68% average accuracy. This shows promise for detecting workload changes in various cognitive resources in real-time adaptive interfaces.
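A minimal sketch of such a single-trial classifier: a textbook dynamic-time-warping distance feeding a 3-nearest-neighbor majority vote. The toy "trials" below are synthetic ramps with noise, not real fNIRS traces, and the helper names are our own.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_predict(train, labels, query, k=3):
    """Label a single trial by majority vote among its k DTW-nearest neighbors."""
    dists = [dtw_distance(t, query) for t in train]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy single-trial hemodynamic traces (illustrative, not real data):
# "hard" trials ramp up more steeply than "easy" trials.
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(1)
train = [0.2 * t + rng.normal(0, 0.02, 50) for _ in range(5)] + \
        [0.8 * t + rng.normal(0, 0.02, 50) for _ in range(5)]
labels = ["LSWM"] * 5 + ["HSWM"] * 5

query = 0.75 * t + rng.normal(0, 0.02, 50)  # an unseen "hard" trial
```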

Sensing brain activity with fNIRS in a real interface

Moving away from traditional psychology experiments, where one can isolate, manipulate, and measure mental workload with great precision, we chose to apply our brain measurement techniques to a real user interface: that of an arcade game. The goal of the study was to distinguish between different levels of game difficulty using fNIRS data collected while subjects played a computer game (Girouard et al., 2009). This study was a step forward, as previous work had only evaluated the activeness of the user during video games using fNIRS (Matsuda & Hiraki, 2006; Nagamitsu, Nagano, Tamashita, Takashima, & Matsuishi, 2006). The study was designed to lead to adaptive games and other interactive interfaces that respond to the user’s brain activity in real time, and we believe this work to be a stepping stone to using fNIRS in an adaptive user interface, in this case a passive brain-computer interface. Passive BCIs are interfaces that use brain measurements as an additional input, in conjunction with standard devices such as keyboards and mice (Cutrell & Tan, 2007).

We asked participants to play the game of Pacman (Namco, Japan) for periods of thirty seconds, then rest for thirty seconds. They completed ten sets of two levels of difficulty, an easy level and a hard level, during which their brain activity was measured using fNIRS. We also collected performance data, and analysis of this data showed a clear and significant distinction between the two levels. A subjective measure of workload was also collected using the NASA-TLX questionnaire (Hart & Staveland, 1988), which confirmed the two levels of mental workload.

We performed two analyses of the brain data to confirm the presence of differences in hemoglobin concentrations for the different conditions: a classic statistical analysis to establish the differences between conditions, and a more novel task classification that shows the possibility of using this data in a real-time adaptive system.

The statistical analysis revealed that we can distinguish between subjects being active and passive in their mental state (playing versus resting), as well as between different levels of game complexity (difficulty level) in this particular task when combined with the channel and hemoglobin type measured. The classification was consistent with those results, but indicated more difficulty in distinguishing the game levels (94% accuracy for playing versus resting, but 61% for the easy versus hard levels). This could indicate that the mental process for this activity manifests itself at another, unmeasured location in the brain, or that the difference was simply not strong enough to cause changes in activation.

While some might argue that performance data is sufficient to classify the difficulty level of a game and can be obtained without interference, the goal of this study was to investigate the use of brain measurements with fNIRS as a new input device. In a more complex problem, performance and brain data coming from fNIRS might not be as closely related, e.g. if the user is working hard yet performing poorly at some point. In addition, distractions may produce workload increases that would not be obvious from monitoring game settings and performance, and thus may necessitate brain measurements. That is, a participant playing a simple game while answering difficult questions might show brain activity relating to increased workload that would be incomprehensible based only on performance data (e.g. Nijholt, Tan, Allison, Milan, & Graimann, 2008). In real, non-gaming situations, we might not have performance data as in the present case, since we don’t always know what to measure. The use of the brain signal as an auxiliary input could provide better results in these situations.


Moving Towards an Adaptable fNIRS Interface

Now that we have established a foundation for fNIRS measurements, the focus shifts to the second question. How do we use these measurements in the context of adaptable interfaces? How do we responsibly apply a measurement of workload in a useful way? To answer these questions we first need to find a class of problems where fNIRS may be helpful.

Given the temporal limitations of the hemodynamic response in the brain, we find ourselves drawn towards problems where millisecond snapshots are not necessary, eliminating activities that involve instantaneous states or are extremely time-sensitive. To give one example, users who are monitoring constant streams of data fit nicely into this framework.

From one perspective, the temporal limitation of fNIRS can be viewed as a healthy one – a slow hemodynamic response pushes us towards creating interfaces that are gentle and unobtrusive. This is important. If we aren't careful, brain-computer interfaces can suffer the same Midas Touch problems identified in eye tracking research (R. J. K. Jacob, 1993). It is a safe assumption that users do not expect an interface to change with every whim and daydream during the course of their workday. We can all too easily imagine a new version of Microsoft’s infamous Clippy that asks questions like “I noticed that you are thinking about your upcoming vacation, would you like me to cancel all your commitments for the day?” So we must be judicious with our design decisions.

As a general rule for implicit interfaces, any visual modifications to the interface should be done carefully enough that the user hardly notices the change until he or she needs to (S. H. Fairclough, 2009). The eye is very sensitive to movement, and any adaptation that robs the attention of the user will be counterproductive to the primary task. A pop-up dialog box is a distracting response to a user in the middle of a task, even if we occasionally guess exactly what the user wants. Fading, overlaying information, or changing an application's screen real estate are options that, if done slowly enough, may be employed to impact the user's focus in a positive way.

Of course, visually altering the interface is just one response we can make to real-time fNIRS measurements. We can also choose to change the underlying information of the interface while keeping the general visualization stable.

While work in this direction is still in its infancy, we propose two prototypes to demonstrate how real-time, lightweight adaptable interfaces may be used with fNIRS measurements.


The Stockbroker Scenario

A stockbroker is writing a sensitive email to a client. At the same time, he is trying to track critical, minute-by-minute stock information on the same screen. While his email is of the utmost importance, he doesn’t want to completely abandon the stock information. What if he were to miss a significant spike?

Unfortunately, the detailed visualization of stock data is mentally taxing to look at and prevents the broker from focusing on his email. As a solution, our stockbroker prototype gently changes the visualization of the data depending on the workload we associate with the emailing task.

If the stockbroker is not exerting a high mental workload on his email, we show his stock visualization as highly detailed, giving the stockbroker as much information as possible (Figure 2, left). If the stockbroker is working exceptionally hard at his email and can’t afford to be distracted by complex stock information, we simply lower the level of detail. The broker will still recognize major changes in the data without getting bogged down in the details (Figure 2, right).

Fig. 2. An example of a highly detailed graph (left) and a low-detail graph (right).
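One way to realize this behavior without flicker is a simple two-threshold (hysteresis) controller. The class name, thresholds, and the assumption of a workload estimate normalized to [0, 1] are our own illustrative choices, not details of the prototype.

```python
class DetailController:
    """Switch a visualization between HIGH and LOW detail based on a
    workload estimate in [0, 1], with hysteresis so the display does not
    flicker near a single threshold. Threshold values are illustrative."""

    def __init__(self, raise_at=0.7, lower_at=0.5):
        self.raise_at = raise_at   # workload above this -> simplify the view
        self.lower_at = lower_at   # workload below this -> restore detail
        self.detail = "HIGH"

    def update(self, workload):
        if self.detail == "HIGH" and workload > self.raise_at:
            self.detail = "LOW"    # user is working hard: reduce detail
        elif self.detail == "LOW" and workload < self.lower_at:
            self.detail = "HIGH"   # workload eased: show full detail again
        return self.detail
```

The gap between `raise_at` and `lower_at` prevents the display from oscillating when the workload estimate hovers near one value, which matters for an interface that is meant to change without being noticed.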

Many Windows Scenario

Users often juggle four or five windows at once on their monitor. One may be writing a paragraph in one window, displaying a page of notes in another, glancing at an academic paper in a third, and tracking email in a fourth. But there is a problem here. While, like the stockbroker, the user may prefer not to get rid of any of the windows, their presence detracts from the primary task. The user may wish that the windows were only there when they were needed.

While fNIRS may not be able to take on the role of both mind-reader and task-driver, fNIRS can help us guess which windows are most important. We can monitor the amount of workload being used in each window, and then fade the other windows accordingly. If glancing at email is relatively low-workload, then maybe it is not distracting. If keeping track of email is mentally expensive, then we can gradually fade the window – out of sight, out of mind.

Figure 3 demonstrates a situation where workload is focused on the central window. Since the surrounding windows are also mentally demanding, they gently fade away. When the surrounding windows do not demand many mental resources, we keep them transparent. In this way, we can think of them as being cheap to keep around.

Fig. 3. One window has the focus, while the other two are fading according to when they were last used.
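The fading behavior can be sketched as per-window workload estimates driving target opacities, with exponential smoothing keeping each transition slow enough to go unnoticed. The function name and all parameter values are hypothetical.

```python
def fade_windows(workloads, alpha_min=0.25, smoothing=0.1, state=None):
    """Map per-window workload estimates in [0, 1] to opacities in
    [alpha_min, 1]. Mentally expensive secondary windows fade toward
    alpha_min; cheap ones stay visible. Exponential smoothing keeps the
    changes gradual. All parameter values are illustrative."""
    if state is None:
        state = {w: 1.0 for w in workloads}  # start fully opaque
    for w, load in workloads.items():
        target = 1.0 - (1.0 - alpha_min) * load  # high load -> low opacity
        # Move a small step toward the target each update, never jumping.
        state[w] = state[w] + smoothing * (target - state[w])
    return state
```

Calling this once per measurement update lets a distracting window drift slowly toward near-transparency, while an undemanding window remains at full opacity.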

Looking Ahead

These two scenarios offer a brief glimpse into brain-computer interfaces with fNIRS. They are gentle and unobtrusive. They are more concerned with long-term workload and trends than with an immediate snapshot. Looking ahead, we can imagine numerous situations where fNIRS is a valuable input to adaptive interfaces. What if we could adjust teaching methods to best suit a child’s learning style? What if we could dynamically filter streams of information (Twitter, RSS, email) to accommodate the current workload of the user?

Conclusion

We began this section with two major questions: (1) what kinds of things can we measure using fNIRS? and (2) how should we use this information as input to an adaptive user interface? Our response to the first question was three-pronged. First, we explored the practical considerations for using fNIRS in HCI. We looked at the impact of head, facial, and muscle movement on fNIRS readings. We also took into account environmental factors such as ambient light and ambient noise. Next, we investigated the feasibility of using fNIRS as a measure of workload in general before measuring differences in syntactic and semantic workload in the brain, a measurement that helps us separate the interface and the underlying task. Finally, we measured a real interface, identifying a workload measurement that increased or decreased according to the complexity of the task (in this case, a game of Pacman). To address our second question, we first outlined general characteristics for the design space of user interfaces that adapt based on fNIRS measurements. We then described two scenarios that could take advantage of such an interface design. From our experience, we believe that fNIRS brain sensing has great potential for both user interface evaluation and adaptive user interfaces, and will open new doors for human-computer interaction.

Acknowledgments  The authors would like to thank the HCI research group at Tufts University; Michel Beaudoin-Lafon, Wendy Mackay, and the In|Situ| research group; and Desney Tan and Dan Morris at Microsoft Research. We thank the NSF (Grant Nos. IIS-0713506 and IIS-0414389), the Canadian National Science and Engineering Research Council, the US Air Force Research Laboratory, and the US Army NSRDEC for support of this research. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of these organizations.


References