Thesis for the degree Master of Science
By Ilan Lewitus
Advisor: Prof. Elisha Moses
February 2009
Submitted to the Scientific Council of the Weizmann Institute of Science
Rehovot, Israel
Motor imagery modulating auditory ERP signals as a possible basis for an EEG-based BCI.
Table of contents
Abstract
Introduction
    EEG
    EEG-based BCI
    Mental activities and EEG signals
    Signal processing
    Classification
    The auditory ERP
    Auditory ERP based BCI paradigm
    Current challenges
    Research goals
Methods
    Participants, hardware and software
    Experimental setup
Data analysis and results
    Normalization
    Training and testing
    Electrode selection
Conclusions
References
Abstract
Brain computer interfaces are systems which translate user thoughts into real world
actions, potentially helping Amyotrophic Lateral Sclerosis (ALS) and locked-in
patients to control and communicate with their surroundings by bypassing their
dysfunctional motor pathways. We researched the viability of using auditory Event-Related
Potential (ERP) signals modulated by a motor imagery task as a possible
basis for a Brain-Computer Interface (BCI) paradigm. Seven healthy subjects were
fitted with electroencephalograph (EEG) electrodes and presented with a train of
audio tones, instructed by a visual cue either to passively listen to the tone or,
while listening to the tone, to perform a motor imagery task. After normalization and
signal averaging, each raw sample was compared to the two ERPs, representing the
two conditions, as a means of signal classification. Using this paradigm we
deduced that motor imagery does indeed change auditory ERP signals, achieving an
across-subject average of 72.5% and a maximum of 78.75% classification of the
signal, with a computer training period based on a 12-minute session of EEG data
acquisition, and without the need for machine learning techniques.
Introduction
BCI
A Brain-Computer Interface (BCI) is a system in which brain activity is translated
into computer commands. Also called a thought translation device (Birbaumer et al.,
2000), the BCI translates specific thoughts of its user into real world actions. Such a
system would be useful as a control and communication device for patients suffering
from motor disabilities in which the brain is still functioning, but motor controls are
not conveyed to peripheral nerves or muscles, as in Amyotrophic Lateral Sclerosis (ALS) and spinal cord injuries (McFarland et al., 1997, Wolpaw et al., 2002). For such
patients a BCI serves as a non-muscular channel for communication with the outside
world, for example, using spelling software (Birbaumer et al, 1999, Blankertz et al
2007) or computer cursor control (Wolpaw, 1991) and for real world control, such as
BCI based mobile robots (Millán et al, 2003).
BCIs can be invasive, with electrodes implanted directly into the brain (Hinterberger
et al., 2005), or non-invasive, using relevant brain imaging techniques such as near
infra-red (NIR) imaging (Coyle et al., 2004), magneto-encephalography (MEG) (Mellinger et
al., 2007) and electroencephalography (EEG) (Wolpaw et al., 1994, Kalcher et al.,
1996). Near infra-red light has the ability to penetrate the scalp to depths
sufficient to allow functional imaging of the cerebral cortex. Brain activity induces
changes in tissue oxygenation which modulate the absorption and scattering of NIR
photons. Thus, measurement of optical changes of various NIR wavelengths allows
for qualitative imaging of brain activity. Magneto-encephalography is an imaging
technique which measures the magnetic field induced by the electrical activity of the
brain using Superconducting Quantum Interference Devices (SQUIDs). The MEG
measurements have to be conducted in a magnetically shielded environment.
In recent years EEG has been the first choice of many BCI researchers, as it is non-invasive,
has a very good time resolution, a sufficient spatial resolution, and is the
cheapest form of brain imaging available. For much the same reasons, this study was
conducted using EEG signals.
EEG
Electroencephalographic (EEG) signals stem from measuring electric potentials
(microvolts) using electrodes placed on the scalp according to the international 10-20
system (Jasper, 1958), as can be seen in figure 1. The electrode labels correspond to
cortex areas, thus (F) denotes Frontal, (C) denotes Central, (T) denotes Temporal, (P)
denotes Parietal and (O) denotes Occipital. Odd numbers correspond to the left side of
the brain, smaller numbers denoting more medial locations than larger numbers, while
even numbers correspond to the right side of the brain. The letter (z) denotes the
central midline, running between the nasion and the inion.
Figure 1 – The international 10-20 EEG electrode placement scheme. Electrodes marked in red are the
ones used in this study.
The EEG potentials are measured relative to reference electrodes, placed, in our
study, on parieto-occipital locations. The signals are then amplified and digitized.
As EEG potentials are measured on the scalp, the EEG signals are believed to
originate from the cerebral cortex, mostly from pyramidal neurons firing action
potentials, either in a highly synchronized manner, or over a large neural network,
thus corresponding to specific mental activities. Potentials stemming from muscular
electrical activities, such as blinking, are considered to be unwanted artifacts, and
must be dealt with after data acquisition. Figure 2 shows examples of EEG signals
during different states of consciousness, from wakefulness, through different sleep
phases, to cerebral death (Malmivuo & Plonsey, 1995).
Figure 2: Examples of EEG signals in different levels of consciousness. From Malmivuo & Plonsey,
1995.
EEG-based BCI
An EEG-based BCI paradigm, as can be seen in Figure 3 (Garcia et al., 2003), usually
consists of:
- Mental activities producing distinct EEG signals.
- EEG hardware for signal acquisition.
- Digital EEG signal processing (usually) consisting of:
o Preprocessing.
o Feature selection.
o Feature extraction.
- A machine learning classifier for pattern recognition, translating EEG
processed signals into computer commands.
- The output action, which also serves as feedback to the user.
Figure 3: Architecture of an EEG-based BCI.
Mental activities and EEG signals
Several mental activities have been researched for BCI use. Some are active, i.e.,
produced endogenously by the subject, including motor imagery (Pfurtscheller et al.,
2001), mental calculations and visualized object rotations (Palaniappan, 2005). Others
are passive, i.e., produced as a reaction to exogenous stimuli, such as visual
(Lee et al., 2005) or auditory (Hill et al., 2004) stimuli, presented to the subject, with
selective attention to the stimuli being the BCI's signal.
Several well-characterized EEG signals have been found suitable for BCI use; these
include the following:
Steady State Visual Evoked Responses (SSVERs)
SSVERs are evoked by a visual stimulus whose luminance is modulated at a fixed
frequency. SSVERs are characterized by an increase in the EEG amplitude at the
stimulus frequency, captured using electrodes attached to occipital locations (O1, O2).
Middendorf et al. (2000), describe two approaches for the use of SSVERs for BCI
control. In the first approach, subjects learn to voluntarily control their SSVER
amplitude through feedback. Luminance of a visual stimulus is modulated at a fixed
frequency (Fluorescent light at 13.25 Hz), and the subject has a visual feedback of the
EEG amplitude at that frequency. Using two thresholds, binary control is achieved as
the subject learns to increase or decrease his/her SSVERs, above or below these
thresholds, following slight eye movements. The second approach uses two spatially
divided icons on a screen, each modulated in a different frequency (23.42 Hz, 17.56
Hz). The subject controls the BCI by looking and attending to a selected icon, thus
inducing SSVERs with stronger amplitude in the frequency of the said icon. The first
approach necessitates user training while the second does not, which is the latter's greatest
advantage. The disadvantage of using SSVERs is the need for sight
and eye movement control, both of which completely-locked-in patients do not
possess. Figure 4 shows topographic maps of 13.5-Hz activity during task-related
SSVER enhancement and suppression.
Figure 4: Topographic maps of 13.5-Hz activity recorded during task-related SSVER enhancement and
suppression. Note the evenly distributed activity in the O1 and O2 regions of the top map (suppression)
and the asymmetric activity in the bottom map (enhancement). From Middendorf et al., 2000.
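The frequency-tagging logic behind the second approach can be sketched in a few lines: compare the spectral amplitude of an occipital EEG trace at the candidate flicker frequencies and pick the larger one. The following is an illustrative Python/NumPy sketch on synthetic data (the sampling rate and simulated signal are assumptions for demonstration), not the processing pipeline of Middendorf et al.:

```python
import numpy as np

def ssver_choice(eeg, fs, freqs):
    """Pick the flicker frequency with the largest spectral amplitude.

    eeg   : 1-D occipital EEG trace (e.g., O1 or O2)
    fs    : sampling rate in Hz
    freqs : candidate stimulus frequencies, e.g. (23.42, 17.56)
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    bins = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Amplitude at the FFT bin nearest each candidate frequency
    amps = [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]
    return freqs[int(np.argmax(amps))]

# Synthetic demo: a 23.42 Hz oscillation buried in noise
fs = 256
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 23.42 * t) + rng.normal(0, 1, t.size)
print(ssver_choice(eeg, fs, (23.42, 17.56)))  # → 23.42
```

In a real SSVER BCI the amplitude comparison would be repeated over sliding windows, with thresholds tuned per subject.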
Slow Cortical Potential Shifts (SCPSs)
SCPSs are shifts of cortical voltage, lasting from a few hundred milliseconds up to
several seconds. Users can learn to produce slow cortical amplitude shifts in an
electrically positive or negative direction for binary control. This skill can be acquired
if the users are provided with feedback on the course of their SCPS production and
if they are positively reinforced for correct responses (Birbaumer et al., 2000). This
approach uses operant conditioning as a basis for subject training. Its main
disadvantage is a very long subject training period of hundreds of sessions lasting
several months. Its main advantage is that, once the subject is trained, the
induced waveforms remain highly stable within each individual subject. It is also
important to note that Birbaumer's study is one of the few conducted on locked-in
patients rather than healthy subjects. Figure 5 shows averaged SCPSs of three locked-in
patients trained in this approach.
Figure 5: Averaged SCPSs of three locked-in patients. Representative averages over 700 trials each
during baseline, baseline interval and feedback interval separated for trials where selection of a letter
was required with a cortical positivity (solid line) and trials where rejection of a letter was required
with no positivity or negativity (thin line). Different waveforms develop during training which remain
highly stable within each individual patient. From Birbaumer et al., 2000.
Oscillatory sensorimotor activity
The Alpha (8-12 Hz) and Beta (18-26 Hz) EEG rhythms recorded over the motor
cortex exhibit noticeable changes during movement, preparation for movement and
imagined movement (Pfurtscheller et al., 2001).
These changes are characterized by an event-related desynchronization (ERD) in the
motor cortex contralateral to the movement and an event-related synchronization
(ERS) in the ipsilateral hemisphere. The frequency ranges and the magnitude of the
changes are user-dependent; if trained, a BCI can detect these changes and react
according to a previously established protocol (Pfurtscheller et al., 2000, Wolpaw et
al., 2000). Training usually involves the subject imagining two or more classes of
movements (left/right hand/feet) when prompted by the computer, and entails some
sort of machine learning technique, as the signal varies from subject to subject and
from session to session of the same subject. The advantage of such an approach is the
fact that the signal is completely endogenous, meaning that the subject produces the
signal by will and is not dependent on external, exogenous, stimuli. The disadvantage
is the need for machine learning, due to great signal variability, and thus long training
periods. Figure 6 (Blankertz et al., 2006) shows the spectral power in the alpha
frequency band of one subject performing left and right motor imagery. Although
there are apparent differences between the left and right conditions, a great inter-trial
variability can be observed.
Figure 6: One subject imagined left (red) vs. right (green) hand movements. The topographies show
spectral power in the alpha frequency range during single-trials of 3.5 s duration. These patterns exhibit
an extreme diversity although recorded from one subject on one day. From Blankertz et al., 2006.
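The ERD/ERS changes described above are conventionally quantified as a percentage band-power change from a reference (rest) interval to the activity interval. The following Python sketch uses synthetic data and a simple FFT power estimate; it illustrates the classic measure rather than any specific study's implementation:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of x within the [lo, hi] Hz band (simple FFT estimate)."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (f >= lo) & (f <= hi)
    return spec[band].mean()

def erd_percent(reference, activity, fs, lo=8.0, hi=12.0):
    """Percentage power drop from the reference to the activity interval.

    Positive values = desynchronization (ERD, power decrease);
    negative values = synchronization (ERS, power increase)."""
    r = band_power(reference, fs, lo, hi)
    a = band_power(activity, fs, lo, hi)
    return 100.0 * (r - a) / r

# Synthetic demo: an alpha rhythm attenuated during imagined movement
fs = 256
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
rest = 3.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
move = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
print(round(erd_percent(rest, move, fs)))  # large positive value: clear ERD
```

A sensorimotor BCI would compute this contrast per hemisphere (e.g., C3 vs. C4) to separate left- from right-hand imagery.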
Event Related Potentials (ERPs)
ERPs are transient signals which are characterized by a voltage deviation in the EEG
and are caused by external stimuli or cognitive processes triggered by external events.
When the subject pays attention to a particular stimulus a shift in potential appears in
the EEG signal. The changes induced by the ERP in the EEG can be detected by the
BCI. Therefore, by focusing attention to the adequate stimuli, the user can command
the BCI. As the ERP signal is weaker than the EEG noise, a train of stimuli is needed.
The resulting ERP signal is attained by averaging around 100 samples, time-locked
to the stimuli, as a means of noise reduction, exposing the underlying
potentials coursing through the cerebral cortex (Donchin et al., 2000, Bayliss, 2001).
ERP signals do not necessitate user training as they are exogenously evoked by the
stimulus, and reflect an inherent response of the brain to physical attributes of that
stimulus, but do necessitate computer training for signal classification.
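The stimulus-locked averaging described above can be sketched as follows. This is an illustrative Python/NumPy sketch on synthetic single-channel data (the thesis's own analysis used Matlab scripts); averaging N epochs shrinks uncorrelated noise by roughly a factor of sqrt(N):

```python
import numpy as np

def average_erp(eeg, triggers, fs, window_s=0.5):
    """Average stimulus-locked epochs from a continuous 1-D EEG trace.

    eeg      : continuous single-channel recording
    triggers : sample indices of stimulus onsets
    window_s : epoch length in seconds after each onset
    """
    n = int(window_s * fs)
    epochs = np.stack([eeg[t:t + n] for t in triggers])
    return epochs.mean(axis=0)  # noise shrinks roughly as 1/sqrt(len(triggers))

# Synthetic demo: a small evoked deflection buried in noise
fs = 1000
erp_true = np.exp(-((np.arange(500) - 100) / 30.0) ** 2)   # peak near 100 ms
rng = np.random.default_rng(2)
eeg = rng.normal(0, 1.0, 200_000)
triggers = np.arange(100, 199_000, 1000)                   # 199 stimuli
for t in triggers:
    eeg[t:t + 500] += erp_true
avg = average_erp(eeg, triggers, fs)
print(int(np.argmax(avg)))  # recovered peak latency in samples, near 100
```

With single-trial noise of comparable amplitude to the evoked response, the component is invisible in any one epoch but emerges cleanly in the average.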
In this study we used auditory ERP signals coupled with a motor imagery task, as will
be later described below.
Signal processing
After the EEG signals have been acquired, preprocessing and feature selection are
done to reduce the dimensionality of the multi-channel signal. Typical preprocessing
algorithms include Independent Component Analysis (ICA) (Qin et al., 2005) and
sparse factorization (Li et al., 2006), aiming at choosing the most influential
electrodes and/or frequency bands, thus reducing the amount and dimensionality of
the data presented to the classifier.
Many EEG signal features were studied for BCI use. Those include amplitude values
of EEG signals; for example, Kaper et al. (2004) used the amplitude of the P300 ERP
component, which will be described below. Band powers (BP) were used by
Pfurtscheller et al. (1997) to distinguish between left and right hand motor imagery,
looking for the frequency bands exhibiting the greatest differences between the two
conditions for each subject. Power spectral density (PSD) values were used by Millán
et al. (2003b) by calculating the proportion of energy between frontal and posterior
locations in the mental tasks of motor imagery, mental rotations, mental calculations
and word association. Time-frequency features were researched by Wang et al. (2004)
in the following manner: the EEG signal was decomposed into a series of frequency
bands, and the instantaneous power was represented by the envelope of oscillatory
activity, which formed the spatial patterns for each electrode at a time-frequency grid.
In our study, preprocessing and feature selection included electrode selection, based
on performance, as well as signal normalization by means of dividing by the mean
value of the ERP signal.
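The normalization step can be sketched as below: divide the epochs by a single per-session scale factor, the mean value of the signal-averaged ERP. This is an illustrative Python sketch (details such as per-electrode handling are assumptions for demonstration; the thesis's own analysis used Matlab scripts):

```python
import numpy as np

def normalize_by_erp_mean(samples):
    """Normalize each epoch by the mean value of the averaged ERP.

    samples : 2-D array, one stimulus-locked epoch per row.
    Dividing by a single per-session scalar keeps epochs comparable
    across sessions without distorting the shape of individual trials.
    """
    erp = samples.mean(axis=0)     # signal-averaged ERP
    scale = erp.mean()             # scalar: mean value of the ERP
    return samples / scale

rng = np.random.default_rng(3)
epochs = 5.0 + rng.normal(0, 1, (150, 1024))   # fake offset-dominated epochs
normed = normalize_by_erp_mean(epochs)
print(round(normed.mean(), 2))  # ≈ 1.0 after scaling
```

Because the scale factor is recomputed per session, this also absorbs slow gain drifts between recording sessions.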
Classification
There has also been extensive research on classifying algorithms, with the aim of
classifying EEG signals created by different mental tasks into several classes, which
will correspond to several computer commands. To name a few: linear discriminant
analysis (LDA) (Pfurtscheller, 1999, Bostanov, 2004, Garret et al., 2003, Scherer et
al., 2004), which entails finding a linear transformation for better discrimination
between classes; support vector machines (SVMs) (Rakotomamonjy et al., 2005,
Garret et al., 2003, Blankertz et al., 2002), which classify by creating a separating
hyper-plane between the data points of each class, maximizing the margins between the
data sets; and multi-layered perceptrons (MLPs) (Anderson et al., 1996, Haselsteiner
et al., 2000), which are artificial neural networks with hidden layers capable of
non-linear generalization. All have been studied with varying degrees of success. A good
review, explaining the BCI classification challenge and stating the success of each
method, can be found in Lotte et al. (2007). It would seem that the non-linear methods
(SVM, MLP) should have an advantage over the linear method (LDA), as classification
of noisy multidimensional time series and extraction of complex spatial
and temporal patterns are the specialty of non-linear machine learning techniques, but a
comparison done by Garret et al. (2003) reveals that their classification results are only
slightly better than those of the LDA technique.
As machine learning techniques tend to take a heavy toll on both offline analysis for
training and online processing during BCI operation, our proposed BCI
paradigm will use a much simpler comparison-based classification scheme, which
will be described in detail below.
A common BCI experimental protocol involves acquiring EEG data while asking the
subject to periodically perform the specific mental tasks as a response to given cues.
The entire data set is then separated into a training set and a testing set. The training set
is used to train the classifier of the BCI system, using whatever machine learning
techniques are being researched, while the testing set is used to validate, or measure,
the success rate of the trained classifier.
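The train/test protocol, combined with the comparison-based scheme mentioned above (each raw sample compared against the two condition ERPs), can be sketched as follows. This is an illustrative Python sketch on synthetic data; the Euclidean distance used here is a stand-in assumption, as the study's actual comparison measure is described later in the text:

```python
import numpy as np

def train_templates(train_epochs, train_labels):
    """Average the training epochs of each condition into two ERP templates
    (labels assumed to be 0 = baseline and 1 = action for this sketch)."""
    labels = np.asarray(train_labels)
    return {c: train_epochs[labels == c].mean(axis=0) for c in (0, 1)}

def classify(epoch, templates):
    """Assign a single raw epoch to the nearer ERP template."""
    d = {c: np.linalg.norm(epoch - tpl) for c, tpl in templates.items()}
    return min(d, key=d.get)

# Synthetic demo: two noisy conditions differing in a late deflection
rng = np.random.default_rng(4)
n, length = 150, 256
base = np.zeros(length)
act = np.zeros(length); act[150:200] = 1.5          # extra late component
X = np.concatenate([base + rng.normal(0, 1, (n, length)),
                    act + rng.normal(0, 1, (n, length))])
y = np.array([0] * n + [1] * n)
train = np.concatenate([np.arange(0, 100), np.arange(150, 250)])
test = np.setdiff1d(np.arange(2 * n), train)
tpl = train_templates(X[train], y[train])
acc = np.mean([classify(X[i], tpl) == y[i] for i in test])
print(f"test accuracy: {acc:.2f}")
```

Training here reduces to averaging, which is why such a scheme needs neither machine learning nor long calibration sessions.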
The auditory ERP
Auditory ERPs are characteristic changes in scalp potentials elicited as a response to
auditory stimuli, visualized by averaging EEG samples time-locked to those stimuli.
Donchin et al. (1978) classified ERP components into exogenous and endogenous
ERP components. The exogenous components are mainly determined by the external
stimulus characteristics and are therefore relatively stable in terms of latency and
amplitude whereas endogenous components mostly depend on the subject’s intents
and actions, exhibiting a great variability in both latency and amplitude, usually in
accordance with the subject’s varying internal state and behavior.
The exogenous ERP components are usually divided into early, middle and late
according to their signal latency. Early components are believed to stem from
brainstem responses occurring within the first 10-12 milliseconds from stimulus
onset. Middle components manifest in the first 50 milliseconds after stimulus onset
and are believed to reflect early primary auditory cortex processing of the stimulus.
Late components include the relatively large N1 and P2 waves. N1 denotes a
negativity that usually peaks at about 100 milliseconds from stimulus onset and P2
indicates a positivity that peaks at around 180-200 milliseconds from stimulus onset.
N1 is preceded by a small positivity, denoted P1, which peaks at around 50
milliseconds. These late components are believed to stem from auditory cortex as
well. The N2 and P3 (P300) components, with a latency of 300-450 milliseconds, are
considered to be endogenous components elicited as a response to rare or subjectively
significant stimuli. Figure 7 shows the auditory cortex (Kraemer et al., 2005). Figure
8 shows auditory, early, middle and late, ERP components (Näätänen, 1992).
Figure 7: An inflated rendering of the left hemisphere illustrating primary auditory cortex (PAC, red)
and auditory association cortex (green). The superior temporal sulcus (STS) and inferior temporal
sulcus (ITS) are indicated for reference. From Kraemer et al., 2005.
Auditory ERP based BCI paradigm
The following example of an auditory ERP based binary BCI (Hill et al., 2004) is
based on attentional modulation of ERP signals produced by attending to right-ear vs.
left-ear auditory stimuli. The stimuli presented to the subject consisted of a series of
rapid beeps concurrently presented to both ears, but differing in phase and frequency
(eight 1500 Hz beeps to the right ear and seven 800 Hz beeps to the left ear). Before each trial a
visual cue was presented in the form of an arrow pointing either to the right or the
left, indicating which side the subject should attend to. In order to keep the subject
alert an attentional task was presented: in each series of beeps a random number of
frequency-deviant stimuli were presented (1650 Hz to the right and 880 Hz to the left)
and the subject was asked to count how many deviants there were in the series
presented to the ear he/she was instructed to attend to. The ERPs produced by attending
to each side were calculated (averaged) and used to train a support vector machine
(SVM). It was found that single-trial ERPs can be classified in this way. This paradigm
necessitates a 2-hour training session as well as offline machine learning.
In our study we would like to research the feasibility of using an auditory ERP signal,
by investigating whether it can be modulated by motor imagery on the time scale of
the auditory ERP. Such an approach would take advantage of the absence of any need for user
training, and may increase BCI speed as well as decrease computer training time.
Current challenges
Current challenges of the EEG based BCI research field, which stem from the low
signal-to-noise ratio of the EEG signal, include high variability in the following:
1. Inter-subject variability – the signal produced, from each mental task, for
each subject, varies greatly from the signals produced by other subjects.
Additionally, using a cap for electrode placement, as well as the natural
differences between subjects' scalp structures, means slightly different scalp
locations for each subject, which of course adds to the inter-subject variability
of the EEG signal.
2. Inter-session variability – even the same subject, studied in different
sessions, exhibits great variability, due to different states of mind, emotional
and physical conditions, and, again, the slightly different placement of
electrodes. As machine learning techniques usually necessitate a large training
data set, several data recording sessions are needed, adding to the inter-session
variability problem.
3. Intra-session variability – even within a single session with the same subject,
changes in state of mind, fatigue, user adaptation and EEG gel
dehydration all contribute to variability in the EEG signal throughout the
session.
While intra-session variability can be dealt with by periodically adapting the trained
classifier with newly acquired data, inter-subject and inter-
session variability are harder to deal with, especially when using BCI paradigms with
machine learning techniques that generalize over data taken from multiple sessions or
multiple subjects.
It has been suggested by the Berlin BCI group (Blankertz et al., 2006) that a short
calibration stage may help to overcome these problems by calibrating to each subject
individually, before each session. They presented a 20-minute training session using
an elaborate machine learning technique, decreasing the 50-100 hours usually
needed for training a BCI system, and thus opening the path to individual per-session
calibration.
Research goals
In this study we propose an EEG-based BCI paradigm with the aim of reducing
training time, as a means of addressing the aforementioned challenges of inter-subject
and inter-session variability. A shorter training time might be considered a
calibration stage which, if used before every BCI session, calibrates to each subject
individually as well as to each session individually, thus eliminating the need for
inter-subject and inter-session generalization. Each BCI session will begin with a short
calibration stage that is valid for that session alone, and for that specific
subject.
Our paradigm consists of using auditory ERP signals as a brain-computer
synchronization mechanism, which (to borrow a metaphor from computer science)
will serve as a common clock, timed by the computer as a repetitive series of audio
tones presented to the user. Some of the tones presented will be passively listened to
by the subject, creating an auditory ERP signal to be considered as a baseline signal,
while the other tones presented will be coupled to an imagined motor task, performed
simultaneously with the presented tone, which will be considered as an action signal.
We hypothesized that the imagined motor task would change the auditory ERP signal
enough to be distinctly different from the baseline auditory ERP signal, and thus
detectable as an action signal. The result will be a string of baseline and
action signals, corresponding to a string of zeros and ones, to be used by the BCI
system as data passed from the brain to the computer.
Although classic BCI experiments aim to classify two or more mental actions, for
example: left hand/right hand/feet imagined movements, while trying to learn the
differences between them, a recent article by Schalk et al. (2008) from the Wolpaw
laboratory, advocates using signal detection as opposed to signal classification, by
learning a baseline signal and trying to detect an action signal of a single mental
activity by comparison to the baseline.
Hence, our research question is: does endogenous motor imagery, produced during an
audio tone, change the resulting auditory ERP signal enough to be detected in
comparison to a baseline auditory ERP signal produced by passively listening to the
same audio tone? We should note that this is not a BCI study per se, but a study of
the viability of using said signals in a BCI system. If indeed motor imagery can
change auditory ERP signals enough to be classified, the resulting signals may be
used for BCI purposes.
Methods
Participants, hardware and software
Seven healthy subjects (6 male, 1 female), aged 24-35 (mean = 33 ± 4.19), were
fitted with EEG electrodes, using a BioSemi Active 2 system, according to the 10/20
electrode placement scheme. The electrodes were attached using an elastic cap and a
conductive gel. Electrodes were placed on frontal locations (Fz, F1-6) corresponding
to the frontal cortex, central locations (Cz, C1-6) corresponding to the sensorimotor
cortex, and one parietal location (Pz), corresponding to the parietal cortex.
The EEG readings from the subject’s scalp were digitized by the BioSemi's A/D box
using a sampling rate of 2048 Hz. The LabView based software ActiView Light,
which ships with the BioSemi system, was modified and expanded to provide data
acquisition, data saving, trigger and audio tone synchronization as well as the
graphical interface for the user.
A pulse generator (Stanford Research Systems, Model: DS345) was used to receive a
trigger signal from the modified ActiView Light software, and generate both a trigger
signal to the BioSemi system and an audio tone, delivered to the subject using Sony
earphones. Matlab scripts were used for offline analysis of the results.
Experimental setup
The subjects were seated comfortably on a chair, facing a computer screen. The
electrodes were then attached to said locations, and the earphone was placed in their
right ear. They were then given instructions and recording began.
The modified Actiview software managed the experimental session, by sending
periodic TTL trigger signals to the pulse generator producing the audio tones. An
audio tone with a frequency of 250 Hz and a duration of 500 milliseconds was
presented to the subject every 3 seconds. Longer inter-stimulus intervals (ISIs) did not
produce better detection results, while shorter ISIs proved too fast for subject comfort.
The volume was set individually by the subject, loud enough to be heard, yet low
enough not to produce discomfort.
The display consisted of the following:
• Three LED-like flashing green lights, aligned horizontally, indicating when the
audio tone would be presented by flashing in succession.
• A text box indicating the number of tones delivered so far, both as a means of
helping the subject not to despair of the long experiment (only 20 minutes, but
300 tones!), and as a corrective measure, as the subject was instructed to say
the number aloud in case of a wrongly executed task.
• A text box indicating to the subject, 3 seconds before the tone was sounded,
what the next mental task should be: either the word “Listen”,
corresponding to the baseline ERP produced by passively listening to the tone,
or the word “Row”, corresponding to producing the motor imagery of
rowing a boat with both hands and legs during the audio tone stimulus.
Figure 9 shows a schematic illustration of the experimental setup.
Figure 9 – Schematic view of experimental setup.
The motor imagery of rowing a boat was chosen after several motor imagery tasks
were preliminarily tested on one of the subjects; it was found to have the greatest
impact on the auditory ERP signal, presumably because it involves motor imagery of
four limbs, occupying both sides of the motor cortex and also demanding more
attention. The acquisition part of the managing software received and recorded to disk
both the subject's EEG signals and the trigger information received from the
pulse generator. Each subject was exposed to 300 audio tones with a duration of 500 ms
and a frequency of 250 Hz. Each recorded data set contained 150 randomly interlaced
baseline samples (listening to the tone only) and 150 randomly interlaced action
samples (hearing the tone while executing the motor imagery task).
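The session structure described above — 300 tones split evenly between randomly interlaced baseline and action trials — can be sketched as follows. This is an illustrative Python sketch (the thesis used a modified ActiView Light program); the function name and seed are ours, not the original code.

```python
import random

def make_trial_sequence(n_per_class=150, seed=0):
    """Build a randomly interlaced sequence of baseline ("Listen") and
    action ("Row") trial labels, n_per_class of each."""
    rng = random.Random(seed)
    trials = ["Listen"] * n_per_class + ["Row"] * n_per_class
    rng.shuffle(trials)
    return trials

sequence = make_trial_sequence()
print(len(sequence), sequence.count("Listen"), sequence.count("Row"))
```

Shuffling a balanced label list guarantees exactly 150 trials of each class while keeping their order unpredictable to the subject.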
A major issue in EEG-based BCI research, as in EEG research in general, is
artifact removal. Artifacts are transient voltage changes that do not correspond to
cortical neural activity, but rather to muscle electrical activity recorded from scalp
muscles, usually related to blinking. Artifacts are usually removed from the data
manually, though some automatic artifact removal methods exist. In our study, to
avoid artifacts in the data, subjects were asked to blink only during the ISI, as that
data is not used for training or testing purposes. For the same reason, when subjects
erred in their task, they could report the number of the erroneous sample during the ISI.
Data analysis and results
As mentioned above, each data set contained 150 samples of baseline signals and
150 samples of action signals. Each sample consisted of 1024 data points, which, at a
sampling rate of 2048 Hz, corresponds to a latency of 500 milliseconds. This latency
was chosen both for its classification success rates and because of the
biological basis of the auditory ERP signal in the 500 milliseconds post stimulus, as
mentioned in the Introduction. The entire data set was then separated into a training
data set, used to train the BCI system, and a testing data set, used to measure the
success of the system's classification/signal detection capabilities after it had
been trained.
Normalization
Figure 10 shows a 15 minute EEG measurement from electrode Fz of subject #1.
Figure 10: 15 minutes of raw EEG data from subject #1’s Fz electrode. A drift in voltage can be seen
over the course of the session, probably due to conductive gel dehydration.
As can be seen, the signal’s DC value changes dramatically over the course of the
recording session. This voltage drift is a known EEG issue, probably due to
conductive gel dehydration as the session progresses. Since we intend to extract ERPs
from the data, which means averaging many time-locked samples, such a change in
voltage affects the DC value of the ERPs, hindering any reliable comparison between
each new sample in the testing data set and the two ERPs (baseline and action).
Figure 11 shows, for the same electrode, subject and session, two ERP
signals derived by averaging 100 samples of the baseline condition (red) and 100
samples of the action condition (blue), without any normalization. As can be seen, the
resulting ERP signals of the two conditions are too far apart (different DC values) to
be used for comparison purposes, presumably because of the voltage drift affecting
the averaged DC value.
Figure 11: Baseline condition ERP in red, action condition ERP in blue. The signals are too far apart to
be used as classifying signals, due to the voltage drift shown in figure 10.
Figure 12: The average DC drift of each subject, averaged over the six electrodes selected for
classification (See Electrode Selection below).
Figure 12 shows, for each subject, the DC drift averaged over the six selected
electrodes (see Electrode selection below). As can be seen, several subjects exhibit
an upward drift (increase in voltage) while others exhibit a downward drift (decrease
in voltage), as illustrated in figure 10.
To deal with this issue, a normalization step is needed prior to any comparison-based
classification. Several normalization techniques were tested, and we chose the
one with the best results. This normalization simply divides each ERP signal, as well
as each raw signal from the testing set, by the mean value of the entire
500 millisecond (1024 data point) sample. Figure 13 shows the result of this
normalization step on the above ERP signals. As can be seen, the signals are almost
superimposed as a result, allowing a more reliable comparison to take place.
Figure 13: The same ERP signals from figure 11 after a normalization step of dividing by the mean.
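The normalization step can be sketched as follows — a minimal Python illustration (the actual analysis was done in Matlab), with a toy drifting signal standing in for real EEG data.

```python
import numpy as np

def normalize_sample(sample):
    """Divide a 500 ms (1024-point) sample by its mean value, removing the
    slow DC drift so that samples recorded at different points in the
    session become comparable."""
    return sample / np.mean(sample)

# Toy example: the same waveform riding on two different DC levels,
# mimicking the gel-dehydration drift shown in figure 10.
t = np.linspace(0, 0.5, 1024)
wave = np.sin(2 * np.pi * 4 * t)
early = wave + 100.0  # recorded early in the session
late = wave + 140.0   # same waveform after the DC level has drifted

dc_gap_before = abs(np.mean(early) - np.mean(late))
dc_gap_after = abs(np.mean(normalize_sample(early)) - np.mean(normalize_sample(late)))
print(dc_gap_before, dc_gap_after)  # the DC gap collapses after normalization
```

Dividing by the mean forces every normalized sample to have a mean of exactly 1, so the slowly drifting DC offset no longer dominates the MSE comparison.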
Training and testing
The training stage consisted of averaging the entire training set, so as to reduce the
noise and expose the ERP signal of each condition (the baseline condition of
passively listening to the audio tone, and the action condition of hearing the tone while
simultaneously performing the motor imagery task).
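The training stage reduces to averaging the time-locked samples of each condition — a minimal sketch, assuming samples are stored as rows of a NumPy array (the function name is ours):

```python
import numpy as np

def train_erp(training_samples):
    """Average the (normalized) training samples of one condition to
    suppress noise and expose the underlying ERP template."""
    return np.mean(np.asarray(training_samples), axis=0)

# One ERP template per condition (and per electrode), e.g.:
# baseline_erp = train_erp(baseline_training_samples)
# action_erp = train_erp(action_training_samples)
```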
The testing stage consisted of classifying the test samples into the two classes
(baseline and action) and then measuring the success of the classification by
comparing the inferred class of each sample to its real class. Classification: for each
electrode, each raw sample was normalized and its distance from the ERP of each
class was measured using the mean squared error (MSE). If the raw sample was closer
(shorter distance, i.e. smaller MSE) to the baseline ERP signal than to the action ERP
signal, it was classified as a baseline sample; otherwise it was classified as an action
sample. A sample was inferred to be a member of a certain class if most of the
electrodes classified it as
such. Measuring success: while recording the EEG data we recorded each sample’s
class, corresponding to the task presented to the subject (baseline, “Listen”; action,
“Row”). Knowing which sample belonged to which class, we measured the
percentage of baseline samples correctly inferred as such, and the
percentage of action samples correctly inferred as such. As a measure of
overall classification success, i.e. the percentage of all samples correctly
classified, we calculated the average of these two measures.
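The classification and scoring procedure above can be sketched in Python — an illustrative version of the MSE comparison, the per-electrode majority vote, and the overall success measure (function names are ours, not the thesis code):

```python
import numpy as np

def mse(a, b):
    return np.mean((np.asarray(a) - np.asarray(b)) ** 2)

def classify_electrode(sample, baseline_erp, action_erp):
    """One electrode's vote: the class whose ERP template is closer in MSE."""
    if mse(sample, baseline_erp) < mse(sample, action_erp):
        return "baseline"
    return "action"

def classify_trial(samples, baseline_erps, action_erps):
    """Majority vote over the electrodes decides the trial's class."""
    votes = [classify_electrode(s, b, a)
             for s, b, a in zip(samples, baseline_erps, action_erps)]
    return "baseline" if votes.count("baseline") > len(votes) / 2 else "action"

def overall_success(true_labels, predicted_labels):
    """Average of the two per-class accuracies, as described in the text."""
    rates = []
    for cls in ("baseline", "action"):
        idx = [i for i, t in enumerate(true_labels) if t == cls]
        rates.append(sum(predicted_labels[i] == cls for i in idx) / len(idx))
    return sum(rates) / 2

# Toy demo with three electrodes and flat templates:
b_erps = [np.zeros(8)] * 3
a_erps = [np.ones(8)] * 3
trial = [np.full(8, 0.1)] * 3  # much closer to the baseline template
print(classify_trial(trial, b_erps, a_erps))  # → "baseline"
```

Averaging the two per-class accuracies, rather than pooling all samples, keeps the overall measure meaningful even if the test set were class-imbalanced.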
Figure 14 shows, for all 7 subjects, the ERPs of the 10 medial electrodes exhibiting
visible differences; baseline ERPs are marked in red and action ERPs in blue. The
6 electrodes marked in red are those used in the final analysis.
Figure 14: Normalized ERPs of all 7 subjects' 10 electrodes exhibiting visible differences, baseline
ERPs are marked in red while action ERPs are marked in blue. The 6 electrodes marked in red are the
ones used in the final analysis.
Figure 15 shows the average classification success results for all 7 subjects using data
from all 15 attached electrodes. 99 samples of each condition were used for training
(averaged for ERP signal extraction) and 80 samples were used for testing. As can be
seen, subjects 2, 4, 6 and 7 correctly classified only 60% or less of the testing data set,
a success rate that cannot be considered classification at all, as it is too close to
chance level (50%). In fact, only subject 5 correctly classified more than 70% of the
testing samples. To improve the classification results, an electrode selection step is
needed, so as to use only the relevant electrodes, i.e. those presenting the biggest
differences between the two tested conditions.
Figure 15: Classification success rates of the 7 subjects, using data from all 15 electrodes. The blue bar
indicates the success rate of correctly classifying the baseline condition samples, the green bar indicates
the same for action condition samples, and the brown bar is their average, indicating the overall success
percentage.
Electrode selection
Several electrode sets were tested, in search of the electrodes that, when combined,
gave the best classification success rates for all subjects. The selection was made
manually, so the chosen set is not necessarily optimal for each subject or for
all subjects. Of the sets tested, the best electrodes were found to be the three
medial frontal electrodes F1, Fz and F2, as well as the three
medial central electrodes placed over the motor cortex: C1, Cz and C2. This is not
surprising, as auditory ERP signals are best acquired over the frontal cortex, which is
involved in attention, and motor imagery signals are best acquired over the motor
cortex.
Figure 16 compares the overall success rates of each subject with 15 and with 6
electrodes. As can be seen, selecting the best set of electrodes from the several sets
tested improved classification success rates for all the subjects, dramatically for some.
Figure 16: Comparison of overall success rates using data from all 15 recording electrodes and using
data from only the best 6 electrodes. Classification success rates improved for all the subjects.
Figure 17 shows in more detail the detection percentages for each of the 7
subjects, using a training data set of 99 samples per condition and a testing data
set of 80 samples. As can be seen, endogenously produced motor imagery changes the
baseline, exogenously evoked auditory ERP signal enough to be classified with an
average success of 72.5% across subjects. The best subject exhibited an 82.5%
classification rate for action samples and a 75% rate for baseline samples, amounting
to 78.75% overall.
All this was achieved without any elaborate preprocessing or machine learning
techniques of the kind usually seen in BCI paradigms, but rather with the noise
reduction offered by ERP averaging and a simple MSE-based comparison. As fewer
than 100 samples per condition were used for training, with a 500 millisecond audio
tone latency and a 3 second ISI, training was completed in less than 12 minutes. This
is 8 minutes shorter than the calibration time proposed for the Berlin BCI. It should
be noted, however, that they report an average detection rate of 89.5%, compared to
our 72.5%, having used both machine learning and user feedback.
Figure 17: Classification success rates of the 7 subjects, using data from the best 6 electrodes. The blue
bar indicates the success rate of correctly classifying the baseline condition samples, the green bar
indicates the same for action condition samples, and the brown bar is their average, indicating the
overall classification success percentage.
It might be asked whether the changes induced in the auditory ERP by motor imagery
can be characterized, so as to provide a general rule for BCI classification algorithms.
Comparing the different subjects’ signals in figure 14, it can be seen that although
the general auditory ERP characteristics are present, not only does the baseline signal
differ in amplitude and latency across subjects, but the differences
produced by adding the motor imagery task to the auditory ERP are also distinctly
different for each subject. Figure 18 shows, for the six chosen electrodes of each
subject, the difference between the action ERP signal and the baseline ERP signal,
normalized for better comparison across subjects. As can be seen, the differences
induced by motor imagery on the auditory ERP vary across subjects. So,
although we cannot state a general rule for these differences, this underscores
the need for an individual, per-subject calibration stage.
Figure 18: Six electrodes, 7 subjects: differences between action and baseline ERP signals. Great
inter-subject variability can be seen.
A control analysis was performed to verify that the difference between the baseline and
action ERP averages is larger than the difference between two baseline averages or the
difference between two action averages. 140 baseline samples were smoothed (using a
moving average with a 64 point window), normalized by subtracting the minimum, and
averaged. The same was done with 140 action samples. The difference between the
baseline average and the action average is depicted in blue for each subject's 6 electrodes
in figure 19. This is compared to the difference between the smoothed, normalized average
of 70 baseline samples and that of a different 70 baseline samples (green line), as well as
the difference between the smoothed, normalized average of 70 action samples and that of
a different 70 action samples (red line). As can be seen, for some subjects the real
difference (baseline vs. action) is greater, in most electrodes, than the two control
differences (baseline vs. baseline, action vs. action), although a few subjects do not
exhibit such large differences.
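The control processing just described — smoothing with a 64-point moving average, subtracting the minimum, then averaging — might look like this in Python. This is an illustrative reading of the text, not the original Matlab code; the function names are ours.

```python
import numpy as np

def smooth(sample, window=64):
    """Moving average with the given window, as used in the control analysis."""
    kernel = np.ones(window) / window
    return np.convolve(sample, kernel, mode="same")

def control_average(samples, window=64):
    """Smooth each sample, normalize it by subtracting its minimum,
    then average across samples."""
    processed = [s - s.min() for s in (smooth(x, window) for x in samples)]
    return np.mean(processed, axis=0)

# The curves compared in figure 19 are then differences of such averages:
# the blue line is control_average(baseline) - control_average(action), and
# the green/red lines are the analogous half-vs-half within-class differences.
```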
Figure 19: Six electrodes of all seven subjects showing the difference between the average of 140 baseline
samples and the average of 140 action samples (blue line), the difference between the average of 70 baseline
samples and the average of other 70 baseline samples (green line) and the difference between the average of 70
action samples and the average of other 70 action samples (red line).
Figure 19: (Continued) Six electrodes of all seven subjects showing the difference between the average of 140
baseline samples and the average of 140 action samples (blue line), the difference between the average of 70
baseline samples and the average of other 70 baseline samples (green line) and the difference between the
average of 70 action samples and the average of other 70 action samples (red line).
To test how averaging an increasing number of samples reduces noise and exposes the
underlying ERP, we examined the standard deviation of the difference between the
baseline and action signal averages as a function of the number of samples averaged. For
subject #1, data was taken from the 6 selected electrodes, and the standard deviation was
calculated over the difference between baseline and action averages of 1 to 100 samples
per condition. For each difference between averages of size n (1 to 100) we measured the
distance from the difference between the ERPs obtained by averaging 100 samples.
Figure 20 shows the log-log analysis of the average over the six electrodes of subject #1.
We found that averaging an increasing number of samples (1 to 100) reduces the standard
deviation of the difference between baseline and action signals, converging to the
difference produced by averaging 100 samples. The reduction behaves like Y ~ n^(-0.5),
where n is the number of samples used for averaging (asterisks). This means that the
sample means become more precise as the sample size increases: as more samples are
averaged, the noise is reduced, exposing the underlying ERP signal.
Figure 20: Subject #1's 6 electrodes: log-log analysis of the standard deviation of the difference between
baseline and action signal averages as a function of the number of samples used for averaging. The difference
decreases as more samples are used for averaging, and behaves like Y ~ n^(-0.5) (asterisks).
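The n^(-0.5) noise-reduction behavior can be reproduced with synthetic data — a toy Python check (not the thesis data) in which a fixed waveform plus independent Gaussian noise stands in for single trials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trials: a fixed underlying "ERP" plus independent Gaussian noise.
true_signal = np.sin(np.linspace(0, 2 * np.pi, 256))
trials = true_signal + rng.normal(0.0, 1.0, size=(100, 256))

# The residual noise in an n-trial average should shrink like n**-0.5,
# so the ratio stds[0] / stds[k] should grow roughly like sqrt(n).
ns = [1, 4, 16, 64]
stds = [np.std(trials[:n].mean(axis=0) - true_signal) for n in ns]
print([stds[0] / s for s in stds])
```

This is the standard-error-of-the-mean argument: averaging n independent noisy trials leaves residual noise of standard deviation sigma/sqrt(n).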
To conclude the Results chapter, I would like to address a further possible criticism
regarding the differences between the ERPs of the two classes. One might
ask whether the differences between the ERPs really stem from different brain
activities, or whether they are just random differences resulting from averaging
different samples. To verify that the differences are meaningful and not random, a cross
averaging analysis was done for the 6 electrodes used in the above classification. For
every subject, two conditions were tested, each with two data sets:
1. Same-class data sets consisting of:
a. 100 baseline samples.
b. 100 action samples.
2. Randomly scrambled data sets consisting of:
a. 50 baseline samples + 50 action samples.
b. 50 action samples + 50 baseline samples.
For illustration, figure 21 shows, for all subjects and for the six used electrodes,
the ERPs of the same-class and randomly scrambled data sets.
Figure 21: For all 7 subjects, the left column shows the randomly scrambled cross averaged signals,
while the right column shows the actual ERPs. Looking carefully, it can be seen that the two conditions
(baseline and action) differ more in the real ERPs than in the scrambled cross averages.
To quantify the difference between the same-class data sets and the scrambled
data sets, the following analysis was performed. For each condition, each data set was
averaged and normalized, and the distance between the two resulting graphs was
measured using MSE. The relative differences between the same-class and scrambled
data sets (same-class MSE / scrambled MSE) are shown for all subjects in figure 22.
As can be seen, the same-class averages (actual ERPs) of the two conditions exhibit
larger differences than the scrambled (cross averaged) averages. It is thus safe to
assume that the differences between the ERPs are non-random, and to conclude that
the motor imagery task, performed in sync with an audio tone, changes the auditory
ERP evoked by that same tone enough to be classifiable and used for brain-computer
interfacing.
Figure 22: Results of cross averaging for all subjects: comparing the relative differences between MSE
distances from averages of same-class and randomly scrambled data sets.
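The quantification above can be sketched as a same-class vs. scrambled MSE ratio — an illustrative Python version, under the assumption that "averaged and normalized" means dividing the average by its mean, as in the earlier normalization step:

```python
import numpy as np

def normalized_average(samples):
    """Average the samples, then divide by the mean (the normalization
    used throughout this chapter)."""
    avg = np.mean(samples, axis=0)
    return avg / np.mean(avg)

def mse(a, b):
    return np.mean((a - b) ** 2)

def separation_ratio(baseline, action):
    """Same-class MSE / scrambled MSE: values above 1 mean the real
    class difference exceeds the chance difference of mixed averages."""
    same = mse(normalized_average(baseline), normalized_average(action))
    h = len(baseline) // 2
    mix1 = np.concatenate([baseline[:h], action[:h]])
    mix2 = np.concatenate([action[h:], baseline[h:]])
    scrambled = mse(normalized_average(mix1), normalized_average(mix2))
    return same / scrambled
```

On synthetic data where the two classes genuinely differ, this ratio comes out well above 1, mirroring the pattern in figure 22.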
Conclusions
In this study we have presented a BCI paradigm based on the modulation of
exogenously evoked auditory ERP signals by a simultaneous, endogenously executed
motor imagery task. We have shown that the motor imagery task changes the auditory
ERP signal enough to be classified for BCI use, without the computationally and
time-costly machine learning techniques usually incorporated in classic BCI
paradigms.
We have also shown the potential for shortening the training period, opening the
possibility of an individual per-subject and per-session calibration stage on the
order of minutes, thereby addressing the current BCI challenges stemming from
inter-subject and inter-session variability.
Even though 70-80% correct classification does not seem like much, and is surely not
enough to let a locked-in patient control a wheelchair, it does serve as a
proof of concept for our BCI paradigm. It is also an acceptable success rate in the BCI
research field, especially for first-time BCI users not accustomed to, or properly
trained in, producing highly synchronized and consistent mental activities. The 20%
of incorrectly classified samples might be attributed to this fact.
Apart from that, in the BCI research field three different brain-computer adaptation
levels are recognized and defined (Wolpaw et al., 2002):
1. The computer adapting to the user’s brain signals via classification algorithms.
2. Periodic adjustments for reducing intra-session variability.
3. The brain adapting its signals to better control the BCI, through presentation
of feedback to the user.
In this study, only the first level of adaptation was tested, thus not exploiting the full
power of brain-computer adaptation.
Additionally, as mentioned above, no machine learning techniques were used, and
these might improve the classification success rates. Further study in this area might
be required for a definite answer, although using such techniques introduces a
tradeoff between better accuracy and a heavier burden on real-time computation.
In this study, two free parameters were fixed for all subjects: the latency of
the samples (0–500 milliseconds) and the electrodes used for
classification (F1/2/z, C1/2/z). Although these settings produced robust classification
results across subjects, a true per-subject calibration would require finding the best
latency and electrodes for each subject. Future studies of our BCI paradigm should
involve an automated parameter optimization technique to improve classification
accuracy.
Finally, on a more practical note, an audio tone presented to one ear might
prove irritating in long-term BCI use. To make the system more bearable,
future studies should examine the viability of near- or sub-threshold
auditory stimuli in producing ERP signals.
References
C.W. Anderson, and Z. Sijercic, “Classification of EEG signals from four subjects
during five mental tasks”, Solving Engineering Problems with Neural Networks:
Proc. Int. Conf. on Engineering Applications of Neural Networks, 1996.
J. Bayliss, “A Flexible Brain-Computer Interface”, PhD thesis, Department of
Computer Science, University of Rochester, 2001.
N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J.
Perelmouter, E. Taub, and H. Flor, “A spelling device for the paralyzed”, Nature, vol.
398, pp. 297–298, 1999.
N. Birbaumer, A. Kübler, N. Ghanayim, T. Hinterberger, J. Perelmouter, J. Kaiser, I.
Iversen, B. Kotchoubey, N. Neumann, and H. Flor, “The thought translation device
(TTD) for completely paralyzed patients”, IEEE Transactions on Neural Systems and
Rehabilitation Engineering, 8, pp. 190–193, 2000.
B. Blankertz, G. Curio, and K. R. Muller, “Classifying single trial EEG: towards brain
computer interfacing”, Adv. Neural Inf. Process. Syst. (NIPS 01), 14, pp.157–64,
2002.
B. Blankertz, G. Dornhege, S. Lemm, M. Krauledat, G. Curio, K. R Muller, “The
Berlin Brain-Computer Interface: Machine Learning Based Detection of User Specific
Brain States”, Journal of Universal Computer Science, vol. 12, no. 6, pp. 581-607,
2006.
B. Blankertz, M. Krauledat, G. Dornhege, J. Williamson, R. Murray-Smith, and K.R.
Müller, "A note on brain actuated spelling with the Berlin Brain-Computer Interface",
In C. Stephanidis, editor, Universal Access in HCI, Part II, HCII 2007, volume 4555
of LNCS, pp. 759-768, 2007.
V. Bostanov, “BCI competition 2003–data sets Ib and IIb: feature extraction from
event-related brain potentials with the continuous wavelet transform and the t-value
scalogram”, IEEE Trans. Biomed. Eng. 51, pp. 1057–61, 2004.
S. Coyle, T. Ward, C. Markham, and G. McDarby, "On the suitability of near-infrared
(NIR) systems for next-generation brain-computer interfaces", Physiol Meas
25(4):815-22, 2004.
E. Donchin, K.M. Spencer, and R.S. Wijesinghe, “The mental prosthesis: Assessing
the speed of a P300-based brain-computer interface”, IEEE Transactions on
Rehabilitation Engineering, 8, pp. 174–179, 2000.
L.A. Farwell and E. Donchin, “Talking off the top of your head: toward a mental
prosthesis utilizing event-related brain potentials”, Electroenceph. Clin.
Neurophysiol., pp. 510—523, 1988.
G.N. Garcia, T. Ebrahimi and J.M. Vesin, "Human-Computer adaptation for EEG
based communication", Submitted to the IEEE EMBS conference, 2003.
D. Garrett, D.A. Peterson, C.W. Anderson, and M.H. Thaut, “Comparison of linear,
nonlinear, and feature selection methods for EEG signal classification”, IEEE Trans.
Neural Syst. Rehabil. Eng. 11, pp. 141–4, 2003.
E. Haselsteiner, and G. Pfurtscheller, “Using time-dependant neural networks for
EEG classification”, IEEE Trans. Rehabil. Eng. 8, pp. 457–63, 2000.
J.N. Hill, T.N. Lal, M. Schröder, T. Hinterberger, N. Birbaumer, and B. Schölkopf,
“Selective Attention to Auditory Stimuli: A Brain-Computer Interface Paradigm”,
Proceedings of the 7th Tübingen Perception Conference, 102. (Eds.) Bülthoff, H.H.,
H.A. Mallot, R. Ulrich and F.A. Wichmann, Knirsch Verlag, Kirchentellinsfurt,
Germany, 2004.
J.N. Hill, T. N. Lal, K. Bierig, N. Birbaumer, B. Schölkopf, "Attentional modulation
of auditory event-related potentials in a brain–computer interface", In IEEE
International Workshop on Biomedical Circuits and Systems, pp. 17–19, 2004
H.H. Jasper, “The ten-twenty electrode system of the international federation”,
Electroencephalography and Clinical Neurophysiology, 1(10), pp. 371–375, 1958.
J. Kalcher, D. Flotzinger, C. Neuper, S. Gölly, and G. Pfurtscheller, "Graz Brain-
Computer Interface II: towards communication between humans and computers based
on online classification of three different EEG patterns", Med. Biol. Eng. Comput.,
vol. 34, pp. 382-388, 1996.
M. Kaper, P. Meinicke, U. Grossekathoefer, T. Lingner, and H. Ritter, “BCI
competition 2003–data set IIb: support vector machines for the P300 speller
paradigm”, IEEE Trans. Biomed. Eng. 51, pp. 1073–6, 2004.
D.J.M. Kramer, C.N. Macrae, A.E. Green, and W.M. Kelley, “Musical imagery:
Sound of silence activates auditory cortex”, Nature, 434, 158, 2005.
T.N. Lal, T. Hinterberger, G. Widman, M. Schröder, N.J. Hill, W. Rosenstiel, C.E.
Elger, B. Schölkopf, and N. Birbaumer, “Methods towards invasive human brain
computer interfaces”, in Advances in Neural Information Processing Systems 17, L.
K. Saul, Y. Weiss, and L. Bottou, Eds. Cambridge, MA: MIT Press, pp. 737–744,
2005.
P.L. Lee, C.H. Wu, J.C. Hsieh, and Y.T. Wu, “Visual evoked potential actuated brain
computer interface: A brain-actuated cursor system”, Electr. Letts 41(15), pp. 832–
834, 2005.
Y. Li, X. Gao, H. Liu, S. Gao, "Classification of Single Trial Electroencephalogram
During Finger Movement", IEEE Trans. on Biomedical Eng., vol. 51, no. 6, pp.
1019–1025, 2004.
Y. Li, A. Cichocki, and S. Amari, "Blind Estimation of Channel Parameters and
Source Components for EEG Signals: A Sparse Factorization Approach", IEEE
Transactions on Neural Networks, Vol. 17, No. 2, pp. 419-431, 2006.
F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, “A review of
classification algorithms for EEG-based brain–computer interfaces”, J. Neural Eng. 4
R1–R13, 2007.
J. Malmivuo, R. Plonsey, “Bioelectromagnetism”, Oxford University Press, 13.6, pp.
234, 1995.
D.J. McFarland, A.T. Lefkowicz, and J.R. Wolpaw, “Design and operation of an
EEG-based brain-computer interface (BCI) with digital signal processing
technology”, Behav. Res. Methods Instrum. Comput. 29, pp. 337–345, 1997.
J. Mellinger, G. Schalk, C. Braun, H. Preissl, W. Rosenstiel, N. Birbaumer, and A.
Kübler, “An MEG-based brain-computer interface (BCI)”, NeuroImage, 36(3), pp.
581-93, 2007.
M.S. Middendorf, G.R. McMillan, G.L. Calhoun, and K.S. Jones, “Brain computer
interfaces based on the steady-state visual-evoked response”, IEEE Transactions on
Rehabilitation Engineering, 8, pp. 211–214, 2000.
J.d.R. Millán, “A local neural classifier for the recognition of EEG patterns associated
to mental tasks”, IEEE Transactions on Neural Networks, 13, pp. 678–686, 2002.
J.d.R. Millán, F. Renkens, J. Mouriño, and W. Gerstner, “Noninvasive brain-actuated
control of a mobile robot by human EEG”, in Proc. 18th Int. Joint Conf. Artif. Intell.,
2003.
J.d.R. Millán, and J. Mouriño, “Asynchronous BCI and local neural classifiers: an
overview of the adaptive brain interface project”, IEEE Trans. Neural Syst. Rehabil.
Eng. 11, pp. 159–61, 2003.
R. Näätänen, "Attention and brain function", Published by Lawrence Erlbaum
Associates, pp. 104, 1992.
R. Palaniappan, “Brain computer interface design using band powers extracted during
mental tasks”, Proceedings of 2nd International IEEE EMBS Conference on Neural
Engineering, Arlington, Virginia, USA, pp. 321-324, 16-19 March, 2005.
G. Pfurtscheller, “EEG event-related desynchronization (ERD) and event-related
synchronization (ERS)”, Electroencephalography: Basic Principles, Clinical
Applications and Related Fields 4th edn, ed E Niedermeyer and F H Lopes da Silva
(Baltimore, MD: Williams and Wilkins) pp. 958–67, 1994.
G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer, “EEG-based
discrimination between imagination of right and left hand movement”,
Electroencephalogr. Clin. Neurophysiol. 103, pp.642–51, 1997.
G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, H. Ramoser, A. Schlögl, B.
Obermaier, and M. Pregenzer, “Current trends in Graz brain-computer interface (bci)
research”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 8,
pp. 216–219, 2000.
G. Pfurtscheller, and N. Neuper, "Motor imagery and direct brain-computer
communication", Proc. IEEE, 89, pp. 1123-1134, 2001.
J. Qin, Y. Li, and A. Cichocki, "ICA and committee machine-based algorithm for
cursor control in a BCI system", Lecture Notes in Computer Science, J. Wang, X.
Liao, and Z. Yi (Eds.): ISNN 2005, LNCS 3497, pp. 973–978, 2005, Springer-Verlag
Berlin Heidelberg, 2005.
A. Rakotomamonjy, V. Guigue, G. Mallet, and V. Alvarado, “Ensemble of SVMs for
improving brain computer interface P300 speller performances”, Int. Conf. on
Artificial Neural Networks, 2005.
R. Scherer, G. R. Muller, C. Neuper, B. Graimann, and G. Pfurtscheller, “An
asynchronously controlled EEG-based virtual keyboard: improvement of the spelling
Rate”, IEEE Trans. Biomed. Eng. 51, pp. 979–84, 2004.
T. Wang, J. Deng, and B. He, “Classifying EEG-based motor imagery tasks by means
of time-frequency synthesized spatial patterns”, Clin. Neurophysiol. 115, pp. 2744–
53, 2004.
J.R. Wolpaw, D.J. McFarland, G.W. Neat, and C.A. Forneris, "An EEG-based brain-
computer interface for cursor control", Electroencephalogr. Clin. Neurophysiol.,
vol.78, pp.252–259, 1991.
J.R. Wolpaw, and D.J. McFarland, "Multichannel EEG-based brain-computer
communication", Electroenceph. Clin. Neurophysiol., vol. 90, pp. 444-449, 1994.
J.R. Wolpaw, D.J. McFarland, and T.M. Vaughan, “Brain-computer interface research
at the Wadsworth center”, IEEE Transactions on Neural Systems and Rehabilitation
Engineering, 8, pp. 222–226, 2000.
J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan,
"Brain-computer interfaces for communication and control", Clin. Neurophysiol. vol.
113, pp. 767-791, 2002.