Brain Computing Interface Augmented Human Computer Interaction
Vaidic Joshi
Roll No. 14IT60R06
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
West Bengal, India
April 2016
Brain Computing Interface Augmented Human Computer Interaction
A Report submitted in partial fulfilment of the
requirements for the degree of
Master of Technology
in
Information Technology
by
Vaidic Joshi
Roll No. 14IT60R06
under the supervision of
Prof. Debasis Samanta
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
West Bengal, India
April 2016
DECLARATION
I, Vaidic Joshi, Roll No. 14IT60R06, registered as a student of the M.Tech. program in the Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India (hereinafter referred to as the 'Institute'), do hereby submit my thesis, titled Brain Computing Interface Augmented Human Computer Interaction (hereinafter referred to as 'my thesis'), in a printed as well as in an electronic version for holding in the library of record of the Institute.
I hereby declare that:
a. The work contained in the thesis is original and has been done by myself under the general supervision of my supervisor.
b. The work has not been submitted to any other Institute for any degree or diploma.
c. I have followed the guidelines provided by the Institute in writing the thesis.
d. I have conformed to the norms and guidelines given in the Ethical Code of Conduct of the Institute.
e. Whenever I have used materials (data, theoretical analysis, and text) from other sources, I have given due credit to them by citing them in the text of the thesis and giving their details in the references.
f. Whenever I have quoted written materials from other sources, I have put them under quotation marks and given due credit to the sources by citing them and giving required details in the references.
Vaidic Joshi
CERTIFICATE
This is to certify that this thesis entitled Brain Computing Interface Augmented Human Computer Interaction, submitted by Vaidic Joshi to Indian Institute of Technology, Kharagpur, is a record of bona fide research work carried out under my supervision, and I consider it worthy of consideration for the award of the degree of Master of Technology of the Institute.
Date:
Debasis Samanta
Associate Professor
Department of Computer Science and Engineering
Indian Institute of Technology Kharagpur
Kharagpur - 721302, India
ACKNOWLEDGEMENT
First and foremost, I would like to express my deepest gratitude and sincere thanks to my advisor, Prof. Debasis Samanta, for his inspiration, encouragement and able guidance throughout the course of my M.Tech project. It is because of his valuable advice from time to time and his constant support that today I have been able to give shape to my work. It is indeed an honor and a great privilege for me to have worked under his guidance, which has made my research experience productive. I learned humanity, perseverance and patience from him.
I sincerely acknowledge my deepest gratitude towards all faculty members of the Department of Computer Science & Engineering for providing in-depth knowledge of various subjects over the past two years. It was a pleasure to learn and work with their co-operation.
Lastly and most importantly, I am grateful to my beloved father, Ramesh Chandra Joshi, and my beloved mother, Sangeeta Joshi, for their unconditional love and constant encouragement, which has given me the strength to complete this thesis.
Dated:
VAIDIC JOSHI
Department of Computer Science and Engineering,
Indian Institute of Technology, Kharagpur.
ABSTRACT
The human brain is one of the most wondrous organs, one that distinguishes humans from all other organisms. The brain does not just control the organs; it can also think and remember. This ability to feel, adapt, reason, remember and communicate makes humans social beings. Typically, people with disabilities have limited opportunities to socialize and pursue the social and leisure activities that most people enjoy. About 15% of the world's population lives with some form of disability, and this calls for change, to empower people with disabilities. Brain-computer interfaces (BCIs) offer one such hope of restoring independence to disabled individuals.
BCI research, involving both persons with disability and healthy users, has grown explosively over the last two decades. Significant progress has been made, and BCI research is moving out of the laboratory and gaining wide acceptance. The key concept of a BCI system is to capture brain signals, i.e. the intentions of the user, and translate them directly into device commands, with no involvement of peripherals. There are various brain imaging technologies, such as functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG) and electroencephalography (EEG). EEG technology has been around for almost four decades[1]. It is an inexpensive and efficient way to record the electrical activity of the brain.
The aim of this project is to analyze and develop an application that distinguishes between intentions to move the Left or Right hand and performs a task based on that intention. The project deals with the acquisition and preprocessing of EEG signals using a 14-electrode device. The captured signals are digitized and transferred to the computer system. Various feature sets are used to classify the EEG signals accurately. These results can be used for the further development of better brain-computer interface systems.
Key words: Human Computer Interaction (HCI); Brain Computer Interface (BCI); Motor Imagery; People with special needs; Electroencephalogram (EEG)
Contents
Declaration iii
Certificate by the Supervisor iv
Acknowledgments v
Abstract vi
List of Figures x
List of Tables xiii
1 Introduction 5
1.1 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 State of the art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Issues and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Scope Of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7 Research Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.8 Organization of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2 Literature Survey 16
2.1 Biological Background . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.1 Human Brain . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.2 Brain Activity Patterns . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Brain-Computer Interfaces (BCIs) . . . . . . . . . . . . . . . . . . . . 22
2.2.1 Types of BCIs . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2.2 Measuring Brain Activity . . . . . . . . . . . . . . . . . . . . 23
2.3 Electroencephalogram (EEG) . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.1 Brief History of EEG . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.2 The 10-20 System of Electrode Placement . . . . . . . . . . . . 27
2.3.3 Devices in Market . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4 Human Computer Interaction (HCI) . . . . . . . . . . . . . . . . . . . 30
3 Design of Experiment 31
3.1 Hardware Device : Emotiv EPOC+ . . . . . . . . . . . . . . . . . . . . 34
3.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.1 Emotiv TestBench . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.2 EEGLAB - MATLAB Toolbox . . . . . . . . . . . . . . . . . . 37
3.2.3 OpenViBE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Experiment Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4 Data Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.5 Data Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.6 Experimental Procedures . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6.1 DATA Acquisition and DATA Labeling . . . . . . . . . . . . . 46
3.6.2 Data Consolidation . . . . . . . . . . . . . . . . . . . . . . . . 47
3.6.3 Data Format Conversion . . . . . . . . . . . . . . . . . . . . . 49
3.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4 BCI Signal Preprocessing 52
4.1 Preprocessing Techniques . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Experimental Procedures . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2.1 Generic Preprocessing of BCI Signal . . . . . . . . . . . . . . 57
4.2.2 Preprocessing the BCI Signal using CAR and ICA . . . . . . . 58
4.2.3 Preprocessing BCI Signal using CSP and Regularized CSP . . . 58
4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5 Features Extraction 66
5.1 Time domain methods . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.2 Frequency domain methods . . . . . . . . . . . . . . . . . . . . . . . . 68
5.3 Time-frequency representations . . . . . . . . . . . . . . . . . . . . . . 69
5.4 Statistical Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.5 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.6 Hilbert Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.7 Experimental Procedures . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.7.1 Features Extraction . . . . . . . . . . . . . . . . . . . . . . . 72
5.8 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6 Modeling and Application 80
6.1 Linear Discriminant Analysis (LDA) . . . . . . . . . . . . . . . . . . . 80
6.2 Support Vector Machines (SVM) . . . . . . . . . . . . . . . . . . . . . 81
6.3 Modeling with OpenViBE . . . . . . . . . . . . . . . . . . . . . . . . 82
6.3.1 Off-line Classification . . . . . . . . . . . . . . . . . . . . . . 82
6.3.2 On-line Classification . . . . . . . . . . . . . . . . . . . . . . 85
6.4 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7 Conclusion and Future Work 98
7.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
7.2 Mapping Achievements to Goals . . . . . . . . . . . . . . . . . . . . . 99
7.3 Further Enhancement and Ideas . . . . . . . . . . . . . . . . . . . . . . 99
Publications out of this work 102
List of Figures
1.1 Concept of BCI-HCI . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Concept of BCI augmented HCI . . . . . . . . . . . . . . . . . . . . . 8
1.3 BCI Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 Neuron[2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Human Brain[3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Different Types of Brain Rhythms[4] . . . . . . . . . . . . . . . . . . . 20
2.4 Different devices for brain imagery[5] . . . . . . . . . . . . . . . . . . 26
2.5 The first reports of the human EEG, from the first publication by Hans
Berger (1929)[6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.6 The 10-20 system of Electrode placement is based on the relationship
between the location of an electrode and the underlying area of cere-
bral cortex (the "10" and "20" refer to the 10% or 20% inter-electrode
distance)[5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1 Block diagram representing BCI System . . . . . . . . . . . . . . . . . 31
3.2 Components of a BCI System . . . . . . . . . . . . . . . . . . . . . . . 32
3.3 Block diagram of the proposed BCI-HCI System . . . . . . . . . . . . 33
3.4 Emotiv EPOC+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.5 Emotiv TestBench Suite . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.6 Emotiv TestBench and SDK Suite . . . . . . . . . . . . . . . . . . . . 37
3.7 EEGLAB - MATLAB Toolbox . . . . . . . . . . . . . . . . . . . . . . 38
3.8 OpenViBE Acquisition Server and Configurations Settings . . . . . . . 39
3.9 OpenViBE Designer . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.10 Images for various OpenViBE boxes . . . . . . . . . . . . . . . . . . . 41
3.11 Lua plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.12 Data Acquisition- Various stimulation shown to subject . . . . . . . . . 43
3.13 Recording session in progress . . . . . . . . . . . . . . . . . . . . . . 44
3.14 Procedure for DATA Acquisition (from Emotiv EPOC+ ) and DATA
Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.15 Signal Concatenation Box . . . . . . . . . . . . . . . . . . . . . . . . 48
3.16 Procedure for Data Consolidation - combining data from different ses-
sions into single file . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.17 Images of various OpenViBE boxes used . . . . . . . . . . . . . . . . 49
3.18 Procedure for Data Format Conversion - converting data from one for-
mat to another . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.1 General Statistics Generator . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Procedure for Preprocessing the BCI Signal (Generic) . . . . . . . . . . 60
4.3 Images for OpenViBE boxes . . . . . . . . . . . . . . . . . . . . . . . 60
4.4 Procedures for Preprocessing the BCI Signal (CAR & ICA) . . . . . . . 61
4.5 Images of OpenViBE boxes used . . . . . . . . . . . . . . . . . . . . . 62
4.6 Procedures for Preprocessing the BCI Signal (CSP & Regularized CSP) . 62
4.7 Unfiltered EEG signals & Filtered EEG signals after some preprocessing 64
4.8 Radar Curve showing the EEG signal for each channel on application
of various preprocessing methods . . . . . . . . . . . . . . . . . . . . 65
5.1 Steps leading to optimal Feature Vector . . . . . . . . . . . . . . . . . 66
5.2 AR feature vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.3 Valence as feature using channels FC5 . . . . . . . . . . . . . . . . . . 69
5.4 Images for used OpenViBE boxes . . . . . . . . . . . . . . . . . . . . 72
5.5 Procedure for Features Extraction . . . . . . . . . . . . . . . . . . . . 74
5.6 Procedure for Features Extraction . . . . . . . . . . . . . . . . . . . . 75
5.7 5-fold cross-validation result with 50% accuracy . . . . . . . . . . . . . 77
5.8 5-fold cross-validation result with 82.5% accuracy . . . . . . . . . . . . 79
6.1 SVM hyperplane i.e. a line separating two classes represented as STAR
and TRIANGLE shapes . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.2 Procedure for Classification (Off-line) . . . . . . . . . . . . . . . . . . 83
6.3 OpenVIBE boxes used for evaluating classifier performance . . . . . . 84
6.4 Classifier Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.5 Procedure for Classification (On-line) . . . . . . . . . . . . . . . . . . 86
6.6 BCI Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.7 Block Diagram of the Application . . . . . . . . . . . . . . . . . . . . 89
6.8 Classifier Performance Matrix Accuracy = 88.5% . . . . . . . . . . . . 93
6.9 Classifier Performance Matrix Accuracy = 87.5% . . . . . . . . . . . . 94
6.10 Classifier Performance Matrix Accuracy = 82% . . . . . . . . . . . . . 95
6.11 Classifier Performance Matrix Accuracy = 93.5% . . . . . . . . . . . . 96
6.12 Confusion Matrix with Accuracy = 58% . . . . . . . . . . . . . . . . . 97
List of Tables
1.1 Number of disabled population and type of disability, Census of India
2001[7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Various types of brain signals and associated physiological phenomena . 13
2.1 Summary of Brain rhythms (adapted from [4]) . . . . . . . . . . . . . . 21
2.2 Summary of different brain imagery techniques (adapted from [8]) . . . . 23
2.3 Summary of some popular EEG products . . . . . . . . . . . . . . . . 29
3.1 Specifications of Emotiv EPOC+[9] . . . . . . . . . . . . . . . . . . . 35
5.1 Summary of all features extracted . . . . . . . . . . . . . . . . . . . . 76
5.2 Accuracy of various features extracted using 10-fold cross-validation
method on LDA and SVM Classifiers . . . . . . . . . . . . . . . . . . 78
Brain Computing Interface Augmented
Human Computer Interaction
List of Acronyms
ANN Artificial Neural Network
AR Autoregressive model
BBCI Berlin Brain Computer Interface
BCI Brain Computer Interface
CAR Common Average Reference
CNS Central nervous system
CSD Current Source Density
CSP Common Spatial Pattern
ECG Electrocardiograms
EEG Electroencephalogram
EGG Electrogastrography
FFT Fast Fourier Transform
fMRI Functional magnetic resonance imaging
fNIR functional Near InfraRed
GMM Gaussian Mixture Model
GUI Graphical user interface
ICA Independent component analysis
KNN K Nearest Neighbor
LDA Linear Discriminant Analysis
MATLAB MATrix LABoratory
MEG Magnetoencephalography
MSR Magnetically Shielded Room
NN Neural network
PCA Principal Component Analysis
PDF Probability Density Function
PET Positron Emission Tomography
PSD Power spectrum density
REM Rapid eye movement
RFLDA Regularized Fisher's Linear Discriminant Analysis
RMS Root mean squared
SEF Spectral edge frequency
SL Surface Laplacian
SNR Signal to Noise Ratio
STD Standard deviation
STFT Short Time Fourier Transform
SVM Support Vector Machine
WPD Wavelet Packet Decompositions
WT Wavelet transform
List of Symbols
α Alpha
β Beta
ε Epsilon
Hz Hertz
φ Phi
ψ Psi
κ Kappa
∑ Sigma
ξ Xi
Chapter 1
Introduction
A brain-computer interface (BCI) is a system for controlling a device, for example a computer, wheelchair or a neuroprosthesis, by human intention, which does not depend on the brain's normal output pathways of peripheral nerves and muscles[10]. BCIs provide direct communication between the brain and the computer without muscle control. For people with severe physical disabilities, such as damaged limbs, brain-stem stroke, spinal cord injury, cerebral palsy, amyotrophic lateral sclerosis (ALS) or other neuro-muscular diseases, a BCI may be the only method to communicate. Currently, several functional imaging modalities, like EEG, MEG, fMRI, etc., are available for research. Among these non-invasive devices, electroencephalography (EEG) is unique and most often used, since it provides high temporal resolution of the measured brain signals and makes for a relatively convenient, affordable, safe and easy to use BCI for both healthy users and the disabled.
In this thesis we propose a BCI-HCI application, in which we develop a graphical user interface (GUI) that can be further elaborated into a sophisticated interaction system. Navigation in this GUI is controlled by imagined left and right hand movement (sensorimotor imagery). To obtain a near-accurate command signal, the noisy EEG signals acquired are filtered and forwarded to the feature extraction task. Then on-line classification of the EEG data is performed and the outcome is presented to the user through audio and visual displays.
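The acquire, preprocess, extract-features, classify flow just described can be sketched end to end as follows. This is a purely illustrative outline on synthetic random data, not the Emotiv/OpenViBE toolchain actually used in this work; every function below is a hypothetical placeholder.

```python
import random

def acquire(n_samples=128, n_channels=14):
    # Placeholder for EEG acquisition: synthetic data standing in for
    # the output of a 14-channel headset.
    return [[random.gauss(0.0, 1.0) for _ in range(n_samples)]
            for _ in range(n_channels)]

def preprocess(signal):
    # Stand-in for real filtering: remove each channel's mean.
    out = []
    for ch in signal:
        mu = sum(ch) / len(ch)
        out.append([x - mu for x in ch])
    return out

def extract_features(signal):
    # Toy feature vector: mean power per channel.
    return [sum(x * x for x in ch) / len(ch) for ch in signal]

def classify(features):
    # Toy rule: compare total power over the first vs. second half of
    # the channels and emit a LEFT/RIGHT command.
    mid = len(features) // 2
    return "LEFT" if sum(features[:mid]) > sum(features[mid:]) else "RIGHT"

command = classify(extract_features(preprocess(acquire())))
print(command)  # either "LEFT" or "RIGHT"
```

In the real system these stages correspond to Emotiv EPOC+ acquisition, the preprocessing of Chapter 4, the feature extraction of Chapter 5 and the LDA/SVM classification of Chapter 6.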
The proposed system has the potential to bring severely disabled people into the mainstream. Such a system can form a bridge of communication between able-bodied and disabled people. Without any voluntary movements, gadgets can become friendly to those people. Though their bodies are dependent, their thinking will have the freedom to do anything they want to do. Boredom or monotony will not force them to stay detached from society. The availability of such interfaces will keep people occupied, giving them their own personal space in life. Systems of this type have already been proposed using expensive EEG devices which cost above 20 lakhs (INR). Here, comparable accuracy is achieved using an inexpensive Emotiv EPOC+ EEG device which costs around 50,000 INR.
1.1 Context
Humans are social beings, and communication, in any form, is the most basic requirement to socialize. Computers today are an integral part of our day-to-day lives and have helped humans to communicate effectively and efficiently. Over the years there have been many inventions, discoveries and innovations to make human-computer interaction as smooth as possible, but there exists a wide gap between the way able-bodied users interact with computers and the way people with disabilities do. One billion people, or 15% of the world's population, experience some form of disability[11]. Detailed statistics of the disabled population and types of disability in India are listed in the following table:
                           Population     Percentage (%)
Total population           1,028,610,328  100.0
Total disabled population  21,906,769     2.1
Type of Disability:
  In seeing                10,634,881     1.0
  In speech                1,640,868      0.2
  In hearing               1,261,722      0.1
  In movement              6,105,477      0.6
  Mental                   2,263,821      0.2
Table 1.1: Number of disabled population and type of disability, Census of India 2001[7]
If we take a look at the various assistive technologies for disabled users, be it screen readers, eye trackers or something else, we find that all of them use the same data flow path: from the human brain to the hands or some other body part, to computer peripherals like a camera or keyboard, and then to the computer memory or CPU. What if humans could only think actively and computers somehow understood the user's intention?
Figure 1.1: Concept of BCI-HCI
Figure 1.1 is an illustration explaining the concept of BCI. The underlying capability of BCI systems is to distinguish different patterns of brain activity, each being associated with a particular intention or mental task. As depicted in the figure, a BCI system is smart enough to understand the intentions of the user; for example, if the user is feeling happy, the system is able to infer this by analyzing the brain signals alone, without using expressions, voice, etc. to reach the conclusion.
BCI allows users to communicate with others by using only brain activity, without using the peripheral nerves and muscles of the human body. Figure 1.2 distinguishes between legacy HCI systems and the BCI augmented HCI approach. The maximum delay in any computation generally occurs at the stage when the user inputs data into the computer; the human brain and the computers available today are much faster and more efficient. Thus, if we can bridge the gap between humans and computers, such that the computer can understand the intentions of the human, efficiency will improve. BCI augmented HCI tries to bridge this gap between computers and humans.
Today, BCI systems provide a ray of hope to people with disabilities so severe that even eye-blinks become a non-trivial task. BCI systems can be used as standalone systems that provide assistance, or together with other popular existing systems to enhance their capabilities.
(a) Legacy HCI Systems
(b) BCI augmented HCI Systems
Figure 1.2: Concept of BCI augmented HCI
1.2 State of the art
Since the discovery of the electroencephalogram in 1924 by Hans Berger[6], great advances have been made in the use of EEG signals for BCIs. Some of the milestones achieved[12] are listed as follows:
1929: First Record of EEG
1950: First Wet-Brain Implants
1970: DARPA super BCI Research
1972: Birth of Bionic Ear
1976: First Evidence that BCI can be used for communication
1978: First BCI to Aid the Blind
1980: Recorded EEG in Macaque Monkey
1998: First Implant in Human Brain
1999: BCI Used to Aid Quadriplegic
2000: BCI Experiments with Owl Monkey
2002: Monkeys Trained to Control Computer Cursor
2003: First BCI Game Exposed to the Public
2005: First Tetraplegic BrainGate BCI Implementation
2008: First Consumer off-the-shelf, Mass Market Game Input Device
2009: Wireless BCI Developed
2010: BCI allows Person to Tweet by Thought
2013: First Thought-Controlled Social Media Network
BCI systems that use EEG signals can be further classified according to application area into spelling, painting, research, wheelchair control, etc., and similarly, based on the type of BCI principle used, as P300, SSVEP, SSEP, etc. This section describes some state-of-the-art BCI systems designed for spelling applications or based on motor imagery, their limitations, and the enhancements proposed to overcome them.
Farwell et al.[13] describe the development and testing of a P300 component based system that allows one to communicate through a computer. The system is designed for users with motor disabilities. The alphabet is displayed on a computer screen, which serves as the keyboard or prosthetic device. The subject focuses on the character he wishes to communicate. The computer detects the chosen character on-line and in real time, by repeatedly flashing rows and columns of the matrix. A data rate of 0.20 bits/sec was achieved in this experiment.
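The row/column selection logic of such a P300 speller can be sketched as below. The 6x6 character matrix and the per-row/per-column response scores are made-up illustrative values, not data from the cited study: the chosen character is simply the intersection of the row and the column whose flashes evoke the strongest averaged response.

```python
MATRIX = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ1234", "56789_"]

def pick_character(row_scores, col_scores):
    # The target sits at the intersection of the row and the column
    # with the largest averaged post-flash response (the presumed P300).
    r = max(range(len(row_scores)), key=lambda i: row_scores[i])
    c = max(range(len(col_scores)), key=lambda i: col_scores[i])
    return MATRIX[r][c]

# Hypothetical averaged responses after several flash repetitions:
rows = [0.1, 0.2, 0.9, 0.1, 0.2, 0.1]  # row 2 ("MNOPQR") stands out
cols = [0.2, 0.1, 0.1, 0.8, 0.1, 0.2]  # column 3 stands out
print(pick_character(rows, cols))  # -> P
```

In a real speller the scores come from averaging EEG epochs over many flash repetitions, which is what makes the infrequent target stimulus detectable.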
Another study (Bin et al.[14]) aimed to improve the low communication speed of EEG based BCI systems using code modulation of visual evoked potentials (c-VEP). The target stimuli were modulated by a time-shifted binary pseudo-random sequence. The on-line system achieved an average information transfer rate (ITR) of 108 ± 12 bits/min over five subjects, with a maximum ITR of 123 bits/min for a single subject.
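ITR figures like those quoted above are commonly computed with Wolpaw's formula, which combines the number of selectable classes N, the classification accuracy P and the time per selection. A sketch follows; the inputs plugged in at the end are illustrative, not results from the cited studies.

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits/min.

    Bits per trial: B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)).
    At or below chance accuracy the ITR is taken as zero.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# A hypothetical 2-class motor-imagery BCI at 85% accuracy,
# one decision every 4 seconds:
print(round(itr_bits_per_min(2, 0.85, 4.0), 2))  # -> 5.85
```

This makes concrete why the sensorimotor paradigm used in this work sits at the low end of the transfer rates in Table 1.2 compared with c-VEP systems.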
Prasad et al.[15] show that goal-directed rehabilitation tasks lead to enhanced functional recovery of paralyzed limbs among stroke sufferers. Their system is based on motor imagery (MI) and is an EEG-based brain-computer interface (BCI). The MI activity is used to devise neurofeedback for the BCI user to help him/her focus better on the task. The positive gains in outcome measures demonstrated the potential and feasibility of using BCI for post-stroke rehabilitation.
Ryan et al.[16] compared a conventional P300 speller brain-computer interface with a predictive spelling program. Time to complete the task in the predictive speller (PS) condition was 12 min 43 s, as compared to 20 min 20 s in the non-predictive speller (NS) condition. Despite the marked improvement in overall output, accuracy was significantly higher in the NS paradigm. These results demonstrate the potential efficacy of predictive spelling in the context of BCI.
1.3 Motivation
Disabilities among people, especially those related to motor-sensory tasks, leave the person isolated with fewer options to communicate. This impacts the person's social life. Around 15% of the world's population[11] has some minor or major form of disability. Motor disability can be caused by various events such as serious injury. There are two ways to help restore some motor function:
• Repair the damaged nerve axons
• Build neuro-prosthetic device
According to the 2001 Census of India, 6,105,477[7] (0.6%) Indians have a disability in movement. Repairing the damaged nerve axons may not always be an option; besides, it involves surgery, which is risky and expensive.
Today, there are many assistive technologies available; however, all of them lack either speed or accuracy or both. Moreover, these technologies require interactions that are explicit and exhausting. Some of the assistive technologies are discussed below[17]:
• Mouth stick - a stick is placed in the mouth; it is simple and inexpensive.
• Head wand - similar to a mouth stick, except the stick is strapped to the head. A person moves the head to interact with the device.
• Single-switch access - designed for people who have very limited mobility. The clicking action on the switch is interpreted by special software on the computer, allowing the user to navigate.
• Sip and puff switch - similar to the single switch described above, sip and puff switches are able to interpret the user's breath.
• Oversized trackball mouse - functionally similar to the standard mouse, but it isoften easier for a person with a motor disability to operate than a standard mouse.
• Adaptive keyboard - for a person without reliable muscle control in the hands for precision movements, an adaptive keyboard can be useful.
• Voice recognition - allows a person to control the computer by speaking. Thisassumes that the person has a voice that is easy to understand.
• Eye tracking - a powerful alternative for individuals with no control, or only limited control, over their hand movements. The device follows the movement of the eyes and allows the person to interact with the device.
This study focuses on text-entry systems, such as a keyboard. Text-based communication opens a world of opportunities and information to the user. An efficient text-entry system should have comparable performance for disabled and able-bodied users, and must not be exhausting to work with.
This study is a step towards providing a BCI based hands-free, touch-free text-entry system for motor disabled users. Such a neuro-prosthetic device can greatly enhance the lives of people with disabilities. Easing the lives of the disabled using an inexpensive EEG device is the primary focus of the proposed work.
1.4 Issues and Limitations
The study described in this thesis focuses on the signal processing side of BCI. More specifically, the goal is to explore and evaluate various preprocessing methods and features, to combine them with classification methods, and finally to find a good set of techniques for our project. Towards this end, the following issues have been identified:
• The data transfer rate of EEG devices is too low.
• The error rate during EEG signal acquisition is high in a distracting environment.
• Raw EEG signals have very low SNRs.
• Poor spatial resolution is an inherent limitation.
• Presence of artifacts like eye-blinks or faulty electrodes etc.
• The absence of a data set specific to our study, with 14 channels of EEG recording for Left and Right movement, was a major issue.
• High dimensionality of extracted feature vectors is another challenge.
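Several of these issues, notably the low SNR and noise shared by all electrodes, are tackled later in this thesis with spatial filters such as the Common Average Reference (CAR) evaluated in Chapter 4. A minimal sketch of CAR on made-up numbers:

```python
def common_average_reference(samples):
    """CAR: subtract, at each time point, the mean over all electrodes.

    `samples` holds one value per electrode for a single time point.
    Common-mode noise that appears identically on every electrode
    cancels out, while differences between electrodes are preserved.
    """
    avg = sum(samples) / len(samples)
    return [x - avg for x in samples]

# Hypothetical one-sample reading from 4 electrodes carrying a shared
# 50.0 uV common-mode offset on top of small true signals:
reading = [50.2, 49.9, 50.1, 49.8]
print(common_average_reference(reading))  # offset removed; mean is now 0
```

The same subtraction is applied independently at every time point of the recording, so it adds no delay and needs no training data.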
1.5 Scope Of Work
Figure 1.3 shows the various applications of a BCI system. Current BCI work ranges from laboratory research to gaming and entertainment. This study primarily focuses on building a smart environment. Communication comes easily to the able-bodied, whereas it poses a serious challenge to the disabled. With human computer interaction (HCI), it is conceivable to build such systems, and they have yielded satisfactory results; however, not much focus has been given to disabled people. There are probably many reasons for this, one being that finding disabled users and performing studies with them is not only challenging but also raises many ethical issues. Regardless of the issues and challenges we face, it is time that we take this area a step further and help the disabled. This is where BCI can play a pivotal role. A hands-free user interface based on brain signals is an interesting research problem.
The following table summarizes the various types of signals that can be captured from the brain. Each type of signal can be measured using the same brain imaging technology, like EEG or fMRI; the types differ only in the associated physiological phenomena that cause the signal.
The scope of this study will be limited to the use of sensorimotor signals acquired using an EEG device for classification of Left and Right hand movement.
Figure 1.3: BCI Applications
Signal        Physiological phenomena                            Training  Information transfer rate
VEP           Brain signal modulations in the visual cortex      No        60 - 100 bits/min
SCP           Slow voltage shifts in the brain signals           Yes       5 - 12 bits/min
P300          Positive peaks due to infrequent stimulus          No        20 - 25 bits/min
Sensorimotor  Modulations in sensorimotor rhythms                Yes       3 - 35 bits/min
              synchronized to motor activities

Table 1.2: Various types of brain signals and associated physiological phenomena
1.6 Objective
The problem statement of this study can be expressed in short as: given an affordable EEG device like the Emotiv EPOC+, can we build a robust brain computer interface reliable enough for a user to communicate?
This study focuses on the task of communication for motor disabled people. The aim is to augment BCI with an HCI component to enhance the usability of the system. The proposed methodology provides an adaptive approach that need not necessarily be restricted to motor disabled users.
1.7 Research Contribution
Toward the goal of creating a BCI based HCI text entry system, the research contributions of this study are:
• Creation of a data set for left and right hand movement/intention of movement using a 14 channel device

• Evaluation of various preprocessing techniques and extracted features in terms of classifier accuracy for sensorimotor BCI based HCI components

• Creation of a prototype application that can distinguish between left and right hand kinesthetically imagined movements using an inexpensive EEG headset
1.8 Organization of Thesis
The rest of the thesis is organized as follows:
Chapter 2 gives an overview of the human brain and other biological processes necessary to understand the working of a BCI. The chapter further details the various types of BCI and the brain imaging devices that can be used. It also gives an overview of EEG and the EEG devices available in the market.
Chapter 3 covers the overall work-flow of the project in detail. A basic overview of the hardware and software used, followed by a detailed description of the experimental setup, sets the groundwork for the study. The chapter describes the data
acquisition and data labeling in detail.
Chapter 4 briefly describes the signal preprocessing techniques used for the project. Multiple methods were used and evaluated during development. The various findings regarding the use of preprocessing techniques are given in the results section of the chapter.
Chapter 5 is about feature extraction, the key step of the BCI system. The chapter describes the various features that could be used for this study and explains in detail the methods used to extract features, evaluate them and select an optimal subset. A comparative study of the findings is given in the results section.
In Chapter 6, the classification process, the classifiers used and the methods of evaluation are described. The chapter gives an in-depth insight into the development of the prototype application and the results obtained.
Chapter 7 explores the possibilities of improving and extending the present system. It also summarizes the overall work done to develop this project.
Chapter 2
Literature Survey
2.1 Biological Background
2.1.1 Human Brain
The human brain is the most remarkable part of the human body. An average human brain weighs about 1.5 kilograms and consists of approximately 100 billion neurons, interconnected via axons and dendrites. A neuron has a cell body called the soma, a long axon and many dendrites.
Figure 2.1: Neuron[2]
Neurons receive stimuli from other interconnected neurons (about 10³ to 10⁵) through synapses. These stimuli travel through axons as electrical impulses and help in
controlling body movements, emotions and other aspects of body coordination.
Figure 2.2: Human Brain[3]
Parts of Human Brain (according to position)
The brain can be classified based on position as:

• Forebrain - consisting of the cerebrum, thalamus, and hypothalamus

• Midbrain - consisting of the tectum and tegmentum

• Hindbrain - consisting of the cerebellum, pons and medulla
Parts of Human Brain (according to functions)
The human brain is the centre for thinking, sensory activity and the control of other voluntary and involuntary actions of the body. Different regions of the brain are responsible for different specific tasks; hence the brain can be classified into various parts based on their functions:
• Cerebrum
• Cerebellum
• Limbic system
• Brain stem
Cerebrum
The cerebrum, also called the cortex, is found only in mammals and is the largest part of the human brain. It is the site of complex brain functions such as thought and action. The cerebral cortex has a large number of folds, which increase the surface area and the number of neurons within it.
The cerebrum is divided into two halves, known as the left and right hemispheres, connected by the corpus callosum, which is a bundle of axons. The left hemisphere is associated with the right part of the body and the right hemisphere with the left part. The hemispheres are symmetrical, and each of them is divided into four lobes:
• Frontal Lobe - associated with reasoning, planning, parts of speech, movement, emotions, and problem solving

• Parietal Lobe - associated with movement, orientation, recognition, perception of stimuli

• Occipital Lobe - associated with visual processing

• Temporal Lobe - associated with perception and recognition of auditory stimuli, memory, and speech
Cerebellum
The cerebellum, also known as the "little brain", is likewise divided into two hemispheres. It is responsible for the regulation and coordination of movement, posture and balance. The cerebellum is also highly folded, increasing the surface area and the number of neurons in the region.
Limbic system
The limbic system, also referred to as the "emotional brain", is found buried within the cerebrum. This system consists of the thalamus, hypothalamus, amygdala, and hippocampus.
• Thalamus - a large mass of gray matter that helps in sensory and motor functions. It acts as a router for action potentials

• Hypothalamus - controls the autonomic nervous system and the pituitary gland, e.g. homeostasis, thirst, hunger, circadian rhythms

• Amygdala - found in the temporal lobe and associated with memory, emotion and fear

• Hippocampus - found in the temporal lobe; helps convert short-term memory into long-term memory
Brain stem
The brain stem lies underneath the limbic system and is responsible for basic functions like breathing, blood pressure and heartbeat. The brain stem consists of the midbrain, pons, and medulla.
• Midbrain - includes the tectum and tegmentum, and helps with vision, hearing, eye movement, and body movement

• Pons - helps with motor control and sensory analysis

• Medulla - situated between the pons and the spinal cord; helps with breathing and heart rate
2.1.2 Brain Activity Patterns
The brain consists of a large number of neurons; when humans perform some activity, an electric potential develops and travels across the axons of neurons. Since different parts of the brain are associated with different functions, the electrical signals generated by neural activity differ widely in frequency, amplitude, shape and position. These signals are commonly termed brain waves or brain rhythms.
Based on their frequency, amplitude, shape and position, these brain waves are classified as:
(a) Alpha Rhythms
(b) Beta Rhythms
(c) Delta Rhythms
(d) Gamma Rhythms
(e) MU Rhythms
Figure 2.3: Different Types of Brain Rhythms[4]
• Alpha waves - originate from the occipital lobe and the back of the head, with a frequency range from 7.5 Hz to 12 Hz. They are associated with relaxed and calm states in awake humans.

• Beta waves - originate from the central area of the brain and the front of the head, and
have a frequency range from 13 to 30 Hz. They are associated with deep thinking, high concentration and anxious states.

• Theta waves - originate from the central, temporal and parietal parts of the head, with a frequency range from 3.5 to 7.5 Hz. They are associated with thinking, stressed and deep meditative states.

• Gamma waves - have a frequency range above 30 Hz. They are associated with motor functions, simultaneous work and multi-tasking.

• Mu waves - originate from the motor cortex, with a frequency range from 9 Hz to 11 Hz. They are associated with motor activity, whether actual movement or the intent to move.

• Delta waves - have a frequency range from 0.5 to 3.5 Hz. They are associated with deep sleep and comatose states.
Brain rhythms   Typical frequency range (Hz)   Normal amplitude (µV)   Comments
Delta           0.5 - 4                        <100                    Dominant in infants; during deep stages of adult sleep; found at central cerebrum and parietal lobes
Theta           4 - 7                          <100                    State of drowsiness; found at frontal, temporal and parietal regions
Alpha           8 - 13                         20 - 60                 Associated with the alert state; found at occipital and parietal lobes
Mu              9 - 11                         <50                     Hand movements; found at motor and somatosensory cortex
Beta            14 - 30                        <20                     Also associated with hand movements
Gamma           >30                            <2                      Attentive state - when the subject is paying attention, response to stimulus
Table 2.1: Summary of Brain rhythms (adapted from [4])
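In practice, the rhythm bands in Table 2.1 are commonly separated by estimating the power spectral density of a signal and averaging it within each band. A minimal sketch using SciPy's Welch estimator follows; the band edges are taken from Table 2.1, and the synthetic test signal is purely an assumption for illustration.

```python
import numpy as np
from scipy.signal import welch

# Rhythm bands (Hz) as summarized in Table 2.1
BANDS = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 13),
         "mu": (9, 11), "beta": (14, 30), "gamma": (30, 45)}

def band_powers(signal, fs):
    """Average power spectral density within each rhythm band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 256))
    return {name: float(psd[(freqs >= lo) & (freqs <= hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# Synthetic "alpha-like" 10 Hz sine, sampled at 128 Hz (one of the EPOC+ rates)
fs = 128.0
t = np.arange(0, 4, 1 / fs)
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)

# A 10 Hz tone should concentrate power in the alpha and mu bands
assert powers["alpha"] > powers["beta"] and powers["mu"] > powers["delta"]
```

Band-power ratios computed this way are a common starting point for the sensorimotor features used later in this study, since mu-band power over the motor cortex drops during actual or imagined movement.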
2.2 Brain-Computer Interfaces (BCIs)
2.2.1 Types of BCIs
Brain-Computer Interfaces (BCIs) aim to provide direct communication between the brain and a device, enabling the brain to control the device or a computer by passing commands and messages. A BCI system converts the electrophysiological signals generated by various mental activities into device-specific messages or commands. Based on the location of the sensors used to record these signals, BCI systems can be categorized[18] into
• noninvasive - when the sensors are placed on the scalp, e.g. Electroencephalography (EEG), Magnetoencephalography (MEG)
• semi-invasive - when the electrodes are placed on the exposed surface of the brain,e.g. Electrocorticography (ECoG)
• invasive - when micro-electrode arrays are placed directly into the cortex
Noninvasive systems record the electrical activity, the magnetic activity or the hemodynamic response due to neural activity in the brain. Noninvasive BCI systems are now reliable enough to be used as an alternative means of communication outside dedicated research facilities. EEG devices are relatively inexpensive and are primarily used for BCI research. All methods based on EEG devices have to address its inherent challenges, such as poor spatial resolution and a low signal-to-noise ratio. MEG senses the magnetic fields formed by the activity of neurons. It is orthogonal to the electrical activity recorded using EEG devices and gives higher spatiotemporal resolution. However, due to the need for sensitive sensors and magnetic shielding, MEG has limited research potential.
Semi-invasive systems use electrocorticography (ECoG); being located nearer to the site of neural activity, ECoG provides better spatial resolution and a higher signal-to-noise ratio. ECoG relies on the same neurophysiological mechanisms as EEG and requires similar signal processing approaches.
Invasive techniques provide very high spatial resolution, temporal resolution and signal-to-noise ratio. The recordings differ significantly from those of noninvasive devices and hence require different signal processing approaches. Due to the risky nature of
Neuroimaging method              Activity measured   Measurement type   Temporal resolution   Spatial resolution                          Risk           Portability
EEG                              Electrical          Direct             0.05 s                10 mm                                       Non-invasive   Portable
MEG                              Magnetic            Direct             0.05 s                5 mm                                        Non-invasive   Non-portable
ECoG                             Electrical          Direct             0.003 s               1 mm                                        Invasive       Portable
Intracortical neuron recording   Electrical          Direct             0.003 s               0.5 mm (LFP), 0.1 mm (MUA), 0.05 mm (SUA)   Invasive       Portable
fMRI                             Metabolic           Indirect           1 s                   1 mm                                        Non-invasive   Non-portable
NIRS                             Metabolic           Indirect           1 s                   5 mm                                        Non-invasive   Non-portable
Table 2.2: Summary of different brain imaging techniques (adapted from [8])
this technique, research focuses mainly on animals, notably monkeys and rats, even though such systems have been demonstrated in humans as well. Commercial applications using BCIs are being created with non-invasive as well as invasive BCI systems. BrainGate is running clinical trials for an implantable BCI that can help control limb movements[19] [20].
2.2.2 Measuring Brain Activity
Brain activity can be either the electrophysiological activity or the hemodynamic response of the brain. Brain imaging techniques investigate the spatial and temporal organization of the brain.
Electrophysiological activity of the brain is due to electro-chemical transmitters: ionic currents are generated by neurons while exchanging information between them. Electrophysiological activity can be measured by electroencephalography (EEG), electrocorticography (ECoG), magnetoencephalography (MEG), and invasive electrical measurements.
The hemodynamic response of the brain helps to distinguish between activated and less activated neurons. These are indirect methods, as they measure the variation of the local ratio of oxyhemoglobin to deoxyhemoglobin[8] and do not directly characterize neuronal activity as electrophysiological methods do. The blood releases glucose to active neurons at a much higher rate than in areas of inactive neurons. Thus, the presence of glucose and oxygen results in a surplus of oxyhemoglobin in the veins. This can be measured by methods such as functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS).
Electroencephalography (EEG)
Electroencephalography (EEG) measures the brain's electric potential caused by neural activity during synaptic excitation of the dendrites[21]. EEG measures the electrical activity using electrodes placed on the scalp and is hence a non-invasive technique. EEG is the most widespread recording method, as it is the most inexpensive non-invasive technique, provides high temporal resolution (~1 ms) and is portable. Because the sensors are placed on the scalp, EEG can only measure the combined signals of thousands of neurons and thus has poor spatial resolution.
Magnetoencephalography (MEG)
Magnetoencephalography (MEG) detects the magnetic fields resulting from the electrical currents in neurons. These magnetic fields are orthogonal to the electric signals measured by EEG. The magnetic fields are less distorted, and hence MEG provides better spatial and temporal resolution. MEG, however, typically requires expensive and highly sensitive devices. Furthermore, the measurements must be taken in magnetically shielded rooms[22]. This approach measures only shallow parts of the brain and is too bulky to be suitable for everyday use.
Electrocorticography (ECoG)
In electrocorticography (ECoG), the electrodes are placed under the dura mater, directly on the surface of the cortex, without penetrating it[23]. ECoG provides better spatial and temporal resolution and better signal quality. The signals have higher amplitudes and are less vulnerable to artifacts such as eye-blinks and eye movements. However, this is a semi-invasive technique that requires risky surgery, so ECoG is primarily used in experiments with animals. ECoG implants remain stable for several months and can be used to record signals, although their long-term stability remains unclear to date[24].
Intracortical Neuron Recording - Brain Implants
Brain implants are inserted directly into the grey matter of the brain to measure the activity of single neurons. They provide the best quality signal, with very high temporal and spatial resolution. In 2005, such neurosurgery was successfully performed to let a subject move a cursor on a computer screen [25]. However, this requires risky surgery. Additionally, such devices raise several issues like long-term viability and bio-compatibility[26].
Functional magnetic resonance imaging (fMRI)
Functional magnetic resonance imaging (fMRI) is a non-invasive technique that relies on hemodynamics. fMRI measures variations in the blood oxygen level and thus provides high spatial resolution. It suffers from poor temporal resolution and is susceptible to head-motion artifacts. fMRI scanners, like MEG, are very expensive equipment and are not suited for individual and everyday applications.
Nuclear Functional Imaging Techniques
Nuclear functional imaging techniques use a radioactive substance to measure brain activity. They can be either:
• Single-photon-emission computed tomography (SPECT) is based on tracking the gamma rays emitted by radionuclides injected into the bloodstream of the patient. Specific chemicals (radio-ligands) that bind to certain types of tissue (e.g. brain tissue) allow the radionuclide to be concentrated in the region of interest of the brain. These are visible to gamma cameras. SPECT provides good spatial resolution, but its temporal resolution is low.
• Positron-emission tomography (PET) is relatively similar to SPECT. The radionuclides injected into the patient emit positrons, which annihilate with electrons located in the vicinity and thus produce a pair of gamma rays emitted in opposite directions[27]. PET measures these gamma rays. PET has a better spatial resolution than SPECT. The active molecule generally chosen is fludeoxyglucose (FDG), an analogue of glucose.
Near-infrared spectroscopy (NIRS)
Near-infrared spectroscopy (NIRS) is also a non-invasive acquisition technique. It measures the changes in the optical response of cerebral tissue to near-infrared light due to variations in hemoglobin concentration. Infrared light can penetrate only a small depth of the skull, and hence NIRS provides a shallow spatial resolution on the order of a centimetre, while the temporal resolution is around 200 ms. NIRS devices are relatively inexpensive and portable, and are gaining popularity for everyday-use applications.
(a) EEG (b) MEG
(c) fMRI (d) ECoG
(e) PET (f) SPECT
Figure 2.4: Different devices for brain imagery[5]
2.3 Electroencephalogram (EEG)
The term Electroencephalogram is derived from the concepts of:
• Electro - the electrical activities of the brain
• Encephalo - the emission of signals from the head
Modern techniques measure electric patterns from the scalp and digitize them. The sensors measure potentials in microvolts and amplify them before digitization. The sensors are generally made of gold or silver and use a conductive gel on the scalp to obtain an acceptable signal-to-noise ratio. [Figure: a segment of a multichannel EEG of an adult.]
2.3.1 Brief History of EEG
Richard Caton (1842-1926), a physician, published his findings about the electrical phenomena of the exposed cerebral hemispheres of rabbits and monkeys in the British Medical Journal in 1875. In 1890, the Polish physiologist Adolf Beck presented an investigation of the spontaneous electrical activity of the brain of rabbits and dogs.
Beck experimented with the electrical brain activity of animals, placing electrodes directly on the surface of the brain. His observations led to the determination of brain waves[21]. In 1914, Napoleon Cybulski and Jelenska-Macieszyna recorded the EEG of experimentally induced seizures. In 1929, the German physiologist and psychiatrist Hans Berger (1873-1941) published 'das Elektrenkephalogramm', which marks the beginning of research on the human electroencephalogram[6].
2.3.2 The 10-20 System of Electrode Placement
The International 10-20 system[28], standardized by the American Electroencephalographic Society, is used for electrode placement on the scalp. The system uses two reference points on the head:
• nasion - at the top of the nose, at the level of the eyes

• inion - the bony lump at the base of the skull
The electrodes are placed on the transverse and median planes containing these points, at intervals of 10% and 20%, as shown in figure 2.6. The letter at each electrode position refers to a brain region:
Figure 2.5: The first report of the human EEG, from the first publication of Hans Berger (1929)[6].
• A represents ear lobes
• C represents central region
• Pg represents naso-pharyngeal region
• P represents parietal region
• F represents the frontal lobe
• Fp represents the frontal polar region and
• O represents the occipital lobe
The even numbers (2, 4, 6, 8) denote the right hemisphere and the odd numbers (1, 3, 5, 7) the left hemisphere. Smaller numbers are assigned closer to the mid-line (passing through the nasion and inion). The letter 'z' refers to an electrode placed on the mid-line.
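The naming convention above can be decoded programmatically. The sketch below covers the single-letter regions listed above plus the AF (anterior frontal) prefix used by the Emotiv EPOC+ channel names; AF is an addition of ours, and compound sites such as FC5 would need extra handling.

```python
# Region prefixes from the 10-20 naming convention described above;
# "AF" (anterior frontal) is an assumed addition for EPOC+ labels.
REGIONS = {"Fp": "frontal polar", "AF": "anterior frontal", "Pg": "naso-pharyngeal",
           "F": "frontal", "C": "central", "P": "parietal",
           "O": "occipital", "T": "temporal", "A": "ear lobe"}

def decode_electrode(label: str) -> str:
    """Decode a 10-20 label like 'C3' or 'Fz' into region and hemisphere."""
    if label.endswith("z"):
        letters, side = label[:-1], "mid-line"
    else:
        letters = label.rstrip("0123456789")
        digit = int(label[len(letters):])
        side = "left hemisphere" if digit % 2 else "right hemisphere"
    # Prefer the longest matching prefix, so 'Fp' wins over 'F'
    region = next(REGIONS[k] for k in sorted(REGIONS, key=len, reverse=True)
                  if letters.startswith(k))
    return f"{region}, {side}"

print(decode_electrode("C3"))   # central, left hemisphere
print(decode_electrode("O2"))   # occipital, right hemisphere
print(decode_electrode("Fz"))   # frontal, mid-line
```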
2.3.3 Devices in Market
EEG devices are characterized by various factors such as the number of channels, the ability to integrate with other devices, the cost of the software development kit (SDK) suite or other software, the number of different activities the device can identify, and so on. Acquiring an EEG is costly, and hence a proper choice of a fairly cheap EEG device that delivers data of good
Figure 2.6: The 10-20 system of electrode placement is based on the relationship between the location of an electrode and the underlying area of cerebral cortex (the "10" and "20" refer to the 10% or 20% inter-electrode distance)[5]
quality is required. A summary of various popular EEG devices available in the market is presented here.
Device                  Indicative Price ($)   Channels   Website
Aurora Dream Headband   199                    1          iwinks.org/
iFocusBand              500                    1          www.ifocusband.com/
MindWave                99                     1          neurosky.com/
Emotiv EPOC+            499                    14         emotiv.com/
Muse                    299                    4          www.choosemuse.com/
OpenBCI R&D Kit         899                    16         openbci.com/
Table 2.3: Summary of some popular EEG products
2.4 Human Computer Interaction (HCI)
Over the years, the way humans interact with computers has made remarkable progress, from punch cards to swipe cards to touchless systems, but the HCI systems designed for the disabled lag behind. In this section we discuss a few BCI augmented HCI systems. This study focuses on a text-entry system for motor impaired users. The key considerations while designing an HCI component for the motor impaired are:
• Users may not be able to use a mouse, so all functionality must be available through the keyboard.
• Users may be able to handle cognitive load, hence the system can be adaptive.
• Users may become fatigued, so provide mechanisms to skip over or quickly access frequently used items.
Blankertz et al.[29] describe a speller, 'Hex-o-Spell', that uses BCI for typing. The typing speed was between 2.3 and 5 char/min for one subject and between 4.6 and 7.6 char/min for the other.
Hohne et al.[30] propose a novel approach using auditory evoked potentials for a multiclass text spelling application. To control the ERP speller, BCI users focus their attention on two-dimensional auditory stimuli that vary in both pitch (high/medium/low) and direction (left/middle/right) and that are presented via headphones. The resulting nine control signals are exploited to drive a predictive text entry system. Users spelled more than 0.8 characters per minute on average (3.4 bits/min).
DASHER is a human-computer interface for entering text using continuous or discrete gestures. Wills et al.[31] evaluate DASHER as well matched to the low bit-rate, noisy output obtained from brain-computer interfaces (BCIs). It is not necessary to build a BCI augmented HCI component from scratch; BCI can be used as an additional dimension to an existing system, increasing overall efficiency and speed.
Chapter 3
Design of Experiment
Figure 3.1 represents the block diagram of a typical BCI system. The signals (EEG) from the subject are acquired and converted into control signals for the device the BCI system is made for. Typically a BCI is a closed-loop system, i.e. a mechanism for continuous feedback exists. The feedback provides instantaneous visualization, helps decide the next command for the device and also helps increase the accuracy of the system.
Figure 3.1: Block diagram representing BCI System
BCI systems are developed to detect and quantify patterns in brain signals. The signals can originate from actual actions or from the subject's intention. The purpose of any such system is to translate these features in real time into device commands that accomplish the subject's intent.
Figure 3.2: Components of a BCI System
Figure 3.2 shows the steps involved in a BCI system in detail. A BCI system consists of 5 sequential components:
• Signal Acquisition - The brain signals are measured and recorded using brain imaging technologies like EEG, MEG, fMRI etc. The signals are amplified and digitized and then passed to the computer.
• Artifact Processor - Artifacts are undesirable signals, mostly of non-cerebral origin. The presence of artifacts adversely affects the system, and hence they need to be removed. In this step the focus is on the removal of artifacts, along with some signal preprocessing.
• Feature Extraction - The process of analysing the characteristics of the digital signal and representing them in a form suitable for translation into commands. Feature extraction involves multiple iterations of signal enhancement, dimensionality reduction, extraction of features and finally selection of optimal features.
• Feature Translation - The resulting feature vector is then translated into device-specific commands using translation algorithms. The features are classified, and some post-processing is done on top of the classification to achieve this.
The Artifact Processor, Feature Extraction and Feature Translation together can be seen as a BCI transducer that converts variations in brain signals into device-specific commands.
• Device Output - The commands are then passed to the device, which carries out the subject's intent. The device also provides feedback, thus closing the loop.
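The five components above can be sketched as a simple processing chain. All stage implementations below are toy stand-ins of our own; only the structure (artifact removal, then feature extraction, then translation, then device output) reflects the description above.

```python
from typing import Callable
import numpy as np

class BCIPipeline:
    """Minimal sketch of the sequential BCI chain described above.
    Each stage is a pluggable callable; the names are illustrative only."""

    def __init__(self, remove_artifacts: Callable, extract_features: Callable,
                 translate: Callable, device: Callable):
        self.stages = [remove_artifacts, extract_features, translate]
        self.device = device

    def process(self, raw_window: np.ndarray):
        x = raw_window
        for stage in self.stages:   # artifact removal -> features -> translation
            x = stage(x)
        return self.device(x)       # device output closes the loop

# Toy stages: mean subtraction, a variance "feature", a threshold "classifier"
pipe = BCIPipeline(
    remove_artifacts=lambda x: x - x.mean(),
    extract_features=lambda x: np.array([np.var(x)]),
    translate=lambda f: "LEFT" if f[0] > 1.0 else "RIGHT",
    device=lambda cmd: f"executed {cmd}",
)
# The toy trial has variance 3.2 > 1, so this prints "executed LEFT"
print(pipe.process(np.array([0.0, 2.0, -2.0, 2.0, -2.0])))
```

The value of structuring the chain this way is that each stage can be swapped independently, which is exactly how the preprocessing and feature-selection experiments in the following chapters are organized.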
Figure 3.3 depicts the block diagram of the proposed system. The system can be divided into two subsystems: the BCI subsystem and the HCI subsystem.
Figure 3.3: Block diagram of the proposed BCI-HCI System
The BCI subsystem: The BCI subsystem involves all the steps of a typical BCI system as described above. It is developed in two steps: first, creating a model for the application, i.e. off-line analysis; and second, real-time processing of the captured signals, i.e. on-line analysis.
A large number of signals from the subjects are captured for building a model. Various signal preprocessing techniques are applied to this set of signals, and suitable features are extracted. Generally, the selection of the preprocessing method and feature set is an iterative process. An exhaustive set of features and preprocessing methods is initially selected and applied to the signals, and then each of them is evaluated. The final model is based on the feature vector and preprocessing methods that give the best accuracy and robustness. The model is created by training a classifier with the training data, i.e. the captured signals, and evaluating it using cross-validation.
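The iterative selection loop just described can be sketched with scikit-learn's cross-validation utilities. The data and the candidate feature extractors below are toy stand-ins, not the ones actually used in this study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Toy stand-ins for captured trials: 40 trials of 64 samples, alternating labels
trials = rng.normal(size=(40, 64))
labels = np.array([0, 1] * 20)

# Candidate feature extractors (illustrative only)
feature_sets = {
    "variance": lambda X: X.var(axis=1, keepdims=True),
    "mean_abs": lambda X: np.abs(X).mean(axis=1, keepdims=True),
    "both": lambda X: np.column_stack([X.var(axis=1), np.abs(X).mean(axis=1)]),
}

# Evaluate every candidate with 5-fold cross-validation and keep the best
scores = {}
for name, extract in feature_sets.items():
    F = extract(trials)
    scores[name] = cross_val_score(LinearDiscriminantAnalysis(), F, labels, cv=5).mean()

best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

In the real workflow the same loop would also iterate over the preprocessing methods, with the cross-validated accuracy deciding which combination is frozen into the final on-line model.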
Once the model is ready, the BCI system is almost complete. The signals captured from the subject can now be classified in real time, using exactly the same preprocessing techniques and feature vector as were used to train the model.
The on-line system will interact with the HCI component to fulfil the subject's intention.
The HCI subsystem: A prototype application enabling the subject to interact with and test the BCI system is developed as part of the study, with the ultimate aim of extending the application to a full-fledged text entry system. The application displays a cue to the subject to indicate the active mode, in which the device is capturing the signals; based on the captured signal, the device then provides visual feedback along with an audio sound and some other activity, such as opening another application on the host machine.
One of the issues we faced was obtaining reliable, labeled data. Non-clinical data that suited our purpose was acquired in our laboratory. The following section discusses the data acquisition process in detail. It gives a brief introduction to the hardware and the software used for the project, followed by a detailed discussion of the experimental setup and the methodology followed for data acquisition. We end the discussion by explaining the data labeling process.
3.1 Hardware Device : Emotiv EPOC+
Figure 3.4: Emotiv EPOC+
Emotiv EPOC+ is an EEG headset, shown in figure 3.4, built by the Australian company Emotiv Systems. The device is the central component of the entire project.
Specification                    Emotiv EPOC+
Number of channels               14 (plus CMS/DRL references at P3/P4 locations)
Channel names (10-20 system)     AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4
Sampling method                  Sequential sampling, single ADC
Sampling rate                    256 SPS (2048 Hz internal)
Resolution                       14 bits, 1 LSB = 0.51 µV (16-bit ADC, 2 bits instrumental noise floor discarded)
Bandwidth                        0.2 - 45 Hz; digital notch filters at 50 Hz and 60 Hz
Filtering                        Built-in digital 5th-order Sinc filter
Dynamic range (input referred)   8400 µV (pp)
Coupling mode                    AC coupled
Connectivity                     Proprietary wireless, 2.4 GHz band
Power                            LiPoly battery
Battery life (typical)           12 hours
Impedance measurement            Real-time contact quality using patented system
Table 3.1: Specifications of Emotiv EPOC+[9]
Emotiv EPOC+ captures data in high resolution using 14 EEG channels plus 2 references for accurate spatial resolution, at sample rates of 128 samples per second (SPS) and 256 SPS[9]. The device operates at a resolution of 14 bits per channel with a frequency response between 0.16 and 43 Hz. Emotiv EPOC+ comes with a driver suite and an SDK that can be used for capturing raw data or for advanced analysis, e.g. facial expressions, mental commands, and performance metrics.
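Given the 0.51 µV per LSB resolution quoted in Table 3.1, raw ADC counts convert to microvolts by simple scaling. The sketch below also subtracts the window mean as a crude (assumed) way to remove the constant DC offset present in the raw output; a high-pass filter is the usual choice in practice, and the sample counts are invented for illustration.

```python
import numpy as np

LSB_MICROVOLTS = 0.51  # per Table 3.1: 1 LSB = 0.51 uV on the 14-bit output

def counts_to_microvolts(raw_counts: np.ndarray) -> np.ndarray:
    """Scale raw 14-bit ADC counts to microvolts and remove the DC offset.
    Mean subtraction is a simplification; a high-pass filter is typical."""
    uv = raw_counts.astype(float) * LSB_MICROVOLTS
    return uv - uv.mean()

raw = np.array([8400, 8402, 8398, 8400])   # hypothetical raw counts
print(counts_to_microvolts(raw))           # zero-mean signal in microvolts
```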
Emotiv EPOC and EPOC+ have been used extensively for EEG based BCI research. Ramirez et al.[32] describe the use of EPOC for emotion detection. EPOC is widely used for P300 based research; Duvinage et al.[33] study the performance of the Emotiv EPOC for P300 based applications. EPOC is also evolving as a major player in BCI based gaming systems.
3.2 Software
3.2.1 Emotiv TestBench
Emotiv TestBench is a software tool that comes as part of the Emotiv SDK. It allows recording and replaying files in EEGLAB format. TestBench is a powerful tool that can be used to visualize the captured signals and convert the data into different file formats, such as CSV. Some advanced features, like Fast Fourier Transform (FFT) and a wireless packet acquisition/loss display, are also supported[9].
Figure 3.5: Emotiv TestBench Suite
As shown in figure 3.6c, the software indicates the headset battery level as well as the contact quality of the sensors. The tool displays the data from all channels in a 5-second rolling time window. We can view data from selected channels or for a particular frequency band. The tool is particularly helpful for offline recording of data for further analysis and classification.
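A TestBench CSV export can be loaded for offline analysis with nothing more than the standard library. The column layout below is an assumption for illustration; real exports contain additional columns (packet counters, gyro data, etc.).

```python
import csv
from io import StringIO

# A tiny stand-in for a TestBench-style CSV export (layout assumed)
sample = StringIO("""AF3,F7,O1,O2
4280.5,4301.2,4290.0,4295.7
4281.0,4300.8,4289.5,4296.1
""")

def load_eeg_csv(fh):
    """Read channel columns from a CSV export into {channel: [values]}."""
    reader = csv.DictReader(fh)
    data = {name: [] for name in reader.fieldnames}
    for row in reader:
        for name, value in row.items():
            data[name].append(float(value))
    return data

data = load_eeg_csv(sample)
print(sorted(data))        # ['AF3', 'F7', 'O1', 'O2']
print(len(data["AF3"]))    # 2
```

In practice one would open the exported file with `open(path)` instead of the in-memory `StringIO` used here for self-containment.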
(a) Emotiv TestBench Suite (b) showing the status of each sensor
(c) Emotiv SDK Suite

Figure 3.6: Emotiv TestBench and SDK Suite
3.2.2 EEGLAB - MATLAB Toolbox
EEGLAB is a MATLAB toolbox[33] that provides an interactive graphical user interface (GUI) for analysing and modeling EEG signals (figure 3.7). EEGLAB comes with pre-implemented methods and functions for processing EEG signals, e.g. independent component analysis (ICA), time/frequency analysis (TFA) and various visualization techniques. The toolbox also has a command line interface targeted at advanced users, and provides a sample EEG dataset that can be used along with the extensive tutorial to learn the software.
Figure 3.7: EEGLAB - MATLAB Toolbox
EEGLAB is an open-source platform that allows researchers to develop methods and implement algorithms as EEGLAB 'plug-ins'. The toolbox is available for all popular operating systems. It is best integrated with MATLAB 2014; for versions later than MATLAB 2015, some tweaks need to be made before using the toolbox[33].
3.2.3 OpenViBE
OpenViBE initially stood for Open Virtual Brain Environment, but since it is no longer limited to 'virtual environments', OpenViBE is no longer an acronym[34]. OpenViBE is a software platform for Brain-Computer Interfaces (BCIs). It is free and open-source software with capabilities to acquire, filter, process, classify and visualize brain signals in real time. The OpenViBE bundle contains various useful tools; two of them, the OpenViBE Acquisition Server and the OpenViBE Designer, are used in this project and discussed here.
OpenViBE Acquisition Server
The OpenViBE Acquisition Server, as shown in figure 3.8a, is used to communicate with the signal acquisition devices. It acts as a server that listens on a specific port (as set in the configuration) and forwards the acquired signals to the OpenViBE Designer. It does not communicate directly with the hardware, but with the drivers provided with the hardware device.
(a) OpenViBE Acquisition Server
(b) OpenViBE Acquisition Server Configurations
(c) OpenViBE Acquisition Server Configurations
Figure 3.8: OpenViBE Acquisition Server and Configurations Settings
The OpenViBE Acquisition Server has a graphical interface that can be used to adjust the configuration details. The GUI is a useful tool to configure the port number, the buffer size and the hardware device being used, chosen from a list of all supported devices. The OpenViBE Acquisition Server also allows fine-tuning of the signals before they are passed to the Designer.
OpenViBE Designer
The OpenViBE Designer, figure 3.9, is part of the OpenViBE bundle and helps in creating and executing OpenViBE procedures. It is a graphical user interface (GUI) that supports a 'drag and drop' method to build procedures. The Designer comes with a set of boxes, usually called 'box algorithms', which can be arranged on the Designer to create different procedures. These boxes are ready-to-use implementations of standard algorithms and other useful functions. Some of the boxes that are widely used in our procedures are described in the following section.
Figure 3.9: OpenViBE Designer
Acquisition Client
The Acquisition Client helps in acquiring EEG signals from the devices. It connects to the OpenViBE Acquisition Server and reads the data being captured. Along with the data, it can also read the experiment information, stimulations and localized channel data.
Generic Reader
This box is used to read stored data from files. It can read any data saved using the Generic Stream Writer box. The saved files can contain a number of streams and are generally saved in the ".ov" format.
Generic Writer
This box is used to store data into files in binary format. The Generic Writer can write any data with multiple streams into a single file. The files are generally saved in the ".ov" format.
(a) Acquisition Client (b) Generic Reader (c) Generic Writer
(d) Identity (e) Channel Selector (f) Temporal Filter
(g) Time Based Epoching (h) Stimulation Based Epoching (i) Feature Aggregator
Figure 3.10: Images for various OpenViBE boxes
Identity
The Identity box duplicates the input stream data into the output stream. There is a one-to-one correspondence between the input and the associated output stream. This helps to isolate the logic from the method of taking the input, thus allowing the signal acquisition box to be changed at any point in time.
Channel Selector
The Channel Selector helps to select or reject channels of the acquired data. The channels can be selected using their names or their index, i.e. their relative position in the data, starting from index 0.
Temporal Filter
The Temporal Filter filters the input data to a particular frequency range. It allows configuring the filter method (Butterworth, Chebyshev, Yule-Walker), the frequency band and also the kind of filter (low pass, high pass, band pass, band stop).
Time Based Epoching
Time Based Epoching is used to extract epochs, i.e. signal segments, of a chosen duration, with a configurable time offset between two consecutive epochs. It is used when the size of the incoming data blocks is not suitable for further processing.
Stimulation Based Epoching
Stimulation Based Epoching is much like the Time Based Epoching box, but it uses stimulations to generate epochs. An epoch is generated when a particular stimulation is received. This box also allows configuring a start offset of some milliseconds relative to the time of occurrence of the stimulation.
Feature Aggregator
The Feature Aggregator aggregates the various extracted features into one feature vector for use in the classifier. This box requires that all features given as its input have the same rate of data generation.
Lua
Lua is an embeddable scripting language. It is a portable, fast and lightweight interpreted language with automatic memory management and garbage collection. It is ideal for prototyping, rapid scripting and configuration.
Figure 3.11: Lua plugin
OpenViBE provides a mechanism to plug Lua scripts into the OpenViBE Designer. In this project, Lua scripts have been used to control the procedures by generating appropriate stimulations (data labels). Lua is also used to generate the graphic cues shown to the subject during data capture and to indicate recording of the signals.
3.3 Experiment Setup
Fifteen healthy subjects (ten male), all right-handed and between the ages of 23 and 29, volunteered for the experiment. Each subject provided written informed consent before participating in the experiment. None of the subjects had prior BCI training. The subjects were provided with information related only to the activities they performed; they were not informed about the experimental design or the hypothesis/objective of the study.
(a) Cross on screen (b) Left arrow on screen
(c) Right arrow on screen
Figure 3.12: Data Acquisition- Various stimulation shown to subject
The subjects performed or kinesthetically imagined left and right hand movements, and the EEG signals were recorded on all 16 channels using the Emotiv EPOC+ device and OpenViBE. The subjects were provided with stimuli on a GUI designed using the OpenViBE tool. They were provided with two stimuli: a cross, figure 3.12a, that marked the beginning of the trial, followed by a left or a right arrow, figures 3.12b and 3.12c, on which the task was performed, with the EEG signals from the movement/imagery recorded for approximately 4 seconds.
The experimental setup was done at the BCI LAB facility at IIT Kharagpur. Subjects were seated in a chair with their arms extended, resting on the desk, and their legs extended, resting on a footrest. The lab was well illuminated by artificial lights, and there was no background noise while the data was recorded. The stimuli were presented on a nineteen-inch LCD monitor kept in front of the subject, making a viewing angle of approximately 1.5 degrees. The subjects focused on the stimulus during the entire session. The experiments were performed in the presence of a researcher, with no interaction with the subject during the recording session.
3.4 Data Acquisition
The data was acquired from each subject over four sessions. Each session lasted about 20 to 30 minutes, including the time for experimental setup and data recording. The sessions were held on different days. Each session had 25 trials of left and another 25 trials of right hand movements to be performed/kinesthetically imagined by the subjects. The subjects were shown left and right arrows in random order on the screen and had to think about lifting/moving the corresponding arm.
Figure 3.13: Recording session in progress
Subjects were instructed to focus on the cues to minimize noisy readings. To minimize the artifacts in the recordings, subjects were asked to minimize eye blinks, jaw movements and head movements during recording. They were allowed to swallow, blink and adjust themselves to relax during the cross presentation. The duration of the cross presentation was adjusted for subjects if required, and was also varied randomly. A Lua stimulator, figure 3.11, was used to generate the stimuli randomly for left or right movement. For each trial, an arrow was randomly shown on the screen, based on which the subjects performed or imagined a right hand movement/squeeze or a left hand movement/squeeze respectively. The data was captured at 256 samples per second on all 16 channels, labeled as channels 1-14: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4; channels 15 and 16 (P3, P4) were used as reference channels.
3.5 Data Labeling
The data acquired from the experiment will be processed further, and finally a model will be created that determines the subject's intention based only on the captured EEG signals. This is a multi-step process that requires removal of noise and artifacts, preprocessing, extraction of features and finally building a model using classifiers. This follows a supervised learning approach, where the input to the classifier is correctly labeled; the classifier learns the rules for classification based on this labeled data. Thus, the captured data needs to be correctly labeled.
The data is recorded into a single file as a continuous stream of signals. The data is recorded in random order based on the stimuli the subject received, hence there is no clear demarcation of left and right signals in the recording. To ensure proper labeling of the data, the stimulations generated by the Lua script, i.e. OVTK_GDF_Right for a right hand movement and OVTK_GDF_Left for a left hand movement, are also recorded alongside the data in the file. The data for left vs. right movement can then be separated at any later point using the Stimulation Based Epoching box. The algorithm for generating the stimulations:
Algorithm 1 Random_Stimulus_Generation
1: Inputs: number_of_trials, baseline_duration, display_cue_duration
2: Output: A randomly generated sequence of stimulations, with an equal number of trials for Left and Right movement
3: Output_stimulation: OVTK_StimulationId_ExperimentStop {output stimulus for end of experiment}
4: for i = 1 to number_of_trials do
5:     Randomly fill table T with OVTK_GDF_Right and OVTK_GDF_Left
6: end for
7: Output_stimulation: OVTK_GDF_Start_Of_Trial {output stimulus for start of trial}
8: Output_stimulation: OVTK_StimulationId_BaselineStart {output stimulus for start of baseline}
   Ensure: EEG captured for baseline_duration
9: Output_stimulation: OVTK_StimulationId_BaselineStop
10: for i = 1 to 2 * number_of_trials do {generate stimulus for each trial}
11:     Output_stimulation: OVTK_GDF_Start_Of_Trial {output stimulus for start of trial}
12:     Output_stimulation: OVTK_GDF_Cross_On_Screen {output stimulus for displaying cross on screen}
13:     Output_stimulation: T[i], i.e. OVTK_GDF_Right or OVTK_GDF_Left {output stimulus for displaying the Left/Right arrow on screen}
    Ensure: Cross is displayed long enough for subject to relax
    Ensure: EEG is captured long enough after arrow is displayed
14:     Output_stimulation: OVTK_GDF_End_Of_Trial {output stimulus for end of trial}
15:     Wait for a small random amount of time before the next trial
16: end for
17: Output_stimulation: OVTK_StimulationId_ExperimentStop {output stimulus for end of experiment}
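The balanced table-filling step of the algorithm (steps 4-6) can be sketched in Python as well; a minimal sketch, where only the OVTK label strings come from the algorithm and the function name is illustrative:

```python
import random

def random_stimulus_sequence(number_of_trials, seed=None):
    """Build a balanced, shuffled table of trial labels, mirroring the
    table-filling step of Algorithm 1: equal numbers of left and right
    trials, presented in random order."""
    rng = random.Random(seed)
    table = (["OVTK_GDF_Left"] * number_of_trials +
             ["OVTK_GDF_Right"] * number_of_trials)
    rng.shuffle(table)
    return table

# 25 left + 25 right trials, as used in each recording session.
sequence = random_stimulus_sequence(25, seed=42)
```

Fixing the seed makes the sequence reproducible for debugging; in the actual sessions the order must remain unpredictable to the subject.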
3.6 Experimental Procedures
3.6.1 DATA Acquisition and DATA Labeling
Objective: To acquire the EEG signals from subjects
Boxes Used: Acquisition Client, Lua Stimulator, Identity, Graz Visualization, Generic Stream Writer, Player Controller.

Description of the Procedure:
The procedure aims to collect the EEG data using a 16-channel (14 EEG and 2 gyro channels) Emotiv EPOC+ device. The subjects are required to wear the Emotiv headset, which connects to a desktop computer via Bluetooth. The subjects are required to perform/imagine movements according to the stimulus provided.
The procedure consists of an Acquisition Client that connects to the OpenViBE Acquisition Server on port 1024 (configurable) and receives the EEG signals from the device. The Lua Stimulator executes the Lua script to generate the stimulations, which are converted to visual cues using the Graz Visualization box. The Graz Visualization box generates the left and right arrows and the crosses that appear on the screen. The data captured by the Acquisition Client and the stimulations from the Lua Stimulator are written into a single file as tagged data, using the Generic Stream Writer. The Identity box just passes the signals and stimulations as they arrive, without any modification or processing; this helps to keep the various parts of the procedure independent of each other.
Figure 3.14: Procedure for DATA Acquisition (from Emotiv EPOC+) and DATA Labeling
For example, the Identity box removes the dependency of the procedure on the writer or the reader/input method being used. The Player Controller terminates the session once the desired number of stimulations have been generated.
This procedure gives us an '.ov' file that consists of a continuous EEG recording tagged with the corresponding stimulations.
3.6.2 Data Consolidation
Objective: To accumulate the EEG signals collected over different sessions from subjects into a single data file
Boxes Used: Generic Stream Reader, Stimulation Filter, Signal Concatenation, Generic Stream Writer, Player Controller.
Description of Box:
Signal Concatenation Box: This box is used to concatenate the signals and the stimulations from various input streams into a single output stream. The input data are read in parallel and must have the same characteristics: frequency, channels and samples per second.
Figure 3.15: Signal Concatenation Box
Description of the Procedure:
This procedure reads the EEG signals and stimulations from the two input files in parallel and merges them into a single output stream. The data and stimulations stored in the data files are read using the Generic Stream Reader and passed through the Signal Concatenation box to get a single output stream for the signals and one for the stimulations.
Figure 3.16: Procedure for Data Consolidation - combining data from different sessionsinto single file
The stimulation stream from one of the readers is filtered using a Stimulation Filter to reject the session termination (OVTK_GDF_End_Of_Session) stimulation. This stimulation indicates the termination of the current session, and we need to ensure that only one such stimulation is present in the file.
This procedure generates a single file in the desired format ('.ov' in this case) that has all the signals from the two input files. The original files are not modified. The consolidated file will have only one session termination (OVTK_GDF_End_Of_Session) stimulation and thus can be used directly as input to other procedures.
3.6.3 Data Format Conversion
Objective: To convert the data from one format to another
Boxes Used: Generic Stream Reader, CSV File Writer, Timeout, EDF File Writer, Brainamp File Writer, GDF File Writer, Player Controller.
Description of Box:

EDF File Writer, Brainamp File Writer, GDF File Writer: These boxes are used to write the input stream into files of specific formats. The EDF File Writer writes EEG data in the European Data Format (EDF). The '.eeg' and '.vmrk' files are generated by the Brainamp File Writer, and the GDF File Writer writes data into a file in the GDF format.
(a) EDF File Writer, GDF File Writer and Brainamp File Writer Boxes (b) Timeout Box
Figure 3.17: Images of various OpenViBE boxes used
Timeout: This box is used to determine the end of a signal stream and is helpful for procedure control. The Timeout box generates a stimulation (configurable) after waiting a particular time period, which can be used by the Player Controller to stop/pause the procedure.
Description of the Procedure:
This procedure is useful when we wish to use the recorded data for visualization or in other software like MATLAB. The data is read from the file and written into another file using a different writer. We can use any of the supported writer boxes to get data in different output formats.

Figure 3.18: Procedure for Data Format Conversion - converting data from one format to another
A Timeout box is used in conjunction with a Player Controller to signal the termination of the input data stream. The '.ov', GDF and EDF file formats also record stimulations along with the signals; however, the stimulations will be lost if we convert to the CSV format.
This procedure converts OpenViBE's '.ov' format to the comma-separated values (CSV) file format.
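The loss of the stimulation stream in the CSV form can be illustrated with a small Python sketch using the standard csv module; the channel names and sample values here are made up:

```python
import csv
import io

def samples_to_csv(channel_names, samples):
    """Write EEG samples (one row per time step) as CSV text.

    Only the signal values survive: a stimulation/label stream has no
    natural place in this flat table, which is why the '.ov', GDF and
    EDF formats are preferred when the labels matter.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(channel_names)
    writer.writerows(samples)
    return buf.getvalue()

csv_text = samples_to_csv(["C3", "C4"], [[1.0, 2.0], [3.0, 4.0]])
```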
3.7 Summary
This section can be summarized as follows:
• Emotiv EPOC+ was used for data acquisition
– Data from all 16 channels (14 EEG and 2 reference) was collected
– Sampling frequency of 256 samples per second was used
• OpenViBE software was used for generating graphical cues for data acquisition
• Data was collected at the BCI LAB facility at IIT Kharagpur
– The data was collected in the presence of a researcher, with no interaction with the subject during the recording session
– The cues (left/right arrows) were displayed on a nineteen-inch monitor in random order
– The subjects were relaxed, and all data acquisition was done in a well-ventilated and illuminated quiet environment
• Data was collected from 15 naive subjects aged 23 to 29 years
– 10 healthy male subjects
– 5 healthy female subjects
– Data was collected in four sessions per subject
– Each session had 25 left and 25 right hand movements recorded continuously for about 12 minutes
• Lua script was used for procedure control and to generate stimulations for data labeling
Chapter 4
BCI Signal Preprocessing
Signal preprocessing aims to remove noise and enhance the information content of the EEG signal. The raw EEG signals captured are preprocessed before feature extraction and classification are performed. Another reason for preprocessing is the presence of artifacts, i.e. parts of the signal due to background brain activity, which may lead to incorrect conclusions. This chapter describes various techniques for EEG preprocessing, though not all of them are used in this project. The methods chosen for preprocessing depend on the noise levels present in the raw signals and also on the techniques that will be used in further processing of the data.
4.1 Preprocessing Techniques
Data preprocessing aims to improve the information content of the data in question. The result of preprocessing is a noise-free, enriched training set. Here we describe multiple preprocessing techniques that are used in BCI applications.
Referencing
The differences in the results of various studies are partially due to differences in referencing [35]. EEG signals are generated by neural activity. When EEG is measured at the cortex or scalp, the amplitude captured by a sensor is a relative measurement: it consists of activity at the sensor position, activity at the reference position and noise. Thus, the reference must be chosen such that the activity at that position is zero or minimal. In practice, the nose or earlobes are used as reference sites. The common ways of referencing are:
Common Reference
This is a widely used method for referencing. A single electrode/sensor position is used as the reference for all electrodes. The reference point should be far from each sensor position.
Common Average Reference(CAR)
This is based on the assumption that the sum of brain activity at any given time is zero; hence the average of the brain signals captured at all positions is subtracted from each electrode's measured signal.
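CAR amounts to one subtraction per channel per sample; a minimal pure-Python sketch (toy amplitudes, no particular montage assumed):

```python
def common_average_reference(sample):
    """Re-reference one multi-channel sample: subtract the average of
    all channels from every channel, so the channel sum becomes zero."""
    avg = sum(sample) / len(sample)
    return [value - avg for value in sample]

rereferenced = common_average_reference([10.0, 20.0, 30.0])
```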
Current Source Density (CSD)
This approach uses the Laplacian to measure the rate of change of the current flowing through the scalp. The difference between a particular electrode and its neighbors gives the current source density estimate.
Re-Sampling
A high sampling rate means more data, i.e. more memory, processing power and computation time are required. Thus, the data is typically down-sampled to a lower frequency (e.g. original frequency f/2 or f/4).
An EEG experiment will require the following amount of EEG recording to be processed:

Data to be processed = Number of Subjects * Number of Channels * Length of Sample in seconds * Sampling Rate * Number of Bits per channel    (4.1)

For a typical EEG recording using the EPOC+ with a sampling rate of 256 Hz, a recording session length of 10 min (600 sec), an A/D converter with 14 bits per channel, 14 channels and 20 subjects, the data to be processed will be:

Data to be processed = 20 * 14 * 600 * 256 * 14 = 602.112 Mb (megabits)    (4.2)
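Equation 4.2 can be checked directly in a few lines of Python:

```python
def eeg_data_bits(subjects, channels, seconds, sampling_rate, bits_per_sample):
    """Raw EEG data volume in bits, per equation 4.1."""
    return subjects * channels * seconds * sampling_rate * bits_per_sample

# 20 subjects, 14 channels, 10 min (600 s) at 256 Hz, 14-bit A/D converter.
bits = eeg_data_bits(20, 14, 600, 256, 14)
megabits = bits / 1e6  # matches the 602.112 Mb of equation 4.2
```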
Simple Temporal and Spatial Filters

The EEG signal quality can be greatly improved by the use of simple spatial or temporal filters. Filters are helpful in de-noising and are usually applied in the initial steps of preprocessing; e.g. a high-pass filter can be used to remove low frequencies (f < 0.1 Hz).
Temporal Filters
Band-pass, low-pass, high-pass and notch filters are the commonly used temporal filters. Notch filters reject a narrow frequency band and are useful for removing power-line noise. Similarly, band-pass or low-pass filters are used to filter out very high or very low frequency bands, leaving a particular frequency band. In this project we focus on the motor and sensorimotor rhythms, which lie in the 8 to 30 Hz frequency band; thus, a band-pass filter is used to select the relevant band.
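The effect of a band-pass can be illustrated by zeroing DFT bins outside 8-30 Hz; this naive O(n²) sketch is for illustration only — a real pipeline uses an IIR design such as Butterworth, as the OpenViBE Temporal Filter box does:

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform, enough for a demo."""
    n = len(x)
    sign = 2j if inverse else -2j
    out = [sum(x[t] * cmath.exp(sign * cmath.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def bandpass(signal, fs, lo=8.0, hi=30.0):
    """Zero every DFT bin outside the lo..hi Hz band, then invert."""
    n = len(signal)
    spectrum = dft(signal)
    for k in range(n):
        freq = min(k, n - k) * fs / n  # bin index -> physical frequency
        if not lo <= freq <= hi:
            spectrum[k] = 0
    return [v.real for v in dft(spectrum, inverse=True)]

# Demo: a 4 Hz + 16 Hz mixture sampled at 128 Hz; only 16 Hz is in-band.
fs, n = 128, 128
mixture = [math.sin(2 * math.pi * 4 * t / fs) + math.sin(2 * math.pi * 16 * t / fs)
           for t in range(n)]
filtered = bandpass(mixture, fs)
residual = max(abs(v - math.sin(2 * math.pi * 16 * t / fs))
               for t, v in enumerate(filtered))
```

After filtering, the 4 Hz component is gone and the 16 Hz component passes through essentially unchanged.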
Spatial Filters
Every body limb has a corresponding region in the brain that controls and coordinates it. Thus, finding a spatial filter that can discriminate between such regions is of great interest [36]. For example, to design a BCI component based on motor imagery of hand/feet movements, the relevant signals are located in the sensorimotor region. Hence, only the brain signals from this region, i.e. sensors C3 and C4, are sufficient. These signals are captured from the left and right sensorimotor cortex and need to be spatially filtered [37]. The Surface Laplacian (SL) filter and the Common Spatial Pattern (CSP) are the most used spatial filters for the removal of background noise from EEG.
Surface Laplacian(SL) Filtering
The Surface Laplacian generates a number of outputs from a given number of inputs, where each output channel is a linear combination of the input channels. Each sample is processed independently of past samples. The Surface Laplacian also increases topographical specificity and helps in spatially filtering out spatially broad features like volume-conduction effects. For example, take as input the 10 EEG channels C3, C4, FC3, FC4, C5, C1, C2, C6, CP3 and CP4, and compute a surface Laplacian around C3 and C4, with spatial filter coefficients 4, 0, -1, 0, -1, -1, 0, 0, -1, 0 for the first output and 0, 4, 0, -1, 0, 0, -1, -1, 0, -1 for the second. The filtered outputs on channels OC1 and OC2 will then be as follows [34]:

OC1 = 4*C3 + 0*C4 + (-1)*FC3 + 0*FC4 + (-1)*C5 + (-1)*C1 + 0*C2 + 0*C6 + (-1)*CP3 + 0*CP4
    = 4*C3 - FC3 - C5 - C1 - CP3    (4.3)

OC2 = 0*C3 + 4*C4 + 0*FC3 + (-1)*FC4 + 0*C5 + 0*C1 + (-1)*C2 + (-1)*C6 + 0*CP3 + (-1)*CP4
    = 4*C4 - FC4 - C2 - C6 - CP4    (4.4)
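The two outputs above are just a fixed coefficient matrix applied to every sample; a plain-Python sketch using exactly the coefficients of equations 4.3 and 4.4:

```python
# Input channel order: C3, C4, FC3, FC4, C5, C1, C2, C6, CP3, CP4
LAPLACIAN_WEIGHTS = [
    [4, 0, -1,  0, -1, -1,  0,  0, -1,  0],  # OC1: Laplacian around C3
    [0, 4,  0, -1,  0,  0, -1, -1,  0, -1],  # OC2: Laplacian around C4
]

def spatial_filter(sample, weights=LAPLACIAN_WEIGHTS):
    """Each output channel is a weighted sum of the input channels."""
    return [sum(w * v for w, v in zip(row, sample)) for row in weights]

# A sample where only C3 is active appears solely on OC1.
outputs = spatial_filter([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
```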
Common Spatial Pattern(CSP)
The Common Spatial Pattern algorithm is used to distinguish between two types of signals. The CSP algorithm maximizes the signal variance for one condition and minimizes the variance for the other. This method is useful for tasks like motor-imagery movements, i.e. left/right movements. The discrimination is based on the variance or power of the signals. The CSP algorithm relies on the calculation of the trace normalization of the input signals. CSP is a supervised learning approach that uses labeled data to distinguish between the two classes.
Given input data {X_c^i}, i = 1..K, where i indexes the trials for class c ∈ {1, 2}, and X_c is an N×T matrix, where N is the number of channels and T is the number of samples per channel, the goal of CSP is to find a linear transformation W (an N×M matrix) that gives M spatial filters as per:

x_CSP(t) = W^T x(t)    (4.5)

where x(t) is the input vector at time t for all the channels.
Independent Component Analysis (ICA)
Independent Component Analysis (ICA), unlike the above-mentioned techniques, can be considered an advanced technique. ICA is used when the acquired signals are linear combinations of signals from multiple sources and the noise has comparable amplitude.

Let X be the measured input signal vector; it can be considered a linear mixture of statistically independent sources inside the brain:

X = M Y    (4.6)

where M is an unknown mixing matrix and Y represents the vector of hidden independent sources [38]. ICA finds the hidden sources Y by calculating a matrix W such that

a = W X    (4.7)

and the components of the feature vector a are maximally statistically independent, i.e.

P(a) = Π_{t=1}^{M} P(a_t)    (4.8)
Artifact Detection
Artifacts deteriorate the quality of the signal and may lead to incorrect interpretation of the signals. Artifacts may occur at any point while recording the signals. For example, they can be caused by power interference or loose sensors. Another type of artifact arises from eye blinks and jaw movements; these can easily be recognized by the human eye due to their characteristic pattern and location of occurrence. The human heart is another source of artifacts, identifiable by their regularity and their coincidence with the ECG signal. Some of the popular methods to remove artifacts are listed as follows:
• Signal thresholding - Signal segments with amplitude higher than a threshold are categorized as artifacts. The threshold is computed using the standard deviation of the signal or can be chosen to be a constant.
• Pattern recognition - Specific patterns, for some well-defined artifacts, are used for training and subsequent removal of that artifact from the signals. This type of method is useful for artifacts such as eye-blinks, heartbeats, etc. Even if the artifact is not well known, a simple cluster analysis can be useful.
• Artifact property based - This method relies on identifying specific properties of EEG artifacts that can be used to identify and then remove the artifacts from the raw signals. For example, eye-blinks produce low-frequency rhythms (1-3 Hz) at the frontal sensors.
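The signal-thresholding rule above is simple to sketch in Python; the multiplier k is an arbitrary illustrative choice, not a value used in this project:

```python
import statistics

def artifact_indices(signal, k=4.0):
    """Flag samples whose absolute amplitude exceeds k standard
    deviations of the signal as artifacts."""
    threshold = k * statistics.pstdev(signal)
    return [i for i, v in enumerate(signal) if abs(v) > threshold]

# Mostly small values with one large spike (the artifact) at index 3.
flagged = artifact_indices([0.1, -0.2, 0.1, 9.0, -0.1, 0.2], k=2.0)
```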
4.2 Experimental Procedures
4.2.1 Generic Preprocessing of BCI Signal
Objective: To apply generic preprocessing techniques, such as a band-pass filter
Boxes Used: Generic Stream Reader, Temporal Filter, Signal Display, General Statistics Generator, Identity, Player Controller.
Description of Box:

General Statistics Generator: This box is used to log statistical details of the input signals, including the maximum, minimum and mean of the input signal. The statistics are generated for each channel and logged into an XML file.
Figure 4.1: General Statistics Generator
Description of the Procedure:
This procedure reads the previously recorded data and applies a band-pass filter with a range of 8-30 Hz. The procedure is designed only for visualizing the effect of the band-pass; the modified data is not recorded back to a file. Rather, this filter will be used in other procedures to the same effect.
At the end of this procedure we get a signal stream in which the artifacts due to electrical lines and other environmental disturbances (frequency > 30 Hz) have been removed. The output signal consists only of the alpha, beta and mu rhythms (frequency range 8-30 Hz).
4.2.2 Preprocessing the BCI Signal using CAR and ICA
Objective: To apply CAR and ICA independently on the EEG signal and determine the most feasible preprocessing method for our project.
Boxes Used: Generic Stream Reader, Temporal Filter, Common Average Reference, Independent Component Analysis (FastICA), Signal Display, General Statistics Generator, Identity, Player Controller.
Description of Boxes:
Common Average Reference: This box re-references the input signals by subtracting the common average of all channels at that time instance.
Independent Component Analysis (FastICA): This box decomposes the input signals into individual components by performing independent component analysis on the received EEG signal. Generally, a temporal filter is used in conjunction with this box to provide chunks of the signal to the ICA box for better performance.
Description of the Procedure:
This procedure reads the previously recorded data and applies a band-pass filter with a range of 8-30 Hz. The filtered signal, which consists only of the alpha, beta and mu rhythms (frequency range 8-30 Hz), is then further processed using the common average reference (CAR) or independent component analysis (ICA). The performance of the OpenViBE system, the quality of the output signal and the effect of the technique on further processing steps are taken into consideration.
At the end of this procedure we get a signal stream with better information content than the input signal, after being preprocessed using a band-pass filter and CAR or ICA.
4.2.3 Preprocessing BCI Signal using CSP and Regularized CSP
Objective: To train and apply the common spatial pattern (CSP) and regularized CSP preprocessing techniques on the EEG signal and determine the most feasible preprocessing method for our project.
Boxes Used: Generic Stream Reader, Temporal Filter, Spatial Filter, CSP Spatial Filter Trainer, Regularized CSP Spatial Filter Trainer, Stimulation Based Epoching, Signal Display, General Statistics Generator, Identity, Player Controller.
Description of Boxes:
CSP Spatial Filter Trainer: This box increases the variance difference between the two classes of signals by maximizing the variance for one class and minimizing it for the other. The filter reads inputs for the two classes and adjusts the weights of all channels of the input signals such that the signals of the two classes are best discriminated. The box computes a trace normalization.
Regularized CSP Spatial Filter Trainer: Similar to the CSP trainer box, this box generates a set of weights that maximizes the difference between the signals of the two classes. It relies on finding a linear transform of the data that brings out the distinctiveness of the two classes, i.e. the left and right signals.
Description of the Procedure:
This procedure reads the previously recorded data and applies a band-pass filter with a range of 8-30 Hz. The filtered signal, which consists only of the alpha, beta and mu rhythms (frequency range 8-30 Hz), is then separated into two halves, one for each class of movement, i.e. left and right.
Each signal class, separated using Stimulation Based Epoching, is then passed to the CSP Spatial Filter Trainer as shown in figure 4.6a. This trains the filter and generates an XML file. The CSP Spatial Filter Trainer also performs a dimensionality reduction from 14 or 16 channels to 6 (configurable) spatially filtered channels. The generated XML configuration file is then used with a Spatial Filter box, figure 4.6c, to apply the trained CSP filter to the input signals.
At the end of this procedure we get a configuration file containing the weights of each channel for the CSP or regularized CSP filter. This configuration file will be used in the procedures that follow for dimensionality reduction and for filtering the input signal.
Figure 4.2: Procedure for Preprocessing the BCI Signal (Generic)
(a) Common Average Reference (b) Independent Component Analysis (FastICA)
Figure 4.3: Images for OpenViBE boxes
(a) Screen-shot of the OpenViBE procedure - CAR
(b) Screen-shot of the OpenViBE procedure - ICA
Figure 4.4: Procedures for Preprocessing the BCI Signal(CAR & ICA)
(a) CSP Spatial Filter Trainer (b) Regularized CSP Spatial Filter Trainer
Figure 4.5: Images of OpenViBE boxes used
(a) Screen-shot of the OpenViBE procedure- CSP
(b) Screen-shot of the OpenViBE procedure- Regularized CSP
(c) Screen-shot of the OpenViBE procedure - Spatial Filter
Figure 4.6: Procedures for Preprocessing the BCI Signal (CSP & Regularized CSP)
4.3 Results
This chapter described various preprocessing techniques. Preprocessing is an important step as it increases the information content of the raw signal. The artifacts due to power-line interference and other artifacts above 50 Hz were easily removed using a notch filter, or a band-pass filter that allowed only signals in the range 8-30 Hz.
The signals were recorded at 256 Hz, and to reduce the computation time of further processing such as the Fourier transform, they were down-sampled to 128 Hz during the testing and development phase of the project.
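Down-sampling by an integer factor keeps every n-th sample; a toy Python sketch (a real pipeline low-pass filters first so that content above the new Nyquist frequency, 64 Hz here, does not alias):

```python
def downsample(signal, factor):
    """Keep every `factor`-th sample: 256 Hz -> 128 Hz for factor=2."""
    return signal[::factor]

halved = downsample(list(range(8)), 2)
```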
The optimal segment length for a left or right movement/intent-to-move recording was found to be 4 seconds. The length of the segment depends on the signal and on the kind of features we are looking for.
As described in the procedures, CAR, ICA, CSP and regularized CSP were the major preprocessing techniques used and tested in this study. CAR gave a minor improvement in the results, but using the common average of all channels meant that artifacts were still present and at times even magnified.
(a) Using - CAR
(b) Using - Regularized CSP
(c) Using - CSP
Figure 4.7: Unfiltered EEG signals & Filtered EEG signals after some preprocessing
(a) Radar curve - Bandpass (b) Radar curve - CAR
(c) Radar curve - CSP (d) Radar curve - Regularized CSP
Figure 4.8: Radar curves showing the EEG signal for each channel after applying various preprocessing methods
ICA, though theoretically the best preprocessing method, is very memory intensive. Its large computation time made it unfit for our project: we process real-time signals, and such heavy computation would unnecessarily slow down the system.
Based on our study, CSP and regularized CSP gave the best results. They also reduced the dimensionality from 14 channels to just 6. Using the variance between channels also limits random artifacts such as eye-blinks. Six CSP filter output channels gave the best results, so we used regularized CSP, together with the band-pass filter, as the preprocessing technique for our project.
Chapter 5
Features Extraction
Once signal preprocessing is completed, the information needed to classify individual states is extracted; this process is called feature extraction. Features are values that describe properties of the underlying signals. These properties should be sufficient to distinguish one class from another, and should remain unchanged, or vary only slightly, within a class. The descriptive properties of a signal are generally assembled into a feature vector, i.e. a vector of relevant features.
Figure 5.1: Steps leading to optimal Feature Vector
As depicted in Figure 5.1, determining and then extracting an optimal feature vector
is an important step of the study. Initially a large number of features is considered. These features are evaluated, and the subset of features that gives the best results is determined. In this project many features were tested; the following sections give a brief overview of them. Feature extraction techniques can be broadly divided into three categories:
• Time domain methods - these exploit the temporal information of the signal

• Frequency domain methods - these exploit the embedded frequency information

• Hybrid methods - these are based on the time-frequency domain and exploit both temporal and frequency information
5.1 Time domain methods
Temporal methods use the variation of the signals over time to characterize the acquired neurophysiological data.
EEG signal amplitude
The amplitude of the raw EEG signal is preprocessed and aggregated into a single feature vector. The EEG data is captured from various sensors/channels placed on the scalp. The signals are generally down-sampled, and their dimensionality is reduced (e.g. using spatial filters) during preprocessing, before they are aggregated into feature vectors.
AutoRegressive (AR) parameters
The AutoRegressive (AR) method[4] models the signal at any given time as a weighted sum of the signals at previous times plus some noise, mostly Gaussian white noise. Mathematically, it can be formulated as:

X(t) = a_1 X(t−1) + a_2 X(t−2) + ... + a_p X(t−p) + E_t   (5.1)

where X(t) is the measured signal at time t, E_t is the noise term and a_1 to a_p are the auto-regressive parameters.
AR finds the best values of a_i for a given series X(t). The series X(t) is assumed to be linear, stationary and zero-mean. OpenViBE uses Burg's method[39] to calculate the coefficients.
(a) AR feature vectors using 6 coefficients (b) AR feature vectors using 15 coefficients
Figure 5.2: AR feature vectors
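The AR fit of Eq. (5.1) can be sketched with a simple least-squares estimate. This is an illustration only: OpenViBE estimates the coefficients with Burg's method, which behaves better on short EEG segments.

```python
import numpy as np

def ar_coefficients(x, p):
    """Least-squares estimate of the AR(p) parameters in Eq. (5.1)."""
    x = np.asarray(x, dtype=float)
    # Each regression row holds the p previous samples X(t-1) ... X(t-p).
    A = np.vstack([x[t - p:t][::-1] for t in range(p, len(x))])
    b = x[p:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# Recover the parameters of a synthetic AR(2) process.
rng = np.random.default_rng(1)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
a_hat = ar_coefficients(x, 2)     # close to [0.5, -0.3]
```

Stacking such coefficient vectors per channel yields the AR feature vectors shown in Figure 5.2.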
5.2 Frequency domain methods
The human brain waves, or rhythms, are associated with different mental tasks. Each rhythm has a specific frequency, though the amplitude of the waves may change with the task. The rhythms synchronize with the stimulus frequency, so the frequency content is a key source of features. The main features that can be extracted are the power spectral density and the band power.
Power Spectral Density (PSD) Features
The Power Spectral Density (PSD) shows the distribution (strength) of the power of a signal as a function of frequency. It is generally calculated by taking the squared magnitude of the Fourier transform of the signal, or by computing the auto-correlation function and then transforming it. The PSD of a signal is a very useful feature for BCI study.
Band Power Features
Band power features give the power distribution in a given band. The signals are band-pass filtered to a particular frequency range, squared and finally averaged over a time window. A log transform is then applied so that the values are approximately normally distributed. Band power has been a useful feature for distinguishing motor imagery classes.
band power = log(1+X2) (5.2)
where X is the band-pass filtered signal with a specific frequency content.
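Equation (5.2) can be sketched directly with scipy. The Butterworth filter type and order are assumptions for illustration, not details taken from the thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(x, fs, band):
    """Band power feature of Eq. (5.2): band-pass filter, square,
    average over the window, then log(1 + .) to normalize."""
    lo, hi = band
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    return np.log(1.0 + np.mean(filtered ** 2))

# A 10 Hz (alpha-band) tone yields a larger 8-12 Hz band power
# than low-amplitude white noise.
fs = 128
t = np.arange(0, 4, 1 / fs)                   # one 4 s epoch
rng = np.random.default_rng(2)
noise = rng.normal(scale=0.1, size=t.size)
bp_alpha = band_power(np.sin(2 * np.pi * 10 * t) + noise, fs, (8, 12))
bp_noise = band_power(noise, fs, (8, 12))
```

Computing this once per rhythm band gives one band-power entry per band in the feature vector.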
Valence
The valence of an EEG signal is the difference of the alpha-to-beta power ratios along the valence plane, i.e. the difference between the power ratio in the left hemisphere of the brain and the power ratio in the right hemisphere[4].
Valence = α_F8/β_F8 − α_F7/β_F7   (5.3)
Figure 5.3: Valence as a feature using channel FC5
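A minimal sketch of Eq. (5.3), assuming alpha = 8-12 Hz and beta = 13-30 Hz bands and a Welch PSD estimate (the band limits and the estimator are assumptions, not specified by the thesis):

```python
import numpy as np
from scipy.signal import welch

def band_power_psd(x, fs, band):
    """Average Welch-PSD power inside a frequency band."""
    f, p = welch(x, fs=fs, nperseg=256)
    mask = (f >= band[0]) & (f <= band[1])
    return p[mask].mean()

def valence(f8, f7, fs=128):
    """Eq. (5.3): alpha/beta power ratio of channel F8 minus that of F7."""
    alpha, beta = (8, 12), (13, 30)
    return (band_power_psd(f8, fs, alpha) / band_power_psd(f8, fs, beta)
            - band_power_psd(f7, fs, alpha) / band_power_psd(f7, fs, beta))

# Synthetic check: an alpha-heavy F8 and a beta-heavy F7 give positive valence.
fs = 128
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)
f8 = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.05, size=t.size)
f7 = np.sin(2 * np.pi * 20 * t) + rng.normal(scale=0.05, size=t.size)
v = valence(f8, f7)
```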
5.3 Time-frequency representations
These methods extract information present in both the time and frequency domains. They help identify (sudden) changes in the temporal domain while still analyzing the frequency domain. The Short-Time Fourier Transform (STFT) and wavelets are useful for such feature extraction.
Short-Time Fourier Transform (STFT)
Taking the Fourier transform of a signal windowed over a short time period gives the Short-Time Fourier Transform (STFT)[4]. The window size is fixed, which results in identical temporal and frequency resolution throughout the signal.
Let the signal be x(n) and the window be w; the STFT X(n, ω) is given by:

X(n, ω) = Σ_{m=−∞}^{+∞} x(m) w(m − n) e^{−jωm}   (5.4)
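Eq. (5.4) in practice: scipy's `stft` slides a window over the signal and applies an FFT per window. A frequency sweep shows why this matters: the energy moves up in frequency over time, which a single Fourier transform cannot localize. The window length here is an arbitrary illustration.

```python
import numpy as np
from scipy.signal import stft

fs = 128
t = np.arange(0, 4, 1 / fs)
# Chirp with instantaneous frequency 5 + 10 t Hz (5 -> 45 Hz over 4 s).
x = np.sin(2 * np.pi * (5 + 5 * t) * t)

f, seg_times, Z = stft(x, fs=fs, nperseg=64)
magnitude = np.abs(Z)        # one spectrum per segment: (freq bins, segments)

# The dominant frequency rises from early to late segments.
early_peak = f[np.argmax(magnitude[:, 1])]
late_peak = f[np.argmax(magnitude[:, -2])]
```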
Wavelet transform(WT)
EEG signals are non-stationary, i.e. their statistical properties change over time, so STFT features are useful. However, the window size is fixed and is difficult to choose: a smaller window gives good temporal resolution, while a larger window gives better frequency information. Further, the Fourier transform does not represent finite and non-periodic signals correctly[4].
The wavelet transform (WT) provides a better trade-off between frequency and temporal resolution. It uses finite basis functions, known as wavelets, rather than sine and cosine functions. The wavelet transform can be expressed as:
F(a, b) = ∫_{−∞}^{+∞} f(x) φ*_{(a,b)}(x) dx   (5.5)

where * denotes the complex conjugate and φ_{(a,b)} is the wavelet (basis) function.
5.4 Statistical Features
Statistical features are based on the amplitude and basic shape of the signals. The characteristic properties of the amplitude distribution define the statistical feature vector. The following statistical features were extracted as part of this project:
• Mean
• Standard deviation
• Max Value
• Min Value
• Median value
5.5 Entropy
Entropy is a measure of the complexity of the EEG signal. The spectral entropy H_S is calculated as follows:
H_S(X) = −(1 / log N_f) Σ_f P_f(X) log_e P_f(X)   (5.6)

where P_f is the estimated PSD of the EEG segment X and N_f is the number of frequency components in the PSD estimate.
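Eq. (5.6) can be sketched with a Welch PSD estimate (the estimator and its parameters are assumptions for illustration). The 1/log N_f scaling keeps the result in [0, 1]: white noise with a nearly flat spectrum scores close to 1, while a pure tone that concentrates its power in one bin scores close to 0.

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs):
    """Eq. (5.6): entropy of the normalized PSD, scaled by 1/log(Nf)."""
    _, psd = welch(x, fs=fs, nperseg=256)
    p = psd / psd.sum()        # normalize the PSD to a distribution
    p = p[p > 0]               # avoid log(0)
    return -np.sum(p * np.log(p)) / np.log(len(psd))

rng = np.random.default_rng(3)
fs = 128
noise = rng.normal(size=4 * fs)
tone = np.sin(2 * np.pi * 10 * np.arange(4 * fs) / fs)
h_noise = spectral_entropy(noise, fs)   # near 1: power spread out
h_tone = spectral_entropy(tone, fs)     # near 0: power concentrated
```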
5.6 Hilbert Transform
The Hilbert transform[4] is a linear operation that generates H(u)(t) with the same domain as the input signal u(t), and it is used to derive the analytic representation of a signal. An "analytic" (complex-valued) signal Y(t) can be constructed from a real-valued input signal y(t) as:
Y(t) = y(t) + j h(t)   (5.7)
where Y(t) is the analytic signal constructed from y(t) and its Hilbert transform, y(t) is the input signal, and h(t) is the Hilbert transform of the input signal.
The real and imaginary parts can be expressed in polar coordinates as:
Y(t) = A(t) exp[jψ(t)]   (5.8)
where A(t) is the "envelope" or amplitude of the analytic signal and ψ(t) is its phase.
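Eqs. (5.7)-(5.8) map directly onto `scipy.signal.hilbert`, which returns the analytic signal: the original signal in its real part, the Hilbert transform in its imaginary part, with modulus A(t) and angle ψ(t). The amplitude-modulated test signal below is an illustrative assumption.

```python
import numpy as np
from scipy.signal import hilbert

fs = 128
t = np.arange(0, 4, 1 / fs)
carrier = np.sin(2 * np.pi * 10 * t)                  # 10 Hz "rhythm"
envelope_true = 1.0 + 0.5 * np.sin(2 * np.pi * 1 * t) # slow modulation
y = envelope_true * carrier

analytic = hilbert(y)                  # Y(t) = y(t) + j h(t), Eq. (5.7)
envelope = np.abs(analytic)            # A(t), Eq. (5.8)
phase = np.unwrap(np.angle(analytic))  # psi(t), Eq. (5.8)
```

The envelope recovers the slow modulation without the 10 Hz carrier, which is why envelope and phase are useful alongside the transformed signal itself.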
5.7 Experimental Procedures
5.7.1 Features Extraction
Objective: To extract various features from the EEG signal, and to identify a minimal subset of the extracted features that gives the best accuracy.
Boxes Used: Generic Stream Reader, Temporal Filter, Time Based Epoching, Simple DSP, Signal Average, Spatial Filter, Identity, Stimulation Based Epoching, Feature Aggregator, Classifier Trainer, Matrix Display, Hilbert Transform, Autoregressive Coefficients, Spectral Analysis (FFT), Spectral Average, Player Controller.
Description of Boxes:
Hilbert Transform: This box outputs the Hilbert transform of the input EEG signal. Along with the Hilbert transform, the envelope and phase are also calculated from the discrete-time analytic signal[34].
x_a(t) = x(t) + i · H(x)(t)   (5.9)

where x(t) is the input signal, H(x) is its Hilbert transform and i is the imaginary unit.
(a) Hilbert Transform (b) Autoregressive Coefficients
(c) Feature Aggregator (d) Classifier Trainer
Figure 5.4: Images of the OpenViBE boxes used
Autoregressive Coefficients: This box calculates the coefficients of the AutoRegressive (AR) model of the input signal. The AR model relates the current value of the signal to its own previous values.
x_t = Σ_{i=1}^{N} a_i x_{t−i} + ε_t   (5.10)

where a_i are the AR coefficients or model parameters, x_t is the input signal, x_{t−i} are its previous values, N is the model order and ε_t is the residue, assumed to be Gaussian white noise.
Feature Aggregator: This box produces a single feature vector by merging chunks of the input feature streams. The resulting feature vector can be used for classification. For the box to work properly, the input features must share properties such as frequency and features per second, and every input of the box must be active at least once during the session.
Classifier Trainer: This box trains a classifier model. It also performs k-fold cross-validation to estimate the accuracy of the trained classifier. The box can build a classifier using an SVM, LDA or neural network model. It performs supervised learning, taking as input a number of features labeled with their respective classes. The box generates a configuration file that defines the model and gives a k-fold estimate of the accuracy. The produced configuration file is then used for on-line classification of signals from the test data.
Description of the Procedures:
This section describes the procedures created to extract various features and then to select a minimal subset of those features that gave the best results. Feature selection was based on the k-fold (10-fold) cross-validation accuracy of the generated classifier model; the accuracy details are described in the results section. Later, while working with real-time on-line signals, we found that the k-fold test results agree with the classification results.
At the end of each procedure execution we obtain a classifier configuration file in XML format. This file defines the model obtained by training the classifier (we used SVM and LDA classifiers) and is used for classification of unseen signals.
The procedure in Figure 5.5a is used to obtain the signal power of each band as a feature vector. The entire EEG frequency spectrum is passed through band-pass filters to obtain signals in the frequency ranges of the alpha, beta, delta, gamma and mu brain rhythms. Each frequency band is subdivided into two halves for better resolution. The band power is then calculated and used for training the classifier.
(a) Features - alpha, beta, delta, gamma and mu band power
(b) Screen-shot of the OpenViBE procedure
Figure 5.5: Procedure for Features Extraction
(a) Screen-shot of the OpenViBE procedure
(b) Screen-shot of the OpenViBE procedure
Figure 5.6: Procedure for Features Extraction
Similarly, in the procedure in Figure 5.5b, Hilbert transform features, statistical features and Fourier features are used. The EEG signal is preprocessed using the trained CSP parameters and is then band-passed to 8-30 Hz. Stimulation-based epoching and temporal filters are used to find the epochs and to label the input data with the appropriate stimulation for either a Left or a Right movement/intended movement. The various features, individually and in various combinations, are fed to the classifier to generate a model. Only the features and combinations of features that gave good results under k-fold cross-validation were considered further.
The procedures in Figures 5.6a and 5.6b use the AR coefficients, the band power of the 8-30 Hz band, Hilbert features and FFT features in various combinations. The process in each procedure remains the same: the raw EEG signals are preprocessed, a band-pass filter is applied, and the signals are tagged using stimulation-based epoching and temporal filters. The next step is to extract the features and use them to train the classifier. The features used in our study are summarized in the next section.
5.8 Results
| Features Extracted | Description | Type of Feature |
| Minimum, Maximum | The minimum and maximum value of the signal | Statistical |
| Mean | (1/N) Σ_{i=1}^{N} x_i | Statistical |
| Median | Middle value of the signal | Statistical |
| Standard Deviation | sqrt((1/(N−1)) Σ_{i=1}^{N} (x_i − x̄)²) | Statistical |
| Valence | α_F5/β_F5 − α_F7/β_F7 | Frequency |
| α to β ratio | power(α)/power(β) | Frequency |
| Band Power (alpha, beta, mu, delta, gamma) | log(1 + X²) calculated for each band separately | Frequency |
| Band Power [8-30 Hz] | log(1 + X²) | Frequency |
| FFT | Only the amplitude of the Fourier-transformed signal was used | Frequency |
| Hilbert Transform | Phase, envelope and the transformed signal were used as features | Hybrid method |
| AR Coefficients | AR coefficients calculated with 6, 10, 14 and 20 coefficients | Time Domain |

Table 5.1: Summary of all features extracted
Although a large set of features can be extracted using different methods for the BCI component, it is necessary to identify the most efficient feature set. Feature extraction and feature selection are thus the most important activities of the entire project: the accuracy of the study, and the conclusions drawn, depend greatly on the features used to represent each chunk of signal for classification. In order to build an effective system, we need a small set of features that gives the best results. For our study, to classify the left and right hand movement/intention of movement, the features listed in Table 5.1 were extracted.
These extracted features were then evaluated individually and in combination with other features, and finally a subset was selected for further processing. The features were evaluated via the k-fold cross-validation accuracy of the classifier trained on them. The LDA and SVM classifiers, explained in detail in the next chapter, were used. Figure 5.7 shows one such accuracy measurement for the procedure of Figure 5.5a using 5-fold cross-validation.
Figure 5.7: 5-fold cross-validation result with 50% accuracy
The following table summarizes the accuracy obtained using the LDA and SVM classifiers for each feature and combination of features.
| Features | Accuracy using SVM (%) | Accuracy using LDA (%) |
| alpha, beta, delta, gamma, mu Band Power | 58.2 | 52.5 |
| Statistical Features | 53.5 | 50 |
| Valence | 46 | 34 |
| α-to-β ratio | 40 | 38.5 |
| Signal Band Power [8-30 Hz] | 62 | 56 |
| FFT Coefficient | 77 | 72 |
| Hilbert Transform Features | 60 | 53.7 |
| Autoregressive Coefficients, 6 coefficients | 60 | 52 |
| Autoregressive Coefficients, 10 coefficients | 55 | 42 |
| Autoregressive Coefficients, 14 coefficients | 65 | 63 |
| Autoregressive Coefficients, 20 coefficients | 83 | 78 |
| Band Power + α-to-β ratio + Statistical Features + Valence | 57 | 53 |
| Signal Band Power + Hilbert Transform Features + AR features | 75 | 70 |
| Signal Band Power + Hilbert Transform Features + FFT Features + AR features | 85 | 82.5 |

Table 5.2: Accuracy of various features extracted, using 10-fold cross-validation on the LDA and SVM classifiers
Based on the results observed, the following features were selected to be used forclassification of the EEG signals.
• Autoregressive Coefficients - 20 coefficients were used
• Hilbert Transform features - transformed signal, envelope and phase
• Band Power [8-30 Hz] defined as log(1 + x²)
• Amplitude of FFT was used
The combination of the above features gave an accuracy of 85% using LDA and 85% using SVM classifiers. Figure 5.8 shows the accuracy under the 10-fold cross-validation test.
Figure 5.8: 5-fold cross-validation result with 82.5% accuracy
Chapter 6
Modeling and Application
Classifiers build a model from the training data to distinguish between classes. Once created, the model can be used to label unseen (testing) data. For the unseen data to be classified correctly, the model built must be robust and accurate; such a model can even be used for real-time on-line classification of captured EEG signals. This thesis investigates the performance of the Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) classifiers. Both are linear classifiers that use linear functions to differentiate classes.
6.1 Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis (LDA)[4] is also known as Fisher's linear discriminant. LDA separates the data of different classes using hyperplanes. For two-class problems the decision depends on which side of the hyperplane the feature vector falls. The technique is computationally light, involving few mathematical operations, and is therefore suitable for on-line systems such as ours.
LDA is based on a simple probabilistic model. Given a training data set X with class label y = k, the class-conditional distribution of X for class k is
P(X |y = k) (6.1)
The prediction for unseen data can be given by:
P(y = k | X) = P(X | y = k) P(y = k) / P(X) = P(X | y = k) P(y = k) / Σ_l P(X | y = l) P(y = l)   (6.2)
Modelling P(X | y = k) as a Gaussian distribution we get:
p(X | y = k) = 1 / ((2π)^{n/2} |Σ_k|^{1/2}) · exp( −(1/2) (X − μ_k)^t Σ_k^{−1} (X − μ_k) )   (6.3)
Now, using this, we can find a linear decision surface separating two classes by comparing the log-probability ratio log[P(y = k|X)/P(y = l|X)]:
log( P(y = k | X) / P(y = l | X) ) = 0 ⇔ (μ_k − μ_l)^t Σ^{−1} X = (1/2)(μ_k^t Σ^{−1} μ_k − μ_l^t Σ^{−1} μ_l)   (6.4)
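The two-class rule of Eqs. (6.1)-(6.4) reduces to a handful of numpy operations. The sketch below assumes equal class priors (so the prior term drops out of Eq. (6.4)) and a pooled covariance estimate; it is an illustration of the math, not the OpenViBE implementation.

```python
import numpy as np

class TwoClassLDA:
    """Gaussian classes with a shared covariance give the linear
    decision surface of Eq. (6.4). Assumes equal class priors."""

    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        self.mu0, self.mu1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled (shared) covariance estimate.
        cov = (np.cov(X0.T) * (len(X0) - 1)
               + np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
        icov = np.linalg.inv(cov)
        # Hyperplane w.x + b = 0 from the log-ratio in Eq. (6.4).
        self.w = icov @ (self.mu1 - self.mu0)
        self.b = -0.5 * (self.mu1 @ icov @ self.mu1
                         - self.mu0 @ icov @ self.mu0)
        return self

    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

# Synthetic "Left"/"Right" feature vectors with separated means.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
y = np.r_[np.zeros(100, int), np.ones(100, int)]
acc = (TwoClassLDA().fit(X, y).predict(X) == y).mean()
```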
6.2 Support Vector Machines (SVM)
The SVM classifier also uses a discriminant hyperplane to identify classes. It constructs hyperplanes in a multidimensional space, based on the class tags, to differentiate between classes[4].
SVM is based on the concepts of decision hyperplanes and decision boundaries. As shown in Figure 6.1, the inputs belong to either the STAR-shaped or the TRIANGLE-shaped class. A decision plane, i.e. a separating line, defines a boundary such that all objects to its left are STAR-shaped and those to its right are TRIANGLE-shaped. Any new object, shown circled, is classified depending on which side of the line it falls. Training an SVM involves minimizing the error function:
(1/2) w^t w + C Σ_{i=1}^{N} ξ_i   (6.5)
subject to the constraints:
y_i (w^t φ(x_i) + b) ≥ 1 − ξ_i  and  ξ_i ≥ 0,  i = 1, ..., N   (6.6)
where C is the capacity constant, which should be chosen with care to avoid over-fitting, w is the vector of coefficients and b is a constant; the slack variables ξ_i handle non-separable inputs. There are N training cases, indexed by i. The class labels are denoted y_i ∈ {±1}, and φ is the kernel that maps the data from input space to feature space.
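For a linear kernel, Eq. (6.5) can be minimized with plain sub-gradient descent, as in this numpy sketch. It is an illustration of the objective only; production systems use a dedicated solver, and all hyperparameters below are arbitrary assumptions.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, epochs=200, lr=0.01):
    """Sub-gradient descent on (1/2)||w||^2 + C * sum of hinge losses,
    i.e. Eq. (6.5) with a linear kernel. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                    # samples with xi_i > 0
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two well-separated clouds standing in for the STAR/TRIANGLE classes.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.r_[-np.ones(100), np.ones(100)]
w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
```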
Figure 6.1: SVM hyperplane i.e. a line separating two classes represented as STAR andTRIANGLE shapes
6.3 Modeling with OpenViBE
6.3.1 Off-line Classification
Objective: To classify recorded unseen EEG data and to evaluate the performance of the classifier.
Boxes Used: Generic Stream Reader, Temporal Filter, Time Based Epoching, Simple DSP, Signal Average, Spatial Filter, Identity, Stimulation Based Epoching, Feature Aggregator, Classifier Processor, Matrix Display, Hilbert Transform, Autoregressive Coefficients, Spectral Analysis (FFT), Spectral Average, Confusion Matrix, Kappa Coefficient, ROC Curve, Lua Simulator, Stream Switch, Player Controller.
Description of Boxes:
Confusion Matrix: This box creates the confusion matrix, given the actual stimulations and the classifier-generated tags for the test data set. The generated confusion matrix can then be used for accuracy measurements.
Matrix Display: This box displays the input stream as a matrix, which is useful for visualizing the confusion matrix. It can display any streamed data as a matrix on screen.
(a) Screen-shot of the OpenViBE procedure - Evaluating Classifier
(b) Screen-shot of the OpenViBE procedure - Evaluating Classifier with voting
Figure 6.2: Procedure for Classification (Off-line)
Kappa Coefficient (κ): This box takes the classifier-predicted stimulation and the received stimulation and calculates the kappa coefficient.
ROC Curve: This box takes the classifier's probability measure for the predicted stimulation, together with the received stimulation, and generates the corresponding ROC curve.
(a) Confusion Matrix (b) Matrix Display
(c) ROC Curve (d) Kappa Coefficient
Figure 6.3: OpenVIBE boxes used for evaluating classifier performance
Figure 6.4: Classifier Processor
Classifier Processor: This box loads the classifier parameters generated during training. The processor then classifies each received feature vector according to the trained model. Along with the label/stimulation, the classifier also outputs the probability with which the decision was made and the distance of the test sample from the hyperplane.
Description of the Procedure:
The procedures described in Figure 6.2 are used to evaluate the classifier accuracy on off-line EEG signals. The term off-line underlines the fact that the signals were recorded beforehand and are not real-time signals. The classifiers used were LDA and SVM. Once the relevant set of features was selected, a classifier model was built. The built model is saved as an XML configuration file that can be loaded into a Classifier Processor to classify the test data. A Lua Simulator controls the Stream Switch box, which controls the flow of the data stream into the classifier; only stimulations marked Left/Right are allowed to pass through to the classifier.
A voting-based approach was also used to evaluate the outcome. In this approach the duration of the epoch window is reduced, so that the 4 s of recording generate multiple epochs. This results in multiple classification results for a single movement; a vote-based stimulation box then takes the majority of the generated tags. This method slightly improves the accuracy.
This procedure measures the accuracy of the classifier, generating a kappa value, a confusion matrix and an ROC curve.
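The voting step itself is tiny. A sketch of combining the per-epoch classifier outputs for one movement into a single decision:

```python
from collections import Counter

def majority_vote(labels):
    """Majority decision over the per-epoch classifier outputs
    produced for a single 4 s movement recording."""
    return Counter(labels).most_common(1)[0][0]

# Three of the four sub-epoch decisions say "Left": the movement is Left.
decision = majority_vote(["Left", "Left", "Right", "Left"])  # -> "Left"
```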
6.3.2 On-line Classification
Objective: To classify unseen EEG data captured in real time and to evaluate the performance of the classifier.
Boxes Used: Acquisition Client, Temporal Filter, Time Based Epoching, Simple DSP, Signal Average, Spatial Filter, Identity, Stimulation Based Epoching, Feature Aggregator, Classifier Processor, Matrix Display, Hilbert Transform, Autoregressive Coefficients, Spectral Analysis (FFT), Spectral Average, Confusion Matrix, Kappa Coefficient, ROC Curve, Lua Simulator, Stream Switch, Player Controller.
Description of the Procedure:
The procedures described here are used to evaluate the classifier accuracy on on-line EEG signals. The signals are captured in real time from the EEG device, i.e. the EMOTIV EPOC+, through the Acquisition Client. The signals follow the same path as in the previous procedures: a band-pass filter, followed by a CSP filter, after which a feature vector is generated. The stimulations, on the other hand, pass through a Stream Switch, which controls the flow of signals into the classifier.
Lua simulators are used to generate the cues for the subject during recording. In order to generate the confusion matrix, the acquired data must also be tagged. The captured data, which is labeled, is stripped of its label: the Left/Right stimuli are converted into 'Correct' and all others into 'Incorrect' stimuli. Only the signals labeled 'Correct' are processed further and classified. The tag generated by the classifier is compared to the recorded tag; this comparison produces the confusion matrix. Similarly, the probability with which the classifier generated the tags is used to generate the ROC curve.
This procedure measures the accuracy of the classifier, generating a kappa value, a confusion matrix and an ROC curve.

Figure 6.5: Procedure for Classification (On-line)
6.4 Application
The last step in the processing of an EEG signal is feature translation. Translation algorithms convert the extracted features into device/application control commands, i.e. from independent variables to dependent variables. They must also ensure that the achievable range of the specific features from the disabled user covers the full range of interface control.
Figure 6.6 shows screen-shots of a demo application created as part of this project. The aim of the study was to develop a BCI-based HCI component, primarily a text-entry-like system. The BCI component consists of EEG signal processing, modeling of the data, and classification/translation of the received EEG signals into useful commands.
The application developed is a prototype capable of opening any other software application on the host computer, of signaling events through a sound, and of displaying the outcome on the screen. The application is built solely on the OpenViBE platform and can be extended to any application based on binary decision making.
The application procedure uses an Acquisition Client to capture the EEG signals from the device. To capture the signals, the application must be ready and sensing, and the subject must be aware of this. This is achieved using a Lua script: the script generates events/stimulations activating the device to sense EEG at fixed intervals. The stimulations are shown to the subject in parallel, so both the system and the subject know when the device is ready to listen to the subject's EEG.
The captured real-time signal is processed in the same manner as during off-line model building. The EEG signal passes through a band-pass filter to restrict it to the 8-30 Hz frequency range. The signal still comprises 14 channels; it is preprocessed and reduced to 6 channels using the CSP filter trained earlier off-line. The next step is to identify the epochs and classify them.
Another Lua simulator is used to control the flow of signals to the classifier. The Lua script converts the Left/Right stimuli into 'Correct' and all others into 'Incorrect' stimuli. Only the signals labeled 'Correct' are processed further and classified; a Stream Switch box controls the flow of 'Correct' EEG signals into the classifier.

(a) Screen-shot of the OpenViBE procedure for the Application

(b) Display of the Cue and the Outcome

Figure 6.6: BCI Application

Figure 6.7: Block Diagram of the Application
The same set of features used to build the model is then extracted from the incoming signals. For our project we use the Hilbert transform, the auto-regressive coefficients (20), the band power (8-30 Hz) and the FFT amplitude as the feature vector. The Classifier Processor classifies each incoming feature vector using the trained model, and the outcome of the classifier is then translated into a command.
The application displays a fish as a cue to signal active listening mode. In this mode the EEG signals are captured and sent for classification. Once the EEG signals are captured (for 4 seconds in our project), they are translated into application commands. A tiger is displayed if the user intended or actually performed a left hand movement, and a rhinoceros for a right movement. The application can be further configured to launch another application and to additionally play an audio sound for these Left/Right movements.
6.5 Results
The following measures were taken into account when evaluating the accuracy of the classifier.
Kappa Coefficient
Kappa measures how well the classifier performed compared with a random classification, i.e. a classification simply by chance. It is a more robust accuracy measure than a percentage, as it takes into account the agreement occurring by chance. Kappa is given as:
κ = (p_o − p_e) / (1 − p_e) = 1 − (1 − p_o) / (1 − p_e)   (6.7)
where p_o is the observed probability of agreement and p_e is the hypothetical probability of chance agreement.
A model with a large difference between its accuracy and the null error rate will have a high kappa. In general, kappa values above 0.75 are excellent, 0.40 to 0.75 fair to good, and below 0.40 poor.
Confusion Matrix and Accuracy
A confusion matrix is used to describe the performance of a classifier. It is a table showing how a set of test data was classified, where the true labels of the test data are known.
Given a set of test data T with possible labels k1 (positive) and k2 (negative), consider a sample t1 whose true label is k1. A binary classifier may classify t1 as k1 (True Positive) or as k2 (False Negative). Similarly, a sample whose true label is k2 may be classified as k2 (True Negative) or as k1 (False Positive). This information in tabular form gives the confusion matrix.
Accuracy describes how often the classifier is correct, and is obtained from the confusion matrix as:
Accuracy = (True Positive (TP) + True Negative (TN)) / Total   (6.8)
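Eqs. (6.7) and (6.8) both fall out of the confusion matrix directly. A minimal sketch for the binary case (labels assumed to be 0/1 for illustration):

```python
import numpy as np

def confusion_and_scores(y_true, y_pred):
    """2x2 confusion matrix, accuracy (Eq. 6.8) and kappa (Eq. 6.7)
    for binary labels 0/1."""
    cm = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    po = np.trace(cm) / total                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / total**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return cm, po, kappa                            # accuracy == po

y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 1, 0, 0]
cm, acc, kappa = confusion_and_scores(y_true, y_pred)
# acc = 6/8 = 0.75; with balanced marginals pe = 0.5, so kappa = 0.5.
```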
Receiver Operating Characteristic (ROC) curve
The ROC curve is the most commonly used way to visualize the performance of a binary classifier. It is a plot of the True Positive Rate against the False Positive Rate.
True Positive Rate = True Positives / All Positives   (6.9)

False Positive Rate = False Positives / All Negatives   (6.10)
A classifier whose ROC curve lies close to the diagonal (y = x) separates the two classes poorly, whereas a good classifier has a curve far from the diagonal, approaching the upper-left corner of the graph. The area under the ROC curve also gives an accuracy measure for the classifier.
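The curve is obtained by sweeping the decision threshold over the classifier's probability scores and applying Eqs. (6.9)-(6.10) at each threshold. A minimal numpy sketch (it ignores tied scores for simplicity):

```python
import numpy as np

def roc_curve(scores, labels):
    """TPR/FPR pairs from sweeping the threshold over the scores.
    labels: 1 for positives, 0 for negatives."""
    order = np.argsort(scores)[::-1]         # highest score first
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()         # Eq. (6.9)
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # Eq. (6.10)
    return np.r_[0.0, fpr], np.r_[0.0, tpr]

def auc(fpr, tpr):
    """Area under the ROC curve via the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# A perfect ranking of positives above negatives gives an AUC near 1.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
fpr, tpr = roc_curve(scores, labels)
area = auc(fpr, tpr)
```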
A binary classifier was used to build the model, since there were just two classes: one for a Left movement/intention of movement and one for Right. The linear classifiers Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) were used in our study. Once the feature set was selected, the LDA and SVM classifiers were trained separately and their performance evaluated.
Initially, performance was measured for off-line classification using the already acquired data set, and the classifier parameters were fine-tuned to improve it. The accuracy of on-line classification with real-time data from the EEG headset was then measured. The following observations were made.
• LDA gave best results with the following parameters:
– Native multiclass strategy
– Shrinkage Coefficient -1
• SVM gave best results with the following parameters:
– C-Support Vector Classification
– Linear Kernel
– One-vs-All approach
– Degree 3
– ε = 0.1
• The performance decreased if the preprocessing using CSP was erroneous
• SVM gave better performance with accuracy 87.5% than LDA with accuracy of82%
• The kappa value of about 0.77 was achieved for the classifier, which indicatedgood performance and a greater difference from random classification
• The accuracy of the classifier increased slightly, to 88.5%, using the voting-based method described in Procedure 8
• SVM gave an accuracy of 93.5% when the testing data was also part of the training data
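The kappa statistic reported above can be reproduced from a confusion matrix with a few lines of Python. The 2×2 matrix below is a hypothetical example, chosen so that accuracy is 88.5% and kappa comes out at 0.77, matching the figures above.

```python
def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: actual, cols: predicted)."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n   # observed agreement (accuracy)
    pe = sum(sum(row) * sum(col)                     # chance agreement from marginals
             for row, col in zip(cm, zip(*cm))) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 matrix over 200 trials with 88.5% accuracy.
cm = [[90, 10],
      [13, 87]]
print(round(cohens_kappa(cm), 3))  # → 0.77
```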
Figure 6.8 shows the ROC curve, kappa value and confusion matrix for the SVM classifier with vote-based stimulation and an accuracy of 88.5%.
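Procedure 8 itself is defined earlier in the thesis; as an illustration only, a simple majority vote over a sliding window of classifier outputs, one common way to realize such a voting scheme, can be sketched as:

```python
from collections import Counter

def vote(window):
    """Majority label over a window of consecutive classifier outputs."""
    return Counter(window).most_common(1)[0][0]

def smooth(predictions, k=5):
    """Replace each prediction with the majority vote of the last k outputs."""
    return [vote(predictions[max(0, i - k + 1): i + 1])
            for i in range(len(predictions))]

# A single spurious 'R' inside a run of 'L' predictions is voted away,
# which is how such a scheme reduces false positives.
print(smooth(list("LLLRLLLL"), k=3))
```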
Figure 6.8: Classifier Performance Matrix, Accuracy = 88.5%. (a) ROC Curve for Left stimulation; (b) ROC Curve for Right stimulation; (c) Confusion Matrix and Kappa Coefficients.
Figure 6.9 shows the ROC curve, kappa value and confusion matrix for the SVM classifier with an accuracy of 87.5%. Based on the ROC curve, this classifier gives the best performance.
Figure 6.9: Classifier Performance Matrix, Accuracy = 87.5%. (a) ROC Curve for Left stimulation; (b) ROC Curve for Right stimulation; (c) Confusion Matrix and Kappa Coefficients.
Figure 6.10: Classifier Performance Matrix, Accuracy = 82%. (a) ROC Curve for Left stimulation; (b) ROC Curve for Right stimulation; (c) Confusion Matrix and Kappa Coefficients.
Figure 6.10 shows the ROC curve, kappa value and confusion matrix for the LDA classifier, with an accuracy of 82%.
Figure 6.11 shows the ROC curve, kappa value and confusion matrix for the SVM classifier where the testing data is a subset of the training data, with an accuracy of 93.5%.
Figure 6.11: Classifier Performance Matrix, Accuracy = 93.5%. (a) ROC Curve for Left stimulation; (b) ROC Curve for Right stimulation; (c) Confusion Matrix and Kappa Coefficients.
Figure 6.12 shows the confusion matrix for the SVM classifier when the signals are preprocessed using an erroneous CSP configuration. In this case the accuracy drops to 58%.
Figure 6.12: Confusion Matrix with Accuracy = 58%
Chapter 7
Conclusion and Future Work
This chapter discusses the important findings and contributions of this study, and presents recommendations for future work.
7.1 Conclusion
This thesis presents a complete overview of EEG technology, the way the human brain functions, the different technologies and stages involved in the processing and classification of EEG signals, and the algorithms behind a simple Brain-Computer Interface system acting as a text entry application.
The EEG signals were captured from the scalp using the Emotiv EPOC+ over 16 channels. Various processing steps were applied to the collected data, e.g. filtering, epoching and averaging. The information content of the signal was extracted into a feature vector, which formed the input to the LDA/SVM classifier to model Left and Right hand movements. A major challenge was to test the BCI system with real-time EEG signals. To further improve the accuracy of the system, a voting-based scheme was used to reduce false positives.
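The epoching step mentioned above can be sketched as follows. This is an illustrative numpy fragment, not the actual OpenViBE pipeline; the sampling rate, channel count and window lengths are assumptions for the example.

```python
import numpy as np

def epochs(signal, onsets, fs, t_min, t_max):
    """Cut fixed-length epochs around stimulus onsets.

    signal : channels x samples array; onsets : onset times in seconds;
    fs : sampling rate in Hz; [t_min, t_max] : window relative to onset.
    """
    lo, hi = int(t_min * fs), int(t_max * fs)
    return np.stack([signal[:, int(t * fs) + lo: int(t * fs) + hi]
                     for t in onsets])

fs = 128                                                     # Hz (assumed)
sig = np.random.default_rng(1).normal(size=(16, fs * 10))    # 16 channels, 10 s
ep = epochs(sig, onsets=[2.0, 5.0, 8.0], fs=fs, t_min=-0.5, t_max=1.5)
print(ep.shape)            # (3, 16, 256): 3 epochs, 16 channels, 2 s each
avg = ep.mean(axis=0)      # averaging step: mean across epochs, per channel
```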
The following observations were made as a part of the study:
• OpenViBE makes it comparatively easy to develop an on-line, real-time BCI-based application
• The quality of EEG data collected is dependent on the environment and the con-centration level of the subject
• The signal quality and the accuracy vary slightly among different subjects of almost the same age group
• CSP gives better results compared to other preprocessing methods such as CAR
• SVM gave better classification accuracy as compared to LDA
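For reference, the CSP preprocessing favored above can be sketched with a numpy-only implementation, whitening the composite covariance and then diagonalizing one class in the whitened space. This is an illustration under synthetic data, not the thesis code.

```python
import numpy as np

def csp(trials_a, trials_b):
    """Common Spatial Patterns: spatial filters W (rows) that maximize the
    variance ratio between two classes. trials_* : (n_trials, channels, samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # normalized covariances
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)
    P = np.diag(d ** -0.5) @ U.T                 # whitening transform for Ca + Cb
    dA, B = np.linalg.eigh(P @ Ca @ P.T)         # diagonalize class A when whitened
    return (B.T @ P)[::-1]                       # rows sorted by class-A variance

rng = np.random.default_rng(0)
# Synthetic 4-channel trials: class A is strong on channel 0, class B on channel 3.
a = rng.normal(size=(20, 4, 200)) * np.array([3.0, 1.0, 1.0, 1.0])[:, None]
b = rng.normal(size=(20, 4, 200)) * np.array([1.0, 1.0, 1.0, 3.0])[:, None]
W = csp(a, b)   # W[0] emphasizes class A; the last row emphasizes class B
```

An "erroneous CSP configuration", as observed in the results, corresponds to filters computed from mislabeled or mismatched covariances, which destroys this variance separation.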
7.2 Mapping Achievements to Goals
The aim of this study was to achieve a set of goals and objectives, as specified at the beginning of the thesis. In this section we review the objectives and achievements.
An inexpensive EEG device, the Emotiv EPOC+, was selected for the study. The device had 16 channels for recording the EEG signals. The aim of developing an application that would distinguish between Left and Right movements was fulfilled. This study shows that, using an inexpensive device, we are able to distinguish between various sensory-motor activities.
Further, a data set consisting of recordings from 15 subjects was created, and various preprocessing techniques and features for sensory-motor BCI-based HCI components were evaluated.
7.3 Further Enhancement and Ideas
This study raises a few questions for further investigation. This section presents proposals and suggestions for continued work on the system presented in this thesis.
• The developed system does not incorporate methods to remove artifacts such as eye-blinks, heart-beats, etc. The removal of these artifacts will further improve the accuracy of the system. The system could be extended to include artifact-removal techniques, for example using correlated ECoG or ECG recordings.
• Feature selection for this study was done manually. A large set of features was initially used as input to the classifier for modeling the EEG signals; based on the accuracy of the classifier, a smaller subset of features was chosen for this project. The work can be extended to propose an alternative method for feature selection.
• The data set collected for this study was from subjects in the age range of about 23 to 29 years. It is observed that EEG signals vary with the age group of the subjects. Thus, for a more robust model, the data set can be extended to include subjects from different age groups, creating a more versatile model.
• The developed system can identify only two classes of EEG signals, i.e. Left hand movement and Right hand movement. The system could be improved in terms of accuracy and speed if the number of classes of EEG signals were extended to at least four: Left, Right, Up and Down. This might require the use of artificial neural networks (ANNs) for better classification accuracy.
• An immediate extension of the work discussed in the thesis could be to design a full-fledged Graphical User Interface (GUI) for all characters of the English language. This would lead to a complete text entry system, comparable to systems on the market. The GUI could further be extended to an Android application for mobile phones and tablets.
Publications out of this work
• "BCI augmented HCI for people with limited mobility using an affordable EEG device" by Sreeja S. R., Anushri Saha, Shabnam Samima, Punyashlok Dash, Vaidic Joshi, Atanu Dey, Monalisa Sarma and Debasis Samanta, accepted as a short paper at the 7th International Conference on Intelligent Human Computer Interaction, 2015.
• "BCI Augmented HCI for people with special needs" by Vaidic Joshi, Sreeja S. R. and Debasis Samanta, IEEE TechSym 2016. (to be communicated)