CH 18.
Adaptation in brain-computer interfaces
Introduction
Inherent nonstationarity of EEG
Why do we need ‘adaptation’?
• EEG varies between BCI sessions and within individual sessions
• due to a number of factors: changes in background brain activity, fatigue, concentration level, etc.
• EEG depends on the skill and experience of subjects
• a classifier trained on past EEG data is not optimal for other sessions
• → need for adaptation in BCI
Introduction
Inherent nonstationarity of EEG:
shift of the power of the selected frequency band in the calibration compared to the feedback session
Introduction
Online adaptation of the classifier
• keep the classifier constantly tuned to the EEG signal
• Instead of training subjects for about 100 hours to regulate their brain activity so that their thoughts can be decoded, only about 20 minutes are needed when the classifier's parameters are tuned to each individual. Signal processing thus shifts the burden of adaptation from the subject to the machine.
Introduction
3 Studies
• Adaptation in CSP-based BCI systems (offline study): 3 subjects, mental typewriter
• Adaptive online discriminant analysis for cue-based BCI (online study): 6 naive subjects, basket paradigm
• Online classifier adaptation in an asynchronous BCI (online study): 1 subject, driving a wheelchair
Study1. Adaptation in CSP-based BCI systems
Experimental Setup
BBCI system with visual feedback
3 subjects (2 naive subjects + 1 experienced subject)
features reflecting changes of bandpower
The experiments consisted of 2 parts:
• a calibration measurement:
  - visual stimuli L, R, and F
  - selection of 2 imagery classes and frequency bands (discriminability)
  - CSP analysis & computation of CSP filters
  - calculation of a linear separation between bandpower values (LDA)
• a feedback period:
  - EEG from 64 channels
  - bandpass filter and common spatial filter
  - measure of instantaneous bandpower (with a sliding window)
  - bandpower values weighted by the LDA classifier → move a cursor (see the pipeline sketch below)
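The pipeline above amounts to CSP spatial filtering, log-bandpower features, and an LDA hyperplane. Below is a minimal sketch under those assumptions; the array shapes, the filter order, and the helper names (bandpass, csp_filters, log_bandpower, lda_fit) are illustrative, not the BBCI implementation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def bandpass(trials, lo, hi, fs):
    """Bandpass-filter trials of shape (n_trials, n_channels, n_samples)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(trials_a, trials_b, n_pairs=3):
    """CSP via the generalized eigenproblem of the two class-average covariances."""
    avg_cov = lambda ts: np.mean([t @ t.T / np.trace(t @ t.T) for t in ts], axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    _, vecs = eigh(ca, ca + cb)               # eigenvalues in ascending order
    sel = np.r_[0:n_pairs, -n_pairs:0]        # most discriminative filter pairs
    return vecs[:, sel].T                     # (2*n_pairs, n_channels)

def log_bandpower(trials, W):
    """Bandpower proxy: log-variance of each CSP-filtered trial."""
    return np.log(np.var(np.einsum("fc,ncs->nfs", W, trials), axis=-1))

def lda_fit(X, y):
    """LDA hyperplane (w, b): sign(X @ w + b) separates the two imagery classes."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    return w, -w @ (m0 + m1) / 2
```

During feedback, each sliding-window bandpower vector x would then drive the cursor in proportion to w @ x + b.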
Mental Typewriter Feedback
a continuous movement of the cursor in the horizontal direction
type a letter on the basis of binary choices
symbol ‘<’ for deleting one letter
after an erroneous choice → subjects relax or stretch
Adaptation algorithms
ORIG: the unmodified classifier trained on calibration data
REBIAS: shift the original classifier's hyperplane parallel to itself
RETRAIN: rotate the hyperplane by refitting the LDA
RECSP: CSP filters and classifier retrained on feedback data
• Increasing order of change: ORIG < REBIAS < RETRAIN < RECSP
• In all adaptive methods, we need to make a trade-off: taking more training samples gives more stable estimates, but is less adaptive.
• Estimate the number of training samples necessary for retraining, for each method and each subject.
(REBIAS and RETRAIN are sketched below.)
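A hedged sketch of the two lighter schemes, reusing lda_fit from the sketch above; it only illustrates "shift" vs. "rotate", not the study's exact estimation procedure.

```python
import numpy as np

def rebias(w, X_fb, y_fb):
    """REBIAS: keep the weights w, recompute only the bias from feedback data,
    so the hyperplane shifts parallel to itself."""
    m0, m1 = X_fb[y_fb == 0].mean(axis=0), X_fb[y_fb == 1].mean(axis=0)
    return w, -w @ (m0 + m1) / 2

def retrain(X_fb, y_fb):
    """RETRAIN: refit the full LDA on feedback data, rotating the hyperplane.
    (RECSP would additionally recompute the CSP filters.)"""
    return lda_fit(X_fb, y_fb)  # lda_fit as in the previous sketch
```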
Results
Conclusion: the original classifier can hardly be outperformed by any relearning method.
Study2. Adaptive online discriminant analysis for Cue-based BCI
Principles
< Adaptation diagram for a cue-based BCI >
Principles
The adaptation trigger is divided into 2 parameters:
- trigger start = Tini
- trigger stop
Adaptation window: the number of samples between trigger start and stop, N.
Adaptation starts at Tini and stops after the adaptation window.
A delay time is inserted to avoid overfitting.
After the delay, the classifier is updated.
Principles
MI: mutual information
mi: the output of the classifier
UCTini: an update coefficient
MImax: the maximum class separability
Tini: the time when MImax appears
The update equations for Kalman filtering
Parameter initialization
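The slide's Kalman equations did not survive extraction; the following is a generic Kalman/RLS-style update for linear-classifier weights driven by an update coefficient UC, as the text suggests. The state-noise model and parameter values are assumptions, not the study's exact equations.

```python
import numpy as np

def kalman_update(w, P, x, t, UC=0.01):
    """One adaptation step inside the window [Tini, Tini + N] (after the delay).
    x: feature vector incl. bias term, t: target (+1/-1),
    w: classifier weights, P: error covariance of the weight estimate."""
    P = P + UC * np.trace(P) / len(w) * np.eye(len(w))  # state noise (assumed model)
    err = t - w @ x                                     # innovation
    k = P @ x / (x @ P @ x + 1.0)                       # Kalman gain, obs. noise = 1
    w = w + k * err                                     # weight update
    P = P - np.outer(k, x) @ P                          # covariance update
    return w, P
```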
Experimental Setup
6 naive subjects
Subjects performed motor imagery experiments – basket paradigm
1080 trials per subject (40 trials × 9 runs × 3 sessions = 1080 trials)
2-class cue-based and EEG-based BCI
Results
Experimental results: minimum ERR and maximum MI from single-trial analysis of each session
Results
Are the two groups really statistically different?
- Parametric test: when the population follows a normal distribution
- Permutation test: when the population distribution is not normal
(a permutation-test sketch follows below)
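A minimal sketch of the permutation test mentioned above: shuffle the group labels many times and count how often the shuffled difference of means is at least as large as the observed one. The function name, seed, and permutation count are illustrative.

```python
import numpy as np

def permutation_test(a, b, n_perm=10000, seed=0):
    """p-value for the difference of group means, with no normality assumption."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the two groups
        hits += abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= observed
    return hits / n_perm
```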
Are continuously adaptive classifiers better than discontinuously adaptive ones?
Experimental Setup
• discontinuously adaptive LDA classifier
• 6 new subjects, basket paradigm
The 9 runs of a session are divided into 3 blocks of 3 runs:
- runs 1–3: the first classifier is the general classifier
- runs 4–6: the LDA classifier is updated and used for these 3 runs
- runs 7–9: the LDA classifier is updated again and used for these 3 runs
Results
Minimum ERR and maximum MI of online and discontinuously adaptive LDA classifiers
Results
Session comparison of discontinuously adaptive LDA classifiers
Comparison of discontinuously and online adaptive LDA classifiers
Study3. Online Classifier Adaptation in an Asynchronous BCI
Introduction
Previous preliminary work
• IDIAP BCI : performed in an asynchronous paradigm (CH.6)
• Online learning with the basic gradient descent algorithm on a Gaussian classifier
• Online learning with the stochastic meta descent (SMD) algorithm
→ adapts individual learning rates for each parameter → accelerates training
Principle (1) : Statistical Gaussian Classifier
X: sample
Ci: class i
Ni: Gaussian prototypes of class i
A: total activation of the classifier
ac: activation of class c
yc: posterior probability of class c
Decision: the class with the highest posterior probability; if that probability falls under the threshold, the result is "Unknown" (see the sketch below).
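A sketch of the classifier described above: each class keeps several Gaussian prototypes, class activations are summed and normalized to posteriors, and low-confidence samples are rejected as "Unknown". The prototype container and the threshold value are assumptions.

```python
import numpy as np

def class_activation(x, centers, covs):
    """a_c: sum of Gaussian densities of sample x over one class's prototypes."""
    a = 0.0
    for mu, S in zip(centers, covs):
        d = x - mu
        norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(S))
        a += np.exp(-0.5 * d @ np.linalg.solve(S, d)) / norm
    return a

def classify(x, prototypes, threshold=0.6):
    """prototypes: {class: (centers, covs)}; returns the winner or 'Unknown'."""
    act = {c: class_activation(x, mus, Ss) for c, (mus, Ss) in prototypes.items()}
    A = sum(act.values())                   # total activation of the classifier
    y = {c: a / A for c, a in act.items()}  # posterior probabilities y_c = a_c / A
    best = max(y, key=y.get)
    return best if y[best] >= threshold else "Unknown"
```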
Principle (1) : Statistical Gaussian Classifier
Training of the classifier
• starts from an initial model
• improved by stochastic gradient descent
(yi: posterior probability of class i, t: target vector)
For optimization, calculate the derivative of the error with respect to the model parameters.
The gradient descent update equations: at each step, the centers are updated with learning rate α and the covariances with learning rate β (see the sketch below).
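A sketch of one gradient-descent step on the class centers for the squared error E = ½‖y − t‖², assuming one Gaussian prototype per class to keep the derivative short; the covariance update with learning rate β follows the same pattern and is omitted here.

```python
import numpy as np

def center_update(x, mus, covs, y, t, alpha=0.01):
    """One SGD step on every class center; y: posteriors, t: target vector."""
    new_mus = []
    for c, (mu, S) in enumerate(zip(mus, covs)):
        # dE/dmu_c = y_c * sum_i (y_i - t_i) * (delta_ic - y_i) * S^{-1} (x - mu_c)
        coeff = y[c] * sum((y[i] - t[i]) * ((i == c) - y[i]) for i in range(len(y)))
        new_mus.append(mu - alpha * coeff * np.linalg.solve(S, x - mu))
    return new_mus
```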
Principle (2) : Stochastic Meta Descent
Stochastic meta descent is an extension of gradient descent that uses adaptive learning rates to accelerate learning.
However, in the SMD algorithm, each parameter maintains and adapts an individual learning rate, in contrast to basic gradient descent, which uses a single learning rate for all parameters.
Update the learning rates (element-wise):
ηt = ηt−1 · max(1/2, 1 − α · gt · vt)
Gradient trace update: vt+1 = λ · vt − ηt · (gt + λ · Ht · vt)
A similar system is used for the covariance updates.
(α: meta-learning rate, vt: gradient trace, gt: gradient, Ht: Hessian, λ: decay factor)
(see the sketch below)
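A hedged sketch of SMD in a common diagonal form: every parameter keeps its own rate η, which grows while the current gradient agrees with the recent parameter-change trace v and shrinks when it oscillates. The diagonal curvature approximation (g·g in place of the Hessian-vector product) is an assumption made to keep the sketch self-contained.

```python
import numpy as np

def smd_step(w, g, eta, v, alpha=0.05, lam=0.99):
    """One SMD step; alpha: meta-learning rate, v: gradient trace (as on the slide)."""
    eta = eta * np.maximum(0.5, 1.0 - alpha * g * v)  # safeguarded per-parameter rates
    w = w - eta * g                                   # gradient step with local rates
    v = lam * v - eta * (g + lam * (g * g) * v)       # trace update, diag. curvature
    return w, eta, v
```

Per the slide, a second set of rates runs the same scheme for the covariance parameters.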
Experimental Results
IDIAP BCI
Computer simulation of driving a wheelchair while avoiding obstacles
The subject was guided by an operator.
Samples are not balanced between classes, and the lengths of the class periods vary.
Experimental Results
Comparison between online classification and the offline performance of the static classifier
Online adaptation produces a final classifier that outperforms the initial classifier.
< Online classification rate / offline performance of the final classifier >
The online classification rates track the EEG signal well, with no clear bias between classes.
The final classifier performs well on the last part of the session, but less well on the early part of the session.
→ Online adaptation makes it possible to complete the task from the very first trial.
Experimental Results
Discussion
Online classifier adaptation would improve the performance of a BCI because of the high variability in EEG signals.
However, no systematic study has formally analyzed the extent of signal variation across the different stages of a subject's usage of a BCI.
Adaptive methods such as REBIAS and RETRAIN improve the classifier, but do not result in a significant increase in performance.
The main research issue is adaptation when we do not know the user's intent. → Reinforcement learning
Reinforcement learning: we receive only occasional feedback on how well or poorly we are performing.
→ (1) the recognition of cognitive error potentials
→ (2) contextual information about how well the device is operating
Video
Thank you