NeuroImage xxx (2013) xxx–xxx

YNIMG-10353; No. of pages: 15; 4C: 5, 6, 7, 8, 9, 10, 11, 14

Contents lists available at SciVerse ScienceDirect

NeuroImage

journal homepage: www.elsevier.com/locate/ynimg

Mapping the dynamic repertoire of the resting brain

Abir Hadriche a,b, Laurent Pezard a,*, Jean-Louis Nandrino c, Hamadi Ghariani b, Abdennaceur Kachouri b, Viktor K. Jirsa a

a Aix-Marseille Université, INSERM, INS UMR_S 1106, 27 Bd. Jean Moulin, 13005 Marseille, France
b Université de Sfax, ENIS, LETI, Route de Soukra Km 2.5, BP. 1173 3038, Sfax, Tunisia
c Université de Lille Nord-de-France, URECA EA 1059, Domaine Universitaire du Pont de Bois, 59653 CEDEX Villeneuve d'Ascq, France

Article history: Accepted 12 April 2013; Available online xxxx

Keywords: Resting state; Markov process; Discrete dynamics; Partition; State space; Graph

Abstract: The resting state dynamics of the brain shows robust features of spatiotemporal pattern formation, but the actual nature of its time evolution remains unclear. Computational models propose specific state space organization which defines the dynamic repertoire of the resting brain. Nevertheless, methods devoted to the characterization of the organization of brain state space from empirical data are still lacking and thus preclude the comparison of the hypothetical dynamical repertoire of the brain with the actual one. We propose here an algorithm, based on the set oriented approach of dynamical systems, to extract a coarse-grained organization of brain state space on the basis of EEG signals. We use it for comparing the organization of the state space of a large-scale simulation of brain dynamics with the actual brain dynamics of resting activity in healthy subjects. The dynamical skeleton obtained for both simulated brain dynamics and EEG data depicts similar structures. The skeleton comprises chains of macro-states that are compatible with current interpretations of brain functioning as series of metastable states. Moreover, the macro-scale dynamics depicts correlation features that differentiate it from random dynamics. We here propose a procedure for the extraction and characterization of brain dynamics at a macro-scale level. It allows for the comparison between models of brain dynamics and empirical measurements and leads to the definition of an effective coarse-grained dynamical skeleton of spatiotemporal brain dynamics.

© 2013 Published by Elsevier Inc.

* Corresponding author at: Institut de Neurosciences des Systèmes, UMR_S 1106, Aix-Marseille Université, Inserm, 27 Bd. Jean Moulin, 13005 Marseille, France. E-mail address: [email protected] (L. Pezard).

1053-8119/$ – see front matter © 2013 Published by Elsevier Inc.
http://dx.doi.org/10.1016/j.neuroimage.2013.04.041

Please cite this article as: Hadriche, A., et al., Mapping the dynamic repertoire of the resting brain, NeuroImage (2013), http://dx.doi.org/10.1016/j.neuroimage.2013.04.041

1. Introduction

Brain activity is a spatiotemporal signal with deterministic and stochastic characteristics. As time evolves, the brain dynamics traces out a trajectory in a high-dimensional state space. The objective of all analysis methods is to provide a characterization of this trajectory. The most frequently used approaches apply some sort of information compression with the objective that the achieved reduction in complexity is still sufficiently informative to allow for comparisons between data sets, be it different experimental conditions, different subject groups, or model and empirical data. Such approaches include entropy, dimension estimates, various forms of complexity, Lyapunov exponents, and covariances, just to name a few (see e.g. Stam (2005) and Subha et al. (2010), for reviews). For each of these measures multiple derivatives exist with various degrees of sophistication. All of these approaches have in common by construction that they lose their sense of dynamics, that is, the actual notion of temporal evolution is lost. As a consequence, quantitative descriptions of brain activity are impoverished and only capture limited aspects of the brain trajectory and none of its temporal nature.

We illustrate this thought along a hotly debated example in neuroscience, that is, the resting state dynamics of the brain, which is typically accessed through functional magnetic resonance imaging (fMRI), electroencephalographic (EEG) and magnetoencephalographic (MEG) measurements. The fMRI measures the blood oxygenation level dependent (BOLD) signals of the hemodynamic response on high resolution spatial scales (mm) and slow temporal scales (sec). Here a limited number of 8–10 robust network patterns of BOLD activity have been observed in the primate brain during rest (Biswal et al., 1995; Damoiseaux et al., 2006; Raichle et al., 2001). These patterns are characterized by spontaneous intermittent coherent fluctuations on an ultraslow scale of <0.1 Hz. Correlation patterns amongst the resting state networks with strong positive and negative correlations have been observed (Fox et al., 2005). Though this finding provides a constraint on the permissible spatiotemporal brain dynamics, the details remain unclear. We still do not know if this temporal dynamics translates into an oscillatory coactivation of selected resting state networks, or a pattern competition, or even a hopping process
from one resting state network to another. These important details will express themselves through features of the trajectories in the state space of the brain network, including their geometry and topology, as well as zones of increased convergence and divergence of the trajectory densities. The total loss of temporal dynamics has been nicely demonstrated by Wen et al. (2012), who applied common techniques of prevailing statistical approaches to surrogate fMRI data (subjected to covariant randomization of the temporal order) and obtained identical results regarding the resting state networks.
We find a similar situation in electroencephalographic data. First of all, the BOLD signal and its electrophysiological correlate show a clear but non-trivial relationship. To investigate this relationship between the BOLD signal and its underlying neural activity, Logothetis et al. (2001) demonstrated in anesthetized monkeys that simultaneous fMRI and intracortical electrical recordings of local field potentials show positively correlated spontaneous fluctuations. Laufs et al. (2003) recorded simultaneous fMRI and EEG in awake human subjects at rest and found that the β-band power was positively correlated with the BOLD signals in the posterior cingulate, the precuneus, the temporo-parietal and dorsomedial prefrontal areas. These areas are known from one of the resting state network patterns referred to as the Default Mode Network (DMN), which is characterized by a reduced activation during task conditions and elevated activation in the absence of any task. Mantini et al. (2007) also showed that different frequency bands correlate differently with the various resting state network patterns, whereas a recent study by Scheeringa et al. (2011) demonstrated that the BOLD signal in humans performing a cognitive task is related to synchronization across different frequency bands. In the latter, trial-by-trial BOLD fluctuations correlated positively with high γ power of the EEG and negatively with α and β power. Hence, there is empirical evidence that the spatiotemporal dynamics of the BOLD signal is related to the spatiotemporal dynamics of electroencephalographic signals, but the actual nature of the spatiotemporal organization, in particular its time evolution, is not clear.
The situation is similar for the theoretical studies in the resting state literature, certainly motivated by the objective to achieve correspondence with empirical data (see Deco et al. (2011), for a recent review). Essentially, in a series of theoretical studies (Deco et al., 2009; Ghosh et al., 2008; Honey et al., 2007) the authors demonstrate that the resting state pattern formation can be understood as emerging from the interactions of a brain network with a full brain connectivity derived from either the CoCoMac database or from tractographic data (DTI/DSI). Convergent evidence based on these theoretical models was provided that multiple states exist during the resting state, which are explored in a stochastic process driven by noise. Such models agree on the importance of noise-driven exploration of the brain dynamical repertoire, but do not fully agree on the deterministic skeleton of this dynamical repertoire. The dynamic repertoire is the set of all the macrostates that may be occupied during the resting state dynamics, and the deterministic skeleton is the set of transitions between these macrostates. Taken together they thus include the geometry and topology of the trajectories, as well as their convergence properties indicating stability. One can view the ensemble of dynamic repertoire and skeleton as a graph composed of macrostates on the nodes, with the edges indicating the possible transitions. Such a visualization is effectively a discrete representation of the continuous time evolution of the brain network in its state space. Ghosh et al. (2008) named the noise-driven exploration of such a brain dynamics graph the exploration of the dynamic repertoire of the human brain, due to the similarity between the resting state patterns and network activations known from task-related conditions.
Initial interpretations of neuronal dynamics have mainly focused on attractor dynamics (Amit, 1989; Hopfield, 1982), leading to a clear hypothesis concerning the organization of the brain state space. Nevertheless, it soon appeared that transients and metastability play an important role due to, e.g., high dimension or sparse connectivity
(Friston, 1997; Tsuda, 2001). This leads to the proposal that such a complex coordinated system as the brain is best understood as a multistable and metastable system at many levels, from multifunctional neural circuits to large-scale neural circuits (Kelso, 2012). Nevertheless, these proposals have been quantitatively compared with real data (fMRI, EEG or MEG) mainly on the basis of the comparison of time-series of spectral contents (Freyer et al., 2011) or the comparison of covariance matrices (functional connectivity) (Deco and Jirsa, 2012) between experiment and model. For example, Deco and Jirsa (2012) recently correlated the functional connectivity matrices of the simulated brain network data with the experimental BOLD signals and obtained high correspondence. They also demonstrated the existence of five resting state patterns for an elevated value of coupling strength, but also found that the best fit of functional connectivity is obtained at a lower level of coupling, i.e. when the resting state patterns do not exist as stable states in the state space, but are close to their creation (a so-called bifurcation). For this reason, Deco and Jirsa named them ghost attractors, which comprise the dynamic repertoire of the brain network. Here a subtle distinction plays an important role: these ghost attractors are assumed to exist due to their parametric proximity to the bifurcation. This situation is called criticality. The assumption behind this hypothesis is that below the bifurcation point (subcritical) and above it (supercritical), the density of the trajectories in the state space changes smoothly. Though this assumption is reasonable, it still remains to be shown that the ghost attractors indeed exist in the subcritical regime. Once their existence, and thus the dynamic repertoire, is established, this will be the first step towards the full characterization of the trajectories within the state space connecting the ghost attractors, and hence the full characterization of the spatiotemporal brain dynamics.
As we illustrated above, although the presence of a spatiotemporal organization in brain activity has clear empirical support, the understanding and interpretation of this organization remain an open problem. Moreover, the deterministic dynamical skeleton in the state space is not easily characterized at the level of scalar metrics (spectral analysis or even correlation), which renders the comparison of model and empirical brain activity in its state space indispensable. From the viewpoint of dynamical system theory, the only unambiguous representation of a dynamical system is given by its flow in the state space. Said differently, two dynamical systems are different from each other if and only if the topology of their flows is different. If not, then they belong to the same class of systems. The flow prescribes the evolution of the state vector of the system as a function of its current state and is most commonly written as a set of differential equations. In terms of state space analysis, two approaches are typically taken in the literature. The first approach is based on nonlinear time series analysis derived from dynamical systems theory (see e.g. Kantz and Schreiber (2004)) and has mostly focused on the properties of continuous trajectories in the brain's state space. The second approach is based on the stochastic approach of dynamical systems (Lasota and Mackey, 1994). The main difference between these two complementary approaches lies in the fact that the first one supposes a differential structure of the trajectory, whereas the second one does not and thus allows one to link dynamical systems and stochastic processes. The stochastic approach of dynamical systems thus becomes a natural theoretical setting for the characterization of brain dynamics, all the more so since high dimensional dynamics can hardly be differentiated from stochastic processes (Lachaux et al., 1997). Nevertheless, there are subtleties that go beyond the scope of this article, but should be kept in mind when interpreting data.
To some extent noise, which is invariably present in biological systems, can obscure the underlying difference, but not always: a subcritical bifurcation is only smoothed into an apparent supercritical bifurcation by additive noise of a sufficient amplitude; otherwise the discontinuity of the former remains. In the presence of multiplicative noise, the statistics of the system will also be qualitatively different. Similarly, the symmetry of noise
may play a crucial role in the transition dynamics of multistable systems (Freyer et al., 2011). In the current article we focus on uncovering the regimes in the state space in which the brain dynamics spends a longer time as compared to other regimes. Under these conditions, these former regimes will correspond either to stable network patterns or to zones of slow and convergent flows, which Deco and Jirsa (2012) called ghost attractors. The latter effectively play a similar role as the network states, since in the presence of noise the difference between them is smeared out.
Thus our goal is twofold: first, to propose an approach to brain dynamics which allows obtaining information on the organization of brain dynamics in its state space, and then to compare the state space organization of a large-scale simulation of brain electrical activity at rest with that of experimental EEG data in healthy subjects. Let us consider brain activity within the framework of ergodic theory, i.e. let (B, T) be the brain dynamical system, with B its state space and T : B → B the transformation that changes the brain state. We have access to B only through a measurement m : B → E, where E is the state space of the measurement apparatus. In the case of EEG, the measurement space is related to the electrode placement system and the resolution of the analog to digital conversion system; thus E is discrete and constitutes a partition of B. This partition is our highest resolution and we will call it our micro-scale. Nevertheless, due to finite time observation this partition is too refined to be studied at the micro-scale level, and we propose a coarse-graining procedure which makes a compromise between description scale and statistical errors. Such an alternative approach is based on proposals in the context of thermodynamics of non-equilibrium systems (Gaveau and Schulman, 1996), the set oriented approach of dynamical systems (Dellnitz and Junge, 2002), approximation of dynamics (Dellnitz and Junge, 1999; Froyland, 2001), finite resolution dynamics (Luzzatto and Pilarczyk, 2011), the symbolic image of dynamics (Osipenko, 2007) and the symbolic approach of complex systems (Gupta and Ray, 2009). Regarding the nonlinear approach of brain dynamics, this approach, also proposed by Allefeld et al. (2009), does not emphasize the estimation of invariant indices but deals with the qualitative/topological organization of the brain phase space.
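The idea that the measurement map m : B → E is itself a finite partition can be sketched in a few lines. This is an illustrative toy, not the paper's code: the voltage range and the fake 17-channel data are our assumptions; only the 16-bit quantization follows the recording setup described later.

```python
import numpy as np

# Illustrative sketch: an EEG measurement is discrete because each channel
# is quantized by the analog-to-digital converter. A 16-bit ADC over a
# (hypothetical) +/- v_range interval induces a finite partition of the
# continuous state space; each multichannel sample falls into one cell.

def quantize(samples, v_range=200.0, n_bits=16):
    """Map continuous samples (n_times, n_channels) to integer ADC codes."""
    n_levels = 2 ** n_bits
    step = 2 * v_range / n_levels          # quantization step
    codes = np.floor((samples + v_range) / step).astype(int)
    return np.clip(codes, 0, n_levels - 1)

rng = np.random.default_rng(0)
x = rng.normal(scale=30.0, size=(1000, 17))   # fake 17-channel "EEG"
codes = quantize(x)
# Each row of `codes` labels one micro-state (a cell of the partition E).
```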
The initial step of this procedure is a subdivision of the measurement space which allows one to obtain an approximate model of the brain's dynamics in terms of a meso-scale discrete Markov process (Dellnitz and Junge, 1999). Once the discrete representation is obtained, one can search for regions of the phase space with specific dynamical features such as dynamical slowing down or recurrence. The set of coarse-grained regions and their dynamical transitions defines a coarse-grained dynamical skeleton in measurement state space and, as such, an effective macro-scale model of brain dynamics.
This paper is organized as follows: in the next section, we describe our approach in a general manner. It proceeds in several transformations with the goal of replacing the micro-scale of the EEG signal by a graph of transitions between macro-scale brain states. The steps of this procedure are then applied to a multistable stochastic system in order to test it on a simple benchmark example. The whole procedure is then applied to resting brain dynamics in both a large-scale simulation and real EEG data from healthy subjects. This allows one to compare the macroscopic organization of the real brain dynamics with the simulated one and to characterize resting activity by a set of coarse-grained patterns.
2. Materials and methods
2.1. Procedure
We provide here a general description of the procedure used for the coarse-grained approach of brain dynamics. It allows us to introduce the notions used throughout the remainder of this article. Each step of the procedure and the specific notations are detailed in Appendix A.
Our approach is based on transformations from the measurementspace (i.e. micro-scale) to a coarse-grained representation (i.e. macro-scale) through an intermediate level (i.e. meso-scale).
2.1.1. From micro-scale to meso-scale

The first step of the procedure is an iterative discretization of the measurement space which leads to the meso-scale representation. Our approach is based on a subdivision algorithm (Dellnitz and Hohmann, 1997) inspired by Ulam's discretization process (Ulam, 1960). It is the basis of set oriented numerical methods for dynamical systems (Dellnitz and Junge, 2002). The entropy rate of the discrete process is used to choose the last discretization step, as described in Appendix A. When the last discretization step is found, the system's dynamics is represented by a sequence of discrete meso-states. Each meso-state corresponds to a set of micro-states, i.e. the set of spatial EEG patterns in the partition element. We can represent a meso-state as a mean EEG pattern averaged over all the EEG patterns in one partition element.
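An Ulam-style discretization can be sketched as follows. This is a simplified uniform bisection (the paper's adaptive subdivision and its entropy-rate stopping criterion are detailed in its Appendix A); the function name and toy trajectory are ours.

```python
import numpy as np

def ulam_labels(traj, n_steps):
    """Assign each point of `traj` (n_times, n_dims) to a box of the grid
    obtained by bisecting each dimension `n_steps` times (Ulam-style).
    Returns one integer meso-state label per time point."""
    lo = traj.min(axis=0)
    hi = traj.max(axis=0)
    n_bins = 2 ** n_steps                         # boxes per dimension
    # per-dimension box index, clipped into [0, n_bins - 1]
    idx = np.floor((traj - lo) / (hi - lo) * n_bins).astype(int)
    idx = np.clip(idx, 0, n_bins - 1)
    # combine per-dimension indices into a single box label
    labels = np.ravel_multi_index(idx.T, (n_bins,) * traj.shape[1])
    # relabel the visited boxes as 0..M-1 (the meso-states)
    _, meso = np.unique(labels, return_inverse=True)
    return meso

rng = np.random.default_rng(1)
traj = rng.normal(size=(5000, 2))                 # toy 2-D trajectory
meso = ulam_labels(traj, n_steps=3)               # an 8 x 8 grid here
```

Only visited boxes receive a label, which keeps the number of meso-states bounded by the trajectory length rather than by the full grid size.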
As a working hypothesis, the sequence of meso-states is then considered as the result of a first-order Markov process. This hypothesis provides a simple approximation of the dynamics at the meso-scale which is characterized by its stationary distribution and its transition matrix. This working hypothesis is difficult to check experimentally and should be further investigated in subsequent studies.
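The two quantities characterizing this approximation, together with the entropy rate used as the stopping/test statistic, can be estimated from a symbol sequence as below. A minimal sketch with our own function names; the entropy rate is h = −Σᵢ πᵢ Σⱼ Pᵢⱼ log₂ Pᵢⱼ, in bits per sampling unit.

```python
import numpy as np

def markov_fit(seq, n_states):
    """Estimate the transition matrix P and the empirical stationary
    distribution pi of a first-order Markov chain from an integer sequence."""
    counts = np.zeros((n_states, n_states))
    np.add.at(counts, (seq[:-1], seq[1:]), 1.0)   # accumulate transitions
    row = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
    pi = np.bincount(seq, minlength=n_states) / len(seq)
    return P, pi

def entropy_rate(P, pi):
    """h = -sum_i pi_i sum_j P_ij log2 P_ij (bits per sampling unit)."""
    logP = np.log2(np.where(P > 0, P, 1.0))       # log2(1) = 0 kills zero terms
    return float(-(pi[:, None] * P * logP).sum())

seq = np.array([0, 1, 0, 1, 2, 0, 1, 2, 2, 0, 1, 0])
P, pi = markov_fit(seq, 3)
h = entropy_rate(P, pi)
```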
2.1.2. From meso-scale to macro-scale

The transformation from meso-scale to macro-scale is based on the network representation of master equation systems, Markov processes or symbolic dynamical systems (Kitchens, 1998; Osipenko, 2007; Schnakenberg, 1976). In fact, it has been shown that dynamical properties of complex systems can be related to the properties of the associated network representation (Donner et al., 2010; Marwan et al., 2009). The network representation of a Markov process is a graph whose nodes are the states of the process and whose edges are the observed transitions between the states (see Appendix A).
On the basis of this graph, our next step is to identify regions of the meso-scale state space with specific dynamical characteristics. First, regions of the state space where the dynamics slows down, or with high recurrence, are identified as sets of meso-states with high probability. Second, parts of the graph with complete connectivity (i.e. cliques, see Appendix A) can be associated with regions depicting a kind of dynamical invariance perturbed by random fluctuations. Moreover, since the Markov process is built from experimental data, some properties (such as connected or attracting sets) trivially correspond to the whole graph, and the clique organization is an intermediate level. On the basis of these constraints, our algorithm uses the probabilistic properties of the stationary distribution and the clique organization of the network representation of the meso-scale Markov process for coarse-graining the meso-states into macro-states. A representation of the EEG pattern corresponding to a macro-state can be obtained as the weighted average of the EEG patterns associated with the meso-states in the macro-state. The dynamics of the system is thus a sequence of macro-states, which is studied to characterize the macro-scale dynamics.
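The graph representation and its clique structure can be sketched as follows. This is a simplification: the paper's grouping combines cliques with the stationary distribution (its Appendix A); here we only build an undirected graph from observed transitions and enumerate maximal cliques with a plain Bron–Kerbosch recursion.

```python
import numpy as np
from itertools import combinations

def transition_graph(P, thresh=0.0):
    """Undirected graph over states: i ~ j (i != j) if a transition was
    observed in either direction. Adjacency returned as a dict of sets."""
    n = P.shape[0]
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if P[i, j] > thresh or P[j, i] > thresh:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (fine for small graphs)."""
    cliques = []
    def bk(R, Pset, X):
        if not Pset and not X:
            cliques.append(R)
            return
        for v in list(Pset):
            bk(R | {v}, Pset & adj[v], X & adj[v])
            Pset = Pset - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return cliques

# Toy chain: states {0, 1, 2} fully interconnected, state 3 hanging off 2.
P = np.array([[0.2, 0.4, 0.4, 0.0],
              [0.4, 0.2, 0.4, 0.0],
              [0.3, 0.3, 0.3, 0.1],
              [0.0, 0.0, 0.5, 0.5]])
cliques = maximal_cliques(transition_graph(P))
# {0, 1, 2} emerges as a fully connected candidate macro-state
```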
2.1.3. Adaptation of the procedure to brain dynamics

The procedure explained above should be adapted to the practical case of brain dynamics using a preprocessing step. In fact, in the case of brain dynamics one starts with a measurement space where each dimension has a physical meaning related to the electrode placement. This arbitrary order can lead to misleading results in the subdivision step. In order to avoid such a problem, a principal component analysis (PCA) is used as a preprocessing step. PCA induces an order on the dimensions since it orders them according to decreasing variance. PCA is used here only for rotating the axes and not for dimension reduction; the latter would necessitate a truncation of the PCA expansion, which we do not perform, and we leave the dimension reduction to the discretization process.
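PCA used as a pure rotation (no truncation) can be sketched as below; the function name and toy data are ours.

```python
import numpy as np

def pca_rotate(X):
    """Rotate data (n_times, n_channels) onto its principal axes WITHOUT
    truncation: same dimension, axes ordered by decreasing variance."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt.T          # scores; column variances proportional to s**2

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 17)) @ rng.normal(size=(17, 17))  # correlated channels
Y = pca_rotate(X)
var = Y.var(axis=0)
# dimensions are now ordered by decreasing variance, none discarded
```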
398399400401402403404405406
407
408
409
410
411
412
413
414
415
416matical formulation, which typically resembles the mathematical417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451Fpz) which are most likely to be contaminated by eye movements,452
453
454
455
456
457
458
459
oImage xxx (2013) xxxxxxUNCORREC
2.1.4. Statistical validationEach step of the procedure was statistically validated against
adapted surrogate data (Schreiber and Schmitz, 2000; Theiler et al.,1992) depending on the null hypothesis to be tested.
Multivariate phase-randomized surrogate data (Prichard andTheiler, 1994) were used to test the rst step of our procedure i.e. theresult of the subdivision procedure. Since these surrogate data keeppower spectra of single signals and cross-correlation between signals,they allow us to test the null hypothesis that the subdivision processleads to the same process for raw data as for multivariate randomprocesses. The entropy rate of the effective Markov process was usedas a statistics for this test. The null hypothesis was rejected using anon-parametric procedure with a signicant threshold = 0.05 for aset of 50 multi-variate phase-randomized surrogate data for each rawdata set. When the null hypothesis can be rejected, we can considerthe meso-scale Markov process as a validated discrete approximationof the raw dynamics.
The determination of the macro-states for the raw dynamics wastested against macro-states obtained for random sequences. Namely,macro-states obtained using the meso-scale Markov approximationof the dynamics can be characterized by their size (i.e. the number ofmeso-states in the macro-state) and their probability (i.e. the sum ofprobability of the meso-states in the macro-state). The coarse-graining was thus computed for the raw meso-scale Markov processand for shufed versions of this process. The shufed surrogate datakeep the same stationary distribution as the raw meso-scale Markovprocess, but randomize the transitions. For each raw meso-scalesequence, a set of 50 shufed surrogate data was generated. Themacro-states obtained from the raw meso-scale process were thencompared with those obtained from the shufed surrogate data. Foreach size, only macro-states with higher probability than that of allthe surrogate macro-states of the same size were selected for furtherinvestigation. The macro-states that were not selected are then consid-ered as a random component of the macro-scale dynamics and theselected macro-states as a non-random component.
2.1.5. Characterization of the macro-scale dynamicsOnce non-random macro-states were selected, we reduced our
investigation to these states by discarding the macro-states of therandom component from the sequence of macro-states. The dynamicsof these states was then characterized for both their typical residencetimes (i.e. their correlation) and for their transitions.
Firstly, the typical residence time was computed for each macro-state as the mean length of runs and compared with the typical resi-dence time obtained for shufed random sequences. The same statisti-cal procedure as for the statistical validation (Section 2.1.4) was usedhere i.e. 50 shufed surrogate sequences and a statistical thresholdp = 0.05.
Secondly, the transitions between non-random macro-states wereestimated using sequences where successive repetitions of the samemacro-state were reduced to one single occurrence of the macro-state. The transitions were then considered signicant if they werehigher than those observed for 50 shufed surrogate versions of thereduced sequence. These validated transitions observed for the non-random component allow one to infer a macro-scale dynamical skele-ton between the macro-states.
2.2. Data
2.2.1. Multistable stochastic systemIn order to extract an effective macro-scale dynamical skeleton
frommultidimensional data, we studied a two dimensional stochasticsystem (x1,x2) in discrete time as in Allefeld et al. (2009), dened by:
xi a xi2x3i
bii 1
4 A. Hadriche et al. / NeurPlease cite this article as: Hadriche, A., et al., Mapping the dynamic repertoj.neuroimage.2013.04.041were excluded from the analysis.EEG signals were digitized on 16 bits of precision using a 250 Hz
sampling frequency and ltered using a band-pass lter between0.5 Hz and 70 Hz. EEG recordings were obtained from each participantin eyes closed and resting condition. About 2 min (i.e.: 117 s or 3.104
time samples) of artifact-free EEG recordings was visually selectedfrom each subject. Effective coarse-grained dynamical skeleton wasthus obtained for 12 subjects in the 17-dimensional measurement
4ED P
ROterms of the neuron model used. The connectivity matrix for the full
brain network is an extrapolation of the human anatomy from thema-caque monkey brain's connectivity (Kotter, 2004; Kotter and Wanke,2005). The interaction between the vertices of the network wasimplemented using delayed interaction terms. The network modelwas solved using the EulerMaruyama method as implemented inthe Virtual Brain platform (see Jirsa et al. (2010), for more details).The equilibrium state of the network is a function of transmissiondelay and coupling strength (Deco et al., 2011; Ghosh et al., 2008;Jirsa and Ghosh, 2010). All parameters were identical as used in thesubcritical regime by Ghosh et al. (2008). The delay and strengthparameters were chosen in the neighborhood of the instability ofequilibrium state of the deterministic system. Thus, when the noisedrives a system out of equilibrium state, the system explores theneighborhood of the equilibriumpoint and the subsequent relaxationsestablish the resting-state dynamics. For these parameter settings, noother stable equilibrium points exist.
Two sets of data of length T = 3.104 time points were simulatedusing this model. Since FitzhughNagumo model has two state vari-ables, these data sets correspond to brain dynamics in a 38 2 dimen-sional space. Only the voltage state variable is taken into account as ourmeasurement and the measurement space is thus 38-dimensional. Thesampling rate was 250 Hz. Those data sets were studied, as in the caseof real EEG data, after a PCA preprocessing step, using discretizationand coarse-graining procedure.
2.2.3. Resting EEG in healthy subjectsA set of EEG sample was used to compare coarse-grained dynamical
skeleton of large-scale simulated brain dynamics to real EEG data. TheEEG recordings were performed on 12 healthy subjects in the Depart-ment of Clinical Neurophysiology (University Hospital of St. Philibert,Lomme, France) using 17 Ag/AgCl electrodes (C3, C4, Cz, F3, F4, F7,F8, Fz, O1, O2, P3, P4, T3, T4, T5, T6) placed on the scalp according tothe 10/20 international electrode placement system. The referenceelectrode was placed on the nose. Prefrontal electrodes (Fp1, Fp2,OF
where 1; 2 are a standard normal two dimensional white noise anda = 0.01, b1 = 0.03 and b2 = 0.05 and T = 5.105. The deterministicpart of this system has four stable xed points 1=
2
p;1=
2
p
which dene the multistable dynamical organization in the phasespace. The Markov transition matrix of the model system has foureigenvalues close to unity associated with four distributions, whichcorrespond to almost-invariant sets and have been referred to asmetastable with regard to the Markov operator (Allefeld et al., 2009).
2.2.2. Large-scale simulated systemResting state brain dynamics was simulated using the large-scale
model of Ghosh et al. (2008). It is based on a network of 38 vertices,where the local node dynamics is implemented using noisy StefanescuJirsa populationmodels (Stefanescu and Jirsa, 2008) based on coupledFitzHughNagumo neurons (see e.g. Fitzhugh and Izhikevich (2006)).This population model takes into account dispersion of the neuronalparameters, in this case a distribution of threshold values. The virtueof this population model is that it is analytically reducible to a mathe-460space E for T = 3.10 data points after PCA.
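The two-dimensional multistable benchmark of Section 2.2.1 can be sketched as below, assuming the update rule Δx_i = a(x_i/2 − x_i³) + b_i ξ_i, which is consistent with the stated stable fixed points at ±1/√2; parameters follow the text (a shorter run than T = 5 × 10⁵ is used here for illustration).

```python
import numpy as np

def simulate_multistable(T=50_000, a=0.01, b=(0.03, 0.05), seed=0):
    """Iterate x_i <- x_i + a*(x_i/2 - x_i**3) + b_i*xi_i for i = 1, 2.
    The deterministic part has stable fixed points at +/- 1/sqrt(2)."""
    rng = np.random.default_rng(seed)
    b = np.asarray(b)
    x = np.empty((T, 2))
    x[0] = (1 / np.sqrt(2), 1 / np.sqrt(2))      # start in one basin
    for t in range(1, T):
        xi = rng.standard_normal(2)              # standard normal white noise
        x[t] = x[t - 1] + a * (x[t - 1] / 2 - x[t - 1] ** 3) + b * xi
    return x

x = simulate_multistable()
# With weak noise the trajectory dwells near the four wells (+/-1/sqrt(2), +/-1/sqrt(2))
```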
Please cite this article as: Hadriche, A., et al., Mapping the dynamic repertoire of the resting brain, NeuroImage (2013), http://dx.doi.org/10.1016/j.neuroimage.2013.04.041
Fig. 1. Illustration of the subdivision process and of the coarse-graining step for the multistable stochastic system. The first line represents the mesoscale (left) and the macroscale (right). In both cases, the result of the subdivision algorithm (after 11 steps) based on Ulam's procedure is depicted as black lines. The dots correspond to microstates. For the mesoscale (left) the microstates are colored according to the mesostate they belong to. For the macroscale (right) the microstates are colored according to the macrostate they belong to. The second line depicts the complete transition graph (before surrogate data selection) for the coarse-grained multistable stochastic system. The size of the spheres denotes the cumulative occurrence probability of each macro-state and the edges represent the transitions between the macrostates (colored according to the transition probability). Only macro-states with a size larger than 3 have been depicted. The spheres have been located at their average coordinates.
A. Hadriche et al. / NeuroImage xxx (2013) xxx-xxx

3. Results
3.1. Technical results
Numerical simulations were used to check the validity of the first step of the procedure on systems with known properties. A multivariate Gaussian white noise process and a deterministic chaotic system were used to demonstrate finite-size effects in the subdivision process. These technical validations are reported in Appendix B.
3.2. Multistable stochastic system
An illustration of the results of the successive steps of the procedure is given in Fig. 1.
The application of the discretization step stops for k = 11, leading to a Markov representation of |M| = 2008 actual meso-states.1 The entropy rate obtained is h = 3.66 bits per sampling unit. In the case of the phase-randomized multivariate surrogate data, the number of meso-states varies from 3217 to 3698 and entropy rates vary from 3.80 to 3.92 bits per sampling unit. Thus, the meso-scale dynamics obtained for the raw dynamics is significantly different from that obtained for surrogate data. The meso-scale approximation of the system thus keeps the information of the raw data and does not confuse it with the random system.
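The two ingredients of this surrogate comparison can be sketched compactly; both function names are ours, the entropy-rate estimator is the plain maximum-likelihood one described in Appendix A (here in bits per sample), and the surrogates are standard Fourier-phase-randomized series:

```python
import numpy as np

def markov_entropy_rate(sym, n_states):
    """Entropy rate (bits per sample) of the first-order Markov chain
    fitted by maximum likelihood to an integer symbol sequence."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(sym[:-1], sym[1:]):
        counts[i, j] += 1
    pi = counts.sum(axis=1)
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        P = counts / counts.sum(axis=1, keepdims=True)
        logP = np.where(P > 0, np.log2(P), 0.0)   # 0*log0 := 0
    P = np.nan_to_num(P)                          # unvisited rows -> 0
    return -np.sum(pi[:, None] * P * logP)

def phase_randomize(x, rng):
    """Fourier phase randomization: preserves each channel's power
    spectrum while destroying any other temporal structure."""
    X = np.fft.rfft(x, axis=0)
    phases = rng.uniform(0, 2 * np.pi, X.shape)
    phases[0] = 0.0     # keep the DC bin real
    phases[-1] = 0.0    # keep the Nyquist bin real (even-length input)
    X = np.abs(X) * np.exp(1j * phases)
    return np.fft.irfft(X, n=x.shape[0], axis=0)
```

Discretizing both the raw series and a set of surrogates, then comparing the resulting entropy rates, reproduces the kind of test reported above: a raw entropy rate below the surrogate range indicates structure beyond the linear correlations preserved by phase randomization.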
The network representation of the meso-scale dynamics was then coarse-grained to define the macro-states on the basis of clique search in the graph. The same procedure was applied to a set of 50 shuffled meso-scale surrogate data. The macro-states for both raw and shuffled data, ordered according to their occurrence probability and size, are depicted in Fig. 2. The comparison between raw and
1 Although 2^11 = 2048, not all the subdivision elements contain a microstate, so that a lower number of meso-states is actually visited by the system's dynamics.
surrogate macro-states allows one to select 4 non-random macro-states. The averaged coordinates of the 4 macro-states are (0.69, 0.70), (-0.67, 0.68), (0.79, -0.71) and (-0.71, -0.70), which are
correct approximations of the 4 stable states of the system. Macro-scale dynamics was then studied for both residence times and transition probabilities. For each macro-state we found residence times of 2.97, 2.84, 3.26 and 3.23 (sampling units), which were all significantly longer than their respective residence times in the shuffled surrogate sequence. The study of the transition probabilities allows one to select significant transitions between macro-states that can be compared with the dynamics in the system's state space (see Fig. 3). The organization of the macro-scale space reproduces the main features
Fig. 2. Surrogate data testing of the macro-states for the multistable stochastic system. Macro-states for raw (in red) and shuffled surrogate data (in blue) are represented in the space of their size and occurrence probability. Only raw macro-states for which the occurrence probability is higher than that of all the surrogate data of the same size are selected for further analysis. Here, four macro-states are selected.
of the state-space structure: four macro-states and preferred transitions
subdivision iteration number 7, leading to |M| = 126 and |M| = 123 actual meso-states. The entropy rates of the associated effective Markov
the simulated brain dynamics preserves the information of the raw data
Fig. 3. Comparison of the dynamics of the multistable stochastic system in its phase space and in the macro-scale space. (a) Representation of the phase space of the system: arrows depict the orientation of the gradient; blue dots correspond to micro-states visited by the numerical simulation. (b) Macro-state network for the multistable stochastic system: nodes represent the selected macro-states and arrows the significant transitions.
processes were respectively h = 0.74 × 10^-3 and h = 0.69 × 10^-3 bits per second. The subdivision process of the phase-randomized surrogate data sets leads to a number of 256 meso-states for the surrogate data. The entropy rate of the effective Markov process associated with surrogate data varies from 0.96 × 10^-3 to 1.00 × 10^-3 bits per second. Thus, the meso-scale dynamics obtained for the raw dynamics is significantly different from that obtained for surrogate data. The meso-scale approximation of

in the y-direction where the noise level is the highest.

Our procedure, using a discrete approximation of the system's dynamics and a coarse-graining based on clique search in the graph representation of this approximation, allows revealing the main structure of the dynamics of the metastable system in its phase space. This procedure is now applied to simulated and recorded brain dynamics.
3.3. Large-scale simulated system
For the two simulated data sets, the discretization procedure stops for
Fig. 4. Macro-states for raw (in red) and shuffled surrogate data (in blue) represented in the space of their size and occurrence probability for the first (a) and second (b) simulated data set. Only raw macro-states for which the occurrence probability is higher than that of all the surrogate data of the same size are selected for further analysis. The inset enlarges the framed lower left corner of the main plot.
and does not confuse it with the random system.

The network representation of the meso-scale dynamics was then coarse-grained to define the macro-states on the basis of clique search in the graph. The same procedure was applied to a set of 50 shuffled meso-scale surrogate data for each simulated data set. The macro-states are depicted for both raw and shuffled data according to their occurrence probability and size in Fig. 4. The comparison between raw and surrogate data results allows selecting 19 non-random macro-states for the first simulated data set and 16 non-random macro-states for the second simulated data set.
Macro-scale dynamics was then studied for both residence times and non-random transitions. Mean residence times were compared for each macro-state with those obtained for random dynamics (see Fig. 5). In each case, raw mean residence times are significantly longer than those of random dynamics, demonstrating that macro-state dynamics exhibit significant correlations that are not present in random walk dynamics. The study of the transition probabilities allows selecting the transitions between macro-states which are significantly more probable than those of shuffled surrogate data. When the transitions are bidirectional (which happens in almost all of the cases) the highest transition indicates the direction of the dynamics as depicted
in Fig. 6. This graph reveals the actual noise-driven dynamics of the system. In both cases, one can notice that the graphs are not totally connected, demonstrating that preferred transitions exist within the phase space of the large-scale simulated brain dynamics. Moreover,
varies from 0.76 × 10^-3 to 0.97 × 10^-3 bits per second (mean: 0.86 × 10^-3, s.d.: 0.06 × 10^-3). The subdivision process of the phase-shuffled surrogate data stops for k = 8 in 6 data sets and for k = 9 for the other 6 data sets, leading to a number of meso-states |M| = 256 or |M| = 512. For all the data sets, the entropy rate of the effective Markov process associated with surrogate data is significantly higher than that of the raw data. The subdivision process thus keeps the structure of the raw dynamics that is destroyed by phase shuffling. The difference between the entropy rates obtained for real and simulated brain dynamics was tested using a bootstrap procedure for the difference of means. The entropy rate for simulated data is significantly lower than that of real data (p-value: 0.006 for 1000 resamplings).

The coarse-graining procedure allows obtaining macro-states for real electrical brain dynamics. The occurrence probability as a function of the macro-states' size is depicted in Fig. 7 for all the subjects. The selection procedure allows extracting from 7 to 12 macro-states for each EEG data set, leading to a total of 105 macro-states selected for the whole set of 12 EEG samples.

For each EEG sample, the macro-scale dynamics was studied for both residence times and preferred transitions. The significant macro-scale transition graphs are given in Fig. 8 for the highest probabilities of transitions, as in Fig. 6. As in the case of simulated data, the macro-scale graphs clearly depict preferred transitions between macro-states of the non-random component. The organization of these transitions depicts one macro-state whose role in the graph is
Fig. 5. The mean residence times for the simulated data sets as a function of the macro-state number. The continuous lines depict the results for raw data whereas the dotted lines depict the maximum value obtained for shuffled surrogate data.
the structure of the graphs depicts an attracting node which corresponds to the equilibrium point, cycles, and chains of macro-states. Such chains may be realized via heteroclinic cycles (Rabinovich et al., 2012) together with possible cycles of variable lengths, as suggested by previous work on smooth dynamical systems (Ashwin and Chossat, 1998; Ashwin and Field, 1999; Dellnitz et al., 1995), or may correspond to ghost attractors (Deco and Jirsa, 2012). The latter differ from the former in that the chains are probabilistic (as opposed to deterministic) and hence may provide varying sequences for each realization as a function of the transition probability.
3.4. Application to resting EEG in healthy subjects
For the 12 data sets, we obtained a meso-scale Markov representation with |M| ≃ 2^8 after the subdivision procedure. The entropy rate
a) First dataset. b) Second dataset.
Fig. 6. Macro-state network for simulated data. The nodes are the non-random macro-states. The edges represent the transitions that are significantly higher than those of shuffled surrogate data. When transitions are bidirectional (which happens in almost all the cases) the highest transition probability is depicted to define the direction of the dynamics.
Please cite this article as: Hadriche, A., et al., Mapping the dynamic repertoj.neuroimage.2013.04.041ED Psimilar to the equilibrium point in the case of simulated data andchains and cycles of macro-states as in the simulated case.
The mean residence times observed for the macro-scale dynamics of the EEG data are depicted in Fig. 9. They are significantly longer than those of the shuffled surrogate data, demonstrating that the EEG dynamics at the macro-scale level have a correlation structure which is not present in mere random walk dynamics. Moreover, the time scale of these correlations is similar to that observed for the simulated data.
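The residence-time statistic used throughout can be computed directly from the run lengths of the macro-state sequence; the helper below is our own sketch, not the authors' code:

```python
import numpy as np

def residence_times(states):
    """Mean run length (in samples) per state of a 1-D macro-state
    sequence: each maximal run of identical states counts once."""
    states = np.asarray(states)
    change = np.flatnonzero(np.diff(states)) + 1          # run boundaries
    bounds = np.concatenate(([0], change, [len(states)]))
    runs = {}
    for start, stop in zip(bounds[:-1], bounds[1:]):
        runs.setdefault(int(states[start]), []).append(stop - start)
    return {s: float(np.mean(r)) for s, r in runs.items()}

rt = residence_times([0, 0, 0, 1, 1, 2, 2, 2, 2, 0, 0])
```

Applying the same function to shuffled copies of the sequence gives the surrogate distribution against which the raw mean residence times are compared.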
The 105 patterns of activity associated with macro-states are hardly visually comparable between subjects. We used a clustering technique based on affinity propagation (Frey and Dueck, 2007) to extract common features between subjects. This procedure, based on the Euclidean distance between patterns, identifies the number of clusters in an unsupervised way and thus extracts common features without a priori
assumption on the number of clusters. The non-random macro-state EEG patterns for all the subjects were taken into account. Fig. 10 depicts the average pattern of the macro-state EEG identified in each cluster. Thus the most important patterns at the macro-scale can be summarized in 3 archetypes which can be considered as coarse-
Fig. 7. Macro-states for raw (in red) and shuffled surrogate data (in blue) represented in the space of their size and occurrence probability for all the EEG data sets. Only raw macro-states for which the occurrence probability is higher than that of all the surrogate data of the same size are selected for further analysis. The inset enlarges the framed lower left corner of the main plot.
grained dynamical modes of the resting state network. These modes comprise a low amplitude pattern, which can be related to an average level of activity, and two modes of higher amplitude with occipital activation. These patterns are compatible with the α-rhythm of the resting condition recorded here (see Fig. 10).
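Affinity propagation can be implemented compactly; the sketch below is a generic, minimal version of the message-passing algorithm of Frey and Dueck (2007), not the authors' code, with the diagonal "preference" of the similarity matrix controlling how many clusters emerge:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Minimal affinity propagation (Frey and Dueck, 2007).

    S is an n x n similarity matrix (e.g. negative squared Euclidean
    distances); its diagonal holds the 'preference' of each point to
    become an exemplar. Returns the exemplar index chosen per point."""
    n = S.shape[0]
    R = np.zeros((n, n))          # responsibilities
    A = np.zeros((n, n))          # availabilities
    ind = np.arange(n)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        M = A + S
        best = np.argmax(M, axis=1)
        first = M[ind, best]
        M[ind, best] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[ind, best] = S[ind, best] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)

# Two well-separated groups of synthetic "patterns"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 5)),
               rng.normal(3.0, 0.1, (10, 5))])
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, np.median(S))   # median preference, as in the original paper
labels = affinity_propagation(S)
```

In practice one would feed in the 105 macro-state patterns; the number of exemplars (here three archetypes in the paper's data) is not fixed in advance but falls out of the preference values.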
4. Discussion
EEG brain dynamics is only accessed through a finite resolution and a finite time window defined by the electrode placement system and the duration of the recording session. Due to this limited information, one can only model brain dynamics at a finite scale. We proposed here a procedure which defines an effective meso-scale for brain dynamics as the result of a subdivision of the measurement space. In order to obtain geometrical information on the brain dynamics, we defined macro-states as regions of the meso-scale space with specific dynamical characteristics, such as slowing down, recurrence or clique-invariance, on the basis of the graph associated to the Markov dynamics at the meso-scale. In that sense, the macro-states emerge from the properties of meso-scale dynamics. This procedure was inspired by several trends in non-linear dynamics which can be qualified as the set-oriented approach to dynamical systems (Dellnitz and Junge, 2002). Using this unsupervised procedure, we showed that we were able to recover the main qualitative properties
of a multistable stochastic system on the sole basis of the time series. We then studied both simulated and real brain dynamics.

When applied to the brain's resting state dynamics, our coarse-grained approach allows reconstructing the dynamic repertoire of a brain network model based on large-scale connectivity, the so-called connectome. In this particular case we focused on the model of Ghosh et al. (2008), for which the existence of multiple macro-states had not been known. Our approach successfully recovers several macro-states in the state space of the network model, of which one is highly dominant and the others significantly weaker. This property was observed both in the geometrical representation of the transition network at the macro-scale and in the residence times. What has been known so far is that in the subcritical regime the spatial network activations are dominated by two networks showing an oscillatory dynamics in the 10 Hz range. Our analysis suggests that ghost attractors may also exist in the model of Ghosh et al. (2008). Effectively, a ghost attractor will correspond here to an unstable attractive regime in the state space, which predicts for the
a) FRA b) ANG c) LAH d) DCH e) F f) aLI
Fig. 8. Macro-state network for EEG data. The nodes are the non-random macro-states. The edges represent the transitions that are significantly higher than those of shuffled surrogate data. When transitions are bidirectional (which happens in almost all the cases) the highest transition probability is depicted to define the preferred direction of the dynamics.
stochastic system an increased transition probability from one macro-state to another. Allowed transitions between macro-states are illustrated in the topology of the transition network as edges. The absence of an edge in this graph indicates that the transition cannot be differentiated from a pure random walk transition. In this sense the transition network captures nicely the spatiotemporal dynamics of the brain network by quantifying the preferred transitions between macro-states. Given the stochastic nature of this process,
a) LAE b) BEA c) LUDO d) CAN e) HEI f) FRF
Fig. 8 (continued).
we have expressed the network's time evolution through transition probabilities. The relative sizes of the macro-states are dependent on the degree of the coarse-graining, which in our method is determined automatically through the algorithm by construction. The dominant macro-state expresses the sampling of the mostly linear regime around the stable fixed point. In the case of a finer coarse-graining, more small macro-states might have been identified in its immediate neighborhood. In the particular case of the brain
network model though, the algorithm identified a coarser scale, which emphasizes the strong asymmetry in the topology of its transition graph: the dominant macro-state is located at the periphery of the transition network and the associated transitions to the other macro-states occur in a particular direction in the state space. This phenomenon is due to the characteristics of the network shaping the nonlinear flow in the state space, in particular the nonlinearities, the connectome and the signal transmission delays.
the simulated one. This result can be interpreted either as a higher level of noise in real brain dynamics or as a higher dimensional dynamics for the real brain than for the simulated large-scale brain dynamics. Second, the main macro-states differed less in their size than in the simulated brain dynamics. Nevertheless, the transition networks show a similar structure between real and simulated data: the presence of a dominating macro-state related to the equilibrium state in the simulated data, and the presence of chains of macro-states. In order to extract some regularity from all the macro-states found in the empirical data set, clusters of macro-states were studied. These clusters allow defining three archetypical averaged patterns of activity that can be regarded as dynamical modes of the resting brain network comprising its dynamic repertoire.

The representation of coarse-grained regions of the meso-scale as an average pattern does not address satisfactorily the dynamical nature of those ghost attractors. A specific study of the dynamics within these macro-states would complement our coarse-grained approach. Nevertheless, several theoretical proposals about cognitive architectures and how they relate to brain dynamics and cognition (Rabinovich et al., 2012; Tsuda, 2001; Woodman et al., 2011) suppose the presence of metastable states as randomly visited organizing centers of the dynamics. Our results are among the first to give empirical support to this model of resting state brain dynamics using EEG data from human subjects.

The approach developed here is a completely unsupervised method with no a priori hypothesis about what a brain macro-state should be. Brain states are defined here on dynamical properties and in a data-driven manner, so that macro-states appear as regions discovered by the clique search in the meso-scale transition network. Nevertheless,
Fig. 9. Residence times for all the EEG data sets. The continuous lines depict the results for the real data sets whereas the dotted lines depict the maximum result for the shuffled surrogate data. In all cases, the residence times are longer for the real data than for the random walk dynamics.
In the case of the empirical EEG data, the meso-scale and macro-scale properties were similar between subjects, although variability increases when the EEG patterns are taken into account. The meso-scale and macro-scale properties were qualitatively comparable with those found for the simulated brain network dynamics. Nevertheless, noticeable differences were found. First, the entropy rate of the real brain dynamics was significantly higher than that of
Fig. 10. Averaged modes of the brain dynamics obtained from the clustering procedure for all the EEG patterns of the non-random component of the macro-scale dynamics. The first pattern is a low amplitude pattern whereas the two other patterns depict higher amplitude with occipital activation. These patterns are compatible with the α-rhythm activation of the resting state condition as depicted on the averaged power spectrum density of electrodes Fz, Cz, O1 and O2.
it supposes several simplifying assumptions and methodological choices that certainly need further attention.

Our main assumptions are the following. First, although we do not
suppose reversibility, we considered that the resting state can be treated as a stationary homogeneous process at the observation scale used here. Moreover, the Markov representation of brain dynamics was limited to a first-order Markov process, which represents a simple
approximation of brain dynamics. This hypothesis has been used both at the meso- and macro-scales. In the case of the meso-scale this is mainly an intermediate step, and other models might be used here to approximate the meso-scale dynamics. In the case of the macro-scale dynamics, more general models, such as variable length Markov chains (Buhlmann and Wyner, 1999) or ε-machines (Crutchfield, 2012), might be used to obtain a more precise description of the macro-scale brain dynamics. Our approach is also restricted to the observation/electrode space and does not allow studying the problem in the source space. Moreover, the number of recording sites is limited here and might weaken the study of brain dynamics in the observation space. An extension of this study should take into account higher density recordings of brain electrical activity in order to obtain a larger amount of information on the brain dynamics, and might also be extended in the direction of the source space after inverse problem resolution. Finally, simulated data were obtained in the state space of the model but were compared to the data in the measurement space. Another extension of this work should compare the data in the same space, such as after the forward solution. Nevertheless, the preprocessing step using a PCA may reduce this limitation.
In our methodological strategy, the algorithm used to coarse-grain the meso-scale state space is based on heuristic choices. They were adapted to the purpose of identifying regions with specific dynamical characteristics. The correct identification of the characteristics of the multistable stochastic system empirically validates these choices. Moreover, the surrogate data testing used to select the macro-states ensures statistical validity. Nevertheless, the definition of the macro-states depends on the definition of the cliques, which can be biased, first, by the fact that some transitions are not observed during the recording period and, second, by the choice of the simple criterion of high probability. In the first case, a softer criterion might ensure a more robust procedure against the inevitable errors in estimating transition probabilities. In the second case, more complicated criteria, such as a combination of local transitions and local graph structure, could be explored.
The role of noise on real and simulated brain dynamics can also be related to its influence on the interpretation of our results. First, noise has been introduced in the data-generating model systems as additive Gaussian white noise. This type of noise is the most common in neuroscience modeling; we still wish to point out that other variations of noise, such as multiplicative noise, skewed noise (with an asymmetric distribution), and colored noise, may produce qualitative effects (see for instance Freyer et al. (2011)). These situations need to be further considered on a case by case basis. Then, metastability on the macroscopic scale is to be understood in terms of the Markov operator, resulting in a dynamics towards a node, followed by its escape to another macro-state. This dynamics needs to be complemented by the view of the dynamics on the meso-scale, in which the neural network dynamics follows a deterministic flow perturbed by random noise.
The comparison between models of brain dynamics and real brain dynamics needs criteria for their evaluation. We proposed here a comparison at a macro-scale oriented toward a coarse-grained dynamical skeleton. Once the simulated and real data are represented in the same space, i.e. either the source space or the observation space, the large-scale dynamical skeletons and their topological or metric properties obtained in both models and real data could be compared. This might be a criterion for deciding between alternative models. Indeed, our approach does not deal with the signal but with an effective large-scale skeleton, which might be a very simple but robust criterion for the evaluation of models.
Acknowledgments
We wish to thank two anonymous reviewers for their advice that allowed us to improve our manuscript. This research was initiated in the CODYSEP project supported by the Neuroinformatics Program of
the CNRS (p.i.: LP). AH received financial support from the Institut Français de Coopération and The Virtual Brain Project (see www.thevirtualbrain.org). VKJ was supported by the Brain Network Recovery Group through the James S. McDonnell Foundation and the FP7-ICT BrainScales. We wish to thank B. Lenne for his help with data acquisition. Most of the computation of this article was done using free software and we are indebted to the developers and Debian maintainers of the following packages: TeXLive, vim, R, python, python-networkx, python-numpy, python-mvpa, python-sklearn, mayavi2, graphviz, to mention only a few.
Appendix A. Procedure
1. We denote the measurement space E. For each electrode, the analog-to-digital conversion limits the available measurement to an interval of ℕ equivalent to I ≡ [0, 2^b - 1], where b is the resolution of the analog-to-digital converter in bits. In the case of a multichannel recording system with N electrodes, E can be considered to be E ≡ I^N with card(E) = |E| = 2^(bN). For example, with b = 16 bits and N = 17 electrodes, |E| = 2^272 micro-states are in principle available, far more than any recording can visit.

In this space, a recording session of Ts seconds leads to a sequence of row vectors e(t) = (e1(t), ..., eN(t)) with t = 0, ..., T - 1, where T = Ts/Δt is the number of recording samples in the session (with Δt the recording period in seconds).
We thus used here a spatial embedding of the dynamics instead of the time-delay embedding (Kantz and Schreiber, 2004). This choice was motivated, firstly, by our interest in the whole brain state space and, secondly, by the fact that time-delay embedding of single channel recordings is not adapted to the case of spatiotemporal dynamics such as brain dynamics (Lachaux et al., 1997).
2. The first step of the procedure is an iterative discretization of the measurement space E which leads to the meso-scale space M. Since the discretization proceeds by iterative bisections of intervals until a finite number of steps k, |M| ≤ 2^k. The criterion used to choose k will be described below. At this meso-scale, brain dynamics can be approximated by an effective Markov process with |M| states. This Markov process is characterized by its stationary probability distribution and its associated graph representation. The meso-states are then grouped into macro-states using a graph-based algorithm exploiting both probabilistic properties of the stationary distribution of the Markov process and the clique organization of the graph related to the transition matrix.
3. The macro-scale or coarse-grained scale of this study is thus C ≡ {C_i}_{i = 1, ..., |C|}, which corresponds to the sets of states that belong to the same clique, ordered by occurrence probability. This scale is defined in order to obtain geometrical information on the topological organization in the measurement space.
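The clique machinery behind step 3 can be sketched generically. The paper's macro-state construction combines clique structure with probabilistic criteria; the sketch below shows only the graph-and-clique part, with a simple probability threshold standing in as a hypothetical substitute for the surrogate-based selection:

```python
import numpy as np

def transition_graph(P, threshold):
    """Adjacency sets of the undirected graph linking meso-states i, j
    whose transition probability exceeds `threshold` in either
    direction. The fixed threshold is an illustrative stand-in for the
    paper's surrogate-based selection of significant transitions."""
    n = P.shape[0]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if P[i, j] > threshold or P[j, i] > threshold:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of the maximal cliques of the graph."""
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(sorted(R))
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    bk(set(), set(adj), set())
    return out

# Toy 4-state transition matrix with two strongly linked pairs
P = np.array([[0.10, 0.90, 0.00, 0.00],
              [0.90, 0.10, 0.00, 0.00],
              [0.00, 0.05, 0.20, 0.75],
              [0.00, 0.00, 0.80, 0.20]])
cliques = maximal_cliques(transition_graph(P, 0.3))
```

Each maximal clique groups meso-states that all exchange probability strongly with one another, which is the structural core of the macro-state definition above.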
A.1. Discretization step
The first step consists in a subdivision or partition of the measurement space, i.e. in the definition of non-overlapping regions E_i such that ∪_i E_i = E and ∩_i E_i = ∅. There is obviously no unique way to partition E, and each partition procedure has its own goal. For example, in the dynamical systems literature one typically searches for a generating partition (see e.g. Badii and Politi (1999)), which has strong theoretical settings. Nevertheless, a generating partition is hardly determined on the basis of noisy experimental data.
Our approach is based on a subdivision algorithm (Dellnitz and Hohmann, 1997) leading to the generation of a sequence E^(0), E^(1), ... of finite collections E^(k) = {E_j^(k) : j = 0, 1, ..., 2^k - 1} of 2^k rectangles such that ∪_j E_j^(k) = E. The first collection is E^(0) = E = [e_1^min, e_1^max] × [e_2^min, e_2^max] × ... × [e_N^min, e_N^max], where e_i^min and e_i^max are respectively the minimum and maximum values of the i-th coordinate of e. E^(k+1) is constructed by the bisection of each rectangle E_j^(k) along the (k mod N)-th
835
836
837
838
839
840841842843844845
846
847
848849850
851852853
854855856857858
859
860
861
862
863Q7864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882883884885886
887
888
889
890
891
892893894895
896897898899900901902903904905906
907
908
909
910
911
912
913
914
915
916
917
918
919920921922
923
924
925
926
927
928
929
930
931
932
933
934
935
936937938939940
941942
13A. Hadriche et al. / NeuroImage xxx (2013) xxxxxxUNCORREC
j -
dimension.This is an unsupervised method which leads to an equal-width
discretization process (Kotsiantis and Kanellopoulos, 2006). In Allefeld et al. (2009) another unsupervised technique was used, based on an equal-frequency discretization process. Both techniques lead to an approximation of the probability density ρ_E(e) by a simple function defined as

ρ^(k)(e) = Σ_i ρ_i^(k) I_{E_i^(k)}(e)    (A.1)

where I_{E_i^(k)} is the characteristic function of E_i^(k), i.e. I_{E_i^(k)}(e) = 1 if e ∈ E_i^(k) and I_{E_i^(k)}(e) = 0 otherwise. In the case of Allefeld et al. (2009) the ρ_i^(k) are constant (each cell contains T/2^k data points) and the measure of the E_i^(k) varies, whereas it is the reverse for the present subdivision procedure.
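The contrast between the two schemes can be sketched in a few lines. This is an illustrative toy, not code from the study (the helper names and the one-dimensional Gaussian sample are ours): equal-width cells keep the cell measure fixed while the occupation counts vary, equal-frequency cells do the reverse.

```python
import random
from collections import Counter

def equal_width_bins(xs, k):
    """Subdivision-style cells: 2**k equal-width intervals over the data
    range, so cell measures are equal and occupation counts vary."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / 2 ** k or 1.0
    return [min(int((x - lo) / width), 2 ** k - 1) for x in xs]

def equal_frequency_bins(xs, k):
    """Equal-frequency cells (Allefeld et al. style): ranks split into 2**k
    groups of ~T / 2**k points each, so counts are equal and widths vary."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    labels = [0] * len(xs)
    per_cell = len(xs) / 2 ** k
    for rank, i in enumerate(order):
        labels[i] = min(int(rank / per_cell), 2 ** k - 1)
    return labels

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10000)]
ew_counts = Counter(equal_width_bins(data, 3))       # unequal counts
ef_counts = Counter(equal_frequency_bins(data, 3))   # 1250 points per cell
```

With a Gaussian sample the central equal-width cells collect most of the points, while every equal-frequency cell holds exactly T/2^k of them.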
For each iteration step k, one hypothesizes a Markov process based on the discretized trajectory, i.e. the brain micro-states e(t) are replaced by meso-states m^(k)(t), with m^(k)(t) ∈ M^(k) = {0, ..., 2^k − 1}, where m^(k)(t) corresponds to the index j of the region E_j^(k) where e(t) is present. We consider the sequence of m^(k)(t) as the realization of a first-order Markov process M^(k) with stationary distribution π^(k) = (π_0^(k), ..., π_{2^k−1}^(k)) and 2^k × 2^k transition matrix Π^(k) = (Π_{i,j}^(k)) = (Pr(m^(k)(t) = j | m^(k)(t − 1) = i)). This Markov process can be characterized by its entropy rate h(M^(k)) given by:
h(M^(k)) = −Σ_{i,j} π_i^(k) Π_{i,j}^(k) log Π_{i,j}^(k)    (A.2)
where π^(k) and Π^(k) are the maximum likelihood estimates of, respectively, the stationary distribution and the transition probabilities, and log is taken as the natural logarithm.²
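The maximum-likelihood estimate of Eq. (A.2) can be computed directly from a sequence of integer meso-state labels. The sketch below is our own helper (not the authors' code): pair counts give Π and state counts give π, and since π_i Π_{i,j} reduces to the pair frequency, only the transition probabilities need an explicit logarithm.

```python
import random
from collections import Counter
from math import log

def entropy_rate(symbols):
    """MLE of the entropy rate of Eq. (A.2) from a list of integer labels:
    h = -sum_{i,j} pi_i * Pi_ij * log(Pi_ij), in nats."""
    T = len(symbols) - 1                               # number of transitions
    pair_counts = Counter(zip(symbols[:-1], symbols[1:]))
    state_counts = Counter(symbols[:-1])
    h = 0.0
    for (i, j), n_ij in pair_counts.items():
        pi_i = state_counts[i] / T                     # stationary distribution
        Pi_ij = n_ij / state_counts[i]                 # transition probability
        h -= pi_i * Pi_ij * log(Pi_ij)
    return h

random.seed(1)
coin = [random.randint(0, 1) for _ in range(100000)]
h_coin = entropy_rate(coin)        # close to log(2) nats for a fair coin
```

A deterministic period-2 sequence gives h = 0, while an i.i.d. fair coin gives h ≈ log 2, the two limiting cases discussed in Appendix B.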
The iteration of the subdivision process leads to finer and finer partitions, and the entropy rate h(M^(k)) thus increases for k = 1, 2, ... towards its limit, the entropy rate of the underlying transformation (Petersen, 1989). Nevertheless, in the case of experimental data this cannot be repeated an arbitrary number of times since the data are limited to the available T data points. For large k, the number of meso-states becomes large and the number of data points needed to obtain a valid estimation of the entropy rate increases. Thus in practical situations one enters the domain of bad statistics (Lesne et al., 2009), where the block entropy saturates and the entropy rate thus tends to zero. One must therefore find a compromise between statistical error and precision of the partition (Holstein and Kantz, 2009). We choose here a pragmatic and simple criterion based on the behavior of the entropy rate of the Markov process, and we thus define k* as: k* = argmax_k h(M^(k)).
This first step can be summarized as follows: for k = 1, 2, ..., k_max:

1. Split the data according to the dimension k mod N;
2. Estimate the stationary distribution π^(k) and the transition matrix Π^(k);
3. Compute h(M^(k)).

Then choose k* ∈ [1, k_max] where h(M^(k)) is maximal (practically k_max ≈ 10).
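The loop over k can be sketched as follows. This is a simplified illustration rather than the authors' implementation: it exploits the fact that k successive midpoint bisections of the coordinate ranges amount to a uniform equal-width grid with 2^cuts[d] cells along dimension d, and applies the pragmatic criterion k* = argmax_k h(M^(k)).

```python
import random
from collections import Counter
from math import log

def discretize(points, k):
    """Meso-state label of each point after k bisections: bisection number s
    (s = 0, ..., k-1) cuts dimension s mod N at the midpoint of its current
    range, so dimension d ends up split into 2**cuts[d] equal-width cells."""
    N = len(points[0])
    lo = [min(p[d] for p in points) for d in range(N)]
    hi = [max(p[d] for p in points) for d in range(N)]
    cuts = [k // N + (1 if d < k % N else 0) for d in range(N)]
    labels = []
    for p in points:
        idx = 0
        for d in range(N):
            span = (hi[d] - lo[d]) or 1.0
            cell = min(int((p[d] - lo[d]) / span * 2 ** cuts[d]),
                       2 ** cuts[d] - 1)
            idx = idx * 2 ** cuts[d] + cell
        labels.append(idx)
    return labels

def entropy_rate(symbols):
    """MLE of h = -sum pi_i Pi_ij log(Pi_ij) (Eq. A.2), in nats; note that
    pi_i * Pi_ij is just the relative frequency of the pair (i, j)."""
    T = len(symbols) - 1
    pairs = Counter(zip(symbols[:-1], symbols[1:]))
    singles = Counter(symbols[:-1])
    return -sum(n / T * log(n / singles[i]) for (i, j), n in pairs.items())

def select_k_star(points, k_max=8):
    """Pragmatic criterion of the text: k* = argmax_k h(M^(k))."""
    rates = [entropy_rate(discretize(points, k)) for k in range(1, k_max + 1)]
    return 1 + rates.index(max(rates)), rates

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(20000)]
k_star, rates = select_k_star(pts)
```

For i.i.d. uniform data the estimated rates initially follow the theoretical value k log 2 before finite-size effects set in, which is the behavior checked in Appendix B.1.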
At the end of the iterative process we consider the discretized trajectory as a Markov process with 2^(k*) states, and we thus keep the 2^(k*)-vector of stationary probabilities and the 2^(k*) × 2^(k*) transition matrix. In the remaining parts of this article we will drop the (k*) in our notations and will, for example, use M for M^(k*) and m(t) for m^(k*)(t), and so on for all the quantities obtained at the final k* step, such as π, Π and ρ.
² Entropy rates are converted to bits per sampling unit when the time scale is arbitrary and to bits per second when the time scale is explicit.
Please cite this article as: Hadriche, A., et al., Mapping the dynamic repertoire of the resting brain, NeuroImage (2013), http://dx.doi.org/10.1016/j.neuroimage.2013.04.041
The meso-scale Markov representation of brain dynamics is now studied using the graph representation of the process.
A.2. Coarse-graining
Let us associate to the transition matrix Π a weighted directed graph G with |M| vertices v_i (each vertex corresponds to an element m_i of M) and edges l_{i,j} from vertex i to vertex j if Π_{i,j} > 0 (Π_{i,j} is the weight of edge l_{i,j}). The set of vertices is also associated with the stationary distribution π in the meso-scale space.
Within this graph G one can define cliques C_i, which are the largest complete subgraphs C_i ⊂ G with at least three vertices, characterized by: ∀ v_k, v_j ∈ C_i, Π_{k,j} > 0 and Π_{j,k} > 0, which means that there are transitions from m_j to m_k and reciprocally. Thus, if the time evolution starts with a meso-state in a clique C_i, then with positive probability the next meso-states will stay in C_i, but there is also a positive probability that it will leave C_i. Since Pr(m(t) ∈ C_i) can be roughly estimated as |C_i|/|M| (Barber and Long, 2012), a process that starts in a larger clique has a higher probability to stay in it. Hence cliques with the largest size and containing the meso-states with the highest stationary probability have the highest probabilistic invariance.
The macro-scale is then defined using both the cliques in G and the stationary distribution in the meso-scale space M. Cliques are searched in G according to decreasing probabilities of meso-states as follows:
1. Select the most probable meso-state m_j, associated with vertex v_j, which has not yet been attributed to a clique.
2. Find the largest clique to which the vertex v_j belongs.
3. Gather all the vertices of this clique into one coarse-grained state and exclude them from the next steps.
4. Iterate this process.
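Steps 1 to 4 can be sketched with a plain Bron-Kerbosch clique enumeration. This is a toy implementation for small |M| (and, for simplicity, it also accepts cliques with fewer than the three vertices required in the text; ties between equally probable meso-states are left to Python's max):

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques of an undirected
    graph given as {vertex: set_of_neighbours}."""
    cliques = []
    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    expand(set(), set(adj), set())
    return cliques

def coarse_grain(Pi, pi):
    """Greedy macro-state construction: visit meso-states by decreasing
    stationary probability and group each with the largest maximal clique
    (bidirectional transitions) it belongs to among unassigned vertices."""
    n = len(pi)
    adj = {i: {j for j in range(n) if j != i and Pi[i][j] > 0 and Pi[j][i] > 0}
           for i in range(n)}
    unassigned, macro = set(range(n)), []
    while unassigned:
        v = max(unassigned, key=lambda i: pi[i])            # step 1
        sub = {i: adj[i] & unassigned for i in unassigned}
        best = max((C for C in maximal_cliques(sub) if v in C), key=len)  # step 2
        macro.append(sorted(best))                          # step 3
        unassigned -= best                                  # steps 3-4
    return macro

# toy 5-state example: states 0-2 form one clique, states 3-4 another
Pi = [[0.2, 0.4, 0.4, 0.0, 0.0],
      [0.4, 0.2, 0.4, 0.0, 0.0],
      [0.3, 0.3, 0.2, 0.2, 0.0],
      [0.0, 0.0, 0.1, 0.4, 0.5],
      [0.0, 0.0, 0.0, 0.5, 0.5]]
pi = [0.30, 0.25, 0.20, 0.15, 0.10]
macro_states = coarse_grain(Pi, pi)   # [[0, 1, 2], [3, 4]]
```

Note that the one-way transition from state 2 to state 3 (Π_{3,2} = 0.1 but grouped elsewhere) does not merge the two macro-states, since the greedy pass assigns state 2 to its largest clique first.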
In fact this procedure is empirically equivalent to an algorithm which first identifies all the cliques in the meso-scale graph and then selects the clique with the highest probability, since the probability and size criteria are most of the time equivalent. Step 3 ensures that C_i ∩ C_j = ∅ for i ≠ j. To each clique C_i one can thus associate a macro-state which gathers the meso-states m_j associated to the vertices v_j such that v_j ∈ C_i. These macro-states are characterized by a probability Σ_{v_i ∈ C_k} π_i and a size |C_k|, i.e. the number of vertices (meso-states) in the clique.
A.3. Adaptation of the procedure to brain dynamics
In the case of finite observation length T, the subdivision process stops at small k. If the dimension of the measurement space is greater than k*, this implies that information is mainly taken into account from the first dimensions and ignored for the last ones. Since the order of the electrodes is arbitrary, this would arbitrarily emphasize the first electrodes. In order to avoid this problem, a principal component analysis (PCA) was used as a preprocessing step. After PCA, e is replaced by linear combinations p ∈ E and the discretization process is then applied to p.
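The PCA preprocessing can be sketched with a singular value decomposition, assuming numpy is available (the function name and the toy channel variances below are ours, for illustration only):

```python
import numpy as np

def pca_scores(E):
    """Rotate the (T x N) measurement matrix onto its principal axes so that
    the first coordinates, which the subdivision bisects first, carry the
    largest variance. Returns the (T x N) matrix of PCA scores p."""
    Ec = E - E.mean(axis=0)                   # centre each channel
    _, _, Vt = np.linalg.svd(Ec, full_matrices=False)
    return Ec @ Vt.T                          # scores, variance-ordered

rng = np.random.default_rng(0)
# toy "recording": channel 2 carries far more variance than channels 0 and 1
E = rng.normal(size=(1000, 3)) * np.array([1.0, 0.5, 4.0])
p = pca_scores(E)
```

After the rotation the columns of p are uncorrelated and sorted by decreasing variance, so truncating the bisection at small k no longer privileges an arbitrary electrode ordering.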
In the case of real physiological data, once the coarse-grained (or macro) states are defined, one can test the physiological relevance of the procedure and project back to E the average activity of a state. Each macro-state C_i is a set of vertices v_j associated to meso-states m_j. In turn, each m_j is a set of p_l or e_l, and one can thus define an average pattern of brain electrical activity ⟨e⟩_j = Σ_{l ∈ L_j} e_l / |m_j|, with L_j = {l : e_l ∈ m_j}, associated to a meso-state m_j. The macro-state pattern Ψ_{C_i} is then

Ψ_{C_i} = Σ_j π_j ⟨e⟩_j    (A.3)

where π_j is the occurrence probability of m_j. This macro-state pattern can be projected to the scalp for inspection of physiological realism.
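Numerically, Eq. (A.3) is a probability-weighted sum of the meso-state mean patterns. A toy sketch with invented numbers (three meso-states in one clique, N = 4 channels), assuming numpy:

```python
import numpy as np

# Hypothetical toy numbers (ours, for illustration): one clique C_i gathering
# three meso-states, each with an average pattern <e>_j over N = 4 channels
# and an occurrence probability pi_j.
mean_patterns = np.array([[1.0, 0.5, 0.0, -0.5],
                          [0.8, 0.6, 0.1, -0.4],
                          [1.2, 0.4, -0.1, -0.6]])
pi = np.array([0.05, 0.03, 0.02])

# Eq. (A.3): the macro-state pattern is the probability-weighted sum
Psi = pi @ mean_patterns
```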
Appendix B. Technical results
B.1. Multivariate Gaussian white noise
Five independent random processes with standard Gaussian distribution were used to simulate multivariate independent and identically distributed processes of dimension 5 for increasing lengths T. The discretization step was applied to these data sets and the entropy rate of the meso-scale Markov process M^(k) was computed for each step k in each dataset. In that case, for iteration step k the theoretical entropy rate is h = log(2^k), since the partition has 2^k independent and equiprobable states. The results of these simulations are depicted in Fig. 11(a).
For initial iteration steps, the entropy rate follows the theoretical value h = log(2^k). The value of k* estimated for each length T varies from 5 for T = 10^4 to 6 for T = 3·10^4 and T = 10^5. For k > k* the estimated entropy rate decreases as a consequence of finite-size effects. We thus check here the valid initial evolution of the estimation of the entropy rate for k < k* and the expected finite-size decrease for k > k*.
Fig. 11. Entropy rate of the meso-scale Markov process (h(M^(k))) as a function of subdivision iteration (k) for several lengths of the time series (T = 10^4, 3·10^4 and 10^5). Black dashed lines represent the theoretical value (h) of the entropy rate for (a) multivariate Gaussian random noise, h = log(2^k), and (b) the Lorenz attractor, h ≈ 0.92. In both cases, the initial increase of the entropy rate (for k < k*) follows the theoretically predicted behavior, whereas for k > k* finite-size effects due to the finite length of the time series lead to a decrease of the entropy rate.
The Lorenz attractor (Lorenz, 1963) was used as a model of low-dimensional dynamics with known properties. The system of nonlinear differential equations which defines the Lorenz attractor is:
ẋ = σ(y − x)
ẏ = Rx − y − xz
ż = −bz + xy
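For completeness, the system can be integrated numerically, e.g. with a fourth-order Runge-Kutta scheme. The sketch below assumes the classical chaotic parameter values σ = 10, R = 28, b = 8/3, which are not stated in this excerpt:

```python
def lorenz_step(state, dt, sigma=10.0, R=28.0, b=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz system (the parameter
    values sigma=10, R=28, b=8/3 are assumed, not given in the excerpt)."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), R * x - y - x * z, -b * z + x * y)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2.0 * b2 + 2.0 * c + d)
                 for s, a, b2, c, d in zip(state, k1, k2, k3, k4))

# integrate past the transient and keep a trajectory on the attractor
state = (1.0, 1.0, 1.0)
traj = []
for i in range(20000):
    state = lorenz_step(state, 0.01)
    if i >= 2000:
        traj.append(state)
```

The resulting trajectory, sampled like a measured time series, is the kind of data to which the discretization of Appendix A can be applied for validation against the known entropy rate h ≈ 0.92.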
References
Allefeld, C., Atmanspacher, H., Wackermann, J., 2009. Mental states as macrostates emerging from brain electrical dynamics. Chaos 19, 015102.
Amit, D.J., 1989. Modeling Brain Functions. The World of Attractor Neural Networks. Cambridge University Press.
Ashwin, P., Chossat, P., 1998. Attractors for robust heteroclinic cycles with continua of connections. J. Nonlinear Sci. 8, 103–129.
Ashwin, P., Field, M., 1999. Heteroclinic networks in coupled cell systems. Arch. Ration. Mech. Anal. 148, 107–143.
Badii, R., Politi, A., 1999. Complexity. Hierarchical Structures and Scaling in Physics. Cambridge University Press.
Barber, B., Long, E., 2012. Random walks on quasirandom graphs. arXiv:1211.3296v1 [math.CO].
Biswal, B., Yetkin, F.Z., Haughton, V.M., Hyde, J.S., 1995. Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magn. Reson. Med. 34, 537–541.
Buhlmann, P., Wyner, A.J., 1999. Variable length Markov chains. Ann. Stat. 27, 480–513.
Crutchfield, J.P., 2012. Between order and chaos. Nat. Phys. 8, 17–24.
Damoiseau, J.S., Rombouts, S.A., Barkhof, F., Scheltens, P., Stam, C.J., Smith, S.M., Beckmann, C.F., 2006. Consistent resting-state networks across healthy subjects. Proc. Natl. Acad. Sci. U. S. A. 103, 13848–13865.
Deco, G., Jirsa, V.K., 2012. Ongoing cortical activity at rest: criticality, multistability, and ghost attractors. J. Neurosci. 32, 3366–3375.
Deco, G., Jirsa, V.K., McIntosh, A.R., Sporns, O., Kotter, R., 2009. Key role of coupling, delay, and noise in resting brain fluctuations. Proc. Natl. Acad. Sci. U. S. A. 106, 10302–10307.
Deco, G., Jirsa, V.K., McIntosh, A., 2011. Emerging concepts for the dynamical organization of resting state activity in the brain. Nat. Rev. Neurosci. 12, 43–56.
Dellnitz, M., Hohmann, A., 1997. A subdivision algorithm for the computation of unstable manifolds and global attractors. Numer. Math. 75, 293–317.
Dellnitz, M., Junge, O., 1999. On the approximation of complicated dynamical behavior. SIAM J. Numer. Anal. 36, 491–515.
Dellnitz, M., Junge, O., 2002. Set oriented numerical methods for dynamical systems. In: Fiedler, B. (Ed.), Handbook of Dynamical Systems, vol. 2. Elsevier, pp. 221–264.
Dellnitz, M., Field, M., Golubitsky, M., Ma, J., Hohmann, A., 1995. Cycling chaos. Int. J. Bifurcat. Chaos 5, 1243–1247.
Donner, R.V., Zou, Y., Donges, J.F., Marwan, N., Kurths, J., 2010. Recurrence networks - a novel paradigm for nonlinear time series analysis. New J. Phys. 12, 033025.
Fitzhugh, R., Izhikevich, E.M., 2006. FitzHugh-Nagumo model. Scholarpedia 1, 1349.
Fox, M.D., Snyder, A.Z., Vincent, J.L., Corbetta, M., Van Essen, D.C., Raichle, M.E., 2005. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proc. Natl. Acad. Sci. U. S. A. 102, 9673–9678.
Frey, B.J., Dueck, D., 2007. Clustering by passing messages between data points. Science 315, 949–972.
Freyer, F., Roberts, J., Becker, R., Robinson, P., Ritter, P., Breakspear, M., 2011. Dynamic mechanisms of multistability in the human alpha rhythm. J. Neurosci. 31, 6353–6361.
Friston, K., 1997. Transients, metastability and neuronal dynamics. NeuroImage 5, 164–171.
Froyland, G., 2001. Extracting dynamical behavior via Markov models. In: Mees, A.I. (Ed.), Nonlinear Dynamics and Statistics. Birkhäuser, Boston, pp. 283–324.
Gaveau, B., Schulman, L.S., 1996. Master equation based formulation of nonequilibrium statistical mechanics. J. Math. Phys. 37, 3897–3992.
Ghosh, A., Rho, Y., McIntosh, A., Kotter, R., Jirsa, V., 2008. Noise during rest enables the exploration of the brain's dynamic repertoire. PLoS Comput. Biol. 4, e1000196.
Gupta, S., Ray, A., 2009. Statistical mechanics of complex systems for pattern identification. J. Stat. Phys. 134, 337–364.
Holstein, D., Kantz, H., 2009. Optimal Markov approximations and generalized embeddings. Phys. Rev. E 79, 056202.
Honey, C.J., Kotter, R., Breakspear, M., Sporns, O., 2007. Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc. Natl. Acad. Sci. U. S. A. 104, 10240–10245.
Hopfield, J.J., 1982. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. U. S. A. 79, 2554–2558.
Jirsa, V., Ghosh, A., 2010. Spontaneous brain dynamics emerges at the edge of instability. In: Steyn-Ross, D.A., Steyn-Ross, M. (Eds.), Modeling Phase Transitions in the Brain. Springer, pp. 81–98.
Jirsa, V.K., Sporns, O., Breakspear, M., Deco, G., McIntosh, A.R., 2010. Towards the virtual brain: network modeling of the intact and the damaged brain. Arch. Ital. Biol. 148, 189–205.
Kantz, H., Schreiber, T., 2004. Nonlinear Time Series Analysis, 2nd edition. Cambridge University Press.
Kelso, J., 2012. Multistability and metastability: understanding dynamic coordination in the brain. Phil. Trans. R. Soc. B 367, 906–918.
Kitchens, B.P., 1998. Symbolic Dynamics. One-sided, Two-sided and Countable State Markov Shifts. Springer-Verlag.
Kotsiantis, S., Kanellopoulos, D., 2006. Discretization techniques: a recent survey. GESTS Int. Trans. Comput. Sci. Eng. 32, 47–58.
Kotter, R., 2004. Online retrieval, processing, and visualization of primate connectivity data from the CoCoMac database. Neuroinformatics 2, 127–144.
Kotter, R., Wanke, E., 2005. Mapping brains without coordinates. Philos. Trans. R. Soc. Lond. B 360, 751–766.
Lachaux, J.-P., Pezard, L., Garnero, L., Pelte, C., Renault, B., Varela, F., Martinerie, J., 1997. The spatial extension of brain activity fools the single-channel reconstruction of EEG dynamics. Hum. Brain Mapp. 5, 26–47.
Lasota, A., Mackey, M.C., 1994. Chaos, Fractals and Noise. Stochastic Aspects of Dynamics. Springer, Berlin.
Laufs, H., Kleinschmidt, A., Beyerle, A., Eger, E., Salek-Haddadi, A., Preibisch, C., Krakow, K., 2003. EEG-correlated fMRI of human alpha activity. NeuroImage 19, 1463–1476.
Lesne, A., Blanc, J.-L., Pezard, L., 2009. Entropy estimation of very short symbolic sequences. Phys. Rev. E 79, 046208.
Logothetis, N.K., Pauls, J., Augath, M., Trinath, T., Oeltermann, A., 2001. Neurophysiological investigation of the basis of the fMRI signal. Nature 412, 150–157.
Lorenz, E., 1963. Deterministic nonperiodic flow. J. Atmos. Sci. 20, 130–142.
Luzzatto, S., Pilarczyk, P., 2011. Finite resolution dynamics. Found. Comput. Math. 11, 211–239.
Mantini, D., Perrucci, M.G., Del Gratta, C., Romani, G.L., Corbetta, M., 2007. Electrophysiological signatures of resting state networks in the human brain. Proc. Natl. Acad. Sci. U. S. A. 104, 13170–13175.
Marwan, N., Donges, J.F., Zou, Y., Donner, R.V., Kurths, J., 2009. Complex network approach for recurrence analysis of time series. Phys. Lett. A 373, 4246–4254.
Osipenko, G., 2007. Dynamical Systems, Graphs, and Algorithms. Springer, Berlin.
Petersen, K., 1989. Ergodic Theory. Cambridge University Press, Cambridge.
Prichard, D., Theiler, J., 1994. Generating surrogate data for time series with several simultaneously measured variables. Phys. Rev. Lett. 73, 951–954.
Rabinovich, M.I., Afraimovich, V.S., Bick, C., Varona, P., 2012. Information flow dynamics in the brain. Phys. Life Rev. 9, 51–73.
Raichle, M.E., MacLeod, A.M., Snyder, A.Z., Powers, W.J., Gusnard, D.A., Shulman, G.L., 2001. A default mode of brain function. Proc. Natl. Acad. Sci. U. S. A. 98, 676–682.
Scheeringa, R., Fries, P., Petersson, K.M., Oostenveld, R., Grothe, I., Norris, D.G., Hagoort, P., Bastiaansen, M.C., 2011. Neuronal dynamics underlying high- and low-frequency EEG oscillations contribute independently to the human BOLD signal. Neuron 69, 572–583.
Schnakenberg, J., 1976. Network theory of microscopic and macroscopic behavior of master equation systems. Rev. Mod. Phys. 48, 571–585.
Schreiber, T., Schmitz, A., 2000. Surrogate time series. Physica D 142, 346–382.
Stam, C., 2005. Nonlinear dynamical analysis of EEG and MEG: review of an emerging field. Clin. Neurophysiol. 116, 2266–2301.
Stefanescu, R.A., Jirsa, V.K., 2008. A low dimensional description of globally coupled heterogeneous neural networks of excitatory and inhibitory neurons. PLoS Comput. Biol. 4, e1000219.
Subha, D., Joseph, P., Acharya, U.R., Lim, C., 2010. EEG signal analysis: a survey. J. Med. Syst. 34, 195–212.
Theiler, J., Eubank, S., Longtin, A., Galdrikian, B., Farmer, J., 1992. Testing for nonlinearity: the method of surrogate data. Physica D 58, 77–94.
Tsuda, I., 2001. Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav. Brain Sci. 24, 793–810.
Ulam, S., 1960. A Collection of Mathematical Problems. Interscience Publishers Inc., New York.
Wen, X., Mo, J., Ding, M., 2012. Exploring resting-state functional connectivity with total interdependence. NeuroImage 60, 1587–1595.
Woodman, M., Perdikis, D., Pillai, A.S., Dodel, S., Huys, R., Bressler, S., Jirsa, V., 2011. Building neurocognitive networks with a distributed functional architecture. Adv. Exp. Med. Biol. 718, 101–109.