
The Asynchronous State in the Cerebral Cortex

Alfonso Renart,1∗ Jaime de la Rocha,1,2∗ Peter Bartho,1,3 Liad Hollender,1

Nestor Parga,4 Alex Reyes,2 Kenneth D. Harris1,5

Supplementary Information

Contents

1 Theory
  1.1 Analytical Methods
    1.1.1 Network Architecture
    1.1.2 Dynamics
  1.2 Analytical Results
    1.2.1 Firing Rates
    1.2.2 Instantaneous Pair-wise Correlations
    1.2.3 Tracking of fluctuations in the asynchronous state
    1.2.4 Balance of current correlations
    1.2.5 Delayed Correlations
    1.2.6 Self-consistent propagation of asynchronous activity

2 Methods
  2.1 Numerical Methods
    2.1.1 Binary Networks
    2.1.2 Integrate-and-fire networks
  2.2 Experimental Methods
  2.3 Analysis Methods
    2.3.1 Single-Unit Isolation
    2.3.2 Classification of cortical state
    2.3.3 Down-state detection and exclusion
    2.3.4 Quantifying Spiking Correlations
    2.3.5 Assessing significance of correlations
    2.3.6 Cross-correlograms
    2.3.7 Distance-dependence of correlations

3 Supplementary Results
  3.1 Relationship between instantaneous firing rate fluctuations and population-averaged correlations
  3.2 Time-scale dependence of correlations in the rat cortex in vivo
  3.3 Examples of CCGs of individual pairs
  3.4 Distance dependence of correlations

1 Theory

Here, we present some aspects of the theory that we developed to describe asynchronous activity in densely connected, strongly coupled recurrent networks. Asynchronous states in recurrent networks have been studied analytically before (1–7). However, previous treatments lacked at least one of two features which are crucial for an accurate description of how recurrent circuits in the cortex operate: the networks studied so far were either sparsely connected or weakly coupled. In order to make these concepts precise, we adopt the large-N limit, considering a series of models of increasing size, in which connection probabilities and strengths scale in a systematic manner with the size N of the network. In this approach, considering how dynamical properties of the network change with N allows one to make qualitative statements which are robust to the precise values of model parameters.

A network is said to have dense connectivity if the probability that two cells are connected does not decrease with the network size. In a random densely connected network with connection probability p, any pair of cells will therefore share a fraction p of their inputs, even in the limit of very large networks. Similarly, a network is said to have strong coupling if the number of inputs needed to make a neuron fire is a small fraction of the total number of inputs the cell receives; more precisely, this fraction should approach zero as network size increases. Thus, the connection strength has to decrease with the network size at a slower rate than ∼ O(1/N). For the current model, we considered the scaling introduced in (3), in which the synaptic couplings scale as ∼ O(1/√N). With this scaling, the magnitude of temporal fluctuations in the synaptic input currents to the neurons in an asynchronous network saturates to a value of the order of the synaptic threshold in the limit of large networks (3). Thus, in a densely connected, strongly coupled network, neurons share inputs and synaptic currents display strong temporal fluctuations, even in the limit of very large networks.

1.1 Analytical Methods

1.1.1 Network Architecture

We consider a recurrent network composed of N excitatory and N inhibitory cells. The probability that a neuron of population α = E, I (where E and I denote the excitatory and inhibitory populations) receives input from a neuron of population β = E, I is denoted by p_αβ. In addition to the cells in the recurrent network, we also consider input from an external population of neurons, which do not receive inputs from the recurrent network (see Fig. 2A main text). In particular, we assume that there are N excitatory cells in the external network and that the probability of a connection from an external cell to a cell in population α of the recurrent network is p_αX.

The strength of the synaptic connection from cell j in population β to cell i in population α is denoted as J^{αβ}_{ij}. We consider randomly connected networks in which

    P(J^{αβ}_{ij} = j_{αβ}/√N) = p_{αβ}
    P(J^{αβ}_{ij} = 0) = 1 − p_{αβ}      (1)

so that all connections between pairs of neurons belonging to the same populations have the same strength. The quantities j_{αβ} are of order unity (which we write as ∼ O(1)), meaning that they are independent of the network size. Synaptic connections are thus strong, so that a change in the activity of only ∼ O(√N) neurons is enough to provoke a change in the synaptic input to the neurons of order unity (of the order of the firing threshold).

This network architecture is similar to the one analyzed in (3), with an important difference. In that network, the average number of pre-synaptic inputs was held fixed independently of the network size, whereas in our network it is proportional to the total number of neurons per population, N. Thus, in that network the fraction of shared input for any pair of cells vanishes in the limit of large networks, whereas in our network it is constant at p_αβ, independently of network size.
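The connectivity rule in equation (1) is straightforward to state in code. The sketch below (illustrative only; all parameter values are hypothetical, not taken from the paper) samples one recurrent block of the random matrix and checks the dense-connectivity property: the fraction of cells pre-synaptic to both members of a pair is ≈ p², independently of N, so each pair shares a fraction ≈ p of its inputs.

```python
import numpy as np

def sample_connectivity(N, j, p, rng):
    """Sample one block of the synaptic matrix as in equation (1):
    each entry equals j/sqrt(N) with probability p, and 0 otherwise."""
    mask = rng.random((N, N)) < p          # which connections exist
    return np.where(mask, j / np.sqrt(N), 0.0)

rng = np.random.default_rng(0)
N, p = 400, 0.2                            # hypothetical size and density
J_EE = sample_connectivity(N, j=1.0, p=p, rng=rng)

# Dense connectivity: fraction of cells projecting to BOTH cells 0 and 1
# is ~p**2 regardless of N (each then shares a fraction ~p of its inputs).
shared = np.mean((J_EE[0] != 0) & (J_EE[1] != 0))
```

With this scaling the nonzero weights shrink as 1/√N while the shared-input fraction stays fixed, which is exactly the dense, strongly coupled regime described above.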

1.1.2 Dynamics

We use the formalism introduced by Glauber (8) to describe the stochastic updating of networks of binary elements. This formalism has often been used in neuroscience applications (9, 10), including the study of correlations in recurrent networks (2) and the analysis of balanced recurrent networks (3). In this framework, neurons are modeled as binary elements. We will denote the state at time t of neuron i from population α as σ^α_i(t) = {0, 1} (Greek letters refer to populations and Latin letters to specific neurons). All neurons update their state independently and stochastically. We will denote by w(σ^α_i) the probability per unit time that neuron i from population α changes its state from σ^α_i to 1 − σ^α_i, and by P(~σ, t) the probability that the state of the whole network is ~σ = {σ_i} (i = 1, . . . , 3N) at time t. As described in (8), one can write an equation for the rate of change of P(~σ, t) (the master equation) by noting that, if neurons update independently, in an interval of length dt at most one neuron can change its state. It follows that

    d/dt P(~σ, t) = −P(~σ, t) Σ_{i=1}^{N′} w(σ_i) + Σ_{i=1}^{N′} P(~σ(σ_i), t) w(1 − σ_i)      (2)

where N′ ≡ 3N and ~σ(σ_i) = {σ_1, . . . , 1 − σ_i, . . . , σ_{N′}}.

The equation should be solved with the initial condition {~σ, 0}. Using this equation, one can write an equation for the temporal evolution of any arbitrary product of neuronal variables. For instance, the temporal evolution of the average activity of neuron i with respect to P(~σ) is given by

    d/dt 〈σ_i〉(t) ≡ d/dt Σ_~σ P(~σ, t) σ_i = Σ_~σ (d/dt P(~σ, t)) σ_i

Substituting equation (2) into the previous equation, one obtains

    d/dt 〈σ_i〉(t) = Σ_~σ [ Σ_{j=1}^{N′} P(~σ(σ_j), t) w(1 − σ_j) σ_i − P(~σ, t) Σ_{j=1}^{N′} w(σ_j) σ_i ]

Since we are summing over all configurations ~σ, for each term with σ_j = 1 there is a corresponding term with all other neurons in the same state but σ_j = 0, and vice-versa, so that this expression can be re-written as

    d/dt 〈σ_i〉(t) = Σ_~σ P(~σ, t) [w(σ_i)(1 − 2σ_i)]      (3)

Using similar arguments, it is straightforward to find the expression for the joint probability that two neurons are active at the same time,

    d/dt 〈σ_i σ_j〉(t) = Σ_~σ P(~σ, t) [w(σ_i)(1 − 2σ_i) σ_j + w(σ_j)(1 − 2σ_j) σ_i]      (4)

One can also compute joint probabilities that different neurons will be active at different times. Let us define

    〈σ_i(t) σ′_j(t + τ)〉 = Σ_{~σ,~σ′} P(~σ′, t + τ; ~σ, t) σ_i σ′_j = Σ_~σ P(~σ, t) σ_i Σ_{~σ′} P(~σ′, t + τ | ~σ, t) σ′_j

It follows that

    d/dτ 〈σ_i(t) σ′_j(t + τ)〉 = Σ_~σ P(~σ, t) σ_i Σ_{~σ′} (d/dτ P(~σ′, t + τ | ~σ, t)) σ′_j

The equation for the rate of change of the conditional probability is the master equation, where the state we are conditioning on, {~σ, t}, is to be interpreted as the initial condition. We thus obtain

    d/dτ 〈σ_i(t) σ′_j(t + τ)〉 = Σ_{~σ,~σ′} P(~σ′, t + τ; ~σ, t) σ_i w(σ′_j)(1 − 2σ′_j)      (5)

The dynamics of rates and correlations (of any order) is therefore completely determined by the transition probabilities w(. . .). In our case they take the form

    w(σ^α_i) = (1/τ_α) [σ^α_i − Θ(h^α_i)]^2      (6)
    w(σ^X_i) = (1/2τ_X) [1 − (2σ^X_i − 1)(2m^X_i − 1)]      (7)

where Θ(. . .) is the Heaviside step function and h^α_i is the net afferent current to this cell, given by

    h^α_i = Σ_{β=E,I,X} Σ_{j=1}^{N} J^{αβ}_{ij} σ^β_j − θ^α_i      (8)

For notational simplicity we have included the threshold θ^α_i of each neuron as a constant negative term in its input current. A neuron's output does not contribute to its input, so the summation in the definition of h^α_i does not include the term proportional to σ^α_i. At the time of its update, therefore, a cell from the recurrent network becomes active (inactive) if the afferent input from all other neurons, including the external ones, is greater (smaller) than its threshold. The state to which external cells are updated, on the other hand, does not depend on the state of the rest of the network. Because of this, for any given pair of external neurons, P(σ^X_i; σ^X_j) = P(σ^X_i) P(σ^X_j). Given the form of the transition probabilities, the symbol 〈. . .〉, which stands for an average over the sources of stochasticity in the dynamics, denotes an average over the distribution of update times of all neurons, and over the probability that an external neuron will become activated when its turn to update its state comes.
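The update rule in equations (6)–(8) amounts to the following: at each update time, a recurrent neuron adopts the state Θ(h) given its current input, while an external neuron is set to 1 with probability m^X regardless of the network. Below is a minimal simulation sketch, under the simplifying assumptions (ours, not the paper's) that all populations share a single update time constant, that external rates are homogeneous, and that external units occupy the last columns of one combined weight matrix; all names and values are illustrative.

```python
import numpy as np

def simulate_binary_network(J, theta, mX, n_ext, T, rng):
    """Asynchronous sequential updating in the spirit of eqs. (6)-(8),
    simplified to one update time constant. The last n_ext units are
    'external': they turn on with probability mX independently of the
    network; all other units adopt Theta(h), where h is the total input
    minus threshold (self-connections excluded)."""
    n = J.shape[0]
    sigma = rng.integers(0, 2, size=n)          # random initial binary state
    for _ in range(T * n):                      # roughly T updates per neuron
        i = rng.integers(n)
        if i >= n - n_ext:                      # external unit
            sigma[i] = rng.random() < mX
        else:                                   # recurrent unit: threshold rule
            h = J[i] @ sigma - J[i, i] * sigma[i] - theta[i]
            sigma[i] = 1 if h > 0 else 0
    return sigma

rng = np.random.default_rng(1)
n, n_ext = 30, 10
J = rng.normal(0.0, 0.2, size=(n, n))           # illustrative O(1/sqrt(n)) weights
state = simulate_binary_network(J, np.full(n, 0.1), mX=0.3, n_ext=n_ext, T=20, rng=rng)
```

A full reproduction of the model would use the block-structured E/I/X connectivity of equation (1) and exponentially distributed update times per population; this sketch only illustrates the threshold update itself.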

1.2 Analytical Results

In this section, we describe how to calculate the leading order of firing rates and correlations in the network during steady states, where the statistical properties of the neurons are constant.

Let us define the average activity of cell i from population α at time t as m^α_i(t) = 〈σ^α_i〉(t). Using the transition probabilities (equations (6-7)), equation (3) for the temporal evolution of the average activities becomes

    τ_α d/dt m^α_i(t) = −m^α_i(t) + 〈Θ(h^α_i(t))〉
    τ_X d/dt m^X_i(t) = −m^X_i(t) + m^X_i

Similarly, we define the instantaneous correlation (strictly speaking this is the instantaneous covariance, but we will use the term correlation throughout the description of the theory) at time t between units αi and βj as

    r^{αβ}_{ij}(t) ≡ 〈δσ^α_i(t) δσ^β_j(t)〉      αi ≠ βj

where δx ≡ x − 〈x〉 and α, β = E, I, X. Substituting the transition probabilities in equations (6-7) into equations (3-4), one obtains the following equations for the temporal evolution of the pair-wise correlations

    τ_{αβ} d/dt r^{αβ}_{ij}(t) = −r^{αβ}_{ij}(t)
        + [τ_α/(τ_α + τ_β)] [〈σ^α_i(t) Θ(h^β_j(t))〉 − m^α_i(t) 〈Θ(h^β_j(t))〉]
        + [τ_β/(τ_α + τ_β)] [〈Θ(h^α_i(t)) σ^β_j(t)〉 − 〈Θ(h^α_i(t))〉 m^β_j(t)]

    τ_{αX} d/dt r^{αX}_{ij}(t) = −r^{αX}_{ij}(t) + [τ_X/(τ_α + τ_X)] [〈Θ(h^α_i(t)) σ^X_j(t)〉 − m^α_i(t) m^X_j(t)]

where τ_{αβ} ≡ (τ_α τ_β)/(τ_α + τ_β). We will be interested in the value of the firing rates and pair-wise correlations in a stationary situation (which we will also refer to as equilibrium), where the memory of the initial conditions has been lost and the statistical properties of the network activity are no longer changing. This is equivalent to taking the t → ∞ limit of the previous equations, which results in

    m^α_i = 〈Θ(h^α_i)〉      (9)
    m^X_i = m^X_i      (10)
    (τ_α + τ_β) r^{αβ}_{ij} = τ_α 〈δσ^α_i δΘ(h^β_j)〉 + τ_β 〈δΘ(h^α_i) δσ^β_j〉      (11)
    (τ_α + τ_X) r^{αX}_{ij} = τ_X 〈δΘ(h^α_i) δσ^X_j〉      (12)

The reason the external-recurrent correlations only contain one term is that neurons from the recurrent network are disconnected from, and therefore have no influence on, the external neurons.

The difficulty in evaluating this set of coupled non-linear equations lies in the fact that arbitrary dependencies between the activities of different neurons preclude, in principle, a closed form for the probability distribution of the synaptic currents h^α_i. Since the h^α_i are the sum of a large number of stochastic variables, the central limit theorem ensures that, if these variables were independent, the probability distribution of h^α_i would be well approximated by a Gaussian distribution for large networks. However, the existence of shared inputs introduces correlations in the activities of the cells. The central limit theorem can still be used if these correlations are sufficiently weak. In fact, if the population-averaged correlations are inversely proportional to the network size N (and higher-order cumulants decay fast enough), the distribution of h^α_i can still be well approximated by a Gaussian for large N (11). Network states in which the population-averaged correlations scale as ∼ O(1/N) are called asynchronous (2). In an asynchronous state, neurons are effectively independent (see section 3.1 of the Supplementary Information), in the sense that the correlations do not significantly impair how well the average firing rate in the network can be estimated.

Our strategy will therefore be to first assume that the network is asynchronous in the steady states. With this assumption, we will develop equations (9-12) to obtain expressions for the population-averaged firing rates and pair-wise correlations in the network. After this, we will show that there are solutions to these equations in which the network is indeed asynchronous. We will do this in two steps. First, we will develop the set of equations (9-12) without using the fact that the synaptic connections J^{αβ}_{ij} are stochastic, but using the central limit theorem to invoke a Gaussian distribution for the synaptic currents h^α_i. This leads to a set of coupled equations for the firing rates and pair-wise correlations of each neuron and neuronal pair in the network, which we refer to as the microscopic equations. Second, we will average over the distribution of the synaptic connectivity in order to obtain macroscopic equations for the population-averaged firing rates and correlations in the network.

1.2.1 Firing Rates

We assume that the distribution of the afferent current h^α_i to cell αi is well approximated by a Gaussian. Let us denote the mean and variance of the Gaussian distribution of the current to cell αi at equilibrium by

    µ^α_i ≡ lim_{t→∞} 〈h^α_i(t)〉      (s^α_i)^2 ≡ lim_{t→∞} 〈(δh^α_i(t))^2〉      (13)

In terms of these quantities, and following the notation in (12), equation (9) for the equilibrium firing rate of cell αi becomes

    m^α_i = H(ψ^α_i)      (14)

where ψ^α_i ≡ −µ^α_i/√((s^α_i)^2) and

    H(z) ≡ (1/√(2π)) ∫_z^∞ dx exp(−x^2/2)

Equation (14) gives the firing rate of every individual neuron in the network in terms of the activity of all other neurons and the properties of the connectivity. We would now like to obtain a statistical description of activity in the network, specifying, for instance, the average firing rate in a given sub-population, m_α ≡ (1/N) Σ_i m^α_i. In order to do this, one takes advantage of the fact that the network connectivity is stochastic. Because of this, population averages can be thought of as sample averages which, if the network is large, will provide accurate estimates of the distribution averages induced by the probabilistic connectivity. Let us denote the probability distribution of a given connectivity matrix as P(J). To compute averages over P(J), we note that both the mean µ^α_i and the variance (s^α_i)^2 of the current are linear combinations of a large number of uncorrelated random variables (the synaptic variables J^{αβ}_{ij}). Thus, P(J) induces a probability distribution in µ^α_i and (s^α_i)^2 which will be well approximated by a Gaussian for large networks. We denote this by

    µ^α_i = µ_α + x_{µ_α} ∆µ_α
    (s^α_i)^2 = s^2_α + x_{s^2_α} ∆s^2_α

where the x's are zero-mean, unit-variance Gaussian random variables, and where we have defined (∆a)^2 ≡ [(a − [a])^2]. We use the notation [. . .] to denote averages over the distribution of heterogeneity P(J). Although the variables x_{µ_α} and x_{s^2_α} are in principle correlated, it can be shown that the cell-to-cell variability in the magnitude of the temporal fluctuations in synaptic current, ∆s^2_α, decays with the network size, whereas µ_α, ∆µ_α and s^2_α do not. Thus, for large networks, one only needs to consider

    µ_α = (1/ǫ) Σ_β J_{αβ} m_β − θ_α      (15)
    (∆µ_α)^2 = Σ_β J^{(2)}_{αβ} q_β + Σ_β J^2_{αβ} (q_β − m^2_β)
    s^2_α = Σ_β J^{(2)}_{αβ} (m_β − q_β) + c_{αα}

where we have made the following definitions: ǫ ≡ 1/√N; J_{αβ} ≡ [J^{αβ}_{ij}] = j_{αβ} p_{αβ} and J^{(2)}_{αβ} = [(J^{αβ}_{ij} − [J^{αβ}_{ij}])^2] = j^2_{αβ} p_{αβ}(1 − p_{αβ}) are, respectively, the average and variance of the distribution of synaptic efficacies between pre-synaptic neurons from population β and post-synaptic neurons from population α. Finally, q_α ≡ (1/N) Σ_i (m^α_i)^2 is the second moment of the spatial distribution of firing rates, and c_{αβ} = (1/N^2) Σ_{ij} 〈δh^α_i δh^β_j〉 is the average correlation between the synaptic currents to neurons in populations α and β. We show below that the population-averaged current correlation decreases with the network size, c_{αβ} ∼ O(ǫ), so it can also be neglected in the previous equations. In terms of these quantities, the population-averaged firing rate and the second moment of the rate distribution can, for large networks, be approximated by

    m_α = ∫ Dx H( −(µ_α + x ∆µ_α)/√(s^2_α) ) = H( −µ_α/√(s^2_α + (∆µ_α)^2) )      (16)
    q_α = ∫ Dx [ H( −(µ_α + x ∆µ_α)/√(s^2_α) ) ]^2      (17)

where Dx is a Gaussian measure of zero mean and unit variance.
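Equations (16)-(17) are one-dimensional Gaussian integrals and can be evaluated numerically. The sketch below (illustrative parameter values, not from the paper) computes m_α and q_α by Gauss-Hermite quadrature over the quenched variability x, using H(z) = erfc(z/√2)/2, and compares m_α against the closed form on the r.h.s. of equation (16).

```python
import numpy as np
from math import erfc, sqrt, pi

def H(z):
    """Gaussian tail: H(z) = (1/sqrt(2*pi)) * integral_z^inf exp(-x^2/2) dx."""
    return 0.5 * erfc(z / sqrt(2.0))

def rate_moments(mu, dmu, s2, n_quad=60):
    """m_alpha (eq. 16) and q_alpha (eq. 17) by Gauss-Hermite quadrature:
    int Dx f(x) ~ (1/sqrt(pi)) * sum_k w_k f(sqrt(2) x_k)."""
    x, w = np.polynomial.hermite.hermgauss(n_quad)   # physicists' nodes/weights
    g = np.array([H(-(mu + sqrt(2.0) * xk * dmu) / sqrt(s2)) for xk in x])
    m = float(np.sum(w * g) / sqrt(pi))
    q = float(np.sum(w * g ** 2) / sqrt(pi))
    return m, q

# Illustrative numbers: sub-threshold mean input, moderate quenched (dmu)
# and temporal (s2) variability.
m, q = rate_moments(mu=-0.2, dmu=0.5, s2=1.0)
m_closed = H(0.2 / sqrt(1.0 + 0.25))   # r.h.s. of eq. (16): H(-mu/sqrt(s2+dmu^2))
```

The quadrature result reproduces the closed-form Gaussian identity for m_α, while q_α (which has no elementary closed form) sits between m_α² and m_α, as it must for a quantity bounded by Jensen's inequality.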

As shown in (3), one does not actually have to solve equation (16) in order to obtain the firing rates in the network. Because each neuron receives ∼ O(N) synaptic inputs, but only ∼ O(√N) of them are enough to make it fire, the net magnitude of the total excitation and inhibition felt by the neurons is very large compared to the firing threshold (the factor 1/ǫ on the r.h.s. of equation (15)). In order for the firing rates not to be at either zero or at saturation, these large excitatory and inhibitory drives have to cancel, and this cancellation can only happen if the firing rates take on precise values. Thus, for large networks, imposing the cancellation determines the rates. Although this has already been shown in (3, 12), we now briefly outline the formal derivation of this idea, for completeness and because an essentially identical rationale determines the average correlations. In order for the net synaptic input in equation (15) to be of order unity (to avoid complete quiescence or saturation), it has to be true that

    Σ_β J_{αβ} m_β = Σ_{β=E,I} J_{αβ} m_β + J_{αX} m_X ∼ O(ǫ)

Asymptotically, this is a linear equation that determines the firing rates at equilibrium,

    Σ_{β=E,I} J_{αβ} m_β = −J_{αX} m_X

so that

    m_α = −Σ_{β=E,I} J^{−1}_{αβ} J_{βX} m_X ≡ A_α m_X      (18)

Thus, asymptotically, the population-averaged firing rate of each population is proportional to the population-averaged rate of the external neurons. The conditions for this solution to be realized have been described in (12).
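Equation (18) is a 2×2 linear solve. A minimal numerical sketch, with hypothetical coupling values chosen only to respect the usual sign structure (excitatory columns positive, inhibitory column negative):

```python
import numpy as np

# Population-level couplings J_ab = j_ab * p_ab (illustrative values):
J = np.array([[1.0, -2.0],     # [J_EE, J_EI]
              [1.5, -2.5]])    # [J_IE, J_II]
JX = np.array([1.2, 1.0])      # [J_EX, J_IX]
mX = 0.1                       # external rate

A = -np.linalg.solve(J, JX)    # A_alpha from eq. (18)
m = A * mX                     # balanced rates (m_E, m_I)
```

With these numbers A = (2.0, 1.6) and m = (0.2, 0.16): both rates come out positive and away from saturation, which is the self-consistency condition for the balanced solution.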

1.2.2 Instantaneous Pair-wise Correlations

In order to obtain an expression for the instantaneous pair-wise correlations in equations (11-12), one needs to evaluate terms of the type 〈Θ(h^α_i) σ^β_j〉. Let us first rewrite them in terms of the conditional probability that neuron αi is active given that neuron βj is active at the same time. To do this, we note that

    〈Θ(h^α_i) σ^β_j〉 = Σ_~σ P(~σ) σ^β_j Θ(h^α_i) = Σ_~σ P(~σ(σ^β_j) | σ^β_j) P(σ^β_j) σ^β_j Θ(h^α_i)
                     = m^β_j Σ_{~σ(σ^β_j)} P(~σ(σ^β_j) | σ^β_j = 1) Θ(h^α_i(σ^β_j) + J^{αβ}_{ij})
                     = m^β_j 〈Θ(h^α_i(σ^β_j) + J^{αβ}_{ij}) | σ^β_j = 1〉

where 〈. . . | σ_i = 1〉 ≡ Σ_{~σ(σ_i)} P(~σ(σ_i) | σ_i = 1) (. . .) is an average over the conditional distribution of the network activity ~σ given that neuron σ_i is active, and ~σ(σ_i) means that neuron σ_i is excluded from all those included in ~σ (and similarly for h^α_i(σ^β_j)). The average over the conditional distribution is equivalent to an average over a Gaussian random variable

    z^α_i(σ^β_j) = h^α_i(σ^β_j) + J^{αβ}_{ij}

with mean and variance given by

    µ̄^α_i ≡ 〈z^α_i(σ^β_j) | σ^β_j = 1〉
    (s̄^α_i)^2 ≡ 〈(z^α_i(σ^β_j))^2 | σ^β_j = 1〉 − (µ̄^α_i)^2

(bars denote these conditional quantities), so that

    〈Θ(h^α_i) σ^β_j〉 = H( −µ̄^α_i/√((s̄^α_i)^2) ) m^β_j      (19)

We develop the previous expression in two steps. First, we relate averages over the conditional distribution to averages over the equilibrium distribution by noting that

    〈σ^α_i | σ^β_j = 1〉 = 〈σ^α_i (1 + δσ^β_j/m^β_j)〉
    〈σ^α_i σ^γ_k | σ^β_j = 1〉 = 〈σ^α_i σ^γ_k (1 + δσ^β_j/m^β_j)〉

Second, we relate properties of h^α_i(σ^β_j) to properties of h^α_i by recalling that

    h^α_i(σ^β_j) = h^α_i − J^{αβ}_{ij} σ^β_j
    (h^α_i(σ^β_j))^2 = (h^α_i)^2 + (J^{αβ}_{ij})^2 σ^β_j − 2 J^{αβ}_{ij} h^α_i σ^β_j

which leads to

    〈z^α_i(σ^β_j) | σ^β_j = 1〉 = 〈h^α_i (1 + δσ^β_j/m^β_j)〉
    〈(z^α_i(σ^β_j))^2 | σ^β_j = 1〉 = 〈(h^α_i)^2 (1 + δσ^β_j/m^β_j)〉

Using these equations, one can express the conditional rate in terms of the mean µ^α_i and variance (s^α_i)^2 of the current over the (unconditional) equilibrium distribution. This is done by noting that

    〈(h^α_i)^2 δσ^β_j〉 = 〈(δh^α_i)^2 δσ^β_j〉 + 2µ^α_i 〈δh^α_i δσ^β_j〉

and making the definitions

    A^{αβ}_{ij} ≡ 〈δh^α_i δσ^β_j〉      B^{αβ}_{ij} ≡ 〈(δh^α_i)^2 δσ^β_j〉

one obtains

    µ̄^α_i = µ^α_i + (A^{αβ}_{ij}/m^β_j)
    (s̄^α_i)^2 = (s^α_i)^2 + (B^{αβ}_{ij}/m^β_j) − (A^{αβ}_{ij}/m^β_j)^2

Equation (19) therefore becomes

    〈Θ(h^α_i) σ^β_j〉 = H( −[µ^α_i + A^{αβ}_{ij}/m^β_j] / √( (s^α_i)^2 + B^{αβ}_{ij}/m^β_j − (A^{αβ}_{ij}/m^β_j)^2 ) ) m^β_j      (20)

If an asynchronous state exists in this network, the population averages of the quantities A^{αβ}_{ij}/m^β_j and (B^{αβ}_{ij}/m^β_j) − (A^{αβ}_{ij}/m^β_j)^2, which quantify the difference between the conditional and unconditional means and variances of the current to neuron αi respectively, must be of order ∼ O(1/N) at most. Imposing these conditions results in a series of equations that set the value, not only of the population-averaged correlations, but also of a number of other properties of the activity in the network, such as the degree to which cells with a higher firing rate tend to have a higher correlation with all other neurons in the network. Here, however, we will only describe how to calculate the population-averaged correlations and firing rates in the network, for which, as we show below, it is sufficient to keep track of terms linear in A^{αβ}_{ij}/m^β_j (13). Developing equation (20) up to first order in A^{αβ}_{ij}/m^β_j, one obtains

    〈Θ(h^α_i) σ^β_j〉 = m^α_i m^β_j + ṁ^α_i A^{αβ}_{ij} + O((A^{αβ}_{ij})^2) + O(B^{αβ}_{ij})

where

    ṁ^α_i ≡ ∂m^α_i/∂µ^α_i = ∂H(−µ^α_i/√((s^α_i)^2))/∂µ^α_i

is the slope of the input-output relationship of the neuron, evaluated at the value the mean current takes in equilibrium. This allows us to write the following expression for the (microscopic) instantaneous correlations at equilibrium

    (τ_α + τ_β) r^{αβ}_{ij} = ṁ^α_i A^{αβ}_{ij} τ_β + ṁ^β_j A^{βα}_{ji} τ_α      (21)
    (τ_α + τ_X) r^{αX}_{ij} = ṁ^α_i A^{αX}_{ij} τ_X      (22)

where

    A^{αβ}_{ij} = J^{αβ}_{ij} m^β_j (1 − m^β_j) + Σ_{γk ≠ βj} J^{αγ}_{ik} r^{γβ}_{kj}      (23)

The quantity A^{αβ}_{ij} measures the influence of pre-synaptic cell βj on the firing of post-synaptic cell αi. The first term in A^{αβ}_{ij} contains the contribution of a direct connection from cell βj to cell αi to their pair-wise correlation; this effect is proportional to the temporal variance of the pre-synaptic cell, m^β_j (1 − m^β_j). The second term contains the contributions of all those cells which project to the post-synaptic cell αi and with which the pre-synaptic cell βj is correlated.

We would now like to obtain expressions for the population-averaged correlations r_{αβ} ≡ (1/N^2) Σ_{ij} r^{αβ}_{ij} (if α = β in the previous expression, the pre-factor should be 1/N(N − 1)). As described above, we will do this by averaging over the distribution of randomly connected networks P(J). Let us start with the external-recurrent correlations in equation (22). Averaging over P(J) one gets

    (τ_α + τ_X) r_{αX} = ǫ J̃_{αX} a_X + (1/ǫ) Σ_{γ=E,I} J̃_{αγ} r_{γX}      (24)

where a_X ≡ m_X − q_X and J̃_{αβ} ≡ ṁ_α J_{αβ}, with ṁ_α ≡ [ṁ^α_i] = ∂m_α/∂µ_α. In the last section, we saw that, due to the strong connectivity, there was a mismatch between the magnitude of the net input to the neurons and their activity unless there was a precise cancellation of net excitation and inhibition. A similar situation takes place here. Even if we assume that the network is indeed asynchronous, so that all population-averaged correlations scale as r_{αβ} ∼ O(ǫ^2), the l.h.s. of the previous equation is ∼ O(ǫ^2) and the r.h.s. is ∼ O(ǫ). Thus, the equation has no solution unless there is a precise cancellation between the different terms on the r.h.s. To reveal this explicitly, let us express r_{γX} as a Taylor series in ǫ,

    r_{γX} = Σ_{n=0} r^{(n)}_{γX} ǫ^n

substitute this expression in equation (24), and evaluate the equation at each order in ǫ separately. Doing this shows that the first non-zero term in the series is r^{(2)}_{γX}, whose value is given by the solution of

    Σ_{γ=E,I} J̃_{αγ} r^{(2)}_{γX} + J̃_{αX} a_X = 0  →  r^{(2)}_{γX} = −Σ_{α=E,I} J̃^{−1}_{γα} J̃_{αX} a_X = A_γ a_X      (25)

The last equality follows from the fact that

    Σ_{α=E,I} J̃^{−1}_{γα} J̃_{αX} = Σ_{α=E,I} J^{−1}_{γα} J_{αX} ≡ −A_γ

Thus, we have shown that

    r_{αX} = ǫ^2 A_α a_X + O(ǫ^3)

Using similar arguments, equation (21) can be written as

    (τ_α + τ_β) r_{αβ} = [ ǫ J̃_{αβ} a_β + (1/ǫ) ( Σ_{γ=E,I} J̃_{αγ} r_{γβ} + J̃_{αX} r_{Xβ} ) ] τ_β
        + [ ǫ J̃_{βα} a_α + (1/ǫ) ( Σ_{γ=E,I} J̃_{βγ} r_{γα} + J̃_{βX} r_{Xα} ) ] τ_α      (26)

Again, expanding the r_{αβ} and a_α in powers of ǫ and evaluating the previous equation at each order, the term ∼ O(ǫ) results in the following equation

    [ J̃_{αβ} a^{(0)}_β + Σ_{γ=E,I} J̃_{αγ} r^{(2)}_{γβ} + J̃_{αX} r^{(2)}_{Xβ} ] τ_β + [ J̃_{βα} a^{(0)}_α + Σ_{γ=E,I} J̃_{βγ} r^{(2)}_{γα} + J̃_{βX} r^{(2)}_{Xα} ] τ_α = 0

where a^{(0)}_α = m^{(0)}_α − q^{(0)}_α is the leading-order population-averaged temporal variance of the activity of cells in population α. In the general case, the solution to the previous equation is that the first three and last three terms (which are identical if we exchange the values of α and β) are both equal to zero, i.e., r^{(2)}_{αβ} is the solution of

    Σ_{γ=E,I} J̃_{βγ} r^{(2)}_{γα} + J̃_{βX} r^{(2)}_{Xα} + J̃_{βα} a^{(0)}_α = 0

It is useful to define the correlation between the instantaneous activities of populations α and β at equilibrium,

    v_{αβ} = lim_{t→∞} 〈 Σ_i (δσ^α_i(t)/N) Σ_j (δσ^β_j(t)/N) 〉 = r_{αβ} + ǫ^2 δ_{αβ} a_α      (27)

in terms of which the previous equation can be written as

    Σ_{γ=E,I} J̃_{βγ} v^{(2)}_{γα} + J̃_{βX} r^{(2)}_{Xα} = 0  →  v^{(2)}_{γα} = −Σ_{β=E,I} J̃^{−1}_{γβ} J̃_{βX} r^{(2)}_{Xα} = A_γ A_α a_X

The solution for the population-averaged instantaneous pair-wise correlations in the steady states of our network is therefore

    r_{EX} = ǫ^2 A_E a_X + O(ǫ^3)
    r_{IX} = ǫ^2 A_I a_X + O(ǫ^3)      (28)
    r_{EE} = ǫ^2 (A^2_E a_X − a^{(0)}_E) + O(ǫ^3)
    r_{II} = ǫ^2 (A^2_I a_X − a^{(0)}_I) + O(ǫ^3)
    r_{EI} = ǫ^2 A_E A_I a_X + O(ǫ^3)      (29)
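Equations (28)-(29) can be evaluated directly once A_E, A_I and the activity variances are known. In the sketch below the values are illustrative: the A's are hypothetical balance solutions, and the variances are computed as a = m(1 − m), which assumes homogeneous rates within each population (so that q_α = m_α²).

```python
import numpy as np

def predicted_correlations(AE, AI, aX, a0E, a0I, N):
    """Leading-order population-averaged correlations, eqs. (28)-(29),
    with eps^2 = 1/N. Returns a dict of r_ab values."""
    eps2 = 1.0 / N
    return {
        "EX": eps2 * AE * aX,
        "IX": eps2 * AI * aX,
        "EE": eps2 * (AE**2 * aX - a0E),
        "II": eps2 * (AI**2 * aX - a0I),
        "EI": eps2 * AE * AI * aX,
    }

# Illustrative: mX=0.1 -> aX=0.09; mE=0.2 -> a0E=0.16; mI=0.16 -> a0I=0.1344.
r = predicted_correlations(AE=2.0, AI=1.6, aX=0.09, a0E=0.16, a0I=0.1344, N=10_000)
```

Note how every entry scales as 1/N: this is precisely the asynchronous scaling the derivation set out to establish.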

1.2.3 Tracking of fluctuations in the asynchronous state

There is a simple way of expressing the leading-order solution for the correlations in the network. Let us consider the difference between the normalized instantaneous activities of the excitatory and inhibitory populations $m_{\alpha}(t) = \sum_i \sigma^{\alpha}_i(t)/N$ and the instantaneous activity of the external population. They are given by

$$\Delta_{\alpha X}(t) \equiv \frac{1}{A_{\alpha} m_X} \Big( \sum_i \sigma^{\alpha}_i(t)/N \Big) - \frac{1}{m_X} \Big( \sum_i \sigma^{X}_i(t)/N \Big), \qquad \alpha = E, I$$

We can measure the degree to which the activity in the recurrent network tracks the instantaneous activity of the external population by calculating the variance of $\Delta_{\alpha X}(t)$ at equilibrium,

$$\big\langle (\Delta_{\alpha X}(t) - \langle \Delta_{\alpha X}(t) \rangle)^2 \big\rangle = \frac{1}{N m_X^2} \Big( (a_{\alpha} + N r_{\alpha\alpha})/A_{\alpha}^2 + a_X - 2 N r_{\alpha X}/A_{\alpha} \Big)$$

Substituting the expressions (28)-(29) into this formula, one finds that, to leading order, it vanishes. Similarly, it is simple to show in the same way that the variance of $\Delta_{EI}(t)$ also vanishes to leading order.

Thus, although the magnitude of the temporal fluctuations of the instantaneous firing rate of each population is $\sim O(\epsilon)$, the magnitude of the temporal fluctuations of the instantaneous \emph{difference} in firing rates is $\sim O(\epsilon^{3/2})$. This implies that as the network gets larger, the instantaneous firing rates of the three populations track each other more faithfully, and that, asymptotically, the tracking is \emph{perfect}, i.e., as $N \to \infty$,

$$m_E(t) = A_E\, m_X(t) \qquad (30)$$
$$m_I(t) = A_I\, m_X(t) \qquad (31)$$

This result captures the essential difference between the sparse balanced network and the densely connected one. In the sparse balanced network, the recurrent feedback results in linear propagation of the average firing rate. In the dense network, not only the average firing rate, but also the instantaneous fluctuations in activity are faithfully propagated. In these conditions, referring to the average firing rate as 'signal' and to the fast temporal fluctuations as 'noise' becomes questionable, since both are propagated with the same accuracy.

1.2.4 Balance of current correlations

The leading-order solution for the average pair-wise correlations in the network leads to a cancellation between the different components of the average correlation between the currents to a pair of neurons. To see this, one just needs to note that

$$c^{\alpha\beta}_{ij} \equiv \langle \delta h^{\alpha}_i\, \delta h^{\beta}_j \rangle = \sum_{\gamma k} A^{\alpha\gamma}_{ik} \big(J^{\gamma\beta}_{kj}\big)^{t}$$

Thus, the average correlation between the synaptic currents to cells in populations $\alpha$ and $\beta$ is equal to

$$c_{\alpha\beta} = \sum_{\gamma} J_{\alpha\gamma}\, a_{\gamma}\, J^{t}_{\gamma\beta} + \frac{1}{\epsilon^2} \sum_{\gamma\lambda} J_{\alpha\lambda}\, r_{\lambda\gamma}\, J^{t}_{\gamma\beta} \qquad (32)$$

Terms proportional to $a_{\gamma}$ in the previous sum are the contribution of common input to the average current correlation. The other terms measure the contribution of correlations between pre-synaptic inputs to the average current correlation. Given that the leading order of the population-averaged correlations in firing activity is $\sim O(\epsilon^2)$, the leading order of each term in the previous sum is of order unity. However, it is straightforward to check that if one substitutes the solution in equations (28)-(29) into equation (32), the positive and negative terms cancel out. Thus,

$$c_{\alpha\beta} \sim O(\epsilon)$$
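This cancellation can be verified numerically by substituting the solutions (28)-(29) into equation (32). The following is a minimal sketch of ours, not part of the original analysis: the coupling ratios are those used for the binary-network simulations of section 2.1.1, while the variance values are arbitrary placeholders.

```python
import numpy as np

# Couplings in the ratios used for the binary-network simulations
# (section 2.1.1); variance values are arbitrary placeholders.
J = np.array([[5.0, -10.0],
              [5.0, -9.0]])        # J_EE, J_EI; J_IE, J_II
JX = np.array([5.0, 4.0])          # J_EX, J_IX
aX, a0E, a0I = 0.09, 0.15, 0.21    # temporal variances a_X, a0_E, a0_I
eps = 1e-3                         # eps = 1 / sqrt(N)

A = -np.linalg.solve(J, JX)        # balanced-state gains A_E, A_I

# Leading-order correlations, equations (28)-(29), as a 3x3 matrix (E, I, X):
# r_ab = eps^2 (A_a A_b aX - delta_ab a0_a); note r_XX = 0 by construction.
Atil = np.array([A[0], A[1], 1.0])
a0 = np.array([a0E, a0I, aX])
r = eps**2 * (np.outer(Atil, Atil) * aX - np.diag(a0))

# Population-averaged current correlations, equation (32)
Jfull = np.hstack([J, JX[:, None]])            # 2x3 coupling matrix
shared = Jfull @ np.diag(a0) @ Jfull.T         # common-input term, O(1)
c = shared + (1.0 / eps**2) * Jfull @ r @ Jfull.T

print(np.abs(shared).max(), np.abs(c).max())   # O(1) terms cancel: c ~ 0
```

At this order the cancellation is exact: the common-input term is of order unity, while the total current correlation vanishes up to floating-point rounding.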

Just as linear propagation of average firing rates in the sparse network is extended to linear propagation of instantaneous firing rates in the dense network, the cancellation of the mean excitatory and inhibitory synaptic inputs in the sparse network is extended to a cancellation between the positive components of the population-averaged current correlation (arising from shared input and from network amplification of correlations between excitatory cells and between interneurons) and a negative term coming from network amplification of the correlations that tracking induces between the excitatory and inhibitory cells.

The structure of synaptic current correlations is very different in a \emph{sparsely connected} network. We say that the connectivity of the network is sparse if the average number of pre-synaptic inputs per cell is independent of the network size, i.e., if the probability of connection scales as $\sim O(1/N)$. In a sparse network, such as (3, 4), the fraction of shared inputs decreases with the network size and, consequently, so does each of the components of the current correlations in an asynchronous state. Thus, in a densely connected network the asynchronous state is a purely dynamical phenomenon, whereas in a sparsely connected network it is the result of a static feature of the network architecture.


1.2.5 Delayed Correlations

Although a detailed treatment of the form of the delayed correlations (cross-correlation functions) is beyond the scope of this Supplementary Material, there is one qualitatively important point that we would like to discuss. We have just shown that the instantaneous population-averaged current correlations decay as $\sim O(1/\sqrt{N})$. How is this compatible with the instantaneous population-averaged activity correlations decaying as $\sim O(1/N)$ (equations (28), (29))? We will show that this is related to the fact that the time-scale of current correlations is much faster than the time-scale of activity correlations. To simplify the discussion, we will only consider delayed correlations between recurrent neurons.

We define the delayed correlation between cells $\alpha_i$ at time $t$ and $\beta_j$ at time $t + \tau$ as

$$r^{\alpha\beta}_{ij}(t, t+\tau) \equiv \langle \delta\sigma^{\alpha}_i(t)\, \delta\sigma^{\beta}_j(t+\tau) \rangle$$

Note that, when averaging quantities evaluated at different times, $\langle \ldots \rangle$ stands for an average over the joint probability distribution $P(\vec{\sigma}\,', t+\tau; \vec{\sigma}, t)$. Thus, delayed correlations are defined for $\tau > 0$.

Inserting the transition probabilities into equation (5), one obtains

$$\tau_{\beta} \frac{d}{d\tau}\, r^{\alpha\beta}_{ij}(t, t+\tau) = -r^{\alpha\beta}_{ij}(t, t+\tau) + \langle \delta\sigma^{\alpha}_i(t)\, \delta\Theta\big(h^{\beta}_j(t+\tau)\big) \rangle \qquad (33)$$

Instead of using the previous strategy for evaluating the terms $\langle \delta\sigma^{\alpha}_i(t)\, \delta\Theta(h^{\beta}_j(t+\tau)) \rangle$, we will now use a different approach, expressing the activity of cell $\alpha_i$ at time $t$ in terms of its afferent current at the time when it was last updated. Since there is a constant probability per unit time (equal to $1/\tau_{\alpha}$) of neuron $\alpha_i$ updating its state, the distribution of the interval since the last update is exponential with time constant $\tau_{\alpha}$. We can therefore write

$$\langle \sigma^{\alpha}_i(t)\, \Theta\big(h^{\beta}_j(t+\tau)\big) \rangle = \int_0^{\infty} \frac{d\tau'}{\tau_{\alpha}}\, \exp(-\tau'/\tau_{\alpha})\, \big\langle \Theta\big(h^{\alpha}_i(t-\tau')\big)\, \Theta\big(h^{\beta}_j(t+\tau)\big) \big\rangle \qquad (34)$$

We thus need to evaluate

$$\lim_{t\to\infty} \big\langle \Theta\big(h^{\alpha}_i(t-\tau')\big)\, \Theta\big(h^{\beta}_j(t+\tau)\big) \big\rangle \equiv r^{\alpha\beta}_{ij}(\tau + \tau')$$

The quantity $r^{\alpha\beta}_{ij}(\tau)$ measures the probability that the current to cell $\alpha_i$ and the current to cell $\beta_j$ a time $\tau$ into the future are both above threshold at equilibrium. If the distribution of the currents can be approximated by a Gaussian, it can be expressed in terms of the cross-correlogram of the currents,

$$c^{\alpha\beta}_{ij}(\tau) \equiv \lim_{t\to\infty} \langle \delta h^{\alpha}_i(t)\, \delta h^{\beta}_j(t+\tau) \rangle$$


by the following equation

$$r^{\alpha\beta}_{ij}(\tau) = H\big(\psi^{\alpha}_i,\, \psi^{\beta}_j,\, \rho^{\alpha\beta}_{ij}(\tau)\big)$$

where we have defined

$$\rho^{\alpha\beta}_{ij}(\tau) \equiv c^{\alpha\beta}_{ij}(\tau) \big/ \big(s^{\alpha}_i\, s^{\beta}_j\big)$$

and

$$H(a, b, \rho) \equiv \frac{1}{2\pi} \int_a^{\infty} dy\; e^{-y^2/2} \int_{\frac{b - \rho y}{\sqrt{1-\rho^2}}}^{\infty} dx\; e^{-x^2/2}$$

The quantity $H(a, b, \rho)$ represents the probability that two Gaussian variables with means and variances characterized by $a$, $b$ and correlation coefficient $\rho$ will be greater than zero at the same time. Since correlations are weak in the asynchronous state, it is convenient to consider the expansion of $H(a, b, \rho)$ near $\rho \sim 0$,

$$H(a, b, \rho) = H(a)\, H(b) + \frac{e^{-a^2/2}}{\sqrt{2\pi}}\, \frac{e^{-b^2/2}}{\sqrt{2\pi}}\, \rho + O(\rho^2)$$
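The weak-correlation expansion of $H(a,b,\rho)$ can be checked numerically by Monte Carlo sampling of a correlated Gaussian pair. A short sketch of ours, with arbitrary values of $a$, $b$ and a small $\rho$:

```python
import numpy as np
from math import erf, exp, pi, sqrt

a, b, rho = 0.5, -0.3, 0.05          # weak correlation, as in the asynchronous state

# Direct Monte Carlo estimate of H(a, b, rho):
# P(y > a and x > b) for standard Gaussians with correlation rho
rng = np.random.default_rng(1)
n = 2_000_000
y = rng.standard_normal(n)
x = rho * y + sqrt(1.0 - rho**2) * rng.standard_normal(n)
H_mc = np.mean((y > a) & (x > b))

# First-order expansion: H(a)H(b) + rho * phi(a) * phi(b)
H1 = lambda u: 0.5 * (1.0 - erf(u / sqrt(2.0)))   # Gaussian survival function
phi = lambda u: exp(-u * u / 2.0) / sqrt(2.0 * pi)
H_lin = H1(a) * H1(b) + rho * phi(a) * phi(b)

print(abs(H_mc - H_lin))   # small: the discrepancy is O(rho^2) plus MC noise
```

For $\rho = 0.05$ the neglected $O(\rho^2)$ term is well below the Monte Carlo noise floor, so the two estimates agree to three decimal places.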

As in previous sections, in order to characterize the leading-order value of the correlations, it is sufficient to keep only the term linear in $\rho$ in this expansion. Using these expressions, we can write equation (33) at equilibrium as

$$\tau_{\beta} \frac{d}{d\tau}\, r^{\alpha\beta}_{ij}(\tau) = -r^{\alpha\beta}_{ij}(\tau) + \int_0^{\infty} \frac{d\tau'}{\tau_{\alpha}}\, \exp(-\tau'/\tau_{\alpha})\; m^{\alpha}_i\, c^{\alpha\beta}_{ij}(\tau + \tau')\, m^{\beta}_j$$

which is equivalent to

$$\tau_{\alpha}\tau_{\beta}\, \frac{d^2}{d\tau^2}\, r^{\alpha\beta}_{ij}(\tau) + (\tau_{\alpha} - \tau_{\beta})\, \frac{d}{d\tau}\, r^{\alpha\beta}_{ij}(\tau) - r^{\alpha\beta}_{ij}(\tau) + m^{\alpha}_i\, c^{\alpha\beta}_{ij}(\tau)\, m^{\beta}_j = 0$$

The population-averaged version of the previous equation is

$$\tau_{\alpha}\tau_{\beta}\, \frac{d^2}{d\tau^2}\, r_{\alpha\beta}(\tau) + (\tau_{\alpha} - \tau_{\beta})\, \frac{d}{d\tau}\, r_{\alpha\beta}(\tau) - r_{\alpha\beta}(\tau) + m_{\alpha}\, c_{\alpha\beta}(\tau)\, m_{\beta} = 0 \qquad (35)$$

and the cross-correlogram of the population-averaged currents is given by

$$c_{\alpha\beta}(\tau) = \sum_{\gamma=E,I,X} J_{\alpha\gamma}\, a_{\gamma}(\tau)\, J^{t}_{\gamma\beta} + \frac{1}{\epsilon^2} \sum_{\gamma\lambda=E,I,X} J_{\alpha\gamma}\, r_{\gamma\lambda}(\tau)\, J^{t}_{\lambda\beta}$$

where $a_{\alpha}(\tau) = (1/N) \sum_i a^{\alpha}_i(\tau)$ is the population-averaged auto-correlation function of neurons in population $\alpha$. For each neuron it is defined as $a^{\alpha}_i(\tau) = \lim_{t\to\infty} \langle \delta\sigma^{\alpha}_i(t)\, \delta\sigma^{\alpha}_i(t+\tau) \rangle$, and it can be evaluated with the same techniques we have outlined for the cross-correlation functions.


Equation (35) can be studied using singular perturbation techniques. For our present purposes, note that for $\tau$ of order unity, i.e., not very close to the origin ($\tau \sim 0$), the derivatives in the first two terms of the equation do not change the dependence of those terms on $\epsilon$. Thus, expanding $r_{\alpha\beta}(\tau)$ in powers of $\epsilon$ as in the previous sections, one obtains that the leading order of the population-averaged delayed correlations at lags of order unity has to obey

$$\sum_{\gamma=E,I,X} J_{\alpha\gamma}\, a^{(0)}_{\gamma}(\tau)\, J^{t}_{\gamma\beta} + \sum_{\gamma\lambda=E,I,X} J_{\alpha\gamma}\, r^{(2)}_{\gamma\lambda}(\tau)\, J^{t}_{\lambda\beta} = 0$$

The solution of this equation is that, at lags of order unity, the leading order of the cross-correlation function of the instantaneous network activity in each population (the delayed version of equation (27)) is again $\sim O(\epsilon^2)$ and proportional to the external auto-correlation function, i.e.,

$$v_{\alpha\beta}(\tau) = \epsilon^2 A_{\alpha} A_{\beta}\, a_X(\tau) + O(\epsilon^3)$$

Thus, as expected, the population-averaged delayed correlations are $\sim O(\epsilon^2)$. But this means that the first three terms in equation (35) are also $\sim O(\epsilon^2)$, which implies that

$$c^{(0)}_{\alpha\beta}(\tau) = c^{(1)}_{\alpha\beta}(\tau) = 0$$

This is what we wanted to show. Although the instantaneous current correlations are $c_{\alpha\beta} \sim O(\epsilon)$, at lags of order unity $c_{\alpha\beta}(\tau) \sim O(\epsilon^2)$. Thus, the time-scale of the current correlations is very fast. In fact, it can be shown that the time-scale of the population-averaged correlations between the recurrent currents is $\sim O(\epsilon)$. The biological interpretation of this is that the time-scale of the tracking process is $\sim O(\epsilon)$, leading to a situation where both the peak height and the time-scale of the population-averaged current correlations are of that same order, implying that the area under the current cross-correlogram is $\sim O(\epsilon^2)$.

The network is therefore self-consistent, because the instantaneous correlations in activity are proportional to the area under the current cross-correlogram. This can be shown by employing the same approach we used in equation (34) on the instantaneous correlations in equation (11). Doing this, one can obtain an alternative expression for the instantaneous population-averaged correlations,

$$(\tau_{\alpha} + \tau_{\beta})\, r_{\alpha\beta} = \int_{-\infty}^{0} d\tau\, \exp(\tau/\tau_{\beta})\; m_{\alpha}\, c_{\alpha\beta}(-\tau)\, m_{\beta} + \int_{0}^{\infty} d\tau\, \exp(-\tau/\tau_{\alpha})\; m_{\alpha}\, c_{\alpha\beta}(\tau)\, m_{\beta}$$

We have just shown that the integrands in this equation are $\sim O(\epsilon^2)$ except in a thin slice of width $\sim O(\epsilon)$ around $\tau = 0$, in which they are $\sim O(\epsilon)$. Thus, the whole integral is $\sim O(\epsilon^2)$, as it should be when the state of the network is asynchronous.


1.2.6 Self-consistent propagation of asynchronous activity

We have assumed throughout that neurons from the external population were independent. We made this assumption in order to avoid having to define the correlation structure of the external network \emph{ad hoc}. Our results, however, are still valid if the external neurons are not independent, as long as the external network is itself asynchronous. Note that this does not qualitatively change the properties of the external input: the mere existence of common input already makes the average correlation between the external components of the synaptic currents $\sim O(1)$. Thus, as long as the population-averaged correlation between the external neurons is $\sim O(1/N)$, it will only lead to quantitative changes in the correlation structure of the input from the external network. Hence, asynchronous activity can be robustly propagated between densely connected, strongly coupled neuronal circuits.

Asymptotic expressions for how rates and correlations propagate from one asynchronous network to another can be derived from the simple equations (30)-(31). Let us consider two asynchronous networks, with network 2 receiving excitation from network 1. In the large-$N$ limit, it holds that

$$m_{E2}(t) = A^{1\to 2}_{E}\, m_{E1}(t)$$
$$m_{I2}(t) = A^{1\to 2}_{I}\, m_{E1}(t)$$

Equations for the transformation of rates and correlations can be obtained by equating the temporal average, variance and covariance of the previous expressions,

$$m_{E2} = A^{1\to 2}_{E}\, m_{E1}$$
$$m_{I2} = A^{1\to 2}_{I}\, m_{E1}$$
$$r_{EE2} = \frac{1}{N} \Big[ \big(A^{1\to 2}_{E}\big)^2 (a_{E1} + N r_{EE1}) - a_{E2} \Big]$$
$$r_{II2} = \frac{1}{N} \Big[ \big(A^{1\to 2}_{I}\big)^2 (a_{E1} + N r_{EE1}) - a_{I2} \Big]$$
$$r_{EI2} = \frac{1}{N} \Big[ A^{1\to 2}_{I} A^{1\to 2}_{E}\, (a_{E1} + N r_{EE1}) \Big]$$

where we again used $a_{\alpha} = m_{\alpha} - q_{\alpha}$. Note that changes in the structure of the temporal fluctuations in network 1, either by changing $r_{EE1}$ or $q_{E1}$, lead to changes in the correlation structure of network 2 without changing its average firing rate.


2 Methods

2.1 Numerical Methods

2.1.1 Binary Networks

The results in Fig. 2 of the main text were obtained by numerical simulation of binary networks evolving in time under an asynchronous update rule (see, e.g., (9)). The architecture of the networks was identical to that studied analytically. The update rule was as follows: on each iteration, a neuron $i$ out of the $3N$ that compose the network was chosen at random. If this neuron belonged to the external population, a random number $x$ was generated, and its activity was set to one if $x < m^X_i$ and to zero otherwise. The parameter $m^X_i$ thus determines the fraction of updates that result in the cell being activated, i.e., the cell's mean activity level. In all our simulations, all external neurons had the same activity level, i.e., $m^X_i = m_X$ for all $i$. If the neuron to be updated belonged to the recurrent network, the synaptic current to this neuron was calculated from the instantaneous activity of all other cells in the network that projected to it. If the synaptic current was larger than a threshold $\theta$, the neuron was set to one; otherwise it was set to zero. Note that, for all cells, the outcome of the updating process is independent of the neuron's state at the time of the update. Under this rule, each neuron is updated once every $3N$ iterations on average, an interval which is defined to be equal to the neuronal time constant $\tau$. Thus, the temporal resolution of the dynamics increases with the network size, i.e., $dt = \tau/3N$. Since neurons from all three populations update their state at this rate, the time constants of the three populations are the same, i.e., $\tau_E = \tau_I = \tau_X = \tau$. The biological interpretation of $\tau$ is the effective time constant with which a neuron changes its firing activity, i.e., $\tau \sim 10$-$25$ ms.

Each dot in each curve in Fig. 2C of the main text is the average over 50 simulations in which the same parameters were used except for the random seed that sets the connectivity matrix. Each simulation for each network lasted $T_{sim} = 200{,}000\,\tau$. The cross-correlograms in Fig. 2F of the main text are averages over 10 statistically identical simulations of the same duration $T_{sim}$. The parameters used in all simulations were $p_{\alpha\beta} = p = 0.2$, $m_X = 0.1$, $\theta = 1$, $j_{EE} = 5/\sqrt{N}$, $j_{EI} = -10/\sqrt{N}$, $j_{IE} = 5/\sqrt{N}$, $j_{II} = -9/\sqrt{N}$, $j_{IX} = 4/\sqrt{N}$, $j_{EX} = 5/\sqrt{N}$. The synaptic parameters were chosen so that (apart from the scaling factors) the effective size of a synaptic input $p\, j_{\alpha\beta}$ would be the same as that used in (3). Networks were simulated using custom-written C, C++, and MATLAB software.
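The update rule just described can be sketched in a few lines. The following is an illustration of the scheme in our own code, with a small network and the parameter values listed above; it is not the simulation software used for the figures.

```python
import numpy as np

def simulate(N=200, p=0.2, mX=0.1, theta=1.0, sweeps=100, seed=0):
    """Asynchronous binary-network update rule (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # coupling ratios from the text; weights scale as 1/sqrt(N)
    j = {('E', 'E'): 5, ('E', 'I'): -10, ('E', 'X'): 5,
         ('I', 'E'): 5, ('I', 'I'): -9, ('I', 'X'): 4}
    pops = ('E', 'I', 'X')
    # random connectivity with connection probability p
    W = {(a, b): (rng.random((N, N)) < p) * j[a, b] / np.sqrt(N)
         for a in ('E', 'I') for b in pops}
    sig = {a: np.zeros(N) for a in pops}
    for _ in range(sweeps * 3 * N):          # 3N updates = one time constant tau
        a = pops[rng.integers(3)]
        i = rng.integers(N)
        if a == 'X':                         # external cell: Bernoulli(mX)
            sig['X'][i] = float(rng.random() < mX)
        else:                                # recurrent cell: threshold rule
            h = sum(W[a, b][i] @ sig[b] for b in pops)
            sig[a][i] = float(h > theta)
    return {a: sig[a].mean() for a in pops}

m = simulate()
print(m)   # mean activities; E and I settle near A_E m_X and A_I m_X
```

With these coupling ratios the balanced-state gains are $A_E = A_I = 1$, so the recurrent populations settle near the external rate $m_X = 0.1$, up to finite-size corrections.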


2.1.2 Integrate-and-fire networks

To generate the results in Fig. 1B-F of the main text, a simple feed-forward network of current-based integrate-and-fire neurons was used. The membrane potential of each post-synaptic neuron evolved according to

$$\tau_m \frac{dV_m}{dt} = -V_m + I_E \sum_{i=1}^{N_E} s^E_i(t) - I_I \sum_{i=1}^{N_I} s^I_i(t)$$

where $\tau_m = 10$ ms is the membrane time constant and $I_{\alpha}$, $\alpha = E, I$, are constants chosen to obtain EPSPs (IPSPs when inhibitory inputs were considered) of peak amplitude 0.75 (2.5) mV. The PSCs were modeled as simple exponential decays,

$$\frac{ds^{\alpha}_i}{dt} = -s^{\alpha}_i/\tau_s + \sum_{t^{\alpha}_i} \delta(t - t^{\alpha}_i)$$

where the PSC decay time is $\tau_s = 5$ ms, and $t^{\alpha}_i$ are the spike times of pre-synaptic neuron $i$ of population $\alpha = E, I$. The spiking threshold was set at 20 mV relative to resting potential, the reset potential at 10 mV, and the refractory period at 2 ms. In Fig. 1B and the black trace of Fig. 1C (main text), all pre-synaptic inputs were excitatory ($N_E = 250$ and $N_I = 0$). In the gray curve in Fig. 1C of the main text, $N_E = 250$ and $N_I = 60$.

Each neuron of the pair analyzed in Fig. 1B of the main text shared exactly $250\,p$ inputs, where $p$ is the shared-input fraction on the x-axis of the plot. All pre-synaptic inputs were independent and had Poisson statistics with a constant rate of 5 Hz. The output rate was also $\sim$5 Hz. Each dot in the curve corresponds to 10,000 s of simulated time.

Each neuron in the pair analyzed in the black curve of Fig. 1C shared exactly 50 excitatory and 12 inhibitory inputs ($p = 0.2$), all firing at a rate $\nu_{in} = 5$ Hz. Their correlation $\rho_{in}$ (spike count measured in windows of 50 ms) was varied along the x-axis. Correlations were generated by thinning a mother Poisson spike train of rate $\nu_{in}/\rho_{in}$. Each pre-synaptic train was produced by randomly and independently keeping each spike of the mother train with probability $\rho_{in}$, followed by jittering each spike by a random time drawn from a two-tailed exponential distribution of zero mean and SD = 5 ms. Thus, all pre-synaptic trains were marginally Poisson, but had (approximately) exponential cross-correlograms of width 10 ms and an area equal to $\rho_{in}$. For the gray curve, which includes inhibition, the statistical properties of the inputs were identical except for their firing rates, which were adjusted to $\nu_{in} = 12$ Hz in order for the output rates in the presence of pre-synaptic inhibition to still be 5 Hz when $\rho_{in} = 0$.
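The thinning-plus-jitter construction described above can be sketched as follows. This is our own illustration (function names are ours); the Laplace scale is set so that the jitter SD is 5 ms, matching the two-tailed exponential of the text.

```python
import numpy as np

def correlated_poisson_pair(nu=5.0, rho=0.2, T=200.0, jitter_sd=0.005, seed=0):
    """Two Poisson trains with spike-count correlation ~rho (sketch)."""
    rng = np.random.default_rng(seed)
    # mother Poisson train of rate nu / rho on [0, T]
    n = rng.poisson(nu / rho * T)
    mother = np.sort(rng.uniform(0.0, T, n))
    trains = []
    for _ in range(2):
        keep = rng.random(mother.size) < rho          # independent thinning
        # two-tailed exponential (Laplace) jitter, zero mean, SD = jitter_sd
        spikes = mother[keep] + rng.laplace(0.0, jitter_sd / np.sqrt(2.0),
                                            keep.sum())
        trains.append(np.sort(spikes))
    return trains

s1, s2 = correlated_poisson_pair()
# spike-count correlation in 50-ms windows
edges = np.arange(0.0, 200.0, 0.05)
c1, _ = np.histogram(s1, edges)
c2, _ = np.histogram(s2, edges)
rho_hat = np.corrcoef(c1, c2)[0, 1]
print(rho_hat)   # close to rho = 0.2 (slightly below, because of the jitter)
```

Each thinned train is marginally Poisson at rate $\nu_{in}$, while shared mother spikes produce the desired count correlation; the independent jitters spread the coincidences over a $\sim$10 ms wide cross-correlogram.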


The results in Fig. 3 of the main text were obtained by numerical simulation of spiking conductance-based integrate-and-fire networks. Except for the sizes of each population, the architecture of the spiking and binary networks was identical. The spiking network was composed of $N_E = 4100$, $N_I = 1100$ and $N_X = 4000$ neurons. The membrane potential of each neuron $i = 1, \ldots, N_{\alpha}$ from population $\alpha = E, I$ of the recurrent network evolves according to

$$C_m \frac{dV^{\alpha}_i}{dt} = -g_L (V^{\alpha}_i - V_L) + I_{\alpha_i E} + I_{\alpha_i I} + I_{\alpha_i X} + I_{app} \qquad (36)$$

The total synaptic currents to this neuron, $I_{\alpha_i \beta}$, $\beta = E, I, X$, are given by

$$I_{\alpha_i \beta} = -\sum_{j=1}^{N_{\beta}} c_{\alpha_i \beta_j}\, g_{\alpha_i \beta_j}\, s_{\alpha_i \beta_j}(t)\, \big(V^{\alpha}_i - V^{\beta}_{rev}\big) \qquad (37)$$

where $V^{\beta}_{rev}$ is the reversal potential of the corresponding conductance. We use $V^{E}_{rev} = V^{X}_{rev} = 0$ mV and $V^{I}_{rev} = -80$ mV. The variable $c_{\alpha_i \beta_j}$ is a binary random variable, equal to one with probability $p = 0.2$, which determines whether cell $\beta_j$ projects to cell $\alpha_i$. The variable $g_{\alpha_i \beta_j}$ measures the strength of the synaptic conductance between these two cells. All conductance strengths from cells in population $\beta$ to cells in population $\alpha$ are drawn from Gaussian distributions of means $\mu_{g_{EE}} = 2.4$ nS, $\mu_{g_{EI}} = 40$ nS, $\mu_{g_{IE}} = 4.8$ nS, $\mu_{g_{II}} = 40$ nS, $\mu_{g_{EX}} = 5.4$ nS, $\mu_{g_{IX}} = 5.4$ nS; all distributions had a coefficient of variation (i.e., $\sigma_{g_{\alpha\beta}}/\mu_{g_{\alpha\beta}}$) equal to 0.5. The quantity $s_{\alpha_i \beta_j}(t)$ represents the instantaneous value of the synaptic gating variable describing the fraction of open channels of the synapse from cell $\beta_j$ to cell $\alpha_i$. We model unitary conductance changes in response to a pre-synaptic spike as a difference of exponentials, so that the gating variable evolves according to

$$\tau_d \frac{ds_{\alpha_i \beta_j}}{dt} = x_{\alpha_i \beta_j} - s_{\alpha_i \beta_j} \qquad (38)$$

$$\tau_r \frac{dx_{\alpha_i \beta_j}}{dt} = \tau \sum_{t_{\beta_j}} \delta\big(t - t_{\beta_j} - d_{\alpha_i \beta_j}\big) - x_{\alpha_i \beta_j} \qquad (39)$$

where $\delta(\ldots)$ is the Dirac delta function, $t_{\beta_j}$ are the spike times of neuron $\beta_j$, $d_{\alpha_i \beta_j}$ is the conduction delay between the two cells, $\tau_r = 1$ ms and $\tau_d = 5$ ms are the rise and decay times of the unitary conductance change (equal for all synapses in the network), and the factor $\tau = 1$ ms ensures that the area under the unitary conductance is constant regardless of the rise and decay time constants. Conduction delays from excitatory (inhibitory) cells were drawn independently for each synapse from a uniform distribution on $[0.5, 1.5]$ ms ($[0.1, 0.9]$ ms).
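The role of the factor $\tau$ in equation (39) can be verified numerically: for a single pre-synaptic spike, the area under the gating variable equals $\tau$ regardless of the rise and decay times. A minimal forward-Euler sketch of ours:

```python
import numpy as np

# Integrate equations (38)-(39) for one pre-synaptic spike at t = 0.
# The delta input of weight tau produces a jump tau / tau_r in x.
dt, tau_r, tau_d, tau_fac = 0.01, 1.0, 5.0, 1.0   # all in ms
t = np.arange(0.0, 100.0, dt)
x = np.zeros(t.size)
s = np.zeros(t.size)
x[0] = tau_fac / tau_r        # jump produced by the delta input
for k in range(1, t.size):
    s[k] = s[k - 1] + dt * (x[k - 1] - s[k - 1]) / tau_d
    x[k] = x[k - 1] - dt * x[k - 1] / tau_r
area = s.sum() * dt
print(area)                   # ~ 1.0 ms, i.e. equal to tau
```

Because $\int x\, dt = \tau$ for the jump above and equation (38) conserves this area, the result is independent of $\tau_r$ and $\tau_d$; changing either only reshapes the unitary conductance time course.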


When the membrane potential of a cell reaches the firing threshold $V_{th} = -50$ mV, the cell fires a spike and is reset to the resting potential $V_L = -70$ mV during an absolute refractory period $\tau_{ref} = 2$ or 1 ms for pyramidal cells and inhibitory interneurons, respectively. All neurons had a membrane capacitance $C_m = 0.25$ nF and a leak conductance $g_L = 16.7$ nS, resulting in a resting membrane time constant of 15 ms. Neurons in the external network had Poisson statistics, and all fired at a constant rate of 2.5 Hz.

In Fig. 3 of the main text we simulated an intracellular recording in which active and synaptic conductances are inactivated in order to measure the dynamics of the EPSPs and IPSPs. To achieve this, the spiking mechanism was inactivated in a number of neurons. To perturb the effective amounts of excitation and inhibition experienced by these cells, an amount $I_{app}$ of DC current was injected (14, 15). In each simulation of Fig. 3E of the main text, ten groups of ten neurons each were held at average membrane potentials spanning the range between the reversal potentials of inhibition and excitation. For one group $I_{app} = 0$ nA, two groups were hyperpolarized ($I_{app} = -0.65, -1.3$ nA), and five groups were depolarized ($I_{app} = n \cdot 0.74$ nA, $n = 1, \ldots, 5$). The results in this panel were averaged over ten different simulations (50 s of simulated time) with identical parameters, differing only in the particular instantiation of their random connectivity.

Numerical integration of the differential equations in the simulations was performed using the second-order Runge-Kutta algorithm with an integration time step of 0.05 ms, using custom software written in C and C++.

2.2 Experimental Methods

Detailed descriptions of surgery and recording procedures have been published previously (16-18). Briefly, nine rats (Sprague-Dawley; 400-900 g) were anesthetized with urethane (1.2-1.5 g/kg body weight) plus additional injections of ketamine (20 mg/kg) and urethane (0.2 g/kg) as needed, and body temperature was maintained with a heating pad. Rats were placed in a stereotaxic frame or naso-orbital restraint, and a window in the scalp was prepared over the somatosensory or auditory cortex. A silicon microelectrode (NeuroNexus Technologies, Ann Arbor, MI) was attached to a micromanipulator and moved gradually to its desired depth position. Probes consisted of eight shanks (200 µm separation), each with eight staggered recording sites (20 µm vertical separation). Extracellular signals were high-pass filtered (1 Hz) and amplified (1,000× gain) using a 64-channel amplifier (Sensorium, Charlotte, VT), and digitized at 25 kHz (DataMax System; RC Electronics, Santa Barbara, CA) or 20 kHz (United Electronic Industries, Inc., Canton, MA). The location of the recording sites was estimated to be layer V by histological reconstruction of the electrode tracks, electrode depth, and firing patterns (16). As we observed no significant differences between the auditory and somatosensory data, these were pooled together in the analysis presented in the main text. Data from several of these animals were used in previous studies (16, 17). All experiments were carried out in accordance with protocols approved by the Rutgers University Animal Care and Use Committee.

2.3 Analysis Methods

2.3.1 Single-Unit Isolation

Units were isolated by a semiautomatic algorithm (http://klustakwik.sourceforge.net) followed by manual clustering (19) (http://klusters.sourceforge.net). Single units selected for further analysis had less than 10% contamination in an absolute refractory period of 2 ms and fired more than 75 action potentials in the recording session. Across the 18 recording sessions in 9 rats that we analyzed, an average of 69 (range [16:116], interquartile range [44:93]) simultaneously recorded single units met these conditions.

2.3.2 Classification of cortical state

Under urethane anesthesia, the pattern of cortical background activity spontaneously undergoes transitions between periods of inactivation, characterized by global fluctuations in network excitability (Up and Down states) similar to those seen in slow-wave sleep, and periods of activation, characterized by tonic activity more similar to REM sleep (20, 21). Periods of cortical activation were detected off-line using the magnitude of the temporal fluctuations in multi-unit activity (the summed spike trains of all well-isolated single units). Each recording session was divided into contiguous non-overlapping 10 s periods. In each period we computed the coefficient of variation (CV) of the number of events in the multi-unit spike train in 200 non-overlapping windows of 50 ms, thus obtaining a sequence of CVs at 10 s intervals. Periods of cortical activation were indicated by significantly lower CVs than periods dominated by Up-Down state transitions. The activated and inactivated states are not truly discrete, but two extremes of a continuum (21). For simplicity of analysis, however, we selected only periods at the extremes of the activation-inactivation continuum in each experiment. Although the CV values used as threshold varied between experiments, all activated periods used had an average CV of less than 0.5, and there was essentially no overlap between the distributions of CVs in the activated and inactivated periods across all experiments. For each experiment, data from several periods corresponding to each state (all > 90 s in length) were pooled. The mean duration of cortical activation (inactivation) analyzed per recording session was 610 s (438 s). While sensory stimuli were presented in some experiments, only unstimulated periods were used for the current analyses.
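The CV-based activation measure can be sketched as follows. This is our own illustration on synthetic spike trains; the rates used to build the two regimes are arbitrary.

```python
import numpy as np

def period_cvs(spikes, t_end, period=10.0, win=0.05):
    """CV of multi-unit counts in 50-ms windows, one value per 10-s period."""
    cvs = []
    for t0 in np.arange(0.0, t_end, period):
        edges = np.arange(t0, t0 + period + win / 2, win)   # 200 windows
        counts, _ = np.histogram(spikes, edges)
        if counts.mean() > 0:
            cvs.append(counts.std() / counts.mean())
    return np.array(cvs)

rng = np.random.default_rng(0)
# activated-like regime: tonic firing at ~100 Hz
tonic = np.sort(rng.uniform(0.0, 10.0, 1000))
# inactivated-like regime: Up-states (0.5 s bursts) alternating with silence
ups = np.sort(np.concatenate([rng.uniform(k, k + 0.5, 100) for k in range(10)]))
cv_act = period_cvs(tonic, 10.0)[0]
cv_inact = period_cvs(ups, 10.0)[0]
print(cv_act, cv_inact)   # activation yields the lower CV
```

Tonic activity gives a CV close to the Poisson value $1/\sqrt{\text{mean count}}$, while Up-Down alternation inflates the variance of the window counts, so the two regimes separate cleanly on this statistic.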

2.3.3 Down-state detection and exclusion

We used the following procedure to isolate periods of silence during the slow oscillation (Down-states). First, we constructed a multi-unit spike train (MUST) by merging together all well-isolated single units. We then identified inter-spike intervals (ISIs) in the MUST longer than 50 ms. We also constructed an instantaneous firing rate corresponding to the MUST by convolution with a Gaussian density of width 10 ms. For each ISI > 50 ms, the associated Down-state was said to begin at the first point in time, left of the center of that ISI, where the instantaneous rate dropped below 20% of its maximum for the whole recording session, and it was said to end at the first point in time, to the right of the center of the ISI, where the instantaneous rate went above the same (20%) threshold. Every period in between two Down-states was defined as an Up-state, unless the Up-state was shorter than 50 ms, in which case the two surrounding Down-states were merged together. In order to re-calculate correlations after Down-state removal (Fig. 4D-F of the main text), time intervals classified as Down-states were removed from the spike trains of all well-isolated cells, and the correlations between the resulting spike trains were calculated identically as during periods of activation (see next section). Note that, beyond the requirement of a minimum duration longer than 50 ms, we did not impose any condition of stationarity on Up-states (e.g., the third Up-state in Fig. 4C of the main text is clearly non-stationary). Doing so would probably further reduce the average correlation during Up-states.
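A simplified sketch of this Down-state detection procedure follows. It is our own illustration: it uses a binned approximation of the Gaussian-smoothed rate and omits the merging of Up-states shorter than 50 ms.

```python
import numpy as np

def down_states(must, t_end, dt=0.001, sd=0.010, min_isi=0.050, frac=0.2):
    """Return (start, end) times of detected silent periods (sketch)."""
    t = np.arange(0.0, t_end, dt)
    rate = np.zeros_like(t)
    for tk in must:                       # smoothed multi-unit rate, SD 10 ms
        rate += np.exp(-0.5 * ((t - tk) / sd) ** 2)
    below = rate < frac * rate.max()      # below 20% of the session maximum
    downs = []
    for k in np.flatnonzero(np.diff(must) > min_isi):
        c = int(0.5 * (must[k] + must[k + 1]) / dt)   # center of the long ISI
        if not below[c]:
            continue
        lo, hi = c, c                     # expand to the threshold crossings
        while lo > 0 and below[lo - 1]:
            lo -= 1
        while hi < t.size - 1 and below[hi + 1]:
            hi += 1
        if not downs or downs[-1] != (t[lo], t[hi]):
            downs.append((t[lo], t[hi]))
    return downs

rng = np.random.default_rng(0)
must = np.sort(np.concatenate([rng.uniform(0.0, 1.0, 60),
                               rng.uniform(1.5, 3.0, 90)]))   # silent gap 1.0-1.5 s
ds = down_states(must, 3.0)
print(ds)   # includes an interval spanning the gap around t ~ 1.25 s
```

On this synthetic train the detector returns the engineered 0.5 s silent gap (plus any incidental long ISIs in the dense segments, which the real analysis would also flag).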

2.3.4 Quantifying Spiking Correlations

When assessing correlations over long periods of time, electrode drift or slow coordinated changes in the excitability of the local network may induce spurious covariations in the activity of the recorded cells. When analyzing sensory responses, such artefactual correlations are typically removed with the shift predictor (22). To remove them during long periods of spontaneous activity, we used jitter methods (23, 24). In this approach, correlations are computed relative to a distribution of surrogates generated by jittering the spike times of the original spike trains by a certain amount $T_{jitt}$. Relationships between spike trains on time-scales shorter than $T_{jitt}$ are absent from the surrogates, but the temporal structure of all the surrogate spike trains is identical to that of the original data on time-scales significantly longer than $T_{jitt}$. Thus, if the time-scale of the correlations generated by the artefactual sources is significantly longer than $T_{jitt}$, then these correlations will be present in both the original data and its surrogates, and the difference in correlation between the real and surrogate data will only reflect network-generated coordination on time-scales shorter than $T_{jitt}$.

Correlations in spike trains at a time-scale $T_{win}$ are usually assessed by looking at the correlation between sequences of spike counts, where each spike count is the number of spikes in each spike train in a window of length $T_{win}$. Let us consider two spike trains,

$$s_i(t) = \sum_{t^{spk}_i} \delta\big(t - t^{spk}_i\big), \qquad i = 1, 2$$

The covariance between the spike count sequences associated with these two spike trains can be expressed as

$$c_{12}(T_{win}) = \frac{1}{N} \sum_{j=1}^{N} \Big[ \Big( s^{SC(T_{win})}_1(t_j) - \big\langle s^{SC(T_{win})}_1 \big\rangle \Big) \Big( s^{SC(T_{win})}_2(t_j) - \big\langle s^{SC(T_{win})}_2 \big\rangle \Big) \Big]$$

in terms of the quantities

$$s^{SC(T_{win})}_i(t_j) = \int_{t_j - T_{win}/2}^{t_j + T_{win}/2} dt'\, s_i(t')$$

representing the spike count of each of the spike trains in windows of length $T_{win}$ centered at times $\{t_j\}$. This set of times can be chosen so that the windows are non-overlapping, but we will in general consider them to be more closely spaced, to increase the sample size $N$. $s^{SC(T_{win})}_i(t)$ is the convolution of the spike train with a square kernel centered at $t$. The quantities

$$\big\langle s^{SC(T_{win})}_i \big\rangle = \frac{1}{N} \sum_{j=1}^{N} s^{SC(T_{win})}_i(t_j)$$

are the mean spike counts of each spike train. A normalized measure of correlation, the correlation coefficient of the spike counts, can be obtained by dividing the covariance by the product of the standard deviations,

$$\rho_{12}(T_{win}) = \frac{c_{12}(T_{win})}{\sqrt{c_{11}(T_{win})\, c_{22}(T_{win})}}$$

Note that $c_{12}(T_{win})$ gives the amount of covariation in the spike counts, $\langle s^{SC(T_{win})}_1 s^{SC(T_{win})}_2 \rangle$, relative to their expected amount if the two trains were independent on all time-scales, i.e., $\langle s^{SC(T_{win})}_1 \rangle \langle s^{SC(T_{win})}_2 \rangle$ (or at least on all time-scales lower than something much larger than the duration of the experiment).

We modified this procedure in two ways. First, in order to avoid discontinuities, we replaced the square kernel used in the computation of the spike count by a normalized Gaussian kernel of width $T_{win}$. The fact that the kernel is normalized changes the covariance, but not the correlation coefficient. The convolution of the spike train with the normalized Gaussian kernel, $s^{G(T_{win})}_i(t_j)$, can be considered an estimate of the instantaneous firing rate of the spike train at time $t_j$, and its average across the experiment, $\langle s^{G(T_{win})}_i \rangle = \nu_i$, is equal to the average firing rate of spike train $i$. Second, we measure covariations relative to their expected amount if the spike trains were only independent on a given time-scale $\ll T_{jitt}$. We do this by replacing the average firing rate across the experiment, $\langle s^{G(T_{win})}_i \rangle = \nu_i$, by its average $\langle s^{G(T_{win})}_i(t_j) \rangle_{jitt}$ across a family of jittered surrogates, where each jittered surrogate is obtained by adding a random Gaussian variable of zero mean and SD $= T_{jitt}$ to each spike time of the original spike train. Since both the counting kernel and the jitter distribution are Gaussian, $\langle s^{G(T_{win})}_i(t_j) \rangle_{jitt}$ is a convolution of the original spike train with a Gaussian kernel of variance $T^2_{jitt} + T^2_{win}$.

Thus, the covariation of the two spike trains on a time-scale Twin, relative to what can be expected if the two spike trains were independent on time-scales ≪ Tjitt, is given by

$$c_{12}(T_{win}, T_{jitt}) = \frac{1}{N} \sum_{j=1}^{N} \Delta s_1^{G(T_{win},T_{jitt})}(t_j) \, \Delta s_2^{G(T_{win},T_{jitt})}(t_j) \qquad (40)$$

where

$$\Delta s_i^{G(T_{win},T_{jitt})}(t_j) = s_i^{G(T_{win})}(t_j) - \langle s_i^{G(T_{win})}(t_j) \rangle_{jitt}$$

The quantity Δs_i^{G(Twin,Tjitt)}(t_j) is the convolution of the spike train s_i(t) with a mexican-hat type of kernel (of zero area) given by the difference of a Gaussian of SD = Twin minus another Gaussian of SD = √(T²jitt + T²win). Thus, algorithmically, the calculation of the covariance in equation (40) simply amounts to the dot-product of the two time series obtained by convolving the original pair of spike trains with the mexican-hat kernel, evaluated on the grid {t_j}, j = 1, …, N, divided by N. The correlation coefficient obtained with this method is given by

$$\rho_{12}(T_{win}, T_{jitt}) = \frac{c_{12}(T_{win}, T_{jitt})}{\sqrt{c_{11}(T_{win}, T_{jitt}) \, c_{22}(T_{win}, T_{jitt})}} \qquad (41)$$

Since jittering by a Gaussian kernel of width Tjitt only destroys correlations of a time-scale ≪ Tjitt, we used counting kernels with a width Twin = Tjitt/4. In the main text, we used Twin = 50 ms. The effect of using different values for Twin is described in Supplementary Fig. S2 (Section 3.2). Given Tjitt and Twin, we used a grid spacing Δt = t_{j+1} − t_j = 0.2 min(Tjitt, Twin).
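The procedure above — convolving each train with the zero-area difference-of-Gaussians kernel on the grid {t_j} and taking a normalized dot product — can be sketched as follows. This is an illustrative Python/NumPy sketch, not the original analysis code; the example trains are invented:

```python
import numpy as np

def gauss(t, sd):
    """Normalized Gaussian kernel of standard deviation sd."""
    return np.exp(-t ** 2 / (2 * sd ** 2)) / (np.sqrt(2 * np.pi) * sd)

def filtered_train(spikes, grid, t_win, t_jitt):
    """Delta s_i: the spike train filtered with the zero-area mexican-hat kernel
    (Gaussian of SD T_win minus Gaussian of SD sqrt(T_jitt^2 + T_win^2))."""
    sd_wide = np.sqrt(t_jitt ** 2 + t_win ** 2)
    lags = grid[:, None] - np.asarray(spikes)[None, :]
    return (gauss(lags, t_win) - gauss(lags, sd_wide)).sum(axis=1)

def rho_jitter(spikes1, spikes2, grid, t_win, t_jitt):
    """Jitter-corrected correlation coefficient, as in equation (41)."""
    d1 = filtered_train(spikes1, grid, t_win, t_jitt)
    d2 = filtered_train(spikes2, grid, t_win, t_jitt)
    c12 = np.mean(d1 * d2)                      # covariance, as in equation (40)
    return c12 / np.sqrt(np.mean(d1 ** 2) * np.mean(d2 ** 2))

# Toy example: trains sharing spikes are correlated on a fine time-scale,
# which the mexican-hat filtering preserves.
rng = np.random.default_rng(0)
shared = rng.uniform(0, 20, 200)
t1 = np.sort(np.concatenate([shared, rng.uniform(0, 20, 200)]))
t2 = np.sort(np.concatenate([shared, rng.uniform(0, 20, 200)]))
grid = np.arange(1.0, 19.0, 0.01)       # spacing 0.2 * min(T_jitt, T_win)
rho = rho_jitter(t1, t2, grid, t_win=0.05, t_jitt=0.2)  # positive
```

Note that the kernel has zero area, so covariations slower than Tjitt cancel out of the filtered traces.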

2.3.5 Assessing significance of correlations

For each pair of spike trains (say 1 and 2) we calculated ρ12(Twin, Tjitt) as described in the previous section. We also calculated ρ12(Twin, Tjitt)_J (J = 1, …, Njitt) for each of Njitt = 1000 jittered surrogates of each spike train pair. Surrogates were again created by adding a Gaussian random variable of zero mean and SD = Tjitt to each spike time of each train. We calculated a p-value for each ρ12(Twin, Tjitt) as

$$p = (N_{pos} + N_{neg}) / N_{jitt}$$

where

$$N_{pos} = \#\{J : \rho_{12}(T_{win}, T_{jitt})_J > |\rho_{12}(T_{win}, T_{jitt})|\}$$
$$N_{neg} = \#\{J : \rho_{12}(T_{win}, T_{jitt})_J < -|\rho_{12}(T_{win}, T_{jitt})|\}$$

In Fig. 4 of the main text, the correlation coefficient of a pair of spike trains was deemed significant if p < 0.01.
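A sketch of this significance test in Python/NumPy (the helper names are our own; the jitter-corrected ρ computation itself is the one described in Section 2.3.4):

```python
import numpy as np

def jitter_train(spikes, t_jitt, rng):
    """One surrogate: add zero-mean Gaussian noise (SD = T_jitt) to each spike."""
    return np.sort(np.asarray(spikes) + rng.normal(0.0, t_jitt, len(spikes)))

def jitter_pvalue(rho_data, rho_surrogates):
    """p = (N_pos + N_neg) / N_jitt: two-sided test against the surrogate rho's."""
    rho_surrogates = np.asarray(rho_surrogates)
    n_pos = np.sum(rho_surrogates > abs(rho_data))
    n_neg = np.sum(rho_surrogates < -abs(rho_data))
    return (n_pos + n_neg) / rho_surrogates.size

# Example: 2 of 4 surrogate rho values exceed |rho_data| in magnitude.
p = jitter_pvalue(0.5, [0.1, -0.2, 0.6, -0.7])  # p = 0.5
```

In practice each surrogate ρ is obtained by recomputing the correlation coefficient on a pair of jittered trains from `jitter_train`.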

2.3.6 Cross-correlograms

The insets of Fig. 4B in the main text show average raw cross-correlograms. The value of the raw cross-correlogram between two spike trains was calculated by realigning one of the spike trains to each spike in the other spike train. For a given time-bin tb, the value of the cross-correlogram at lag n·tb (n = …, −2, −1, 0, 1, 2, …) is given by the number of times that there was a spike in the realigned spike train between n·tb − tb/2 and n·tb + tb/2, divided by the product of the average spike counts of the two trains in a window of length tb. With this normalization, if the spike trains were independent, the value of the cross-correlogram would be equal to one at all lags (except for finite sample fluctuations). In Fig. 4B, we used tb = 5 ms.
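The raw CCG calculation can be sketched as follows (a Python/NumPy sketch, not the authors' code; we write the normalization as the expected pair count per bin under independence, which makes independent stationary trains give a value of one at all lags):

```python
import numpy as np

def raw_ccg(spikes1, spikes2, t_bin, n_lags, duration):
    """Raw CCG at lags n*t_bin, n = -n_lags..n_lags, normalized so that
    independent stationary trains give ~1 at all lags."""
    spikes2 = np.asarray(spikes2)
    counts = np.zeros(2 * n_lags + 1)
    for s in spikes1:                  # realign train 2 to each spike of train 1
        idx = np.round((spikes2 - s) / t_bin).astype(int) + n_lags
        valid = (idx >= 0) & (idx < counts.size)
        np.add.at(counts, idx[valid], 1)
    # expected pair count per lag bin if the trains were independent
    expected = len(spikes1) * len(spikes2) * t_bin / duration
    lags = np.arange(-n_lags, n_lags + 1) * t_bin
    return lags, counts / expected

# Example: two independent uniform (Poisson-like) trains, ~20 Hz for 100 s.
rng = np.random.default_rng(0)
s1 = np.sort(rng.uniform(0, 100.0, 2000))
s2 = np.sort(rng.uniform(0, 100.0, 2000))
lags, ccg = raw_ccg(s1, s2, t_bin=0.005, n_lags=20, duration=100.0)  # ccg ~ 1
```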

2.3.7 Distance-dependence of correlations

We assessed the distance dependence of the mean r and standard deviation σr of the histogram of correlations during periods of activation (Fig. S4). To do this, each pair of spike trains was assigned a distance label d, from 0 to 7, corresponding to the number of intervening shanks between the two shanks where the two cells in the pair were recorded. Pairs of cells recorded in the same shank were assigned a distance label equal to zero. We then recomputed spiking correlation histograms for each recording session, one for each distance label d, each of them composed of all pairs assigned that distance.

We calculated by simple linear regression how the particular statistic (r or σr) depended on the distance between shanks across recording sessions. In order to know whether the slope m_data of the regression was significantly different from zero, we used a non-parametric shuffle test. For each condition, each point was given by a pair (distance, correlation), corresponding, for instance, to the mean correlation of all pairs recorded in the same shank in a given experiment. We created 5000 surrogate data sets by randomly shuffling the distance labels of each point, and calculated the slope of the regression for each surrogate. The p-value reported in the text is the fraction of the surrogates with a slope greater than the absolute value of m_data or smaller than minus the absolute value of m_data.
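A sketch of this shuffle test in Python/NumPy (our own implementation; the toy data are hypothetical):

```python
import numpy as np

def slope(x, y):
    """Least-squares regression slope of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

def shuffle_test(distances, values, n_shuffles=5000, seed=0):
    """Two-sided p-value for the regression slope against shuffled distance labels."""
    rng = np.random.default_rng(seed)
    m_data = slope(distances, values)
    m_null = np.array([slope(rng.permutation(distances), values)
                       for _ in range(n_shuffles)])
    p = np.mean((m_null > abs(m_data)) | (m_null < -abs(m_data)))
    return m_data, p

# Example: a clear negative trend of correlation width with shank distance.
d = np.tile(np.arange(8), 5)          # 5 sessions x 8 distance labels
v = -0.017 * d + 0.1                  # noiseless toy values
m_data, p = shuffle_test(d, v, n_shuffles=1000)   # m_data < 0, small p
```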


3 Supplementary Results

3.1 Relationship between instantaneous firing rate fluctuations and population-averaged correlations

An intuitive way of quantifying the degree of correlation in a neuronal population could be the average across pairs, r, of the correlation coefficient of each pair in the population. For instance, if r ∼ 0.05 the population would seem to be weakly correlated, whereas if r ∼ 0.5 correlations would seem to be relatively strong. However, there is something unsatisfactory about this way of quantifying correlations, as the numerical value of r can take any value along a continuum. For instance, would correlations be strong if r ∼ 0.1? What about r ∼ 0.2?

Although in an experimental situation it is difficult to get around this problem, when analyzing a network theoretically there is an unambiguous way of differentiating networks where correlations are strong or weak. This idea was first introduced in a rigorous way in (2). Here we simply illustrate it graphically.

Consider the top row of the left two columns in Fig. S1A. They show raster plots of 50 Poisson neurons, independent on the left, and with a correlation coefficient of r = 0.1 in the middle (throughout this figure, correlations were generated exactly as in Fig. 1C of the main text; see Supplementary Methods above). The difference between these two situations does not seem striking. It becomes more obvious, however, when the number of simulated neurons is increased (second and third rows). Although the correlation between the neurons is identical in all rows of the first and second columns, the plots seem more different when there are more cells. The reason is that what the eye is picking up from the raster is the magnitude of the temporal fluctuations in the average activity of all neurons, not the correlation of each pair. The average activity, or instantaneous firing rate, of the population is shown in the fourth row of the figure, in gold, magenta and black for the populations of N = 50, 200, and 1000 neurons respectively. As is apparent in these plots, when neurons are independent, the temporal fluctuations in the instantaneous rate become smaller the larger the population. When the correlations are equal to r = 0.1, however, the magnitude of these fluctuations initially decreases but then saturates to a constant value.

[Figure S1: A, raster plots of N = 50, 200, and 1000 cells in the asynchronous (r = 0), synchronous (r = 0.1), and asynchronous (r ∼ 1/N) regimes, with the instantaneous population rate (spks/s) below each column. B, variance of the population-averaged activity as a function of network size N.]

It is easy to show this effect quantitatively by considering a set {σi} (i = 1, …, N) of N binary variables, representing for instance the presence or absence of a spike in a given time-bin. Let us assume that their variance is v and their covariance is cov. The instantaneous average activity is m = Σ_i σi / N. The variance of m is equal to V(m) = (1/N)(v + (N − 1)cov). The variance of the average V normalized by the individual variance v is therefore

$$\frac{V}{v} = \frac{1}{N}\left(1 + (N-1)r\right) \qquad (42)$$

where r is the correlation coefficient of the σi. Thus, as shown in the figure, if neurons are independent (r = 0), the magnitude of the temporal fluctuations in m (the 'error' of the mean) decays as ∼ O(1/√N), whereas if their correlation is r, the error in the mean saturates to the value √r. Importantly, this qualitative difference between the independent scenario and one with a correlation r is always there regardless of the numerical value of r. For smaller r's, the saturation happens at larger N's, but it always happens nevertheless. This observation suggests quantifying the level of correlation, not by the value of r, but by the dependence of the magnitude of the temporal fluctuations in the instantaneous rate (the error in m) on N. In (2) an asynchronous network was defined as one in which the error in m scales as ∼ O(1/√N).
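Equation (42) is easy to check numerically. In the sketch below (ours, not from the paper), N correlated binary variables are built by letting each variable copy a shared Bernoulli source with probability √r, a standard construction that yields a pairwise correlation coefficient of exactly r:

```python
import numpy as np

def correlated_binary(N, r, p, trials, rng):
    """trials x N binary variables, each Bernoulli(p), pairwise correlation r."""
    common = rng.random(trials) < p               # shared source
    private = rng.random((trials, N)) < p         # independent sources
    copy = rng.random((trials, N)) < np.sqrt(r)   # copy prob sqrt(r) -> corr r
    return np.where(copy, common[:, None], private)

rng = np.random.default_rng(1)
N, r, p = 200, 0.1, 0.5
sigma = correlated_binary(N, r, p, trials=50_000, rng=rng)
m = sigma.mean(axis=1)                # instantaneous average activity
ratio = m.var() / (p * (1 - p))       # measured V / v
predicted = (1 + (N - 1) * r) / N     # V / v from equation (42)
```

For these parameters V/v stays near (1 + (N − 1)r)/N ≈ r for large N, rather than decaying as 1/N.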

Although a network of independent neurons is asynchronous (left column of Fig. S1A), equation (42) shows that the asynchronous property does not require independence. In fact, a network of correlated neurons can be asynchronous as long as the average correlation between neurons scales as ∼ O(1/N). This is illustrated in the third column of Fig. S1A. For each N, we used r = 5/N, which leads to smaller temporal fluctuations in the rate of the population for larger networks (fourth row).

These results are summarized in Fig. S1B, in a similar format to the one used in previous discussions of this issue (25, 26) (we varied the network size N along the x-axis instead of the size of the neuronal pool being used to estimate the rate). This plot shows equation (42) for the three different cases we have just described. The error in m can be interpreted as the error in the estimation of the firing rate of our population that a downstream network would make by simple averaging. Although any given network has a fixed size, if the network is capable of being in an asynchronous state, the correlations generated by the architecture and the dynamics will not pose a fundamental limit to how well the firing rate of the population can be estimated.

3.2 Time-scale dependence of correlations in the rat cortex in vivo

We repeated the analysis of the experimental data using different counting windows Twin. We calculated correlation coefficients as described in Section 2.3.4, keeping the jitter time-scale fixed to four times the counting window, i.e., Tjitt = 4Twin. In Fig. S2 we show the average across experiments of the mean r of the correlation histogram (empty dots) and the histogram width σr (filled dots) for counting windows ranging from 10 ms to 500 ms during the ACT state.

We did not see qualitative differences from the results in Fig. 4 of the main text (in which the counting window was Twin = 50 ms) when we considered different counting windows. The mean correlation is very small (less than 0.02) for all counting windows up to Twin = 500 ms, although it shows a slight increasing tendency. Correlations on time-scales of the order of several seconds, however, are expected to reflect processes other than synaptic transmission and integration, which are, in general, one to two orders of magnitude faster. As expected, the width of the histogram goes to zero with decreasing Twin, since spike count correlations become linear in the counting window as Twin → 0 (27).


Figure S2: Effect of the counting window on the distribution of correlations. Empty and filled dots correspond to the average (plus/minus standard deviation) across recording sessions of the mean of the correlation histogram, r, and of its width, σr. The histogram is wide across a large range of counting windows.

3.3 Examples of CCGs of individual pairs

In order to ensure that the spiking correlations that we are estimating through the use of jitter methods during spontaneous activity are consistent with more conventional measures of correlation between spike trains, in Fig. S3 we show an example of one hundred cross-correlograms (CCGs) from the experiment illustrated in Fig. 4B of the main text, during the ACT period.



Figure S3: Examples of individual CCGs. One hundred raw CCGs (see Section 2.3.6, bin size = 10 ms) from the experiment shown in Fig. 4 of the main text. CCGs were chosen to span the whole range of correlations observed for this experiment. Top, distribution of correlations r for this experiment (identical to the one in Fig. 4B of the main text). The value of r for each pair is shown as a color line in the histogram and as the background color of each CCG. The gray dashed line marks the value 1, which signals no correlation. The red line shows the average CCG across 1000 jittered surrogates (jitter time = 200 ms).

We selected the 100 pairs in an effort to illustrate the diversity of CCGs associated with the diverse correlation coefficient r values that we measured. To do this, first we chose a range of correlation coefficients (calculated as described in Section 2.3.4 of the Supplementary Information) from −0.18 to 0.2, which spans almost the whole range of r in this experiment (Fig. 4B of the main text, blue). Then, we divided this range into 100 equi-spaced correlation values Rn (where R1 = −0.18 and R100 = 0.2), and for each value Rn we selected the pair with the correlation coefficient closest to Rn (in order to avoid extremely sparse CCGs, we only considered pairs in which the geometric mean of the firing rates of the two cells in the pair was above 1 Hz). Note that, because every value of correlation is equally represented in the figure, the sample of CCGs is not a random sample. Instead, the sample is meant to illustrate the typical shapes of the CCGs associated with all the r values that we observed in the experiment.

We labeled the r of each selected pair by a color, with blue and red representing negatively and positively correlated pairs respectively. The range of selected r values, along with their color labels, is shown in the top panel of Fig. S3, underneath the r histogram (identical to the one in Fig. 4B of the main text). The CCGs of the corresponding pairs are shown, as a matrix, in the bottom panel. The background color of each CCG represents the value of r for that pair, and the pairs have been arranged so that the negative-to-positive range is spanned from the lower left corner to the upper right corner of the CCG matrix. On each plot we show the raw CCG (calculated as described in Section 2.3.6 of the Supplementary Information, with bin size = 10 ms) in white. The y-axis of each plot goes from zero to the maximum of two and 1.1 times the maximum of the CCG. The dashed light-gray line is at a value of one for all CCGs, and it represents the value that (except for finite-size fluctuations) the CCG would take, at all lags, if the two spike trains were stationary and independent. In red we show the average CCG across 1000 jittered surrogates for each pair of spike trains. Both the counting window (50 ms) and the jitter time (200 ms) were equal to those used in Fig. 4 of the main text.

Note that most negative r values are associated with CCGs which have an approximately symmetric trough around zero. CCGs with a symmetric structure around zero have been interpreted as being the result of shared input. However, shared input can only induce positive correlations, i.e., CCG peaks. Thus, all of the symmetric troughs and many of the symmetric peaks are likely to be caused by specific combinations of positively and negatively correlated inputs to the pairs in question, instead of by shared inputs.

3.4 Distance dependence of correlations

Since our silicon electrodes consist of 8 shanks separated by 200 µm each, the whole electrode array spans a considerable distance across the cortex (1.4 mm). In order to investigate whether the distance between the two cells in a pair has some influence on the measured value of their spiking correlation, we recomputed correlation histograms with pairs recorded a given number of shanks away from each other (see Section 2.3.7 of the Supplementary Information) during periods of cortical activation.


Figure S4: Distance dependence of correlations. The two curves show the average across recording sessions (error bars indicate standard deviations) of the mean r of the correlation histogram (empty dots) and of the histogram width σr, as a function of the distance between the shanks where the two neurons in the pair were recorded. Zero distance means both neurons were recorded in the same shank. The mean correlation r does not depend on distance, but the correlation width σr does (see text).

As shown in Fig. S4, only σr, but not r, decreased with distance (σr slope = −0.017/mm, p < 0.001; r slope = −0.002/mm, p = 0.35; p-values against a null hypothesis of no distance dependence, see Section 2.3.7). Thus, distant pairs tended to be more weakly correlated (for both signs of correlation), but there were similar numbers of positively and negatively correlated pairs at all distances. The distribution of correlations is wider (relative to its mean) for neurons recorded in the same shank.


References and Notes

1. L. F. Abbott, C. van Vreeswijk, Phys. Rev. E 48, 1483 (1993).

2. I. Ginzburg, H. Sompolinsky, Phys. Rev. E 50, 3171 (1994).

3. C. van Vreeswijk, H. Sompolinsky, Science 274, 1724 (1996).

4. D. J. Amit, N. Brunel, Cerebral Cortex 7, 237 (1997).

5. N. Brunel, V. Hakim, Neural Computation 11, 1621 (1999).

6. D. Hansel, G. Mato, Neural Computation 15, 1 (2003).

7. S. El Boustani, A. Destexhe, Neural Computation 21, 46 (2009).

8. R. J. Glauber, Journal of Mathematical Physics 4, 294 (1963).

9. D. J. Amit, Modeling Brain Function (Cambridge University Press, Cambridge, UK, 1989).

10. J. A. Hertz, A. Krogh, R. G. Palmer, Introduction to the Theory of Neural Computation (Addison-Wesley, Wokingham, UK, 1991).

11. S. Ma, Statistical Mechanics (World Scientific, Singapore, 1985).

12. C. van Vreeswijk, H. Sompolinsky, Neural Computation 10, 1321 (1998).

13. By this we mean that keeping track of terms ∼ O((A^{αβ}_{ij})²) and ∼ O(B^{αβ}_{ij}) does not change the leading-order value of the population-averaged correlations that one obtains by neglecting them.

14. A. Hasenstaub, et al., Neuron 47, 423 (2005).

15. M. Okun, I. Lampl, Nat Neurosci 11, 535 (2008).

16. P. Bartho, et al., J Neurophysiol 92, 600 (2004).

17. A. Luczak, P. Bartho, S. L. Marguet, G. Buzsaki, K. D. Harris, Proceedings of the National Academy of Sciences 104, 347 (2007).

18. T. Klausberger, et al., Nature 421, 844 (2003).

19. K. D. Harris, D. A. Henze, J. Csicsvari, H. Hirase, G. Buzsaki, J Neurophysiol 84, 401 (2000).

20. M. Steriade, Electroencephalography (Williams and Wilkins, 1999), fourth edn.

21. E. A. Clement, et al., PLoS ONE 3, e2004 (2008).

22. W. Bair, E. Zohary, W. T. Newsome, J. Neurosci. 21, 1676 (2001).

23. S. Fujisawa, A. Amarasingham, M. T. Harrison, G. Buzsaki, Nat. Neurosci. 11, 823 (2008).

24. M. T. Harrison, S. Geman, Neural Computation 21, 1244 (2009).

25. K. H. Britten, M. N. Shadlen, W. T. Newsome, J. A. Movshon, J. Neurosci. 12, 4745 (1992).

26. M. N. Shadlen, W. T. Newsome, J. Neurosci. 18, 3870 (1998).

27. S. Panzeri, S. R. Schultz, A. Treves, E. T. Rolls, Proceedings of the Royal Society of London, Series B: Biological Sciences 266, 1001 (1999).
