
J Comput Neurosci (2009) 26:339–368. DOI 10.1007/s10827-008-0116-4

A kinetic theory approach to capturing interneuronal correlation: the feed-forward case

Chin-Yueh Liu · Duane Q. Nykamp

Received: 9 June 2008 / Revised: 19 September 2008 / Accepted: 24 September 2008 / Published online: 6 November 2008. © Springer Science + Business Media, LLC 2008

Abstract We present an approach for using kinetic theory to capture first and second order statistics of neuronal activity. We coarse grain neuronal networks into populations of neurons and calculate the population average firing rate and output cross-correlation in response to time-varying correlated input. We derive coupling equations for the populations based on first and second order statistics of the network connectivity. This coupling scheme is based on the hypothesis that second order statistics of the network connectivity are sufficient to determine second order statistics of neuronal activity. We implement a kinetic theory representation of a simple feed-forward network and demonstrate that the kinetic theory model captures key aspects of the emergence and propagation of correlations in the network, as long as the correlations do not become too strong. By analyzing the correlated activity of feed-forward networks with a variety of connectivity patterns, we provide evidence supporting our hypothesis of the sufficiency of second order connectivity statistics.

Keywords Population density · Neuronal network · Synchrony · Syn-fire chain · Connectivity patterns · Maximum entropy

    Action Editor: Carson C. Chow

C.-Y. Liu · D. Q. Nykamp (B)
School of Mathematics, University of Minnesota,
206 Church St., Minneapolis, MN 55455, USA
e-mail: [email protected]

    1 Introduction

Understanding how the brain's neuronal networks perform computations remains a difficult challenge. The interaction of large populations of neurons leads to a complex repertoire of high-dimensional activity patterns that is difficult to analyze. One possibility to reduce the dimensionality and complexity of such networks is to ignore high order interactions among neurons and simply analyze the consequences of low order interactions. Stopping at second order interactions may provide a good description, as recent evidence suggests that pairwise firing statistics among neurons may be sufficient to capture most of the higher order firing patterns (Schneidman et al. 2006; Shlens et al. 2006; Tang et al. 2008; Yu et al. 2008).

We present a kinetic theory approach that is designed to capture the second order interactions among neurons. Kinetic theory approaches have been used to model gases and plasmas, where one uses moment closure approximations to derive equations for lower order statistics of a system of particles (Ichimaru 1973; Nicholson 1992). We implement a similar approach to track the second order statistics among neurons in a population while explicitly neglecting third and higher order statistics.

    1.1 Motivation for approach

To understand the behavior of a neuronal network, one may be interested in uncovering the relationship between the network structure and the behavior. Modeling neuronal networks is challenging, as the set of possible connectivity combinations is enormous and we do not have experimental methods to determine the


fine details of connectivity. Even if we could exactly determine the connectivity of the network, one would still be faced with the challenge of determining what features of the network underlie the behavior of interest. As the exact connectivity of a given network within the brain may vary widely among individuals while the behavior of interest is maintained, all the details of the connectivity may not be important for the behavior. One would like a method to distill the connectivity down to its key features and to study how these features influence the behavior.

Recent experimental results suggest an approach that may be promising. A number of labs have provided evidence that the second order statistics of interneuronal firing patterns may be sufficient to describe a large fraction of the higher order firing patterns (Schneidman et al. 2006; Shlens et al. 2006; Tang et al. 2008; Yu et al. 2008). In a similar manner, we hypothesize that the second order statistics of connectivity patterns may be sufficient to explain a large fraction of the behavior of the brain's neuronal networks. If the second order connectivity statistics were sufficient to explain the second order statistics of firing patterns, then the above experimental results suggest they should be sufficient to explain a large fraction of the higher order firing patterns of the brain's neuronal networks.

If second order connectivity statistics do capture much of the relevant network behavior, then one can greatly simplify the space of connectivity patterns, parameterizing them by just their second order statistics. One could attempt to understand network behavior by exploring the consequences of these second order connectivity statistics. To do so, one would like a method to analyze network behavior as a function of just the second order connectivity statistics while imposing as little additional structure as possible on the higher order connectivity patterns.

One approach is to use kinetic theory to develop tools to study second order statistics of the activity and connectivity of neuronal networks. With kinetic theory, one can explicitly ignore higher order statistics and simply model second order distributions among the variables of interest. Then, one can use maximum entropy methods (Jaynes 1957) to infer higher order distributions with minimal structure.

1.2 Principles underlying our kinetic theory implementation

Our kinetic theory implementation begins with a coarse graining step where we group neurons into populations (see left two panels of Fig. 1). Within each population, we will assume the neurons have identical properties and identical statistics, so this step clearly leads to a loss of detail. For each population, we will track the distribution of neurons over state space with a density function that we call a population density function (Nykamp and Tranchina 2000). However, unlike previous approaches, we will not assume that the neurons within a population are independent. Instead, we explicitly represent the joint distribution of any pair of neurons with the population density $\rho(\mathbf{x}_1, \mathbf{x}_2, t)$, where each vector $\mathbf{x}_i$ represents a value of the state variables for a single neuron. Roughly speaking, the quantity $\rho(\mathbf{x}_1, \mathbf{x}_2, t)\,d\mathbf{x}_1\,d\mathbf{x}_2$ represents the probability that the state variables $\mathbf{X}_i(t)$ and $\mathbf{X}_j(t)$ at time $t$ of two neurons in the population are near the values $\mathbf{x}_1$ and $\mathbf{x}_2$. Since we assume this holds for any pair of neurons, the population density $\rho(\mathbf{x}_1, \mathbf{x}_2, t)$ clearly must be symmetric in $\mathbf{x}_1$ and $\mathbf{x}_2$.

Once we form a population density $\rho_j(\mathbf{x}_1, \mathbf{x}_2, t)$ for each coarse-grained population $j$, we need to determine the second order statistics for the connectivity of the


Fig. 1 Schematic illustration of the procedure for forming kinetic theory approximations of neuronal networks. Starting with a neuronal network (left), we group neurons into populations, which we indicate by letters (middle). For each population, we represent first and second order statistics of neuronal activity with population density functions (right). The coupling among the population density functions is based on first and second order statistics of the original network connectivity


populations. For our kinetic theory implementation, we will use the following two statistics (see Fig. 2):

1. $W^1_{jk}$, the average number of neurons from population $j$ that project onto a single neuron in population $k$ (a first order statistic), and

2. $W^2_{jk}$, the average number of neurons from population $j$ that simultaneously project onto a pair of neurons in population $k$ (a second order statistic).

In presenting the results, we will typically use the ratio $\alpha_{jk} = W^2_{jk}/W^1_{jk}$ for the second order statistic. We refer to $\alpha_{jk}$ as the fraction of shared input parameter.

To derive coupling equations for the network of population densities, one must develop a method to capture the effect of the connectivity statistics on the interactions among the population density functions. Then, one can complete the simplification of the original network (e.g., left of Fig. 1) into a kinetic theory network of interacting population densities (e.g., right of Fig. 1).

The goal is to develop a kinetic theory network approach where the population density functions capture the second order statistics of the neuronal activity and the interaction terms capture the second order statistics of the connectivity. Such a tool could be used to explore the consequences of connectivity in a setting where both the connectivity and the neuronal activity are highly simplified and lower dimensional (Nirenberg and Victor 2007) due to the neglect of higher order statistics. This simplified framework may facilitate exploration of the key features of connectivity that underlie network behavior.

Fig. 2 Illustration of the second order statistics of connectivity used in our kinetic theory approach. We consider the connections from a presynaptic population (left group) onto any pair of neurons from a postsynaptic population (right). The first order statistic $W^1$ is the expected number of neurons from the presynaptic population that project to a single postsynaptic neuron (six in this illustration). The second order statistic $W^2$ is the expected number of presynaptic neurons that project to both postsynaptic neurons (two in this case). The fraction of shared input parameter is their ratio $\alpha = W^2/W^1$ (which is 1/3 in this case)
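The two statistics illustrated in Fig. 2 are easy to measure when a wiring diagram is given explicitly. The following minimal sketch (our own helper, not from the paper) computes $W^1$, $W^2$, and the fraction of shared input from a binary adjacency matrix:

```python
import numpy as np

# Minimal sketch (our own helper, not from the paper): measure the two
# connectivity statistics from a binary adjacency matrix A, where
# A[i, m] = 1 if presynaptic neuron i projects to postsynaptic neuron m.
def connectivity_stats(A):
    n_pre, n_post = A.shape
    # W1: average number of presynaptic inputs per postsynaptic neuron
    W1 = A.sum(axis=0).mean()
    # overlap[m, n]: number of presynaptic neurons shared by neurons m and n
    overlap = A.T @ A
    n_pairs = n_post * (n_post - 1)
    # W2: average number of common inputs over all ordered distinct pairs
    W2 = (overlap.sum() - np.trace(overlap)) / n_pairs
    return W1, W2, W2 / W1   # third value: fraction of shared input W2/W1

# Example: 3 presynaptic neurons, 2 postsynaptic neurons
A = np.array([[1, 1],
              [1, 0],
              [0, 1]])
W1, W2, frac = connectivity_stats(A)   # W1 = 2.0, W2 = 1.0, frac = 0.5
```

Averaging the pairwise overlap over all distinct postsynaptic pairs is one natural convention; the paper only requires the expected values $W^1$ and $W^2$, however they are estimated.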

In this paper, we have more modest goals than alluded to above. As an initial test of the potential of this approach, we develop in Section 2 a kinetic theory approach for modeling the emergence and propagation of correlations through excitatory feed-forward networks of integrate-and-fire neurons. We gauge the performance of this approach, as well as test our hypothesis of the sufficiency of second order connectivity statistics, in Section 3. We discuss the successes and failures of this implementation in Section 4.

    2 Derivation of feed-forward kinetic theory model

As an initial test of our kinetic theory approach to capturing second order statistics, we seek to develop a second order kinetic theory description of excitatory feed-forward networks of integrate-and-fire neurons. Feed-forward networks are an ideal test system for this approach for a few reasons. First, we study feed-forward networks that are naturally divided into layers, and we can group all neurons of a layer into a single population with little adverse effect from the coarse-graining. Second, neurons within each population do not interact, so one can derive a simplified kinetic theory description. Third, it is well known that correlations emerge and propagate through feed-forward networks (Diesmann et al. 1999), and the feed-forward networks will be a good test case to see if the kinetic theory model correctly captures this build-up of correlation.

    2.1 The integrate-and-fire model

To keep the kinetic theory equations as simple as possible, we base our implementation on a simplified integrate-and-fire model of neuronal dynamics. We let $V_j(t)$ be the voltage of neuron $j$ at time $t$ and let the voltage evolve according to the stochastic differential equation

$$\tau\frac{dV_j}{dt} = E_r - V_j + \tau\sum_i A_{ij}\,\delta(t - T_{ij}), \qquad (1)$$

where $\tau$ is the membrane time constant, $E_r$ is the resting potential, and $\delta(t)$ is the Dirac delta function. At the times $T_{ij}$ of excitatory synaptic input, the voltage jumps up by the amount $A_{ij}$, which is a random variable with probability density function $f_A(x)$. We define spike times $t_{sp}$ as those times when $V(t)$ crosses the firing


threshold $v_{th}$, and we immediately reset the voltage to $V(t^+_{sp}) = v_{reset}$, where $v_{reset} < E_r < v_{th}$. We use this model because the state of the neuron is described by just the single state variable $V(t)$.

We assume the arrival times $T_{ij}$ of the input for each neuron $j$ are given by a modulated Poisson process, where the rate is identical for each neuron in a given population. In departure from previous kinetic theory implementations, we allow the inputs to any pair of neurons in the population to be correlated, though for simplicity, we only model instantaneous correlations in the input. We assume that the inputs to any pair of neurons are given by independent Poisson processes to each neuron at rate $\lambda_{ind}(t)$, combined with synchronous input to both neurons from a third independent Poisson process at rate $\lambda_{syn}(t)$. This correlated input will create correlations among the neurons in the population, which we will represent with a population density function. A schematic of the random walk exhibited by a pair of neurons with this input is shown in Fig. 3. The


Fig. 3 Schematic of random walk of the voltages of a pair of neurons. The neuron voltages $(V_1(t), V_2(t))$ start at the point indicated by A. Neuron 2 fires alone at B, the neurons fire synchronously at C, neuron 1 fires alone at D, and the voltages end at E. Synchronous input jumps are indicated by the thick diagonal lines and independent input jumps are indicated by horizontal or vertical solid lines. Voltage decay toward $(E_r, E_r)$ in between inputs is indicated by the thin diagonal lines. After one or more neurons fire by crossing the threshold $v_{th}$ (dotted line), the voltage is reset to $v_{reset}$, as indicated by the dashed lines. For a large population of neurons, the voltages $(V_j(t), V_k(t))$ of each neuron pair can be viewed as following such a random walk, and the population density $\rho(v_1, v_2, t)$ captures the fraction of neuron pairs in the population with voltages $(V_j(t), V_k(t))$ around $(v_1, v_2)$

random walk includes the decay toward $E_r$ in between inputs as well as the voltage reset to $v_{reset}$ after crossing the threshold $v_{th}$.
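A direct Monte Carlo simulation of this random walk is a useful reference point for the kinetic theory model. The sketch below (all parameter values are illustrative choices of ours, not taken from the paper) evolves a pair of integrate-and-fire neurons obeying Eq. (1) under independent Poisson input at rate $\lambda_{ind}$ plus a shared Poisson process at rate $\lambda_{syn}$:

```python
import numpy as np

# Monte Carlo sketch of the pair random walk in Fig. 3.  Two
# integrate-and-fire neurons follow Eq. (1): leak toward E_r plus
# Poisson-timed upward jumps.  Parameter values are illustrative only.
def simulate_pair(T=10.0, dt=1e-3, tau=0.02, Er=-65.0, vth=-55.0,
                  vreset=-70.0, jump_mean=1.0, lam_ind=400.0,
                  lam_syn=100.0, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = round(T / dt)
    v = np.full(2, Er)                    # both voltages start at rest
    spikes = np.zeros((n_steps, 2), dtype=bool)
    for i in range(n_steps):
        v += dt * (Er - v) / tau          # leak decay toward (Er, Er)
        ind = rng.random(2) < lam_ind * dt      # independent input events
        shared = rng.random() < lam_syn * dt    # synchronous input event
        jump = ind | shared
        # random jump sizes A drawn independently for each neuron (f_A)
        v += jump * rng.exponential(jump_mean, size=2)
        fired = v >= vth                  # threshold crossing
        v[fired] = vreset                 # immediate reset
        spikes[i] = fired
    return spikes

spikes = simulate_pair()
```

Because the shared process delivers a jump to both neurons at the same instant, probability piles up along the diagonal of the $(V_1, V_2)$ plane, which is exactly the correlation that the population density $\rho(v_1, v_2, t)$ is designed to track.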

    2.2 The kinetic theory equations

To form a population density function, we assume all neurons in the population are identical and that every pair has the same second order statistics. We represent the second order statistics among the state variables $V_j(t)$ by the population density function $\rho(v_1, v_2, t)$, defined by

$$\int_\Omega \rho(v_1, v_2, t)\,dv_1\,dv_2 = \Pr\big((V_j(t), V_k(t)) \in \Omega\big),$$

where $\Omega$ is any region in the $v_1$-$v_2$ phase plane and $V_j(t)$ and $V_k(t)$ are the voltages of any two neurons in the population. For simplicity, we will refer to the voltages as $V_1(t)$ and $V_2(t)$. For a large population of neurons, one can view $\int_\Omega \rho(v_1, v_2, t)\,dv_1\,dv_2$ as the fraction of neuron pairs with voltages in the region $\Omega$.

Since the input to a pair of neurons is a correlated

Poisson process, the history of the synaptic input does not influence future evolution of the voltage, and the evolution of the voltage pair $(V_1(t), V_2(t))$ is a Markov process. The evolution of $(V_1(t), V_2(t))$ contains the deterministic evolution of the voltages toward $E_r$ due to the leakage term from Eq. (1), and it contains the jump processes due to independent and synchronous inputs, as well as reset to $v_{reset}$ after spiking. We can write down the differential Chapman–Kolmogorov equation (Gardiner 2004) to describe these processes as follows:

$$\begin{aligned}
\frac{\partial}{\partial t}\rho(v_1, v_2, t) ={}& \frac{1}{\tau}\frac{\partial}{\partial v_1}\big[(v_1 - E_r)\rho(v_1, v_2, t)\big] \\
&+ \frac{1}{\tau}\frac{\partial}{\partial v_2}\big[(v_2 - E_r)\rho(v_1, v_2, t)\big] \\
&+ \lambda_{ind}(t)\left[\int_{v_{reset}}^{v_1} f_A(v_1 - \tilde{v}_1)\,\rho(\tilde{v}_1, v_2, t)\,d\tilde{v}_1 - \rho(v_1, v_2, t)\right] \\
&+ \lambda_{ind}(t)\left[\int_{v_{reset}}^{v_2} f_A(v_2 - \tilde{v}_2)\,\rho(v_1, \tilde{v}_2, t)\,d\tilde{v}_2 - \rho(v_1, v_2, t)\right] \\
&+ \lambda_{syn}(t)\left[\int_{v_{reset}}^{v_1}\int_{v_{reset}}^{v_2} f_A(v_1 - \tilde{v}_1)\,f_A(v_2 - \tilde{v}_2)\,\rho(\tilde{v}_1, \tilde{v}_2, t)\,d\tilde{v}_2\,d\tilde{v}_1 - \rho(v_1, v_2, t)\right] \\
&+ \delta(v_1 - v_{reset})\,J_{reset,1}(v_2, t) \\
&+ \delta(v_2 - v_{reset})\,J_{reset,2}(v_1, t) \\
&+ \delta(v_1 - v_{reset})\,\delta(v_2 - v_{reset})\,J_{reset,3}(t). \qquad (2)
\end{aligned}$$


The first two lines of Eq. (2) contain the advection terms due to the leak current of Eq. (1) that draws each voltage toward the resting potential $E_r$. The first $\lambda_{ind}$ term describes the jumps due to independent input to neuron 1. Its integral contains the contribution to $\rho(v_1, v_2, t)$ due to a neuron with voltage $V_1(t) = \tilde{v}_1$ receiving an input of size $v_1 - \tilde{v}_1$ so that it lands at $V_1(t) = v_1$; the subtracted term is the loss of probability at $(v_1, v_2, t)$ due to neurons with voltage $V_1(t) = v_1$ receiving an independent input and jumping to a higher voltage. The second $\lambda_{ind}$ term is the symmetric term for independent input to neuron 2. The $\lambda_{syn}$ term describes the jumps in both neurons due to synchronous input. Its integral contains the contribution to $\rho(v_1, v_2, t)$ when a neuron pair with voltages $(V_1(t), V_2(t)) = (\tilde{v}_1, \tilde{v}_2)$ receives synchronous input of size $(v_1 - \tilde{v}_1, v_2 - \tilde{v}_2)$ that jumps the voltages to $(V_1(t), V_2(t)) = (v_1, v_2)$; the subtracted term is the loss of probability at $(v_1, v_2, t)$ due to a neuron pair with voltages $(V_1(t), V_2(t)) = (v_1, v_2)$ receiving synchronous input that jumps both voltages higher.

The last three lines of Eq. (2) contain terms due to the reset of voltage to $V(t) = v_{reset}$ immediately after a neuron fires a spike. The factors $J_{reset,k}$ are defined by

$$\begin{aligned}
J_{reset,1}(v_2, t) ={}& \lambda_{ind}(t)\int_{v_{reset}}^{v_{th}} \bar{F}_A(v_{th} - \tilde{v}_1)\,\rho(\tilde{v}_1, v_2, t)\,d\tilde{v}_1 \\
&+ \lambda_{syn}(t)\int_{v_{reset}}^{v_2}\int_{v_{reset}}^{v_{th}} \bar{F}_A(v_{th} - \tilde{v}_1)\,f_A(v_2 - \tilde{v}_2)\,\rho(\tilde{v}_1, \tilde{v}_2, t)\,d\tilde{v}_1\,d\tilde{v}_2, \\
J_{reset,2}(v_1, t) ={}& \lambda_{ind}(t)\int_{v_{reset}}^{v_{th}} \bar{F}_A(v_{th} - \tilde{v}_2)\,\rho(v_1, \tilde{v}_2, t)\,d\tilde{v}_2 \\
&+ \lambda_{syn}(t)\int_{v_{reset}}^{v_{th}}\int_{v_{reset}}^{v_1} \bar{F}_A(v_{th} - \tilde{v}_2)\,f_A(v_1 - \tilde{v}_1)\,\rho(\tilde{v}_1, \tilde{v}_2, t)\,d\tilde{v}_1\,d\tilde{v}_2, \\
J_{reset,3}(t) ={}& \lambda_{syn}(t)\int_{v_{reset}}^{v_{th}}\int_{v_{reset}}^{v_{th}} \bar{F}_A(v_{th} - \tilde{v}_1)\,\bar{F}_A(v_{th} - \tilde{v}_2)\,\rho(\tilde{v}_1, \tilde{v}_2, t)\,d\tilde{v}_1\,d\tilde{v}_2, \qquad (3)
\end{aligned}$$

where $\bar{F}_A(x)$ is the complementary cumulative distribution function of the random jump size $A$, that is, $\bar{F}_A(x) = \int_x^\infty f_A(t)\,dt = \Pr(A > x)$. The first integral of $J_{reset,1}(v_2, t)$ reflects the event that a neuron with $V_1(t) = \tilde{v}_1$ receives an independent input that jumps the voltage past the threshold $v_{th}$. The neuron is reset to $V_1(t) = v_{reset}$ while $V_2(t)$ stays at $v_2$. The second integral of $J_{reset,1}(v_2, t)$ reflects the event that a pair of


Fig. 4 Illustration of the three reset terms and their relationship to the average and synchronous firing rates. $J_{reset,1}$ corresponds to the reset to $v_{reset}$ after neuron 1 fires alone due to independent or synchronous input. $J_{reset,2}$ is the equivalent for neuron 2. $J_{reset,3}$ corresponds to the reset of both neurons to $v_{reset}$ after synchronous input causes the neurons to simultaneously cross threshold. The synchronous firing rate $r_{syn}$ is simply $J_{reset,3}$. We can obtain the average firing rate $r_{ave}$ by adding all the ways neuron 1 can fire alone (all the $J_{reset,1}$) to the synchronous firing rate. Plotting convention is the same as in Fig. 3

neurons with $(V_1(t), V_2(t)) = (\tilde{v}_1, \tilde{v}_2)$ receives a simultaneous input that jumps $V_1(t)$ past threshold and $V_2(t)$ to the subthreshold voltage $v_2$. $J_{reset,2}(v_1, t)$ is identical to $J_{reset,1}(v_2, t)$ with the roles of the neurons reversed. $J_{reset,3}(t)$ reflects the event that a pair of neurons with $(V_1(t), V_2(t)) = (\tilde{v}_1, \tilde{v}_2)$ receives a simultaneous input that jumps both neurons past threshold. In this case, both voltages are reset to $v_{reset}$. The neuron firings contributing to the $J_{reset,k}$ are illustrated in Fig. 4.

The addition of the reset terms makes Eq. (2) a conservative system, so that the integral of $\rho$ remains constant, which we fix to 1. However, the equation is not written in conservative form. To aid in developing a conservative numerical method to solve Eq. (2), we first rewrite the equation in conservative form, and then base our numerical scheme on the conservative form, as discussed in Appendix A.

    2.3 Total firing rate and synchronous firing rate

Since the reset terms described above reflect the voltage reset after firing, they can be used to read out


the firing rate of the population. If we denote $V_j(T^+) = \lim_{t \to T^+} V_j(t)$, we can interpret the reset terms as

$$\begin{aligned}
J_{reset,1}(v_2, t)\,dv_2\,dt &= \Pr\big(\text{neuron 1 fires at } T \in (t, t + dt) \text{ and } V_2(T^+) \in (v_2, v_2 + dv_2)\big), \\
J_{reset,2}(v_1, t)\,dv_1\,dt &= \Pr\big(\text{neuron 2 fires at } T \in (t, t + dt) \text{ and } V_1(T^+) \in (v_1, v_1 + dv_1)\big), \\
J_{reset,3}(t)\,dt &= \Pr\big(\text{both neurons simultaneously fire at } T \in (t, t + dt)\big). \qquad (4)
\end{aligned}$$

Note that the factors $J_{reset,1}(v_2, t)\,dv_2$, $J_{reset,2}(v_1, t)\,dv_1$, and $J_{reset,3}(t)$ indicate the probability of a spike per unit time. If neuron 1 fires a spike, the spike will be reflected in $J_{reset,3}$ if neuron 2 fires simultaneously; otherwise the spike will be reflected in $J_{reset,1}$. Therefore, the average firing rate for a neuron is simply the sum of $J_{reset,3}(t)$ plus all of the $J_{reset,1}(v_2, t)$,

$$r_{ave}(t) = \int_{v_{reset}}^{v_{th}} J_{reset,1}(v_2, t)\,dv_2 + J_{reset,3}(t). \qquad (5)$$

(By symmetry, we could have equally well defined $r_{ave}(t)$ by replacing $J_{reset,1}$ with $J_{reset,2}$, obtaining the identical firing rate of neuron 2.)

We also define the synchronous firing rate, which is the rate at which both neurons fire simultaneously. Given Eq. (4), the synchronous firing rate is clearly

$$r_{syn}(t) = J_{reset,3}(t). \qquad (6)$$

Since the reset terms $J_{reset,k}$ are calculated in the course of solving for $\rho(v_1, v_2, t)$, we can easily obtain the firing rates $r_{ave}(t)$ and $r_{syn}(t)$. The threshold crossings corresponding to the average and synchronous firing rates are illustrated in Fig. 4.
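In a direct simulation, the analogues of these read-outs can be estimated from the spike raster of a pair, with synchrony defined at the resolution of the time step (a sketch of our own, not the paper's read-out from the reset fluxes):

```python
import numpy as np

# Estimate r_ave and r_syn from a boolean spike raster of a neuron pair.
# spikes has shape (n_steps, 2); dt is the time step in seconds.
# "Synchronous" here means firing in the same time bin (our convention).
def pair_rates(spikes, dt):
    T = spikes.shape[0] * dt
    r_ave = spikes.sum() / (2 * T)                      # spikes/neuron/s
    r_syn = (spikes[:, 0] & spikes[:, 1]).sum() / T     # coincidences/s
    return r_ave, r_syn

# Example: 1 s of data at 1 ms resolution with one coincident spike
raster = np.zeros((1000, 2), dtype=bool)
raster[[10, 20, 30], 0] = True
raster[[20, 40], 1] = True
r_ave, r_syn = pair_rates(raster, dt=1e-3)   # r_ave = 2.5 Hz, r_syn = 1.0 Hz
```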

    2.4 Capturing output correlation

The average firing rate $r_{ave}(t)$ captures the first order statistics of the population output, and the synchronous firing rate $r_{syn}(t)$ captures one type of second order statistic of the output, namely correlation with zero delay. However, $r_{syn}(t)$ does not capture all of the correlation between two neurons. If a pair of neurons receives independent Poisson input at rate $\lambda_{ind}(t)$ and synchronous Poisson input at rate $\lambda_{syn}(t)$, their spikes may become correlated at some non-zero delay, which is not captured by $r_{syn}(t)$.

Consider the snapshot of $\rho(v_1, v_2, t)$ in the left panel of Fig. 5. The synchronous input $\lambda_{syn}(t)$ has increased the probability of the voltage combinations $(V_1(t), V_2(t))$ close to the diagonal, so that the positive correlation between $V_1(t)$ and $V_2(t)$ is clearly seen. For this reason, the probability for $(V_1(t), V_2(t))$ to be in the upper right corner (point A in middle panel of Fig. 5) is higher than it would be if the neurons were independent. If the neurons received a synchronous input while their voltages were near the upper right corner, they would be highly likely to simultaneously spike and reset (points B→C), contributing to the synchronous firing rate $r_{syn}(t)$. However, if the neurons each received independent input not exactly at the same time, then the neurons would be highly likely to fire with a short delay between their firing times (A→D→E→F in middle panel of Fig. 5). With a voltage distribution such as in the left panel of Fig. 5, this independent input would still lead to correlations in the firing between the neurons, because they would be more likely to fire within a short delay of each other than predicted by an independent distribution.

We cannot directly read out this delayed correlation from $\rho(v_1, v_2, t)$. If the voltage pair $(V_1(t), V_2(t))$ is near the upper right corner and the first neuron received an independent input, it would be highly likely to fire by itself, after which its voltage would be reset to $v_{reset}$. The voltage pair $(V_1(t), V_2(t))$ jumps to the upper left corner of $\rho(v_1, v_2, t)$ (point E in middle panel of Fig. 5), and the fact that the first neuron had just spiked is lost. Even if the second neuron receives an input shortly thereafter and fires a spike (point F), there is no way of determining the spike time of the first neuron from the fact that $(V_1(t), V_2(t))$ crosses the upper boundary of $\rho(v_1, v_2, t)$ somewhere near the upper left corner.

To read out the delayed correlation, we construct another density, which we call $\rho_{cross}(v_2, s; t_0)$, defined so that $\rho_{cross}(v_2, s; t_0)/r_{ave}(t_0)$ is the probability density of $V_2(t_0 + s)$ conditioned on neuron 1 firing at time $t_0$ (right panel of Fig. 5). To compute the evolution of $\rho_{cross}(v_2, s; t_0)$ with respect to $s$ for a fixed $t_0$, we initialize it with the distribution of neuron 2 conditioned on neuron 1 spiking, i.e., $\rho_{cross}(v_2, 0; t_0) = J_{reset,1}(v_2, t_0) + \delta(v_2 - v_{reset})\,J_{reset,3}(t_0)$. (This corresponds to injecting points C and D in the right panel of Fig. 5.) Then, we evolve $\rho_{cross}(v_2, s; t_0)$ according to a modified version of Eq. (2), where we integrate out $v_1$ and replace $t$ with $t_0 + s$. We then define $r_{cross}(s; t_0)$ as the reset term from this modified equation (the threshold crossing E→F and subsequent reset in the right panel of Fig. 5). In this way, $r_{cross}(s; t_0)/r_{ave}(t_0)$ is the probability per unit time that neuron 2 fires at time $t_0 + s$ conditioned on neuron 1 firing at time $t_0$. We subtract $r_{ave}(t_0 + s)$, the marginal probability per unit time that neuron 2 fires at time $t_0 + s$, and multiply by $r_{ave}(t_0)$. Finally, we add the zero delay correlation



Fig. 5 Left: Pseudocolor plot of highly correlated population density function $\rho(v_1, v_2, t)$. Light colors correspond to high probability. Middle: Illustration of delayed correlation that is likely to result from a correlated population density. Given the correlated density, the likelihood that the voltages $(V_1(t), V_2(t))$ of a pair of neurons is in the upper right corner (A) is higher than if the voltages were independent. Receiving synchronous input might lead to synchronous firing and reset of the neuron pair (A→B→C). However, if the neurons each received independent input, the neurons would be likely to fire within a short delay (A→D→E→F). After the first neuron fires, the voltage is reset to the upper left corner (E), and the fact that the first neuron fired recently is lost. Right: Illustration of method to track delayed correlation. After the first neuron fires and the voltage pair is reset to $(v_{reset}, v_2)$ in $\rho(v_1, v_2, t)$, the firing is simultaneously recorded by injecting into $\rho_{cross}(v_2, s; t)$ at $(v_2, 0)$ (point D on right diagram). As the voltage of the second neuron evolves (D→E), the time since the first neuron fired is tracked by $s$. When the second neuron fires (E→F), the pair of spikes with delay $s$ (i.e., at times $t$ and $t + s$) is recorded by the crossing of threshold in $\rho_{cross}(v_2, s; t)$ at the point $(v_{th}, s)$. If the neurons had fired simultaneously, we would record this by injecting into $\rho_{cross}(v_2, s; t)$ at $(v_{reset}, 0)$ (point C on right diagram). Plotting convention of the right two panels is the same as in Fig. 3

$r_{syn}(t_0)$ to obtain the cross correlation between the neurons at time $t_0$ with delay $s$:

$$C(s; t_0) = \delta(s)\,r_{syn}(t_0) + r_{cross}(s; t_0) - r_{ave}(t_0)\,r_{ave}(t_0 + s). \qquad (7)$$

The cross-correlation has units of spikes per unit time squared and is the correlation between the spikes of one neuron at time $t_0$ and the spikes of a second neuron at time $t_0 + s$. We extend the definition for negative delays to be the correlation between the spikes of one neuron at time $t_0$ and the spikes of another neuron at time $t_0 - s$: $C(-s; t_0) = C(s; t_0 - s)$. To summarize the correlation at a time $t$, we compute the area of the peak of positive correlation around delay 0,

$$C_{peak}(t) = \int_{-s_1}^{s_2} C(s; t)\,ds, \qquad (8)$$

where $s_1$ and $s_2$ are the delays where the correlation first becomes negative: $s_1 = \min\{s > 0 \mid C(-s, t) \le 0\}$ and $s_2 = \min\{s > 0 \mid C(s, t) \le 0\}$.
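Given a correlogram sampled on a uniform grid of delays containing $s = 0$, the integration window of Eq. (8) can be found by scanning outward from zero delay to the first non-positive samples on each side. A minimal sketch under these assumptions:

```python
import numpy as np

# Sketch of Eq. (8): integrate the central positive peak of a sampled
# cross-correlogram C between the first delays on either side of zero
# where the correlation stops being positive.
def c_peak(s, C, ds):
    i0 = int(np.argmin(np.abs(s)))        # index of zero delay
    right = i0                            # extend while C stays positive
    while right + 1 < len(C) and C[right + 1] > 0:
        right += 1
    left = i0
    while left - 1 >= 0 and C[left - 1] > 0:
        left -= 1
    return C[left:right + 1].sum() * ds   # Riemann sum over the peak

s = np.arange(-5.0, 6.0)                  # delays -5 ... 5
C = np.array([-1, -1, -1, 1, 2, 3, 2, 1, -1, -1, -1.0])
area = c_peak(s, C, ds=1.0)               # integrates 1+2+3+2+1 = 9.0
```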

    2.5 Derivation of network equations

The kinetic theory Eq. (2) can be used to compute the evolution of a population of neurons in response to prescribed independent and synchronous input rates, $\lambda_{ind}(t)$ and $\lambda_{syn}(t)$ respectively. Along with the evolution of the population density $\rho(v_1, v_2, t)$, we also compute the output statistics: the average firing rate $r_{ave}(t)$, the synchronous firing rate $r_{syn}(t)$, and the output correlation $C(s; t)$. To apply our results to a feed-forward network, we need a method to transform the output of a presynaptic population into the input of a postsynaptic population, taking into account the statistics of the connectivity.

As mentioned in Section 1.2, we will characterize the connectivity between two populations by both first and second order statistics. The first order statistic is $W^1_{jk}$, the expected number of presynaptic neurons from population $j$ that project to any postsynaptic neuron in population $k$. The second order statistic is $W^2_{jk}$, the expected number of presynaptic neurons from population $j$ that simultaneously project to any pair of postsynaptic neurons in population $k$. Our goal is to compute the input rates $\lambda^k_{ind}(t)$ and $\lambda^k_{syn}(t)$ to population


$k$ based on these connectivity statistics, the output of each population $j$, and any external independent input to population $k$ at rate $\lambda^k_{ext}(t)$. We assume that neurons from different populations are uncorrelated and neurons within a population are uncoupled.

We consider a pair of neurons in population $k$. Let $N^j_1$ be the number of neurons from population $j$ that project onto the first neuron and $N^j_2$ be the number of neurons that project onto the second neuron. Furthermore, let $N^j_3$ be the number of neurons from population $j$ that project onto both postsynaptic neurons. (In the illustration of Fig. 2, $N^j_1 = 5$, $N^j_2 = 7$, and $N^j_3 = 2$.) We view $N^j_1$, $N^j_2$, and $N^j_3$ as three random numbers with expected values specified by the connectivity statistics: $E[N^j_1] = E[N^j_2] = W^1_{jk}$ and $E[N^j_3] = W^2_{jk}$. To calculate the input rates $\lambda^k_{ind}(t)$ and $\lambda^k_{syn}(t)$, we will first calculate them conditioned on particular values of $N_{in} = \{N^j_i\}$ and then take the expected values over the $N_{in}$.

Define $\lambda^k_{syn}(t; N_{in})$ to be the rate of synchronous input onto neurons 1 and 2, conditioned on particular values of the $N_{in}$. We calculate that this input rate is

$$\begin{aligned}
\lambda^k_{syn}(t; N_{in}) ={}& \sum_j N^j_3\left[r^j_{ave}(t) - r^j_{syn}(t)\right] \\
&+ \sum_j \left[\left(N^j_1 - N^j_3\right)\left(N^j_2 - N^j_3\right) + \left(N^j_1 - N^j_3\right)N^j_3 + \left(N^j_2 - N^j_3\right)N^j_3 + 2\binom{N^j_3}{2}\right] r^j_{syn}(t) \\
={}& \sum_j \left[N^j_3\,r^j_{ave}(t) + \left(N^j_1 N^j_2 - 2N^j_3\right)r^j_{syn}(t)\right]. \qquad (9)
\end{aligned}$$

The neurons will receive synchronous input any time one of the $N^j_3$ neurons projecting to both neurons fires a spike all by itself (i.e., no other of the presynaptic neurons fires synchronously with it). Since we are calculating only up to second order statistics, we can approximate the probability per unit time that just a single neuron fires as $r^j_{ave}(t) - r^j_{syn}(t)$. Multiplying by the number $N^j_3$ of common input neurons, we obtain the first sum of Eq. (9).

The second sum of Eq. (9) accounts for the synchronous input to neurons 1 and 2 that results from one of the N_{j1} neurons projecting to neuron 1 firing synchronously with one of the N_{j2} neurons projecting to neuron 2. This could happen in four ways, corresponding to the four terms in the square brackets. First, one of the N_{j1} − N_{j3} neurons projecting to neuron 1 alone could fire synchronously with one of the N_{j2} − N_{j3} neurons projecting to neuron 2 alone. Second, one of the N_{j1} − N_{j3} neurons projecting to neuron 1 alone could fire synchronously with one of the N_{j3} common input neurons. In this case, neuron 1 receives a double input that is synchronous with a single input to neuron 2. Since our kinetic theory description does not represent double inputs to any neuron, we represent this event just as synchronous input to the pair. Third, we handle in the same way the event where one of the N_{j3} common input neurons fires synchronously with one of the N_{j2} − N_{j3} neurons projecting to neuron 2 alone. Fourth, if two of the N_{j3} common input neurons fire synchronously, both neurons 1 and 2 receive synchronous double inputs. Since our kinetic theory does not represent double inputs, we model the double synchronous inputs as two synchronous inputs. Hence, we multiply the number of pairs C(N_{j3}, 2) = N_{j3}(N_{j3} − 1)/2 by two. All these combinations of inputs are multiplied by the probability per unit time r^j_syn(t) that two input neurons fire synchronously.
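The reduction from the first to the second line of Eq. (9) is a purely combinatorial identity: the four bracketed terms sum to N_{j1}N_{j2} − N_{j3}. The following sketch verifies the identity exhaustively over a grid of integer values (the function names are just for this check, not from the paper's code):

```python
from itertools import product

# Verify the combinatorial reduction of Eq. (9): the four bracketed terms sum
# to N1*N2 - N3, so the full expression collapses to the second form.
def k_syn_full(N1, N2, N3, r_ave, r_syn):
    pairs = ((N1 - N3) * (N2 - N3)      # "alone" input to 1 with "alone" input to 2
             + (N1 - N3) * N3           # "alone" input to 1 with a common input
             + (N2 - N3) * N3           # common input with "alone" input to 2
             + N3 * (N3 - 1))           # 2 * C(N3, 2): two common inputs
    return N3 * (r_ave - r_syn) + pairs * r_syn

def k_syn_reduced(N1, N2, N3, r_ave, r_syn):
    return N3 * r_ave + (N1 * N2 - 2 * N3) * r_syn

# integer rates keep the comparison exact
for N1, N2, N3 in product(range(8), repeat=3):
    if N3 <= min(N1, N2):               # common inputs are a subset of each pool
        assert k_syn_full(N1, N2, N3, 5, 3) == k_syn_reduced(N1, N2, N3, 5, 3)
```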

The approximations used for the last three terms in the second sum of Eq. (9) can be justified under the conditions N_{j3} ≪ N_{j1} and r_syn ≪ r_ave. If common input neurons are relatively rare and the synchrony isn't too high, then these last three terms will be small (quadratic in a small parameter) compared to the other terms of Eq. (9). For this reason, we expect our kinetic theory implementation to perform best under conditions where the synchronous firing and fraction of shared input are not too high.

Next, we take the expected value of k_syn(t; N_in) over N_in. The only expected value not explicitly specified by our second order connectivity statistics is the product E[N_{j1} N_{j2}]. We simply assume that N_{j1} and N_{j2} are uncorrelated so that E[N_{j1} N_{j2}] = E[N_{j1}] E[N_{j2}]. We obtain that the expected rate of synchronous input is

ν^k_syn(t) = E[k_syn(t; N_in)]
           = Σ_j W^2_{jk} r^j_ave(t) + Σ_j [W^1_{jk} W^1_{jk} − 2W^2_{jk}] r^j_syn(t)
           = Σ_j α_{jk} W^1_{jk} r^j_ave(t) + Σ_j W^1_{jk} [W^1_{jk} − 2α_{jk}] r^j_syn(t),    (10)

where α_{jk} = W^2_{jk}/W^1_{jk} is the fraction of shared input parameter. The average input to population k is simply the external input rate ν^k_ext(t) plus the sum of the W^1_{jk} r^j_ave(t), the average coupling times the average firing rate. As discussed in Section 2.4, the output of population j will

J Comput Neurosci (2009) 26:339–368 347

contain correlations beyond the synchronous output of r^j_syn(t). However, our kinetic theory equations do not include delayed correlation in the input. (We neglected delayed correlation to create a Markov process without adding additional state variables.) For our first approximation, which we term KT0, we will simply assume that inputs that are not synchronous are independent. Under this approximation, the independent input rate to population k is simply the average input rate minus the synchronous input rate:

ν^k_ind(t) = ν^k_ext(t) + Σ_j W^1_{jk} r^j_ave(t) − ν^k_syn(t).    (11)

This formula not only neglects delayed correlation, but it also approximates as independent any double input to a single neuron. (Such double inputs would occur when a pair of neurons projecting to the given neuron fire synchronously.)
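A minimal sketch of this KT0 coupling, Eqs. (10) and (11), assuming the rates are stored in plain Python lists (the function name is illustrative, not from the paper's code):

```python
# KT0 coupling of Eqs. (10) and (11).  W1[j][k] is the average number of
# connections from population j to population k, alpha[j][k] = W2/W1 is the
# fraction of shared input, r_ave[j] and r_syn[j] are the output rates of
# population j, and nu_ext[k] is the external independent input rate.
def kt0_coupling(W1, alpha, r_ave, r_syn, nu_ext):
    J, K = len(W1), len(nu_ext)
    nu_syn, nu_ind = [0.0] * K, [0.0] * K
    for k in range(K):
        total = sum(W1[j][k] * r_ave[j] for j in range(J))
        syn = sum(alpha[j][k] * W1[j][k] * r_ave[j]
                  + W1[j][k] * (W1[j][k] - 2 * alpha[j][k]) * r_syn[j]
                  for j in range(J))
        # truncate to the total presynaptic input so the rates stay physical,
        # as the text prescribes when Eq. (10) or (13) exceeds that sum
        nu_syn[k] = min(syn, total)
        nu_ind[k] = nu_ext[k] + total - nu_syn[k]   # Eq. (11)
    return nu_syn, nu_ind
```

Replacing r_syn[j] with the augmented rate of Eq. (12) turns the same function into the KT1 coupling of Eqs. (13) and (14).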

    2.6 Accounting for delayed correlation

The delayed correlation described in Section 2.4 may be more substantial than the instantaneous correlation reflected in r_syn(t). In this case, we may severely underestimate the correlation by assuming all spikes not captured by r_syn(t) are independent, as we did when deriving the coupling conditions (10) and (11). Hence, we developed another method to capture additional correlation in our kinetic theory implementation. We approximated delayed correlation in the output of a population as though it were correlation with zero delay.

Approximating delayed correlation as instantaneous is not as simple as adding C_peak(t) of Eq. (8) to r_syn(t). The cross-correlation peak area C_peak(t) evaluated at time t is based on output from times t + τ that are later than t. If we included C_peak(t) in the input rates at time t, we would be attempting to use future information to calculate the evolution of the network at time t.

To avoid using future information, we calculate a modified version of the cross-correlation (7) based on a quasi-steady state approximation. Rather than using the density ρ_cross(v_2, τ; t_0) introduced in Section 2.4, we calculate the evolution of another density-like quantity that we call ρ_delay(v_2, τ; t_0). We calculate the evolution of ρ_delay in the same way as ρ_cross except that we freeze the input rates at ν_ind(t_0) and ν_syn(t_0). Moreover, in ρ_delay, we wish to look only at the first spike of neuron 2 after the spike of neuron 1. For this reason, we do not include the reset terms when evolving ρ_delay. To subtract off the expected distribution of v_2 under the assumption that the neurons were independent, we initialize ρ_delay by subtracting off the marginal distribution of v_2, multiplied by the average firing rate: ρ_delay(v_2, 0; t_0) = J_reset,1(v_2, t_0) − r_ave(t_0) ∫ ρ(v_1, v_2, t_0) dv_1. Although we do not reinject the reset term into the evolution equation for ρ_delay, we still define c_delay(τ, t_0) as the magnitude of this reset term (or, equivalently, the flux across threshold). Since we subtracted off the marginal density in defining ρ_delay(v_2, 0; t_0), the reset quantity c_delay(τ, t_0) reflects how much more likely neuron 2 was to fire than it would have fired if the neurons were independent. In other words, c_delay(τ, t) represents the delayed correlation between the spike of one neuron at time t and the subsequent spike of another neuron, subject to the quasi-steady state approximation that uses no future information beyond the time t. Given the quasi-steady state approximation, the correlation with negative delay is symmetric to the correlation with positive delay, and we define c_delay(−τ, t) = c_delay(τ, t) for τ > 0. The resulting correlation magnitude analogous to C_peak(t) is

r̃_syn(t) = r_syn(t) + ∫_{−τ_0}^{τ_0} c_delay(τ, t) dτ,    (12)

where τ_0 is the delay at which c_delay(τ, t) first becomes negative: τ_0 = min{τ > 0 | c_delay(τ, t) ≤ 0}. We denote this correlation by r̃_syn(t) because we will use r̃_syn(t) in place of the synchronous firing rate r_syn(t) in the coupling Eqs. (10) and (11), effectively assuming that this correlation was due to synchronous spiking of the presynaptic neurons. Since we have not included reset in the calculation of c_delay, we know that the condition r̃_syn(t) ≤ r_ave(t) is always satisfied.
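A minimal sketch of Eq. (12), assuming c_delay(τ, t) is available as samples on a uniform grid of positive delays (the function name is illustrative):

```python
# Eq. (12): augment the synchronous rate with the area of the delayed-
# correlation peak.  c_delay_samples holds c_delay(tau, t) at the positive
# delays tau = dtau, 2*dtau, ....  The integral runs out to tau_0, the first
# delay where c_delay becomes non-positive, and the negative-delay branch is
# symmetric, so the two-sided integral is twice the one-sided sum.
def r_syn_tilde(r_syn, c_delay_samples, dtau):
    area = 0.0
    for c in c_delay_samples:
        if c <= 0:          # reached tau_0; ignore everything past it
            break
        area += c * dtau
    return r_syn + 2.0 * area
```

For example, with samples [3.0, 2.0, 1.0, -1.0, 5.0] the positive lobe ends at the fourth sample, and the value past τ_0 is ignored even though it is positive again.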

Our new connectivity equations (reintroducing superscripts to denote population index) are

ν^k_syn(t) = Σ_j α_{jk} W^1_{jk} r^j_ave(t) + Σ_j W^1_{jk} [W^1_{jk} − 2α_{jk}] r̃^j_syn(t)    (13)

ν^k_ind(t) = ν^k_ext(t) + Σ_j W^1_{jk} r^j_ave(t) − ν^k_syn(t).    (14)

We refer to this improved kinetic theory implementation that accounts for delayed correlations as KT1.

Note that there is no guarantee that ν^k_ind(t) defined by the above equation will be positive, despite the fact that we know r̃^j_syn(t) ≤ r^j_ave(t). It is possible for the synchronous input ν^k_syn(t) calculated from Eq. (13) to


exceed the total input from presynaptic populations, Σ_j W^1_{jk} r^j_ave(t). Such a nonphysical result is due to neglecting the possibility of three or more simultaneous inputs. As detailed in Section 4, such combinations become highly likely as the correlation increases. To keep physically meaningful values of ν^k_syn(t), we simply truncate it to Σ_j W^1_{jk} r^j_ave(t), the total input from presynaptic populations, if the quantity calculated from Eq. (13) or Eq. (10) exceeds that sum.

The Eqs. (13) and (10) for ν_syn(t) contain two sources of correlated input to a population. The first sum describes the emergence of correlations due to the shared input between a pair of neurons. For example, when either of the common input neurons of Fig. 2 spikes, it will send synchronous input into the pair of postsynaptic neurons. The second sum describes the propagation of correlations through the network. For example, when a light and a dark presynaptic neuron of Fig. 2 spike synchronously, they send synchronous input to the postsynaptic neuron pair even though no connections are shared. The subtraction of 2α is an approximate correction to control for overcounting when a common input neuron fires synchronously with another input neuron. In this way, the kinetic theory network is designed to capture the emergence and propagation of correlations in the network and describe the second order statistics of the network activity.

    3 Results

We demonstrate our kinetic theory approach by benchmarking its output against Monte Carlo simulations of networks of integrate-and-fire neurons. We first compare simulations of a single population of non-interacting neurons in order to demonstrate that the kinetic theory accurately represents the response of neurons to correlated input. Second, we compare simulations of feed-forward networks in order to investigate how well our kinetic theory implementation captures the emergence and propagation of correlations through a network. We simulated the Monte Carlo networks as described in Nykamp and Tranchina (2000).

    3.1 Single population

As an initial test of our kinetic theory approach, we simulated the response of a single population of neurons to correlated input. We numerically solved the kinetic theory equations (2) as described in Appendix A in response to specified independent ν_ind(t) and synchronous ν_syn(t) input rates. Since these single population runs did not include any network interactions, they simply reflect the ability of the kinetic theory to capture the correlated output of neurons in response to correlated input.

To test the accuracy of the kinetic theory solution, we simulated 500,000 realizations of a pair of integrate-and-fire neurons in response to the same input rates. We stimulated each neuron with an independent Poisson process with rate ν_ind(t). We also stimulated both neurons with a third Poisson process with rate ν_syn(t). Since the input to the neurons was Poisson and correlated only with zero delay, these Monte Carlo simulations exactly matched the assumptions of our kinetic theory model. In this way, the Monte Carlo simulations served simply to test whether we correctly implemented our kinetic theory equations.
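This correlated-input construction can be sketched as follows: each neuron receives its own independent Poisson train plus a third, shared train, so the input correlation is purely at zero delay. The integrate-and-fire parameters (kick size a, membrane time constant tau, threshold) are illustrative placeholders, not the paper's values:

```python
import random

random.seed(0)
T = 1.0                                     # seconds of simulated time

def poisson_times(rate):
    """Spike times of a homogeneous Poisson process of the given rate on [0, T]."""
    times, t = [], random.expovariate(rate)
    while t < T:
        times.append(t)
        t += random.expovariate(rate)
    return times

# shared train drives both neurons; independent trains drive one neuron each
nu_ind, nu_syn = 150.0, 100.0               # spikes per second
shared = poisson_times(nu_syn)
inputs = [sorted(poisson_times(nu_ind) + shared) for _ in range(2)]

def lif_spikes(in_times, a=0.15, tau=0.02, v_th=1.0, dt=1e-4):
    """Leaky integrate-and-fire response to delta-function inputs of size a."""
    v, out, i = 0.0, [], 0
    for step in range(int(T / dt)):
        t = step * dt
        v -= v * dt / tau                   # leak toward rest (0)
        while i < len(in_times) and in_times[i] <= t:
            v += a                          # instantaneous kick per input spike
            i += 1
        if v >= v_th:
            out.append(t)
            v = 0.0                         # fire and reset
    return out

out1, out2 = lif_spikes(inputs[0]), lif_spikes(inputs[1])
```

The shared train produces coincident input kicks to the two neurons, which is the source of the zero-delay output correlation measured by r_syn(t).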

Results of one such test are shown in Fig. 6. For this example, we set the independent input rate ν_ind(t) to 150 spikes per second and the synchronous input rate ν_syn(t) to 100 spikes per second, and allowed the population to achieve steady state. Then, we kept the same input rates until t = 0.05 s, at which point we doubled the input rates to ν_ind(t) = 300 and ν_syn(t) = 200 spikes per second. In response, the average firing rate r_ave(t), the synchronous firing rate r_syn(t), and the cross-correlation peak area C_peak(t) immediately jumped to higher values and then settled down to new steady state values. As expected, the kinetic theory and Monte Carlo estimates matched exactly in all three cases (Fig. 6(a)). Note that C_peak(t) began to increase even before the time of the input change due to the fact that it includes correlation of spikes at time t with spikes at future times.

The kinetic theory also accurately captures the form of the delayed correlation resulting from the correlated input. As shown in Fig. 6(b), the structure of the cross-correlation averaged over all neuron pairs is precisely matched by the kinetic theory. Note that for each pair (j, k) of neurons, the correlation at a delay includes the effect of neuron j spiking after neuron k and neuron k spiking after neuron j. Therefore, since there is no structure in the input at steady state, the cross-correlation must be symmetric with respect to delay (Fig. 6(b), top). The asymmetry in the transient cross-correlation (Fig. 6(b), bottom) is due to the asymmetric population activity around the sampled time point. We obtained equally good matches between kinetic theory and Monte Carlo simulations for every other input rate (ν_ind(t) and ν_syn(t)) combination we tested. These results are not surprising because the input in the Monte Carlo simulations was chosen to exactly match the assumptions of our kinetic theory implementation. The results simply serve as verification that we are solving our kinetic theory equations correctly.

Fig. 6 Results from a single population of uncoupled neurons. The kinetic theory results exactly match those from Monte Carlo simulations. (a) Average firing rate r_ave(t) (top), synchronous firing rate r_syn(t) (middle), and cross-correlation peak area C_peak(t) (bottom) are plotted in response to a jump in the independent ν_ind(t) and synchronous ν_syn(t) input rates at time t = 0.05 s. The histograms from the Monte Carlo simulations coincide with the values obtained from solving the kinetic theory equations. (b) Cross-correlation averaged over the steady state (t > 0.15 s) and cross-correlation from the transient peak in C_peak(t) (t = 0.054 s) are plotted versus delay. The Monte Carlo data were binned using a bin width of Δt = 0.5 ms. The kinetic theory cross-correlation contains a delta function at zero delay. The delta-function magnitude was divided by Δt = 0.0005 s and added to the value of the continuous correlation at zero delay. Since the kinetic theory correlation is sampled every half millisecond, this procedure effectively smooths the delta function over a half-millisecond window so that the results match the bin width used in the Monte Carlo simulations.

    3.2 Feed-forward network

To test the ability of our kinetic theory implementation to capture the emergence and propagation of correlations through a network, we examined its performance with a ten layer feed-forward network. It is well known that synchronous activity develops in deeper layers of a feed-forward network (Diesmann et al. 1999; Câteau and Fukai 2001; Reyes 2003; Hasegawa 2003; Wang et al. 2006; Masuda and Aihara 2002; Litvak et al. 2003; van Rossum et al. 2002; Doiron et al. 2006). By comparing the kinetic theory results to Monte Carlo simulations of the feed-forward network, we could assess how well the kinetic theory could model the build-up of correlations underlying such synchrony.

    3.2.1 Illustration of the build-up of correlations

To illustrate the build-up of correlations, we simulated the response of feed-forward networks to a step input, where at time t_0 we instantaneously increased the independent Poisson input to layer 1 from ν^1_ind,ext(t) = 200 spikes per second to ν^1_ind,ext(t) = 300. Each layer k > 1 received independent Poisson input at the constant rate ν^k_ind,ext(t) = 200 in addition to the input from the previous layer. Unlike the single population simulations, we did not add any external synchronous input, as we wished to explore the synchrony that emerged from the network.

We created networks with N neurons per layer and randomly connected neurons from each layer onto the subsequent layer so that the expected number of connections onto each neuron was W^1 = 10. (To accomplish this, we randomly selected each of the possible N^2 connections between a pair of layers with probability W^1/N.)

An example of the emergent synchronous activity in feed-forward networks is illustrated in Fig. 7(a). For the network with N = 100 neurons per layer (Fig. 7(a), left column), peaks in the histogram begin to emerge in layer 6 and become prominent in layer 10. These peaks correspond to many neurons firing synchronously within a Δt = 10 ms time window, as the histogram is based on a bin width of Δt.

In the network with N = 1,000 neurons per layer (Fig. 7(a), right column), this synchrony does not appear to emerge, at least within 10 layers, as the histograms remain relatively flat even in layer 10. Such observations make it seem that this synchronization is a finite-size effect and that the number of neurons N plays a critical role in determining the emergence of synchrony (Doiron et al. 2006). We will reexamine this hypothesis below.

The build-up of correlations is illustrated in Fig. 7(b). For the network with N = 100 neurons per layer (Fig. 7(b), left column), the cross-correlation for layer 6 contains a large peak centered around zero delay, and this peak is much larger in layer 10. On the other hand, when N = 1,000, the cross-correlation has only a small peak even in layer 10 (Fig. 7(b), right column). Our goal is to capture this build-up of correlation with our kinetic theory model.

The difficulty of capturing the synchrony with kinetic theory models is illustrated in Fig. 8. The kinetic theory can be thought of as representing the distribution of neuron responses over many realizations of the network response. Figure 8 shows histograms of the spikes of layer 10 of the network with N = 100 neurons per layer. For each of the four top panels, the same input used for Fig. 7(a) was presented, except that the step in input rate to layer 1 (i.e. the stimulus) occurred at

Fig. 7 Illustration of the build-up of correlations within a feed-forward network with ten layers. (a) A histogram of the firing times of all neurons in a layer binned at Δt = 10 ms, normalized to firing rate in spikes per second. The input rate to layer 1 was increased at time t_0 = 1 s (indicated by the arrow). Layers 2, 6, and 10 are shown for a network with N = 100 neurons per layer (left column) and N = 1,000 neurons per layer (right column). For the N = 100 network, sharp peaks in the histograms of layers 6 and 10 indicate simultaneous firing of many neurons within a single time bin. For the N = 1,000 network, although the neurons are firing at a similar rate, no strong peaks indicating synchrony are visible. (b) The cross-correlation calculated from the steady state (t > 2 s) and averaged among all pairs of neurons. Each panel corresponds to the simulation of the respective panel in (a). For the network with N = 100 neurons per layer, a strong peak in the cross-correlation is evident in layers 6 and 10 (note the different scale for layer 10). For the N = 1,000 network, the correlation is small even in layer 10.


Fig. 8 The timing of synchronous bursts of activity will vary with different presentations of the same input. The feed-forward network with N = 100 neurons per layer was repeatedly stimulated with the input of Fig. 7, except that the step of the input occurred at t_0 = 0.05 s (arrows). The histogram of activity of the layer 10 neurons in response to four different presentations is shown in the top four panels. The plot is the same as the bottom left of Fig. 7 except for a smaller bin width of Δt = 2 ms and a smaller range of times. The bottom panel is a peristimulus time histogram (PSTH) showing the average of 4,000 such presentations of the stimulus. Since the synchronous peaks in response to different presentations occurred at different times, the PSTH averages out the peaks and is flat after an initial transient.

t_0 = 0.05 s. In this case, we bin the spikes at a smaller resolution of Δt = 2 ms and show only 0.4 s of data. In response to each of the four stimulus presentations, the layer 10 neurons show a high degree of synchrony, as evidenced by the sharp peaks in the histograms. However, as the stimulus after t_0 is constant, there is no temporal structure that could align the times at which those peaks occur. Hence, in each of the four presentations, the synchronous peaks occur at different times.

The average over 4,000 stimulus presentations is shown in the bottom panel of Fig. 8. In this peristimulus time histogram (PSTH), we see a transient of increased firing rate soon after t_0, but then the firing rate settles down to a constant value. There is no indication that the neurons actually continued to fire synchronously. Due to the distribution of different times of the synchronous peaks, the average removes all of the temporal structure in the firing rate.

Since the kinetic theory represents this average response, the average firing rate r_ave(t) will not include any evidence of the synchronous firing, but instead should approach a constant value during the steady state period. We cannot capture the synchronous peaks of Fig. 7(a). Instead, our goal is a kinetic theory representation of the feed-forward network that captures the build-up of correlations of Fig. 7(b).

    3.2.2 Demonstration of kinetic theory results

To compare the kinetic theory with the Monte Carlo simulations of the feed-forward networks, the first step is to calculate the connectivity statistics for the kinetic theory discussed in Section 2.5. As described above, we generate connections between two layers so that each connection has probability W^1/N, where N is the number of neurons per layer and W^1 = 10. Since, for a given postsynaptic neuron, N possible connections are each chosen with probability W^1/N, the number W^1 is the expected number of connections.

The expected number of shared connections onto any pair of postsynaptic neurons can be calculated from the assumption that all connections are generated independently. Any of the N possible presynaptic neurons has probability W^1/N of being connected to the first neuron of the pair and probability W^1/N of being connected to the second. Given that the connections are generated independently, the probability of being connected to both neurons in the pair is simply (W^1/N)^2. Multiplying by the N possible presynaptic neurons, we calculate that the expected number of shared connections onto any pair of neurons is W^2 = (W^1)^2/N. Hence, the fraction of shared connections parameter is α = W^2/W^1 = W^1/N.
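This calculation can be checked numerically by generating one realization of the random connectivity. The sizes here are illustrative (N = 100 and W^1 = 10, so the prediction is W^2 = 1 and α = 0.1):

```python
import random

random.seed(1)
N, W1 = 100, 10.0
p = W1 / N
# each of the N^2 possible connections between adjacent layers is made
# independently with probability W1/N; conn[i][j] = True means presynaptic
# neuron i projects to postsynaptic neuron j
conn = [[random.random() < p for _ in range(N)] for _ in range(N)]

# first order statistic: mean number of connections onto a single neuron
in_deg = [sum(conn[i][j] for i in range(N)) for j in range(N)]
W1_hat = sum(in_deg) / N

# second order statistic: mean number of shared connections onto a pair of
# distinct postsynaptic neurons; the prediction is W2 = (W1)^2 / N
shared_counts = [sum(conn[i][j] and conn[i][k] for i in range(N))
                 for j in range(N) for k in range(N) if j != k]
W2_hat = sum(shared_counts) / len(shared_counts)

alpha_hat = W2_hat / W1_hat    # should be near W1 / N = 0.1
```

The empirical estimates fluctuate around the predicted values, with the fluctuations shrinking as N grows.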

We simulated feed-forward networks with N = 50, 100, 200, and 1,000 neurons per population. We randomly generated a single network and presented the above step input 400,000/N times, as in Fig. 8. Before beginning each presentation, we allowed the network to equilibrate to steady state in response to the lower input rate (ν^k_ind(t) = 200 spikes per second for all layers). Then, we began to record spikes and jumped the input to layer 1 up to ν^1_ind(t) = 300 spikes per second at time t_0 = 0.05 s.

For each N, we calculated α = W^1/N and created a kinetic theory representation of the network. The kinetic theory network consisted of ten populations, where each population corresponded to a layer of the


network. We initially numerically solved the kinetic theory equations in response to the lower input rate (ν^k_ind(t) = 200 spikes per second for all layers) to calculate the steady state distribution for that input. Using that distribution as the initial conditions, we numerically solved the kinetic theory equations in response to the input described above, increasing the input to layer one at time t_0 = 0.05 s.

For each network, we computed the kinetic theory response using two methods of approximating delayed correlation in the output of each population. In the first method (KT0), we ignored any correlation with non-zero delay in the output of each presynaptic population and computed the input to the corresponding postsynaptic population as though non-synchronous spikes were independent (using Eqs. (10) and (11) to connect populations). In the second method (KT1), we approximated delayed correlation as though it were correlation with zero delay (using Eqs. (13) and (14) to connect populations). In either case, the input to each population k > 1 was a combination of independent and synchronous Poisson input; we did not include delayed input correlation in our kinetic theory implementation.

The results for α = 0.05 (i.e., N = 200 in the Monte Carlo simulations) are shown in Fig. 9. The average firing rate r_ave(t) (Fig. 9(a)) of the kinetic theory closely matched the Monte Carlo for both KT0 and KT1. At least for this network, assuming Poisson input to each layer did not greatly alter the first order output statistic r_ave(t).

For the cross-correlation peak area C_peak(t) (Fig. 9(b)), the results had a dramatic dependence

Fig. 9 Comparison of Monte Carlo and kinetic theory results for the network with α = 0.05 (N = 200 in Monte Carlo). The input rate to layer 1 is stepped up at time t_0 = 0.05 s (arrows). (a) The average firing rate r_ave(t) is plotted for layers 2, 6, and 10. For Monte Carlo simulations, we plot a normalized peristimulus time histogram. The kinetic theory results based on ignoring delayed correlation in the neural output (KT0) are plotted by the thin line. The thick line indicates the kinetic theory results where delayed correlation is approximated as though it were instantaneous correlation (KT1). Both kinetic theory approximations result in nearly identical estimates of r_ave(t), which underestimate the Monte Carlo results only slightly in the deeper layers. (b) The cross-correlation peak area C_peak(t) is plotted for layers 2, 6, and 10. The same plotting convention as panel (a) is followed. KT0 fails to show the build-up of correlation in deeper layers. KT1 captures the correlation build-up, though the correlation magnitude is overestimated.


on the delayed correlation method. If we completely ignored delayed correlation (KT0), the kinetic theory failed to show the build-up of correlation in the deeper layers of the network. On the other hand, when we approximated delayed correlation as instantaneous correlation (KT1), the kinetic theory was able to show the correlation increase. As instantaneous correlation has a stronger effect on the neural response than delayed correlation, the approximation resulted in an overestimation of the actual magnitude of the correlation peak. But the method captured the essence of how the correlations built up in the network.

Clearly, completely ignoring delayed correlation (KT0) fails miserably to capture the build-up of correlations. Approximating delayed correlation as instantaneous correlation (KT1) appears to be a much better method to handle the delayed correlation. From now on, we will focus our attention on KT1.

If we doubled the fraction of shared input to α = 0.1 (halved the number of neurons to the N = 100 of Figs. 7 and 8), our kinetic theory implementation matched the Monte Carlo results less accurately (Fig. 10). KT1 captures the build-up of correlations up through layer 6, still overestimating the correlation magnitude. However, by layer 10, the correlation peak area calculated by KT1 is only about 70% that of the Monte Carlo at steady state. Even the first order statistic r_ave(t) is slightly underestimated by KT1. The approximations underlying our kinetic theory implementation do indeed begin to break down at this high level of correlation, as predicted by the assumptions of our derivation in Section 2.5. We discuss some practical remedies in Section 4.

We examine the structure of the steady-state cross-correlation for these two networks in Fig. 11. Since we approximated delayed correlation as instantaneous,

Fig. 10 Comparison of Monte Carlo and kinetic theory results for the network with α = 0.1 (N = 100 in Monte Carlo). Panels as in Fig. 9. For this large fraction of shared connections α, KT1 captures the initial build-up of correlations as shown in layer 6 (overestimating the actual magnitude as expected), but underestimates the increased correlation in deeper layers. KT0 grossly underestimates the correlations. For both methods, the average firing rate is slightly underestimated in deep layers.


Fig. 11 Comparison of the structure of steady state cross-correlation for layers 2, 6, and 10 of the α = 0.05 network of Fig. 9 (left column) and the α = 0.1 network of Fig. 10 (right column). The Monte Carlo and kinetic theory cross-correlations were computed as in Fig. 6(b); in particular, the delta-function component of the kinetic theory correlation was smoothed over a Δt = 0.5 ms bin centered at the origin. In each panel, the central peak of the cross-correlation, which is used to calculate the peak area C_peak(t), is shown in the main plot. For the deeper layers of the α = 0.05 network and the intermediate layers of the α = 0.1 network, KT1 overestimates the correlation at zero delay. By layer 10 of the α = 0.1 network, KT1 no longer overestimates the zero-delay correlation and significantly underestimates it at non-zero delay. The insets show the correlation over a larger range of delays to reveal the negative correlation at longer delays, which is not captured by KT1. KT0 uniformly underestimates the correlation by a large margin.

we do not expect KT1 to accurately reproduce all the structure of the cross-correlation as a function of delay. Indeed, for the α = 0.05 network, KT1 grossly overestimates the correlation at zero delay in the deeper layers (left column of Fig. 11). This greater correlation at zero delay explains the overestimate of the peak area C_peak(t) as compared to Monte Carlo. As shown by the insets, KT1 also misses most of the negative correlation at larger delays, although this correlation is not included in the peak area C_peak(t) plotted in Fig. 9(b).

For the α = 0.1 network, KT1 overestimates the correlation with zero delay only in the intermediate layers. By layer 10, the zero-delay correlation from the Monte Carlo simulations is as large as the KT1 correlation, despite the fact that KT1 assumes all the input correlation is at zero delay. KT1 significantly underestimates the correlation at non-zero delay, so that the total area in

  • 7/25/2019 Liu 2009 Kinetic

    17/30

    J Comput Neurosci (2009) 26:339368 355

    the peak is well under that of the Monte Carlo, asshown in Fig.10(b).

The cross-correlations of layer 10 for the networks with shared-input fractions 0.01 (i.e., N = 1,000 for Monte Carlo) and 0.2 (i.e., N = 50 for Monte Carlo) are shown in Fig. 12. For the 0.01 network, only a small amount of correlation has built up over 10 layers (cf. the 1,000-neuron plots of Fig. 7). As with the 0.05 case, KT1 overestimates the cross-correlation peak area Cpeak(t) for all times after stimulus onset (top of Fig. 12(a)) due to an overestimate of the small-delay correlation (top of Fig. 12(b)). For the 0.2 network (bottom of Fig. 12), the underestimate of the correlation already seen at 0.1 has become much more dramatic. The KT1 estimate differs little from that for 0.1, although the correlation in the Monte Carlo network doubled. Clearly, the current implementation of the kinetic theory cannot capture the high degree of correlation seen in the networks with larger shared-input fractions. Even the correlation at zero delay (bottom of Fig. 12(b)) is substantially underestimated by KT1.

The steady-state results from all four networks are shown in Fig. 13. The average firing rate (left column) is accurately estimated by both KT0 and KT1, except for small underestimates in the deeper layers at the larger shared-input fractions. For the cross-correlation peak area (right column), KT0 fails to capture even the small correlations of the small networks. On the other hand, KT1 does a reasonable job estimating (albeit overestimating) the correlation as long as the correlation is not too large. Since the KT1 estimate of the cross-correlation peak area Cpeak seems to saturate at around 2.4 spikes per second, the correlation estimated by KT1 falls well behind the actual correlation observed in the Monte Carlo simulations of the 0.1 and 0.2 networks. By layer 10 for the 0.1 network and layer 7 for the 0.2 network, the networks appear to be too correlated for the approximations underlying our kinetic theory implementation. We conclude that KT1 captures the build-up of steady-state cross-correlation in feed-forward networks up through moderate levels of correlation but fails in cases with strong correlation.

As a further test of the kinetic theory, we ran additional groups of simulations. We used three different values of the expected number of connections: W1 = 8, 10, and 12. For each of these values, we ran two


Fig. 12 Comparison of the cross-correlation in layer 10 for the networks with shared-input fractions 0.01 (N = 1,000 for Monte Carlo) and 0.2 (N = 50 for Monte Carlo). (a) The cross-correlation peak area Cpeak(t) is plotted versus time; panels as in Fig. 9(b). The small correlation in the 0.01 network is slightly overestimated by KT1. The large correlation in the 0.2 network is substantially underestimated by KT1. (b) The cross-correlation is plotted versus delay; panels as in Fig. 11. KT1 overestimates the small-delay correlation for the 0.01 network and uniformly underestimates the correlation for the 0.2 network. KT0 misses the correlation in all cases



Fig. 13 Comparison of the steady-state values of the average firing rate rave and cross-correlation peak area Cpeak for all layers of the four tested networks with shared-input fractions 0.01, 0.05, 0.1, and 0.2. Both KT1 and KT0 approximate the steady-state firing rate fairly well for all networks. For shared-input fractions up to 0.05, KT1 captures the build-up of correlations through all ten layers of the network, though it overestimates the magnitude. For the 0.1 and 0.2 networks, KT1 only captures the initial build-up of correlation in the first layers of the network but fails to estimate the large degree of correlation that occurs in the deeper layers. KT0 severely underestimates all correlations

groups of simulations: one where we set the external input rates so that each population fired at around 10–15 spikes per second (the low-rate group) and another where we set the external input rates so that each population fired at around 50–60 spikes per second (the high-rate group). The simulations shown above were therefore from the W1 = 10, low-rate group. For each group, we simulated networks with various shared-input fractions, comparing the KT1 and Monte Carlo results at steady state.

As shown in Fig. 14, KT1 captures the steady-state correlation in all networks as long as the correlation is not too high. For each layer from 2 on (for layer 1, the correlation is zero), we plotted a point giving the steady-state correlation Cpeak as a fraction of the average firing rate rave, as calculated by KT1 and by Monte Carlo simulations. For Cpeak/rave less than about 0.1, the points lie above the diagonal, confirming that KT1 captures the build-up of correlations in all these networks, slightly overestimating the magnitude. Depending on the average connectivity W1 and the firing rate, KT1 reaches a point where the correlation it can represent saturates at some maximal value. For network layers where the correlation estimated by Monte Carlo simulations exceeded this maximum value, the KT1 limitation led to an underestimate of the correlation.


[Fig. 14: scatter plot of Cpeak/rave from KT1 against Cpeak/rave from Monte Carlo, both axes spanning 0 to 0.3; legend: W=8 high rate, W=8 low rate, W=10 low rate, W=10 high rate, W=12 low rate, W=12 high rate.]

Fig. 14 Comparison of the steady-state correlation calculated from KT1 and Monte Carlo for many networks and layers 2 through 10. The cross-correlation peak area Cpeak is plotted as a fraction of the average firing rate rave. The KT1 network tends to overestimate the cross-correlation until KT1 saturates at its maximum possible correlation. (Points where the Monte Carlo Cpeak/rave > 0.3 were omitted; in these cases, the KT1 values simply stayed at the saturation point.) For all low-rate simulations, we set the external input rate to layer 1 to 300 spikes per second, and for all high-rate simulations to 600 spikes per second. For the remaining layers k > 1, the external input rate varied by network as follows (all values in spikes per second). W1 = 8: low rate 220, high rate 300. W1 = 10: low rate 200, high rate 200. W1 = 12: low rate 175, high rate 110

3.2.3 The sufficiency of second order connectivity statistics

We are developing our kinetic theory approach based on the hypothesis that second order statistics of neuronal activity are sufficient to describe much of the behavior of neuronal networks (Schneidman et al. 2006; Shlens et al. 2006; Tang et al. 2008; Yu et al. 2008). We do not test this hypothesis directly here, as we have been examining output statistics of only first and second order (average firing rate and cross-correlation). Even so, we can indirectly test one implicit assumption of the approach: that second order connectivity statistics (the expected number of inputs W1 and the fraction of shared inputs) are sufficient to determine the second order statistics of the network output. We can test this hypothesis by simulating networks with different types of connectivity patterns that have the same second order connectivity statistics but differ in third and higher order statistics.

In Appendix B, we calculate the statistics W1 and the fraction of shared inputs for networks with prescribed outgoing degree distributions and with incoming degree distributions that are equivalent to those of the random networks described above. We then generated multiple ten-layer feed-forward networks with W1 = 10 and various shared-input fractions between 0 and 0.2. We subjected the networks to the same input conditions as the W1 = 10, low-rate networks described above and calculated the steady-state values of the average firing rate rave and cross-correlation peak area Cpeak for layer 10.

For each set of network parameters, we performed two types of Monte Carlo simulations. First, we randomly generated a single network and simulated the response of this fixed network to 400·max(1, 1000/N) presentations of the input, where N is the number of neurons per layer. This first type of simulation corresponds to the Monte Carlo simulations employed above. For the second type of Monte Carlo simulation, we simulated the network response to the same number of presentations of the input, but this time, for each input presentation, we regenerated a new realization of the network. By rewiring the network for each realization, we averaged over the fluctuations due to particular network configurations so that we could better estimate how the output statistics depend on the type of network. Note that the kinetic theory networks are designed to capture the behavior of the average network with particular network statistics, as represented by the latter type of Monte Carlo simulation.
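The two protocols differ only in where the network realization is drawn. A minimal sketch (function names are ours; `make_net` draws a fresh connectivity realization and `simulate` returns whatever output statistic is being tracked for one input presentation):

```python
import numpy as np

def averaged_network(make_net, simulate, n_trials):
    """Protocol 2: regenerate the circuitry for every input presentation,
    averaging away fluctuations due to any one particular wiring."""
    stats = [simulate(make_net()) for _ in range(n_trials)]
    return np.mean(stats, axis=0)

def fixed_network(make_net, simulate, n_trials):
    """Protocol 1: draw one network and present the input repeatedly."""
    A = make_net()                      # single fixed realization
    stats = [simulate(A) for _ in range(n_trials)]
    return np.mean(stats, axis=0)
```

The averaged-network protocol is the one the kinetic theory is meant to mirror, since the kinetic theory equations see only the connectivity statistics, not any particular wiring.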

We simulated Monte Carlo networks with seven different degree distributions. The first network class was composed of random networks, as described above (which have a binomial degree distribution). The second consisted of networks with outgoing degree distributions that followed a power law. Then, we simulated networks with a power-law outgoing degree distribution, except that we truncated the maximum outgoing degree at different values: 100, 500, 2,000, and 5,000. (Clearly, these differed from the networks with the unmodified power-law distribution only if the number of neurons per layer N was larger than the cutoff.) Lastly, we simulated networks with outgoing degree distributions that were Gaussian.
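To make the construction concrete, here is one way to generate a single layer-to-layer connectivity matrix for two of these classes. This is a sketch under our own assumptions: the power-law exponent, the Pareto sampler, and the rescaling-to-mean-W1 step are illustrative, and the paper's exact constructions are given in its Appendix B. In every class, postsynaptic targets are assigned uniformly at random:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_layer(N, W1):
    """Random (binomial) class: each of the N*N possible connections is
    present independently with probability W1/N, giving mean in-degree W1."""
    return rng.random((N, N)) < W1 / N   # A[i, j]: presynaptic i -> postsynaptic j

def powerlaw_layer(N, W1, exponent=2.5, cap=None):
    """Power-law outgoing degrees (optionally truncated at `cap`), rescaled so
    the mean out-degree, and hence the mean in-degree, is approximately W1."""
    deg = np.floor(rng.pareto(exponent - 1, N) + 1)
    if cap is not None:
        deg = np.minimum(deg, cap)
    deg = np.minimum(np.round(deg * W1 / deg.mean()).astype(int), N)
    A = np.zeros((N, N), dtype=bool)
    for i, d in enumerate(deg):
        # postsynaptic targets are always chosen uniformly at random
        A[i, rng.choice(N, size=d, replace=False)] = True
    return A
```

Both constructions yield the same mean in-degree but very different higher-order structure: the power-law class concentrates many outgoing connections on a few hub presynaptic neurons, which raises the fraction of shared inputs between postsynaptic pairs.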

As described in Appendix B, each class of networks has a single free parameter once the layer size N is specified. This parameter was set to match the condition that the expected number of incoming connections was W1 = 10. The value of that parameter determined the fraction of shared inputs. In this way, for each class of network, the shared-input fraction was a function of N. Since this function depended on the class of network, we could obtain different values of the shared-input fraction for each N. By examining the output statistics rave and Cpeak evaluated at the steady state of layer 10, we could investigate whether these output statistics were primarily determined by the shared-input fraction (as we hypothesize in the formulation of the kinetic theory equations) or by the layer size N (as Fig. 7 seemed to indicate).
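One way to estimate the realized fraction of shared inputs from a given connectivity matrix is to average the number of common presynaptic inputs over distinct postsynaptic pairs and divide by the mean in-degree. This is our formulation of the statistic for illustration; the paper computes it analytically per network class in Appendix B:

```python
import numpy as np

def shared_input_fraction(A):
    """A[i, j] = True if presynaptic neuron i connects to postsynaptic neuron j.
    Returns the mean number of common inputs per distinct postsynaptic pair,
    divided by the mean in-degree."""
    Af = A.astype(float)
    common = Af.T @ Af                   # common[j, k] = # inputs shared by j and k
    npost = A.shape[1]
    off = ~np.eye(npost, dtype=bool)     # exclude self-pairs
    return common[off].mean() / Af.sum(axis=0).mean()
```

The statistic is 1 when every postsynaptic neuron draws from an identical input pool and 0 when input pools are disjoint.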

The results of these simulations are shown in Fig. 15. If we look only at networks where we regenerated the network circuitry for each realization (filled symbols), we see that both the average firing rate rave (Fig. 15(a)) and the cross-correlation peak area Cpeak (Fig. 15(b)) are very well approximated as functions of the shared-input fraction. (The

[Fig. 15 legend: KT1; KT0; Monte Carlo Random; Monte Carlo Power Law; Monte Carlo Power Law 100; Monte Carlo Power Law 500; Monte Carlo Power Law 2000; Monte Carlo Power Law 5000; Monte Carlo Gaussian. Panels: (a) firing rate (spikes/second) and (b) CC peak area (spikes/second), each versus shared-input fraction; (c) CC peak area (spikes/second) versus number of neurons.]

Fig. 15 Steady-state values of the average firing rate rave and cross-correlation peak area Cpeak in layer 10 for different classes of networks. Filled symbols correspond to Monte Carlo simulations of averaged networks, where the network connectivity was regenerated at each realization. Open symbols correspond to simulations of a single fixed network of the same network class as the corresponding filled symbol. The number for the power-law networks indicates the maximum outgoing degree allowed. (a) The firing rate for averaged networks increases modestly with the shared-input fraction, and the fixed-network values are more scattered. The KT1 and KT0 results are similar to those of the averaged networks, though they show little increase with the shared-input fraction. (b) The cross-correlation peak area Cpeak for averaged networks appears to be nearly a function of the shared-input fraction, as the filled symbols lie along a single curve. Open symbols are scattered around the filled symbols, indicating the variability observed among the fixed networks. The inset shows detail of results for shared-input fractions up to 0.05 (shaded region). In this range, KT1 is a good approximation of the averaged-network results, though the magnitude is overestimated. For larger shared-input fractions, KT1 does not show the increase in Cpeak. KT0 fails to capture Cpeak even for small shared-input fractions. For display purposes, Cpeak was truncated to 10 for two points. (c) The cross-correlation peak area Cpeak is not well approximated as a function of layer size N. For any value of N, the correlation depends strongly on the type of network. To correspond to panel (b), only points with shared-input fractions up to 0.2 are shown


open symbols, corresponding to simulations of fixed networks, have a large spread around the filled symbols.) Moreover, for shared-input fractions up to 0.05, the functions given by the filled symbols are close to that predicted by KT1 (though the KT1 curve overestimates the magnitude of the cross-correlation). Importantly, even for large shared-input fractions the filled symbols nearly lie along a single curve, indicating that the second order connectivity statistics are sufficient to determine the second order output statistics. Although the KT1 curve strongly deviates from the Monte Carlo curve for large shared-input fractions, the fact that the Monte Carlo points lie along a single curve indicates that it may be possible for a kinetic theory based on W1 and the shared-input fraction to capture the full range of correlations (see Section 4).

In contrast, the plot of Cpeak versus N (Fig. 15(c)) indicates that layer size N is not a good predictor of the correlation if the class of network is not specified. Even the filled symbols have a large spread for all layer sizes. If one restricts attention to a particular network class, then the filled symbols do fall along a single curve, which reflects the fact that the shared-input fraction is a function of N when the network class is fixed. These results clearly demonstrate that the fraction of shared inputs, not layer size N, is critical for determining the build-up of correlations in feed-forward networks.

As a final illustration of the fact that the first and second order connectivity statistics, not network size, determine the network behavior, we created four different networks with vastly different sizes and the same connectivity statistics. In each case, we created networks to match the connectivity statistics W1 = 10 and shared-input fraction 0.05 of the network from Fig. 9. If we used a power-law outgoing degree distribution capped at 5,000, we needed N = 17,500 neurons per layer to reduce the shared-input fraction down to 0.05. Capping the outgoing degree at 2,000 and 500 reduced the layer size to N = 8,350 and N = 2,750, respectively. In comparison, the random network of Fig. 9 had only N = 200 neurons per layer. For each network, we stepped up the input rate at t = 0.05, using the input rates from Fig. 9. We presented this input 2,000 times and regenerated the circuitry for each realization.

The neuronal output of layer 10 of each network is shown in Fig. 16. The activity of each network is similar, indicating that the first and second order connectivity statistics did determine most of the behavior. Some minor differences are that the random network had slightly lower steady-state correlation and some of the power-law networks appeared to have a larger transient correlation in response to the input step. Hence, although the second order connectivity statistics did not determine every detail of the neuronal activity, the connectivity statistics W1 and the shared-input fraction did capture the essential features of the connectivity needed to describe most of the second order statistics of the network activity.

    4 Discussion

We have developed a kinetic theory approach that captures the build-up of correlations in feed-forward neural networks as long as the cross-correlation does not become too large. We demonstrated that although approximating delayed correlation as instantaneous correlation resulted in an overestimate of the magnitude of the output cross-correlation, it accurately represented how the correlations increase with respect to layer and with respect to the fraction of shared inputs in the connectivity structure.

    4.1 Sufficiency of second order connectivity

Our goal is to use a second order kinetic theory description to describe the behavior of neuronal networks. We base this goal on the assumption that describing neuronal activity via second order statistics will be sufficient to explain much of the behavior of the networks (cf. Schneidman et al. 2006; Shlens et al. 2006; Tang et al. 2008; Yu et al. 2008). In order to derive equations by which to couple the population densities of the kinetic theory, we distilled the network connectivity patterns between two populations into first and second order statistics. Our hypothesis was that these first and second order connectivity statistics would be sufficient to specify the resulting first and second order statistics of the neuronal activity. It is not obvious that such a hypothesis should be true, and we expect that higher order statistics will have some effect. For example, the second order connectivity statistic (the shared-input fraction) does have a small effect on the first order output statistic rave, as shown by the slight increase of the filled dots in Fig. 15(a). The fact that we demonstrated that the cross-correlation peak area is primarily determined by the shared-input fraction (for fixed first order statistic W1 and fixed input rates) is encouraging support for the notion that our kinetic theory approach may be able to capture the fundamental mechanisms underlying the emergence and propagation of correlations.

One caveat should be stressed. In all the Monte Carlo networks we used to generate Fig. 15, we varied only the outgoing degree distribution of the nodes. We did not, for example, create networks with power-law incoming degree distributions. Especially since we do not include inhibition, we could not include neurons that had 100 times more inputs than average. If we did,



Fig. 16 Comparison of layer 10 results from four different networks with different numbers of neurons but the same connectivity statistics (W1 = 10 and shared-input fraction 0.05). The top row is a network with N = 17,500 neurons per layer and a power-law outgoing degree distribution capped at a maximum outgoing degree of 5,000. In the second and third rows, the power-law degree distribution was capped at 2,000 and 500, respectively, and the number of neurons per layer was 8,350 and 2,750, respectively. The fourth row is a random network with N = 200 neurons per layer. In each case, network activity was obtained in response to 2,000 realizations of the same input as for Figs. 9–13, and the network circuitry was regenerated for each realization. Except for this circuitry regeneration, the fourth row is the same as the bottom row of Fig. 9 and the bottom left of Fig. 11. The kinetic theory results are simply replotted for each row. (a) Average firing rate rave(t), plotted as in Fig. 9(a). (b) Cross-correlation peak area Cpeak(t), plotted as in Fig. 9(b). (c) Steady-state cross-correlation structure, plotted as in Fig. 11. Despite the vastly differing network sizes, the network output is similar because for each network class we chose the network size to match the connectivity statistics W1 = 10 and shared-input fraction 0.05

those neurons would have fired at unreasonable rates. Hence, in all networks, we always assigned postsynaptic targets randomly.

    4.2 Related kinetic theory and correlation analyses

Our kinetic theory approach is based on the population density formulation of interacting neuronal populations introduced by Knight, Sirovich, and colleagues (Knight et al. 1996; Omurtag et al. 2000b; Sirovich et al. 2000; Omurtag et al. 2000a; Knight 2000; Casti et al. 2002; Sirovich et al. 2006; Sirovich 2008) that was further developed by Tranchina and colleagues (Nykamp and Tranchina 2000, 2001; Haskell et al. 2001; Apfaltrer et al. 2006). Although there are many examples of using kinetic theory to model neuronal networks (Abbott and van Vreeswijk 1993; Barna et al. 1998; Brunel and Hakim 1999; Gerstner 2000; Câteau and Fukai 2001; Hohn and Burkitt 2001; Mattia and Del Giudice 2002; Moreno et al. 2002; Meffin et al. 2004; Câteau and Reyes 2006; Doiron et al. 2006; Cai et al. 2006; Huertas and Smith 2006), most approaches do not explicitly represent correlations among neurons but instead assume


each neuron in a population is an independent sample from the distribution.

A notable exception is upcoming work by de la Rocha et al. (2008), where they also develop a kinetic theory based on a pair of neurons with correlated input. In their case, they assume the inputs are Gaussian white noise and are even able to derive an analytical formula for the output cross-correlation. Their goal is also to study how correlated activity propagates over multiple layers of a network.

There have been many studies characterizing how correlated inputs to uncoupled neurons lead to correlated outputs (Dorn and Ringach 2003; Galán et al. 2006; Svirskis and Hounsgaard 2003; Moreno-Bote and Parga 2006; Binder and Powers 2001; Shadlen and Newsome 2001). Recent results have shown how, under certain circumstances, the amount of input correlation that is transferred to output correlation depends primarily on the firing rate (de la Rocha et al. 2007), and one can further characterize this relationship between input and output correlation (Shea-Brown et al. 2008). One additional challenge we have faced in our kinetic theory implementation is how to transform the output correlation of a presynaptic population into the input correlation of its postsynaptic population.

Our application of kinetic theory to correlations in feed-forward networks was motivated by experimental observations of the emergence of sustained synchrony in such networks (Reyes 2003) and applications of kinetic theory to understand their properties (Câteau and Reyes 2006; Doiron et al. 2006). The authors discovered that typical kinetic theory implementations miss the build-up of sustained synchrony. Although an ad hoc finite-size correction (Brunel and Hakim 1999; Mattia and Del Giudice 2002; Hohn and Burkitt 2001) can restore elem