UNIVERSIDAD AUTÓNOMA DE MADRID
Facultad de Ciencias
Departamento de Física Teórica

Computational consequences of Short Term Synaptic Depression

Jaime de la Rocha Vázquez

Doctoral thesis
submitted to the Facultad de Ciencias
of the Universidad Autónoma de Madrid
for the degree of Doctor in Physical Sciences

Thesis supervised by Néstor Parga Carballeda

Madrid, November 26, 2002
I dedicate this thesis to my parents, Paloma and Manolo,
for all the support and affection they have given me, and for
everything they have taught me.
Also to Mónica, for sharing her life with me.
Contents

Table of contents
Acknowledgments
Preface

1 Introduction: the synaptic function
1.1 Anatomical description of a synapse
1.2 Synaptic transmission and synaptic dynamics
1.2.1 Unreliability
1.2.2 Short term depression
1.2.3 Facilitation and other mechanisms
1.2.4 Univesicular release hypothesis
1.2.5 Synaptic diversity
1.3 Objectives and overview of this work

2 A model of synaptic depression and unreliability
2.1 Experimental motivation of the model
2.2 Methods
2.2.1 Model of one synaptic contact with many docking sites
2.2.1.1 Vesicle release
2.2.1.2 Vesicle recovery
2.2.2 Input spike-train statistics
2.2.3 Statistics of the synaptic response
2.2.4 Population of synapses
2.3 Results: synaptic response statistics
2.3.1 Single docking site: N0 = 1
2.3.1.1 Poisson input
2.3.1.2 Correlated input
2.3.2 Multiple docking sites: N0 > 1
2.4 Tables of symbols

3 Information transmission through synapses with STD
3.1 Introduction
3.2 Methods
3.2.1 Fisher Information
3.2.2 Mutual Information
3.2.3 Mutual Information in a population of synapses
3.2.4 Optimization with metabolic considerations in the recovery rate
3.2.5 Numerical methods
3.3 Results
3.3.1 Rate dependence of information measures
3.3.1.1 Dependence of the Fisher information on ν
3.3.1.2 Dependence of the Mutual Information on f(ν)
3.3.2 Optimization of the recovery time constant τv
3.3.2.1 Optimizing τv with the Fisher Information
3.3.2.2 Optimizing τv with the Mutual Information
3.3.2.3 Several vesicles: N0 ≥ 1
3.3.2.4 Dependence of τopt on the other parameters
3.3.2.5 Metabolic considerations
3.3.3 Optimization of the release probability U
3.3.3.1 Optimizing U with the Fisher Information
3.3.3.2 Optimizing U with the Mutual Information
3.3.4 Optimization of the distribution of synaptic parameters
3.4 Conclusions and Discussion

4 Synaptic current produced by synchronized neurons
4.1 Introduction
4.2 Parameterization of the afferent current
4.3 Afferent spike trains
4.4 Statistics of the synaptic releases
4.4.1 Dynamics of one synaptic contact between two neurons
4.4.2 Several synaptic contacts between two neurons
4.4.3 Release correlations among two synapses from different neurons
4.5 Statistics of the total afferent current
4.5.1 The mean of the afferent current
4.5.2 Correlations of the current
4.5.2.1 Auto-correlation in single contacts
4.5.2.2 Cross-correlation between pairs of contacts with the same input train
4.5.2.3 Cross-correlation between pairs of contacts with correlated input trains
4.5.2.4 Total current correlation

5 The response of a LIF neuron with depressing synapses
5.1 The leaky integrate-and-fire (LIF) neuron
5.2 The analytical calculation of the output rate of a LIF neuron
5.2.1 The diffusion approximation
5.2.2 The solution of νout for a white noise input
5.2.3 Perturbative solution of νout for a correlated input
5.2.4 Several input populations
5.3 Results
5.3.1 The saturation of µ and the subthreshold regime
5.3.2 The modulation of the variance
5.3.2.1 Varying M, with CM constant
5.3.2.2 Varying M, with MJ constant
5.3.3 The output rate of a LIF neuron
5.3.3.1 Output rate at CM fixed
5.3.3.2 Saturation eliminates synchrony
5.3.3.3 Output rate at MJ fixed
5.3.4 Information beyond saturation of the mean current
5.4 An interesting experiment
5.5 Conclusions

A The exponentially correlated input
B Computing ρiri(t|ν) for any renewal input
C The calculation of the population distribution D(U, N0)
D Computation of the output when N0 = 1
D.1 Computation of the p.d.f. of the IRIs ρiri(t)
D.2 Computation of the correlation function of the IRIs
D.3 Release firing rate νr and coefficient of variation CViri
E Computation of the conditioned probability ⟨pv(β|α)⟩
F Computation of the conditioned probability ⟨pv(j|i)⟩

Bibliography
List of Figures
List of Tables
Acknowledgments

First of all, I want to thank Néstor, for always being there and for having taught me so
much. Secondly, my colleagues in the group: José, for his help in the final stretch; Rubén,
for everything we learned together; Gonzalo, for his very helpful comments on the thesis;
Ángel, for his wise calm and his generous character; Alfonso, for teaching me that
calculations can be beautiful and for getting me excited about this work. My contemporaries
Enrique, Alex, Ernesto, David T. and good old Yago, with whom I shared an office for so
many years. I am especially grateful to Natxo for all his help and affection, and to
Stéphane for everything we shared.

Thanks also to my parents for their unconditional support, and to my siblings Manolo,
Marta and Miguel. To my lifelong friends, Isra, Marcos, Cucho, Manus, Juan, Víctor and Iván.
Also to Alberto, Yago, Dani and Pardo. To my university classmates, Iván, Estela, Raúl,
Elena, and the rest.

Special thanks to Mónica, for her planning, her efforts, her care and her affection.
Preface
The nervous system of an organism collects information about its environment, processes
it, and computes a behavioral response. The organism's survival in its environment relies on
its capability to perform this task in an optimal manner. An interesting idea is that evolution,
viewed as the natural selection of the organisms which best adapt to the environment, can be
interpreted as an optimization of the strategies the animal uses to overcome the difficulties
that threaten its survival.

A key optimization of the system would be to obtain an efficient internal representation
of the sensory world. This information is represented in the brain by the activity of the
neurons, that is, the emission of action potentials or spikes. In particular, the activity rate
of specific populations of neurons is believed to convey, at least, part of this information
[Adrian, 1926, Werner and Mountcastle, 1965, Tolhurst et al., 1983, Tolhurst, 1989, Britten
et al., 1992, Tovee et al., 1993]. The way this information is processed and the computa-
tions which are performed are still a puzzle for neuroscientists. Some operations seem to be
performed by cortical microcircuits, whereas the capability of individual neurons to accom-
plish computations is still an open question [Koch and Segev, 2000]. Whatever the case, if
information is encoded in the firing frequency, neurons receiving those spikes downstream
need to have an efficient and robust internal representation of those pre-synaptic rates. The
striking feature about this description is that the pre-synaptic spikes reach the neuron through
a noisy dynamical channel: the chemical synapse.
The motivation of my work is to study whether or not the dynamical properties of the
synapses improve the internal representation of the pre-synaptic information. I will focus on
synaptic unreliability and short-term depression, and I will address the question of whether
their existence can be postulated in terms of an optimization criterion. In other words, I will
optimize the transmission of information through a model synapse by tuning its parameters,
and check if the resulting model resembles the biology described in the experiments. An
additional working hypothesis is that this could occur if the information is encoded in the
spike trains in a redundant fashion so that synapses may optimize the internal representation
by filtering out that redundancy [Barlow, 1961, Atick, 1992, Nadal and Parga, 1994, Dan
et al., 1996, Nadal et al., 1998, Goldman et al., 2002].
In order to perform the optimization of the synaptic channel, I will quantify the trans-
fer of information across the synapse by using Information Theory. Information Theory
[Shannon, 1948] is a mathematical framework which has been widely used in Computational
Neuroscience to quantify how much information is encoded in neuronal activity [Borst and
Theunissen, 1999]. In this thesis it will be used to quantify the goodness of the representa-
tion provided by the synaptic responses (or transmitter releases) about the pre-synaptic firing
rate.
Neurons, on the other hand, must not only obtain an efficient representation of the in-
coming information, but process it and eventually carry out simple computations. In the last
years the notion of what neurons can do has evolved enormously. The idea that neurons are
just simple current integrators has been overcome by showing that neurons are provided with
the necessary biophysical machinery (such as dynamic synapses, active conductances, etc)
to process the pre-synaptic activity in a rather sophisticated manner. The role of synaptic dy-
namics in increasing the computational capabilities of neurons has just started to be explored
[Abbott et al., 1997, Tsodyks and Markram, 1997, Lisman, 1997, Senn et al., 1998, Chance
et al., 1998a, Maass and Zador, 1999, Matveev and Wang, 2000a, Natschlager and Maass,
2001, Maass and Markram, 2002, Fuhrmann et al., 2002, de la Rocha et al., 2002]. For exam-
ple, an important consequence of short-term depression is that it constrains the range of rates
that can be transmitted to the post-synaptic neuron: since the recovery of synaptic resources
takes about a few hundred milliseconds independently of the pre-synaptic frequency, it
sets an upper bound to the synaptic activity rate [Abbott et al., 1997, Tsodyks and Markram,
1997]. Beyond this limit, the synapse saturates and the mean post-synaptic current becomes
independent of presynaptic rate. However, neurons are driven not only by mean synaptic ac-
tivity but also by the fluctuations of this activity around the mean [Amit and Tsodyks, 1991,
Shadlen and Newsome, 1994, 1998]. Thus, the stochasticity of synaptic transmission and the
correlations of the input spike trains can play a crucial role in driving the neuronal response.
I will explore the implications of short-term depression and the stochasticity of transmission
when the presynaptic population shows correlated activity. I will show how both aspects may
combine, yielding a non-monotonic transfer function, which enables the neuron to extract
information about the input rate, even beyond the saturation of the mean synaptic current.
Part of the work presented in this thesis has been published or presented in the following
articles and conferences:
• Jaime de la Rocha, Angel Nevado and Nestor Parga.
Information transmission by stochastic synapses with short term depression: neural
coding and optimization.
Neurocomputing, 44:85-90 (2002).
• Jaime de la Rocha, Angel Nevado and Nestor Parga
Information transmission by stochastic synapses with short-term depression: neural
coding and optimization.
Oral communication in the Tenth Annual Computational Neuroscience Meeting.
(CNS2001. Monterey - Pacific Grove, USA. June 30 - July 5, 2001)
• Nestor Parga, Jaime de la Rocha and Angel Nevado.
Information transmission of correlated spike trains through stochastic synapses with
short-term depression.
Poster communication in the 31st Annual Meeting of the Society for Neuroscience
(San Diego. December, 2001).
Society for Neuroscience Abstracts 27.
• Nestor Parga, Jaime de la Rocha and Angel Nevado.
Information processing by depressing synapses.
Oral communication in the Program on Dynamics of Neural Networks: From bio-
physics to behavior. (Jul-Dec 2001).
Institute for Theoretical Physics, University of Santa Barbara, USA.
• Nestor Parga, Jaime de la Rocha and Angel Nevado
Optimizing information processing by depressing synapses
Oral communication in the Trimester on Neuroscience and Computation.
Centre Emile Borel - Institut Henri Poincare, Paris. February 2002
• Ruben Moreno, Jaime de la Rocha and Nestor Parga.
Response of a leaky integrate and fire neuron when stimulated with synapses showing
short-term depression.
Poster communication in the 32nd Annual Meeting of the Society for Neuroscience
(Orlando. November, 2002).
Society for Neuroscience Abstracts 28: 752.6
• Ruben Moreno, Jaime de la Rocha, Alfonso Renart and Nestor Parga.
Response of spiking neurons to correlated inputs.
Physical Review Letters, 89 (28), 288101 (2002)
Chapter 1
Introduction: the synaptic function
1.1 Anatomical description of a synapse
Neurons in the nervous system (NS) communicate with each other through synapses.
The synapse is the connection where two or more neurons establish a functional contact,
that is, a “touching” point where a communication channel is built. There are two basic
types of synapses: electrical and chemical. In the first type, also called gap junctions, neurons
exchange ions that cross the plasma membranes of both cells through a special kind of
channel called connexons. The second, and more common, type of synapse in mammals
is the chemical synapse, in which neurons “talk” to each other by means of chemical
substances, the neurotransmitters. In the most common case, an axonal fiber forms a cavity
called synaptic bouton, or pre-synaptic terminal, which is located very close to a dendritic
spine1, to the dendritic shaft or to the soma (cell body) of the post-synaptic cell. The pre- and
post-synaptic membranes are separated by the synaptic cleft, which is approximately 20–50
nm wide. The pre-synaptic bouton is filled with dozens of little membrane-enclosed sacks
(about 50 nm in diameter), eventually filled with transmitter, which are called synaptic vesicles.
At a certain number of specific locations of the synapse, both pre- and post-synaptic neurons
develop the necessary machinery which mediates release and reception of transmitter. These
particular areas within the synapse, which look very dense under the electronic microscope,
are called synaptic specializations (see the micrograph in figure 1.1 and the drawing in figure 1.2).
The number of specializations in a synaptic bouton depends on the synapse type and on its
size. Glutamatergic synapses onto CA1 pyramidal neurons in the hippocampus often display
one or two [Sorra and Harris, 1993, Schikorski and Stevens, 1997]. Other types of larger
boutons may have many separate specializations (from 15–20 in the case of the contacts
1 A small bag of membrane that protrudes from the dendrites of some cells and receives synaptic input [Bear
et al., 1996].
Figure 1.1: Electron micrograph of a synapse from the stratum radiatum in CA1 in the
hippocampus of an adult mouse. Three active zones are delimited by arrows at both pre-
and post-synaptic side. Head arrows indicate the presence of a docked vesicle. Scale bar:
0.5 µm. (Taken from [Schikorski and Stevens, 1997])
between spindle afferents and spinocerebellar tract cells [Walmsley, 1991], to hundreds in
the neuromuscular junctions [del Castillo and Katz, 1954a, Katz and Miledi, 1968] or the
calyx of Held synapse [Held, 1893, Ryugo et al., 1996] in the auditory pathway).
At the pre-synaptic side of the specialization, also defined as the active zone, in the plasma
membrane we can find: i) voltage-gated calcium channels; ii) a series of filament-shaped
proteins which are responsible for keeping some vesicles attached, or docked, at the
membrane (see head arrows in the micrograph in fig. 1.1). Upon the influx of calcium (see
yellow trace in fig. 1.2), these proteins can make one of the docked vesicles fuse its membrane
with the cell membrane. In this fusion operation, called exocytosis, the transmitter content
of the vesicle is released into the synaptic cleft (see red trace in fig. 1.2). The signal that
triggers the release, i.e. the elevation of the intracellular calcium concentration [Ca2+]i, is
elicited by the arrival of an action potential (AP) to the pre-synaptic terminal. The released
neurotransmitter diffuses throughout the synaptic cleft eventually reaching the post-synaptic
membrane, where it binds to specific receptors. This causes the influx of a post-synaptic
current (PSC) into the membrane of the post-synaptic cell, via directly coupled channels or
activation of second-messenger pathways. Depending on the type of synapse, the released
transmitter may cause a depolarization of the post-synaptic cell by means of an excitatory post-
synaptic current or EPSC (e.g. if the transmitter is glutamate), or a hyperpolarization by
means of an inhibitory post-synaptic current (IPSC), e.g. if the transmitter is GABA.
Figure 1.2: Description of the morphology of a synapse and the mechanism of vesicular
release. Top: A synaptic bouton with three synaptic specializations (or active zones) is
shown. Vesicles are painted in red. Bottom: Detailed description of an active zone. Many
vesicles are separated from the membrane (reserve pool), while the one on the right (touching
the plasma membrane) represents a docked vesicle. In the center an exocytosis event is
taking place: influx of calcium (shown in yellow) has caused a nearby docked vesicle to fuse
with the membrane and release its transmitter content (shown in red) to the synaptic cleft.
(Taken from [Walmsley et al., 1998])
1.2 Synaptic transmission and synaptic dynamics
Although the basic picture of how synaptic transmission occurs is partly described by
the previous explanation, and it is believed to be similar for most synapses in both inver-
tebrates and vertebrates, many important mechanisms involved in synaptic transmission are
not completely understood. Here, we will discuss briefly the main properties of synaptic
transmission which are relevant for this work.
1.2.1 Unreliability
Central synapses are usually very unreliable [Hessler et al., 1993, Rosenmund et al.,
1993, Allen and Stevens, 1994, Stevens and Wang, 1994]. Only a fraction of the incoming
APs elicit measurable responses at the post-synaptic terminal. Thus, transmission has been
modeled as a stochastic process with a certain probability of release, pr [del Castillo and
Katz, 1954a, Boyd and Martin, 1956].
Katz and colleagues [del Castillo and Katz, 1954a, Katz, 1969] discovered the quantal
nature of transmission at the neuromuscular junction of the frog. They first described the
synaptic transmission as a probabilistic process and initiated what has become a widely used
method in the study of synapses, quantal analysis [del Castillo and Katz, 1954a, Boyd and
Martin, 1956]. This method assumes that the pre-synaptic terminal has n release sites, where
individual vesicles may independently fuse to the membrane with a uniform probability p.
The morphological correlate of a release site (which is a heuristic definition coming from
quantal analysis) can be an active zone if, at each of them, only a single vesicle release
can occur upon the arrival of an AP (see section 1.2.4 below about the uni-vesicular release
hypothesis). If, on the contrary, one assumes that multiple vesicles can be released at a single
active zone, then the number of release sites within an active zone would be greater than one.
The method of quantal analysis is completely specified by a third parameter Q, which
represents the mean amplitude of an EPSC elicited by the release of a single vesicle. Thus,
the number of releases upon the arrival of a spike follows a binomial distribution with parameters
n and p. Via the analysis of the histogram of the synaptic response amplitude (the size of the
PSC), one can estimate the three parameters n, p and Q [Tuckwell, 1988].
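As a concrete illustration of this binomial picture, the quantal model can be simulated in a few lines (a minimal sketch of my own; the parameter values and function name are arbitrary illustrations, not the estimation procedure of the cited works):

```python
import random

def quantal_response(n, p, Q):
    """One stimulus in the quantal model: each of the n release sites
    independently releases a vesicle with probability p, and each
    release contributes a PSC of mean amplitude Q."""
    releases = sum(1 for _ in range(n) if random.random() < p)
    return releases * Q

# The mean response amplitude over many trials approaches n * p * Q.
random.seed(1)
n, p, Q = 5, 0.3, 10.0  # arbitrary illustrative values
amplitudes = [quantal_response(n, p, Q) for _ in range(100_000)]
mean_amp = sum(amplitudes) / len(amplitudes)
print(f"mean amplitude: {mean_amp:.1f} (n*p*Q = {n * p * Q:.1f})")
```

In an actual quantal analysis one would histogram such amplitudes, whose peaks sit at integer multiples of Q, and fit n, p and Q to the histogram; the sketch above only checks the mean against n p Q.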
In synapses like the neuromuscular junction, where the number of release sites (or the
maximum number of vesicles which simultaneously undergo exocytosis) is of the order of
several hundreds, unreliability produces fluctuations from stimulus to stimulus of the re-
sponse amplitude. On the other hand, the effect of unreliability in synapses with only one or
two release sites is much more noticeable. A probability of transmission of, e.g., 0.1 implies
that, on average, nine out of ten spikes would fail to transmit any information. Whether this
represents a limitation of the brain wet-ware or if, on the contrary, it represents a functional
advantage for the NS is still an open and interesting issue [Smetters and Zador, 1996].
This last possibility seems more plausible in the light of observations showing failure-free
synapses outside [Paulsen and Heggelund, 1994, 1996, Bellingham et al., 1998] and inside
[Stratford et al., 1996] the brain. If the NS is able to produce reliable contacts but most of
them are unreliable, the latter are likely to be advantageous in some sense. Recently,
[Levy and Baxter, 2002] argued that unreliability may be an optimal mode of transmission
under energy constraints. In contrast, other studies which explored the impact of synaptic
failures on information transmission without metabolic constraints have shown that unreliability
severely reduces the information content of the output [Zador, 1998, Fuhrmann et al.,
2002]. In addition, it has been shown that the release probability pr is subject to plasticity
[Markram and Tsodyks, 1996a], so that it could be involved in long-term synaptic changes
related to experience and learning.
The causes of unreliability in transmission have been intensively investigated in in
vitro experiments [Rosenmund et al., 1993, Hessler et al., 1993, Allen and Stevens, 1994].
Although the mechanisms are not entirely clear, several experiments have shed light on
several points: first, synaptic unreliability is not due to a failure of nerve impulses to arrive
at the synapse [Allen and Stevens, 1994]. Second, release probability correlates with the
number of morphologically identified vesicle docking sites [Schikorski and Stevens, 1997],
which are special positions in the active zone where vesicles dock before releasing the
transmitter. The number of docking sites, in turn, is positively correlated with the size of the
active zone [Schikorski and Stevens, 1997], which implies that release probability is larger
at larger synaptic specializations. In addition, several features of the calcium channels affect
the probability of release: the number and type of voltage-gated calcium channels at the
pre-synaptic terminal, and the spacing between docking sites and calcium channels together with
the extracellular calcium concentration (see [Atwood and Karunanithi, 2002] for a review).
On the other hand, variations of temperature have very little effect on the release probability
[Allen and Stevens, 1994]. Despite many experimental observations of release failure in cultured
neurons or in slice preparations, the probabilistic nature of transmission at central synapses
still needs to be shown in vivo.
1.2.2 Short term depression
At many synapses, pre-synaptic activity dynamically affects synaptic strength (see e.g.
[Magelby, 1987, Fisher et al., 1997, Zucker, 1989, Zador and Dobrunz, 1997] or [von Gers-
dorff and Borst, 2002, Zucker and Regehr, 2002] for recent reviews). Thus, the amplitude of
the post-synaptic response is not a static quantity but, besides fluctuations due to noise, it is
an activity dependent magnitude: recent activity can either decrease or enhance the efficacy
of the synapse. All these changes, which have a time scale ranging from milliseconds to at
most a few minutes, are referred to as short-term plasticity (STP).
Short-term depression is one of the most common expressions of these changes. At
synapses with this type of modulation, pre-synaptic activity produces a decrease in synaptic
strength [Magelby, 1987]. The most common explanation for this decrease in efficacy is
the reduction of transmitter release due to depletion of synaptic vesicles (see [Liley and
North, 1953, Hubbard, 1963, Stevens and Wang, 1995] and [Zucker, 1996, Neher, 1998,
Schneggenburger et al., 2002] for reviews). There are different models of vesicle depletion
with different degrees of physiological detail [Dobrunz and Stevens, 1997, Weis et al., 1999,
Matveev and Wang, 2000b, Trommershauser, 2000]. A common general assumption
is that, at a single active zone, only a small pool of the pre-synaptic vesicles, the readily
releasable pool (RRP) [Rosenmund and Stevens, 1996], is prepared to be immediately
released upon a sudden increase of [Ca2+]i produced by the arrival of an AP. The size of
this functional pool (i.e. the RRP) correlates with the morphological definition of the pool
of docked vesicles [Schikorski and Stevens, 2001], i.e. the set of vesicles which, when seen
through the electron microscope, appear tethered to the cell membrane2. In the micrograph
shown in fig. 1.1 the head arrows indicate what is believed to be a docked vesicle at a
hippocampal synapse. A key finding, as described by Dobrunz and Stevens, is that the
release probability pr covaries with the number of vesicles in the RRP across a population
of synapses [Dobrunz and Stevens, 1997], and at single synapses [Dobrunz, 2002]. In other
words, the more vesicles waiting ready to be released, the more likely it is that the exocytosis
of one of them takes place.
Once a pre-synaptic pulse triggers the exocytosis of one of these ready-for-release
vesicles, it takes some time until the vesicle undergoes endocytosis (the mechanism by which
it becomes separated from the plasma membrane) and is refilled with transmitter again
[Murthy and Stevens, 1998, Sun et al., 2002]. In the meanwhile, after the release, the number
of vesicles in the RRP is transiently reduced, resulting in a transitory decrease of the
transmission probability. This means that the synapse is momentarily depressed. If one lets
the contact rest for a period of time of the order of a second, that vesicle or new ones which
were already filled with transmitter but not ready for release (commonly described as belonging
to the reserve pool) may dock, making the transmission probability return to its resting value
[Dobrunz and Stevens, 1997, Matveev and Wang, 2000b].
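This depletion-and-recovery picture can be made concrete with a toy single-site simulation (a minimal sketch of my own; the function names and the parameter values U and tau_v are illustrative, not taken from the cited works): each spike releases the docked vesicle with probability U, and an empty site re-docks with time constant tau_v. For Poisson input of rate ν, a standard result for this kind of two-state model is a steady-state release rate νr = Uν/(1 + Uν τv), which saturates at 1/τv for high input rates.

```python
import math
import random

def release_times(spike_times, U, tau_v):
    """Single docking site: a spike releases the docked vesicle with
    probability U; while empty, the site re-docks with rate 1/tau_v,
    applied over each inter-spike interval."""
    docked, last_t, releases = True, 0.0, []
    for t in spike_times:
        if not docked and random.random() < 1.0 - math.exp(-(t - last_t) / tau_v):
            docked = True          # the empty site recovered during the interval
        if docked and random.random() < U:
            docked = False         # release: the site is now depleted
            releases.append(t)
        last_t = t
    return releases

def poisson_train(rate, T):
    """Homogeneous Poisson spike train of the given rate on [0, T]."""
    t, spikes = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > T:
            return spikes
        spikes.append(t)

random.seed(0)
U, tau_v, T = 0.5, 0.8, 500.0      # illustrative values (times in seconds)
for nu in (5.0, 50.0):
    sim = len(release_times(poisson_train(nu, T), U, tau_v)) / T
    theory = U * nu / (1.0 + U * nu * tau_v)
    print(f"input {nu:>4.0f} Hz -> release rate {sim:.2f} Hz (theory {theory:.2f} Hz)")
```

A tenfold increase of the input rate barely changes the release rate, which stays below 1/τv = 1.25 Hz: this is the saturation that limits the range of rates a depressing synapse can transmit.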
Little is known about how the RRP recovers. In most cases it has been observed that
the transmission probability recovers following a single exponential at both hippocampal
[Stevens and Tsujimoto, 1994, Dobrunz and Stevens, 1997] and neocortical synapses
[Markram et al., 1998a, Wang and Kaczmarek, 1998, Finnerty et al., 1999, Varela et al.,
1999, Petersen, 2002], with a time constant which ranges between half a second and several
seconds. Sometimes two exponentials have been used to fit a fast recovery (∼0.5 s) along
with a slower recovery (∼5–10 s) [Varela et al., 1997]. Although in the first generation
of models recovery was described as an activity-independent process, several findings
have shown that [Ca2+]i affects the rate at which vesicles become primed, i.e. the rate at
which the RRP is refilled [Dittman and Regehr, 1998]. Since [Ca2+]i is basically modulated
by pre-synaptic activity, some works have experimentally shown that the stimulus input rate
affects the recovery rate [Wang and Kaczmarek, 1998, Stevens and Wesseling, 1998, Neher
and Sakaba, 2001]. For that reason, some models of vesicle depletion have included this
calcium dependence of the recovery rate in the vesicle dynamics [Weis et al., 1999, Dittman
et al., 2000].

2 More specifically, the RRP is believed to be a subset of the docked pool, i.e. being docked is a necessary
but not sufficient condition to be ready to go [Atwood and Karunanithi, 2002]. More sophisticated versions
of vesicle turnover models [Matveev and Wang, 2000b, Trommershauser, 2000] define a third pool of vesicles
(besides the RRP and the reserve pool), usually called the primed or pre-primed pool. Vesicles in the
reserve pool first approach the membrane and “dock”. In a subsequent step, they are primed (the configuration
of the proteins which hold the vesicle attached to the membrane changes), so that the vesicle is then ready to be
fused when [Ca2+]i increases. While the RRP has been estimated to comprise about a dozen vesicles, the
primed pool is supposed to be composed of only one or two vesicles (see [Zucker, 1996, Zenisek et al., 2000]
for biophysical details and [Weis et al., 1999, Matveev and Wang, 2000b] for modeling). However, due to
analytical calculation constraints, we will not make that distinction in this thesis. Therefore, the terms docked
and primed will denote the vesicles in the RRP.
There are, however, other underlying mechanisms apart from vesicle depletion, which
could also be involved in short-term depression [Dobrunz et al., 1997, Hanse and Gustafsson,
2002, Zucker and Regehr, 2002]. On the pre-synaptic side, the release of transmitter into
the synaptic cleft can lead to the activation of auto-receptors, which are usually inhibitory. This
would result in a negative feedback and a reduction of future transmitter release [Zucker and
Regehr, 2002]. Other mechanisms have been proposed, such as the so-called inactivation of the
exocytosis machinery [Hsu et al., 1996, Dobrunz et al., 1997], which differs from the previous
ones in that the probability of vesicle fusion diminishes after an
exocytosis event even when [Ca2+]i is maintained at a high concentration [Hsu et al., 1996].
On the post-synaptic side of the synapse, the desensitization of post-synaptic receptors can
also lead to a use-dependent decrease of the response (see [Jones and Westbrook, 1996] for
a review).
1.2.3 Facilitation and other mechanisms
Short-term facilitation is one of the mechanisms by which pre-synaptic activity produces
an enhancement of the synaptic response [Fisher et al., 1997, Zucker and Regehr, 2002]. It
has a time scale of hundreds of milliseconds. One common way of assessing facilitation is
paired-pulse facilitation, which occurs when the synapse is stimulated with a pair of consecutive
pulses and the response amplitude to the second is larger than the response to the
first. If one separates the two pulses by a large enough time window (several hundreds of
milliseconds) the effect disappears, and the response to both pulses is equal on average. An
AP may produce facilitation at some synapses, regardless of whether vesicle release occurs
or not [del Castillo and Katz, 1954b]. The effect is believed to happen at some point after
the AP arrival and before transmitter release. The biophysical substrate of facilitation
is thought to be the existence of residual calcium [Katz and Miledi, 1968]: after an action
potential invades the terminal, a calcium influx elevates the concentration [Ca2+]i which in
turn may produce vesicle exocytosis. If a second AP reaches the terminal before [Ca2+]i has
returned to normal levels, the new incoming calcium will add to the residual one, resulting
in an enhancement of the probability of release [Zucker, 1996].
Facilitation and depression effects interact: if facilitation produces an increase of release
probability, but the number of releasable vesicles is small, depletion of the vesicle pool is
faster. Thus the train of responses elicited by repetitive stimulation of a facilitating synapse
may show, first, an enhancement of the amplitude of the first EPSCs due to facilitation,
and later, a decrease due to short-term depression, perhaps produced by vesicle depletion
[Dittman et al., 2000, Gupta et al., 2000].
Other forms of synaptic enhancement with larger time scales are the so-called augmentation
and post-tetanic potentiation [Zucker and Regehr, 2002]. In these processes each AP
may produce an increase of synaptic strength of only 1–15%. However, since they last for
5–10 seconds in the case of augmentation, and from 30 seconds to several minutes in the case
of post-tetanic potentiation, during prolonged periods of high pre-synaptic activity
their effects can be important, and even relevant in working memory tasks [Hempel et al.,
2000].
1.2.4 Univesicular release hypothesis
So far, we have not discussed the exact relation between the number of vesicles in the
RRP and the probability of release. In particular, assuming that at a given active zone sev-
eral vesicles are ready for release, how many of them can undergo exocytosis upon arrival
of an AP? Given the observation that at individual active zones in synapses onto spinal mo-
toneurons the response amplitude has little variability, Redman and colleagues proposed that
synaptic transmission occurs in an all-or-none manner [Edwards et al., 1976]. This low
variability can be due either to the saturation of the post-synaptic receptors (which would
already be produced by the content of a single vesicle) or to the occurrence of at most one
release per active zone (see e.g. [Triller and Korn, 1982] or [Redman, 1990, Walmsley et al.,
1998] for review). This last possibility gained increasing support by further experiments
[Triller and Korn, 1982, Stevens and Wang, 1995] and has become a commonly used hy-
pothesis. A plausible explanation is the following: after one vesicle undergoes exocytosis,
the energy barrier of fusion for the rest of the primed vesicles increases, resulting in an effective
lateral inhibition that prevents them from being released. After a short period of time (∼5 ms)
the fusion energy returns to its resting value. By then, however, the calcium concentration [Ca2+]i has
already decreased to its stationary value and no more vesicles undergo exocytosis until a new
AP arrives [Stevens and Wang, 1995, Dobrunz et al., 1997].
The univesicular release hypothesis does not imply, however, that multi-modal response
histograms cannot occur. This kind of histogram would appear if the synaptic connection
consists of several synapses, or if at each synapse there are several specializations (since the
hypothesis only states that at each active zone at most one release may occur, with complete
independence between active zones). Thus, this hypothesis leads to an identification of
the morphological concept of active zone (or synaptic specialization) with the functional
definition of release site.
On the other hand there is evidence that multi-vesicular release may occur at some
synapses (usually under high release probability conditions), so that the homogeneity in the
response amplitude should be attributed to receptor saturation [Tong and Jahr, 1994, Auger
et al., 1998, Oertner et al., 2002].
In this thesis, except when explicitly specified, the uni-vesicular release hypothesis will
be assumed. Hence, we will use the terms synaptic specialization, active zone
and release site. We will reserve the term synaptic connection to refer to the whole set of
specializations established by two neurons.
1.2.5 Synaptic diversity
A third important aspect is the huge diversity that exists among synapses in the nervous
system (see e.g. [Atwood and Karunanithi, 2002] for a recent review). The first notable
difference between synaptic connections in the brain is their size and the number of contacts
they have [Walmsley et al., 1998]: while some connections in the nervous system have
hundreds of contacts (usually in the peripheral nervous system like in the neuromuscular
junctions, or in the brain-stem like the giant “end-bulbs of Held”, etc.), in the cortex neurons
are commonly connected by a few small boutons (5−10 [Markram et al., 1997a]), sometimes
only one [Sorra and Harris, 1993], containing usually a single specialization. Furthermore,
specializations vary greatly in shape and size. Since the larger the active zone the more dock-
ing sites it has, larger active zones exhibit a larger probability of release, while small ones
are more unreliable [Walmsley, 1991, Ryugo et al., 1996, Schikorski and Stevens, 1997].
This variability in size may partially explain the heterogeneity in release probability among
release sites from the same pre-synaptic fiber [Rosenmund et al., 1993, Hessler et al., 1993,
Murthy et al., 1997]. Nevertheless this cannot be the only factor, since, as it has been shown
in neocortical slices, this probability is subject to pairing plasticity [Markram and Tsodyks,
1996a].
Other forms of synaptic differentiation arise from the diversity in the expression of activity-dependent
mechanisms like short-term depression or facilitation (see sections 1.2.2 and 1.2.3
above). The first interesting observation is that different cells innervating a given neuron
evoke different post-synaptic responses. For instance, climbing fibers innervating a given
cerebellar Purkinje cell elicit a depressing response, while parallel fibers innervating the
same cell elicit a facilitating response [Dittman et al., 2000]. This type of diversity is even
more prominent in interneuron synapses [Gupta et al., 2000]. On the other hand, different
branches of the same axon may contact different post-synaptic cells showing a variety of
distinct characteristics at each bouton [Markram et al., 1998a].
In the first part of this thesis we will be concerned with the heterogeneity of the release
probability of a population of synapses innervating a given neuron and the functional impli-
cations of this variability. The functional relevance of the number of contacts constituting a
connection will also be assessed in the second half of this work.
1.3 Objectives and overview of this work
During the last years there has been a strong interest in the functional relevance of the
different synaptic properties, such as unreliability [Zador, 1998, Levy and Baxter, 2002,
Senn et al., 2002a], short-term depression [Tsodyks and Markram, 1997, Abbott et al., 1997,
O’Donovan and Rinzel, 1997, Chance et al., 1998b, Senn et al., 1998, Adorjan and Ober-
mayer, 1999, Matveev and Wang, 2000b, Natschlager et al., 2001, Senn et al., 2002b, Gold-
man et al., 2002], facilitation [Lisman, 1997, Matveev and Wang, 2000a, Fuhrmann et al.,
2002] and heterogeneity [Markram and Tsodyks, 1996b, Markram et al., 1998b]. However,
there are still many open questions regarding how these synaptic features influence the computational
capacity of neurons. The objective of this thesis is to investigate the effect
of short-term depression and unreliability on, firstly, the transmission of information from
the pre- to the post-synaptic neuron and, secondly, the computational capabilities of neurons.
In particular we want to study how this kind of synapse transforms auto-correlations
in the pre-synaptic spike trains and whether this transformation can lead to an efficient
representation of the input rate in the synaptic responses. We would like to test whether the
optimal values of the synaptic parameters, such as the release probability or the recovery
time constant, which maximize the information transfer, coincide with those observed in the
experiments. This optimization can be achieved in a number of different ways depending on
the choice of the information measure (e.g. information transmission, parameter estimation,
information per unit time, per response, taking into account metabolic constraints). Thus,
we will check which are the optimal values of these parameters under different optimization
hypotheses, and whether the experimental data validate one over the others.
We also want to assess the functional relevance of the synaptic diversity by computing the
information transmitted through a heterogeneous population of synapses, and comparing
its performance under different distributions.
Furthermore, we will compute the response firing rate of a leaky integrate-and-fire neuron
(LIF) when synapses are modeled as depressing stochastic channels with an arbitrary number
of contacts. We will analytically compute the transfer function when the pre-synaptic neu-
rons fire synchronously, and make numerical simulations of a LIF spike generator to validate
the analytical predictions. We will study how the synchronous firing of a pre-synaptic popu-
lation of neurons results in a non-monotonic behavior of the fluctuations of the input current,
which eventually decrease as synaptic saturation takes place. This provides a mechanism to
obtain a non-monotonic transfer function if the target cell works in a regime driven by the
current fluctuations. It will be shown that, in this regime, the response of the cell conveys
information about the input rate beyond the occurrence of saturation of the mean current.
The quantification of the information transmission in a model which considers these
synaptic features will help to elucidate whether their existence improves or worsens the
transmission of messages between cells. For example, it may explain why synapses are not
failure-free channels. Moreover, a theoretical approach to this issue provides a framework to
investigate what sort of computations neurons can, or cannot, perform when a certain realistic
biological property is considered. For instance, the fact that the release of neurotransmitter
often occurs in an asynchronous manner with respect to the arrival of the AP [Hagler and
Goda, 2001], may constitute a strong limitation to the hypothesis that the precise time of
each AP conveys information. In addition, asynchronous release would impose severe
limitations on a coordinated code in which, for instance, the synchronous arrival of APs
constitutes a coding strategy.
The thesis is organized as follows: In chapter 2 a model of synaptic transmission is presented.
It consists of a stochastic model of vesicle turnover, with which we can analytically
compute the statistical properties of the responses. The output statistics are calculated for
input spike trains with auto-correlations. The model considers only a pre-synaptic terminal
with a single specialization which has several docking sites. Vesicle dynamics are stochastic
and release is unreliable. At the end of the chapter, a model of the heterogeneity found in
hippocampal synapses of CA3-CA1 connections is proposed.
In chapter 3, using the response statistics of a single mono-synaptic connection calculated
in chapter 2, we compute the information they convey about the input firing rate for a general
family of correlated inputs. In particular, we are interested in finding optimal values for the
synaptic parameters which maximize the information transmission and estimation of input
parameters. At the level of a population of inputs, an optimization is performed over the
distribution of values of the synaptic parameters.
In chapter 4 we compute the synaptic current produced by an ensemble of pre-synaptic
cells onto a target neuron when the connections between them are formed by an array of
synaptic contacts (or specializations) individually modeled as in chapter 2. Besides, cross-
correlations among pre-synaptic units are included in a simple ad hoc manner.
In chapter 5 we compute the output of a leaky integrate-and-fire neuron (both numerically
and analytically) when the total current obtained in the previous chapter is injected into it.
We also study the effect on the output of the spatial correlations introduced either by multi-
contact connections or by synchronous firing.
The conclusions of the work are presented at the end of chapters 3 and 5, where a brief
discussion is also laid out.
Chapter 2
A model of synaptic depression and
unreliability
2.1 Experimental motivation of the model
As previously introduced in chapter 1, synaptic transmission is a complex process which
occurs in an unreliable way (see section 1.2.1 and references [Hessler et al., 1993, Rosenmund
et al., 1993, Allen and Stevens, 1994, Stevens and Wang, 1994]). It also involves rapid
(from ∼100 ms to several seconds) use-dependent mechanisms which are generally referred
to as short-term plasticity [von Gersdorff and Borst, 2002, Zucker and Regehr, 2002].
In this thesis we will focus on short-term synaptic depression [Magleby, 1987] and on the
stochastic nature of synaptic transmission. We propose a model of vesicle depletion, where
release occurs in an unreliable way, mainly based on experiments performed on pyramidal
cell synapses from mouse hippocampal slices (or cultured hippocampal cells) by Stevens, Dobrunz
and colleagues [Stevens and Wang, 1994, Murthy et al., 1997, Dobrunz and Stevens,
1997, Dobrunz et al., 1997, Dobrunz, 2002] and by Hanse & Gustafsson [Hanse and Gustafsson,
2001a,b]. These works have studied the connections between CA3 and CA1, which are
believed to be mono-synaptic, with a single active zone in most cases (around 70–90%)
[Sorra and Harris, 1993, Schikorski and Stevens, 1997]. This allows the experimentalist to
measure the properties of an individual active zone, something usually difficult in neocortical
synapses, where the number of synapses between two cells is usually larger than one
[Markram et al., 1997a, Gupta et al., 2000]. Using the method of minimal stimulation1 they
1The method of minimal stimulation [Raastad et al., 1992, Allen and Stevens, 1994, Dobrunz and Stevens,
1997] consists of applying a stimulus in a given pathway and recording intracellularly from a neuron to which
those fibers project. Because the stimulation is extracellular it is not possible to tell how many axons are being
excited. However, if the stimulus is gradually decreased until a final “step” is reached before the response in
extracellularly activate a putative synapse and record the synaptic currents produced. Stim-
ulating with a single pulse they can detect, with great accuracy, whether a vesicle release
occurs and, repeating the stimulation over many trials, compute the probability of release.
In addition, they can deplete the readily releasable pool (RRP) using a high frequency train,
while counting the releases which take place until complete depletion is achieved. This
number gives the size of the RRP. The first important conclusion of their early experiments
was that the release probability pr covaries with the number of docked vesicles [Schikorski and
Stevens, 1997] across a population of synapses. In other words, Schikorski and Stevens
[1997] found that those synapses which optically showed more vesicles adjacent to the cell
membrane had a higher release probability pr. The mean number of visualized docked vesicles
was ∼10. Dobrunz and Stevens [1997], using physiological methods, found that the
release probability correlated with the number of vesicles N in the RRP across a population
of synapses. They proposed a functional relation for the release probability of an individual
synapse,pr(N), which approximately fitted the data obtained from a population of synapses
[Dobrunz and Stevens, 1997, Murthy et al., 2001]. A possible explanation of the results is
the following: if all readily releasable vesicles have the same and independent probability of
undergoing exocytosis, pv, but only one of them can release its content (see section 1.2.4),
the release probability will be one minus the probability that all of them fail to fuse:
pr(N) = 1 − (1 − pv)^N    (2.1.1)
This expression is non-linear in N and tends to one as the number of releasable vesicles is
increased. Recently Dobrunz [2002] confirmed that this relation also holds for a single synapse
(and not only across a population). She also showed that pv should be the average of the
individual fusion probabilities, in the likely case that they are heterogeneous. In addition, in
facilitating synapses, residual calcium may increase pv, and an activity-independent description
of the function pr(N) would no longer be valid. Likewise, if the distribution of fusion
probabilities pv were bimodal (as in the calyx synapses [Wu and Borst, 1999, Neher and
Sakaba, 2001, Schneggenburger et al., 2002], where a subset of the RRP has a higher fusion
probability than the rest of the pool), a description of the release probabilities with a single
function pr(N) would not apply. This partition of the RRP into two pools also seems in
order [Matveev and Wang, 2000b] if one wants to obtain the large paired-pulse depression
(defined as the ratio between the magnitude of the averaged response of the second and the
the voltage of the target cell disappears, it is likely that at that level of stimulation only a single fiber is being
excited, in such a way that the depolarization of the whole patched cell is due to a single synapse. This method
presents several limitations over the dual whole cell recordings in which one can be sure that a single connection
(perhaps with several synapses) is mediating the excitation of the target cell [Thomson and Deuchars, 1997,
Markram et al., 1997a, Gupta et al., 2000].
first pulses) observed in neocortical slices [Markram and Tsodyks, 1996a, Varela et al., 1997,
Thomson, 1997] and in hippocampal slices [Hanse and Gustafsson, 2001a] (see [Trommer-
shauser, 2000, Matveev and Wang, 2000b] for examples of this specialized pool model and
[Schneggenburger et al., 2002] for a recent review on calyx synapses).
In our work we will not consider these extensions of the single RRP model [Dobrunz
and Stevens, 1997]. Thus, our model will be an adaptation of the one proposed by Dobrunz
and Stevens with a single RRP, which we will also call primed pool or docked pool. We will
not consider facilitation or any other activity-dependent changes, because our purpose is to
analyze the effects that depression alone has on the transmission of information.
2.2 Methods
2.2.1 Model of one synaptic contact with many docking sites
The model of a single active zone, or synaptic contact, is completely described by: i) the
release probability function pr(N), which gives the probability that one vesicle releases its
content when there are N of them in the RRP; and ii) the function Prec(n, t|N), which defines
the probability that any n out of N empty docking sites are refilled by new vesicles during a
time window of length t.
2.2.1.1 Vesicle Release
The experimental data about the probability of release pr for different synapses was
approximately fitted with the function pr(N) defined by equation 2.1.1 (see fig. 5 in [Dobrunz
and Stevens, 1997]). This function of N shows saturation and a non-linear behavior. In
order to make further calculations analytically tractable, we have adapted this exponential
dependence, capturing both aspects, non-linearity and saturation, into a simpler new function
which reads

pr(N) = U Θ(N − Nth),    N = 0 . . . N0    (2.2.1)

where Θ(N) is the Heaviside step function. Thus, if the RRP size N is smaller than a certain
threshold value Nth, the probability of release is zero, and when the number of vesicles in
the pool exceeds Nth, the probability becomes U. The last parameter, N0, sets the maximum
number of vesicles that the RRP can hold, which in our description represents the number of
morphologically separate docking sites that together constitute the synaptic contact2. Thus,
2The question of whether the number of morphological vesicle docking sites constitutes the maximum size
of the RRP or, on the contrary, the latter pool is a functional subset of the first (see [Schikorski and Stevens,
Figure 2.1: Release probability pr as a function of the number of vesicles N in the RRP: points
are experimental data of the release probability to the first pulse of a burst, from different
CA3-CA1 hippocampal synapses (taken from [Dobrunz and Stevens, 1997]). The dashed line is
the fit with the exponential model by Dobrunz & Stevens (1997) (see eq. 2.1.1). The solid black
line is an extended version of that model (see [Dobrunz and Stevens, 1997]). The solid red line
represents the step-like model used in this thesis, with the parameters Nth = 5, N0 = 13 and
U = 0.7.
depletion of vesicles below the threshold level Nth will produce depression in the synapse,
because there will be a time interval, before more vesicles join the RRP, during which the
release probability is zero. Figure 2.1 shows the experimental data taken from fig. 5 of
[Dobrunz and Stevens, 1997] together with our step-like release probability function. Although
this function is an oversimplified model, the figure illustrates that it accounts for the
qualitative behavior of the data.
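The step-like function of eq. 2.2.1 is trivial to implement; a minimal sketch with the parameter values of fig. 2.1, assuming the convention Θ(0) = 1, i.e. release is already enabled at N = Nth:

```python
def p_r_step(N, U=0.7, N_th=5, N_0=13):
    """Step-like release probability of eq. (2.2.1): zero below the
    threshold N_th, constant U at or above it (convention Theta(0) = 1)."""
    if not 0 <= N <= N_0:
        raise ValueError("N must lie between 0 and N_0")
    return U if N >= N_th else 0.0
```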
2.2.1.2 Vesicles Recovery
Besides the dependence of the release probability on the number of readily releasable
vesicles, one must define the stochastic dynamics of vesicle recovery. Much less is
known about the recovery of vesicles than about their release. The only experimental data
2001]) is beyond the aim of this model. However, even though in some synapses docked vesicles must undergo
a priming step before they become ready for fusion [Sudholf, 2000], in hippocampal synapses it has been
shown that the number of visually docked vesicles is roughly the same as the number of vesicles in the RRP
[Schikorski and Stevens, 2001, Murthy et al., 2001].
Figure 2.2: Schematic picture of the synaptic dynamics seen as a system composed of two
pools of vesicles: the available pool represents the RRP, and the unavailable pool represents
the set of N0 − N empty docking sites. An action potential evokes the release of a vesicle with
probability pr(N), shown in the picture as a transition of a vesicle from the available to the
unavailable pool. The recovery of the vesicles takes place with a rate (N0 − N)/τv, i.e. each
empty docking site recovers with a constant rate 1/τv.
is that the release probability recovers exponentially [Dobrunz and Stevens, 1997, Markram
and Tsodyks, 1996a]. Given this constraint, we model the recovery from the point of view
of the structural docking sites instead of from the vesicles' perspective. Hence, we assume
that each of the N0 − N empty docking sites may be filled at any time with a constant
and homogeneous rate 1/τv, which means that the amount of time that any docking
site stays empty is an independent random variable, exponentially distributed, with mean
τv. This type of dynamics implies three things: i) since the single-site recovery rate is not
affected by the number of empty sites, the number of vesicles in the recovery pool (vesicles
filled with transmitter but not docked) should be much larger3 than N0; ii) the recovery rate
1/τv is independent of pre-synaptic activity; iii) docking sites are equally distributed across
the synaptic bouton, i.e. there is no preference to refill certain empty sites over others.
Because we have assumed that the refilling of each empty site is an independent process, we
can now easily write the probability that n, out of the N0 − N empty docking sites, recover
within a time window ∆ as:
Prec(n, ∆ | N0 − N) = ( (N0 − N) choose n ) (1 − e^{−∆/τv})^n (e^{−∆/τv})^{N0−N−n}    (2.2.2)
3This is a good approximation because the number of docking sites is around 5, while the total number of
vesicles in the synaptic bouton is about 200 in hippocampal synapses [Schikorski and Stevens, 1997].
which is just a binomial distribution with parameters N0 − N and (1 − e^{−∆/τv}). This second
parameter is the probability that a vesicle fills a given empty site within the time interval ∆.
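Eq. 2.2.2 can be evaluated directly; the sketch below also checks that it is a properly normalized distribution (the window ∆ = 0.1 s, τv = 0.5 s and the choice of 8 empty sites are illustrative values, not taken from the data):

```python
from math import comb, exp

def p_rec(n, delta, n_empty, tau_v):
    """Probability that n out of n_empty docking sites are refilled
    within a window of length delta (eq. 2.2.2): a binomial with
    per-site filling probability 1 - exp(-delta/tau_v)."""
    p_fill = 1.0 - exp(-delta / tau_v)
    return comb(n_empty, n) * p_fill ** n * (1.0 - p_fill) ** (n_empty - n)

# Illustrative values: 8 empty sites, tau_v = 0.5 s, window of 0.1 s
probs = [p_rec(n, 0.1, 8, 0.5) for n in range(9)]
```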
Notice that when we set N0 = 1 in this model, that is, when the maximum
number of primed vesicles is one, we recover the model introduced by Senn et al. [2001] and
later used in several papers [Goldman et al., 2002, Fuhrmann et al., 2002].
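Putting release (eq. 2.2.1) and recovery together, a minimal Monte Carlo sketch of a single contact driven by a regular 100 Hz train is given below. The per-step refill probability dt/τv is the small-dt limit of eq. 2.2.2, and all parameter values are illustrative, not fits to the data:

```python
import random

def simulate_contact(spike_steps, n_steps, dt=1e-3,
                     U=0.7, N_th=5, N_0=13, tau_v=0.5, seed=0):
    """Simulate one synaptic contact: at each input spike a vesicle is
    released with probability U if N >= N_th (eq. 2.2.1); each of the
    N_0 - N empty docking sites refills independently with probability
    dt/tau_v per time step (the small-dt limit of eq. 2.2.2)."""
    rng = random.Random(seed)
    N = N_0                                  # start with a full RRP
    release_steps = []
    for step in range(n_steps):
        # each empty docking site may recover in this time step
        N += sum(1 for _ in range(N_0 - N) if rng.random() < dt / tau_v)
        if step in spike_steps and N >= N_th and rng.random() < U:
            N -= 1                           # univesicular release
            release_steps.append(step)
    return release_steps

# Regular 100 Hz train for 2 s (one spike every 10 steps of 1 ms)
spikes = set(range(0, 2000, 10))
rel = simulate_contact(spikes, 2000)
early = sum(1 for s in rel if s < 200)       # releases in first 0.2 s
late = sum(1 for s in rel if s >= 1800)      # releases in last 0.2 s
# depression: fewer releases once the RRP has been depleted below N_th
```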
2.2.2 Input spike-train statistics
Our first assumption will be that the generation of action potentials (AP) by cortical
neurons can be modeled as random events, that is, a spike train can be treated as a stochastic
point process. This hypothesis is based on the experimental observation that cortical neurons
in vivo fire irregularly [Softky and Koch, 1993], seemingly randomly. The consideration of
the AP as a point event is a good approximation if one takes into account the stereotyped
shape they present and their very short duration (∼ 1 ms) in comparison with the rest of the
temporal scales we will consider in our model.
The statistics of these spike trains in the brain are often well approximated by renewal
processes [van Vreeswijk, 2000], which means that consecutive inter-spike intervals (ISIs)
can be considered as samples of a random variable, independently drawn from a
given probability distribution function (p.d.f.) ρisi(t) [Cox, 1962]. A Poisson process is the
most typical example of a renewal process; its ISIs follow an exponential distribution.
Nevertheless, the fact that the correlation between consecutive ISIs is zero does not imply
that the correlations between spike times are zero. The occurrence of a spike may increase
(or decrease) the probability of observing another one immediately afterwards, with respect
to the case in which nothing is known about the previous activity. The Poisson process has
no memory in this sense, that is, previous spikes do not convey any information about what
is coming next.
Positive autocorrelations among spike times have often been observed in in vivo experiments
[Mandl, 1993, Zohary et al., 1994, Bair et al., 1994, Lisman, 1997, Fenton and
Muller, 1998, Matveev and Wang, 2000a, Goldman et al., 2002]. A common case of positive
auto-correlations are bursts, which are groups of spikes occurring in close succession. In
the hippocampus, for instance, pyramidal cells fire what are called complex spikes [Kandel and
Spencer, 1991, Ranck, 1973], which are brief bursts of two to nine spikes with ISIs ranging
from 2 to 10 ms. Positive autocorrelations can express a redundant coding of sensory (or
other kinds of) information [Dan et al., 1996, Baddeley et al., 1997]. Since the information
coming from the sensory world is very redundant in time [Dan et al., 1996], neuronal activity
in sensory areas may show autocorrelations arising from the encoding of such redundancy.
Besides, sensory acquisition mechanisms such as saccadic eye movements, whisking, etc.
may also produce autocorrelations [Goldman et al., 2002]. Thus, positive autocorrelations
have been observed in the visual areas of awake monkeys such as V1 [Matveev and Wang,
2000a, Goldman et al., 2002] and MT [Bair et al., 1994]. Autocorrelations have also been
measured in other higher processing areas such as prefrontal cortex [Matveev and Wang,
2000a].
We will consider input trains governed by stationary renewal processes, that is, ρisi(t)
will not change in time, with non-zero autocorrelations between the spike times. For the sake
of simplicity, we assume that this auto-correlation is exponential, which will turn out to
allow an accessible analytical treatment. As shown in Appendix A, any renewal process with
exponential correlations has an ISI distribution which can be written as

ρisi(t) = (1 − ε) β1 e^{−β1 t} + ε β2 e^{−β2 t} ,    t > 0    (2.2.3)
This function is normalized by construction if β1, β2 > 0. In order to be a p.d.f. it must also
be non-negative, a condition which imposes two more constraints on the values that the triplet
[β1, β2, ε] can take (see Appendix A). One can also parameterize this input in terms of three
physical quantities: the spike rate ν, defined as the inverse of the mean ISI; the coefficient
of variation of the ISIs, CVisi, defined as the ratio of the standard deviation to the mean
of the ISI; and the time scale of the correlations τc, which is the characteristic time of the
exponential decay. The dependence of [ν, CVisi, τc] on [β1, β2, ε] follows a complex mapping
which is not one-to-one, due to the symmetry under the exchange β1 ↔ β2, ε ↔ 1 − ε (see Appendix
A).
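When 0 < ε < 1 and β1, β2 > 0 (one admissible region; the general constraints are those of Appendix A), eq. 2.2.3 is a proper mixture of two exponentials and can be sampled directly. The parameter values below are arbitrary illustrations, not values used in the thesis:

```python
import random

def sample_isi(beta1, beta2, eps, rng):
    """Draw one ISI from eq. (2.2.3), assuming 0 < eps < 1 so that the
    density is a two-component exponential mixture: with probability
    1 - eps the ISI has rate beta1, otherwise rate beta2."""
    beta = beta2 if rng.random() < eps else beta1
    return rng.expovariate(beta)

rng = random.Random(1)
# Illustrative parameters: the analytical mean ISI is
# (1 - eps)/beta1 + eps/beta2 = 0.7/20 + 0.3/5 = 0.095 s
isis = [sample_isi(20.0, 5.0, 0.3, rng) for _ in range(200_000)]
mean_isi = sum(isis) / len(isis)
second_moment = sum(t * t for t in isis) / len(isis)
cv_isi = (second_moment - mean_isi ** 2) ** 0.5 / mean_isi
# for such a mixture CV_isi > 1, i.e. positive correlations (eq. 2.2.6)
```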
Let us now introduce the two-point correlation function C(t, t′), defined as the probability
density of finding one event (e.g. a spike or a release) at time t and another one at time
t′. For a stationary process this function only depends on the time difference, i.e.
C(t, t′) = C(t′ − t). For a renewal process it can be computed using the following expression:
C(t − t′) = C(t) = r [ δ(t) + ρ(t) + ∫_0^t dx ρ(x) ρ(t − x) + ∫_0^t dx ∫_0^t dy ρ(x) ρ(y) ρ(t − x − y) + . . . ]    (2.2.4)

Here, ρ(t) denotes the p.d.f. of either the ISIs (ρisi) or the time intervals between synaptic
responses/releases, which we will refer to as inter-response intervals or IRIs (ρiri). For this
expression to be well defined, we need to define ρ(t) as zero for t < 0. The magnitude r
is the constant event rate (ν for spikes and νr for releases). Each of the terms of the sum
represents the probability density of finding two events, at time zero and at time t, when
there have been zero, one, two, ... event-intervals between them, respectively. We now take the
connected part of this correlation, normalize it by r and, assuming t > 0, discard the delta function.
20 Chapter 2: A model of synaptic depression and unreliability
The result reads
Cc(t) = (C(t) − r²) / r   (2.2.5)

This new function Cc(t) is often called the connected conditional rate: added to the rate r, it gives the frequency of events at time t given that there was one at time zero. However, by abuse of terminology, we shall also refer to Cc(t) as the correlation function, because it represents how much more probable it is to find an event at time t, given that there was one at time zero, in comparison to the uncorrelated case.
We now come back to the input model introduced above (eq. 2.2.3) and compute, by means of the Laplace transform, the correlation Cc(t) (Appendix A). Written as a function of the physical parameters [ν, CVisi, τc], it takes the simple form

Cc(t) = ((CV²isi − 1) / (2τc)) e^{−t/τc}   (2.2.6)
It is now straightforward to describe this correlation in terms of the physical parameters. In the first place, the correlation does not depend on the spike rate ν (because it is connected). The value of CVisi determines the total area under the correlation function: if CVisi > 1 the function is positive for all t, and negative for CVisi < 1. At CVisi = 1 the correlation becomes zero for all t and one recovers exactly a Poisson process⁴. Finally, τc determines the temporal extent of the correlation. We stress again that in the three-dimensional [ν, CVisi, τc] space, the admissible region is not the whole subspace determined by ν > 0, CVisi > 0, τc > 0. If CVisi ≥ 1 (positive correlation) then any value of ν > 0 and τc > 0 is allowed. However, in the interval 0 < CVisi < 1 the condition τc > (1 − CV²isi)/ν must hold to ensure that ρisi is a p.d.f. (see Appendix A).
As previously mentioned, we are mostly interested in positive input autocorrelations, that is CVisi > 1. In this regime 0 < ε < 1 and it is possible to interpret the input process ρisi(t) as a combination of two Poisson processes with rates β1 and β2. At each ISI realization one of these two underlying Poisson processes is chosen, with probabilities (1 − ε) and ε respectively. Fig. 2.3 shows some input examples for different values of [ν, CVisi, τc]. It can be observed that, for small values of τc (∼ 10 ms) and large values of CVisi (∼ 2), the train shows a bursty structure. In fact, a third description of the input as a burst generator is possible⁵: if β2 ∼ 100−500 Hz and β1 ≪ β2, then, following the description in terms of the two Poisson processes, it can be shown that the mean number of spikes within a burst, Nb, reads

Nb = ε/(1 − ε)² + 1 ≃ (CV⁴isi − 1)/4 + 1   (2.2.7)
⁴In this limit, one of the exponentials of the ISI distribution ρisi(t) vanishes because ε becomes either one or zero, resulting in a Poisson process with rate β2 or β1, respectively.
⁵Although a renewal process may seem incompatible with a process of bursts, if the notion of burst is generalized by assuming a large variability in the number of AP's within it, then the description is conceivable.
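The accuracy of the CV approximation in eq. 2.2.7 can be checked directly from the exact moments of the mixture. A small numerical sketch (illustrative parameters with β1 ≪ β2):

```python
# compare the exact burst size of eq. 2.2.7 with its CV-based approximation
beta1, beta2, eps = 0.5, 200.0, 0.5            # Hz, Hz, mixing weight (illustrative)

# exact first two moments of the ISI mixture (eq. 2.2.3)
m1 = (1 - eps) / beta1 + eps / beta2
m2 = 2 * (1 - eps) / beta1**2 + 2 * eps / beta2**2
cv2 = m2 / m1**2 - 1                            # squared CV of the ISIs

nb_exact = eps / (1 - eps) ** 2 + 1             # eq. 2.2.7, exact expression
nb_approx = (cv2**2 - 1) / 4 + 1                # eq. 2.2.7, CV approximation
print(nb_exact, nb_approx)
```

For this parameter set the two expressions agree to about one percent, and the agreement improves as β1/β2 decreases.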
2.2. Methods 21
Figure 2.3: Examples of input spike trains with exponential autocorrelations. Top left: distribution of ISI's given the rate ν, ρisi(t|ν). Top right: connected conditional rate (also called correlation function, given by eq. 2.2.6). Bottom plots: examples of spike train realizations; vertical bars represent the position of the action potentials on a temporal axis. Three values of CVisi have been chosen for all plots: CVisi = 1 (green lines), CVisi = 1.3 (red lines) and CVisi = 1.8 (blue lines). It can be observed that the number of spikes within a burst increases as CVisi increases (with ν and τc kept constant). In all cases: ν = 10 Hz, τc = 10 ms.
and the mean duration of the bursts, Tb, is just the mean interval between spikes of a burst, 1/β2, times the mean number of intervals within a burst, Nb − 1:

Tb = ε / (β2 (1 − ε)²)   (2.2.8)

The third parameter is the mean interval between bursts (a burst can consist of only one AP), which is simply 1/β1. In this way, the burst generator ρisi(t) would be expressed as a function of the triplet [β1, Tb, Nb].
Our objective in the following sections will be to determine the information about the input rate ν conveyed by the synaptic responses. Thus, the firing rate ν is the coding variable of the input signal, so that it takes values from an ensemble with a certain p.d.f. that will be denoted as f(ν). To be rigorous we must therefore refine the notation of the ρisi(t) we just introduced. Hence, for a fixed value of ν we will write ρisi(t|ν), while if we want to refer to the distribution of the ISI considering the whole input ensemble we will denote it as ρisi(t), which is defined as:

ρisi(t) ≡ ∫ f(ν) ρisi(t|ν) dν   (2.2.9)
2.2.3 Statistics of the synaptic response
It is standard to consider AP's as point events because of their very short duration and because, for a single neuron, they are almost replicas of each other. On the contrary, post-synaptic potentials (PSP's) at a single neuronal membrane differ in sign, size and duration, depending on the amount of transmitter released, the number and type of post-synaptic receptors activated and, consequently, the type and number of ion channels that produced the PSP. However, if one focuses on one pair of neurons, it is sometimes the case that they are connected by only one synaptic contact⁶. If we then neglect the fluctuations in the amount of transmitter released per vesicle, and assume the uni-vesicular release hypothesis (section 1.2.4), then a given train of spikes propagating along the pre-synaptic fiber will elicit a number of synaptic releases. These will produce in turn a train of PSP's of equal amplitude which, regardless of their temporal extent, will be a decimated version of the input train. In this sense, the synapse performs a stochastic transformation of the spike trains arriving at the synaptic bouton into trains of what we generally refer to as synaptic responses. This description fails if one takes into account the fact that two neurons may be
⁶This is the case, for instance, in the previously discussed CA3-CA1 synapses in the hippocampus, where it has been shown [Dobrunz, 2002, Hanse and Gustafsson, 2001a] that when the Schaffer collateral pathway is stimulated with bursty patterns, the amplitudes of the elicited EPSCs measured in the CA1 pyramidal cell were, excluding failures, equal, reflecting the fact that one fiber makes a single synapse with only one active zone.
connected by several contacts, which in the cortex usually number between 1 and 15 [Markram et al., 1997b, Walmsley et al., 1998]. If this is the case, the PSP's produced by the connection of a single pair of neurons do not have the same magnitude, because they can be produced by the release of several vesicles. Notice that this does not imply that we are rejecting the uni-vesicular release hypothesis.
In more formal words, we are modeling the synapse as a stochastic mapping

{ti}_{i=0}^{n} −→ {tj}_{j=0}^{m}   (2.2.10)

where {ti}_{i=0}^{n} are the times of the input spikes and {tj}_{j=0}^{m} are the times of the synaptic responses, a subset of the previous set of spike times.
Since our final technical purpose is to analytically compute the information (both Fisher and Shannon information) transmitted by this synaptic channel, once the p.d.f. of the input ρisi(t) is defined we need to calculate the p.d.f. of the output. We make use here of the fact that the input is chosen to be a renewal process, so that the distribution of the train {ti}_{i=0}^{n} is completely defined by the distribution of the ISI's, ρisi(t). Thus, for a general ρisi(t) it is feasible to compute the distribution of the intervals between synaptic responses/releases, ρiri(t). Nonetheless, this last quantity is not enough to define the distribution of the responses, because the output is no longer a renewal process, except in the instance where the number of docking sites, N0, is one. The explanation is the following:
Renewal character. Let us suppose that at time t0 a release occurs. At time t0⁺ = t0 + δt there are still n vesicles in the RRP. The distribution of the time of the next release, ρiri(t − t0), depends on several random sources: first, on the times at which new spikes will reach the terminal; since the input is renewal, this dependency will be the same every time there is a release. Second, on the recovery of new vesicles to the RRP and, in turn, on its size at the time of a spike arrival. Although the recruitment of a vesicle for an empty site is again a renewal process (which even keeps no memory of the time the site became empty), the distribution of the total number of vesicles N in the RRP at time t does keep memory of how many were there at times before t0. In other words, the distribution ρiri(t − t0) would be different if we knew there were n vesicles in the RRP at time t0⁺ than if there were n + 1. If, on the contrary, there is only one docking site (N0 = 1), the RRP size immediately after a release is zero. This happens because the only vesicle site, which was occupied, has just been depleted. Therefore, there is no dependence on the number of vesicles in the RRP at t0⁺ because it is always zero. In conclusion, the resulting stochastic process that underlies the response generation is renewal if and only if N0 = 1. Only in this particular case will we be able to compute the correlation function Cr(t) of the responses using eq. 2.2.5.
We define the probability function of the size N of the RRP at time t as ℘N(t). Since our input is stationary, unless we make use of some recent information about the state of the RRP, the function ℘N(t) will not depend on t; hence we will write ℘ss(N), where ss stands for the stationary state. This distribution can be computed for a general renewal input ρisi(t|ν) and a general release probability pr(N) by following the next three steps.
• First we establish a system of N0 + 1 equations for the ℘N(t) (N = 0 … N0) at the times of two consecutive spikes (tj−1, tj) that define an ISI of duration ∆ = tj − tj−1:

℘N(tj) = ℘N(t⁺j−1) Prec(0, ∆|N0 − N) + ℘N−1(t⁺j−1) Prec(1, ∆|N0 − N + 1) + … + ℘0(t⁺j−1) Prec(N, ∆|N0)   (2.2.11)

where ℘N(t⁺j−1) = ℘N(tj−1 + δt) is the probability an instant after the (j−1)-th spike arrival, and relates to the probability just before the arrival through

℘N(t⁺j) = ℘N(tj) (1 − pr(N)) + ℘N+1(tj) pr(N + 1)   (2.2.12)

The system of equations 2.2.11 comes from the dynamics that the probabilities ℘N(t) obey over the interval ∆ between two spikes: at time tj there can be N vesicles in the RRP if, 1) there were N at time t⁺j−1 and during ∆ no new sites were recovered, or 2) there were N − 1 at t⁺j−1 and one site recovered, or 3) ..., or N+1) there were no vesicles in the RRP but N of them recovered in the interval ∆.
• We average the system 2.2.11 over the stimulus ensemble of ∆ using the p.d.f. ρisi(∆|ν):

℘N(tj) = ℘N(t⁺j−1) ⟨Prec(0, ∆|N0 − N)⟩ + ℘N−1(t⁺j−1) ⟨Prec(1, ∆|N0 − N + 1)⟩ + … + ℘0(t⁺j−1) ⟨Prec(N, ∆|N0)⟩   (2.2.13)

where the brackets ⟨·⟩ represent

⟨Prec(N, ∆|N0 − N)⟩ = ∫₀^∞ d∆ ρisi(∆|ν) Prec(N, ∆|N0 − N)   (2.2.14)
• We then substitute eq. 2.2.12 into the system 2.2.13 and finally impose the stationarity condition, which sets that in the steady state ℘N(tj−1) = ℘N(tj) = ℘ss(N). Thus, we reach a system of N0 + 1 linear equations for the N0 + 1 probabilities ℘ss(N). This system is degenerate (there are only N0 independent equations), but we can complete it by adding the normalization condition Σ_N ℘ss(N) = 1. Solving this system one obtains the exact distribution of the size of the RRP, N, for an input ρisi(t|ν) and a release function pr(N) in the stationary regime.
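The three steps above can be sketched numerically. The code below assumes Poisson input, the step-like release probability pr(N) = U Θ(N − Nth) of eq. 2.2.1, and independent exponential recovery of each empty site with mean τv (the recovery model of section 2.2.1.2); under these assumptions the averages ⟨Prec⟩ have a closed form and the degenerate linear system is completed with the normalization condition:

```python
import numpy as np
from math import comb

def stationary_rrp(N0, Nth, U, nu, tau_v):
    """Distribution of the RRP size just before a spike, in the steady state."""
    # release step (eq. 2.2.12) with p_r(N) = U * Theta(N - Nth)
    R = np.zeros((N0 + 1, N0 + 1))
    for N in range(N0 + 1):
        p = U if N >= Nth else 0.0
        R[N, N] = 1.0 - p
        if N >= 1:
            R[N, N - 1] = p
    # recovery kernel averaged over exponential ISIs (eqs. 2.2.13-2.2.14):
    # each empty site refills with probability 1 - exp(-Delta/tau_v)
    K = np.zeros((N0 + 1, N0 + 1))
    for M in range(N0 + 1):
        for N in range(M, N0 + 1):
            k = N - M
            s = sum(comb(k, j) * (-1) ** j * nu * tau_v
                    / (nu * tau_v + j + N0 - N) for j in range(k + 1))
            K[M, N] = comb(N0 - M, k) * s
    T = R @ K                                  # one release-plus-recovery cycle
    # degenerate system pi = pi T, completed with sum(pi) = 1
    A = np.vstack([T.T - np.eye(N0 + 1), np.ones(N0 + 1)])
    b = np.zeros(N0 + 2); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

print(stationary_rrp(1, 1, 0.5, 10.0, 0.5))
```

For N0 = Nth = 1 the solution gives ℘ss(1) = 1/(1 + νUτv), the occupancy implied by the Poisson-input release rate of eq. 2.3.3.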
We focus now on the main calculation of this chapter, which describes how to obtain the expression of the p.d.f. of the IRI's, ρiri(t|ν), given a general renewal input ρisi(t|ν). We start by making the following expansion

ρiri(∆|ν) = ρ(1)iri(∆|ν) + ρ(2)iri(∆|ν) + ρ(3)iri(∆|ν) + …   (2.2.15)

where ρ(i)iri(t|ν) is the p.d.f. of having an IRI of length ∆ composed of i consecutive ISI's or, put differently, the probability that, given that there was a release at t0, the i-th event arriving at t0 + ∆ is the first one to trigger a new release. We give here the expression of the first of these terms and leave the details of the calculation for Appendix B:

ρ(1)iri(∆|ν) = Σ_{N=0}^{N0} ℘̃ss(N) Σ_{n1=0}^{N0−N+1} Prec(n1, ∆|N0 − N + 1) ρisi(∆|ν) pr(N − 1 + n1)   (2.2.16)

where the four factors will be referred to below as (i) ℘̃ss(N), (ii) Prec(n1, ∆|N0 − N + 1), (iii) ρisi(∆|ν) and (iv) pr(N − 1 + n1).
The probability that a synaptic release occurs is a combination of all the stochastic distributions of the problem: the size N of the RRP (℘ss), the number n1 of recovered vesicles (Prec), the arrival of an input spike after a time window ∆ (ρisi(∆|ν)) and, finally, the release probability upon arrival of the spike (pr). The factor (i) represents the weighted sum over all possible sizes of the RRP at an arbitrary t0, given that a spike succeeded in releasing a vesicle at that time. The tilde over ℘ss(N) accounts for this conditioning, which can be expressed using Bayes' rule:

℘̃ss(N) = ℘ss(N) pr(N) / Σ_{k=0}^{N0} ℘ss(k) pr(k)   (2.2.17)
The term (ii) corresponds to the probability that between the releases at t0 and at t0 + ∆, n1 new vesicles recover (where n1 = 0 … N0 − N + 1, because at t0⁺ there are N0 − (N − 1) empty sites). The expression (iii) represents the probability that the next spike after the one at t0 arrives at t0 + ∆, and the last term (iv) is the probability that it succeeds.
Subsequent terms ρ(i)iri(t|ν) (i = 2, 3, …) of the expansion 2.2.15 are constructed like the one described, but include the convolution product of i distributions ρisi in each of the terms. This in turn means nesting more summations with the respective probability functions in a systematic manner (see Appendix B). To perform the explicit calculation of all the summations in each term ρ(i)iri(t|ν), an expression for the function pr(N) is needed. Most choices here make the calculation unfeasible. It is viable, however, for the step-like function of our model (eq. 2.2.1). Still, the convolution of i distributions ρisi remains unsolved in each of the terms.
We shall now introduce an alternative, divide-and-conquer inspired way to look at our synaptic model, which will be very convenient for the calculation. Since the two stochastic sources of the synapse, namely the recruitment of vesicles for the RRP and the unreliable release, are independent, we will treat them as separate phases of a two-stage channel. This can be done as a consequence of the following renormalization property, possessed by any synaptic model where recovery and release are modeled by independent distributions Prec(n|N) and pr(N):

ρisi(∆|ν) −−(Prec, pr)−→ ρiri(∆|ν)   ≡   ρisi(∆|ν) −−(U)−→ ρd_isi(∆|νd) −−(Prec, p*r)−→ ρiri(∆|ν)

The renormalized approach (r.h.s. of the previous diagram) reads as follows: spikes come through the first stage of the channel, which is just a non-activity-dependent "filter" that decimates the train with a constant probability (1 − U); in other words, every spike has a probability U of crossing. After this stage we have a diluted version of the original train, which we have called ρd_isi(∆|νd), indicating that at least⁷ the rate ν takes a different value νd, which will be termed the diluted rate. The second stage of the synaptic channel is an activity-dependent filter where depression takes place. It is equivalent to the original one-stage model except for the fact that the release probability has been renormalized to p*r(N) = pr(N)/U, ensuring that p*r(N) ≤ 1 for all N. To understand why this two-stage model may be convenient, let us apply the same transformation to the step-like model. In this simple case, p*r(N) = Θ(N − Nth). Because of this, the stochasticity in the second stage is only due to the recovery process, so the unreliability of the release process has been transferred to the first, non-activity-dependent stage. It is interesting to observe that in the limit where the recovery of vesicles occurs very fast (τv → 0), the second stage becomes trivial, because the RRP size is always N0 and transmission at this stage is always successful. In this limit, the global transformation of the spike train into a response train reduces to that of the first stage, i.e. ρiri(∆|ν) = ρd_isi(∆|νd).
Coming back to the computation of ρiri(∆|ν) (eqs. 2.2.15-2.2.16), we can apply it to the second-stage transformation by just substituting ρisi → ρd_isi and pr(N) → p*r(N) = Θ(N − Nth).
The last step, then, is to sum the expansion 2.2.15. To accomplish this, we apply the Laplace transform to the series, which has the well-known property of converting each convolution into a simple product in Laplace space. With this trick the expansion becomes a geometric series and can be summed explicitly. Notice that so far in this calculation the input distribution ρisi(∆|ν) has not been specified. We therefore obtain an expression for the Laplace transform of ρiri(t|ν) valid for a general renewal input, which reads

ρiri(s) = ρd_isi(s) − ℘̃ss(Nth) · ρd_isi(s + N+/τv) [1 − ρd_isi(s)] / [1 − ρd_isi(s + N+/τv)]   (2.2.18)

which depends on the Laplace transform ρd_isi(s) of the diluted version of the input distribution ρisi(∆|ν). The variable N+ = N0 − Nth + 1 is the width of the upper portion of the step function p*r(N). We also introduced the conditional probability ℘̃ss(Nth), which was defined for general N in eq. 2.2.17, but which we rewrite here for this particular case:

℘̃ss(Nth) = ℘ss(Nth) / [℘ss(Nth) + ℘ss(Nth + 1) + … + ℘ss(N0)]   (2.2.19)

that is, the probability that the RRP size is Nth given that the size is equal to or larger than Nth. In the particular case where there is only one vesicle docking site (N0 = 1), both ℘̃ss(Nth) and N+ equal one.

⁷We will see later that for the exponentially correlated input we are using, the CVisi is also transformed, as CVd_isi = √(U(CV²isi − 1) + 1). The third parameter τc is invariant under this transformation, and the correlation Cc(t) remains as in 2.2.6 but with CVd_isi instead of CVisi.
The transformation of the first stage, i.e. ρisi(∆|ν) −→ ρd_isi(∆|νd), must still be established. Expanding ρd_isi in a series like 2.2.15, the terms no longer depend on any vesicle dynamics. They only depend on the constant probabilities U and (1 − U) of being or not being transmitted, respectively, in the following way:

ρd_isi(∆|νd) = ρisi(∆|ν) U + ∫₀^∆ dx ρisi(x|ν) ρisi(∆ − x|ν) (1 − U) U + ∫₀^∆ dx ∫₀^∆ dy ρisi(x|ν) ρisi(y|ν) ρisi(∆ − x − y|ν) (1 − U)² U + …   (2.2.20)

which in Laplace space reads

ρd_isi(s) = (U/(1 − U)) Σ_{k=1}^{∞} [ρisi(s)(1 − U)]^k = U ρisi(s) / [1 − (1 − U) ρisi(s)]   (2.2.21)
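The first stage described by eqs. 2.2.20-2.2.21 is plain Bernoulli thinning, which is straightforward to check by simulation. The sketch below (gamma-distributed ISIs, illustrative parameters) verifies the diluted rate νd = Uν and the CV transformation quoted in footnote 7, here taken as CVd² = U(CV² − 1) + 1, the standard result for Bernoulli thinning of a renewal process:

```python
import numpy as np

rng = np.random.default_rng(3)
U, shape = 0.5, 0.25               # crossing prob.; gamma ISIs with CV^2 = 1/shape = 4
n = 400000

isi = rng.gamma(shape, scale=1.0, size=n)      # renewal input train
kept = rng.random(n) < U                        # first-stage Bernoulli filter
t = np.cumsum(isi)
d_isi = np.diff(t[kept])                        # ISIs of the diluted train

nu, nu_d = 1 / isi.mean(), 1 / d_isi.mean()
cv2, cv2_d = isi.var() / isi.mean()**2, d_isi.var() / d_isi.mean()**2
print(nu_d, U * nu)                             # diluted rate: nu_d = U * nu
print(cv2_d, U * (cv2 - 1) + 1)                 # CV transformation (footnote 7)
```

The thinned train remains renewal (each diluted ISI is a geometric sum of i.i.d. original ISIs), which is what makes the geometric summation in eq. 2.2.21 possible.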
Substituting eq. 2.2.21 into eq. 2.2.18 one obtains an expression for ρiri(s) in terms of the input distribution ρisi(s) and of the synaptic parameters [τv, U, N+]. Thus, no dependence on Nth nor on N0 appears. This occurs because, in the stationary state, the size of the RRP, N, never goes below Nth − 1, since no vesicle can be released if N < Nth. For this reason, the absolute size of the RRP is functionally irrelevant, and all that matters is the relative distance from the maximum size, N0, to the edge of the step, Nth. Since we have the freedom to give Nth any value, we set it equal to one, making N+ = N0. In conclusion, the only relevant synaptic parameters are [τv, U, N0].
To finally obtain an explicit analytical expression for ρiri(∆|ν), one must first define the distribution ρisi(∆|ν) and then apply the inverse Laplace transform to eq. 2.2.18. In the Results section we will show the particularly simple case of Poisson input, while the derivation of the formula for the correlated case is given in Appendix D.
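As a quick consistency check, eqs. 2.2.21 and 2.2.18 can be evaluated numerically for Poisson input with N0 = Nth = 1 (so that ℘̃ss(Nth) = N+ = 1): the resulting transform is properly normalized and its derivative at s = 0 gives a mean IRI equal to 1/νr, anticipating the Poisson result of eq. 2.3.3. A minimal sketch:

```python
nu, U, tau_v = 10.0, 0.5, 0.5      # illustrative parameters
b = 1.0 / tau_v                    # recovery rate

def lt_isi(s):                     # Laplace transform of the Poisson ISI density
    return nu / (s + nu)

def lt_diluted(s):                 # eq. 2.2.21
    return U * lt_isi(s) / (1 - (1 - U) * lt_isi(s))

def lt_iri(s):                     # eq. 2.2.18 with N0 = Nth = 1
    d = lt_diluted
    return d(s) - d(s + b) * (1 - d(s)) / (1 - d(s + b))

print(lt_iri(0.0))                 # normalization of the IRI density
h = 1e-6                           # mean IRI from a central finite difference
mean_iri = -(lt_iri(h) - lt_iri(-h)) / (2 * h)
print(mean_iri, (1 + nu * U * tau_v) / (nu * U))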
2.2.4 Population of synapses
As a step further in the analysis of the transmission properties of dynamical synapses, we study a population of M pre-synaptic neurons, each of them making a single contact, as in the previous model, onto the same post-synaptic cell (see fig. 2.4). We will assume that the M units fire with the same statistics, given by ρisi(t|ν) (which in turn depends on the input parameters CVisi and τc) and the p.d.f. of the rate, f(ν). We will impose that, given the rate ν, the spike trains arriving through each of the axons are independent, that is, there are no spatial cross-correlations (apart from those arising because all the cells fire with the same ν). We model the synaptic response to this multi-channel stimulus as a vector of IRI's, {∆i}, which represents the set of intervals between responses produced in each of the M synapses.
Figure 2.4: A population of M neurons making single contacts onto a target cell. All cells in the population share the same firing statistics, given by CVisi, τc and f(ν). The parameters of the synaptic contacts are, however, distributed according to the distribution D(U, N0, τv). The input code is the input rate ν. The output code is the M-dimensional vector of IRI's, {∆i}.
Motivated by puzzling findings showing that, among a population of synapses of the same class, the synaptic variables (maximum RRP size, probability of release in the stationary state, recovery time, ...) take a wide distribution of values [Allen and Stevens, 1994, Dobrunz and Stevens, 1997, Murthy et al., 1997, 2001], we try to assess the functional relevance of such heterogeneity in terms of information transmission. We characterize the population of synapses by a joint probability distribution D(U, N0, τv) which is partially determined by experimental data. In particular, we refer to results obtained from the CA3-CA1 hippocampal synapses [Allen and Stevens, 1994, Murthy et al., 1997, Dobrunz and Stevens, 1997, Murthy et al., 2001]. Firstly, Murthy et al. [1997] found that the distribution of the probability of release when the synapse is at rest, p1 (not stimulated for a long period of time), can be well fitted by a gamma function of order 2, namely

Γλ(p1) = λ² p1 e^{−λ p1}   (2.2.22)

with λ = 7.9.
In our model we identify this probability p1 with the probability of release when the RRP size is maximal, i.e. p1 = pr(N = N0) = U. As already mentioned at the beginning of this chapter, several works [Dobrunz and Stevens, 1997, Murthy et al., 2001, Hanse and Gustafsson, 2001a, Dobrunz, 2002] found that in a population of synapses p1 correlates with the maximum RRP size N0. In other words, the marginal joint probability of U and the maximum RRP size N0 does not factorize: D(U, N0) = f(N0) R(U|N0) with R(U|N0) ≠ R(U). In particular, Murthy et al. [2001] found that when averaging the value of U across the population for a fixed value of N0, the data can be fitted with the single contact model (see section 2.2.1), that is:

⟨U⟩_N0 = 1 − (1 − pv)^N0   (2.2.23)

with pv ≃ 0.05. Using a different frequency of stimulation, Hanse and Gustafsson [2001a] find that the maximum size of the RRP, N0, takes much lower values. Fitting their data with the same release model (eq. 2.2.23) yields values of pv ≃ 0.3−0.6.
Therefore, we use equations 2.2.22 and 2.2.23 as requirements which the distribution D(U, N0, τv) must fulfill. Thus, we write the marginal joint probability of U and N0 as the product D(U, N0) = f(N0) R(U|N0) and we model the conditional probabilities R(U|N0) with the following ansatz⁸:

Rq(U|N0) = (q^{N0+1}/N0!) U^{N0} e^{−qU}   (2.2.24)

which means that for each N0 the distribution R(U|N0) is a gamma function of order N0 + 1 and parameter q. This new parameter has to be determined from the experimental constraints (eqs. 2.2.22-2.2.23).
We first impose that the marginal of U equal the function given in eq. 2.2.22:

Γλ(U) = Σ_{N0} f(N0) Rq(U|N0)

From this expression we obtain the coefficients f(N0) by expanding both sides in powers of U and identifying terms of the same order. In this way, we find that the f(N0) are polynomials in λ/q of degree N0 + 1 (see Appendix C).

⁸The experimental works described [Dobrunz and Stevens, 1997, Murthy et al., 1997, 2001] did not collect enough data to fit the distribution of U for a fixed N0.
To fix the parameter q, we make use of the second constraint, eq. 2.2.23, making an expansion up to first order in the parameter pv, that is

⟨U⟩_N0 = 1 − (1 − pv)^N0 ≃ pv N0   (2.2.25)

Performing the average ⟨U⟩_N0, our model gives:

⟨U⟩_{R(U|N0)} ≡ ∫ U Rq(U|N0) dU = (N0 + 1)/q   (2.2.26)

Thus, identifying the slopes of these two expressions⁹, we find that 1/q ≃ pv.
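The ansatz of eq. 2.2.24 is simply a Gamma(N0 + 1, rate q) density, so eq. 2.2.26 can be verified by direct sampling (using the value pv = 0.057 quoted from [Murthy et al., 2001]):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 1.0 / 0.057                     # 1/q ~ p_v, with p_v from [Murthy et al., 2001]
for N0 in (2, 5, 10):
    # R_q(U|N0) is a gamma density of order N0+1 and rate q (eq. 2.2.24)
    U = rng.gamma(shape=N0 + 1, scale=1.0 / q, size=100_000)
    print(N0, U.mean(), (N0 + 1) / q)   # sample mean vs (N0+1)/q (eq. 2.2.26)
```

The sample means reproduce (N0 + 1)/q, i.e. the linear behavior ⟨U⟩ ≃ pv N0 of eq. 2.2.25.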
Summarizing: we first propose that the family of distributions R(U|N0) (N0 = 1, 2, …) is a gamma basis of parameter q; then we determine this parameter as a simple function of the experimentally obtained pv; finally we determine the probability function f(N0) for N0 = 1, 2, …, Nmax0 (where the cutoff Nmax0 has been chosen so that the cumulative probability Σ_{N0=1}^{Nmax0} f(N0) is close enough to one) as polynomials in λ/q, where λ is the second experimentally measured parameter. In fig. 2.5 (bottom) we show the plane (N0, U) with 55 realizations from the final distribution D(U, N0), with the values of pv and λ taken from [Murthy et al., 2001] and [Murthy et al., 1997], respectively. The theoretical curve 1 − (1 − pv)^N0, along with a straight line representing the model average ⟨U⟩_{R(U|N0)}, is superimposed for comparison. The figure resembles the experimental one in [Murthy et al., 2001], fig. 1-B. The top figure is the histogram of the release probability of 600 realizations, together with the experimental fit of equation 2.2.22 (see [Murthy et al., 1997], fig. 2-B).
2.3 Results: synaptic response statistics
We now describe the main statistical features of the synaptic responses that arise as a consequence of the stochastic dynamics of the synaptic channel. We start from particular simple cases (Poisson input, N0 = 1), with which we may gain some intuition, and tackle the harder general case later.
⁹If the parameter pv takes a larger value, as in [Hanse and Gustafsson, 2001a] where pv ∼ 0.5, the expansion in equation 2.2.25 can instead be performed with respect to the parameter N0, which in those cases is small (∼ 1). Thus, we identify 1/q ≃ −ln(1 − pv).
Figure 2.5: Model for the population distribution D(U, N0). Top: histogram of 600 realizations of the marginal distribution of the release probability, Γλ(U), which is fitted with a gamma function of order 2 and parameter λ (this constraint, and the value λ = 7.9, are taken from the experimental findings in [Murthy et al., 1997]). Bottom: the plane (N0, U) with 55 sampled points drawn from the population joint distribution D(U, N0). The black line represents the experimental fit to the averaged release probability, 1 − (1 − pv)^N0 [Murthy et al., 2001]. The straight red line represents the average ⟨U⟩_{R(U|N0)} = (N0 + 1)/q given by our model. We have taken 1/q ≃ pv, whose numerical value (taken from [Murthy et al., 2001]) reads pv = 0.057.
2.3.1 Single docking site: N0 = 1
As we already mentioned, the case where there is at most one vesicle ready for release yields a renewal synaptic output process. This has several advantages: first, it permits calculating the connected conditional rate (which we will call the correlation function for simplicity), Ccr(t), of the responses and, with it, the coefficient of variation of the IRI's, CViri. Moreover, in the next chapter we will be able to compute the information conveyed by the number of responses n(T) in a large time window T (see section 3.2).
2.3.1.1 Poisson input
When CVisi = 1 the input statistics become Poisson, with ρisi(∆) = ν e^{−ν∆}. In this case the expressions for the IRI distribution ρiri(∆|ν) and the correlation function of the responses Ccr(t) read [de la Rocha et al., 2002] (see Appendix D)

ρiri(∆|ν) = (νU / (1 − νUτv)) (e^{−νU∆} − e^{−∆/τv})   (2.3.1)

Ccr(t) = −(νU / (1 + νUτv)) e^{−t/τr}   (2.3.2)
where we have defined a new time constant τr ≡ τv/(1 + νUτv), which corresponds to the time scale of the correlations induced by the synaptic depression. It is very helpful here to recall the interpretation of the synapse as a two-stage channel, in which the afferent spikes are first removed with a constant probability 1 − U, while in a second, activity-dependent phase they reliably elicit a response if there is a vesicle in the RRP. Because the input is now Poisson, the first stage becomes a trivial dilution of the train by a factor U, which is equivalent to renormalizing the rate to νd = Uν. The random synaptic response process defined by eq. 2.3.1 is a combination of two exponentials arising from the two Poisson processes involved: the input process with a diluted rate νd = Uν, and the replenishment of the docking site, which takes a time drawn from an exponential distribution and consequently can be seen as a Poisson process with rate 1/τv. The ratio between the two rates, νUτv, determines which process is faster and which is slower, and hence dictates the behavior of the responses: when νUτv ≪ 1, the recovery of a vesicle is so fast that no depression can be observed, and the release process reproduces the renormalized Poisson input with rate νd. In the opposite limit, νUτv ≫ 1, which will be referred to hereafter as the saturation regime, almost every spike finds the site empty, and when it is occupied a new afferent spike provokes its release straight away. In this limit the output becomes Poisson again, but with rate 1/τv. As a consequence, in both limits the amplitude of the correlation Ccr(t) (eq. 2.3.2) vanishes. However, although both limits possess identical statistics (except for the rate), they differ enormously in terms of information transmission. In the saturation limit, no information about ν can be transmitted, because the post-synaptic neuron is only listening to the input-independent refilling of the RRP. This saturation regime can be better described by analyzing the rate νr and the coefficient CViri of the responses, which for Poisson input read (Appendix D)
νr = νU / (1 + νUτv)   (2.3.3)

CV²iri = 1 − 2τrνr = (1 + (νUτv)²) / (1 + νUτv)²   (2.3.4)

We can now verify analytically from these expressions that in the saturation regime νUτv ≫ 1, νr approaches the asymptotic value 1/τv and CViri → 1.
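Equations 2.3.1-2.3.4 are easy to confirm with a direct event-driven simulation of the N0 = 1 synapse (Poisson spikes, release probability U when the vesicle is docked, exponential recovery of mean τv). A minimal sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
nu, U, tau_v = 20.0, 0.5, 0.5      # input rate (Hz), release prob., recovery time (s)
T = 10000.0                        # total simulated time (s)

t, refill_at, releases = 0.0, 0.0, []
while t < T:
    t += rng.exponential(1.0 / nu)           # next Poisson spike
    if t >= refill_at and rng.random() < U:  # docked vesicle + successful release
        releases.append(t)
        refill_at = t + rng.exponential(tau_v)   # recruit a new vesicle

iri = np.diff(releases)
nu_r = len(releases) / T
cv2 = iri.var() / iri.mean() ** 2
x = nu * U * tau_v
print(nu_r, nu * U / (1 + x))                # response rate, eq. 2.3.3
print(cv2, (1 + x ** 2) / (1 + x) ** 2)      # squared CV of the IRIs, eq. 2.3.4
```

With these parameters νUτv = 5, so the synapse is well past νsat = 1/(Uτv) = 4 Hz (eq. 2.3.5) and the measured νr approaches the asymptotic value 1/τv = 2 Hz.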
Figure 2.6: (Top) Synaptic transfer function: the synaptic response rate νr vs the input rate ν, with τv = 500 ms. Vertical arrows indicate the position of the corresponding saturating input rate νsat(χ = 0.75) (see text). It can be observed that the correlated inputs saturate later than the Poisson input, towards the same limit value 1/τv. (Bottom) Squared coefficient of variation of the IRI's, CV²iri, vs τv, for several values of CVisi. The legend of the top plot applies to both plots. Other parameters are: ν = 10 Hz (bottom), U = 0.5 and τc = 50 ms.
In order to formalize where this saturation regime begins, we define νsat(χ) as the input rate at which the output rate νr has reached a fraction χ of its total dynamic range 1/τv, which means that νr(νsat) = χ/τv. Adopting this functional definition, we arrive at νsat(χ) = χ/((1 − χ)Uτv). We realize now that we can fix the parameter χ arbitrarily, because the choice of a different value implies just a multiplication by a constant factor. Thus, we set χ = 0.5 and finally obtain

νsat = 1/(Uτv)   (2.3.5)

We thus recover the definition of Abbott et al. [1997] of the limiting input rate beyond which no information about the spike rate can be transmitted in the stationary state. Since the parameter U has been found to be plastic [Markram and Tsodyks, 1996a], the range below saturation can be adjusted in an activity-dependent manner.
34 Chapter 2: A model of synaptic depression and unreliability

What happens between the two Poissonian limits? Fig. 2.6 (bottom, black line) illustrates the behavior of CViri as a function of τv. In both limits, τv → 0 and τv → ∞, the coefficient of variation CViri → 1. In the whole intermediate interval CViri < 1, which means that the train is more regular than Poisson. This can also be concluded by inspection of the correlation function Ccr(t) (eq. 2.3.2): because it is an exponential with negative amplitude, the coefficient of variation must be smaller than one (see eq. 2.2.6). The fact that the correlation is negative implies that after a release occurs, and during a time window of size ∼ τr, the probability that the synapse triggers subsequent responses is reduced. Intuitively, the response train becomes more regular than the Poisson input because very short intervals have been removed: it is very unlikely that two spikes separated by a short ISI of size ∆ (more precisely, ∆ ≪ τv) both elicit releases. This occurs because a vesicle must be recruited to occupy the empty docking site before the second spike arrives, and this takes a time τv on average. Fig. 2.6 also shows that CViri has a minimum, which is reached precisely when this average recovery time τv equals the renormalized mean ISI, 1/(Uν). It is remarkable that, at this point, the output release process becomes a gamma process of order 2, ρiri(∆) = U²ν² ∆ exp(−Uν∆), whose coefficient of variation is exactly 1/√2.
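The location and depth of this minimum follow directly from eq. 2.3.4: writing x = νUτv, the expression (1 + x²)/(1 + x)² is minimized at x = 1, i.e. at τv = 1/(Uν), where it equals 1/2. A quick numerical sweep (arbitrary parameter values) confirms it:

```python
import numpy as np

nu, U = 10.0, 0.5
tau_v = np.linspace(1e-4, 2.5, 20_000)
x = nu * U * tau_v
cv2_iri = (1.0 + x**2) / (1.0 + x) ** 2    # eq. 2.3.4, Poisson input

i_min = np.argmin(cv2_iri)
print(tau_v[i_min])                # ≈ 0.2 s = 1/(U * nu)
print(np.sqrt(cv2_iri[i_min]))     # ≈ 0.707 = 1/sqrt(2)
```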
2.3.1.2 Correlated input
Let us now study what happens when the input is correlated, that is, when we use the ρisi(∆) given by the two-exponential distribution of eq. 2.2.3. In this case, ρiri(∆) and Ccr(t) result in a combination of four and three exponentials, respectively (see Appendix D):

ρiri(∆|ν) = (U/τv) (C1 e^{s1∆} + C2 e^{s2∆} + C3 e^{−∆/τv} + C4 e^{−∆/τ1})    (2.3.6)

Ccr(t) = K1 e^{−t(1/τv − s1)} + K2 e^{−t(1/τv − s2)} + K3 e^{−t/τc}    (2.3.7)

The coefficients Ci, Ki are fractional functions of both input and synaptic parameters. The time constant τ1 = (1/τv + 1/τc)^{−1}, and s1 and s2 are the roots of a second-order equation given in Appendix D. From these expressions we can derive the output variables νr and CViri, which read (see Appendix D)

νr = νU / (1 + τv νU + τv U (CV²isi − 1) / (2(τv + τc)))    (2.3.8)

CV²iri = H / (τc + τv + τv Uν τc + τv Uα + τv² Uν)²    (2.3.9)

where

H = τv⁴U²ν² + 2τv³U²ν²τc + τv²U²α² + 4τv²U²νατc + 2Uατv² + τv² + τc² + τv²U²ν²τc² + 2τvU²α²τc + 4τvUατc + 2τvU²νατc² + 2τvτc + 2Uατc²

and α = (CV²isi − 1)/2.
The first result that can be easily extracted from these formulas is that the saturation rate νsat is altered by the input correlations in the following way:

νsat = 1/(Uτv) + (CV²isi − 1) / (2(τv + τc))    (2.3.10)
which means that if the input stimulus is positively correlated, the saturation regime is pushed towards higher values of the input rate. In fig. 2.6 (top) we have plotted the synaptic transfer function νr(ν). Along with the Poisson input (black line), two more values of the input CVisi are shown (red and green lines). The three curves tend asymptotically to 1/τv, but the two correlated ones approach it more slowly than the Poisson case. Three small arrows on the rate axis indicate the corresponding value νsat(χ = 0.75)^10 of each curve. It is worth remarking that, since the dynamical range of the output rate does not change with the input parameters, the enlargement of the non-saturating range due to the positive correlations comes at the price of a poorer encoding of this larger range. This can be seen by comparing the slopes of the function νr(ν) for the Poisson and correlated examples shown in fig. 2.6 at small values of the input rate ν (below the first arrow).
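As a consistency check of eq. 2.3.10, one can plug νsat back into eq. 2.3.8 and verify that the output rate has indeed reached half of its dynamic range, νr(νsat) = 1/(2τv) (i.e. χ = 0.5). A minimal sketch, with arbitrary parameter values:

```python
U, tau_v, tau_c = 0.5, 0.5, 0.05

def nu_r(nu, cv2_isi):
    """Output rate for a correlated input, eq. 2.3.8."""
    corr = (cv2_isi - 1.0) / (2.0 * (tau_v + tau_c))
    return nu * U / (1.0 + tau_v * nu * U + tau_v * U * corr)

for cv2_isi in (1.0, 1.5**2, 2.5**2):   # CV_isi = 1, 1.5, 2.5, as in fig. 2.6
    # saturating input rate, eq. 2.3.10
    nu_sat = 1.0 / (U * tau_v) + (cv2_isi - 1.0) / (2.0 * (tau_v + tau_c))
    # the product nu_r * tau_v should be exactly 0.5 at nu = nu_sat
    print(nu_sat, nu_r(nu_sat, cv2_isi) * tau_v)
```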
In figure 2.6 (bottom) we have also plotted the coefficient CViri of the same three examples, with fixed ν = 10 Hz, as a function of τv. For large values of τv (saturation limit) all curves tend to the same asymptote at CViri = 1. However, when the vesicle dynamics are rapid (small τv) the correlated cases produce more irregular trains than the Poisson input. This is the consequence of the larger variability of the ISIs in the input (in particular CVisi = 1.5 and CVisi = 2.5) and of the fast recovery, which permits short ISIs to cross the synapse. In the limit τv = 0, where there is no depression (the RRP is instantaneously replenished), the curves of the correlated cases do not hit the CViri-axis at their corresponding input values CVisi, because of the unreliability represented by the factor U. In particular, the effect on the variability (CVisi → CViri) of a non-dynamical stochastic filter U is to bring the output closer to Poisson. This is expressed by

CV²iri − 1 = U (CV²isi − 1)    (2.3.11)
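Relation (2.3.11) holds for any renewal input thinned by a static Bernoulli filter of probability U. The sketch below checks it by simulation, using a gamma-ISI train with CV²isi = 4 as a convenient stand-in for the two-exponential distribution of eq. 2.2.3 (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
U = 0.5

# Renewal input with CV_isi != 1: gamma-distributed ISIs
# (shape k gives CV_isi^2 = 1/k, so k = 0.25 -> CV_isi^2 = 4)
isis = rng.gamma(shape=0.25, scale=4.0, size=500_000)
spikes = np.cumsum(isis)

# Static stochastic filter: each spike transmitted with probability U,
# with no vesicle dynamics (tau_v = 0)
out = spikes[rng.random(spikes.size) < U]
iri = np.diff(out)

cv2_in = isis.var() / isis.mean() ** 2      # ≈ 4
cv2_out = iri.var() / iri.mean() ** 2
print(cv2_out, 1.0 + U * (cv2_in - 1.0))    # both ≈ 2.5, as eq. 2.3.11 predicts
```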
As τv starts to increase, the response trains rapidly become more regular, as in the Poisson case explained above. Now a larger value of τv is needed to reach the minimum. Since the minimum CViri reached is roughly the same as in the Poisson case, the reduction of variability, measured as the distance from the maximum of CViri (at τv = 0) to the minimum, grows substantially as the input correlations grow. Thus, we conclude that, in terms of regularizing the response train, short-term depression becomes more efficient the
10 We have picked a different value of χ for the sake of graphical clarity. Using the previously defined νsat(χ = 0.5) would simply rescale the arrows' x-position by a factor 1/3.
Figure 2.7: Why do correlated trains saturate at higher input rates than Poisson trains? The picture shows two caricatured spike trains, namely a Poisson train and a bursty correlated train, with equal rate ν, and how they would be transformed into a train of releases upon arriving at a reliable (U = 1) synapse with a single docking site (N0 = 1). Vertical lines represent spikes on a temporal axis: solid lines represent transmitted spikes, while dashed lines represent unsuccessful ones, which found the RRP empty. Shaded orange boxes represent the time during which the release site is empty: at the left border of each box the release of a vesicle occurs, while at the right border the docking of a new vesicle takes place. We have set the recovery time equal at each release to illustrate the explanation more clearly. Because of the clustered structure of the bursty train, there is a larger interval between the docking of a vesicle and the next release (blue arrows), which indicates that the synapse saturates less than in the Poisson case. (Note: τ in the figure corresponds to τv in the text.)
more positively correlated is the stimulus. A perhaps non-trivial remark is that, at the point where the CViri curve crosses the horizontal line CViri = 1, the output is not a Poisson process11.
Why do positively-correlated trains saturate at higher rates than Poisson trains (or than negatively-correlated trains; data not shown)? The answer is illustrated in figure 2.7, where two spike trains transformed by the recovery dynamics have been caricatured, namely a Poisson and a bursty input. Both spike trains have the same rate ν. In the correlated input, pulses tend to cluster so that, to keep the rate fixed, the interval between clusters (bursts)

11 Recall that the constraint CV = 1 is a necessary but not sufficient condition for a renewal process to be Poisson.
is larger than the mean ISI. The orange boxes indicate the time during which the synapse is empty, so that any spike falling inside them fails to elicit a response. The right end of each box denotes the time at which a new vesicle has docked. For the bursty input, more spikes are filtered out by the vesicle depletion (illustrated by the orange boxes), meaning that νr is lower than for the Poisson input. However, a larger fraction of transmission failures does not imply more saturation. What is saturation, then? The blue arrows indicate the time interval between a docking event and a release. As the synapse becomes more and more saturated, this temporal distance shrinks, eventually becoming zero (complete saturation). One can observe that for the Poisson input the average of this interval is smaller than for the bursty input. The key point is that when the same number of spikes (that is, with the input rate kept fixed) are clustered, the distance between clusters increases as the bursts gather more spikes (see also figure 2.3). Thus, at high rates, the same increase in ν will produce a larger increase in νr if the input is correlated, because the "remaining" temporal range to code ν (blue intervals) is larger in this case.
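This intuition can be tested with a toy simulation along the lines of fig. 2.7: a reliable (U = 1) single-site synapse fed either a Poisson train or a bursty train of the same rate. The burst generator below (Poisson burst onsets with a fixed number of spikes per burst) is an illustrative stand-in for the correlated process of section 2.2.2, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
tau_v = 0.5     # mean vesicle recovery time (s)

def release_rate(t_spikes, tau_v):
    """Pass a spike train through a reliable (U = 1) single-site synapse."""
    t_ready, n_rel = 0.0, 0
    for t in t_spikes:
        if t >= t_ready:
            n_rel += 1
            t_ready = t + rng.exponential(tau_v)
    return n_rel / t_spikes[-1]

def poisson_train(nu, n):
    return np.cumsum(rng.exponential(1.0 / nu, n))

def bursty_train(nu, n_bursts, nb=5, dt_burst=0.002):
    """Bursts of nb spikes (2 ms apart); burst onsets Poisson at rate nu/nb,
    so that the overall spike rate is nu."""
    onsets = np.cumsum(rng.exponential(nb / nu, n_bursts))
    return np.sort((onsets[:, None] + dt_burst * np.arange(nb)).ravel())

nu = 30.0
r_pois = release_rate(poisson_train(nu, 300_000), tau_v)
r_burst = release_rate(bursty_train(nu, 60_000), tau_v)
print(r_pois, r_burst, 1.0 / tau_v)   # bursty output rate is lower; both < 1/tau_v
```

At this fixed ν the bursty train yields the lower output rate, i.e. it sits farther below the common limit 1/τv and retains more unused dynamic range, which is precisely the "saturates later" effect seen in fig. 2.6 (top).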
The last relevant effect of depression, related to the reduction of variability, is the transformation of the correlations. In figure 2.8 (left) we represent the correlation function of the response train, Ccr(t) (eq. 2.3.7), for different values of τv, as well as the correlation of the input spike train, Cc(t) (eq. 2.2.6). The case τv = 0 (no depression) shows that the output correlation equals the input correlation multiplied by a factor U. As soon as τv takes a finite value, the correlation at short intervals is rapidly reduced because of the effective synaptic refractory time imposed by the vesicle replenishment. For larger values of τv, Ccr(t) becomes negative for all t, and in the saturation limit, when the statistics of the responses is governed by the docking of the vesicles, the correlation vanishes.

It is known [Atick, 1992, Dan et al., 1996, Nadal et al., 1998] that de-correlation or, more generally, minimal redundancy, is directly related to the maximization of information transmission. However, in our system this connection turns out not to be so straightforward: in the saturation limit, where the output does not convey any information about the input rate ν, the response train is indeed uncorrelated. Despite this observation, we will compare the mutual information with the total magnitude of the correlation function, quantified by means of the following measure:

K = ∫₀^∞ dt |Ccr(t)|    (2.3.12)
i.e. the integral of the absolute value of the correlation (to avoid cancellations due to the sign). This absolute measure of the correlation is plotted in fig. 2.8 (right) for the same parameter values as in the left plot. It is interesting to note that, even though the general trend of K as τv grows is to decrease, it exhibits a local minimum before going asymptotically to zero,
Figure 2.8: Left plot: connected conditional rate Ccr(∆) (also called correlation function in the text) of the synaptic responses for several values of τv (0 ms, 10 ms, 50 ms, 500 ms and 2 s). The input connected conditional rate, Cc(t), is also shown for comparison (thick black line). Right plot: integral of the absolute value of Ccr(∆) (denoted K in the text) as a function of τv, for τc = 100 ms and τc = 500 ms. From both plots it can be deduced that in the limit τv → ∞ the output is completely decorrelated. However, K displays, for some τc values, a non-monotonic behavior. Parameters: ν = 10 Hz, τc = 100 ms (left), CVisi = 1.5, U = 0.5, N0 = 1.
2.3.2 Multiple docking sites: N0 > 1

When the synaptic contact is endowed with several vesicle docking sites, upon arrival of a spike transmission occurs with probability U only if the ready-releasable pool (RRP) contains at least Nth vesicles. As previously mentioned (section 2.2.3), in the stationary state the RRP size never falls below Nth − 1, so the absolute RRP size can be redefined by setting Nth = 1. The synapse then releases transmitter with probability U whenever there is one or more vesicles ready for release. The main difference with respect to the single docking site scenario is that now, if a long enough interval elapses with no spikes hitting the terminal, the RRP size N can grow until N ∼ N0. At that point, several consecutive releases (approximately N0) must occur before the pool gets depleted and depression takes place. Thus, if the number of sites N0 is large, depression will be moderate even if the recruitment of vesicles is not too fast, because N will usually be well above the edge of the step-like function, that is, N ≫ 1.
Figure 2.9: Distribution of the synaptic responses ρiri(∆|ν) for several values of the number of docking sites N0 and different recovery rates 1/τv (N0 = 1, τv = 0.5 s; N0 = 2, τv = 1 s; N0 = 3, τv = 1.5 s; N0 = 4, τv = 2 s; the same legend applies to all panels). Top plots: distribution ρiri(∆|ν) for a Poisson input (top left) and a correlated input with CVisi = 2 (top right). Bottom plots: decomposition of ρiri(∆|ν) into the two terms of equation 2.3.13, namely (i) the leading term ρiri(∆, [1, τv/N0]), which equals the distribution given by a single docking site with a renormalized recovery rate, and (ii) a perturbative term which includes the qualitatively different response effects obtained with several docking sites (e.g. the non-zero probability of obtaining IRIs of infinitely small size). The two right panels show that the introduction of input short-range correlations severely changes the shape of ρiri only when N0 > 1. Other parameters: ν = 10 Hz, τc = 50 ms, U = 0.5, τv = 0.5 s.
Because our aim is the optimization of the information transmission through the synapse, and since depression will turn out to be beneficial for this purpose, we now write the distribution of IRIs, ρiri(∆), of a multiple-site synapse as a perturbative extension of the single-site model. The reason for this strategy is that, as will be shown in the next section, the more moderate the depression due to a large N0, the worse the transmission (at the optimal value of τv). In this sense, the single docking site model performs better than the multi-docking-site model12, which under some choice of the parameters can resemble a single docking site model. Thus, manipulating the expression for the Laplace transform of the IRI distribution ρiri(s), eq. 2.2.18, we find that it can be written as

ρiri(s, [N0, τv]) = ρiri(s, [1, τv/N0]) + (1 − ℘ss1) ρdisi(s + N0/τv)(1 − ρdisi(s)) / (1 − ρdisi(s + N0/τv))    (2.3.13)

which means that the distribution for N0 docking sites recovering with τv, ρiri(∆, [N0, τv]), equals the distribution of the single-site case with recovery constant τv/N0, plus a perturbative term multiplied by the factor (1 − ℘ss1) = ℘ss(N > 1)/℘ss(N ≥ 1) (see eq. 2.2.19). This factor represents the probability that, given that the RRP is not empty, it holds more than one vesicle. Part of this expression could indeed have been anticipated: suppose that the synapse is near the saturation regime or, in other words, that a large fraction of the afferent spikes find the RRP empty because the vesicle recovery machinery does not work fast enough. In this case the probability that the RRP size exceeds one is small (℘ss(N > 1) ≪ 1), and therefore the factor (1 − ℘ss1) ∼ 0. If we then sit at the synaptic contact and watch the vesicles randomly dock at any of the N0 sites, they arrive as a Poisson process of rate N0/τv, resulting from the superposition of the N0 independent Poisson processes of rate 1/τv which govern the refilling of the individual sites. Thus, the multi-site model effectively becomes a single-site contact with a recovery time constant N0 times smaller (i.e. N0 times faster), and ρiri(∆, [N0, τv]) is well approximated by ρiri(∆, [1, τv/N0]). When the stationary state of the synapse is far from the saturation regime, the second term in eq. 2.3.13 becomes comparable to the first one. In the limit where N0 becomes very large (with the rest of the parameters fixed), the output distribution ρiri(∆, [N0, τv]) → ρdisi(∆).
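The renormalization ρiri(∆, [N0, τv]) ≈ ρiri(∆, [1, τv/N0]) near saturation can be checked by simulating the docking-site dynamics directly. The sketch below (Bernoulli release with probability U and independent exponential refilling of each empty site, as in section 2.2.1; parameter values arbitrary) compares the output rates of the two models in a strongly saturated regime:

```python
import numpy as np

rng = np.random.default_rng(3)

def release_times(t_spikes, U, tau_v, n0):
    """RRP with n0 docking sites; each empty site refills after an
    exponential time of mean tau_v; a spike releases one vesicle with
    probability U whenever at least one is docked."""
    refill = np.array([])      # pending refill-completion times
    n_docked, out = n0, []
    for t in t_spikes:
        done = refill <= t
        n_docked += int(done.sum())
        refill = refill[~done]
        if n_docked > 0 and rng.random() < U:
            out.append(t)
            n_docked -= 1
            refill = np.append(refill, t + rng.exponential(tau_v))
    return np.asarray(out)

# Deep saturation: nu * U * tau_v = 80
nu, U, tau_v, n0 = 80.0, 0.5, 2.0, 4
spikes = np.cumsum(rng.exponential(1.0 / nu, 200_000))

rate_multi = release_times(spikes, U, tau_v, n0).size / spikes[-1]
rate_single = release_times(spikes, U, tau_v / n0, 1).size / spikes[-1]
print(rate_multi, rate_single)   # both close to the common limit n0/tau_v = 2 Hz
```

Far from saturation the two rates separate, because the perturbative term in eq. 2.3.13 is no longer negligible.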
Figure 2.9 shows the distribution ρiri(∆|ν) for several values of N0. As more docking sites are included, the recovery time is changed in such a way that the ratio τv/N0 is kept constant. As just explained, this leads to a comparison of different contacts with the same limit frequency (the maximum frequency at which releases can take place). For this reason, the tails of the distributions coincide, independently of N0 (top plots).

12 Depending on the optimization criterion, the optimal transmission occurs with a single docking site if the Fisher information is picked, while a two-docking-site contact is optimal if the Shannon information is chosen.
The bottom plots show the decomposition of equation 2.3.13 into a leading term ρiri(∆|ν, [1, τv/N0]) (common to all values of N0) and a perturbative term (second term on the r.h.s. of eq. 2.3.13), which includes the effect of several vesicles being docked at the same time. In both the Poisson and the correlated cases, the introduction of N0 > 1 provokes a qualitative change at the origin, namely ρiri(∆ = 0|ν) > 0. This change is particularly prominent when the input is correlated (exhibited by the large peak at the origin, reflecting the effect of the input bursts). Thus, we can conclude that adding a new docking site while renormalizing the recovery rate only produces a substantial change when going from N0 = 1 to N0 = 2. This will have important consequences when computing the information for different values of N0.
Figure 2.10: Synaptic response rate νr vs the input rate ν for different values of N0 (N0 = 1, 2, 3, 4; the same legend applies to both panels), with CVisi = 1 (top) and CVisi = 2.5 (bottom). The straight dashed line is the limit N0 → ∞, where no depression is observed. Vertical arrows indicate the position of the corresponding saturation rate νsat(χ = 0.75) (see text). As N0 increases, νsat also increases. Parameters: τc = 50 ms, τv = 500 ms and U = 0.5.
In figure 2.10 the synaptic transfer function νr(ν) is plotted for different values of N0 with τv kept fixed. The behavior is qualitatively the same in all cases. It is clear, though, that the rate at which νr saturates grows with N0 as N0/τv. It is also apparent that the greater N0, the later along the ν-axis depression takes place (seen as a deviation from the straight line of slope U). The position of νsat is indicated by the vertical arrows on the ν-axis.
2.4 Tables of symbols
MODEL OF SYNAPTIC CONTACT

pv    Single-vesicle fusion probability, defined as the probability that an individual docked vesicle undergoes exocytosis upon arrival of a spike (see section 2.1)
pr(N)    Probability of a uni-vesicular release as a function of the number of docked vesicles, N
N0    Maximum number of vesicles in the readily-releasable pool (RRP)
Nth    Threshold in the number of docked vesicles above which pr(N > Nth) = U
U    Release probability when the number of docked vesicles is N > Nth
τv    Mean recovery time of a single docking site
Prec(n, t|N0 − N)    Probability that n out of the N0 − N empty docking sites are recovered in a time window t
℘N(tj)    Probability of N vesicles being docked upon arrival of the j-th spike
℘ssN    Probability of N vesicles being docked upon arrival of a spike in the stationary state
℘ssN    Probability of N vesicles being docked upon arrival of a spike in the stationary state, given that the spike will trigger a response

Table 2.1: Parameters and functions of the model of a pre-synaptic terminal
MODEL OF A POPULATION OF SYNAPSES

D(U, N0, τv)    Population joint distribution of the synaptic parameters [U, N0, τv]
D(U, N0)    Population joint distribution of the synaptic parameters [U, N0]
f(N0)    Marginal probability of the population parameter N0
R(U|N0)    Conditional probability of U given N0, modeled as Gamma functions of order N0 + 1
q    Parameter of the Gamma functions R(U|N0)
Γλ(U)    Marginal probability of U
λ    Parameter of the Gamma function Γλ(U)

Table 2.2: Parameters and functions of the population-of-synapses distribution
INPUT STATISTICS

ν    Pre-synaptic firing rate
CVisi    Coefficient of variation of the pre-synaptic inter-spike intervals (ISIs)
τc    Temporal scale of the exponential auto-correlations of the input
ρisi(t|ν)    Probability distribution function of the ISIs given the input rate ν
[β1, β2, ε]    Parameters of the p.d.f. ρisi(t|ν), which are complex functions of the physical parameters [ν, CVisi, τc]
C(t, t′)    Two-point correlation function, defined as the probability of finding spikes at times t and t′
Cc(t)    Connected conditional rate, defined as the extra probability of finding a spike at time t given that there was another one at time zero
Nb    Mean number of spikes within a burst
Tb    Mean duration of a burst
f(ν)    Probability distribution function of the input rates ν
ρisi(t)    Probability distribution function of the ISIs for the whole ensemble of inputs
νd    Rate of the diluted input process, defined as the result of decimating the input by a random factor U
ρdisi(t|νd)    P.d.f. of the intervals of the diluted spike train
ρisi(s)    Laplace transform of the ISI p.d.f. ρisi(t|ν)

Table 2.3: Parameters and functions used to model the input spike statistics
SYNAPTIC RESPONSE STATISTICS

νr    Rate of synaptic responses (releases)
CViri    Coefficient of variation of the inter-response intervals (IRIs)
ρiri(t)    Probability distribution function of the IRIs
Ccr(t)    Connected conditional rate of the responses, defined as the probability of a release at time t given that there was one at time zero
τr    Time scale of the exponential correlations of the releases when the input is Poisson
νsat    Saturation frequency, defined as the input rate at which the output rate νr has reached half of its maximal value
K    Absolute measure of the response correlations, defined as the integral of the absolute value of Ccr(t)

Table 2.4: Parameters and functions used to model the synaptic response statistics
Chapter 3

Information transmission through synapses with STD
3.1 Introduction
In this chapter we address some of the fundamental issues of this thesis, which can be summarized in a simple general question: how do the biophysical properties of synapses, namely the unreliability of vesicle release and the effect of short-term depression (STD), affect the information transmission from the pre- to the post-synaptic terminal?
To answer this question we build upon the synaptic model presented in the previous chapter. In particular, we compute analytically the information transmitted by one synapse composed of a single contact (or synaptic specialization) with many vesicle docking sites, like the one described in section 2.2.1. Moreover, we make use of the statistical properties of the synaptic responses computed in section 2.3, where we obtained an analytical expression for the distribution of the inter-response intervals (IRIs), ρiri(∆|ν), when the input is a renewal process with exponential temporal correlations between the spike times. The distribution ρiri(∆|ν), along with the statistical quantities νr and CViri, will therefore be the starting point of our information calculation.
It is necessary to describe here the basic hypotheses and simplifications we make to address the problem of quantifying the information.
• Code: rate, timing, ... The first decision one takes when calculating information is which are the input and output signal variables or, in other words, what the code is. We will assume that the input signal is the pre-synaptic firing rate ν, which means that the pre-synaptic neuron uses a rate code to transmit information to the post-synaptic cell. For the output we will consider two different choices: the precise timing of the responses, conveyed by the sizes of the consecutive IRIs, and the number of responses in a certain time window. Obviously, the first of these codes always gives more information than the second because, besides the number of responses, it provides the particular temporal pattern. It is less obvious, however, that if one only wants to determine the rate of the process, knowing the temporal pattern still gives an advantage over knowing just the number of responses. In the general case, the estimation of the rate is always better with a temporal code than with a counting code; only in certain particular cases, like the Poisson process, is no information lost when the spike count is used to estimate the rate [van Vreeswijk, 2001]. For this reason, whenever technically possible, we will compute the information contained in the output under both codes.
• Information per response vs. information per unit time. Because we are interested in finding the values of the synaptic parameters that optimize information transmission, a pertinent question immediately arises: should one maximize the information per synaptic response or the information per second, commonly called the information rate? The latter would be relevant if the post-synaptic neuron needed to decode the input ν as rapidly as possible, without any metabolic constraint. On the contrary, if the cell is trying to extract the information efficiently in terms of energy consumption, regardless of the speed of the process, it would maximize the information per response. These two scenarios are clearly limiting situations unlikely to occur in the brain. The plausible optimization occurring in the cortex, if any, probably takes both aspects into account in a complex manner, lying somewhere between the two simple limits. Nevertheless, if one takes care not to reach unreasonable results, the analysis of these two approaches gives a valid approximation to the more realistic problem. For instance, an optimal release probability that became vanishingly small would be an absurd result, because it would lead to the meaningless conclusion that extracting the information contained in the output takes an infinitely long time, since the output rate would be almost zero. In conclusion, we will analyze both the information per response and per unit time, discussing in each case the significance of the optimization results.
• Stationarity vs. transients. The third assumption is that the analysis is carried out in a stationary situation, where the input firing rate ν is constant and the synapse is in a stationary state. This stationarity does not mean that the synaptic vesicle dynamics remain in equilibrium, but only that the statistical properties of the synapse (such as the probability that the docking sites are empty) do not depend on time. The reason for this assumption is simply to start with the easiest non-trivial case, in which the calculations are analytically tractable. Several previous works [Abbott et al., 1997, Tsodyks and Markram, 1997] have proposed that short-term depression might play an important role in the transmission of information over non-stationary periods, in particular during the transient following a sudden change of the input firing rate. The analysis of transient aspects, however, is left for future work.
• Temporal correlations. The last key point introduced in our analysis is the presence of temporal auto-correlations in the spike trains (see section 2.2.2). As originally proposed by Goldman et al. [1999], Goldman [2000], and as we will show in this chapter [de la Rocha et al., 2002], depressing synapses are especially well suited to transmit information from positively correlated spike trains. This will turn out to be a key point when looking for the optimal value of the vesicle recovery time τv: it will be shown that, unless there are positive correlations in the input, the optimal recovery time is zero, τopt = 0, which means that a non-depressing synapse is optimal (this is the case, for instance, when the input is Poisson). As soon as positive correlations are introduced, depressing synapses become advantageous over static ones, meaning τopt > 0. Because the correlations considered are positive and exponentially shaped, their existence implies an increase in the variability of the ISIs with respect to the Poisson case (see the expression for the conditional rate in eq. 2.2.6). For this reason, in this particular case increasing (decreasing) the variability of the input is equivalent to increasing (decreasing) the positive auto-correlations. Nevertheless, this equivalence breaks down when the variability is reduced below CV = 1: in that case decreasing the variability means introducing negative correlations (see eq. 2.2.6). From here on we will make this correspondence (correlations = variability) where pertinent, without further explanation.
The ability of STD to increase the information transmitted about the precise input spike pattern has been studied previously by different groups [Matveev and Wang, 2000a, Goldman, 2000, Fuhrmann et al., 2002]. Goldman [2000] computed the information about the input spike times contained in an IRI. He found that STD is advantageous when compared with an unreliable synapse in which the release probability is renormalized so that the fraction of successful spikes is the same in both cases. He also showed that when the input is endowed with positive correlations, the transmission capacity of a depressing synapse is enhanced. However, from his results one concludes that both release unreliability and short-term depression (produced, for example, by vesicle depletion) always decrease the amount of information transmitted in comparison with a reliable synapse (U = 1) with no depression (τv = 0). In other words, he shows that the existence of these two characteristics of synapses in the brain cannot be justified in terms of the optimization of information transmission. Fuhrmann et al. [2002] studied the information about the input spike times conveyed by a different variable: the magnitude of a single EPSP. Because of the non-linearities due to the presence of short-term depression and facilitation, they show that the amplitude of a single EPSP carries information about the times of several preceding spikes. They also find an optimal vesicle recovery time constant τopt > 0, which implies that depression increases the capacity of the channel, but the perfectly reliable synapse always performs better than the unreliable one, i.e. they find Uopt = 1.
3.2 Methods
We now introduce the different information measures we have computed, and briefly show how they are obtained and used. We are interested in establishing the information transmission capacity of synapses with short-term depression, and in answering whether the synaptic parameters can be tuned to optimize the transmission.
3.2.1 Fisher Information
The Fisher information J is a quantity widely used in the context of parameter estimation
[Blahut, 1988, Frieden, 1999]. It quantifies the encoding accuracy of a particular value
of the input variable. In our case, this variable is the input rate, s = ν. J is related to the
mean squared stimulus reconstruction error, σ²s, through the Cramer-Rao inequality [Blahut,
1988]

σ²s ≥ 1/J    (3.2.1)

Therefore, it provides a bound on how small the error in the estimation of ν can be.
When taking the temporal length of the IRIs as the output encoding variable, that is, the timing
code, we will refer to Jsr(ν|∆) as the Fisher information about the rate ν given one interval
between responses, ∆, and we will compute it using the usual definition [Blahut, 1988]

Jsr(ν|∆) = ∫ d∆ ρiri(∆|ν) ( ∂/∂ν log ρiri(∆|ν) )²    (3.2.2)
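As a sketch of how eq. 3.2.2 can be evaluated in practice (our illustration, not the thesis code: the exponential IRI density and the transfer function νr(ν) = Uν/(1 + Uντv) below are simplifying stand-ins), the integral is computed by quadrature and checked against the closed form J = (ν′r/νr)², which holds exactly for an exponential density:

```python
import numpy as np

def fisher_info(rho, nu, dnu=1e-4):
    """Quadrature evaluation of eq. 3.2.2: J(nu) = integral of rho(d|nu) * (d/dnu log rho)^2."""
    d = np.linspace(1e-6, 100.0, 400_001)   # IRI grid [s]
    dd = d[1] - d[0]
    p = rho(d, nu)
    # central finite difference of log rho with respect to nu
    dlogp = (np.log(rho(d, nu + dnu)) - np.log(rho(d, nu - dnu))) / (2.0 * dnu)
    return np.sum(p * dlogp**2) * dd

# stand-in model: exponential IRI density with rate nu_r(nu) = U*nu / (1 + U*nu*tau_v)
U, tau_v = 0.5, 0.5
nu_r = lambda nu: U * nu / (1.0 + U * nu * tau_v)
rho = lambda d, nu: nu_r(nu) * np.exp(-nu_r(nu) * d)

nu0 = 10.0
J_num = fisher_info(rho, nu0)
# for an exponential density the integral reduces to (nu_r'/nu_r)^2
h = 1e-5
J_exact = ((nu_r(nu0 + h) - nu_r(nu0 - h)) / (2.0 * h) / nu_r(nu0)) ** 2
```

Any smooth conditional density ρiri(∆|ν) could be plugged into `fisher_info` in the same way.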
When the output encoding is the number of responses within a time window T, n(T), one
needs to calculate the probability g(n(T)|ν) of n(T). This is impracticable for a general T,
so we will use the expression valid in the large-T limit [van Vreeswijk, 2001], in which
g(n(T)|ν) becomes a Gaussian distribution with mean νr T and variance FT νr T. FT is the
Fano factor of the responses, defined (for any kind of train) as

FT = Var[n(T)] / 〈n(T)〉    (3.2.3)

where 〈n(T)〉 and Var[n(T)] are the mean and the variance of the count of events in a
time window T. In this limit, the Fisher information of the rate ν given n(T) is, up to first
order [van Vreeswijk, 2001],

JT(ν|n) = (T / (FT νr)) (∂νr/∂ν)²    (3.2.4)
The first quotient on the r.h.s. of this equality represents the Fisher information about the
response rate νr given the number of releases, JT(νr|n). Because νr is a continuous function
of the spike rate ν (what we call the synaptic transfer function), the partial derivative transforms
JT(νr|n) → JT(ν|n). Nevertheless, the Fano factor is analytically tractable only if
there exists an expression for the response correlation function, Cr(t), which can be obtained
only if N0 = 1 (see section 2.2.2). Because of this restriction, the information contained in
the count n(T) will be calculated only in the particular instance where there is exactly one
vesicle release site (N0 = 1). In this case, the synaptic response is a renewal process, so for
large enough time windows T the Fano factor FT equals the squared coefficient of variation
of the IRIs, FT ≃ CV²iri (see e.g. [Gabbiani and Koch, 1998]). We will make use of this
property and write JT(ν|n) as a function of CV²iri.
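This renewal property is easy to verify by simulation. In the sketch below (our own illustration under the model assumptions of the previous chapter: N0 = 1, Poisson input, exponential vesicle recovery; parameter values are arbitrary), an IRI is the sum of a recovery time and the wait for the next releasing spike, and the Fano factor of counts in large windows is compared with CV²iri:

```python
import numpy as np

rng = np.random.default_rng(0)
U, nu, tau_v = 0.5, 10.0, 0.5        # release prob., input rate [Hz], recovery time [s]
n = 200_000

# IRI = vesicle recovery (exponential, mean tau_v) + wait for the next releasing
# spike (Poisson input thinned by U gives an exponential wait of mean 1/(U*nu))
iri = rng.exponential(tau_v, n) + rng.exponential(1.0 / (U * nu), n)
cv2 = iri.var() / iri.mean() ** 2    # CV^2 of the IRIs of the renewal response train

t = np.cumsum(iri)                   # response times
T = 100.0                            # counting window [s], large vs. mean IRI
counts = np.histogram(t, bins=np.arange(0.0, t[-1], T))[0]
fano = counts.var() / counts.mean()  # F_T, tends to CV^2_iri for large T
```

With these parameters CV²iri ≈ 0.59 < 1: depression regularizes the response train relative to a Poisson train.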
Finally, to obtain the Fisher information per synaptic response, we have to take the ratio
between JT(ν|n) and the average number of releases in the window T, which is νr T,
resulting in [de la Rocha et al., 2002]

Jsr(ν|n) ≡ JT(ν|n) / (νr T) = (1 / (CV²iri νr²)) (∂νr/∂ν)²    (3.2.5)
Besides the information per response, we will use the information per unit time which,
in the case of the count code, is obtained by dividing JT(ν|n) by T. Thus, we simply drop the
dependence of JT(ν|n) on T and rename it J(ν|n):

J(ν|n) ≡ JT(ν|n) / T = (1 / (CV²iri νr)) (∂νr/∂ν)²    (3.2.6)
For the timing code, we have shown the expression for the Fisher information given the size
of an IRI, Jsr(ν|∆) (eq. 3.2.2). To transform this expression into the information contained
in the response times per second, all we have to do is multiply by the rate of the responses,
νr, obtaining:

J(ν|∆) = νr Jsr(ν|∆) = νr ∫ d∆ ρiri(∆|ν) ( ∂/∂ν log ρiri(∆|ν) )²    (3.2.7)
3.2.2 Mutual Information
The second measure of information content we will employ is the mutual information,
also called Shannon information [Shannon, 1948, Blahut, 1988, Cover and Thomas, 1991].
An important difference with the Fisher information is that it is not defined for a fixed value
of ν, but for a whole input ensemble, defined by its probability distribution f(ν). The mutual
information not only quantifies the ability to discern which stimulus was presented
by observing the output response, but it also accounts for the capacity of the input-output
channel, which basically measures the number of distinguishable signals that can be
communicated through that channel. This means that in the hypothetical case of perfect
discrimination of the stimulus (in which case the Fisher information would be infinite), the
information would still be upper bounded by the entropy of the stimulus¹. This upper bound is
directly related to an essential characteristic of the mutual information: it
is not additive, that is, the observation of independent realizations of the response does not,
in general, increase the information linearly. By contrast, the Fisher information does
grow linearly with the number of independent output observations [Nadal, 2000, Brunel and
Nadal, 1998].
We use the following definition for the information about ν conveyed in an interval between
responses ∆ [Cover and Thomas, 1991]

I(∆; ν) = H(∆) − 〈H(∆|ν)〉ν    (3.2.8)

where the angular brackets denote an average over the input ensemble f(ν), and the entropies
H(∆) and 〈H(∆|ν)〉ν are defined as

H(∆) = −∫₀^∞ d∆ ρiri(∆) log₂ ρiri(∆)    (3.2.9)

〈H(∆|ν)〉ν = −〈 ∫₀^∞ d∆ ρiri(∆|ν) log₂ ρiri(∆|ν) 〉ν    (3.2.10)

and the IRI distribution ρiri(∆) is obtained by averaging the conditional distribution over the
input distribution:

ρiri(∆) = 〈 ρiri(∆|ν) 〉ν = ∫ dν f(ν) ρiri(∆|ν)    (3.2.11)
We will use an exponential function for the input rate distribution for two reasons: first,
because it has been measured that neurons in some areas follow such a distribution [Treves
et al., 1999]; second, because in the context of parameter estimation the prior distribution
is typically chosen as the maximum entropy distribution consistent with all the known
constraints on the parameter. In our case, the estimated parameter/stimulus is the input rate ν,
and the only constraint we impose is that it has a given finite average ν̄. With this restriction,
the function which maximizes the entropy is the exponential distribution:

f(ν) = e^{−ν/ν̄} / ν̄    (3.2.12)

¹ In the case that the input signal is a real random variable X with a certain distribution f(X), the entropy
H(X) is infinite, meaning that if the noise in the channel were zero, the information would be infinite.
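Equations 3.2.8 to 3.2.12 can be combined in a brute-force numerical sketch (illustrative only: the exponential IRI density below is a simplifying stand-in for the actual ρiri(∆|ν) of the model, and the parameter values are arbitrary):

```python
import numpy as np

U, tau_v, nu_bar = 0.5, 0.5, 10.0
nu = np.linspace(0.5, 120.0, 600)             # input-rate grid [Hz]
d = np.linspace(1e-4, 80.0, 8000)             # IRI grid [s]
dn, dd = nu[1] - nu[0], d[1] - d[0]

f = np.exp(-nu / nu_bar) / nu_bar             # exponential prior, eq. 3.2.12
f = f / (f.sum() * dn)                        # renormalize on the truncated grid

r = U * nu / (1.0 + U * nu * tau_v)           # stand-in transfer function nu_r(nu)
rho_c = r[:, None] * np.exp(-r[:, None] * d)  # stand-in rho_iri(d|nu), exponential
rho = (f[:, None] * rho_c).sum(0) * dn        # marginal rho_iri(d), eq. 3.2.11

H_marg = -np.sum(rho * np.log2(rho)) * dd                        # eq. 3.2.9
H_cond = -np.sum(f[:, None] * rho_c * np.log2(rho_c)) * dn * dd  # eq. 3.2.10
I_mut = H_marg - H_cond                       # I(d; nu) in bits, eq. 3.2.8
```

For the actual model, ρiri(∆|ν) would be replaced by the conditional IRI density derived in chapter 2, and the integrals carried out with the adaptive methods mentioned in section 3.2.5.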
The information per unit time, that is, the information rate R(∆; ν), is defined as

R(∆; ν) = νr I(∆; ν)    (3.2.13)

In both approaches, Fisher and mutual information, we will look for the values of the
synaptic parameters τv and U which maximize the information. We will denote these optimal
values by τopt and Uopt. We will first study how these optimal values depend both on the
input parameters [ν, CV, τc] and on the other synaptic parameters, and then make a quantitative
comparison of their values with those in the experimental literature.
3.2.3 Mutual Information in a population of synapses
Besides computing the information transmitted by a single synaptic contact, we will
consider the information conveyed in the responses produced by a population of synapses.
These synapses are activated independently (their activity is not correlated) by
a population of pre-synaptic neurons which fire with the same statistics, that is, the same ν,
CV and τc. As illustrated in figure 2.4 in the previous chapter, we model the response to this
multi-synaptic stimulus as an array of IRIs, {∆i}_{i=1}^{M}. Since each synapse is stimulated by a
different spike train (though with the same statistics), given the input rate ν the distribution
of {∆i}_{i=1}^{M} factorizes as follows

ρiri({∆i}|ν) = ∏_{i=1}^{M} ρiri(∆i|ν)    (3.2.14)

When this conditional independence holds, there exists an explicit relationship between the
Fisher information about ν given {∆i}, J({∆i}|ν), and the mutual information between
the variable ν and the M-dimensional {∆i}, which is valid in the large-M limit [Rissanen,
1996, Brunel and Nadal, 1998]

I({∆i}; ν) ≃ ∫ dν f(ν) log(1/f(ν)) − ∫ dν f(ν) log( √(2πe / J({∆i}|ν)) )    (3.2.15)

Due to the independence of the individual responses ∆i given ν, J({∆i}|ν) is just the
sum of the individual informations which, for large M, can be replaced by the average over
the distribution D(U, N0, τv) of the synaptic parameters

J({∆i}|ν) = ∑_{i=1}^{M} Ji(∆i|ν) ≃ M ∑_{N0} ∫∫ dU dτv D(U, N0, τv) J(∆|ν)    (3.2.16)
Using this relation between the Fisher information and the Shannon information of a population,
we are able to compute the latter for a general input p.d.f. f(ν) and a general synaptic
distribution D(U, N0, τv).

If a neuron could modify the properties of the synapses impinging onto its dendrites, and
if its purpose were to read the rate ν of a population of afferent stimuli following a certain
distribution f(ν), the optimal way to achieve this goal would be to change, in a coordinated
manner, the physiological values of the parameters [U, N0, τv] of all the pre-synaptic terminals
contacting its dendritic tree, according to the optimal distribution Dopt(U, N0, τv).
This function is defined as the p.d.f. which maximizes the information about the input rate
conveyed by the times of the responses produced by each contact.

We would like to think that the CA3-CA1 synapses, from which we have partially modeled
this distribution, are optimized in this sense. Thus, the answer to the question of how the
values of the recovery time constant τv are distributed, and how they correlate with N0 and U,
would arise if we could optimize D(U, N0, τv) under the constraints imposed by the experimental
findings [Dobrunz and Stevens, 1997, Murthy et al., 1997, 2001]. This is, however, not a
practicable calculation if one tries to accomplish it analytically. We will
therefore compute an approximate solution to this problem by means of a different optimization
criterion. Instead of optimizing globally across the population, the optimization
will be performed locally; the globally optimal solution may thus be lost with this procedure. Let
us write the complete synaptic distribution as follows

D(U, N0, τv) = f(N0) R(U|N0) P(τv|N0, U)    (3.2.17)
We have mentioned before that at a single contact there exists an optimal value τopt of the
recovery time of the synapse which maximizes the mutual information I(ν; ∆). This τopt
depends first on the input parameters CV and τc and on the rate distribution f(ν), but also
on the other two synaptic parameters, U and N0. Thus, we hypothesize that every synaptic
bouton (defined by the values of U and N0) individually adjusts the velocity of its synaptic
machinery by modifying τv so that it matches the optimal value τopt(U, N0). In this case, the
population distribution D(U, N0, τv) would take the form

D(U, N0, τv) = f(N0) R(U|N0) δ(τv − τopt(N0, U))    (3.2.18)

and we can obtain the marginal distribution of τv by simply summing and integrating over N0
and U

g(τv) = ∑_{N0} ∫ dU D(U, N0, τv) = ∑_{N0} ∫ dU f(N0) R(U|N0) δ(τv − τopt(N0, U))    (3.2.19)

It is important to remark here that this procedure does not lead, in general, to the
optimal solution Dopt(U, N0, τv). By assuming that each contact tunes the parameter τv
individually, we prevent the ensemble of synapses from solving the problem in a cooperative
manner (which would be the case if D(U, N0, τv) were optimized as a whole).
3.2.4 Optimization with Metabolic considerations in the recovery rate
When optimizing the mutual information by adjusting a number of biophysical parameters,
or a certain distribution of them, one must not ignore that the mechanism underlying
information transmission consumes energy. An organism must have an optimal representation
of the sensory world, but at what price? The final goal of achieving such an efficient
encoding is the organism's survival. It would be a disadvantage for the animal to have the
most efficient encoding if, for instance, keeping a certain parameter at its optimal value
requires more energy than the extra metabolic gain it provides (e.g. obtained by procuring food
with the optimized sensory mechanism). Another example concerns the rates at which neurons
fire. If, instead of emitting APs at 10-50 Hz, they fired at 1000 Hz, perhaps the temporal
resolution of the visual system could be enhanced. However, firing at those rates our neurons
would consume many more ATP molecules, and the benefit of having such super-fast
vision would not make the elevated rates worthwhile.

In conclusion, when performing a particular optimization, one must consider the efficiency of
such a maximization in terms of metabolic consumption [Levy and Baxter, 1996, Baddeley
et al., 1997, Laughlin et al., 1998, Balasubramanian et al., 2001, de Polavieja, 2002]. The
simplest way to do this is to maximize the mutual information per unit energy, Î, which can be
defined as [Balasubramanian et al., 2001, de Polavieja, 2002]

Î = I(X; Y) / E(~p)    (3.2.20)
where I(X; Y) is the information conveyed by Y about X, and E(~p) is the energy required
in the transmission X → Y (which may include the production of the signal X, its transformation
into Y, the reading of the response Y, etc.). One must therefore propose this energy
function E(~p), which depends on the parameters ~p to be optimized. If the details of the
metabolic consumption are known, the energy function can be derived from the biophysical
mechanisms taking place in the transmission. If, on the contrary, this dependence is unknown,
one needs to make an ansatz about E(~p). The simplest one is that the energy scales
linearly with energy-consuming variables such as the firing rate, the vesicle recovery rate,
the release probability, etc. [Levy and Baxter, 1996, Baddeley et al., 1997, Balasubramanian
et al., 2001, de Polavieja, 2002]:

E(~p) = β r(~p)    (3.2.21)

where β is a constant whose units are energy divided by the units of the variable r(~p), which
depends on the parameters ~p. In this case, the value of ~p which maximizes Î does not depend
on β, because β is just a proportionality factor.
Let us introduce an example of efficient metabolic consumption for the synaptic channel
we are interested in. Here the input signal X is always the pre-synaptic rate ν. The parameters
to be optimized are the recovery time constant τv and the release probability U, i.e.
~p = {τv, U}. If we consider now that the output code is given by the times of the responses
in a given unit time, ∆t, we can think of an energy function which depends linearly on the
release rate r = νr. In other words, it seems reasonable to state that the energy needed to
generate the output variable (the times of the responses in a time window) depends on the
mean number of responses per unit time, νr, which means that

E(τv, U) = β νr(τv, U)    (3.2.22)

In this way, the information per unit energy reads

Î(∆t; ν) = I(∆t; ν) / (β νr)    (3.2.23)
We have previously introduced the mutual information between the input rate and an inter-response
interval, I(∆; ν), and the information rate R(∆; ν) = νr I(∆; ν). Since, as mentioned
in the previous section, the Shannon information is not additive [Cover and Thomas,
1991], the information conveyed by the responses lying in a time window does not equal,
in general, the information of a single response times the number of responses occurring in that
window. When the responses are conditionally independent², the information rate R(∆; ν)
is always larger than or equal to the information of the responses occurring in a unit time,

I(∆t; ν) ≤ R(∆; ν) = νr I(∆; ν)    (3.2.24)

which, using eq. 3.2.23, leads to

Î(∆t; ν) ≤ I(∆; ν)/β    (3.2.25)

Therefore, despite the inequality, one may consider that optimizing the information per response,
I(∆; ν), is approximately³ equivalent to maximizing the information of the responses
produced in a small time window while accounting for the metabolic consumption due to the
generation of these responses, i.e. Î(∆t; ν).

² This means that given the input signal ν, the joint probability of many responses factorizes into the product
of individual probabilities. If this condition does not hold, the information given by several responses together
can be larger than the sum of the information each provides separately. This is known as synergy.
Another example refers to the cost of the vesicle recovery rate. Suppose that structural
constraints allow synapses only a small number of docking sites. Then, in order not to lose
the information carried by incoming spikes, after releasing the vesicles that were ready for
release the synapse should replenish the empty docking sites quickly. It might even be the
case that the optimal value of the recovery rate is infinite, that is, the mean time elapsed
between release and recovery is zero (so that the synapse is never "empty"). However, it
seems reasonable to state that the number of ATP molecules needed to refill an empty site at
an infinite rate should be much larger than in the case of a slower recovery. This implies not
only that the consumption rate (the number of ATP molecules consumed per unit time) is
higher, but also that each single recovery costs more the faster it occurs. This can be formalized
by assuming that the energy function E(τv) grows monotonically with the recovery rate 1/τv.
In particular, we assume

E(τv) = β / τv    (3.2.26)

In this case, the information about the input rate per unit energy and per response reads

Î(∆; ν) = I(∆; ν) / E(τv) = I(∆; ν) τv / β    (3.2.27)
In the results section it will be shown that accounting for the metabolic consumption in this
way favors larger optimal values of τv, because of the higher cost of a fast replenishment.
In addition to the mutual information per unit energy, we will also compute the Fisher
information per unit energy, defined as the ratio J/E(~p).
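The direction of this shift can be illustrated with a toy single-peaked information curve (a stand-in shape, not the actual I(∆; ν) of the model): dividing by E(τv) = β/τv, i.e. multiplying by τv, moves the maximum to a larger recovery time constant.

```python
import numpy as np

tau = np.linspace(0.05, 10.0, 2000)          # candidate recovery times tau_v [s]
info = tau * np.exp(-tau)                    # toy I(d; nu) curve, peaks at tau_v = 1 s
info_per_energy = info * tau                 # I_hat ~ I * tau_v (eq. 3.2.27, beta omitted)

tau_opt = tau[np.argmax(info)]               # ~ 1 s
tau_opt_energy = tau[np.argmax(info_per_energy)]  # ~ 2 s: larger tau_v becomes optimal
```

Whatever the exact shape of the information curve, multiplying by the increasing factor τv can only move the maximizer to the right, which is the effect described above.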
3.2.5 Numerical methods
The specific way in which the information measures have been computed varies from
one to another. As throughout most of this thesis, the calculation has been carried out
analytically whenever tractable. The Fisher information using the count code has been
computed analytically, because its calculation does not require performing any integral and
all the necessary variables have been obtained in closed form. However, in the cases of the Fisher
information based on the synaptic event timing, and of the mutual information, the integrals
present in their definitions do not have a primitive⁴, so they have been carried out numerically
using standard methods [Press et al., 2002]. To find the optimal values of the parameters
that maximize a certain quantity, numerical algorithms have been used (except when explicitly
indicated), because the analytics rapidly became very tedious. In this particular case, the
numerical approach was much faster while giving almost the same insight into the problem.

³ Because the response rates νr are small (∼ 1-5 Hz) and the relevant time window is shorter than one
second, the number of responses within it is just a few. This makes the additive approximation a better
estimate [Nadal, 2000].
To obtain the marginal of the optimal distribution of synaptic parameters defined in equation
3.2.19, a Monte Carlo simulation was performed, generating values of U and N0 using
the analytical expression of the marginal D(U, N0). For each sample the optimal τv was
computed, and a histogram of optimal values was obtained.
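A minimal sketch of that Monte Carlo procedure (the marginals for U and N0 and the functional form of τopt(N0, U) below are illustrative placeholders, not the experimentally constrained distributions used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100_000

# placeholder marginals standing in for the experimentally motivated D(U, N0)
N0 = rng.integers(1, 6, size=M)      # number of docking sites per bouton
U = rng.beta(2.0, 2.0, size=M)       # release probability in (0, 1)

def tau_opt(N0, U, nu_bar=10.0):
    # placeholder shape for the numerically obtained tau_opt(N0, U); NOT the thesis result
    return 0.657 / (U * nu_bar * N0)

tau = tau_opt(N0, U)                 # one locally optimal tau_v per sampled synapse
g, edges = np.histogram(tau, bins=50, density=True)   # marginal g(tau_v), eq. 3.2.19
```

In the actual calculation, `tau_opt` is the value found by numerically maximizing I(ν; ∆) for each sampled (U, N0) pair.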
3.3 Results
In this section we start by analyzing how the information depends on the input
frequency ν and on the input rate distribution f(ν). Afterwards, we address the
question of how the information depends on the biophysical parameters of the synapse
[τv, U, N0], and whether or not it is possible to find optimal values for them which maximize
the information.
3.3.1 Rate dependence of information measures
3.3.1.1 Dependence of the Fisher information on ν
We first study which input rates are best represented by the train of post-synaptic responses,
that is, which rates would be reconstructed with the minimal error. This is expressed
by the Fisher information which, as we mentioned, sets a lower bound on the mean squared
error of the best estimator. We will assume from now on that an estimator saturating
this bound exists, so that we can use the Cramer-Rao bound as an equality: σ = 1/√J.
Because we are going to compare the error σ for different values of the rate ν, to make this
comparison adequate we compute the relative error ε, defined as the ratio between the error
and the signal:

ε = σ/ν = 1/(ν √J)    (3.3.1)
Depending on whether we refer to the Fisher information per response or per unit time,
we will express the relative error as εr or εt, respectively.

⁴ Even in the simplest case in which the input is Poisson, the conditional distribution of the responses
ρiri(∆|ν) is the sum of two exponentials [de la Rocha et al., 2002]. Thus, taking the logarithm of this sum of
exponentials immediately leads to an integrand with no primitive.
What is the expected behavior of εr and εt as a function of ν? First, both of them
must be strongly influenced by the most obvious effect of synaptic depression: saturation.
As we discussed in the previous chapter, whenever the input rate is very high (ν ≫ 1/(Uτv)),
or the vesicle recovery process is very slow (τv ≫ 1/(Uν)), a large fraction of
incoming spikes find the synaptic bouton depleted of vesicles, and therefore fail
to trigger any response detectable by the post-synaptic terminal⁵. In this saturating regime,
more precisely defined by ν τv U ≫ 1 (see section 2.3), changes in ν hardly produce
any change in the statistics of the response train, and therefore the estimation of ν will be poor.
Thus, beyond some point, if we keep increasing the rate ν, the relative error ε (no matter
whether per response or per unit time) has to increase, and it tends to infinity as we approach
the limit ν → ∞.
Now, what happens when we approach the opposite limit, ν → 0? The first obvious
consequence is that the number of synaptic responses within a fixed time window tends to
zero; that is, the post-synaptic cell has less and less information about what occurs in
the afferent fiber. Thus, the Fisher information per second must tend to zero when ν → 0,
which means that εt → ∞. In conclusion, because the Fisher information is continuous in ν,
there exists a non-trivial minimum of εt which is achieved at a particular value of ν, which
we shall denote by νmin.
In fig. 3.1 (bottom) the relative error εt is plotted as a function of ν for several values
of the input CVisi. Both the εt obtained from J(ν|∆) (timing code, solid lines) and from
J(ν|n) (counting code, dashed lines) are included. Several things should be noticed: i) In
all cases there is a minimum at a value near νmin ∼ 5 Hz. ii) Comparing the codes,
timing vs. counting (for the same value of CVisi), although the error is always smaller for
the timing code, for small rates (ν < 10 Hz) the solid and dashed lines almost
superimpose, meaning that counting responses provides an estimation as good as observing
their timing. iii) Different input CVisi's have slightly different νmin (see inset of fig. 3.1); more
precisely, the larger the input CVisi, the larger νmin. iv) In the case of the timing code, however,
εt is very similar for all the values of CVisi explored in the ν range shown in the figure.
Unlike this insensitivity to the ISI variability, in the case of the counting code, for ν
beyond νmin, the larger the CVisi the worse the estimation.
⁵ To be precise, although closely related to the loss of a large fraction of spikes (which find the synapse
empty of vesicles), it has been shown in section 2.2.3 that the saturation level is not equivalent to the fraction of
release failures. Certainly the latter is a necessary condition to achieve saturation, but it is not sufficient (see the
example shown in figure 2.7).

Figure 3.1: Relative reconstruction error ε per response (top) and per second (bottom) versus
input rate ν. The relative error ε is derived from the Fisher information (see eq. 3.3.1) given the
number of responses (dashed lines) or the times of the responses (solid lines). The legend in
the top plot applies to both plots. The inset is a magnification of the minima in the bottom plot.
Parameters: τc = 50 ms, τv = 500 ms, U = 0.5 and N0 = 1.

Because both codes, timing and counting, reach the minimum at the same value νmin, we
can easily obtain its analytical formula using the simple expression derived for J(ν|n) (eq.
3.2.6), which we write down again here:

J(ν|n) = (1 / (CV²iri νr)) (∂νr/∂ν)²    (3.3.2)
which yields an expression for εt which reads

εt = (CViri √νr / ν) (∂ν/∂νr)    (3.3.3)

For a Poisson input, it takes the following form as a function of the input and the
synaptic parameters

εt = √( (τv² ν² U² + 1)(1 + τv ν U) / (Uν) )    (3.3.4)

and its minimum occurs at

νmin = 0.657 / (Uτv)    (3.3.5)

Let us recall here that the saturation frequency νsat defined in the previous chapter, when
the input is Poisson (eq. 2.3.5), is exactly νsat = 1/(Uτv). Thus, we can rewrite the rate at which
the minimum is achieved as νmin = 0.657 νsat. The interpretation of this dependency is
straightforward: the reconstruction error per second εt does not decrease monotonically as
the input rate increases (due to the occurrence of more responses per second), because the
response rate νr rapidly saturates (see the transfer function νr(ν) in figure 2.6). For that
reason, the later the transfer function saturates (i.e. the larger νsat), the later εt will start
to increase because of this saturation, implying a larger value of νmin. Furthermore, as we
saw in the previous chapter, introducing a CVisi > 1 in the input pushes the saturation rate
νsat towards higher values (see eq. 2.3.10). Consequently, νmin takes larger values the larger
the input CVisi (see inset of fig. 3.1). We have checked numerically that the relation
νmin = 0.657 νsat also holds approximately when CVisi > 1.
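Equations 3.3.4 and 3.3.5 can be cross-checked by minimizing εt numerically (parameter values are illustrative):

```python
import numpy as np

U, tau_v = 0.5, 0.5                      # release probability, recovery time [s]
nu = np.linspace(0.01, 50.0, 500_000)    # input-rate grid [Hz]

# relative reconstruction error per unit time for a Poisson input, eq. 3.3.4
eps_t = np.sqrt((tau_v**2 * nu**2 * U**2 + 1.0) * (1.0 + tau_v * nu * U) / (U * nu))

nu_min = nu[np.argmin(eps_t)]            # eq. 3.3.5 predicts 0.657/(U*tau_v) = 2.628 Hz here
```

The same grid search also confirms the claim below that the relative error never drops under one for a single synapse.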
On the contrary, figure 3.1 (top) shows that the relative reconstruction error per response,
εr, does not go to infinity as ν goes to zero. This implies that if, in the limit ν → 0, the
post-synaptic cell observes a response, that response conveys some information with which to
make an estimation. Although the reconstruction error in this limit is minimal, it is not a
biophysically relevant situation, because there are barely any responses and it would take a
huge time to make any estimation.
Looking at the quantitative side of the estimation, we find that for all values of ν the
relative error is always larger than one. This means that the best we can do is to estimate
the input rate ν with an error σ which is at least as large as ν itself. This is obviously a very
poor estimation. Things get even worse beyond the saturation rate νsat, where the error is
several times the signal ν. We must remember, however, that this is an estimation made by
observing one response at a single synapse. If the post-synaptic neuron
receives N afferent inputs providing the same signal ν, and the spike trains coming
along the pre-synaptic fibers are independent, the total Fisher information is the sum
of N terms like the one we are analyzing. Hence, the error ε is proportional to 1/√N,
and it can be made arbitrarily small by pooling enough independent synaptic responses.
Concerning the value of νmin, for the parameter values chosen in figure 3.1, it ranges
between 2-5 Hz for the different input CVisi's. These low values of νmin, apparently closer
to the range of spontaneous cortical activity than to the range of rates associated with
the performance of a cognitive task, are due to the inverse relation with the recovery time
constant τv, whose value in cortical pyramidal neurons varies between 0.4-1.5 s [Markram,
1997, Varela et al., 1997, Finnerty et al., 1999, Varela et al., 1999, Petersen, 2002]. For
a Poisson input, since the release probability U runs from 0.1 to 0.95 [Rosenmund et al.,
1993, Hessler et al., 1993, Markram, 1997, Murthy et al., 1997, 2001], νmin ranges from
0.657/(0.95 · 1.5) ∼ 0.5 Hz to 0.657/(0.1 · 0.5) ∼ 13 Hz.
To conclude, in the case of the estimation error per second, there exists a finite, non-zero
value of the input rate, νmin, at which the minimal reconstruction error is achieved. This value
is directly proportional to the saturation frequency defined in chapter 2, so it is sensitive to
the introduction of auto-correlations in the input through increases in the value of CVisi.
Besides, the counting and the timing codes provide similar performance before we enter the
saturating regime. Once the estimation becomes severely affected by saturation, the timing
code seems to be less impaired than the counting code.

Figure 3.2: Information I(∆; ν) (top) and information rate R(∆; ν) = νr I(∆; ν) (bottom)
versus mean input rate ν̄, for an exponential input distribution f(ν) = e^{−ν/ν̄}/ν̄. Legend
values (input CVisi) apply to both plots. Parameters: τc = 20 ms, τv = 500 ms, U = 0.5,
N0 = 1.
3.3.1.2 Dependence of the Mutual Information on f(ν)
We now study the question of which input ensembles are best transmitted across
an unreliable depressing synapse. In other words, how does the Shannon information depend
on, for example, the mean input rate ν̄? How does the choice of the input distribution f(ν)
affect the information transmitted? As opposed to the Fisher information, the computation
of the mutual information requires defining the probability distribution of the stimulus. We
take, to start with, an exponential function f(ν) = e^{−ν/ν̄}/ν̄, and later we will explore other
possible choices. A lower cut-off νinf was introduced in the distribution f(ν) to account for the
experimental fact that neurons are seldom completely "quiet"; in general they fire at least
at a low spontaneous rate of around 1-2 Hz [Sanseverino et al., 1973, Abeles,
1982, Legendy and Salcman, 1985]. Thus, the p.d.f. of the input rate reads:

f(ν) = e^{−(ν−νinf)/ν̄} / ν̄    (3.3.6)
if ν > νinf, and zero otherwise. This implies that the mean input rate is no longer the
parameter ν̄, but ν̄ + νinf. However, since we will take a rather small νinf ∼ 1 Hz, we will
keep calling ν̄ the mean rate.
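The shift of the mean is a one-line numerical check (ν̄ = 10 Hz and νinf = 1 Hz are illustrative values):

```python
import numpy as np

nu_bar, nu_inf = 10.0, 1.0
nu = np.linspace(nu_inf, nu_inf + 40.0 * nu_bar, 400_001)   # rate grid [Hz]
dn = nu[1] - nu[0]

f = np.exp(-(nu - nu_inf) / nu_bar) / nu_bar   # shifted exponential, eq. 3.3.6
norm = np.sum(f) * dn                          # close to 1
mean = np.sum(nu * f) * dn                     # close to nu_bar + nu_inf = 11 Hz, not nu_bar
```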
Figure 3.2 (top plot) answers the question of which mean input rate ν̄ optimally transmits
information in one synaptic IRI when the input has fixed autocorrelations,
i.e. constant CVisi and τc. The four values of the input CVisi displayed show a tuned
curve with a maximum at different rates, ranging between 10-20 Hz. The maximum at a
non-trivial value of the mean rate can be explained by understanding why the information
drops to zero in both limits, ν̄ → 0 and ν̄ → ∞. When the mean rate goes to zero, the distribution
converges, in the sense of distributions, to the Dirac function δ(ν − νinf). What does
this imply? A Dirac delta p.d.f. indicates that the only stimulus with non-zero probability is
ν = νinf. This in turn means that the number of input messages is one, because the channel
always transmits the same frequency. The information is therefore zero, because the
input rate is known prior to any observation.

In the opposite limit, ν̄ → ∞, the information approaches zero for a different reason: saturation. When ν̄
is very large (or in general Uντv ≫ 1, see section 2.2.3) the synapse saturates and the
rate of responses is rather insensitive to changes in the input. In section 3.3.2.2, it will be
shown that the two entropies H(∆) and 〈H(∆|ν)〉ν converge to the same function of τv as
saturation becomes more and more prominent. Although increasing the mean input rate ν̄
does not imply that all input rates saturate, it makes the fraction of poorly transmitted
input messages ν larger. As a consequence, I(∆; ν) tends to zero as ν̄ increases. As
a result of these behaviors in the limits ν̄ → 0 and ν̄ → ∞, I(∆; ν) exhibits a maximum.
In other words, given a synapse defined by the parameters U and τv, there exists an optimal
exponential input distribution, namely the one whose mean value ν̄ maximizes I(∆; ν) in
the way just explained.
This optimization is possible regardless of the input correlations; it occurs for any values of CVisi and τc. The effects of introducing correlations, namely the enhancement and shift of the maximum, are due to the displacement of the saturation frequency. This can be observed in figure 3.2, where higher input CVisi's produce a maximum at a higher ν̄. The bottom plot of this figure shows that the information rate, R(∆;ν), displays the same kind of behavior, although the mean rate at which the maximum is achieved is now larger (∼ 15–30 Hz). We now want to test whether this tuning in ν̄ can be reproduced for other choices of input distribution. Figure 3.3 shows three different input p.d.f.s f(ν), namely the exponential and two Gamma distributions of order 3 and 10, defined as [Feller, 1950]
f(ν) = (p/ν̄)^p ν^{p−1} e^{−pν/ν̄} / (p − 1)!    (p = 1, 3, 10)    (3.3.7)

where the order of the gamma is the parameter p. Notice that the exponential distribution is
64 Chapter 3: Information transmission through synapses with STD
Figure 3.3: Three input Gamma distributions f(ν) (order p = 1, 3 and 10), constructed in such a way that all have mean ν̄ = 10 Hz (see expression 3.3.7).
the particular case p = 1. These gamma functions have been constructed in such a way that the mean rate does not depend on p, so that it is always equal to ν̄.
Figure 3.4 shows the information I(∆;ν) as a function of ν̄ for the three distributions, p = 1, 3 and 10. In the three cases the qualitative behavior is the same: all of them show an optimal value of ν̄, although it differs in each case: the higher the order, the lower the optimal mean rate. Independently of the tuning, the overall scale decreases severely as the order p of the Gamma increases. This happens because the input entropy H(ν) associated with each distribution decreases as p increases. An indirect proof is obtained by looking at the input variance Var[ν], which is usually proportional to the entropy. For the particular normalization chosen for the Gamma distributions (see eq. 3.3.7) it reads⁶ Var[ν] = ν̄²/p. This means that as p increases, the range of possible input rates decreases; that is, the number of distinguishable input messages decreases, making I(∆;ν) smaller.
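The normalization of eq. 3.3.7 and the variance Var[ν] = ν̄²/p can be verified numerically (a sketch using NumPy's gamma sampler: shape p and scale ν̄/p reproduce the density of eq. 3.3.7):

```python
import numpy as np

rng = np.random.default_rng(1)
nu_bar = 10.0                       # common mean of the three distributions, in Hz

for p in (1, 3, 10):
    # Gamma of order p with mean nu_bar: shape p, scale nu_bar/p, which is
    # f(nu) = (p/nu_bar)^p nu^(p-1) exp(-p nu/nu_bar)/(p-1)!.
    nu = rng.gamma(shape=p, scale=nu_bar / p, size=500_000)
    print(p, nu.mean(), nu.var())   # mean stays ~10 Hz; variance shrinks as nu_bar^2/p
```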
To summarize, whatever the choice of f(ν) within this family of distributions, the existence of an optimal mean rate ν̄ seems to be a robust effect. The value of the optimal ν̄
⁶If we had made the standard choice for the Gamma distributions, i.e.

f(ν) = a^p e^{−aν} ν^{p−1} / (p − 1)!    (3.3.8)

the variance would be proportional to the order p: Var[ν] = p/a². However, in this case the mean of the distributions would not be the same for different orders p, that is, ⟨ν⟩ = p/a, and increasing the order p would produce a decrease of I(∆;ν) because of the increase of ⟨ν⟩.
Figure 3.4: Information I(∆;ν) versus the mean input rate ν̄, for three different input distributions f(ν), namely three Gamma functions of order p = 1, 3 and 10 (see eq. 3.3.7). Inset values apply to all plots. Parameters: τc = 50 ms, τv = 500 ms, U = 0.5, N0 = 1.
depends on the distribution chosen and on the correlations of the input, but it ranges between 5–20 Hz. When looking at R(∆;ν), the tuning to an optimal ν̄ is even more prominent and the maximum occurs at higher values of ν̄.
3.3.2 Optimization of the recovery time constant τv

We now focus our attention on determining how the biophysical synaptic variable τv influences the way information is transmitted to the post-synaptic cell. Our aim is to check the dependence of the different information measures on τv, and to compute the value τopt that maximizes the observed quantity.
3.3.2.1 Optimizing τv with the Fisher Information
We start by analyzing the Fisher information as a function of τv. As done before, we will distinguish the two output coding strategies (counting and timing), and the Fisher information per response and per unit time.
The dependence of the reconstruction error on τv is dictated again by the two major effects of depression on the statistics of the response trains (see section 2.3):

i) Reduction of the variability CViri, by means of the elimination of very short IRIs, resulting in a distribution of the IRIs closer to a Gamma function of order two (which has CViri = 1/√2) than to an exponential (with CViri = 1).

ii) Saturation of the response rate.
The first enhances the information, because the estimation of the rate (or equivalently of the mean ISI) is easier in a regular train than in a very variable one. If we consider the limiting case in which the variability is zero, i.e. a periodic train with CViri = 0, the estimation error committed by observing one IRI would be zero, because all the intervals equal the mean IRI. In this case the Fisher information would be infinite. This is precisely expressed by the dependence of Jsr(ν|n) and J(ν|n) on the inverse of CV²iri, shown in equations 3.2.5 and 3.2.6.

Contrariwise, saturation diminishes the information, because eventually the output rate becomes insensitive to changes in the input signal ν. This is also shown explicitly in the expressions for Jsr(ν|n) and J(ν|n) by the squared derivative of the transfer function, (∂νr/∂ν)² (see equations 3.2.5 and 3.2.6), which decays rapidly to zero as the synapse starts saturating.
The synaptic parameter τv tunes the strength of depression and consequently the level of saturation of the system. In the case of Poisson input (where the saturation frequency is defined as νsat = 1/(Uτv), see section 2.3), for a fixed frequency ν a saturation recovery time can be defined, τsat ≡ 1/(Uν), as the value of τv beyond which the system starts to saturate. Qualitatively, if the replenishment of vesicles takes a time much smaller than the typical diluted ISI⁷, τv ≪ 1/(Uν) = τsat, the fraction of spikes that do not trigger any response because of vesicle depletion is negligible, so that changes in ν may be observed in νr. In the limit τv → 0 the synapse does not show any depression at all; we will call the model in this limit a static synapse. On the other hand, when τv ≫ 1/(Uν) = τsat, the synapse shows strong depression and saturation is prominent. Therefore it seems that, regarding the consequences of saturation on the information transfer alone, the optimal τv for estimating the rate ν will be zero, that is, a static synapse.
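The saturating transfer function can be reproduced with a minimal event-driven simulation of one release site. The closed form νr = Uν/(1 + Uντv) is the standard single-site result for Poisson input, consistent with νsat = 1/(Uτv) above, and is assumed here to coincide with eq. 2.3.8; the code is an illustrative sketch:

```python
import numpy as np

def release_rate(nu, U, tau_v, T, rng):
    """Simulate one release site driven by a Poisson train of rate nu (Hz).

    A spike releases with probability U if a vesicle is docked; after a
    release the site refills after an exponential time of mean tau_v.
    Returns the measured rate of releases over T seconds.
    """
    t, refilled_at, releases = 0.0, 0.0, 0
    while t < T:
        t += rng.exponential(1.0 / nu)          # next Poisson spike
        if t >= refilled_at and rng.random() < U:
            releases += 1
            refilled_at = t + rng.exponential(tau_v)
    return releases / T

rng = np.random.default_rng(2)
U, tau_v = 0.5, 0.5
for nu in (5.0, 10.0, 40.0):
    predicted = U * nu / (1.0 + U * nu * tau_v)  # saturates at 1/tau_v = 2 Hz
    print(nu, release_rate(nu, U, tau_v, T=2000.0, rng=rng), predicted)
```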
However, as illustrated in the previous chapter, the plot of CViri versus τv shows that, before saturation starts to play a dominant role, the variability of the output train is severely
⁷As we described in section 2.2.3, the complete 3-parameter synaptic model can be reduced to a 2-parameter model by absorbing the parameter U into the input. The resulting input is basically a diluted version of the original input, with diluted rate νd = Uν.
Figure 3.5: Optimization of the recovery time constant τv regarding the Fisher information per response Jsr. Top: solid lines represent Jsr(ν|∆) (the Fisher information given the length of an IRI) versus the recovery time τv; dashed lines represent Jsr(ν|n) (counting code); dotted lines represent a counting code when the output is a “forced” Poisson (see text). Middle: squared derivative of the transfer function νr(ν) on a logarithmic scale. Bottom: solid lines represent the coefficient of variation of the IRIs, CViri; dotted lines represent the inverse squared coefficient, 1/CV²iri (which is the term that appears in the Fisher information). Colors of the inset apply to all plots and all line types. Parameters: ν = 10 Hz, τc = 50 ms, U = 0.5, N0 = 1.
reduced with respect to the input spike train. For Poisson input, CViri reaches its minimum value CViri = 1/√2 and then tends asymptotically to one as τv is increased (fig. 2.6). The same qualitative behavior occurs when the input has correlations of short range (τc ≤ τv), namely CViri falls to a minimum and then increases towards one (see figure 2.6). Therefore, as far as variability is concerned, the optimal estimation would be achieved at the τv where CViri reaches its minimum, because there the output train is most regular.
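For Poisson input this minimum can be located analytically. Because both the recovery time and the wait for the next successful spike are exponential (means τv and 1/(Uν)), each IRI is a sum of two independent exponentials, so CV²iri = (τv² + 1/(Uν)²)/(τv + 1/(Uν))², minimized at τv = 1/(Uν) with value 1/2. This decomposition is an inference from the model's memoryless assumptions, not a formula quoted from the text:

```python
import numpy as np

def cv_iri(tau_v, nu, U):
    """CV of inter-release intervals for Poisson input through one release site.

    IRI = Exp(mean tau_v) + Exp(mean 1/(U*nu)): recovery, then the wait for
    the next spike that releases (memoryless thinning of the Poisson train).
    """
    w = 1.0 / (U * nu)                 # mean wait for a successful spike
    return np.sqrt(tau_v**2 + w**2) / (tau_v + w)

nu, U = 10.0, 0.5
taus = np.linspace(1e-4, 1.0, 10_000)
cvs = cv_iri(taus, nu, U)
best = taus[np.argmin(cvs)]
print(best, cvs.min())   # minimum CV = 1/sqrt(2) at tau_v = 1/(U*nu) = 0.2 s
```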
The question now is: what is the outcome of this sort of pull-and-push mechanism, in which saturation pulls τopt towards zero while regularization pushes it towards a finite non-zero value? The answer depends critically on the amount and time scale of the input correlations. This dependency is depicted in figure 3.5. In order to better illustrate the action of both saturation and the elimination of variability, we have isolated the effect of the first by means of the following example:
• Poisson response train. Let us suppose that the response statistics are always Poisson with rate νr, i.e. the same saturating function we derived in eq. 2.3.8. We are going to plot the Fisher information contained in the train of synaptic responses about the input rate ν, but assuming that the output is Poisson. In this way we may separate the effect of saturation, and later include the effect of the variability. The top plot in fig. 3.5 displays the Fisher information per response (dotted lines). Different colors represent again different values of CVisi. Because in this case the output is by hypothesis Poisson, both codes, timing and counting, coincide. As expected, the pull-and-push competition, in which only one player (saturation) is active, is resolved by an optimal τopt = 0. For any value of the input CVisi the information decays monotonically as τv increases. The factor ∂νr/∂ν, responsible for the information loss with τv, is shown in the middle plot of fig. 3.5; it decreases monotonically from U towards zero. Summarizing, in this hypothetical case any amount of depression, that is τv > 0, increases the estimation error of the rate.
Including the effect of CViri yields a quite different result, as shown in figure 3.5 (top plot). Different colors represent different magnitudes of the input correlations, which we now examine separately.
• Poisson input. When the input is a Poisson train, the maximal information is achieved at τopt = 0 for both codes. Still, the information is always larger than in the previously considered example, where the responses were Poisson, because CViri is smaller than one for any value of τv (see the black solid line in the bottom plot). This means that the train is more regular than Poisson, and consequently the estimation is better. CViri is plotted against τv in the bottom plot of the figure, together with 1/CV²iri (dotted lines), which is exactly the factor that appears in the information quantities Jsr(ν|n) and J(ν|n). Thus we conclude that, in spite of the fact that the output train is maximally regular for τv ≠ 0 (∼ 200 ms), saturation effects dominate and no compromise between the two is reached: the static synapse still performs better.
• Correlated input. When the input has CVisi > 1, the Fisher information per response, in both coding schemes, shows a maximum at a finite τopt > 0. The explanation can be read off from the behavior of CViri (bottom plot). For τv = 0 (static synapse), CV²iri = (CV²isi − 1)U + 1 (eq. 2.3.11). Hence the variability of the responses for small τv is high, reflecting a correlated input with CVisi > 1. This makes the Fisher information smaller than in the Poisson case when τv ∼ 0. As τv increases, the response train rapidly becomes more regular until CViri reaches a minimum, which is very close to the minimum of the Poisson-input case. As a consequence, the reduction of variability, defined as the difference between the maximum value at τv = 0 and the minimum, is now bigger. Because of this more severe elimination of variability, the Fisher information presents a maximum at a τopt > 0. It is interesting to notice that, although the effect of the regularization of the response train is strong, τopt still lies below the position of the minimum of CViri. The reason is that saturation is pulling τopt down towards zero, so that in the end a compromise between the two effects arises. Finally, because for stronger correlations saturation occurs at higher values of τv, the case CVisi = 2 (blue lines) shows a bigger τopt than the case CVisi = 1.5 (red lines).
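The larger variability reduction for irregular input can be reproduced with a small simulation. As a stand-in for the correlated trains used in the text (which have a correlation time τc), the sketch below uses a hyperexponential renewal input, which also has CVisi > 1; the depressing site then visibly lowers the CV of the inter-release intervals relative to the static synapse (τv = 0):

```python
import numpy as np

def bursty_isis(n, rng, p=0.5, fast=0.01, slow=0.19):
    """Hyperexponential ISIs (mixture of two exponentials): CV_isi > 1.

    Mean ISI = p*fast + (1-p)*slow = 0.1 s, i.e. a 10 Hz train.
    """
    means = np.where(rng.random(n) < p, fast, slow)
    return rng.exponential(means)

def cv_of_iris(isis, U, tau_v, rng):
    """Pass a spike train through one depressing site, return CV of the IRIs."""
    t, refilled_at, releases = 0.0, 0.0, []
    for isi in isis:
        t += isi
        if t >= refilled_at and rng.random() < U:
            releases.append(t)
            refilled_at = t + rng.exponential(tau_v) if tau_v > 0 else t
    iri = np.diff(releases)
    return iri.std() / iri.mean()

rng = np.random.default_rng(3)
isis = bursty_isis(400_000, rng)
cv_static = cv_of_iris(isis, U=0.5, tau_v=0.0, rng=rng)      # bursts survive
cv_depressed = cv_of_iris(isis, U=0.5, tau_v=0.2, rng=rng)   # bursts filtered
print(cv_static, cv_depressed)
```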
The discussion above holds for both codes. The reason is that, as we saw in the previous section, both coding schemes behave surprisingly similarly in the range where saturation is not dominant, i.e. τvνU ≪ 1 (compare the solid and dashed lines in the top plot of fig. 3.5). Since the maximization of the Fisher information takes place in a non-saturating regime (i.e. τopt < τsat), both codes give quantitatively almost the same result, that is, the same τopt.
After analyzing the Fisher information per response, we plot in figure 3.6 (bottom plot) the Fisher information per unit time. The ratio between the two is simply νr, namely J = νr Jsr, where νr is a decreasing function of τv (remember that νr is constrained by 0 < νr < 1/τv). Thus, unless the maximum of the information per response Jsr is very prominent, the information per unit time J is a monotonically decreasing function of τv, meaning τopt = 0. In conclusion, unless the stimulus is a very irregular spike train (CVisi ∼ 3, meaning bursty trains with many impulses per burst; data not shown), synaptic depression is a disadvantage as regards the reconstruction error per unit time.
3.3.2.2 Optimizing τv with the Mutual Information

We now study the dependence of the Shannon information on τv. We take as the p.d.f. of the input stimulus an exponential function with a lower cut-off νinf = 1 Hz, which reads f(ν) = (1/ν̄) e^{−ν/ν̄}. The mutual (middle plot) and Fisher (top and bottom plots) informations are plotted together in figure 3.6 to compare their dependence on τv. This is done by setting ν̄, in the Shannon case, equal to the rate ν in the Fisher case. Leaving aside the absolute values (which obviously correspond to different units), I(∆;ν) follows the behavior of the Fisher information per response very well: the maximum occurs at a non-zero τopt only when the input is correlated, and the values of τopt are approximately the same.
Because of this analogous behavior of the Shannon information and the Fisher information per response, the information per unit time is also qualitatively the same as the Fisher information per unit time (data not shown): unless correlations are very large, the information per unit time decreases monotonically, meaning τopt = 0.
Figure 3.6: Optimization of the recovery time constant τv, and comparison of the Fisher information per response Jsr, per unit time J, and the mutual information I(∆;ν). Top: solid lines represent Jsr(ν|∆) (the Fisher information given the length of an IRI); dashed lines represent Jsr(ν|n) (counting code) as a function of τv. Middle: mutual information I(∆;ν). Bottom: Fisher information per second: solid lines represent J(ν|∆) and dashed lines represent J(ν|n). Colors of the inset apply to all plots and all line types. Parameters: ν = 10 Hz, τc = 20 ms, U = 0.5, N0 = 1.
It is a well known property [Rieke et al., 1996, Dayan and Abbott, 2001] that, for a renewal process with a fixed mean rate, the inter-event-interval distribution that maximizes the entropy of the train is an exponential, that is, a Poisson process. This result can also be phrased in the following way: any correlations between the times of the events decrease the entropy with respect to a Poisson process. However, maximizing the mutual information is not equivalent to maximizing the entropy of the response. If we rewrite the expression for the mutual information of eq. 3.2.8,

I(∆;ν) = H(∆) − ⟨H(∆|ν)⟩ν    (3.3.9)
we see that the maximization of I(∆;ν) requires a compromise: maximizing the differential entropy of the response, H(∆), without allowing the noise entropy ⟨H(∆|ν)⟩ν to get too big. Whenever the synapse enters the saturation regime, the distribution of the responses begins to be dominated by the vesicle recovery dynamics, and thus to be independent of the input rate. When this occurs, the noise entropy ⟨H(∆|ν)⟩ν equals the response entropy H(∆) (i.e. when the input signal ν is fixed, the response contains the same variability as when the whole ensemble f(ν) is considered), and the information vanishes. In other words, no matter the magnitude of the response entropy H(∆), saturation always makes the information fade away.
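The decomposition of eq. 3.3.9 can be probed numerically with a plug-in (histogram) estimate of I(∆;ν): draw ν from f(ν), draw one IRI per rate, and bin the joint distribution. The sketch below uses the single-site Poisson-input IRI decomposition (recovery time plus wait for the next successful spike), an assumption consistent with the model of chapter 2 but not code from the thesis; binned estimators are biased, so the value is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
U, tau_v = 0.5, 0.2
n = 300_000

# Input rates from the shifted exponential of eq. 3.3.6 (nu_bar = 10, nu_inf = 1).
nu = 1.0 + rng.exponential(10.0, size=n)
# One IRI per rate: vesicle recovery + wait for the next releasing spike.
iri = rng.exponential(tau_v, size=n) + rng.exponential(1.0 / (U * nu))

# Plug-in mutual information from a 2-D histogram (in bits).
counts, _, _ = np.histogram2d(nu, np.minimum(iri, 5.0), bins=40)
p = counts / counts.sum()
px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
mask = p > 0
mi = (p[mask] * np.log2(p[mask] / (px @ py)[mask])).sum()
print(mi)   # a positive fraction of a bit
```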
Figures 3.7 and 3.8 show, together with the mutual information (middle plot), the two entropies H(∆) and ⟨H(∆|ν)⟩ν (top plot), and the efficiency of the transmission (bottom plot), as a function of τv for a correlated input (CVisi = 1.5). The efficiency is defined as

e = I(∆;ν) / H(∆)    (3.3.10)

Again we have computed the hypothetical example in which the response output is “forced” to be Poisson with rate νr(ν).
The mutual information is the distance between the two entropies. Hence, we observe in fig. 3.7 that, although both entropies grow monotonically with τv, the distance between them varies. In the case of Poisson output, the only observable effect is that the noise entropy slowly approaches the response entropy, leading to a monotonous decrease of the mutual information and the efficacy (red dashed lines in the middle and bottom plots). In the unconstrained case, both terms also approach each other for large τv, making the information decrease to zero. For very small τv, when there is barely any depression and the statistics of the output are dominated by the input, the response entropy H(∆) is much smaller than the equivalent term in the Poisson-output case. This happens because, in this range, the output has large positive correlations inherited from the input and falls far from its Poissonian upper bound. When τv ∼ 70 ms, H(∆) reaches this upper bound, because depression introduces negative correlations that cancel the positive correlations of the input. It is in this range that the mutual information reaches its maximum. In fig. 3.8 the same plots have been drawn over a larger range of τv. In the top plot we can check that for very large values of τv, H(∆) again reaches its upper bound. This is indeed due to de-correlation as well, but in this case no maximization of the information is achieved, because both entropies H(∆) and ⟨H(∆|ν)⟩ν have converged to the same value.
3.3.2.3 Several vesicles: N0 ≥ 1

After proving that a synapse with a single vesicle can be tuned to recover at an optimal rate 1/τopt, we now consider a synapse with several docking sites, N0. As we described in section 2.3.2 of chapter 2, N0 represents the maximum number of vesicles that can be ready for release, i.e. the size of the RRP is always N ≤ N0. Since we are assuming that at most one vesicle fuses with the membrane upon arrival of a spike [Edwards et al., 1976, Triller and
Figure 3.7: Analysis of the mutual information I(∆;ν) and the entropies H(∆) and ⟨H(∆|ν)⟩ν as a function of τv. In the three plots two examples with the same parameters are shown: dashed lines represent the “forced” Poisson (see text), while solid lines represent the same case but considering the variability of the responses. Top: entropy H(∆) and conditional entropy ⟨H(∆|ν)⟩ν of the inter-response interval. Middle: mutual information I(∆;ν). Bottom: efficacy of the transmission, defined in eq. 3.3.10. Parameters: ν = 10 Hz, τc = 20 ms, CVisi = 1.5, U = 0.5, N0 = 1.
Figure 3.8: Same representation as in figure 3.7, but showing a different range of τv.
Korn, 1982, Stevens and Wang, 1995], the probability of release is defined as a function of the number of docked vesicles (eq. 2.2.1):

pr(N) = U Θ(N − 1)    N = 0, . . . , N0    (3.3.11)
We now want to address the question of whether a synapse with N0 > 1 transmits information better, worse, or in the same manner as a single-docking-site synapse. The presence of several vesicles ready for release has two important consequences:
1. The transfer function νr(ν) is quantitatively different: it saturates later (νsat grows with N0) and at a larger value, the larger N0 is. By analytic arguments we showed in section 2.3.2 that if the stationary probability that N > 1 is small compared with the probability that N = 1, then the synaptic contact behaves effectively like a one-vesicle synapse but with recovery time constant T = τv/N0; that is, the effective unique vesicle recovers N0 times faster. For this reason, two scenarios for comparing synapses with different numbers of docking sites can be considered. The first comparison is the straightforward one, where we just add docking sites to a synapse and keep the recovery time of each site, τv, constant. In this scenario, the number of releases per second, that is νr, will increase as we introduce docking sites (see fig. 2.10). The second possible situation consists in a comparison where, as one includes more docking sites, the maximal response rate is kept constant. This is achieved by changing τv each time N0 is varied, so that

τv / N0 = T = constant    (3.3.12)

where T is a time constant whose inverse sets the maximal possible value of the release rate, νr < 1/T. In this way, we would compare a synapse with one docking site replenished at rate 1/τv with another with two sites, each recovering at the slower rate 1/(2τv). Renormalizing τv in this manner does not make the response rates for different N0's equal; it just equalizes the range of νr.
2. The elimination of variability does not occur with the same efficiency. For N0 = 1, the regularization of the train (i.e. the reduction of CVout = CViri with respect to CVin = CVisi) is based on the inability of the single-site synapse to trigger consecutive responses when the spikes come very close together (e.g. in bursts). Because there is only one release site, after a vesicle undergoes exocytosis some finite time is needed to refill the site with a new vesicle. During this time, the rest of the spikes in the burst find the terminal empty and are filtered away. If instead N0 > 1, there is a non-zero probability that two or more spikes in a burst elicit a response. This makes the regularization of the input train, or phrased another way, the reduction of the correlations present in the incoming spikes, less efficient. Thus, those IRIs produced by spikes within a burst will increase the variability of the train of responses, making the estimation worse. Comparing the shape of the distribution of IRIs when N0 is one and when it is larger (figure 2.9), it can be observed that only in the latter case does ρiri(∆|ν) show a skewed high peak at the origin, which reflects the bursty nature of the input and accounts for the probability that two spikes within a burst elicit a response.
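Both consequences can be seen in a direct simulation of eq. 3.3.11 with N0 independent docking sites, each refilling after an exponential time of mean τv and at most one release per spike. This is an illustrative sketch of the model as described here; the bursty input is a hyperexponential stand-in for the correlated trains of the text:

```python
import numpy as np

def simulate(n_spikes, N0, U, tau_v, rng, p=0.5, fast=0.01, slow=0.19):
    """Bursty (hyperexponential, 10 Hz) train through N0 docking sites.

    A spike releases one vesicle with probability U if any site is filled;
    the emptied site refills after an exponential time of mean tau_v.
    Returns (release rate, CV of the inter-release intervals).
    """
    isis = rng.exponential(np.where(rng.random(n_spikes) < p, fast, slow))
    refill = np.zeros(N0)            # times at which each site is filled again
    t, releases = 0.0, []
    for isi in isis:
        t += isi
        docked = refill <= t
        if docked.any() and rng.random() < U:
            releases.append(t)
            refill[np.argmax(docked)] = t + rng.exponential(tau_v)
    iri = np.diff(releases)
    return len(releases) / t, iri.std() / iri.mean()

rng = np.random.default_rng(5)
results = {}
for N0 in (1, 2, 4):
    results[N0] = simulate(100_000, N0, U=0.5, tau_v=0.5, rng=rng)
    print(N0, results[N0])   # rate grows with N0; the CV reduction weakens
```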
Comparison of different N0’s at τv fixed
In this case the range of the output rate νr increases with N0 as 0 < νr < N0/τv. This implies that saturation is pushed up to larger values of the input ν (or of τv). For N0 > 1, more than one vesicle can be ready for release, and the shower of afferent spikes has to be more “intense” (higher rate) to keep the synapse depleted, that is, in saturation. This effect was shown in fig. 2.10, where the arrows on the x-axis indicate the value of the saturation rate, νsat. In terms of information transmission, boosting the position of νsat increases the
Figure 3.9: Optimization of τv regarding the Fisher information per response, for different values of the readily releasable pool size N0. Top: Fisher information per response for a timing output code, Jsr(ν|∆), as a function of τv, for several values of N0. Bottom: response rate νr versus τv. Inset values apply to both plots. Parameters: ν = 10 Hz, CVisi = 1.8, τc = 10 ms, U = 1.
ability of the channel to transmit higher input rates. In figure 3.9 (top plot) we have plotted the Fisher information Jsr(ν|∆) as a function of τv, for several values of N0, when the input is correlated (CVisi = 1.8). For high values of τv, the cases where N0 is small, i.e. one or two, feel the saturation and the information falls to zero, whereas for large N0, i.e. four or five, the information remains almost constant (in the range of τv shown). Moreover, in these latter cases a maximum is hardly detected, because the bursts are poorly filtered away and no regularization of the train occurs. On the other hand, when N0 = 1, 2, as τv goes from 0 to 100 ms, firstly the response rate drops, reflecting that many spikes are being filtered (bottom plot); secondly, because of this filtering, the variability is severely reduced and the information shows a maximum (top plot).
What happens to the mutual information? The result, displayed in the top plot of figure 3.11, is qualitatively the same. However, the case N0 = 1 seems to stand apart from the cases N0 > 1. More specifically, the systematic behavior (observed in the Fisher information),
Figure 3.10: Optimization of T = τv/N0 regarding the Fisher information per response, for different values of the readily releasable pool size N0 (where τv in each case varies such that τv = N0 T). Top: Fisher information per response for a timing output code, Jsr(ν|∆), as a function of T, for several values of N0. Bottom: response rate νr versus T. Inset values apply to both plots. Parameters: ν = 10 Hz, CVisi = 1.8, τc = 10 ms, U = 1.
namely the displacement of the maximum towards higher values of τv and the decline of the information at the maximum, occurs only for N0 > 1. Moreover, the optimization of the Shannon information is achievable for higher values of N0 than for the Fisher information. This can be checked by comparing the instances N0 = 5 of figures 3.9 and 3.11: while the Fisher information shows a negligible maximum, in the mutual information the bump is still prominent.
In conclusion, if the synapse is able to optimize the biophysical parameter τv, then it would be more convenient to have just one docking site (or two for the mutual information) than to provide the synapse with many. If, on the contrary, τv is fixed at a large value (e.g. ∼ 1 s), then including more docking sites would improve the estimation performance and the information transfer, particularly for high input rates.
Figure 3.11: Optimization of τv and of T = τv/N0 regarding the mutual information I(∆;ν). Top: information I(∆;ν) versus τv for different values of the readily releasable pool size N0. Bottom: information I(∆;ν) as a function of T, for several values of N0 (where τv in each case varies such that τv = N0 T). Inset values apply to both plots. Parameters: ν = 10 Hz, CVisi = 1.8, τc = 10 ms, U = 1.
Comparison of different N0’s at T = τv/N0 fixed
Figure 3.10 (top plot) shows the Fisher information as a function of T in this different comparative scenario. Indeed, this plot was obtained by rescaling the x-axis of fig. 3.9 by a factor 1/N0 (for each N0 case). The addition of more docking sites now gives little advantage in terms of diminishing the saturation effects (the bottom plot illustrates the response rate νr for different values of N0). In all cases νr converges to the same limit frequency, 1/T, as T is increased. The top plot shows that all the maxima are aligned and become less and less prominent as N0 is increased. This is consistent with what we just saw: the reduction of variability is less efficient when more vesicles can be docked at the same time. Thus the beneficial effect of depression as a correlation filter is no longer performed in such a proficient way. The fact that all the maxima are aligned can be traced back to the decomposition of the statistics of the IRI made in the previous chapter (see eq. 2.3.13): the multiple-site case
Figure 3.12: Optimal recovery time τopt as a function of the input rate ν, for different magnitudes and temporal scales of the input correlations. Parameters: CVisi = 1.8 (left), τc = 50 ms (right), U = 0.5, N0 = 1.
with a renormalized recovery time constant behaves like the single-release-site model plus a perturbative term (which vanishes in the saturation regime) that accounts for the probability of more than one vesicle being ready at the same time.
The bottom plot in figure 3.11 shows the Shannon information as a function of T. The outcome again follows a behavior similar to that of the Fisher information, that is, all maxima are aligned and less prominent as N0 is increased. However, the case N0 = 1 falls out of this general trend: the optimal T is larger, and I(∆;ν) at the maximum is smaller, than for N0 = 2, 3. In this sense, the mutual information seems to be more sensitive than the Fisher information to this qualitative difference, namely that for N0 > 1 there is a non-zero probability of having an infinitesimally small IRI.
3.3.2.4 Dependence of τopt on the other parameters

In this section we analyze the dependence of the optimal recovery time constant τopt on the rest of the parameters, and we will try to bound it quantitatively taking into consideration
Figure 3.13: Optimal recovery time τopt of the Fisher information per response, as a function of the input CVisi, for different numbers of docking sites N0. Colors in the inset apply to all plots. Parameters not indicated in the figure: ν = 10 Hz, U = 1.
the natural ranges of the other parameters.

To a first approximation, what sets the scale of τopt? The order of magnitude of τopt is determined by the renormalized input rate, νd = Uν. The rest of the parameters also affect the value of τopt, but the changes are small. Figure 3.12 shows the τopt which maximizes Jsr(ν|n) as a function of ν, for several values of the input CVisi and τc. The dependence is close to a hyperbolic relation. When a non-linear regression was performed, the relation in all cases was

τopt = C / ν^α    (3.3.13)

where C is a constant that depends on the other input parameters CVisi and τc, whereas α ∼ 0.75 for all the examples. Because of this dependence, if one assumes that the firing rate range in which information is transmitted in the brain lies between around 10 and 100 Hz, and takes U from 0.1 to 0.95 [Markram, 1997, Murthy et al., 1997], the range of τopt obtained is 0 < τopt < 200 ms. In order to obtain higher values of τopt, lower input rates must be considered [de la Rocha et al., 2002]. It is interesting to mention here that Fuhrmann et al. [2002] find, by optimization of the information (using different input-output codes), a relation τopt = 1/(Uν). In the last section of the chapter we will discuss the compatibility of this result with the experimental data. The dependence of τopt on the input correlation magnitude (that is, the input CVisi) is depicted in figure 3.13. As we know, the correlations push the
Figure 3.14: Optimal recovery time τopt of the Fisher information per response, as a function of the time scale τc of the input correlations, for different values of the CVisi (left) and different numbers of docking sites N0 (right). Parameters left: ν = 10 Hz, U = 0.5, N0 = 1. Right: ν = 10 Hz, U = 1, CV = 1.8.
saturation up towards higher values. At the same time, increasing CVisi makes the response coefficient of variation CViri reach its minimum at a higher τv. Both effects cooperate to increase the value of τopt as CVisi increases. In the same figure, it can be observed that, if N0 > 1, having a correlated input is not a sufficient condition to find τopt > 0 (what we will call being subject to optimization). The CVisi threshold, over which the synapse is subject to optimization, grows with N0 and decreases with the correlation time τc. Thus, if the synapse has many docking sites, it can only be optimized if the input is a very bursty train, which implies a short correlation range (τc ≪ 1/ν) and large positive correlations (CVisi ∼ 2). The variability (or, equivalently, the auto-correlation) present in the input is filtered more efficiently if the temporal scale of the correlations is small. This points in the direction that short term depression is best suited to transmit information conveyed in spike trains made up of bursts. In figure 3.14 the relation between τopt and τc is explicitly analyzed: the top plots represent τopt as a function of τc, while the bottom plots show the ratio between the information at τopt and at τv = 0. This ratio gives a quantitative measure of how advantageous it is to have depression tuned to its optimal value (τv = τopt), as opposed to a static synapse with no depression (τv = 0). Solid lines in the four plots represent the τopt when the Fisher information was considered, whereas dashed lines apply for the τopt of the
[Figure: τopt (s) (top) and information ratio (bottom) versus N0 (0 to 7). Legend: τc = 100, 50, 25, 10, 4 ms.]

Figure 3.15: Top: Optimal recovery time τopt of the Fisher information per response, as a function of the number of docking sites N0, for different values of τc. Bottom: Ratio between the Fisher information at the maximum and at the origin, i.e. Jsr(τv = τopt)/Jsr(τv = 0). Colors in the inset apply to both plots. Parameters: ν = 10 Hz, CV = 1.8, U = 1.
mutual information. In the first place, we notice that the range of τc over which τv can be optimized is clearly upper bounded. This happens because the height of the maximum decreases continuously with τc. Roughly speaking, the filter time window τopt must grow with the temporal extent of the correlations which need to be removed. But if τc keeps increasing, we need to increase τopt beyond τsat, so that the information drops to zero. This is shown in the top left plot of fig. 3.14. For this reason, when the input has a larger CVisi, the upper bound of τc, where saturation starts, is larger, due to the dependence of the saturation regime on the input CVisi. In contrast, adding docking sites has the opposite effect, that is, it shrinks the range of τc where the input is subject to optimization.

Finally, we have plotted the dependence of τopt on the parameter N0. The τopt which optimizes the Fisher information is represented in figure 3.15, while the case of the mutual information is drawn in fig. 3.16. In both figures, the bottom plot represents the ratio of the information at τopt and at τv = 0. As previously discussed, increasing N0 decreases the capability of optimizing τv, as well as the advantage of having depression at its optimal tuning over a static synapse. Eventually, an N0 is reached where depression does not constitute any advantage anymore. If τc is large (∼ 100 ms), this maximal N0 is reached, for the parameters chosen in fig. 3.15, already at N0 = 1. As we temporally bound the
[Figure: τopt (s) (top) and information ratio (bottom) versus N0 (0 to 7). Legend: cv = 1.5, 2; ν = 10 Hz, τc = 20 ms, U = 1.]

Figure 3.16: Top: Optimal recovery time τopt of the mutual information I(∆; ν), as a function of the number of docking sites N0, for different values of the input CV. Bottom: Ratio between the mutual information at the maximum and at the origin, i.e. I(τv = τopt)/I(τv = 0). Colors in the inset apply to both plots. Parameters: ν = 10 Hz, τc = 20 ms, U = 1.
correlations, i.e. decrease τc, the maximal N0 becomes bigger. In the case of the mutual information this limiting effect occurs for higher values of N0.⁸
3.3.2.5 Metabolic considerations

As described in section 3.2.4, we now study the effects of considering the metabolic consumption in the optimization of the information transmitted. To do this, we use the energy function proposed in equation 3.2.26 to account for the metabolic expenditure produced in the recovery of vesicles. This energy function depends linearly on the recovery rate as

E(τv) = β / τv    (3.3.14)

The information about the input rate per unit energy and per response reads (eq. 3.2.27)

Ĩ(∆; ν) = I(∆; ν) / E(τv) = I(∆; ν) τv / β    (3.3.15)
⁸Solving our model requires working out analytically the N0 × N0 linear system of eq. 2.2.13. Due to computational constraints (basically, the calculation exceeded the RAM of the computer), we could not compute the solution of the model for values of N0 beyond 7, but one can tell by continuity arguments what the result for higher N0 would be.
[Figure: information per energy unit (top) and information per time unit per energy unit (bottom) versus τv (0 to 1.4 s). Legend: CV = 1, 1.5, 2.]

Figure 3.17: Optimization of the recovery time constant τv regarding the mutual information per unit energy Ĩ(∆; ν). Top: Dashed lines represent the mutual information I(∆; ν) (without metabolic considerations) while solid lines represent the information per unit energy Ĩ(∆; ν) (see eq. 3.3.15). Bottom: Dashed lines represent the information rate R(∆; ν), while solid lines represent the information per unit time per unit energy, R(∆; ν) τv. Both quantities in both plots are shown in arbitrary units and have been normalized to have similar overall scale. Inset values apply for both plots and for solid and dashed lines. Parameters: ν = 10 Hz, τc = 20 ms, U = 0.5, N0 = 1.
We have plotted Ĩ(∆; ν) versus τv in figure 3.17 (top plot, solid lines), along with the information I(∆; ν) (dashed lines), for the same values chosen in the previous figure 3.6. Both quantities are illustrated in arbitrary units and have been normalized so that their magnitudes are similar (in these arbitrary units).

The first thing to notice is that Ĩ(∆; ν) is zero when τv = 0, for any value of the input CV. Besides, the τopt which maximizes I(∆; ν) is in no case optimal for Ĩ(∆; ν). On the contrary, Ĩ(∆; ν) reaches its maximum value at a larger τv, and seems to saturate at that maximum for higher τv. However, it is hard to determine numerically the limit value of Ĩ(∆; ν) as τv goes to infinity, because it is the product of a vanishing integral, which is computed numerically, multiplied by τv, which tends to infinity. Therefore, we have to conclude that this particular result is not very robust to the choice of E(τv): any other power of the rate 1/τv smaller than one would have led to a vanishing Ĩ(∆; ν) in the limit τv → ∞.

In the bottom plot of figure 3.17 (solid lines) the information rate per unit energy, i.e.
[Figure: J per response per energy unit (top) and J per time unit per energy unit (bottom) versus τv (0 to 1 s). Legend: CV = 1, 1.5, 2.]

Figure 3.18: Optimization of the recovery time constant τv regarding the Fisher information per unit energy. Top: Dashed lines represent the Fisher information per response Jsr(ν|∆) (without metabolic considerations) while solid lines represent the ratio Jsr(ν|∆)/E(τv), meaning the reconstruction error per response and per energy unit. Bottom: Dashed lines represent the Fisher information per second J(ν|∆), while solid lines represent J(ν|∆)/E(τv), meaning the reconstruction error per second and per energy unit. Both quantities in both plots are shown in arbitrary units and have been normalized to have similar overall scale. Inset values apply for both plots and for solid and dashed lines. Parameters are as in fig. 3.17: ν = 10 Hz, τc = 20 ms, U = 0.5, N0 = 1.
R(∆; ν)/E(τv), is depicted as a function of τv, along with R(∆; ν) for comparison (both quantities have been normalized in arbitrary units). As opposed to the behavior of R(∆; ν), which decreases monotonically, the information rate per unit energy shows a maximum for values of τv between 200 and 400 ms, depending on the correlations of the input, that is, on the CV. Regardless of which quantity we are optimizing, namely Ĩ(∆; ν) or R(∆; ν)/E(τv), it is clear that considering the energetic cost of the recovery process, by computing the information per unit energy, punishes small values of τv, which are costly, and pushes the optimal τopt towards higher values, in the range of several hundred milliseconds. Other choices of the dependence of E(τv) on τv (e.g. logarithmic, quadratic, ...) would have produced different curves but, as long as E remains a monotonically increasing function of the recovery rate 1/τv, the qualitative result would be the same: favoring higher τv values over low ones. Notice that, with the choice made here, no extra parameter was required to obtain this higher τopt.
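The displacement of the optimum under the metabolic constraint can be illustrated with a toy curve. The snippet below uses a hypothetical single-peaked information function (not the model's actual I(∆; ν); just a stand-in with one maximum) and compares the maximizer of I(τv) against that of I(τv)/E(τv) = I(τv) τv/β:

```python
import math

# Toy illustration: dividing a peaked information curve by the metabolic
# cost E(tau_v) = beta/tau_v (i.e. multiplying by tau_v) moves the optimum
# to a larger recovery time constant. info() is a hypothetical stand-in.

def info(tau_v):
    """Hypothetical peaked information curve, maximal at tau_v = 0.1 s."""
    return tau_v * math.exp(-tau_v / 0.1)

taus = [i / 1000.0 for i in range(1, 2001)]      # 1 ms .. 2 s grid
i_plain = [info(t) for t in taus]
i_per_energy = [info(t) * t for t in taus]       # beta = 1, units absorbed

t_plain = taus[i_plain.index(max(i_plain))]
t_energy = taus[i_per_energy.index(max(i_per_energy))]
print(t_plain, t_energy)  # -> 0.1 0.2: the energy constraint favors larger tau_v
```

The shift is generic: any cost that decreases with τv rewards slow recovery, so the per-energy optimum always sits to the right of the unconstrained one for this kind of peaked curve.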
In figure 3.18 we have plotted the Fisher information per response (top plot) and per
unit time (bottom plot), divided by the energy function E(τv). In both cases, the information per unit energy shows a maximum for a value of τv higher than the one obtained with no metabolic cost constraint. In particular, a non-zero τopt is now obtained even for the case in which the input is Poisson (something that never happened when no energy considerations were taken into account). The new values of τopt range, for the parameters chosen in this example, from 150 ms to 400 ms, again in agreement with the result from the mutual information.

The maximization of I(∆; ν) performed in previous sections resulted in an optimal τopt in the range 50-150 ms. Now, the introduction of a metabolic cost constraint changes this result quantitatively, making τopt equal to 200-600 ms. As mentioned above, these last values are indeed already in the range of the values observed in neocortical synapses [Markram, 1997, Varela et al., 1997, Markram et al., 1998a, Finnerty et al., 1999, Varela et al., 1999, Petersen, 2002].
3.3.3 Optimization of the release probability U

In this section we will analyze the relevance of the synaptic parameter U, which represents the probability of release when the RRP is not empty. We will again address the question of whether it is possible to maximize the information by tuning the value of U. The optimal value which maximizes the information will be denoted by Uopt. We will start by exploring the Fisher information, which in some cases can be analytically maximized, and later we will focus on the Shannon information.
3.3.3.1 Optimizing U with the Fisher Information

Let us start by studying the Fisher information per response, Jsr. A naive approach tells us that when U approaches zero the information should decrease to zero, because in that limit the synaptic channel is "closed". In these circumstances, it seems that no estimation of the input rate ν can be accomplished by the post-synaptic cell. However, as will be derived now, the limit of Jsr when U → 0 is not zero but a finite value.

As already shown in previous sections, both coding strategies, counting and timing, result in almost the same Fisher information (only in the saturation regime are the differences appreciable). Therefore, we will use the analytical expression for Jsr(ν|n), the Fisher information per response given the total number of responses, to explore the dependence on U. Let us study first a Poisson input; later we will consider the correlated situation. Thus, when CVisi = 1, Jsr(ν|n) equals (see eq. 3.2.5)

Jsr(ν|n) = (1 / (CV²iri ν²r)) (∂νr/∂ν)² = [(1 + νUτv)⁴ / (ν²U²[1 + (νUτv)²])] [U² / (1 + νUτv)⁴]
[Figure: νr (Hz) (top) and dνr/dν (bottom) versus ν (Hz).]

Figure 3.19: Analysis of the transfer function νr(ν) for two values of the release probability U. Top: Black lines: response rate νr as a function of the input rate ν when the input is Poisson and U = 1 (solid) and U = 0.1 (dashed). Red lines: the same but for a correlated input (CV = 2). Brown straight lines: transfer function when the recovery time constant τv equals zero (νr = Uν), for two values of the release probability: U = 1 and U = 0.1. Bottom: Derivative of the transfer function, νr(ν)′, for the cases depicted above (except when τv = 0). Green vertical lines mark the value of ν at which νr(ν; U = 1)′ = νr(ν; U = 0.1)′. When ν is beyond the green line, the change U = 1 → U = 0.1 makes the slope of the transfer function increase, while if ν is below the green line, the change makes the slope decrease. Inset values apply for both plots. Parameters: τc = 20 ms, τv = 500 ms, N0 = 1.
= 1 / (ν²[1 + (νUτv)²])    (3.3.16)
One might think that the Fisher information should tend to zero as U → 0, because the transfer function νr(ν) becomes completely flat and equal to zero. However, as U → 0 the decreasing behavior of the derivative ∂νr/∂ν, which is the factor that captures this intuitive idea, is compensated by the variance factor CV²iri ν²r, because both go to zero as U². For this reason, we consider that the optimization of Jsr(ν|n) by tuning U is not well defined, because it shows a paradoxical behavior as U → 0. Moreover, U = 0 turns out to be the optimal value to estimate ν (data not shown). To claim that the cell would optimize U following the maximization of the quantity Jsr(ν|n), thus making the synapse completely useless, does not seem very reasonable, so we will disregard this criterion. What happens if we consider the Fisher information per second J(ν|n)? The expression of J(ν|n), when the code is the
counting of responses and the input is Poisson, reads (eq. 3.2.6)

J(ν|n) = Jsr(ν|n) νr = U / (ν[1 + (νUτv)²](1 + νUτv))    (3.3.17)
By inspection, it is clear that J(ν|n) does converge to zero as U → 0. The question now is: why could it be advantageous for the release to be unreliable, i.e. U < 1? If the synapse is static (τv = 0, and therefore no depression at all), the Fisher information for a Poisson input equals J(ν|n) = U/ν. Thus, J(ν|n) grows linearly with U, a behavior that can be traced back to the transfer function, which in the static case is simply νr = Uν. U would therefore be playing the role of the gain. As a consequence, the larger the release probability, the larger the gain and the better the input ν is represented in the output νr, making the estimation more accurate.

Nevertheless, when τv ≠ 0, the transfer function is no longer linear but saturates at 1/τv. Figure 3.19 illustrates the function νr(ν) (top plot) and its derivative (bottom plot) for two values of U, for a Poisson input (solid lines) and for a bursty input (CVisi = 2, dashed lines). The release probability U still controls the gain, but only at the origin, that is, when ν = 0 (this is shown in the figure by the brown straight lines, which are exactly νr = Uν). Then, when the input is Poisson, decreasing the reliability from U = 1 to U = 0.1 (see figure) results in a decrease of the slope of the gain function at the origin and at low values of ν. However, for higher values of ν (beyond the vertical green dotted line) the gain has increased! This reflects the dependence of the saturating frequency νsat = 1/(Uτv) (eq. 2.3.5) on the parameter U: decreasing U pushes νsat up, so that those inputs ν that saturate when U = 1 will be better represented with U = 0.1, because the transfer function is steeper at that value of ν.
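This gain trade-off can be sketched numerically, assuming the standard single-site Poisson transfer function νr = Uν/(1 + Uντv) (an assumed closed form, consistent with the slope U at the origin and the saturation at 1/τv quoted above), with τv = 500 ms as in fig. 3.19:

```python
# Gain trade-off of fig. 3.19: lowering U reduces the slope d(nu_r)/d(nu)
# at low rates but increases it beyond the crossover rate, because the
# saturating frequency nu_sat = 1/(U*tau_v) is pushed up.

def gain(nu, U, tau_v=0.5):
    """Slope of nu_r = U*nu/(1 + U*nu*tau_v), i.e. U/(1 + U*nu*tau_v)**2."""
    return U / (1.0 + U * nu * tau_v)**2

# At low input rates the reliable synapse (U = 1) is steeper...
assert gain(1.0, 1.0) > gain(1.0, 0.1)
# ...but well beyond its saturation rate nu_sat = 1/(1*0.5) = 2 Hz,
# the unreliable synapse (U = 0.1, nu_sat = 20 Hz) has the larger gain.
assert gain(50.0, 0.1) > gain(50.0, 1.0)
```

For these parameters the two slopes cross near ν ≈ 6 Hz, which plays the role of the green vertical line in the figure.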
The range of rates for which the change U = 1 → U = 0.1 is advantageous is determined by the intersection of the derivatives of the two transfer functions (bottom plot). If the input is correlated, the result is qualitatively the same, except that now, for U = 1, the non-saturating range has been expanded (see eq. 2.3.10). Therefore, one has to go to larger values of ν to make the change U = 1 → U = 0.1 worthwhile.⁹ To summarize, if the input rate ν is high enough, it might be beneficial to make the synapse unreliable. However, if U gets close to zero, the information J(ν|n) vanishes, because no responses are observed within a unit time (eq. 3.3.17). Thus, there must be an optimal value Uopt to estimate the input rate ν, lying between a reliable synapse with U = 1 and a dead synapse with U = 0. Furthermore, input correlations are not needed to obtain Uopt > 0, i.e. this optimization can
⁹The reason why, for U = 0.1, the transfer function does not differ much between a Poisson input and a bursty input with CVisi = 2 is the following: the enlargement of the non-saturating regime scales with U (see eq. 2.3.10), so for small values of U the output is less sensitive to the value of CVisi.
[Figure: J per second (s⁻²) versus U (0 to 1), three stacked panels for τv = 100, 500, 1000 ms. Legend: CV = 1, 1.5, 2; ν = 10 Hz, τc = 50 ms, N0 = 1.]

Figure 3.20: Optimization of the release probability U regarding the Fisher information per second, when the output code is the number of responses. The three plots represent J(ν|n) versus U for three values of the input CV (colors in the inset apply to all plots). The larger the recovery time constant τv, the "stronger" the depression effects, and the lower the optimal Uopt.
be achieved for both Poisson and correlated inputs. The only requirement is that ντv ≠ 0, that is, that depression is present.

Figure 3.20 displays the Fisher information per second as a function of U, for different values of τv. Only in the top plot are both output codes plotted, showing that they are very similar. In the other two plots only the timing code is exhibited, for clarity. The cases depicted represent three different levels of depression, namely weak (τv = 100 ms), moderate (τv = 500 ms) and strong depression (τv = 1 s). These three levels of depression could also be obtained by fixing the value of τv and increasing the input rate ν. For the Poisson input (black lines), in all three cases there exists a non-trivial Uopt, i.e. 0 < Uopt < 1. As τv increases, depression becomes stronger and, for a fixed rate ν, saturation is increasingly important. Hence, the optimal Uopt becomes smaller as vesicle recovery becomes slower. The strategy that the model synapse seems to adopt to estimate ν is simple: "If you are not fast enough to get ready to attend everyone, just reduce the number of visits by randomly keeping only a fraction U of them. Only then do you remain sensitive to changes in the visit rate".

Input correlations change the result little, and only in the case where Uopt falls near one: as
[Figure: 3-D surface of Uopt over ν (0 to 20 Hz) and τv (0 to 1 s).]

Figure 3.21: 3-D plot of the optimal release probability Uopt as a function of ν and τv, when the input is Poisson (see eq. 3.3.19). It can be observed that only for low ντv is the reliable synapse, i.e. Uopt = 1, optimal. (Note: τ in the axis label stands for the τv of the text.)
CVisi increases, the optimal Uopt shifts slightly to higher values.

If the input is Poisson, an analytical expression for Uopt can easily be derived. Taking the derivative of J(ν|n), the equation ∂J(ν|n)/∂U = 0 reads

2τv³ν³U³opt + τv²ν²U²opt − 1 = 0    (3.3.18)

whose solution reads

Uopt = min(1, 0.657/(ντv))    (3.3.19)
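The constant 0.657 in eq. (3.3.19) is the positive root of eq. (3.3.18) in the variable x = ντvUopt; a short stdlib bisection recovers it:

```python
# Eq. (3.3.18) in the variable x = nu*tau_v*U_opt reads 2x^3 + x^2 - 1 = 0.
# Its unique root in (0, 1) gives the constant of eq. (3.3.19).

def cubic(x):
    return 2.0 * x**3 + x**2 - 1.0

lo, hi = 0.0, 1.0            # cubic(0) = -1 < 0 < cubic(1) = 2: root bracketed
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if cubic(mid) < 0.0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)
print(round(root, 3))        # -> 0.657

def u_opt(nu, tau_v):
    """Optimal release probability for a Poisson input, eq. (3.3.19)."""
    return min(1.0, root / (nu * tau_v))
```

For example, `u_opt(10.0, 0.5)` gives ≈ 0.131, while any pair with ντv < 0.657 returns the fully reliable synapse, Uopt = 1.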
Figure 3.21 shows the 3-D hyperboloid defined by the function Uopt(ν, τv). It can be observed that only when the product ντv is very small (more exactly, when ντv < 0.657) is the completely reliable synapse, U = 1, optimal. As soon as τv increases to plausible values and ν ranges over frequencies around 10-20 Hz, the optimal Uopt drops very fast to values in the interval 0.1-0.2.

When the input is correlated, Uopt does not change much. As fig. 3.22 shows, unless τc is very small (∼ 10 ms), it barely changes with respect to the Poisson input as CVisi is increased. In that case (small τc), the input correlations push Uopt closer to one. For example, if for a Poisson input Uopt is not too small (which only happens if the product ντv is not too big), a bursty input (CVisi ∼ 2, τc ∼ 5 ms) with the same ν would have a larger Uopt. Hence, the
[Figure: Uopt versus ν (0 to 50 Hz), four panels.]

Figure 3.22: Optimal release probability Uopt as a function of ν, when the input is correlated. Common parameters: U = 1, N0 = 1, CV = 1 (black lines), 1.5 (red lines) and 2 (blue lines). Top left: τv = 0.1 s, τc = 50 ms. Top right: τv = 0.1 s, τc = 5 ms. Bottom left: τv = 0.5 s, τc = 50 ms. Bottom right: τv = 0.5 s, τc = 5 ms.
range where the reliable synapse is optimal for a Poisson input is enlarged for correlated inputs (see top left plot in fig. 3.22). In conclusion, for correlated inputs it is more likely that the reliable synapse performs optimally.
3.3.3.2 Optimizing U with the Mutual Information

We now focus on the question of finding the optimal values of the release probability U that maximize the mutual information I(∆; ν). We will make use of the exponential p.d.f. of input rates defined in previous sections (see eq. 3.3.6): fν(ν) = e^(−(ν−νinf)/ν̄)/ν̄.
In the first place, in the limit U → 0, I(∆; ν) converges to zero.¹⁰ A precise formal

¹⁰As we saw in the previous section, that is not the case for Jsr(ν|∆) (nor for Jsr(ν|n)), which showed a paradoxical behavior in this limit. However, we must emphasize here the fundamental differences between the Fisher and Shannon information. Although related (and similar in behavior in many cases), they are qualitatively distinct: the Fisher information in the limit U = 0 only provides a lower bound on the reconstruction error. It does not say anything about the existence of an estimator which saturates this bound. This paradoxical limit seems to be one of those cases in which there is no such estimator (simply because there are no responses to estimate with). On the contrary, the mutual information gives a much more accurate quantification of how much the input ensemble can be constrained by observing the output. In this case, U = 0, the observation of no responses does not restrict the number of possible input messages, and thus I(∆; ν) equals zero.
explanation of this limit is the following:

Limit U → 0. Let us divide the synaptic model into the equivalent two-stage channel defined in section 2.2.3 of chapter 2. The first stage is a non-activity-dependent random filter which dilutes the input spike train, transmitting each spike with probability U, while in a second step we incorporate the vesicle dynamics. The outcome of the first stage is what was defined as the diluted (or decimated) input ρd_isi(∆|ν). If the input is Poisson, this dilution is equivalent to renormalizing the input rate by a factor U, i.e. the output is the same input process but with a lower rate:

ρd_isi(∆|ν) = ρisi(∆|Uν)    (3.3.20)

Let us compute the non-conditioned distribution ρd_isi(∆) of the diluted train:

ρd_isi(∆) = ∫₀^∞ dν ρd_isi(∆|ν) fν(ν) = ∫₀^∞ dν ρisi(∆|Uν) fν(ν)

Let us assume, for simplicity, that νinf = 0, and change the integration variable to ν′ = Uν, obtaining

ρd_isi(∆) = ∫₀^∞ dν ρisi(∆|Uν) e^(−ν/ν̄)/ν̄ = ∫₀^∞ dν′ ρisi(∆|ν′) e^(−ν′/(Uν̄))/(Uν̄) = ∫₀^∞ dν′ ρisi(∆|ν′) fUν̄(ν′)

The final expression of ρd_isi(∆) is simply the input distribution ρisi(∆) when the mean of the input ensemble has been renormalized to Uν̄. Thus, as we take U → 0, the p.d.f. fUν̄(ν′) tends to the Dirac delta distribution δ(ν′). When this occurs, the entropy H(∆d) associated with the distribution ρd_isi(∆) becomes zero¹¹ and the total information I(∆; ν) also vanishes (because at the intermediate variable ∆d the entropy has vanished).
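The key step above, that thinning a Poisson train with probability U is equivalent to renormalizing its rate to Uν, can be verified with a short Monte Carlo sketch (arbitrary illustrative values ν = 20 Hz, U = 0.3):

```python
import random, math

# Monte Carlo check of eq. (3.3.20): keeping each spike of a Poisson train
# independently with probability U yields another Poisson train of rate U*nu,
# so the diluted ISIs are exponential with mean 1/(U*nu) and CV ~ 1.

random.seed(0)
nu, U, n_spikes = 20.0, 0.3, 100000

# Poisson spike train: cumulative sum of i.i.d. exponential ISIs of mean 1/nu
t, train = 0.0, []
for _ in range(n_spikes):
    t += random.expovariate(nu)
    train.append(t)

diluted = [s for s in train if random.random() < U]  # random dilution stage
isis = [b - a for a, b in zip(diluted, diluted[1:])]

mean_isi = sum(isis) / len(isis)
var = sum((x - mean_isi)**2 for x in isis) / len(isis)
cv = math.sqrt(var) / mean_isi
print(mean_isi)  # close to 1/(U*nu) = 1/6 ~ 0.167, with cv ~ 1
```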
Does I(∆; ν) exhibit a non-monotonic behavior as the probability U is increased from zero to one? The answer depends on the amount of depression. An unreliable synapse (U < 1) still has some advantages over the reliable one (U = 1) in terms of transforming the gain of the transfer function. Figure 3.23 (top plots) shows the mutual information as a function of U for the same three values of τv chosen in fig. 3.20 (for the Fisher information), and the mean rate ν̄ = 10 Hz. Although the same values for the parameters τv and ν̄ were chosen, the three situations do not represent, as before, weak (τv = 100 ms), moderate (τv = 500

¹¹The differential entropy [Cover and Thomas, 1991] does indeed converge to minus infinity. If we discretize the signals, for instance by binning time, the entropy H(∆d) would converge exactly to zero as U → 0.
[Figure: Info (bits) (top) and Info Rate (bits/s) (bottom) versus U (0 to 1), three columns for τv = 100 ms, 500 ms, 1 s. Legend: CV = 1, 1.5, 2.]

Figure 3.23: Optimization of the release probability regarding the mutual information I(∆; ν), for different values of the input correlation magnitude CV and the recovery time constant τv. Top: Shannon information I(∆; ν) versus U. Bottom: information rate versus U. Left plots: τv = 100 ms. Center plots: τv = 500 ms. Right plots: τv = 1 s. Inset values apply for all plots. Common parameters: ν = 10 Hz, τc = 50 ms, N0 = 1.
ms) and strong depression (τv = 1 s). Now there is an exponential ensemble fν(ν) of input rates, implying that the three instances represent negligible, low and moderate depression. Therefore, for the Poisson case (black lines), only in the last two examples does it turn out that Uopt < 1. Again, the larger the product ντv (the stronger the depression), the smaller Uopt. Because in these examples Uopt lies near one for a Poisson input, the introduction of correlations disrupts the maxima and makes Uopt = 1 (see the explanation in the previous section). When we consider the information rate R(∆; ν) = νr I(∆; ν), since νr is a monotonically increasing function of U, we obtain qualitatively the same result, except that the optimal Uopt is pushed slightly up towards one.

In conclusion, when maximizing the mutual information I(∆; ν), if depression is strong and saturation effects are present, an unreliable synapse transmits more information than a reliable one.
3.3.4 Optimization of the distribution of synaptic parameters

In this section we will optimize the population distribution of synaptic parameters, D(U, N0, τv), introduced in section 2.2.4. After optimizing, we will compute the mutual information that the array of responses generated by the population of synapses, {∆i}, i = 1, ..., M, conveys about the population rate ν (see fig. 2.4).

As explained in chapter 2, in the population model proposed the distribution can be expressed as

D(U, N0, τv) = f(N0) R(U|N0) P(τv|N0, U)    (3.3.21)

where f(N0) and R(U|N0) are determined by two requirements derived from experimental data (see section 2.2.4). On the contrary, P(τv|N0, U) will be determined by the local optimization of individual synapses as (section 3.2.3)

P(τv|N0, U) = δ(τv − τopt(U, N0))    (3.3.22)

By this procedure we can obtain the marginal distribution of τv, g(τv), by just integrating D(U, N0, τv) over U and N0 after the optimization (eq. 3.2.20).
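The construction of such a population can be sketched as follows. Note that both `sample_N0` (a stand-in for f(N0)) and `tau_opt` (loosely inspired by the τopt ∼ 1/(Uν) relation of Fuhrmann et al. [2002], not the thesis's per-synapse maximization of I(∆; ν)) are hypothetical placeholders; only the sampling structure of eqs. (3.3.21)-(3.3.22) is the point:

```python
import random

# Monte Carlo sketch of eqs. (3.3.21)-(3.3.22): draw (U, N0) pairs and assign
# tau_v = tau_opt(U, N0) deterministically (the delta function in eq. 3.3.22);
# the resulting tau_v histogram estimates the marginal g(tau_v).

random.seed(1)
LAMBDA, NU_MEAN, M = 7.9, 20.0, 745   # lambda and M from the text, nu from fig. 3.24

def sample_N0():
    # Hypothetical stand-in for f(N0): decaying weights over N0 = 1..7
    weights = [0.45, 0.25, 0.13, 0.08, 0.05, 0.03, 0.01]
    r, acc = random.random(), 0.0
    for n, w in enumerate(weights, start=1):
        acc += w
        if r < acc:
            return n
    return 7

def tau_opt(U, N0):
    # Hypothetical placeholder, NOT the thesis's optimized values
    return 1.0 / (U * NU_MEAN * N0)

population = []
for _ in range(M):
    # Gamma-shaped R(U|N0) (illustrative scale), clipped to the unit interval
    U = min(1.0, random.gammavariate(LAMBDA, 0.05))
    N0 = sample_N0()
    population.append((U, N0, tau_opt(U, N0)))

tau_vs = [tv for _, _, tv in population]
print(len(population), min(tau_vs) > 0.0)  # -> 745 True
```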
To study the effect of the distribution on the transfer of information, we have picked three values of the parameter q of the Gamma functions R(U|N0), namely q = 13.2, 11.2 and 9.3. The first choice gives rise to a distribution more similar to the ones reported in [Dobrunz and Stevens, 1997] and [Murthy et al., 2001], where the mean number of docking sites¹² is <N0> ∼ 5. On the contrary, the choice q = 9.3 is closer to the distribution reported by Hanse and Gustafsson [2001a], where the mean number of primed vesicles when the synapse is at rest is ∼ 1. We would like to test which of the distributions results in a larger information when the optimization described is applied.

The second parameter of the distributions, λ, has been chosen to be the same in the three cases and equal to the value reported by Murthy et al. [1997], i.e. λ = 7.9. Thus, after computing the joint distributions D(U, N0) for the different q's, we have generated three populations, each one composed of M = 745 synapses. At each synapse (determined by the values U, N0) the value of τv is set by maximizing the information I(∆; ν) at that particular synapse. The populations of synapses built in this way are, on the one hand, optimally tuned to transmit information and, on the other hand, qualitatively in agreement with the experimental data found in hippocampal CA3-CA1 synapses [Murthy et al., 1997, 2001, Hanse and Gustafsson, 2001a].
¹²What Dobrunz and Stevens [1997] and Murthy et al. [2001] really measured was the size of the RRP when the synapse was at rest. Under our perspective, this is equivalent to the maximal size of the RRP, which is N0.
[Figure: three panels (q = 13.2, <N0> = 2.25; q = 11.2, <N0> = 1.89; q = 9.3, <N0> = 1.25), each with histograms of U, N0 and τv (ms).]

Figure 3.24: Three examples of optimization of the population distribution D(U, N0, τv). Each example represents a different value of q (indicated in the right plots), and is illustrated by three plots of the same color (top panel black, middle panel red, bottom panel blue). Within each panel: (Top left plot) Histogram of U generated by 745 synapses (which can be fitted with the Gamma function Γλ(U) of eq. 2.2.25, with λ = 7.9). (Bottom left plot) Histogram of N0 generated by the same synapses. (Right plot) Colored bars represent the histogram of τv (which is an estimation of the marginal g(τv)) after the local optimization was performed (see text). The blue line represents the histogram of T = τv/N0. Other parameters used: fν(ν) = e^(−(ν−νinf)/ν̄)/ν̄, ν̄ = 20 Hz, νinf = 1 Hz, τc = 20 ms, CVisi = 2.
Figure 3.24 shows the results of the construction of these three populations. In each of
the three panels (vertically aligned), the bottom left plot shows the histogram of the release
probability U (which, in the three cases, can be fitted with the Gamma function Γλ(U) of eq.
2.2.25). The top left plot shows the histogram of N0, which differs from one panel to the
other due to the different choice of q. The mean value ⟨N0⟩ is shown inside the plot. In
the top panel (black), where q = 13.2, the histogram of N0 decays slowly up to the value
N0 ∼ 5, beyond which the probability f(N0) is very small. The mean number of
sites is therefore ⟨N0⟩ = 2.25. On the other hand, in the bottom panel, where q = 9.3, the same
N0-histogram shows a rapid decay, and only the probabilities f(1) and f(2) are clearly different
from zero. In this case ⟨N0⟩ = 1.25.
In the right plot of each panel, the colored bars illustrate the histogram of τv. The shape
of the marginal g(τv) can be estimated from this histogram. In the three examples, it has
a prominent peak at ∼ 60 ms. The top and middle ones show a significant tail, which
in the first case reaches almost τv ∼ 200 ms. The superimposed blue line shows the histogram
of the variable T = τv/N0. As previously shown (section 3.3.2.3), it represents the recovery
time of the probability. This is indeed the variable which can be directly measured in the
experiments. Its bimodal shape can be traced back to the dependence of τopt on the number of
vesicles (section 3.3.2.3). There, it was shown that there is a qualitative difference between
the optimization of a single-site synapse and a multi-site synapse: τopt depends linearly on
N0 only when N0 ≥ 2. When N0 = 1, τopt takes a value “apart” from the linear trend it follows
when N0 ≥ 2 (see fig. 3.16). As a result, the small and wide bump centered at T ∼ 25 ms is
produced by the synapses with N0 ≥ 2, while the tall and narrow peak centered at T ∼ 60
ms is the outcome of single-site synapses. With this explanation, it is clear why the small
bump is less prominent in the bottom panel: the number of synapses with N0 ≥ 2 is smaller.
Once the three populations have been generated, we have computed the information that
the array of IRIs produced in each synapse, {∆i}, i = 1, . . . , M, conveys about ν. Because the mutual
information is not additive [Cover and Thomas, 1991], the information conveyed by the
set of responses is not the sum of the individual informations. We will use eq. 3.2.15 to compute
I({∆i}; ν). To compute J({∆i}|ν) first, we simply sum the individual contributions of each
contact (eq. 3.2.16):

J({∆i}|ν) = ∑_{i=1}^{M} Ji(∆i|ν)    (3.3.23)
The result in the three cases reads:

I({∆i}; ν) = 7.0 bits if q = 13.2; 6.9 bits if q = 11.2; 6.9 bits if q = 9.3.    (3.3.24)
Therefore, we conclude that the differences between these populations are not manifested in
the value of I({∆i}; ν), since a variation of 0.1 bits is within the error.
3.4 Conclusions and Discussion
Several previous works have addressed the question of whether short term depression
may increase the information transmitted from the pre-synaptic terminal to the post-synaptic
cell [Goldman, 2000, Fuhrmann et al., 2002, de la Rocha et al., 2002]. All of them conclude
that, under certain assumptions, depression may be an advantageous strategy to transmit
information. In the last two chapters, we have studied the problem in detail, going further in
several aspects: i) a wide family of correlated inputs was considered, consisting of all the
renewal processes with exponential correlations; ii) a synaptic model which includes an RRP
with several vesicles was used; iii) most of the calculations were carried out analytically,
providing a deeper understanding of the problem.
Furthermore, we have adopted different encoding variables in the input and output: in our
case, the input signal is the spike rate, ν, whereas in the output we consider two encoding
schemes, timing and counting.
The main results obtained here arise from the interaction between two effects
which take place at depressing synapses, namely saturation [Abbott et al., 1997, Tsodyks
and Markram, 1997] and the transformation of auto-correlations13 [Goldman et al., 1999,
2002, de la Rocha et al., 2002], which, when the correlations are exponentially shaped and positive, can
be viewed as a reduction of variability.
We briefly enumerate the main results:
1. If the rest of the parameters are fixed, increasing the input rate ν always increases the
reconstruction error per response made in the estimation of ν (see top plot of fig. 3.1). In
the estimation performed per unit time, there exists an optimal ν which, for realistic
values of the parameters, is around 5−15 Hz (see bottom plot of fig. 3.1).
2. On the other hand, for a wide family of rate distributions f(ν) (i.e. Gamma distributions)
there exists an optimal mean input frequency νopt (∼ 5−20 Hz) at which the
mutual information is maximal (see fig. 3.4).
3. If the input is an un-correlated Poisson process (or has negative correlations), a static
synapse (τv = 0) is more advantageous than a depressing synapse (τv > 0). Moreover,

13 When the input is embedded with positive temporal autocorrelations between the spikes, depression acts as
a “filter” of those correlations. However, if the input is un-correlated, depression introduces negative correlations
[Goldman et al., 1999].
both Fisher and mutual informations are monotonically decreasing functions of τv (see
top plot of fig. 3.5 and middle plot of fig. 3.6).
4. For the case N0 = 1, if the input has short-range positive correlations, there always
exists a non-zero vesicle recovery time, τopt > 0, which gives a larger information
(both Fisher and Shannon) than the static situation, i.e. τv = 0 (see top plot of fig. 3.5
and middle plot of fig. 3.6). However, for realistic values of the parameters, τopt falls
in the interval 50−200 ms, smaller than the values reported by the experiments.
5. When N0 > 1, having positive input correlations is not a sufficient condition to obtain
τopt > 0. However, there exists a minimal value of CVisi above which τopt > 0. This
CVisi “threshold” is larger the bigger N0 is, and also increases with τc (fig. 3.13). We
can summarize all these dependencies by a heuristic inequality which must hold to
have τopt > 0:

ν τc N0 < constant · (CV²isi − 1)    (3.4.1)
6. The optimal value of the vesicle recovery time is inversely related to the diluted
rate νd = Uν (fig. 3.12):

τopt ∝ 1/(Uν)^α,  0 < α < 1
7. When considering the metabolic consumption as a constraint on obtaining high recovery
rates 1/τv, the optimization of τv results in values which range from 200 to 1000 ms
depending on the other parameters. These values are in accordance with the experimental
data.
8. For τv = τopt, a synapse with one (if we consider the Fisher information, fig. 3.9) or
two (for the mutual information, fig. 3.11) docking sites (N0 = 1, 2) performs as well
as (Poisson input) or better than (correlated input) a multiple-site terminal (N0 > 2).
9. If the synapse does not show short term depression (τv = 0), the optimal release
probability is one in all the cases, i.e. Uopt = 1 (figs. 3.21 and 3.22).
10. But, if the synapse shows depression, the Fisher information per unit time, the mutual
information and the information rate may have a non-trivial optimal release probability,
Uopt < 1. In other words, unreliability becomes an advantage for transmitting information
(eq. 3.3.19). As opposed to the optimization of τv, input correlations are not
necessary to obtain Uopt < 1. Indeed, the larger the correlations, the more unlikely it is that
Uopt = 1.
11. The stronger the depression (the larger Uντv), the lower Uopt. For the Poisson
case (when considering the Fisher information per unit time), the following relation
holds (eq. 3.3.19):

Uopt ∝ const/(ντv)    (3.4.2)
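The heuristic relations (3.4.1) and (3.4.2) are straightforward to evaluate numerically; in this small sketch the proportionality constants are arbitrary placeholders, since the text does not fix them:

```python
def depression_pays_off(nu, tau_c, n0, cv_isi, const=1.0):
    """Heuristic condition of eq. 3.4.1 for tau_opt > 0:
    nu * tau_c * N0 < const * (CV_isi**2 - 1).
    Rates in Hz, times in seconds; `const` is a placeholder."""
    return nu * tau_c * n0 < const * (cv_isi ** 2 - 1.0)

def u_opt_scaling(nu, tau_v, const=1.0):
    """Scaling of eq. 3.4.2, Uopt ~ const/(nu*tau_v), capped at 1
    since U is a probability; `const` is again a placeholder."""
    return min(1.0, const / (nu * tau_v))
```

For example, with ν = 10 Hz, τc = 20 ms, N0 = 1 and CVisi = 2 the condition holds (0.2 < 3), consistent with result 4 above, while a Poisson-like input (CVisi = 1) never satisfies it.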
The capacity of synaptic activity-dependent processes to increase the computational power
of neurons has been widely discussed in recent years: as a gain control mechanism [Abbott
et al., 1997]; as non-linear temporal filters [Natschlager et al., 2001, Maass and Zador,
1999]; as a mechanism for reading a synchrony code [Senn et al., 1998]; as a mechanism to
decode information conveyed in bursts [Lisman, 1997, Matveev and Wang, 2000a]; as memory
buffers [Maass and Markram, 2002]; as redundancy-removing filters [Goldman et al.,
1999, 2002, de la Rocha et al., 2002]; as a mechanism which affects the efficacy of different
codes [Tsodyks and Markram, 1997, Fuhrmann et al., 2002, de la Rocha et al., 2002]; and
as a mechanism responsible for adaptation [Chance et al., 1998a, Adorjan and Obermayer,
1999]. The study of its implications at the network level has only started, but has already
yielded interesting results [Tsodyks et al., 1998, 2000, Reutimann et al., 2001, Loebel and
Tsodyks, 2002, Pantic et al., 2002].
In the present chapter, we have comprehensively studied the way activity-dependent depression
transforms the temporal auto-correlations among the input spikes, leaving the train
of synaptic responses with a different temporal structure. In particular, we have shown that,
if the input auto-correlations are positive and confined to a short range (e.g. bursts), that
transformation can filter out spikes in an informationally efficient way. Although the information
conveyed in a certain time window decreases (basically because there are fewer responses
per unit time), each synaptic response conveys more information about the input rate than if
no spikes were filtered. If, on the other hand, we consider the information rate but impose
a metabolic constraint on the recovery rate of the vesicles (which simply states that higher
recovery rates cost more energy than low ones), again a non-zero recovery time τv is advantageous
over a non-depressing synapse. In both cases, the transmission is efficient in terms
of metabolic consumption; that is, we have optimized the information per unit energy.
In addition, we have shown that, if depression is prominent, the existence of unreliability
in synaptic transmission can be explained in terms of optimization of the information.
Unreliable synapses turn out to transmit a larger information per response. What seemed to be
a “bug” in the circuitry has turned out to be an advantageous strategy to overcome the saturation
produced by short term depression.
Quantitatively, our prediction of the optimal recovery time constant τv, when no metabolic
constraint on the recovery rate is imposed, seems to be lower than what has been measured
in the experiments: if the input ν ∼ 10−20 Hz, τopt ranges from 50−150 ms, while experiments
in slices report values from 400 ms [Markram et al., 1998a, Petersen, 2002] to 2000
ms [Dobrunz and Stevens, 1997]. This inconsistency could be due to at least three reasons: i)
the experimental conditions do not resemble in-vivo conditions, that is, synapses in the
slice do not behave as in an intact brain (something imputable to many different factors14
[Steriade, 2001]); ii) small rates (2−5 Hz) might carry information, so that they are not, as
usually considered, spontaneous rates; iii) the metabolic consumption of the recovery must
be considered in the optimization.
On the other hand, if the recovery metabolic cost is considered in a simple manner (by
means of an energy function linear in the recovery rate), the values of the optimal τopt are
in the range 200−1000 ms, and again bursty inputs give larger values. It seems, then, that
in synapses with STD the recovery time value may have arisen as a trade-off between
energy efficiency and maximization of the information transfer.
The third synaptic parameter analyzed in the present chapter is the size of the
RRP, N0, i.e. the maximum number of ready vesicles. The result of its optimization depends
on the ability of the synapses to adjust their τv to its optimal value: if τv = τopt, the optimal
N0 is one or two, depending on whether we maximize the Fisher or the Shannon information,
respectively. If, on the contrary, τv is fixed to a large value (e.g. ∼ 1 s), larger values
(N0 > 7) are optimal. If the first scenario occurs, then our findings are in agreement with
recent experimental findings [Hanse and Gustafsson, 2001a,b], which observed that the
most immediately releasable pool (what they call the pre-primed pool) has on average one or
two vesicles [Hanse and Gustafsson, 2001a]. These experiments also show that the recovery
of the pre-primed pool takes less than a hundred milliseconds. Besides the pre-primed pool,
they suggest there exists a docked pool with slower recovery kinetics and a larger size. In this
way, our result of an optimal pool with one or two vesicles and fast recovery rates (τv = 50
ms) could represent the pre-primed pool, which has a strong and immediate impact on the
release probability and therefore could eliminate the correlations due, for instance,
to bursts. A second and larger pool, harder to deplete, could have a different role, and was
not modeled in this work. It is necessary to mention that this perspective of two pools
with different sizes was proposed by Matveev and Wang [2000b] to explain the large paired-pulse
depression observed in neocortical slices [Markram and Tsodyks, 1996a, Varela et al.,
1997], which seems at odds with a single large RRP. At this point, if N0 = 1, 2, our model
would be able to reproduce such a high paired-pulse depression.
14 We have noticed, for instance, that slice recordings are often performed at room temperature (∼ 22 °C),
which, as explained by Dobrunz and Stevens [1999], may contribute to slowing down the kinetics of neurons
by up to a factor of 3.

We would now like to mention a parameter relationship which has appeared in some of
our results and which is in agreement with previous works [Fuhrmann et al., 2002], and to make
an experimental prediction with it. The two synaptic parameters U and τv, and the input rate
ν, have been found, in different partial results, to be related when one of them is optimized:

Uopt ∝ 1/(τv ν)    (3.4.3)

τopt ∝ 1/(U ν)    (3.4.4)
Experiments, however, have shown little correlation between τv and U [Petersen, 2002]. Our
prediction is that those neurons which code information at high rates (high ν) should have
either very unreliable synapses (low U) or fast-recovering ones (low τv). On the other
hand, if the synapses of a pre-synaptic neuron have very slow recovery (large τv) or are very
reliable (U ∼ 1), the neuron should fire at low rates.
Future work
The work presented in this chapter is a first step towards understanding the role of the
different biophysical properties of cortical synapses in information processing. To start with,
we have chosen an elementary input-output system: a model of a pre-synaptic terminal which
captures the effects of unreliability and short-term depression. Other mechanisms, such as
facilitation, post-synaptic dynamics like desensitization of transmitter receptors or synaptic
conductance kinetics, should be added in further work.
Testing whether the results obtained here remain the same when the input is a natural stimulus
with positive auto-correlations is also important to validate the conclusions. Thus, it is our
purpose to compute numerically the information transmitted by bursty stimuli recorded in
hippocampal pyramidal cells [Dobrunz and Stevens, 1999, Fenton and Muller, 1998].
The analysis carried out on the information transmitted by a population of synapses
should be continued. The results obtained so far explain neither the benefits of having a
heterogeneous distribution of parameters nor what sort of distribution would provide a
better representation of the input. Obtaining, by numerical methods, the optimal distribution
Dopt(U, N0, τv), and comparing its information transfer with that of the locally
optimized distribution, would clearly show whether or not such heterogeneity is relevant to
transmit an ensemble of inputs.
Information theory provides a mathematical framework for quantifying how well we can
establish the stimulus identity by observing the responses. However, it does not tell us how a
neuron extracts and processes the information present in the responses, nor what sort of
computations can be performed by using this information. For example, it has been shown
[Fuhrmann et al., 2002] that the size of an EPSP conveys information about the times of the
previous spikes. In this thesis we have worked under the hypothesis that information in the
input was encoded in the firing rate. Whether neurons use one or the other (or both) coding
strategies depends on their capacity to extract (decode) the information of such a signal, and
to encode their output response in the same manner (assuming that, from one stage to the next,
the code does not change). Therefore, we consider that an interesting line of investigation
that complements the work presented in this chapter is the analysis of the integration and
response of the whole cell to inputs with complex temporal structure. This analysis would
provide hints of the mechanisms that neurons might use to process the information conveyed
in their inputs. The next two chapters point in this direction.
Chapter 4
Synaptic current produced by a
population of synchronized neurons
4.1 Introduction
A major challenge in neuroscience is to understand how a cortical neuron, receiving a
time-varying signal composed of small current pulses from around 10³−10⁴ [Braitenberg
and Schuz, 1991] synapses distributed along its dendritic tree, is capable of performing any
computation. If we had to find a simile, it would be like reading the information encoded
in the outcome of a shower, where thousands of drops (representing spikes) hit our
skin in a continuous manner, and all our work would be to make a computation out of that
barrage of water and encode the solution, in the best case, in a series of simple binary sounds
coming out of our throat. Moreover, several facts contribute to make such a fabulous task
even more complex: a) the spike trains impinging on the dendrites are highly irregular [Softky
and Koch, 1993], seemingly random, which seems incompatible with the simplest view of
cortical processing [Softky and Koch, 1993, Softky, 1995]; b) these spike trains have to
transmit their information to the post-synaptic neuron across cortical synapses which, as discussed
in the previous chapter, are often highly unreliable [Allen and Stevens, 1994, Dobrunz et al.,
1997], meaning that a large fraction of afferent spikes fail to reach the post-synaptic cell;
c) these synapses display a large number of different activity-dependent processes such as
short-term depression, facilitation, augmentation, post-tetanic potentiation, etc. (see [Zucker
and Regehr, 2002] for a review), which make post-synaptic responses wax and wane
as the pre-synaptic cell dictates with its activity. In addition, these dynamical processes are
often widely heterogeneous in their quantitative features from one synapse to another, even
among synaptic boutons of the same axon [Murthy et al., 1997, Markram et al., 1998a, Gupta
et al., 2000].
It has long been generally agreed that a neuron's output firing rate contains
information about its inputs [Adrian, 1926, Werner and Mountcastle, 1965, Tolhurst et al.,
1983, Tolhurst, 1989, Britten et al., 1992, Tovee et al., 1993]. Recently there has been growing
evidence that the precise times of the spikes also convey information [Panzeri et al.,
2001, Rieke et al., 1997, Bialek et al., 1991, Softky, 1995]. Hence, there exists a debate in
which the two poles are represented by two limiting coding hypotheses, namely: i) a pure rate
code, in which the signal is represented by just the mean number of spikes in a certain time
window, and where the variance of that number is only noise; ii) the idea that the
spike patterns produced by a single neuron encode information as if it spoke meaningful
binary words [Rieke et al., 1997], that is, that the exact position of each spike is not noise but
signal.
However, there are experimental indications that other forms of coding, lying between
these two coding schemes, are used by the central nervous system ([deCharms and
Merzenich, 1996, Bergman et al., 1995, Murthy and Fetz, 1996, Vaadia et al., 1995, Prut
et al., 1998]; see also [deCharms and Zador, 2000] for a review). In particular, the so-called
coordinated-coding hypothesis suggests that the temporal relation among signals
from multiple neurons plays a crucial role in the processing of messages in the brain [Carr,
1993, Hopfield, 1995, 1996]. To decode the information content of a spike train, one must
compare its temporal pattern with the output of other neurons. Information cannot be extracted
by independent analysis of the spike trains, nor by pooling the independent votes
of individual neurons.
The synchronous firing of action potentials (APs) by neurons from a given population has
been hypothesized to be a particular instance of coordinated coding ([Gray and McCormick,
1996, Singer, 1999]; see also [Salinas and Sejnowski, 2001] for a recent review). Generalizing
the idea of synchrony to longer delays in the firing of coordinated neurons and to more
complex temporal relations, correlated neural activity is believed to play an important role
in cortical processing such as attention, gain modulation, and sensory encoding [Salinas
and Sejnowski, 2001].
The impact of correlated input on the response of a neuron has been studied by many different
groups [Bernander et al., 1994, Murthy and Fetz, 1994, Stevens and Zador, 1998, Feng
and Brown, 2000, Bohte et al., 2000, Salinas and Sejnowski, 2000, Rudolph and Destexhe,
2001, Kuhn et al., 2002, Moreno et al., 2002].
The response of a neuron with STP has also been studied in previous works [Tsodyks
and Markram, 1997, Markram et al., 1998a,b, Abbott et al., 1997]. The most important
result found by both groups is that no information about a constant input rate can be transmitted
when operating in the saturation regime (section 2.3). In this regime, the response
of the output neuron reflects only changes of the rate during a brief transient [Tsodyks and
Markram, 1997]. If the input is a time-varying signal, the output of the neuron, working
in saturation, is sensitive to proportional changes of the input rather than to the absolute
magnitude of those changes [Abbott et al., 1997].
In chapter 2, the statistics of the output of a model of vesicle depletion were analyzed. The
implications for information transmission through that stochastic channel were studied in
chapter 3. In the present chapter we will examine how those synaptic releases are integrated
by the post-synaptic cell. In previous chapters, however, we only considered positive temporal
auto-correlations in the input spike train, because the analysis was performed on a single
synaptic contact. Since the system will now be a spiking neuron receiving afferents from
approximately 10⁴ synaptic contacts, our aim here will be to quantify the relevance of the
spatial cross-correlations between the impinging trains of APs. In this way, two examples
of cross-correlations will be considered: i) first, the correlations existing between synaptic
contacts which “belong” to the same pre-synaptic neuron; in other words, we will extend the
model of connection introduced in previous chapters from a single contact to an arbitrary
number of synaptic contacts between the pre- and post-synaptic neurons. ii) The case in
which a pre-synaptic population of neurons fires with a certain degree of synchrony will be
considered, i.e. zero-lag cross-correlations will be included in the pre-synaptic activity.

The goal in these last two chapters is twofold: first, to obtain an analytical description of
the statistics of the current, as well as the first moment of the output rate, as a function of the
synaptic parameters τv and U (see section 2.2.1) and the cross-correlations of the incoming
spike trains; secondly, to evaluate the results in terms of the ability of synchrony and STD
to increase the capacity to process and communicate information between neurons.
In the present chapter we will model and parameterize the afferent current and obtain
the expressions for its first- and second-order statistics. In the next chapter, the response
of a leaky integrate-and-fire (LIF) neuron receiving this current will be analyzed, and the
results and conclusions of these two chapters will be laid out. It will be shown that the presence
of STD results in a wide range of effects which could not be obtained with synchrony alone.
4.2 Parameterization of the afferent current
We will first specify the parameterization of the current, which will be integrated by
the output neuron in the following chapter. What follows is a brief description, based on the
explanations in chapter 1, of the way vesicle release takes place and how the synaptic current
is then generated. We will also explain the several simplifications introduced in order to
obtain a tractable model.
How is the synaptic current generated? When an action potential invades the pre-synaptic
bouton, one vesicle which is ready-for-release may fuse with the cell membrane, releasing its
transmitter content into the synaptic cleft. The transmitter then takes part in a chemical
reaction in which a certain number of molecules bind to a closed ionic channel receptor and
open it. The opening of a synaptic channel can be modeled with a simple gating variable,
0 < Ps(t) < 1, which measures the probability of the channel being open. Because the time
it takes to open the ionic channel is much shorter than the time it takes to close, the dynamics
of this gating variable Ps(t) can be modeled as (see e.g. [Dayan and Abbott, 2001])

dPs(t)/dt = −Ps(t)/τs + (1 − Ps(t)) Pmax ∑_rel δ(t − ti)    (4.2.1)
where Pmax is the maximum fraction of open channels when a release occurs and all channels
were previously closed. The sum ∑_rel δ(t − ti) represents the series of releases which
take place at times ti (i = 1, 2, . . .). The synaptic time constant τs is a characteristic of
the type of receptor associated with that synaptic conductance. For excitatory conductances,
channels mediated by AMPA receptors have a time constant τs ∼ 5 ms, while for those mediated
by NMDA receptors it is usually much larger, τs ∼ 150 ms. In the case of inhibitory
synapses, the GABA_A time constant is τs ∼ 10 ms. The conductance of a specific synapse i
is obtained by multiplying the gating variable Ps(t) by a constant parameter gi (which represents
the maximum conductance at that particular synapse, achieved when all channels are
open): gi(t) = gi Ps(t). Finally, the current produced by that synaptic contact i is defined as
Ii(t) = gi(t)(V(t) − Es) = gi Ps(t)(V(t) − Es)    (4.2.2)

where Es is the reversal potential of the synaptic channel, and V(t) is the membrane potential
of the neuron. If we neglect the fluctuations of the membrane potential V(t) around its
mean value, we can remove this dependence from equation 4.2.2 by writing the mean voltage
V instead of V(t). Therefore, following this model, if for example one AMPA-mediated
conductance is activated by spontaneous release of vesicles at a very low rate (∼ 1 Hz), and
we patch the post-synaptic neuron with an electrode and record the incoming currents at the
soma, we would observe a series of excitatory post-synaptic currents (EPSCs) described
by the following expression:

I(t) = gi Pmax (V − Es) [ e^{−(t−t1)/τs} Θ(t − t1) + e^{−(t−t2)/τs} Θ(t − t2) + . . . ]
     = gi Pmax (V − Es) ∑_k e^{−(t−tk)/τs} Θ(t − tk)    (4.2.3)
The sum over k accounts for the consecutive releases taking place at contact i. Let us
now include the synaptic activity of the rest of the synaptic contacts. Although the particular
location of each synaptic contact in the dendritic arbor affects the way EPSCs are integrated
[Rall and Segev, 1987, Segev and Rall, 1998, Schutter, 1999, Magee, 2000], we will neglect
the spatial dimension by assuming that the EPSCs from each synapse add linearly at the
soma. Therefore, the total synaptic current coming from N contacts takes the simple form

I(t) = ∑_{i=1}^{N} gi Pmax (V − Es) ∑_k e^{−(t−t_k^i)/τs} Θ(t − t_k^i)    (4.2.4)
The net charge entering the cell at each release is the integral of the EPSC, that is, Q =
gi Pmax (V − Es) τs. We will make a final simplification of this scheme, which is to assume
that the synaptic time constant τs is much shorter than the rest of the time constants of the
problem, namely the membrane time constant τm (which will be defined in section 5.1) and
the vesicle recovery time constant τv. This is a reasonable approximation in the case of
AMPA-mediated currents, not so good for GABA_A, and definitely wrong in the case of the
long time courses given by the NMDA-mediated conductance. Assuming that the EPSCs
(and IPSCs) are very brief pulses of current entering the cell, we can finally express the total
synaptic current as

I(t) ≃ ∑_{i=1}^{N} Cm Ji ∑_k δ(t − t_k^i)    (4.2.5)

We have introduced the quantity Ji = gi Pmax (V − Es) τs / Cm, which has voltage units and exactly
represents the magnitude of the instantaneous jump of the membrane voltage upon the injection
of an EPSC. In other words, Ji is the size of the excitatory post-synaptic potential
(EPSP) produced by a release at the synaptic contact i. Cm is the capacitance of the cell
membrane.
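As an illustration of eqs. 4.2.1−4.2.5, the gating variable can be integrated with a simple forward-Euler scheme and the jump Ji computed from the same parameters. This is a schematic sketch; the numerical values (conductance, capacitance, time step) are only indicative and not taken from the text:

```python
def simulate_gating(release_steps, n_steps, tau_s, p_max, dt):
    """Forward-Euler integration of eq. 4.2.1:
    dPs/dt = -Ps/tau_s + (1 - Ps) * Pmax * sum_k delta(t - t_k)."""
    ps = [0.0] * n_steps
    for n in range(1, n_steps):
        p = ps[n - 1] * (1.0 - dt / tau_s)   # exponential decay toward 0
        if n in release_steps:               # release: jump toward 1 by Pmax
            p += p_max * (1.0 - p)
        ps[n] = p
    return ps

def epsp_jump(g_i, p_max, v_mean, e_s, tau_s, c_m):
    """J_i = g_i * Pmax * (V - E_s) * tau_s / C_m (below eq. 4.2.5)."""
    return g_i * p_max * (v_mean - e_s) * tau_s / c_m

# AMPA-like example: tau_s = 5 ms, dt = 0.1 ms, one release at t = 10 ms
ps = simulate_gating({100}, 1000, 5e-3, 0.5, 1e-4)
# illustrative numbers: g_i = 1 nS, E_s = 0, V = -65 mV, C_m = 250 pF
j_i = epsp_jump(1e-9, 0.5, -65e-3, 0.0, 5e-3, 250e-12)
```

Note that with the sign convention Ii = gi Ps (V − Es), an excitatory synapse (Es = 0 > V) yields a negative (inward) current, so in this example the EPSP amplitude is |Ji|, about 0.65 mV.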
Now, what determines the statistics of the afferent current I(t)? As can be seen in eq.
4.2.5, the times t_k^i, where i is the contact index (i = 1, 2, . . . , N) and k the release index
(k = 1, 2, . . .), dictate the temporal structure of the current. In turn, the times at which
the pre-synaptic APs reach the pre-synaptic terminals regulate in time the occurrence
of releases. Since in the next chapter the afferent current will be simplified, by means of the
diffusion approximation (section 5.2.1), and described as a Gaussian process, we will only
compute its statistics up to second order. Before starting the calculation of the correlation
function of the total current, we will describe the correlations between the spikes and
compute the correlations between the releases.
4.3 Afferent spike trains
As we said in the Introduction, we would like to study the interaction between short-time-scale
positive spatial cross-correlations, that is, synchrony among pre-synaptic neurons,
and unreliable short-term depressing synapses. Thus, we define the APs coming from pre-synaptic
neuron i as a series of delta functions centered at the spike times:

Si(t) = ∑_l δ(t − t_i^l)    (4.3.1)
Superimposing the activity of all pre-synaptic cells, one obtains a compound train made up
of the contributions of C pre-synaptic cells:

S(t) = ∑_{i=1}^{C} ∑_l δ(t − t_i^l)    (4.3.2)

The second-order statistics of this compound train are completely defined by the two-point
connected correlation function [Bialek et al., 1991], defined as

C(t, t′) ≡ ⟨ (S(t) − ⟨S(t)⟩)(S(t′) − ⟨S(t′)⟩) ⟩    (4.3.3)
where the angles< · > denote average over the ensemble of input trains. The function
C(t, t′) represents the excess of probability of finding a spike (independently of the neuron
it is coming from) at timet given the occurrence of one spike at timet′. If there were no
correlation between the spikes, this function would equal a Dirac delta function centered at
zero. This would be the case, if the afferent spikes were independent Poisson process. We
model each afferent individual fiber spiking activity as an stationary Poisson processes with
identical rateν. However, the individual trains are not independent of each other, but are
partially synchronized, that is, they show cross-correlations with zero time scale. This is
formally imposed by rearranging the correlation function as
C(t − t′) = ⟨S(t) S(t′)⟩ − ⟨S(t)⟩⟨S(t′)⟩    (4.3.4)
         = ⟨∑_{i,j}^C ∑_{l,m} δ(t′ − t_i^l) δ(t − t_j^m)⟩ − (∑_i^C ⟨∑_l δ(t − t_i^l)⟩)²
         = ∑_i^C ⟨∑_{l,m} δ(t′ − t_i^l) δ(t − t_i^m)⟩  [same neuron]  +  ∑_{i≠j}^C ⟨∑_{l,m} δ(t′ − t_i^l) δ(t − t_j^m)⟩  [diff. neurons]  −  C²ν²
Since all the Poisson processes are stationary (ν does not depend on t), the correlation function cannot depend on the absolute times t or t′ but only on their difference. Moreover, we have replaced ⟨∑_l δ(t − t_i^l)⟩ by the rate ν because it represents the probability of finding a spike at time t. We have reached an expression with two terms: the auto-correlation of each spike train arriving from the same neuron, and a second term, which represents the cross-correlations from different pre-synaptic neurons. Because individual trains are Poisson processes, the probability of finding two spikes at times t and t′ is just the squared rate ν², except if t = t′ and the two spikes are one and the same. This trivial correlation is expressed with a Dirac delta function centered at t − t′, so that the first term in eq. 4.3.4 reads

∑_i^C ⟨∑_{l,m} δ(t′ − t_i^l) δ(t − t_i^m)⟩  [same neuron]  =    (4.3.5)
  ∑_i^C ⟨∑_l δ(t′ − t_i^l) δ(t − t_i^l)⟩  [same spike]  +  ∑_i^C ⟨∑_{l≠m} δ(t′ − t_i^l) δ(t − t_i^m)⟩  [diff. spikes]  =
  = Cν δ(t − t′) + Cν²
The second term, accounting for the cross-correlation, reflects the fact that, given a spike emitted by the i-th neuron at time t, there is a probability ρ that the j-th neuron spikes at time t as well:

∑_{i≠j}^C ⟨∑_{l,m} δ(t′ − t_i^l) δ(t − t_j^m)⟩  [diff. neurons]  = C(C − 1) ν ρ δ(t − t′) + C(C − 1) ν²    (4.3.6)
We take this expression as a definition of the correlation parameter ρ, which grades the strength of the cross-correlations: if ρ = 0 all pre-synaptic neurons fire independently, while if ρ = 1 the C individual trains are all copies of the same train, or in other words, the spikes from different neurons are all aligned in time. This way of establishing correlations has been used previously by many authors [Rudolph and Destexhe, 2001, Kuhn et al., 2002]. Now, putting equations 4.3.5 and 4.3.6 together in eq. 4.3.4, we reach

C(t − t′) = C ν [1 + ρ (C − 1)] δ(t − t′)    (4.3.7)
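Zero-lag cross-correlations of this kind can be realized numerically with a thinning construction in the spirit of the schemes used in the works cited above: each of the C trains keeps every spike of a common "mother" Poisson train with probability ρ. The following sketch (Python, with illustrative parameter values of our own choosing, not taken from the text) generates two such trains and checks that the empirical spike-count correlation coefficient reproduces ρ:

```python
import numpy as np

rng = np.random.default_rng(0)

def mip_trains(C, nu, rho, T, rng):
    """C Poisson trains of rate nu with pairwise zero-lag correlation rho:
    every spike of a mother Poisson train of rate nu/rho is kept
    independently by each child train with probability rho (0 < rho <= 1)."""
    n_mother = rng.poisson(nu / rho * T)
    mother = np.sort(rng.uniform(0.0, T, n_mother))
    return [mother[rng.random(n_mother) < rho] for _ in range(C)]

nu, rho, T = 20.0, 0.3, 2000.0            # illustrative values (Hz, -, s)
t1, t2 = mip_trains(2, nu, rho, T, rng)

# empirical spike-count correlation coefficient in 50 ms windows
edges = np.linspace(0.0, T, int(T / 0.05) + 1)
n1, _ = np.histogram(t1, edges)
n2, _ = np.histogram(t2, edges)
rho_hat = np.corrcoef(n1, n2)[0, 1]
```

Each child train remains Poisson with rate ν (independent thinning of a Poisson process preserves the Poisson property), while pairs of trains share exactly synchronous spikes, which is the ρ-dependent delta term of eq. 4.3.7.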
4.4 Statistics of the synaptic releases
We turn now to the calculation of the correlation function of the synaptic releases produced by the afferent spike statistics defined above. We will make use here of the correlation function computed for a single contact in chapter 2. First we will define the dynamics of the probability of transmission in a single contact. Later, we introduce a generalization of the single-contact model in which the number of synaptic contacts established between the two neurons is a new parameter M.
4.4.1 Dynamics of one synaptic contact between two neurons
In this section we will solve the dynamics of the system, composed of a synapse with a single docking site (N_0 = 1) receiving a Poisson input. This simple synaptic model is
Figure 4.1: Model of a single synaptic contact. Both spikes and synaptic responses are treated as point events. A release may occur with probability U whenever a spike arrives at the synapse and a vesicle is ready-for-release. After a release, a new vesicle gets prepared within a stochastic time drawn from an exponential distribution of parameter τ_v.
illustrated in figure 4.1. The probability of transmission upon arrival of a spike equals the probability that a vesicle is ready-for-release, times the probability of release when the vesicle is docked, U. We will only define the dynamics of the former, because the parameter U is constant¹.
We can now determine the dynamics of the probability of the vesicle being ready-for-release at two different levels: i) for a particular realization of the input spike train; ii) for the whole input ensemble, i.e. the dynamics without knowledge of the particular Poisson realization of the input train. We shall denote the first probability as p_v(t), and its dynamics will depend on the precise times of the spikes. We will refer to the second one as ⟨p_v(t)⟩, where the brackets mean an average over all the input trains, and its dynamics will not depend on the precise spike times but on the firing rate ν.
Dynamics of p_v(t)
Let us consider first the case where we have a given input spike train, described by the function S(t) = ∑_l δ(t − t_l), and we would like to know how our initial knowledge about the state of the vesicle site evolves in time. The diagram of figure 4.2 describes a time step dt of the dynamics of the system. Let us suppose that we detect the presence of the vesicle at time t′, but with some uncertainty, i.e. we know the value of p_v(t′). During a very short time step dt, two processes contribute to change the probability for the vesicle to be ready at time t′ + dt:
• a) If a spike arrives in (t′, t′ + dt) and the vesicle was prepared, with probability U it is released. This is expressed by the negative transition rate −p_v(t′) S(t′) U (red line in diagram 4.2).
¹If we had considered facilitation, U would have its own dynamics in which every afferent spike makes it increase towards one, while in the interval between spikes it would decay exponentially to zero [Tsodyks and Markram, 1997].
Figure 4.2: Diagram of the temporal evolution of a system composed of a single synaptic contact. At time t the contact has a vesicle prepared for release with probability p_v(t), or the contact is empty with probability 1 − p_v(t). A time step dt later, a transition may have occurred: a release may occur if the vesicle is ready-for-release, a spike arrives, and release is successful. The recovery of the contact may occur if the docking site was empty and a new vesicle suddenly occupies it.
• b) If the vesicle is not available, there is a probability that in the next time step it gets recovered. This contributes the positive transition rate (1 − p_v(t′))/τ_v (green line in diagram 4.2).
Thus the dynamics of p_v(t) reads:

dp_v(t)/dt = (1 − p_v(t))/τ_v − p_v(t) U ∑_k δ(t − t_k)    (4.4.1)
This stochastic model has been used previously in other works [Senn et al., 2001, Fuhrmann et al., 2002]. The only indirect experimental validation, so far, has been that, when averaging the synaptic response over trials with the same stimulus, the mean response can be fitted with the time evolution of the product² U p_v(t) [Tsodyks and Markram, 1997]. Because single-trial response recordings in neocortical neurons are very noisy (see fig. 1 C-D of [Markram and Tsodyks, 1996a]), it is not easy to validate this stochastic model directly. However, it would be interesting to compare other moments of the response, like the mean squared response for instance, in order to confirm whether this stochastic model is valid or not.
²Because U p_v(t) represents the probability of an all-or-none process, if we compute the mean of this process, assigning the value 1 when there is a release and 0 when there is not, the evolution of the first moment coincides with that of U p_v(t).
The deterministic mean-response model of Tsodyks and Markram [1997], and other equivalent versions [Varela et al., 1997], have been used extensively in the literature [Tsodyks and Markram, 1997, Fuhrmann et al., 2002, Tsodyks et al., 1998, 2000]. As we will see later, it provides the same description of the mean response, and consequently of the mean current, as the stochastic model. However, the two differ in the description of the second-order statistics (like the current variance). Nevertheless, the deterministic model is an approximately correct description of the higher order statistics of the stochastic model if the number of synaptic contacts is very large (of order 100). This will be shown in the next chapter.
Dynamics of ⟨p_v(t)⟩
How does the probability of the state of the vesicle evolve in time when all we know about the input is that it is a Poisson process of rate ν? All we need to do is take the average of eq. 4.4.1 over the input ensemble of trains [Tsodyks et al., 1998]. We denote this averaging by angular brackets ⟨·⟩. The only non-trivial term is the product ⟨∑_k p_v(t) δ(t − t_k)⟩. However, since the arrival of Poisson spikes keeps no memory of any of the previous events, and since p_v(t) depends only on the activity before time t, each term of the previous sum factorizes, yielding ⟨p_v(t)⟩ ⟨∑_k δ(t − t_k)⟩ = ⟨p_v(t)⟩ ν(t) [Tsodyks et al., 1998]. If the input were not Poisson, this factorization would not be correct, because the arrival time of a spike would depend, as p_v(t) does, on the previous spikes. Thus, the dynamics of ⟨p_v(t)⟩ are governed by [Tsodyks et al., 1998]
d⟨p_v(t)⟩/dt = (1 − ⟨p_v(t)⟩)/τ_v − ⟨p_v(t)⟩ U ν(t)    (4.4.2)
If the input rate does not vary in time, the solution of this equation reads

⟨p_v(t)⟩ = ⟨p_v(0)⟩ e^(−t/τ_c) + ⟨p_v^ss⟩ (1 − e^(−t/τ_c))    (4.4.3)

⟨p_v^ss⟩ = 1/(1 + Uντ_v),    τ_c ≡ τ_v/(1 + Uντ_v)    (4.4.4)
where ⟨p_v(0)⟩ is the probability at time zero, and τ_c is the characteristic time constant of the dynamics³. The stationary state of this probability, ⟨p_v^ss⟩, gives us the release rate ν_r (eq. 2.3.3 in section 2.3) for the Poisson case: all we need to do is multiply it by U, times the probability of arrival of a spike, ν:

ν_r = ⟨p_v^ss⟩ U ν = Uν/(1 + Uντ_v)    (4.4.5)
³Do not confuse this τ_c with the correlation time scale of the input trains introduced in chapter 2.
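The stationary release rate of eq. 4.4.5 is easy to check against a direct event-driven simulation of the stochastic contact of figure 4.1. The following sketch (Python; parameter values are illustrative, not taken from the text) draws Poisson spikes, applies the release probability U, and replenishes the vesicle after an exponential recovery time of mean τ_v:

```python
import numpy as np

rng = np.random.default_rng(1)
U, tau_v, nu, T = 0.5, 0.5, 20.0, 5000.0     # illustrative values

t, t_ready, releases = 0.0, 0.0, 0
while True:
    t += rng.exponential(1.0 / nu)           # next Poisson spike
    if t > T:
        break
    if t >= t_ready and rng.random() < U:    # vesicle docked and release succeeds
        releases += 1
        t_ready = t + rng.exponential(tau_v) # exponential recovery, mean tau_v
    # if release fails (prob. 1 - U) the vesicle simply stays docked

nu_r_sim = releases / T
nu_r_theory = U * nu / (1.0 + U * nu * tau_v)   # eq. 4.4.5
```

With these values eq. 4.4.5 gives ν_r = 10/6 ≈ 1.67 releases per second, and the simulated rate converges to it; note that ν_r is bounded by 1/τ_v, the saturation that depression imposes at large Uν.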
Figure 4.3: Illustration of the different ways two neurons may be connected. Picture (a) depicts a mono-synaptic connection, the case in which a given axon makes only one contact onto the post-synaptic cell. (b) Here the connection is multi-synaptic: the same axon establishes up to three synaptic boutons with the same post-synaptic neuron. (c) This case illustrates the instance in which, within a single synaptic bouton, there exist several synaptic specializations (three in this case) where vesicle release occurs. In our modeling, situations (b) and (c) will be treated as an equivalent connection: both connections are made up of three functional synaptic contacts (taken from [Natschlager, 1999]).
4.4.2 Several synaptic contacts between two neurons
Cortical neurons are seldom connected by only one contact. A single axon may sometimes make multiple contacts onto the same post-synaptic cell. Moreover, an individual synapse may have several active zones or release sites (see figs. 4.3 and 1.2). Since we are neglecting the spatial dimension of the output neuron, these two distinct ways of increasing the connectivity between two neurons are, in our approach, equivalent. We will use the term functional contact [Zador, 1998] to refer to both situations. Quantifying how many functional contacts exist between two cortical neurons seems to be an arduous experimental task, and the literature is extensive and sometimes contradictory. Numbers in the neocortex range from one up to twenty or thirty in some cases [Abeles, 1991, Markram et al., 1997a,b, Larkman et al., 1997]. In somatosensory cortex of developing rats, Markram et al. found a range between four and eight, with an average of 5.5 [Markram et al., 1997a,b]. GABAergic interneurons tend, nevertheless, to establish more contacts, from 10 to 20 [Gupta et al., 2000]. In the hippocampus different groups report different numbers: while some report a single contact between CA3 and CA1 pyramidal cells [Stevens and Wang, 1995, Hanse and Gustafsson, 2001a], others report values between 3 and 18 [Stricker et al., 1996, Larkman et al., 1997]. In addition, it is interesting to mention
Figure 4.4: Schematic picture of the calyx of Held synapse, a giant excitatory synapse in the mammalian auditory pathway. The pre-synaptic terminal contains hundreds of individual synaptic specializations in which transmitter release occurs (taken from [Walmsley et al., 1998]).
particular examples in the nervous system in which the number M of functional contacts is much higher. The first classical example of a synapse with hundreds of contacts is the neuromuscular junction [del Castillo and Katz, 1954a, Katz and Miledi, 1968], which, due to its size, was among the first accessible synapses studied, giving the earliest insights into the quantal nature of synaptic release [Katz, 1996]. Another extensively studied example is the giant synapses of the auditory system known as the end-bulbs and calyces of Held [Held, 1893, Ramon y Cajal, 1995], which are excitatory central synapses in the mammalian brainstem [von Gersdorff et al., 1997, Weis et al., 1999, von Gersdorff and Borst, 2002]. A calyceal synaptic bouton contains hundreds of excitatory synaptic specializations, as depicted in figure 4.4, and shows short-term depression induced by the depletion of pre-synaptic resources [von Gersdorff et al., 1997, Weis et al., 1999, Trussell, 2002, von Gersdorff and Borst, 2002].
To summarize, we will consider the situation in which the pre-synaptic neurons make M functional contacts⁴ with the target neuron, and we will investigate the impact on the response of variations in M. The M contacts that each of the C pre-synaptic neurons establishes receive the same spike train, but each of them is provided with its own independent vesicle dynamics (in particular a single docking-site model, i.e. N_0 = 1, see 2.2.1)⁵. This causes
⁴We will index the contacts sharing the same input spike train with the Greek letters α, β, and will leave the Latin indexes i, j to denote contacts from different afferent neurons, and k, l to index the spike arrival times. ⁵It is interesting, however, to establish here an equivalence between this model with M functional contacts (which so far are identified with the active zones, regardless of the synaptic bouton they belong to), where at most one vesicle is released, and the model one obtains if the univesicular release hypothesis is ignored. Let us take our model of a single synaptic contact (described in section 2.2.1 of chapter 2) with N_0 docking sites
the releases in each of the M contacts to be highly correlated. This means that if we observe a release in one contact at t′, it is more likely than by chance that further releases occur in the rest of the M − 1 contacts at the same time. However, because of the independent vesicle and release dynamics, when the common spike train is known (the times of the spikes are given), the response in each of the M contacts is an independent process. This property is usually denoted conditional independence, since the joint probabilities of release in the M contacts only factorize if all previous spikes in the train are given. Before addressing the calculation of the correlation function of the synaptic releases, we will first formalize this conditional independence and derive some quantities that will be necessary later in this chapter.
The first question we would like to pose is: can we model the releases of the M contacts with a binomial distribution of parameters M and p_r, where p_r is defined as the probability that a spike triggers a response in one contact? This is the way in which several authors have modeled this simple system (see [Fuhrmann et al., 2002] for instance), but as we will see now it is not a strictly correct description. The answer is no: the response of the M contacts is not binomial, simply because the processes are not independent, but only conditionally independent when all the previous spike times are given. If, for instance, the vesicle recovery time is very long (very slow recovery) and release occurs with probability close to one when the vesicle is docked (U ∼ 1), then if we observe that a spike arriving at t′ elicits a release in contact α, it is more likely to find a release at contact β than if we had no knowledge about what happened at the other. The explanation is easy: the release at the α-th contact indicates that the vesicle there was still ready, probably because the previous spike came a long time ago (since we set a slow replenishment of vesicles). Because both contacts share the same incoming spikes, the last spike that arrived at the β-th contact did so a long time before t′, making the recovery of the vesicle more likely to have happened. This dependency occurs for arbitrary values of U and τ_v.
In order to quantify this dependency, we need to compute the conditional probability that a certain spike at t′ finds a ready-for-release vesicle, given that a common contact did find one at the arrival of the same spike. These probabilities have to be averaged over the input ensemble, i.e. ⟨·⟩. We will denote this conditional probability as ⟨p_v(β|α)⟩. Although

and now assume that docked vesicles may fuse with the membrane independently, in such a way that, at a single active zone, as many simultaneous releases can be produced as there are docking sites located in the area of the active zone. In this case we do not identify a release site with an active zone, but with a docking site (see comment in section 1.2.4). In this description, the model of the synaptic connection would be formalized in exactly the same way as we have modeled a connection made of many functional contacts with a single docking site. Thus, our model of M contacts may be reinterpreted as a model with a single active zone with M docking sites (release sites) where no univesicular release paradigm is adopted.
Figure 4.5: Schematic description of a model connection with two synaptic contacts (M = 2). Input spikes arrive along the axon and are delivered to each of the contacts. Thus, the spikes that arrive at the two contacts are completely synchronous (see dotted lines). At the same time, each contact is equipped with its own vesicle machinery. Both the recovery of vesicles and the release upon arrival of a spike occur at each contact in an independent manner. Nevertheless, this does not mean that the probability of a spike triggering a release at one of the contacts is independent of what happens at the other contact (see text).
not explicitly written, this quantity is computed in the stationary state. The derivation of ⟨p_v(β|α)⟩ is neither trivial nor straightforward. To achieve it, we need to solve the stationary state of a system like the one illustrated in figure 4.5: a system composed of two contacts which share the same input but have independent vesicle refilling dynamics. Since we are only concerned with the second order correlations, it is enough to solve a system with two contacts instead of including all the M contacts. The calculation is described in detail in Appendix E. The result is the following:
⟨p_v(β|α)⟩ = 1/[1 + Uντ_v (1 − U/2)]    (4.4.6)
As we see, this conditioned probability (eq. 4.4.6) differs from the unconditioned one (⟨p_v^ss⟩ = 1/(1 + Uντ_v)), demonstrating that the two processes are not independent. However, if we only cared about the mean number of releases upon arrival of a spike, using a binomial distribution of parameters M and ⟨p_v^ss⟩U would lead us to the right solution (which is M ⟨p_v^ss⟩ U). This happens because the mean is independent of the second order correlations. Nevertheless, we need to calculate the two-point correlation of the current, for which the binomial hypothesis fails. We can add here that, if at the spike arrival we know that there are N sites
ready, the total number of releases would actually be a binomial of parameters N and U [Matveev and Wang, 2000b].
By inspection of ⟨p_v(β|α)⟩ in eq. 4.4.6, we can state that it is always larger than the unconditioned probability, and that the difference between them is larger the larger the product ντ_v (when depression is strong and the response saturates) and the more reliable the release (larger U).
4.4.3 Release correlations between two synapses from different neurons
As we established in section 4.3, spike trains coming from different neurons display spatial cross-correlations of zero temporal range (synchrony), which are quantified by the parameter ρ, defined in expression 4.3.6. It has been named ρ because it coincides with the correlation coefficient of the spike count, defined as:

ρ ≡ (⟨n_i(T) n_j(T)⟩ − ⟨n_i(T)⟩⟨n_j(T)⟩) / √(⟨(n_i(T) − ⟨n_i(T)⟩)²⟩ ⟨(n_j(T) − ⟨n_j(T)⟩)²⟩)    (4.4.7)
The quantity n_i(T) refers to the number of spikes produced by neuron i in a time window T, and the angular brackets represent an average over repetitions of the stimulus. For the zero-lag correlations defined here (eq. 4.3.6), this expression does not depend on the time window T.
We now turn to investigate how these spatial correlations are passed on to the synaptic releases. We start by computing the conditional probability ⟨p_v(j|i)⟩ of having a release at the j-th synapse given that there has been one, at the same time, at the i-th synapse. The calculation resembles that of ⟨p_v(β|α)⟩, except that now the trains arriving at each contact are different and only a fraction of the spikes are synchronized. This is caricatured in figure 4.6, where the two synapses are stimulated with spikes from two distinct Poisson processes, a fraction of which fall temporally aligned (see vertical blue dashed lines in fig. 4.6).
The calculation of ⟨p_v(j|i)⟩ is described in Appendix F. The final expression reads

⟨p_v(j|i)⟩ = 1/[1 + Uντ_v (1 − Uρ/2)]    (4.4.8)
A few limits can be verified by simple inspection: i) when we set ρ = 1, the C afferent fibers provide the same spike train, and we recover the conditioned probability ⟨p_v(β|α)⟩ of eq. 4.4.6; ii) in the limit ρ = 0, the conditional probability becomes equal to the unconditioned probability ⟨p_v^ss⟩ = 1/(1 + Uντ_v), indicating that, since the inputs are now independent, the docking of vesicles also becomes independent.
We are now ready to go through the main calculation of this chapter, which consists of deriving the first and second order statistics of the total afferent current.
Figure 4.6: Schematic description of two synaptic contacts belonging to different pre-synaptic neurons whose activity is correlated. A fraction of the input spikes coming along each axon arrive at the contacts at the same time; they are synchronized. Each contact has its own independent synaptic machinery, which produces unreliability and short-term depression. However, this does not imply that the probability of a spike triggering a release at one of the contacts is independent of what happens at the other contact, unless the correlation coefficient equals zero, ρ = 0 (see text).
4.5 Statistics of the total afferent current
At this point, we have gathered all the elements needed to compute the statistics of the afferent current, which has been parameterized as described in section 4.2. We include in eq. 4.2.5 the generalization that describes the connection between two neurons as M functional contacts, leading to the following expression:

I(t) = ∑_i^C ∑_α^M C_m J_{i,α} ∑_k δ(t − t_{i,α}^k)    (4.5.1)
In doing so, we have adopted a further simplification: we assume that the number of contacts that each of the C pre-synaptic cells establishes is the same. We also assume that the EPSP sizes J_{i,α} are distributed according to a probability distribution f(J) with mean J̄ and variance J̄²∆² (i.e. the new parameter ∆ is the coefficient of variation of the J's). This distribution of synaptic efficacies may account for different aspects which were not explicitly modeled: the heterogeneity of the response amplitudes across synapses due to the variable size of synaptic vesicles [Atwood and Karunanithi, 2002]; the variable number of post-synaptic receptors [Walmsley et al., 1998]; and the spatial distribution of synaptic contacts across the dendritic arbor, which strongly influences the depolarization at the soma [Segev and Rall, 1998]. The
variability observed in the quantal response amplitudes of single cortical synapses [Korn et al., 1984, Larkman et al., 1997] is not included in this list, since that is a variability from release to release, whereas in this model the individual J_{i,α} are constants drawn from the population distribution f(J). In order to avoid confusion between the membrane capacitance C_m and the number of pre-synaptic neurons C, we drop the former from the current and will put it back later, when considering the equation of the integrate-and-fire neuron.
4.5.1 The mean of the afferent current
We are now interested in computing the mean of the afferent current, that is,

µ = ⟨I(t)⟩ = ∑_i^C ∑_α^M J_{i,α} ⟨∑_k δ(t − t_{i,α}^k)⟩    (4.5.2)

The series of deltas between brackets on the r.h.s. equals the average number of releases found at time t. This quantity equals the average number of spikes arriving at time t, times the fraction of them which elicit a release. This fraction is exactly the probability that a spike succeeds in triggering a release, which in turn equals the probability ⟨p_v^ss⟩, times the probability of release U. Putting it all together,
µ = ∑_i^C ∑_α^M J_{i,α} ν ⟨p_v^ss⟩ U = ∑_i^C ∑_α^M J_{i,α} Uν/(1 + Uντ_v)    (4.5.3)
Since the number of post-synaptic contacts a cortical neuron makes is of the order of 10⁴, i.e. N ≡ CM = 10⁴, the sum of synaptic efficacies self-averages, leading to

µ = C M J̄ Uν/(1 + Uντ_v)    (4.5.4)
This expression can be interpreted as the mean current of CM inputs bombarding the cell with events of strength J̄ at a rate ν_r = Uν/(1 + Uντ_v). Two things must be remarked concerning the first order statistics: i) there is no difference between contacts coming from the same neuron or from different neurons; in other words, the mean µ depends only on the product CM = N, which is the total number of contacts; ii) there is no trace of the spatial correlation coefficient ρ⁶.
⁶In chapter 2 we derived the release rate of a single contact when the spike train had auto-correlations. That rate ν_r depended on the input auto-correlation parameters CV_isi and τ_c (eq. 2.3.8). In contrast, the release rate now shows no trace of the cross-correlations. This can be explained because there the temporal structure of a single afferent affected the recovery dynamics of the synapse, while now individual contacts have no information about the cross-correlations and only see Poisson trains.
Once again, we underline the most characteristic signature of short-term depression present in the mean current: saturation. The mean current is bounded from above by what we will call the limit mean current, µ_lim, which reads

µ_lim = C M J̄ / τ_v    (4.5.5)
The discussion of the consequences of saturation in terms of information transmission from the pre- to the post-synaptic terminal has been covered extensively in chapter 2. At the end of the next chapter, new important consequences at the level of the response of the output neuron will be presented.
4.5.2 Correlations of the current
We define the correlation function of the current as

C(t, t′) ≡ ⟨(I(t) − ⟨I(t)⟩)(I(t′) − ⟨I(t′)⟩)⟩ = ⟨I(t) I(t′)⟩ − µ²    (4.5.6)
Now let us work out the first term on the r.h.s. of this equation. Substituting the definition of the current, eq. 4.5.1, we obtain

⟨I(t) I(t′)⟩ = ⟨∑_{i,j}^C ∑_{α,β}^M J_{i,α} J_{j,β} ∑_{k,l} δ(t − t_{i,α}^k) δ(t′ − t_{j,β}^l)⟩    (4.5.7)
We break this expression into three terms, namely: i) one that represents the auto-correlations at the single contacts (term A(t′ − t)); ii) one that represents the cross-correlations between pairs of contacts with a common pre-synaptic cell (term B(t′ − t)); and iii) one that constitutes the cross-correlations between pairs of contacts which receive spikes coming from different pre-synaptic neurons (term C(t′ − t)).
⟨I(t) I(t′)⟩ = ∑_i^C ∑_α^M J_{i,α}² ⟨∑_{k,l} δ(t − t_{i,α}^k) δ(t′ − t_{i,α}^l)⟩  [same contact: A(t′ − t)]    (4.5.8)

  + ∑_i^C ∑_{α≠β}^M J_{i,α} J_{i,β} ⟨∑_{k,l} δ(t − t_{i,α}^k) δ(t′ − t_{i,β}^l)⟩  [same pre-neuron: B(t′ − t)]    (4.5.9)

  + ∑_{i≠j}^C ∑_{α,β}^M J_{i,α} J_{j,β} ⟨∑_{k,l} δ(t − t_{i,α}^k) δ(t′ − t_{j,β}^l)⟩  [diff. pre-neurons: C(t′ − t)]    (4.5.10)
We will treat each term separately so that we can carry out the operations step by step. To be more concise, we will make use of two previously defined quantities, which we now write again: i) ν_r, which represents the probability of finding a synaptic response in a single contact at any time:

ν_r = Uν/(1 + Uντ_v)    (4.5.11)

ii) the time constant of the dynamics of ⟨p_v(t)⟩ (see eq. 4.4.3):

τ_c ≡ τ_v/(1 + Uντ_v)    (4.5.12)
4.5.2.1 Auto-correlation in single contacts
The sum of deltas within the brackets in the first term, A(t′ − t), equals simply the non-connected auto-correlation function of the releases in one contact for Poisson input, computed in chapter 2 (see eq. 2.3.2). In that chapter we defined a related quantity, the conditional rate. This function is obtained from the correlation function by connecting it, dropping the delta function and normalizing by the rate (see eq. 2.2.5):

ν_r^c(t) = ν_r (1 − e^(−t/τ_c)),   t > 0    (4.5.13)

We will use it here for the sake of clarity.
Now the term A(t′ − t) can be worked out, obtaining

A(t′ − t) = ∑_i^C ∑_α^M J_{i,α}² [ν_r δ(t′ − t) + ν_r ν_r^c(|t′ − t|)]    (4.5.14)
         = ∑_i^C ∑_α^M J_{i,α}² [ν_r δ(t′ − t) + ν_r² (1 − e^(−|t′−t|/τ_c))]    (4.5.15)

Now, applying the self-averaging argument explained above, one obtains

A(t′ − t) = C M J̄² (1 + ∆²) [ν_r δ(t′ − t) + ν_r² (1 − e^(−|t′−t|/τ_c))]    (4.5.16)

where we have used the relation between the variance and the first and second moments of J:

Var[J] = ∆² J̄²,   so that the second moment of J equals J̄² (1 + ∆²)    (4.5.17)
4.5.2.2 Cross-correlation between pairs of contacts with the same input train
The second term, B(t′ − t), representing the correlation between responses generated in contacts sharing the same pre-synaptic neuron, is more complicated:

B(t′ − t) = ∑_i^C ∑_{α≠β}^M J_{i,α} J_{i,β} [ ν_r δ(t′ − t) ⟨p_v(β|α)⟩ U    (4.5.18)
  + ν_r ⟨p_v(β|α)⟩ U ν_r^c(|t′ − t|)    (4.5.19)
  + ν_r ⟨p_v(β|α)⟩ (1 − U) ν ⟨p_v*(|t′ − t|)⟩ U    (4.5.20)
  + ν_r (1 − ⟨p_v(β|α)⟩) ν_r^c(|t′ − t|) ]    (4.5.21)
The four terms inside the square brackets represent the probability of occurrence of four situations which cover the whole event space, namely:

4.5.18 There is a release in contact α (ν_r); therefore, a spike hits β at the same time (δ(t′ − t)); the vesicle at β is ready, given that the one at α was prepared as well (⟨p_v(β|α)⟩); the vesicle at β is also released (U).

4.5.19 There is a release in contact α at t (ν_r); the vesicle at β was ready at t, given that the one at α was prepared as well (⟨p_v(β|α)⟩); the vesicle at β was then released at t (U); there is a new release at β after a time |t′ − t|, given that at t⁺ the contact was empty (ν_r^c(|t′ − t|)).

4.5.20 There is a release in contact α at t (ν_r); the vesicle at β was ready at t, given that the one at α was prepared as well (⟨p_v(β|α)⟩); the release at β fails at t (1 − U); a new spike arrives at time t′ (ν); the vesicle at β is ready at t′, given that it was ready at t (⟨p_v*(|t′ − t|)⟩); this vesicle is released at t′ (U).

4.5.21 There is a release in contact α at t (ν_r); the vesicle at β was not ready, however, at that time (1 − ⟨p_v(β|α)⟩); there is a release at β at time t′, given that at t⁺ the contact was empty (ν_r^c(|t′ − t|)).
The only new term we have introduced is the evolution of the probability ⟨p*_v(|t′ − t|)⟩ that the vesicle is ready. This is just a particular case of the general solution of ⟨p_v(t)⟩ given by equation 4.4.3, with initial condition equal to one, i.e. ⟨p_v(0)⟩ = 1:
\langle p_v^*(|t'-t|)\rangle = \langle p_v(0)\rangle\, e^{-|t'-t|/\tau_c} + \langle p_v^{ss}\rangle\,(1-e^{-|t'-t|/\tau_c}) = e^{-|t'-t|/\tau_c} + \frac{1}{1+U\nu\tau_v}\,(1-e^{-|t'-t|/\tau_c}) \qquad (4.5.22)
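The steady-state quantities appearing in these expressions can be checked with a direct Monte Carlo simulation of the release model. The sketch below is a minimal illustration with a single univesicular contact and illustrative parameter values, not the simulation code used in this work:

```python
import random

random.seed(1)

# illustrative parameters (not the values used in the thesis simulations)
U, nu, tau_v = 0.5, 20.0, 0.5   # release probability, input rate (Hz), mean recovery time (s)
dt, T = 1e-3, 2000.0            # time step (s), total simulated time (s)

occupied = True     # is a vesicle docked at the (univesicular) contact?
releases = 0
occ_time = 0.0
for _ in range(int(T / dt)):
    if occupied:
        occ_time += dt
        # a Poisson spike arrives with prob. nu*dt and releases the vesicle with prob. U
        if random.random() < nu * dt and random.random() < U:
            occupied = False
            releases += 1
    else:
        # an empty site is refilled with Poisson rate 1/tau_v
        if random.random() < dt / tau_v:
            occupied = True

p_ss = 1.0 / (1.0 + U * nu * tau_v)   # predicted steady-state occupancy <p_v^ss>
nu_r = U * nu * p_ss                  # predicted release rate nu_r
print(occ_time / T, p_ss)
print(releases / T, nu_r)
```

With these parameters Uντ_v = 5, so the measured occupancy and release rate should fall within a few percent of the predicted 1/(1 + Uντ_v) and Uν/(1 + Uντ_v).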
Of the four terms listed, the first is a positive correlation at zero time lag, caused by the fact that the contacts α and β are stimulated with the same input train. The other three will turn out to produce a negative exponential due to depression. If we substitute the functions ⟨p_v(β|α)⟩, ⟨p*_v(|t′ − t|)⟩ and ν_cr(t), and consider that the two summations over the products of synaptic efficacies in B(t′ − t) also self-average, after some simple algebra one
obtains
B(t'-t) = C\,M(M-1)\,\bar{J}^2 \Big[ \nu_r\,\delta(t'-t)\,\frac{U}{1+U\nu\tau_v(1-U/2)} \qquad (4.5.23)
\qquad - \nu_r^2\; e^{-|t'-t|/\tau_c}\;\frac{U\,(1+U\nu\tau_v/2)}{1+U\nu\tau_v(1-U/2)} + \nu_r^2 \Big] \qquad (4.5.24)
It is interesting to note that depression, which introduces negative auto-correlations at a single contact, spreads its effect to the cross-correlations between two contacts receiving the same spike train. In other words, if one observes a release at α at time t, during some period (of the order of τ_c) there is less chance of a release at β because of depression, even if we have no information about whether there was a release at β at that time t! Later it will be shown that this also happens when the contacts do not receive the same trains, but only correlated trains. The stronger this correlation, the larger the influence of depression on the cross-correlograms.
4.5.2.3 Cross-correlation between pairs of contacts with correlated input trains
We finally focus on the third term C(t′ − t) of the correlation function of the current, which accounts for the cross-correlations between synaptic contacts belonging to different pre-synaptic neurons. Its structure is very similar to that of B(t′ − t), except that now the conditional probabilities are ⟨p_v(j|i)⟩ instead of ⟨p_v(β|α)⟩.
C(t'-t) = \sum_{i\neq j}^C \sum_{\alpha,\beta}^M J_{i,\alpha} J_{j,\beta} \Big[ \nu_r\,\rho\,\delta(t'-t)\,\langle p_v(j|i)\rangle\, U \qquad (4.5.25)
\qquad + \nu_r\,\rho\,\langle p_v(j|i)\rangle\, U\, \nu_{cr}(|t'-t|) \qquad (4.5.26)
\qquad + \nu_r\,\rho\,\langle p_v(j|i)\rangle\,(1-U)\,\nu\,\langle p_v^*(|t'-t|)\rangle\, U \qquad (4.5.27)
\qquad + \nu_r\,\rho\,(1-\langle p_v(j|i)\rangle)\,\nu_{cr}(|t'-t|) \qquad (4.5.28)
\qquad + \nu_r\,(1-\rho)\,\nu\,\langle p_v^{**}(|t'-t|)\rangle\, U \Big] \qquad (4.5.29)
The first four terms inside the square brackets are equivalent to the four terms in the expression of B(t′ − t) and need no further comment, except for the presence of ρ, which is required to account for the situations in which two spikes arrive synchronously at contacts i and j. The fifth term within the square brackets was not present in the expression of B(t′ − t). It describes the situation in which a spike arrives at terminal i at t and evokes a response (ν_r); there is no synchronous partner arriving at j (1 − ρ); then another spike arrives at j (ν); the vesicle is now present (⟨p**_v(|t′ − t|)⟩) and is finally released (U). The probability ⟨p**_v(|t′ − t|)⟩ of finding j occupied, when all we know is that the vesicle at i was prepared at time t, is obtained from the temporal evolution of ⟨p_v(t′)⟩ with an initial condition at t equal to ⟨p_v(j|i)⟩ (see eq. 4.4.3):
\langle p_v^{**}(|t'-t|)\rangle = \langle p_v(j|i)\rangle\, e^{-|t'-t|/\tau_c} + \langle p_v^{ss}\rangle\,(1-e^{-|t'-t|/\tau_c}) = \frac{e^{-|t'-t|/\tau_c}}{1+U\nu\tau_v(1-U\rho/2)} + \frac{1-e^{-|t'-t|/\tau_c}}{1+U\nu\tau_v} \qquad (4.5.30)
Now, substituting the expressions of ⟨p_v(j|i)⟩ (eq. 4.4.8), ⟨p*_v(|t′ − t|)⟩ (eq. 4.5.22), ⟨p**_v(|t′ − t|)⟩ (eq. 4.5.30) and the conditional release rate ν_cr(|t′ − t|) (eq. 4.5.13) into eqs. 4.5.25–4.5.29, and self-averaging the summation over the synaptic efficacies, one obtains
C(t'-t) = C(C-1)M^2\,\bar{J}^2 \Big[ \nu_r\,\delta(t'-t)\,\frac{U\rho}{1+U\nu\tau_v(1-U\rho/2)} \qquad (4.5.31)
\qquad - \nu_r^2\; e^{-|t'-t|/\tau_c}\;\frac{U\rho\,(1+U\nu\tau_v/2)}{1+U\nu\tau_v(1-U\rho/2)} + \nu_r^2 \Big] \qquad (4.5.32)
4.5.2.4 Total current correlation
We have expressed the correlation function of the current as
C(t,t') = \langle I(t)\, I(t')\rangle - \mu^2 = A(t'-t) + B(t'-t) + C(t'-t) - \mu^2 \qquad (4.5.33)
We now need to introduce the last two variables which parameterize this correlation function, as done by Moreno et al. [2002]. The delta variance σ²_w and the exponential variance Σ² will be the coefficients of the two correlation terms, in the following way:
C(t,t') = \sigma_w^2\,\delta(t'-t) + \frac{\Sigma^2}{2\tau_c}\; e^{-|t'-t|/\tau_c} \qquad (4.5.34)
If we expand equation 4.5.33 above and identify these new coefficients, after some algebra their expressions read
\sigma_w^2 = C M \bar{J}^2\, \nu_r \left[(1+\Delta^2) + \frac{U(M-1)}{1+U\nu\tau_v(1-U/2)} + \frac{U\rho(C-1)M}{1+U\nu\tau_v(1-U\rho/2)}\right]
\quad = \frac{C M \bar{J}^2\, U\nu}{1+U\nu\tau_v} \left[(1+\Delta^2) + \frac{U(M-1)}{1+U\nu\tau_v(1-U/2)} + \frac{U\rho(C-1)M}{1+U\nu\tau_v(1-U\rho/2)}\right] \qquad (4.5.35)
and
\Sigma^2 = -2\, C M \bar{J}^2\, \tau_c\, \nu_r^2 \left[(1+\Delta^2) + \frac{U(M-1)(1+U\nu\tau_v/2)}{1+U\nu\tau_v(1-U/2)} + \frac{U\rho(C-1)M(1+U\nu\tau_v/2)}{1+U\nu\tau_v(1-U\rho/2)}\right]
\quad = -\frac{2\, C M \bar{J}^2\, U^2\nu^2\tau_v}{(1+U\nu\tau_v)^3} \left[(1+\Delta^2) + \frac{U(M-1)(1+U\nu\tau_v/2)}{1+U\nu\tau_v(1-U/2)} + \frac{U\rho(C-1)M(1+U\nu\tau_v/2)}{1+U\nu\tau_v(1-U\rho/2)}\right] \qquad (4.5.36)
Each of the three terms of the sum within the square brackets comes, respectively, from the auto-correlation at a single contact (A(t′ − t)), from the cross-correlation between contacts sharing the same pre-synaptic neuron (B(t′ − t)), and from the cross-correlation between contacts with synchronized inputs (C(t′ − t)). The most interesting aspect of the last two terms is the interaction between the positive correlations, regardless of whether they come from the several contacts per neuron (in the first) or from ad hoc synchrony (in the second), and STD. This interaction is expressed in the dependence of their denominators on the input rate: when the product Uντ_v becomes large enough, the trace of the correlations vanishes. In other words, the effect of the positive correlations becomes less and less detectable as the synapses start to saturate.
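This fading of the positive-correlation terms can be made concrete by evaluating the bracket of eq. 4.5.35 numerically. The sketch below uses illustrative parameter values (they are assumptions for the illustration, not values taken from this work):

```python
def sigma_w2(nu, C=100, M=4, J=0.1, Delta=0.5, U=0.5, tau_v=0.5, rho=0.01):
    """Delta variance of eq. 4.5.35; all parameter values here are illustrative."""
    nu_r = U * nu / (1.0 + U * nu * tau_v)
    t_auto = 1.0 + Delta**2                                            # from A(t'-t)
    t_multi = U * (M - 1) / (1.0 + U * nu * tau_v * (1.0 - U / 2.0))   # from B(t'-t)
    t_sync = U * rho * (C - 1) * M / (1.0 + U * nu * tau_v * (1.0 - U * rho / 2.0))  # from C(t'-t)
    return C * M * J**2 * nu_r * (t_auto + t_multi + t_sync), (t_auto, t_multi, t_sync)

for nu in (1.0, 10.0, 100.0, 1000.0):
    var, terms = sigma_w2(nu)
    print(nu, var, terms)   # the two cross-correlation terms shrink as nu grows
```

As ν grows, the B and C contributions fall off while the auto-correlation term stays constant, so σ²_w approaches the limit value CMJ̄²(1 + Δ²)/τ_v derived below.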
We define here the quantity σ²_lim, which will be helpful to describe the behavior of the current as a function of the input rate:

\sigma_{lim}^2 \equiv \lim_{\nu\to\infty} \sigma_w^2 = \frac{C M \bar{J}^2\,(1+\Delta^2)}{\tau_v} \qquad (4.5.37)
We do not define the limit ofΣ2 because it is simply zero.
The second interesting feature is the symmetry between the two cross-correlation terms. More precisely, for every choice of parameters without synchrony (ρ = 0) but with several contacts per connection (M > 1), there is an "equivalent" choice of parameters with M = 1 and ρ > 0 which produces approximately the same statistics of the current. The transformation is the following:

\{\, C,\; M>1,\; \rho=0 \,\} \;\longrightarrow\; \{\, C' = CM,\; M'=1,\; \rho' \simeq 1/C \,\} \qquad (4.5.38)
This transformation leads to a new current with the same mean μ and with variances σ²_w and Σ² which are approximately the same. Although the numerators of the cross-correlation terms are invariant under this transformation, the denominators are not, so it is only an approximate invariance. However, the qualitative behavior is very similar and the limiting value of the delta variance, σ²_lim, is exactly the same. Therefore, hereafter we may talk about the positive correlations present in the input referring to either of the two somehow equivalent sources of positive correlations we have implemented: M > 1 and ρ > 0.
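The approximate invariance can be checked numerically with eq. 4.5.35. The helper below is hypothetical (not code from this work) and uses illustrative parameter values:

```python
def current_stats(nu, C, M, rho, J=0.1, Delta=0.5, U=0.5, tau_v=0.5):
    """Mean current and delta variance (eq. 4.5.35); hypothetical helper, illustrative values."""
    nu_r = U * nu / (1.0 + U * nu * tau_v)
    mu = C * M * J * nu_r
    bracket = (1.0 + Delta**2
               + U * (M - 1) / (1.0 + U * nu * tau_v * (1.0 - U / 2.0))
               + U * rho * (C - 1) * M / (1.0 + U * nu * tau_v * (1.0 - U * rho / 2.0)))
    return mu, C * M * J**2 * nu_r * bracket

C, M = 100, 10
mu_a, var_a = current_stats(10.0, C, M, rho=0.0)           # many contacts, no synchrony
mu_b, var_b = current_stats(10.0, C * M, 1, rho=1.0 / C)   # transformed parameters, eq. 4.5.38
print(mu_a, mu_b)     # identical means
print(var_a, var_b)   # approximately equal delta variances
```

For these values the two delta variances agree to within a few percent, while the means (and σ²_lim, which depends only on CM, J̄, Δ and τ_v) coincide exactly.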
Now, what is the intuitive meaning of the delta variance σ²_w and the exponential variance Σ², and how, if at all, do they affect the response of the neuron? To answer this question
we will first give a more qualitative argument in terms of the familiar quantity of the variance
of the event count, and in the next chapter introduce the mathematical formalism to quantify
the effects of the correlations on the response of a model neuron.
The variance of the synaptic release count
Since the current has finally been reduced to a sum of point events, the releases (eq. 4.5.1), the noise around the mean value μ perceived by the output neuron is related to the variance of the number of releases falling in a time window equal to the integration time τ_m. We can define the stochastic train of releases as
R(t) = \sum_i^C \sum_\alpha^M \sum_k \delta(t - t_{i,\alpha}^k) \qquad (4.5.39)
By inspection we see that R(t) equals the current I(t) if we set J_{i,α} = 1 for all i and α. Thus, the connected correlation function of the releases, C_R(t, t′), equals that of the current once we set J̄ = 1 and Δ = 0.
Now, if we define the random variable N(T) as the number of releases occurring within a time window of length T, its variance σ²_N(T), which depends on T, is related to the connected correlation function of the releases C_R(t, t′) through (see e.g. [Rieke et al., 1997])

\sigma_{N(T)}^2 = \int_t^{t+T}\!\!\int_t^{t+T} ds\, ds'\; C_R(s,s') \qquad (4.5.40)
Since our input is stationary, the result must be independent of the absolute time t. We now take equation 4.5.34 and integrate it, obtaining

\sigma_{N(T)}^2 = \sigma_w^2\, T + \Sigma^2 \left[ T - \tau_c\,(1-e^{-T/\tau_c}) \right] \qquad (4.5.41)
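A quick numerical check of eq. 4.5.41 in its two limiting regimes, with illustrative values of σ²_w, Σ² and τ_c (recall that Σ² is negative under depression):

```python
import math

def count_var(T, sw2, Sigma2, tau_c):
    """Variance of the release count in a window of length T (eq. 4.5.41)."""
    return sw2 * T + Sigma2 * (T - tau_c * (1.0 - math.exp(-T / tau_c)))

sw2, Sigma2, tau_c = 100.0, -40.0, 0.05   # illustrative values; Sigma2 < 0 under depression
for T in (1e-4, 1.0):
    print(T, count_var(T, sw2, Sigma2, tau_c) / T)
```

The count variance per unit time approaches σ²_w for T ≪ τ_c and σ²_w + Σ² for T ≫ τ_c, the two slopes discussed next.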
It must be remembered that both σ²_w and Σ² appearing in this equation are obtained from the original ones (eqs. 4.5.35 and 4.5.36) by setting J̄ = 1 and Δ = 0. Finally, we can examine the dependence of σ²_N(T) on the length of the time window T. If T ≪ τ_c, we expand the exponential up to first order in T/τ_c; the dependence on Σ² vanishes, resulting in σ²_N(T) ≃ σ²_w T. In the opposite limit, T ≫ τ_c, we neglect the term proportional to τ_c, obtaining σ²_N(T) ≃ (σ²_w + Σ²) T. So in both cases the variance of the count grows linearly with T (a general property of renewal processes [Cox, 1962]), but the slope depends on the ratio T/τ_c. Note that at no extra cost we have also computed the variance of the input current integrated over a window of duration T, defined as:

I_T(t) = \int_t^{t+T} ds\; I(s) \qquad (4.5.42)
and whose variance reads

\sigma_{I_T}^2 = \int_t^{t+T}\!\!\int_t^{t+T} ds\, ds'\; C(s,s') = \sigma_w^2\, T + \Sigma^2 \left[ T - \tau_c\,(1-e^{-T/\tau_c}) \right] \qquad (4.5.43)

where the expression is identical to that of σ²_N(T), but now the coefficients σ²_w and Σ² are those
defined in eqs. 4.5.35 and 4.5.36. A pertinent question now is: what is the noise magnitude perceived by the neuron when it integrates this current? Or, in other words, what is the relevant time window over which we should measure the variance of the current? This is a profound and difficult question, to which we will give a heuristic answer based on the empirical analysis of the response function of the neuron (see [Moreno et al., 2002] for details). As will be shown later, the current fluctuations detected by the neuron are those observed at the temporal scale given by the membrane time constant τ_m (see section 5.1). This implies that the relevant noise measure is the variance of the current integrated over a time window τ_m, σ²_{I_τm}. As a consequence, if the time scale of the negative exponential correlations induced by synaptic depression is much longer than the membrane time constant, τ_c ≫ τ_m, then σ²_{I_τm} = σ²_w τ_m and depression will have barely any effect on the input noise. On the contrary, if τ_c ≪ τ_m, then σ²_{I_τm} = (σ²_w + Σ²) τ_m and the input variance is affected by the negative correlations introduced by depression which, because Σ² < 0, diminish the magnitude of the fluctuations. If we define an effective variance σ²_eff, in this last situation it equals σ²_eff = σ²_w + Σ², whereas in the case of long-range correlations it equals σ²_w. This addition of the two variances σ²_w and Σ² when τ_c is small can be anticipated by inspection of the correlation function of the current C(t′ − t) in eq. 4.5.34: in the limit τ_c → 0 the exponential becomes a Dirac delta function centered at t′ − t, so the two deltas add, and the result is a single delta function with coefficient σ²_w + Σ². The next question we have to address then
is: does it ever happen that τ_c ≪ τ_m? Recalling the expression of τ_c,

\tau_c = \frac{\tau_v}{1+U\nu\tau_v} \qquad (4.5.44)
Since τ_v ∼ 500–1500 ms and τ_m ∼ 10–20 ms, we immediately realize that τ_c ≪ τ_m is only possible if Uντ_v > 100, which means that more than a hundred U-diluted spikes reach the synapse during the time it takes, on average, for a new vesicle to become ready, τ_v. In this saturating regime the release statistics are mainly dictated by the docking dynamics of the vesicles, which were set to be Poisson. The negative correlations then arise because it is not exactly the case that a release occurs the instant a vesicle docks (which would imply a de-correlated train of releases). After the vesicle docks, a short time elapses until a new spike makes it fuse with the membrane. The resulting output process resembles a Poisson process with a
variable refractory period of mean length 1/(Uν) (the mean time a docked vesicle waits until a spike makes it undergo exocytosis). Thus, in this regime, there are negative correlations of time scale 1/(Uν) in the release process. The quantitative effect of these short-range negative correlations will be illustrated in the next chapter.
Chapter 5
The response of a LIF neuron with
depressing synapses
5.1 The leaky integrate-and-fire (LIF) neuron
The neuron model we have chosen to investigate the impact of synchrony and synaptic depression on the output response is the well-known integrate-and-fire (IF) neuron (see e.g. [Ricciardi, 1977, Tuckwell, 1988]). Due to its simplicity, this model has allowed neuroscientists to extract, by means of analytical calculations and simulations, correct basic ideas about the function of single neurons and neural networks [Amit and Tsodyks, 1991, Shadlen and Newsome, 1994, Amit and Brunel, 1997, Wang, 2001]. Its first assumption consists in reducing the neuron to a point in space, neglecting the spatial structure of the dendrites and assuming that the potential inside the neuron, which actually is a function of space and time, V(x, y, z, t), depends only on time, V(t). Besides, the mechanism underlying the generation of an action potential is quite stereotyped: once the membrane potential reaches a value around −55 mV, it follows an approximately fixed trajectory. The IF model therefore avoids modeling spike generation explicitly, and focuses on how the neuron integrates the current while the membrane potential is below threshold. In its simplest version, the integrate-and-fire model ignores the active conductances of the membrane and reduces the properties of all membrane channels to a single passive leakage conductance g_L, which produces a leak current pushing the membrane potential towards the resting value E_L. Thus, in the leaky integrate-and-fire (LIF) model the membrane acts as a capacitor in parallel with a resistor. The synaptic current loads the capacitor with positive or negative charge (depending on whether the inputs are excitatory or inhibitory) and the leak current allows the capacitor to discharge. This is expressed by the first-order differential
equation
C_m \frac{dV(t)}{dt} = g_L\,(E_L - V(t)) + I(t)\,, \qquad \text{if } V(t) < \theta \qquad (5.1.1)
where C_m is the capacitance of the membrane and θ is the voltage threshold. Whenever the potential reaches this threshold, a spike is emitted and the potential is artificially set to a reset value, V(t⁺) = H, where it remains constant during a refractory time τ_ref. Thus, the generation of an AP and refractoriness are explicitly modeled ad hoc. The equation can be rewritten in a more common and simpler manner if one divides both sides by g_L and shifts the origin of the potential by setting E_L = 0:
\tau_m \frac{dV(t)}{dt} = -V(t) + R_m I(t)\,, \qquad V(t) < \theta \qquad (5.1.2)
where τ_m = C_m R_m is the membrane time constant and the membrane resistance R_m is just the inverse of the leak conductance. The four parameters of the model, namely τ_m, R_m, θ and τ_ref, are free parameters which have to be fitted phenomenologically from single intra-cellular recordings.
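A minimal forward-Euler sketch of eqs. 5.1.1–5.1.2 with illustrative parameter values in the ranges quoted later in the text. For a constant suprathreshold current the simulated rate can be compared with the exact deterministic ISI, τ_ref + τ_m ln(R_m I₀ / (R_m I₀ − θ)):

```python
import math

# illustrative LIF parameters (assumptions for this sketch)
tau_m = 0.02            # membrane time constant (s)
R_m = 100e6             # membrane resistance (Ohm)
theta, H = 0.020, 0.0   # threshold and reset (V, measured from E_L)
tau_ref = 0.002         # absolute refractory period (s)

def lif_rate(I0, T=5.0, dt=1e-4):
    """Output rate (Hz) for a constant input current I0 (A), forward-Euler integration."""
    V, allowed, n_spikes = H, 0.0, 0
    for i in range(int(T / dt)):
        t = i * dt
        if t < allowed:          # inside the refractory period: V clamped at H
            continue
        V += dt / tau_m * (-V + R_m * I0)
        if V >= theta:           # threshold crossing: emit a spike and reset
            n_spikes += 1
            V = H
            allowed = t + tau_ref
    return n_spikes / T

# constant suprathreshold drive: compare with the deterministic ISI
I0 = 0.4e-9                                   # R_m*I0 = 40 mV > theta
isi = tau_ref + tau_m * math.log(R_m * I0 / (R_m * I0 - theta))
print(lif_rate(I0), 1.0 / isi)
```

With dt = 0.1 ms the simulated rate matches the deterministic prediction to within about one percent; the stochastic input of the following sections simply replaces the constant I₀ by the release train.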
In a recent work, Rauch et al. [2002] explored the validity of the LIF neuron model by comparing its theoretical output rate prediction with the response of a real neuron stimulated in vitro, through a dynamic clamp, with an artificially generated current resembling synaptic activity. The results demonstrate that in almost all cases the model parameters could be fitted so that the model neuron reproduces the experimental response with great accuracy.
The choice of the LIF neuron model is advantageous in two ways: i) it is very simple to program and computationally inexpensive, allowing us to run simulations over long time periods without consuming too much CPU time; and ii) more importantly, we can make use of a well-developed set of analytical tools with which we will obtain a theoretical expression for the output rate of a LIF neuron integrating a current produced by the activity of depressing synapses receiving synchronous inputs.
5.2 The analytical calculation of the output rate of a LIF
neuron
In this section we give a brief overview of the mathematical tools, borrowed from the theory of stochastic processes, that allow us to obtain a formula for the output rate of a LIF neuron excited by the current described in the previous sections. Our starting point will therefore be the equations describing the statistics of the current, eq. 4.5.4 and eq. 4.5.34 for
the mean μ and the correlation function C(t′ − t), and the equation describing the evolution of the membrane potential V(t) (eq. 5.1.2).
Our aim is to describe the dynamics of the neuron's potential V(t) as it follows from the differential equation of the LIF model (eq. 5.1.2) when the current term I(t) is replaced by the synaptic current given in eq. 4.5.1:
\tau_m \frac{dV(t)}{dt} = -V(t) + R_m C_m \sum_i^C \sum_\alpha^M J_{i,\alpha} \sum_k \delta(t - t_{i,\alpha}^k) \qquad (5.2.1)
Using the definition of the membrane time constant, τ_m ≡ R_m C_m, this can be written as:
\frac{dV(t)}{dt} = -\frac{V(t)}{\tau_m} + \sum_i^C \sum_\alpha^M J_{i,\alpha} \sum_k \delta(t - t_{i,\alpha}^k) \qquad (5.2.2)
The trajectory followed by V(t) is easily described: the arrival of a release at time t^k_{i,α} produces a discontinuity in the trajectory, a jump of size J_{i,α}; this is an EPSP. After the jump, V(t) relaxes smoothly and exponentially towards zero. Because the generation of releases is a stochastic process, the potential itself becomes a stochastic process as well, continuous in time (time is not discretized) but discontinuous in space, that is, the trajectory of V(t) along the voltage axis is not continuous.
Quantifying the output rate of a LIF neuron given the statistics of the input current is equivalent to solving the mean first-passage time problem (see e.g. [Ricciardi, 1977, Tuckwell, 1988]). This refers to computing the mean of the random variable T, defined as the first time the potential hits threshold, V(T) = θ, given that the neuron emitted a spike at time zero and was then reset to H. In other words, T is the inter-spike interval (ISI), and its mean ⟨T⟩ gives us the output rate through

\nu_{out} \equiv \frac{1}{\langle T\rangle} \qquad (5.2.3)

However, solving this problem when V(t) is discontinuous is a rather complex task, which can be simplified by what is called the diffusion approximation.
5.2.1 The diffusion approximation
In order to treat the trajectories of the potential as continuous, an extra assumption must be made about the input current. Suppose that we replace the input current, which is a stochastic point process, by a Gaussian process G(t) (see e.g. [Ricciardi, 1977]) with the same mean ⟨G(t)⟩ = μ and correlation function ⟨G(t)G(t′)⟩ = C(t − t′). Because this
process is continuous in time, the potential V(t) becomes a continuous variable too. The question now is: when is this substitution a good approximation? The diffusion approximation turns out to be good (it gives roughly the same results as the point-process current) when two conditions are fulfilled: i) the sizes of the EPSPs are small compared with the distance from the reset potential to the threshold, i.e. the number of simultaneous jumps required for the potential to reach θ starting from H must be large; ii) the number of synaptic releases arriving in a time window of the order of τ_m must be large. The second condition is naturally satisfied: since the number of afferent synapses impinging on a cortical neuron is of the order of 10³–10⁴ [Braitenberg and Schuz, 1991], even when they fire at low spontaneous rates of about ∼2 Hz, the number of events occurring in a time window of length τ_m is of the order of 100 [Shadlen and Newsome, 1998]. The first condition depends critically on the size of the EPSCs (and IPSCs) entering the neuron when a vesicle release is triggered, but it could also be violated if, for instance, the synchronous arrival of n releases occurred frequently, resulting (in our simple linear model of integration of EPSCs) in an EPSP n times larger than the individual ones. Experiments have reported a wide range of EPSP and EPSC sizes, and in many cases it is hard to discern whether the depolarization was monosynaptic (produced by the release of one transmitter vesicle) or was the superimposed imprint of synchronous releases occurring at multiple contacts. An interesting way to measure the amplitude of the EPSP which leaves no confusion about whether the depolarization is mono- or multi-synaptic consists in measuring the amplitudes of the spontaneous EPSCs evoked by the spontaneous release of vesicles. Because in this case release is not triggered by spiking activity, synchronous releases are rather rare. Typical values for the size of EPSCs range from 5 to 30 pA [Berretta and Jones, 1996, Stevens and Zador, 1998], which, taking a membrane resistance R_m ∼ 10–100 MΩ, gives an EPSP range of J ∼ 0.01–0.3 mV. However, double whole-cell patch clamp recordings in cortical pyramidal neurons, in which the pre-synaptic neuron is stimulated to fire and the EPSPs are recorded at the soma of the post-synaptic cell, give EPSP amplitudes which can exceed 1 mV ([Markram and Tsodyks, 1996a]; see also [Larkman et al., 1997], who used the method of minimal extracellular stimulation in hippocampal CA1 synapses). This prominent depolarization is likely due to the multiple contacts existing between the cells.

If, in addition, we add an extra degree of synchronization ρ in the activity of the afferent cells, the probability that the membrane potential undergoes large discontinuities, due to the simultaneous arrival of APs and the subsequent release of vesicles at multiple contacts, increases and spoils the diffusion approximation. Nevertheless, in a wide range of values of J and M this approximation will turn out to give an excellent fit of the simulation data.
5.2.2 The solution of ν_out for a white noise input
Let us first consider the case where the input current can be approximated by a stationary Gaussian process G(t) with mean ⟨G(t)⟩ = μ and connected correlation function of the form

C(t'-t) = \langle G(t)\,G(t')\rangle - \mu^2 = \sigma_w^2\,\delta(t'-t) \qquad (5.2.4)
Because the correlation function has only the delta term, its Fourier transform is a constant, expressing that all frequencies are equally represented; for this reason, this type of stochastic process is commonly called white noise. If, in the case of our input, depression were negligible (τ_v ∼ 0), then Σ² ∼ 0 (see eq. 4.5.36) and the current would be a white noise. This does not mean that all correlations in the input have vanished: the cross-correlations due to synchrony in the pre-synaptic population (ρ) and those due to multiple contacts are still there, and their effect is contained in σ²_w (see eq. 4.5.35). However, these two sources of cross-correlations produce zero-lag correlations, which do not change the white-noise nature of the input, based on the fact that at two different instants of time the process is uncorrelated. Therefore, including zero-lag correlations (like perfect synchrony or multiple contacts) has the effective result of renormalizing the input variance σ²_w [Salinas and Sejnowski, 2000, Moreno et al., 2002].
If we now make a LIF neuron integrate the white noise defined by the two parameters μ and σ²_w, the solution of the mean first-passage time problem reads [Ricciardi, 1977, Tuckwell, 1988]

\nu_0^{-1} = \langle T\rangle_0 = \tau_{ref} + \sqrt{\pi}\,\tau_m \int_{\hat{H}}^{\hat{\Theta}} dt\; e^{t^2}\,(1+\mathrm{erf}(t)) \qquad (5.2.5)

where erf(t) is the error function, defined as

\mathrm{erf}(t) = \frac{2}{\sqrt{\pi}} \int_0^t dx\; e^{-x^2}, \qquad (5.2.6)

and the limits of the integral are functions of μ and σ²_w:

\hat{H} = \frac{H - \mu\tau_m}{\sigma_w\sqrt{\tau_m}} \qquad (5.2.7)

\hat{\Theta} = \frac{\theta - \mu\tau_m}{\sigma_w\sqrt{\tau_m}} \qquad (5.2.8)
We denote this output rate by ν_0 to distinguish it from its final expression ν_out, obtained when the correlations due to depression are present.
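Eq. 5.2.5 can be evaluated by direct numerical quadrature. In this sketch, the trapezoidal rule and the parameter values are illustrative choices, not those used in this work:

```python
import math

def nu_0(mu, sigma_w, tau_m=0.02, theta=0.020, H=0.0, tau_ref=0.002, n=4000):
    """Output rate of eq. 5.2.5 for white-noise input, via trapezoidal quadrature."""
    a = (H - mu * tau_m) / (sigma_w * math.sqrt(tau_m))       # lower limit \hat{H}
    b = (theta - mu * tau_m) / (sigma_w * math.sqrt(tau_m))   # upper limit \hat{Theta}
    def f(t):
        return math.exp(t * t) * (1.0 + math.erf(t))
    h = (b - a) / n
    integral = h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))
    return 1.0 / (tau_ref + math.sqrt(math.pi) * tau_m * integral)

# mu in V/s, sigma_w in V/sqrt(s): sub- vs supra-threshold mean drive
print(nu_0(mu=0.5, sigma_w=0.05), nu_0(mu=1.5, sigma_w=0.05))
```

As expected from the formula, the rate grows with the mean drive, and in the subthreshold regime (μτ_m < θ) increasing the noise σ_w also raises the rate.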
5.2.3 Perturbative solution of ν_out for a correlated input
The calculation outlined here is described in detail in [Moreno et al., 2002]. If we assume that the input current is a stationary Gaussian process G(t) with correlation function given by (see eq. 4.5.34)

C(t'-t) = \langle G(t)\,G(t')\rangle - \mu^2 = \sigma_w^2\,\delta(t'-t) + \frac{\Sigma^2}{2\tau_c}\; e^{-|t'-t|/\tau_c} \qquad (5.2.9)
then the current is completely determined by the following four quantities:

• the mean current, μ;

• the delta variance, σ²_w;

• the temporal scale of the correlations, redefined through the parameter k ≡ √(τ_c/τ_m);

• the exponential variance Σ², redefined as the correlation magnitude α ≡ Σ²/σ²_w.
Now, as we mentioned previously, in the limit τ_c → 0 the exponential becomes a Dirac delta function and a white-noise current with an effective variance σ²_eff = σ²_w + Σ² is recovered. The solution in this case is the same as ν_0, but with the new effective variance σ²_eff:

\nu_{eff}^{-1} = \tau_{ref} + \sqrt{\pi}\,\tau_m \int_{\hat{H}_{eff}}^{\hat{\Theta}_{eff}} dt\; e^{t^2}\,(1+\mathrm{erf}(t)) \qquad (5.2.10)

\hat{H}_{eff} = \frac{H - \mu\tau_m}{\sigma_{eff}\sqrt{\tau_m}} \qquad (5.2.11)

\hat{\Theta}_{eff} = \frac{\theta - \mu\tau_m}{\sigma_{eff}\sqrt{\tau_m}} \qquad (5.2.12)
On the other hand, when τ_c → ∞ the exponential term vanishes and we recover exactly ν_0. An exact analytical solution for general τ_c cannot be computed. However, if the parameter α is small and either τ_c ≪ τ_m or τ_c ≫ τ_m, a perturbative solution around ν_eff or ν_0, respectively, can be obtained [Moreno et al., 2002].

When τ_c ≪ τ_m, the quantities k and α are treated as perturbative parameters. The solution ν_out is found analytically by expanding the Fokker-Planck equation associated with the diffusion process [Moreno et al., 2002] in powers of k = √(τ_c/τ_m): the zeroth order is computed exactly for all α = Σ²/σ²_w, and the first order perturbatively in α ≥ 0 up to the first non-trivial correction. The resulting firing rate can be written as

\nu_{out} = \nu_{eff} - \alpha\,\sqrt{\tau_c\tau_m}\;\nu_0^2\, R(\hat{\Theta}) \qquad (5.2.13)
where R(t) is defined as

R(t) = \frac{\sqrt{\pi}}{2}\; e^{t^2}\,(1+\mathrm{erf}(t)) \qquad (5.2.14)
In the opposite limit, τ_c ≫ τ_m, the perturbative parameter is k⁻¹. The solution ν_out up to order k⁻², and exact for all α, reads

\nu_{out} = \nu_0 + \frac{D}{\tau_c} \qquad (5.2.15)

where the auxiliary quantity D reads

D \equiv \alpha\,\tau_m^2\,\nu_0^2 \left[ \frac{\tau_m\nu_0\,\big(R(\hat{\Theta})-R(\hat{H})\big)^2}{1-\nu_0\tau_{ref}} - \frac{\hat{\Theta}\,R(\hat{\Theta})-\hat{H}\,R(\hat{H})}{\sqrt{2}} \right]
In the next section we compare the output rate ν_out of a simulated LIF neuron with the analytical predictions given by equations 5.2.13 and 5.2.15, as a function of the input rate ν. Because the correlation time τ_c depends on the input rate (see eq. 4.5.44), we go from the regime τ_c ≫ τ_m to the regime τ_c ≪ τ_m, passing through the intermediate regime in which τ_c ∼ τ_m. In order to cover the whole range of τ_c we have performed an interpolation of ν_out between the two regimes. The interpolating curves were determined by setting the firing rate in the small-τ_c range (τ_c < τ_m) to ν_out = ν_eff + A₁√τ_c + A₂τ_c, where A₁ and A₂ are unknown functions of α and of the neuron and input parameters. Although the function A₁ has been calculated analytically (eq. 5.2.13) for small α, this procedure takes into account higher-order corrections which match the observed data more accurately for larger values of α. In any case, the interpolation and the analytical prediction are very similar for small α. In the large correlation-time limit (τ_c > τ_m) the analytically computed expression of eq. 5.2.15, ν_out = ν_0 + D/τ_c, was used. The functions A₁ and A₂ were determined by matching these two expressions, with conditions of continuity of the rate and of its derivative, at a convenient interpolation point τ_{c,inter} ∼ τ_m, which is indeed the only parameter that has to be fitted.
5.2.4 Several input populations
So far we have considered that the C pre-synaptic neurons all have the same properties (the same firing rate ν, the same correlation coefficient ρ, the same number of contacts M, etc.). However, we will often need to put together cells with common physiological properties (e.g. excitatory and inhibitory cells) or cells with similar selectivity properties.

In the simulations we have performed, we set an excitatory pre-synaptic population made up of C_e cells with mean synaptic efficacy J_e > 0, and an inhibitory population of C_i neurons with mean synaptic efficacy J_i < 0. For simplicity we assumed that neurons from different populations are not cross-correlated and that the synaptic properties were set
to be the same, i.e. the synaptic parameters were equal for both types of synapses. Although these assumptions may seem to over-simplify the natural situation, where for instance the dynamical properties of the synapses depend strongly on whether they connect interneurons, pyramidal neurons or pairs of these two cell types [Gupta et al., 2000], our purpose is to understand the combined role of synchrony and depression alone. The more complex situation in which different pre-synaptic populations make connections onto the target neuron with different properties, such as different magnitudes of depression (U, τ_v), different numbers of contacts M, or even other activity-dependent mechanisms such as facilitation [Zucker and Regehr, 2002], is left for future work.
Therefore, in this simplified scenario the expressions of the mean current and of the variances are modified by simply adding the contributions from each population:

\mu = \mu_e + \mu_i \qquad (5.2.16)

\sigma_w^2 = \sigma_{w,e}^2 + \sigma_{w,i}^2 \qquad (5.2.17)

\Sigma^2 = \Sigma_e^2 + \Sigma_i^2 \qquad (5.2.18)
5.3 Results
In this section we analyze the implications of synchronization of the input spike trains, combined with short-term synaptic depression, for the response of a neuron integrating this synaptic activity. The results obtained do not depend on our choice of the LIF neuron model because, as we will discuss later, they can be deduced from the properties of the input current itself. In other words, the most interesting result, a non-trivial non-monotonic behavior of the output of the neuron, is already present in the input current, so that any more sophisticated spike-generator model (e.g. a Hodgkin-Huxley model) would presumably lead to the same qualitative behavior.
In order to compare instances in which the input connections are made of different numbers of contacts, that is, examples of currents with different M, we need a criterion that makes the comparison meaningful. Changing M while keeping the rest of the parameters constant would imply a new current with a different mean μ, delta variance σ²_w and exponential variance Σ². The change in the output would then be the composition of several effects, and it would be hard to discriminate what really caused it. We have therefore adopted two criteria in which the mean current μ = CMJ̄ν_r remains constant while M varies, so that the changes in the response are entirely due to the change in the current variance:
1. To change the number of pre-synaptic neurons C so that CM remains constant. In this
case fewer input cells provide the same mean current because they establish more synaptic contacts with the target neuron. Besides, with this criterion the limit variance σ²_lim = CMJ̄²(1+Δ²)/τ_v (see eq. 4.5.37) also remains invariant, so that in the limit of very large input rates all choices of C and M with CM = const. have to converge to the same response statistics. Another advantage of this criterion is that (as mentioned in section 4.5.2.4) setting the number of contacts larger than one, M > 1, while renormalizing the number of pre-synaptic cells is approximately equivalent to introducing a certain amount of synchrony ρ > 0 while keeping M = 1 and C constant (see eq. 4.5.38). Because of this property, when studying different values of M (and renormalizing C) we will also be studying the case in which, all other parameters kept fixed, the synchronization is tuned by changing ρ.
2. The second reasonable way to absorb the change in M while keeping µ invariant is to renormalize the synaptic efficacies J so that MJ = const. This can be thought of as the case in which a synaptic bouton creates, from a single active zone, several synaptic specializations where transmitter release occurs. This genesis of new contacts would occur, however, with no increase of the synaptic resources, such as transmitter molecules or post-synaptic receptors, but only with a redistribution of synaptic efficacies among the newly created active zones. This hypothetical redistribution implies that the efficacy J_{i,α} at each of the M contacts decreases, because the same amount of resources (transmitter molecules, transmitter receptors) is now shared by M contacts. This situation may seem a little artificial because both the release probability U and the recovery rate 1/τ_v would remain constant at each contact while the vesicles would be filled with less glutamate, for instance¹. In this second comparative scenario only the mean current µ remains constant under variations of M, and the limit variance σ²_lim (eq. 4.5.37) decreases as 1/M. This agrees with a naive application of the Central Limit Theorem: increasing the number of independent realizations of the random release, the renormalized summed response approximates the mean response with an error that vanishes as 1/√M. In this situation the limit M → ∞ leads us to the phenomenological model of short-term depression of Tsodyks and Markram (see e.g. [Tsodyks and Markram, 1997]). In this model the noise coming from the stochastic recovery of the vesicles has been washed out by pooling the response from many independent synaptic contacts (or by averaging over many experimental trials, as they did to obtain the clean response traces which are accurately fitted by their deterministic model). The consequences of assuming such an averaged model of STD instead of a stochastic one will be revealed by comparing the results obtained for finite M with the case M = ∞.

¹Redistribution of the post-synaptic receptors into several zones would imply a rescaling of the synaptic efficacy parameter J only in the case where the released transmitter saturates the receptors, in such a way that the amount of synaptic current entering the cell is bounded not by the quantity of released transmitter but by the number of receptors.

138 Chapter 5: The response of a LIF neuron with depressing synapses
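The two comparison schemes can be summarized in a few lines of code. The sketch below (illustrative Python; the parameter values are chosen for illustration only, not taken from the thesis) evaluates the limit mean current µ_lim = CMJ/τ_v (eq. 5.3.1) and the limit variance σ²_lim = CMJ²(1 + ∆²)/τ_v (eq. 4.5.37) while M is varied under each criterion.

```python
# Sketch of the two renormalization criteria and their effect on the
# limit statistics of the input current. Formulas from eqs. 4.5.37 and 5.3.1:
#   mu_lim     = C*M*J / tau_v
#   sigma2_lim = C*M*J**2 * (1 + Delta**2) / tau_v
# Parameter values are illustrative, not taken from the thesis.

def limit_stats(C, M, J, tau_v=0.5, Delta=0.0):
    """Return (mu_lim, sigma2_lim) for C inputs making M contacts of efficacy J."""
    mu_lim = C * M * J / tau_v
    sigma2_lim = C * M * J**2 * (1 + Delta**2) / tau_v
    return mu_lim, sigma2_lim

C0, M0, J0 = 4000, 1, 0.2e-3        # reference case (J in volts)

for M in (1, 2, 10, 50):
    # Criterion 1: C*M constant -> both mu_lim and sigma2_lim are invariant
    mu1, s1 = limit_stats(C0 * M0 // M, M, J0)
    # Criterion 2: M*J constant -> mu_lim invariant, sigma2_lim decays as 1/M
    mu2, s2 = limit_stats(C0, M, J0 * M0 / M)
    print(M, mu1, s1, mu2, s2)
```

Under criterion 1 both limit statistics are untouched; under criterion 2 the mean survives but the limit variance shrinks as 1/M, which is the Central Limit Theorem argument made explicit.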
We now turn to the implications of the major constraint imposed by synaptic depression: the saturation of the mean current.
5.3.1 The saturation of µ and the subthreshold regime
Before going into the analysis of the current saturation, we define the sub- and supra-threshold regimes. By inspection of the equation of the LIF neuron, eq. 5.2.2, one can see intuitively that the stochastic process defined by the temporal evolution of the membrane potential V(t) of the cell has a mean value² µ[V] = µτ_m, where µ is the mean of the colored Gaussian current by which we have replaced the source term of eq. 5.2.2. This can be rigorously proven [Tuckwell, 1988]. This Gaussian process does not have current units because it is not exactly the current I(t) defined in eq. 4.5.1; it is R_m/τ_m times the current I(t) (see how we transformed eq. 5.2.1 into 5.2.2). Therefore this Gaussian source term (which will be called current for simplicity) has units of voltage divided by time. Now, if µ[V] = µτ_m < θ, then the mean membrane potential, after being reset to H and staying there for a period τ_ref, integrates the current and approaches the value µ[V]. Once there, it fluctuates around it with a variance σ²[V] proportional to the current variance σ²_w (for details see [Tuckwell, 1988]). Because this mean value µ[V] falls below θ, the potential reaches threshold only because of the input fluctuations, i.e. the input noise. In the limit case in which the input has no noise, if µτ_m < θ the neuron would never fire. In conclusion, in the sub-threshold regime³ the input fluctuations are crucial to make the neuron fire [Tsodyks and Sejnowski, 1995, van Vreeswijk and Sompolinsky, 1996, Shadlen and Newsome, 1998, Salinas and Sejnowski, 2000]. If, on the contrary, µτ_m > θ, the evolution of V(t) is driven mostly by the mean current µ and not so much by the fluctuations σ²_w: on its way from H to the mean value the potential crosses threshold, and a moderate amount of noise merely jitters the precise crossing time. Figure 5.1 shows an example

²To be precise, this is the mean voltage of an IF neuron without threshold, that is, of the variable V(t) governed by equation 5.2.2 alone. If the effects of the threshold are taken into account, the mean µ[V] would be smaller than the given value.

³We will characterize the sub-threshold regime by the inequality µ < θ instead of specifying that µ refers to the mean voltage. When plotting the source current, in order to visualize whether it determines a sub- or supra-threshold regime, we will usually plot the current times τ_m so that it has the proper voltage units.
of the evolution of the membrane potential in two situations, one in which the current is sub-threshold (top) and one in which it is supra-threshold (bottom). Another major consequence of these two modes of activity is the regularity of the output spike pattern, measured by the coefficient of variation of the ISIs, CV_isi [Shadlen and Newsome, 1998, Feng and Brown, 2000, Salinas and Sejnowski, 2000, Renart, 2000]. As can be seen in the figure, the variability of the ISIs when the firing activity is dominated by the fluctuations is much larger (CV = 1.06) than when the potential is mainly driven by the mean current (CV = 0.68). Because cortical neurons seem to fire in a very irregular fashion [Softky and Koch, 1993], close to a Poisson process, the sub-threshold regime has been postulated as a potential mechanism to generate such high variability [Shadlen and Newsome, 1998, Stevens and Zador, 1998]. Under this hypothesis the sub-threshold regime is obtained by a balance between the excitatory and inhibitory currents (which is why this regime is also known as the balanced regime).
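The two regimes can be reproduced with a minimal simulation of eq. 5.2.2. The sketch below (illustrative Python; the drive and noise amplitudes are chosen by hand, not taken from figure 5.1) integrates a LIF neuron driven by Gaussian white noise and computes the CV of the inter-spike intervals in a sub-threshold and in a supra-threshold setting.

```python
import math, random

def lif_cv(mu_drive, sigma_w, theta=20.0, H=10.0, tau_m=20.0,
           tau_ref=2.0, dt=0.05, T=60_000.0, seed=1):
    """Euler-Maruyama integration of dV = (-V/tau_m + mu_drive)dt + sigma_w dW.
    mu_drive in mV/ms, sigma_w in mV/sqrt(ms). Returns the CV of the ISIs."""
    rng = random.Random(seed)
    V, t_last, isis = H, None, []
    sq = sigma_w * math.sqrt(dt)
    n_ref, t = 0, 0.0
    while t < T:
        if n_ref > 0:                       # absolute refractory period
            n_ref -= 1
        else:
            V += dt * (-V / tau_m + mu_drive) + sq * rng.gauss(0.0, 1.0)
            if V >= theta:                  # spike: record ISI, reset
                if t_last is not None:
                    isis.append(t - t_last)
                t_last, V = t, H
                n_ref = int(tau_ref / dt)
        t += dt
    m = sum(isis) / len(isis)
    var = sum((x - m) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / m

cv_sub = lif_cv(mu_drive=0.0, sigma_w=4.5)    # balanced: firing is noise-driven
cv_supra = lif_cv(mu_drive=1.7, sigma_w=1.0)  # mean-driven: regular firing
print(cv_sub, cv_supra)
```

With zero mean drive the threshold is only reached through fluctuations and the ISIs are highly irregular; with µτ_m > θ the crossing time is nearly deterministic and the CV drops sharply, as in figure 5.1.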
Now, does short-term depression play any role in determining whether the working regime of the neuron is sub- or supra-threshold? Let us take a closer look at the saturation value of the mean current µ_lim, defined as the value of the mean current when the pre-synaptic cells fire at infinite rate, ν = ∞ (see eq. 4.5.5 in section 4.5.1):

µ_lim = CMJ/τ_v  (5.3.1)
If we insert into this expression biologically plausible values of the parameters, τ_v ∼ 500−1500 ms and J ∼ 0.01−0.5 mV, we can compute the number of active synapses CM needed to obtain a mean current above threshold, i.e. µ_lim > θ. Assuming θ = 20 mV, a membrane time constant τ_m = 10 ms, τ_v = 1000 ms and J = 0.2 mV, the inequality reads

CMJ/τ_v > θ/τ_m  (5.3.2)

CM > τ_v θ/(J τ_m) ≃ 10000  (5.3.3)

At the lower limit of our intervals, that is, τ_v = 500 ms and J = 0.5 mV, this number comes down to 2000. This means that in order to enter the supra-threshold regime (µτ_m > θ) at least 2000 strong synaptic contacts with fast short-term depression need to be activated at infinite spike rate, or up to 10000 contacts in the case that their efficacies are moderate (J = 0.2 mV) and recovery from depression is slow (τ_v = 1000 ms).
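The bound in eq. 5.3.3 is simple enough to check numerically; this small sketch just evaluates CM > τ_v θ/(J τ_m) for the two parameter sets quoted above.

```python
def min_contacts(tau_v, theta, J, tau_m):
    """Minimum number of active contacts C*M needed for a supra-threshold
    saturated mean current, from mu_lim = C*M*J/tau_v > theta/tau_m (eq. 5.3.3).
    Units: tau_v, tau_m in ms; theta, J in mV."""
    return tau_v * theta / (J * tau_m)

print(min_contacts(tau_v=1000, theta=20, J=0.2, tau_m=10))  # 10000.0 contacts
print(min_contacts(tau_v=500, theta=20, J=0.5, tau_m=10))   # 2000.0 contacts
```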
These figures give an idea of how large a population of pre-synaptic neurons has to be if it is to put the target neuron in a supra-threshold state (notice that if the connections are not mono-synaptic, M > 1, the numbers given above do not correspond to the number of pre-synaptic cells). They motivate the statement that depression may impose the
Figure 5.1: Two examples of the simulated membrane potential evolution of a leaky integrate-and-fire neuron. Top plot: sub-threshold balanced regime: the synaptic efficacies were chosen so that the mean current is zero, i.e. µτ_m = 0 (dashed line). The potential thus fluctuates around zero and eventually crosses the threshold (dotted line) driven by a positive fluctuation of the current (σ_w τ_m^{1/2} = 14 mV). As a result the firing pattern is very irregular (CV = 1.06). Bottom: supra-threshold regime: a different choice of the synaptic efficacies leads to a mean current µτ_m = 33.75 mV (dashed line) larger than the threshold (dotted line). The potential evolves from the reset potential H towards µτ_m, crossing θ on its way. This results in a regular firing pattern (the inter-spike intervals have little variability, CV = 0.68) which is quite insensitive to the fluctuations of the current (about the same size as in the top plot, σ_w τ_m^{1/2} = 13.66 mV). Neuron parameters: θ = 20 mV, H = 10 mV, τ_m = 20 ms, τ_ref = 2 ms. Current parameters: C_e = 4000, C_i = 1000, ν_e = ν_i = 25 Hz. Top: J_e = 0.14 mV, J_i = 0.56 mV. Bottom: J_e = 0.14 mV, J_i = 0.53 mV.
working regime to be sub-threshold in those circumstances in which the pre-synaptic signal is transmitted not by a huge number of cells but by something of the order of a thousand neurons. This number seems to be of the order of a cortical functional population size, or perhaps a little above. For all these reasons we have chosen the sub-threshold regime, in which moreover the current noise plays a critical role, as the relevant case to study. Therefore, since our aim was to study the effect of synchrony plus STD on the response of a neuron, we observe that depression naturally sets a working regime in which the modulation of noise by the input synchrony is maximally expressed.
The particular situation in which we analyze the output of the neuron is the following. Suppose the target neuron receives inputs from many cells, pyramidal cells and interneurons, at low rates (∼ 2 Hz). If this excitatory and inhibitory activity is approximately balanced, it can be modeled as a white noise with zero mean and a certain variance σ²_bg. Over this background, we focus on a specific population trying to transmit a signal to the target neuron through depressing synapses. This population consists of C neurons whose activity can be correlated with zero time-lag, that is, some of their spikes may be synchronous. Because the number of contacts that this population makes onto the output neuron will not surpass the critical number needed to produce a supra-threshold current, the modulation of the input noise is crucial to make the output neuron fire. In the following we show how critical the number of contacts per connection, M, and the level of synchrony, ρ, are in the modulation of the input fluctuations, and how this modulation is read out by the target neuron.
5.3.2 The modulation of the variance
We first show the effect of multiple-contact connections between the cells of the pre-synaptic population and the target neuron. Afterwards we investigate the effects due to synchrony. As explained above, we vary the number of contacts M while keeping the mean current µ constant in two ways:
5.3.2.1 Varying M, with CM constant
Varying M in this way is equivalent to comparing the currents produced by populations of different sizes C, each establishing the same total number of contacts, CM. Thus the mean current µ produced by each of these populations is equal, but the variance σ²_w is not. What, then, is the value of σ²_w as a function of the input rate ν when M is varied? Figure 5.2 shows a three-dimensional plot of σ²_w as a function of M and ν, for CM = 1. The first striking feature is that as soon as M > 1, the variance σ²_w as a
Figure 5.2: Current variance σ²_w (per contact) as a function of the number of contacts M and the input rate ν when CM is kept fixed and equal to one. σ²_w is given in units of J² and ν in Hz. Other parameters: ∆ = 0, U = 0.7, τ_v = 500 ms and ρ = 0.
function of ν shows a non-monotonic behavior with a maximum. This maximum becomes more and more prominent as M increases. Although it is not completely clear in the figure (because of the small ν range shown), in the limit ν → ∞ the variance converges to σ²_lim independently of the number of contacts M. As discussed earlier in this section, this happens because σ²_lim depends on the product CM, which is held constant. On the other hand, we also have the correlation magnitude α = Σ²/σ²_w, which measures the importance of the exponential correlations in the input and whose value⁴, −1 < α < 0, tells us how much the input current deviates from an uncorrelated white noise (see section 5.2.3). This correlation magnitude α depends neither on the number of pre-synaptic cells C nor on the synaptic efficacy J, but it does depend on M (see eqs. 4.5.35 and 4.5.36). Therefore it varies with M in the same way in both comparison schemes. Figure 5.3 illustrates α as a function of M and ν. First we observe that in the limit ν → ∞, α vanishes. This happens because while σ²_w tends to a finite value, Σ² goes to zero as ν increases. The second observation is that, although qualitatively similar, for different M the behavior of α is scaled up, i.e. the magnitude of the minimum grows with the number of contacts M and the convergence

⁴The correlation magnitude is bounded by −1 and zero because the exponential variance Σ² is negative, due to intrinsic properties of the correlations introduced by depression, which subtract probability for a subsequent release after one release is observed. Moreover, |Σ²| < σ²_w because, as we showed, the variance of the release count in a time window T, with T ≫ τ_c, equals (σ²_w + Σ²)T. Because a variance is non-negative, |Σ²| cannot be larger than σ²_w.
Figure 5.3: Correlation magnitude α as a function of the number of contacts M and the input rate ν. Parameter values as in figure 5.2: ∆ = 0, U = 0.7, τ_v = 500 ms, ρ = 0.
to zero is also slower (as M increases, the rate needed to make the correlation magnitude α negligible becomes larger). In figure 5.4 all the current properties (µ, τ_c, σ²_w, Σ² and α) are plotted together as a function of the input rate ν for several values of the number of contacts (M = 2, 10, 50, 150), when in addition to the population current a zero-mean background white noise is included in the input. In the top plot, the mean current µ(ν) and the correlation time scale τ_c are common to all M values. We can check that the mean current µ, which is shown in voltage units (that is, what is plotted is the product µτ_m), saturates below the threshold value 20 mV. The time constant τ_c displays a similar though inverse behavior: it decays to zero with the same convergence rate. A first consequence can be extracted already at this point: since the exponential correlations will be detectable by the output neuron only if τ_c < τ_m (see section 5.2.3), the input current can be treated as a white noise with variance σ²_w until the mean current starts to saturate; as it saturates, τ_c becomes smaller than τ_m and corrections can be observed in the output. The middle plot shows σ²_w and Σ² for several M. Here σ²_w contains the contribution of the input population under analysis plus a constant term σ²_bg due to the background activity. All σ²_w lines show a maximum (middle plot), although for small M it is not detectable in the plot. In the same plot we observe that Σ² displays a behavior similar to σ²_w but with negative values. However, the ratio between the two variances, that is, the correlation magnitude α shown in the bottom plot, grows as M increases. As a consequence, the effect of the exponential negative correlations, parameterized by α and τ_c, will only be detectable when M is large. The reason
Figure 5.4: Current parameters as a function of ν for several values of (C, M) with CM invariant. The current is composed of: i) a background created by an excitatory population of 2000 neurons with synaptic efficacy J = 0.05 mV and an inhibitory population of 500 interneurons with J = −0.2 mV, both firing at a constant rate of 2 Hz; ii) the population under focus, composed of C excitatory neurons with J = 0.19 mV, each making M contacts so that the total number of contacts is always CM = 3750. In all cases except where mentioned, ρ = 0. Top plot: superposition of two graphs, the mean current µτ_m (left axis) and the correlation time τ_c (right axis). Both apply to all combinations of (C, M) because τ_c does not depend on either of them and µτ_m depends only on their product. Note that µ_lim τ_m falls below the threshold θ, located at 20 mV. Middle plot: current variance σ²_w τ_m (solid lines) and exponential variance Σ²τ_m (dashed lines) for four different values of (C, M). The dotted line represents a mono-synaptic configuration with synchrony ρ = 0.055, to illustrate the equivalence between M > 1 and ρ > 0 (see text). Bottom plot: correlation magnitude α = Σ²/σ²_w. Inset values apply to middle and bottom plots.
is that, although for any M, α exhibits a minimum (more pronounced the larger M) and there is a ν range in which it is not negligible, for a low number of contacts this range is small and, more importantly, within it τ_c > τ_m. As M is increased, the range where α is not negligible scales up while τ_c remains insensitive to the variation of M. Thus, for large M, τ_c becomes smaller than τ_m while α is still appreciable, and therefore the correction to the white-noise description becomes important. This will be illustrated in the coming figures of the output rate.

The position of the maximum of σ²_w also differs from one M to another. An expression for the position ν_max of the maximum can be obtained from the formula for σ²_w, eq. 4.5.35. Assuming ρ = 0 (no synchronization) and ∆ = 0 (all efficacies equal), the maximum is achieved at

ν_max = 2 [−2 + U − √((1 − 3M + 2M²) U² (2 − U))] / [τ_v U (4 − U² − 4UM + 2U²M)]  (5.3.4)

which depends only on the parameters M and U. From this expression we can deduce for which pairs of M and U a maximum exists: this happens whenever ν_max is non-negative, which occurs if the number of contacts exceeds

M > (U + 2)/(2U)  (5.3.5)

This implies that the smaller U is, the more contacts are needed to produce a non-monotonic behavior of σ²_w. If release is completely reliable, U = 1, two contacts are enough to produce a maximum, though it will not be very prominent. The value of ν_max is plotted as a function of the two parameters U and M in figure 5.5 for τ_v = 500 ms. Although at first glance it seems to be widely modulated by these parameters, in a sort of hyperbolic fashion, we will show that in practice it always falls at a low rate. To measure how prominent the bump in the variance is, we have plotted the ratio between σ²_w at the maximum and the limit value σ²_lim. This ratio is shown in fig. 5.5 (bottom plot) for the same τ_v. Comparing both plots, we see that the bump becomes prominent when both M and U are large, and in this region ν_max shows a large plateau at low rates. Changing the value of τ_v within a plausible range is of little help. We therefore conclude that this non-monotonic behavior becomes more relevant the larger the number of contacts and the closer the release probability is to one. Whenever these two conditions hold, the maximum is located at low rates (∼ 5−10 Hz).
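These expressions are easy to explore numerically. The sketch below (illustrative Python, assuming one reconstruction of eq. 5.3.4 with the recovery time in seconds) evaluates ν_max and the existence condition of eq. 5.3.5.

```python
import math

def nu_max(M, U, tau_v=0.5):
    """Position (Hz) of the maximum of sigma_w^2 versus nu (eq. 5.3.4),
    assuming rho = 0 and Delta = 0; tau_v in seconds.
    Returns None when no maximum exists, i.e. when eq. 5.3.5 is violated."""
    if M <= (U + 2) / (2 * U):          # eq. 5.3.5: existence condition
        return None
    num = 2 * (-2 + U - math.sqrt((1 - 3*M + 2*M**2) * U**2 * (2 - U)))
    den = tau_v * U * (4 - U**2 - 4*U*M + 2*U**2*M)
    return num / den

print(nu_max(2, 1.0))     # U = 1: two contacts already give a maximum
print(nu_max(10, 1.0))    # larger M: the maximum moves to a few Hz
print(nu_max(2, 0.5))     # None: M must exceed (U+2)/(2U) = 2.5
```

Scanning M and U with this function reproduces the qualitative picture of figure 5.5: wherever the maximum exists and is prominent, it sits at low input rates.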
What is the impact of the synchronization of the incoming spikes (ρ > 0) on the modulation of the variance σ²_w and the correlation magnitude α? As noted when the expression for σ²_w was derived, the implications of ρ > 0 seemed to be equivalent to setting M > 1 while renormalizing C to keep CM constant. This is shown in figure 5.4
Figure 5.5: Position ν_max of the variance maximum (top) and the ratio σ²_w(ν_max)/σ²_lim (bottom) as a function of the number of contacts M and the release probability U, when the product CM is held constant. Notice that the region of the (M, U) plane where the ratio is significantly above one is the plateau of the ν_max plot, where the rate always falls below 10 Hz. The white corner in the top plot represents a region where there is no maximum, and corresponds to the near red corner in the bottom plot. Other parameters: τ_v = 500 ms, ∆ = 0.
where, together with the case (C = 25, M = 150, ρ = 0), we have included the case (C = 3750, M = 1, ρ = 0.05; dotted line). Although the lines do not overlap, the behavior of σ²_w, Σ² and therefore α is very similar. The mean µ and the correlation time τ_c do not depend on ρ. Thus, we have shown that synchronizing the Poisson activity within a given population of C cells which connect mono-synaptically (M = 1) to a given target neuron is qualitatively equivalent to decreasing C while increasing M in such a way that the product remains invariant. This is interesting for practical purposes: first, it reduces a situation with cross-correlated input trains to one in which fewer afferent trains are independent but make several contacts onto the post-synaptic cell, and dealing with independent trains is always simpler than dealing with cross-correlated ones. Secondly, simulations of correlated trains are harder to implement than simulations with independent afferent trains. We have therefore performed numerical simulations only for the case of independent Poisson⁵ spike trains impinging on connections made up of several contacts (by means of this equivalence, these simulations also cover the cases with cross-correlated inputs).

Figure 5.6 confirms this point by showing an example in which a population with a fixed number of cells (C = constant) establishing mono-synaptic contacts (M = 1) increases the zero-lag cross-correlations among its neurons. The resulting current, described by all its parameters µ, σ²_w, Σ², α and τ_c, closely resembles the previous situation in which M was above one. The values of ρ used here are all lower than 0.05, which falls within the range of the experimental data [Zohary et al., 1994].
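One standard construction for such zero-lag correlated Poisson trains is sketched below (a hypothetical Python fragment under assumptions: the thesis does not specify this construction, and here ρ plays the role of the pairwise spike-count correlation of the model, not necessarily the thesis' exact definition of ρ). Each neuron receives a private Poisson train of rate (1 − ρ)ν plus events copied from a shared "mother" train of rate ρν, so that every train is Poisson with rate ν and any pair shares a fraction ρ of synchronous spikes.

```python
import random

def correlated_poisson(C, nu, rho, T, seed=0):
    """C spike trains of rate nu (Hz) over T seconds with zero-lag pairwise
    count correlation ~ rho (single-interaction-process construction):
    each train = private Poisson((1-rho)*nu) + shared Poisson(rho*nu)."""
    rng = random.Random(seed)

    def poisson_train(rate):
        t, spikes = 0.0, []
        while True:
            t += rng.expovariate(rate)      # exponential inter-event interval
            if t > T:
                return spikes
            spikes.append(t)

    shared = poisson_train(rho * nu)        # synchronous events, copied to all
    return [sorted(poisson_train((1 - rho) * nu) + shared) for _ in range(C)]

trains = correlated_poisson(C=2, nu=20.0, rho=0.2, T=500.0)
```

Binning the trains into small windows and computing the Pearson correlation of the counts recovers a value close to ρ, which makes this a convenient test input for simulations with synchrony.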
5.3.2.2 Varying M, with MJ constant

We turn now to the second case, which consists in determining the effect of an increase in the number of contacts M while renormalizing the synaptic efficacies J so that the mean current remains invariant. A common argument among neuroscientists has been that, since synaptic release is unreliable, i.e. U < 1, neurons create several contacts among themselves to average out the intrinsic unreliability by pooling independent responses. A few works have shown that this idea indeed works: Zador (1998) computes the information conveyed by the output spike pattern of a neuron about the input spike times when synapses are unreliable, and shows that this information grows monotonically as the number of contacts between the cells increases. Fuhrmann et al. [2002] show that the information contained in the

⁵Without going into details, we must also restrict this equivalence to the case in which the inputs are Poisson. If, for instance, spikes come in bursts, the equivalence does not hold. In other words, depression distinguishes the case in which the groups of correlated contacts are quenched (M > 1, ρ = 0) from the situation in which they are all time-varying (M = 1, ρ > 0) (see [Senn et al., 1998] for an example where this subtle point makes the difference).
Figure 5.6: Current parameters as a function of ν for several values of the correlation ρ, where all the C = 3750 neurons make a mono-synaptic connection, M = 1. The current is composed of a background generated as in fig. 5.4 and the component coming from the focus population. Top plot: superposition of two graphs, the mean current µτ_m (left axis) and the correlation time τ_c (right axis). Both apply to all values of ρ because neither depends on ρ. The mean current again saturates below threshold, µ_lim τ_m < θ. Middle plot: current variance σ²_w τ_m (solid lines) and exponential variance Σ²τ_m (dashed lines) for four different values of ρ (see inset). Bottom plot: correlation magnitude α = Σ²/σ²_w. Inset values apply to middle and bottom plots. Notice that this figure resembles figure 5.4.
Figure 5.7: Current variance σ²_w (per contact) as a function of the number of contacts M and the input rate ν when MJ = J′ is kept fixed. σ²_w is given in units of J′². Other parameters: ∆ = 0, U = 0.7, τ_v = 500 ms, ρ = 0 and C = 1, so that it represents the variance per contact.
amplitude of an EPSP about the time at which previous spikes arrived at the synaptic terminal also increases monotonically with M, because many contacts eliminate noise in this output signal. However, the argument that multiple contacts are a mechanism for averaging out the unreliability seems at odds with the existence of cortical synapses with a high probability of release [Paulsen and Heggelund, 1994, 1996, Bellingham et al., 1998, Stratford et al., 1996]. If unreliability can be overcome by tuning the biophysical machinery of the single synaptic bouton so that the release probability increases, why would neurons not beat this limitation by establishing more and more contacts? The question still seems to be open, suggesting that multiple contacts between neurons may be useful for other purposes. In this section we study the transformation of the second-order statistics of the current when a single contact spreads into several smaller active zones⁶. The net effect of this increase is a strong decrease of the variance σ²_w. This is shown in figure 5.7. In the limit M → ∞, at MJ fixed, σ²_w becomes

lim_{M→∞} σ²_w = σ²_dm ≡ [C J′² U ν / (1 + Uντ_v)] · [U / (1 + Uντ_v (1 − U/2))]  (5.3.6)

⁶As everywhere in this thesis, we do not distinguish whether this multiplication of functional contacts occurs within a synaptic bouton or whether the pre-synaptic axon creates new synapses.
where J′ is the constant value of the product MJ. This limit variance σ²_dm again displays a non-monotonic behavior, with a maximum at

ν_max = (1/2) (1 − √(9 − 4U)) / ((−2 + U) U τ_v)  (5.3.7)

which again, for realistic values of τ_v, is always lower than 10 Hz. In this limit the current coincides with the current derived from the deterministic model (dm) of Tsodyks and Markram [1997]. Although not shown in this thesis, we have proven this identity analytically by computing the correlation function of the deterministic model and obtaining the same function one obtains in the limit M → ∞, J → 0 with MJ = constant of the stochastic multiple-contact model. In this phenomenological model the synaptic response is not stochastic. Thus the variance σ²_dm is due only to the pre-synaptic stochasticity and, therefore, its magnitude is smaller than σ²_w at any finite value of M, for all ν. A remarkable feature of this model arises when the input rate ν is very large and depression saturates the synapse: in this regime the fluctuations in the current vanish. This occurs because the size of the synaptic responses becomes smaller as ν increases. In the ν → ∞ limit the magnitude of each EPSC goes to zero while their number per unit time becomes infinite: the current is no longer a point process but a constant flux of ions into the cell. Nevertheless, one should bear in mind that this is an averaged model which essentially violates the quantal description of the synaptic response, because it assumes a continuum of EPSC sizes. It is useful for many purposes but, when modeling the input current to a neuron up to second-order statistics, important differences between the deterministic and the stochastic model appear.
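The deterministic (averaged) limit is easy to simulate. The sketch below (an illustrative Python fragment; the parameter values are arbitrary) iterates the Tsodyks–Markram resource variable x over a Poisson spike train, with x → x(1 − U) at each spike and recovery towards 1 with time constant τ_v, and checks that the mean transmitted amplitude Ux matches the steady-state value U/(1 + Uντ_v). This is what makes the mean current µ = CJ′Uν/(1 + Uντ_v) saturate at CJ′/τ_v, in agreement with eq. 5.3.1.

```python
import math, random

def mean_release(U, nu, tau_v, T=2000.0, seed=0):
    """Average of U*x just before each spike for the deterministic depression
    model driven by a Poisson train of rate nu (Hz); tau_v, T in seconds."""
    rng = random.Random(seed)
    x, t, total, n = 1.0, 0.0, 0.0, 0
    while t < T:
        dt = rng.expovariate(nu)                      # Poisson inter-spike interval
        t += dt
        x = 1.0 - (1.0 - x) * math.exp(-dt / tau_v)   # recovery towards 1
        total += U * x                                # transmitted amplitude
        n += 1
        x *= (1.0 - U)                                # depression: resources used
    return total / n

U, nu, tau_v = 0.5, 30.0, 0.5
sim = mean_release(U, nu, tau_v)
theory = U / (1.0 + U * nu * tau_v)                   # steady-state <U*x>
print(sim, theory)
```

Increasing ν makes the mean amplitude per spike shrink as 1/ν, so the mean current ν·⟨Ux⟩ saturates while the per-event fluctuations die out, which is the regime discussed above.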
Figure 5.8 shows several parameters of the synaptic current as a function of the input rate for M = 1, 2, 10, 100. The last of these M values is qualitatively similar to the deterministic model. This time we have taken a population of both excitatory and inhibitory neurons, all firing at the same rate ν. Although inhibition is present, the net mean current is not balanced, because the number of excitatory neurons was taken to be larger than the number of inhibitory ones. Even though the mean µ is not zero, it saturates below threshold, µ_lim τ_m < θ. For the chosen values of U and ∆ the exponential variance Σ² is the same for all M, but this is not true in general. As in the previous section, increasing M enhances the correlation magnitude α with no change in the correlation time τ_c. For this reason, when the number of contacts is small, the effect of the exponential negative correlations is weak, whereas for large M, or in the deterministic model, this correlation cannot be ignored when quantifying the neuron's response rate, as we will see in the next section.
Figure 5.8: Current parameters as a function of the input rate ν for several values of (M, J) with MJ = J′ constant. The current is composed of: i) an excitatory population of C_e = 2000 units with efficacy J′_e = 0.6 mV and an inhibitory population of C_i = 500 interneurons with J′_i = −1.6 mV, both firing at the same rate ν. In each population the neurons make M contacts, so that the total number of contacts grows as (C_e + C_i)M. In all cases ρ = 0. Top plot: superposition of two graphs, the mean current µτ_m (left axis) and the correlation time τ_c (right axis). Both apply to all combinations of (M, J). Note that µ_lim τ_m falls below the threshold θ = 20 mV. Middle plot: current variance σ²_w τ_m (colored solid lines) and exponential variance Σ²τ_m (dashed line, corresponding to all four cases). Bottom plot: correlation magnitude α = Σ²/σ²_w. Inset values apply to middle and bottom plots. Parameters: U = 1, τ_v = 500 ms.
5.3.3 The output rate of a LIF neuron
5.3.3.1 Output rate at CM fixed
Finally, we analyze how the modulation of the mean µ, the variance σw²
and the correlation magnitude α affects the firing properties of an integrate-and-fire neuron.
We have done this both analytically and numerically through simulations. The analytical
procedure used to compute the output rate νout was described in section 5.2.3. It takes
into account corrections, arising from the exponential correlations in the input current, to the
standard expression for the rate of a LIF neuron driven by white noise [Ricciardi, 1977].
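As a concrete reference point, the white-noise baseline that these corrections modify can be evaluated numerically. The sketch below is our own illustrative code, not the thesis implementation: it computes the standard mean-first-passage-time (Ricciardi) rate for a LIF neuron with the parameters used throughout this chapter (θ = 20 mV, reset H = 10 mV, τm = 10 ms, τref = 2 ms); the function name is ours.

```python
import math

def lif_rate_white_noise(mu, sigma, theta=20.0, H=10.0,
                         tau_m=0.010, tau_ref=0.002, n=4000):
    """Output rate (Hz) of a LIF neuron driven by Gaussian white noise,
    from the mean-first-passage-time (Ricciardi) formula:
      1/nu_out = tau_ref + tau_m*sqrt(pi) *
                 integral_{(H-mu)/sigma}^{(theta-mu)/sigma} e^{u^2}(1+erf(u)) du
    mu, sigma: mean and standard deviation of the free membrane potential (mV)."""
    a = (H - mu) / sigma       # lower limit: reset potential, in noise units
    b = (theta - mu) / sigma   # upper limit: threshold, in noise units
    f = lambda u: math.exp(u * u) * (1.0 + math.erf(u))
    du = (b - a) / n
    # trapezoidal rule over [a, b]
    integral = (0.5 * (f(a) + f(b)) + sum(f(a + i * du) for i in range(1, n))) * du
    return 1.0 / (tau_ref + tau_m * math.sqrt(math.pi) * integral)
```

With a subthreshold mean and large fluctuations (e.g. µ ≈ 12 mV, σ ≈ 10 mV, as in fig. 5.11) this yields a rate of a few tens of Hz, while shrinking σ at fixed subthreshold µ lowers the rate, which is the mechanism behind the non-monotonic response discussed below.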
We start by plotting the output rate νout and CVout for the currents shown in figure 5.4,
which illustrated the case with CM fixed. Besides the input from this population, we added a
balanced current component with zero mean and variance σ²bg, where bg refers to background.
This background current was produced by the activity of an excitatory and an inhibitory pop-
ulation composed of 2000 and 500 neurons respectively, connected mono-synaptically and
firing at 2 Hz. To distinguish between the two components of the current, namely the back-
ground and the current from the population under analysis, we will refer to the neurons
belonging to the latter as neurons from pop. As discussed previously in the Results section,
these examples of the current are qualitatively equivalent to the case in which a given pop-
ulation correlates its activity with different degrees of synchronization ρ. Therefore, there
exists a correspondence between the Ms picked for the figure and the values of ρ which would
produce the same response: M = 2, 10, 50 and 150 correspond to ρ = 0.00025, 0.0022, 0.013
and 0.04, respectively. Figure 5.9 shows the simulation results; the analytical prediction
taking into account the corrections due to the exponential negative correlations (solid lines,
top plot), along with the prediction neglecting this correction (dashed lines, top plot); and,
for comparison, the current parameters µ and σw² (second plot from the bottom) and α and τc
(bottom plot). The first thing to notice is that the output rate is strongly modulated by the
variance σw², whereas the mean current plays a less important role. This, of course, occurs
because all these currents are subthreshold. Thus, the output rate is much higher when M is
big, because in that case the fluctuations of the current are larger and enable the potential
to fluctuate and hit threshold more often. The non-monotonic behavior exhibited by the current
variance σw² for large M is now inherited by the output rate. This is only evident in the
two cases with bigger M. In those cases (M = 50, 150, or ρ = 0.013, 0.04), νout increases
very fast, reaching a maximum value at about ν ≃ 10 Hz. Thereafter, the rate decreases
as a negative power (fit not shown) to a non-zero value independent of M. This saturation
value coincides with the rate which would result from a Gaussian white noise current with
mean µlim (eq. 4.5.5) and variance σ²lim (eq. 4.5.37). The maximum does not coincide with
the maximum of σw². The reason is that when σw² reaches its maximum, the mean µ is still
[Figure 5.9 here: Response of a LIF neuron for different Ms]
Figure 5.9: Numerical results and theoretical prediction of the non-monotonic response of a
LIF neuron for input current examples with different Ms, while CM is held constant. The input
current is taken from fig. 5.4. Top plot: output rate νout as a function of the input rate ν.
Solid lines are the theoretical prediction with the correction due to the exponential correlation
introduced by depression. Dashed lines are the theoretical predictions neglecting the
effect of the exponential correlation, and therefore assuming the input to be white noise.
Symbols (in a slightly different color for better visualization) are the simulation results.
Orange squares mark the position of the simulation examples shown in figs. 5.10-5.13.
Second plot: output coefficient of variation of the ISIs, CVout, for the three cases with
higher output rates. Third plot: (the same as in fig. 5.4, plotted here to visualize
the input together with the output) superposition of the mean current µ (magenta line, right
axis) and the current variance σw² (left axis). Bottom plot: superposition of the correlation
magnitude α (left axis) and the correlation time constant τc (magenta line, right axis). Colors
in the top plot inset apply to all plots.
growing significantly, so that it still contributes to the increase of νout.
What about the correction to the white noise? As can be seen in the analytical predictions
of the output rate, significant differences only arise when M is either 50 or 150 (see fig.
5.9). For fewer contacts, the exponential correlations produce only a small deviation from the
white noise case. Besides, for M = 50, 150, although the correlation magnitude (bottom plot) at
an input rate of about ∼ 10 Hz is prominent, α ∼ −0.8, since τc ≈ τv the correction is
small. Only for higher input rates, when τc < τv, does the correction become important. The
modification produced by these negative correlations is to decrease the output rate because,
as τc becomes smaller, the effective current variance that the LIF neuron starts to observe
is the sum of the variance σw² and the exponential variance Σ² (which is negative).
The agreement between simulation and theory is excellent except near the maxima
in the cases M = 50, 150. The reason for this discrepancy is the violation of the diffusion
approximation (see section 5.2.1). When M is large, upon arrival of a spike along a
pre-synaptic fiber, up to M EPSCs might be produced at the same time. This provokes a
substantial jump of the membrane potential and the diffusion approximation no longer holds:
the size of the discontinuities of V(t) is too large to be ignored. Why, then, do these cases with
large M give a good fit for larger input rates? The answer is once again saturation: as the
synapse starts to saturate, a larger fraction of the M contacts (from the same fiber) are empty
when a spike arrives, so that only the rest have a chance to release transmitter. Therefore,
the synchronous arrival of M EPSCs at the output neuron no longer occurs and the diffusion
approximation becomes valid.
5.3.3.2 Saturation eliminates synchrony
The influence of depression on the spatial structure of the inputs is illustrated in four
snapshots of the neuronal response and its input current at four different input rates ν (fig-
ures 5.10, 5.11, 5.12 and 5.13). We have chosen the most extreme of the examples (C =
25, M = 150) to clearly capture this transformation. The snapshots have been taken at the
input rates indicated by the orange squares in the top plot of figure 5.9.
These four figures (5.10, 5.11, 5.12, 5.13) are composed of the following plots: i) Top
plot: a rastergram of the input spikes, where the population under focus comprises only
the first 25 neurons, while indices from 25 up to 100 represent a fraction of the neurons
firing in the background at 2 Hz. ii) Second plot: a rastergram of the synaptic releases. The
population under focus accounts for the first 3750 contacts (because it has 25 neurons making
150 contacts each), although just the 660 belonging to four pre-synaptic neurons are shown.
The rest of the contacts are mono-synaptic synapses established by background neurons. iii)
Third plot: the instantaneous current, Iτm(t), defined as the total current received in a time
window of length τm. The blue line is the balanced background current, while the black
line represents the total current Iτm(t), that is, the background plus the current from the
population under focus. The red dashed line represents the average of Iτm(t), which equals
µτm. iv) Bottom plot: the membrane potential V(t). The spikes are added a posteriori for clarity.
The dotted line represents the threshold (θ = 20 mV) and the dashed red line represents the
mean potential ⟨V(t)⟩ = µτm. The four figures 5.10, 5.11, 5.12 and 5.13 differ in the
input rate ν, which is 2, 10, 80 and 200 Hz, respectively. Depression was parameterized with
τv = 500 ms and U = 1. This choice of reliable synapses was made to make the effect of
desynchronization more obvious. A different value of U would only attenuate the modulation
of the fluctuations.
• In the first figure, 5.10, the population under focus is firing at the background level, so
it cannot be distinguished from the rest of the neurons in the spike rastergram. In the
release rastergram, the difference could not be more obvious: while a background spike
produces one release, a spike from our population produces up to 150 releases at the
same time (although it looks like a vertical line, it is in fact the superposition of 150 dots).
We should make clear that this rastergram of releases only shows the EPSCs produced
by four neurons of the input population (and about 600 more from the background).
The rate ν is small enough to leave time between spikes for the vesicles to recover, so
most of the time all 150 EPSCs occur simultaneously. Therefore, the arrival
of a single spike from this population produces a big fluctuation in the input current
(third plot), which after the spike goes back to zero and then up again with the next
synchronous arrival of EPSCs. In this way, Iτm(t) displays large fluctuations, σw√τm =
12 mV, but a small mean, µτm = 6.8 mV. Meanwhile, the membrane potential keeps
going up and down, driven by these large fluctuations. However, not all of them make
V(t) reach threshold, because the baseline where it is waiting between fluctuations is
too low (the average potential is µτm = 6.8 mV). Because of these big jumps in the
potential, the diffusion approximation gives a rather rough description of the process,
and the estimation of νout is not so good.
• In the next figure, 5.11, the input rate has been increased to ν = 10 Hz, so that now
the input spikes from pop arrive at a higher rate than the background. Still, the syn-
chronous spatial structure of the releases produced by neurons from pop is maintained:
the small bars are not so solid now, but they arrive much more often. This keeps the fluctu-
ations of the current very large and makes them occur more often. The result is that the
potential keeps making sudden jumps towards threshold. Because input fluctuations
occur in an almost continuous fashion, the mean potential has come up to almost 12
mV, making the distance to threshold shorter and much easier for a fluctuation to make
the neuron fire. At this point, although the input fluctuations are smaller than before,
σw√τm = 10.3 mV, the maximal rate νout is achieved because of a higher mean current,
µτm = 11.8 mV. The difference between the actual νout and the analytical prediction
is due to the synchronous arrival of EPSCs, which makes the diffusion approximation
a coarse estimation. However, it still gives a qualitatively correct behavior.
• In the third figure, 5.12, things have changed qualitatively. The input rate is well above
saturation, ν = 80 Hz, so that more than 97% of the contacts from pop have no ready-
for-release vesicle, and the incoming spikes only trigger a response in a small
fraction of the contacts they visit. For this reason, the spatial structure in the raster-
gram of releases has been blurred, and the vertical bars are harder to detect. As a
result, the fluctuations of the current have dropped to σw√τm = 5.4 mV, while the
average is almost at its highest value, µlim τm = 14.2 mV. The potential V(t) now lives
closer to θ than before, but it crosses it much less often because there are no
more big abrupt jumps. Its trajectory now looks smoother, although it still shows some
sudden appreciable jumps. In any case, the diffusion approximation gives an accurate
prediction of the output rate, which is now νout = 12.7 Hz (fig. 5.12).
• In figure 5.13, we have set the input rate to a very large value, ν = 200 Hz, to emphasize
this idea: the spatially synchronized input pattern, which was present in the release
rastergram when the input rate was low, has now almost disappeared, and looks very
much the same as the pattern produced by the Poisson background activity (second
plot). Fluctuations in the current are now as low as those of a de-correlated white
noise.
The point we have tried to stress with this series of pictures can be summarized as follows:
if a given population of neurons fires with a certain degree of synchronization, short-term
synaptic depression will de-correlate those inputs if their firing rate is high enough (well
beyond the saturation frequency νsat of eq. 2.3.5). We will discuss this
result briefly in the next section.
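The de-correlating effect summarized above can be reproduced with a very small Monte Carlo model. The sketch below is our own illustrative code (function name ours), using the chapter's parameters M = 150, U = 1, τv = 500 ms and τm = 10 ms: a single Poisson fiber drives M univesicular contacts, and we measure the mean and variance of the release count in windows of length τm. At ν = 10 Hz the synchronized release volleys make the windowed count highly variable; at ν = 200 Hz saturation leaves mostly independent, Poisson-like releases and the variance collapses while the mean barely changes.

```python
import random

def release_window_stats(nu, M=150, U=1.0, tau_v=0.5, tau_m=0.010,
                         T=20.0, dt=0.001, seed=0):
    """Mean and variance of the number of vesicle releases per window of
    length tau_m, for one Poisson fiber (rate nu, Hz) making M depressing
    contacts: each contact holds at most one vesicle, releases it with
    probability U per spike, and refills with rate 1/tau_v."""
    rng = random.Random(seed)
    docked = [True] * M
    steps = round(T / dt)
    releases = []
    for _ in range(steps):
        r = 0
        if rng.random() < nu * dt:               # pre-synaptic spike this step
            for i in range(M):
                if docked[i] and rng.random() < U:
                    docked[i] = False            # vesicle released
                    r += 1
        for i in range(M):                       # independent vesicle recovery
            if not docked[i] and rng.random() < dt / tau_v:
                docked[i] = True
        releases.append(r)
    w = round(tau_m / dt)                        # window of length tau_m
    counts = [sum(releases[t:t + w]) for t in range(0, steps - w + 1, w)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return mean, var
```

With these parameters the windowed variance at ν = 10 Hz comes out several times larger than at ν = 200 Hz, even though the mean windowed count is similar, mirroring the contrast between figures 5.11 and 5.13.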
5.3.3.3 Output rate at MJ fixed
If we keep MJ constant as we increase M, the main effect is a substantial decrease
of the current variance σw². Thus, since we have imposed that the population under focus
puts the output neuron in a subthreshold regime (µlim τm < θ), a decrease in σw² results in a
decrement of the output rate. This is exactly what figure 5.14 shows. We have taken exactly
the same example as in figure 5.8, where a population composed of inhibitory and excitatory
neurons fires at rate ν in such a way that the saturation current falls below threshold. We
[Figure 5.10 here. Input current: ν = 2 Hz, µ = 6.8 mV, σw = 12 mV. Output: νout = 20 Hz, CVout = 0.98.]
Figure 5.10: Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (I). The current is the same as in fig. 5.4: a background component
of excitatory and inhibitory neurons (indexed in the top plot from 25 to 2524) making a single
contact each (contacts indexed in the second plot from 3750 to 6249) onto the target neuron, and
firing as Poisson processes at a constant rate of 2 Hz. The population pop has 25 excitatory
neurons (indexed in the top plot from 0 to 24) making M = 150 contacts each (indexed in the
second plot as: 0-149 neuron 0, 150-299 neuron 1, ..., 3600-3749 neuron 24). The
statistics are Poisson with a firing rate of 2 Hz (in this figure). Top plot: rastergram of the
incoming spikes labeled by input neuron index (each dot represents the arrival of a spike).
Second plot: rastergram of the synaptic releases labeled by contact index. Vertical lines are
the superposition of synchronous releases by contacts belonging to the same neuron. Third
plot: instantaneous current Iτm(t), defined as the afferent current in a time window of length
τm. The black line represents the total current, while the blue line represents the background.
The red line represents the temporal average of the total current. Bottom plot: V(t) of the LIF
neuron integrating the current depicted in the plots above. Action potentials are added a
posteriori for visualization. The brown line represents θ = 20 mV, while the red line the temporal
average of V(t). Parameters: τm = 10 ms; τref = 2 ms; H = 10 mV.
[Figure 5.11 here. Input current: ν = 10 Hz, µ = 11.8 mV, σw = 10.3 mV. Output: νout = 27 Hz, CVout = 0.88.]
Figure 5.11: Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (II). The same as in figure 5.10 but with an input rate ν = 10 Hz.
[Figure 5.12 here. Input current: ν = 80 Hz, µ = 13.9 mV, σw = 5.4 mV. Output: νout = 12.7 Hz, CVout = 0.72.]
Figure 5.12: Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (III). The same as in figure 5.10 but with an input rate ν = 80 Hz.
[Figure 5.13 here. Input current: ν = 200 Hz, µ = 14.1 mV, σw = 4.3 mV. Output: νout = 6.5 Hz, CVout = 0.95.]
Figure 5.13: Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (IV). The same as in figure 5.10 but with an input rate ν = 200 Hz.
considered four cases in which the population made mono-synaptic contacts with the target
neuron, or made M = 2, 10 or 100 contacts, while the efficacy J was adjusted accordingly. The
last example, with M = 100, produced almost the same results as the deterministic model
(data not shown).
The output rate of the neuron (top plot of fig. 5.14) is severely affected by the redistribution
of contacts. Even the change from M = 1 to M = 2 produces an important change
in νout. As in the previous scenario, when M becomes large enough, the non-monotonic
behavior of νout appears. However, now the absolute rates for large M have become
very small (νout ∼ 5 Hz). The reason is that, since σw² decreases rapidly as M increases
(see middle plot), the distance from the mean current µ to the threshold θ, which remains
invariant to changes in M, becomes too big compared with σw². We could have fine-tuned
θ − µlim to be very small, so that even when M = 100, νout would not be so small, but
in that case the differences between the different Ms would not be very clear.
As before, the effect of the negative correlation is significant only when M is large, and
the correction sets νout below the prediction obtained with a white noise input.
The agreement between simulation and theory becomes better this time as M increases,
because of the associated reduction of J, which in turn makes the discontinuities of the
potential smaller.
5.3.4 Information beyond saturation of the mean current
One of the most important implications of saturation is that it restricts the range in which
information about the input rate can be transmitted to the regime of low input firing rates
[Tsodyks and Markram, 1997]. This low-rate range has been bounded by the saturation
frequency νsat (see chapter 2, [Tsodyks and Markram, 1997, Abbott et al., 1997]), which sets
the magnitude of the input rate above which there is little chance to transmit information.
However, under the sub-threshold hypothesis the activity of the output neuron is governed
not only by the mean current µ but also by the variance σw² and the correlation magnitude
α. So a question immediately arises: what is the saturation frequency of the variance σw² and
the correlation magnitude α? If one takes the expression for σw² from eq. 4.5.35 and computes
the saturation frequency ν′sat, the result is⁷
⁷The computation of ν′sat was performed in the following way: σw² was expanded around its asymptotic
value σ²lim up to first order in 1/ν. Then ν′sat is defined as the frequency at which the first-order
correction equals the zero-order term; if ν > ν′sat the leading term is larger than any other term. Although
the saturation frequency of the mean, νsat, derived in chapter 2 (eqs. 2.3.5 and 2.3.10), was
defined in a different, simpler way, both definitions coincide.
[Figure 5.14 here: Response of a LIF neuron for different Ms (M·J = const.)]
Figure 5.14: Numerical results and theoretical prediction of the non-monotonic response of
a LIF neuron for examples of input current with different Ms, while MJ is held constant.
The input current is taken from fig. 5.8. Top plot: output rate νout as a function of the input rate
ν. Solid lines are the theoretical prediction with the correction due to the exponential correla-
tion introduced by depression. Dashed lines are the theoretical predictions neglecting
the effect of the exponential correlation. Symbols are the simulation results. Second plot:
(the same as in fig. 5.8, plotted here to visualize the input together with the output)
superposition of the mean current µ (magenta line, right axis) and the current variance σw²
(left axis). Bottom plot: superposition of the correlation magnitude α (left axis) and the
correlation time constant τc (magenta line, right axis). Colors in the top plot inset apply to
all plots.
ν′sat = (1 / Uτv) · [ 1 + U(M − 1)/(1 − U/2) + Uρ(C − 1)M/(1 − Uρ/2) ]     (5.3.8)
This formula shows that if the input is de-correlated (ρ = 0) and the connections are
mono-synaptic (M = 1), the saturation frequency of the mean current, νsat, and the
saturation frequency of the variance, ν′sat, coincide at 1/(Uτv). However, if the number of
contacts is bigger than one, or the inputs are synchronized, ν′sat > νsat. This inequality only
tells us that the variance σw² approaches its asymptotic value more slowly than the
mean. Of course, this is not enough to guarantee that the response of the neuron depends on
the input ν when ν ≫ νsat. However, figure 5.9 shows that this is the case for some values
of M (and consequently of ρ), e.g. M = 150. Comparing the variation of νout with that of
µ, σw² and α at high rates (∼ 50-100 Hz), we can conclude that a significant
variation of the output occurs while the mean current µ is almost constant: information
is being transmitted, i) first, by means of the variance, which is still varying, as we can see
in the third plot of the figure, and ii) second, by means of a change in the negative exponential
correlation α, whose convergence rate is much slower than those of µ and σw² (see bottom
plot). Therefore, the second order statistics of the input may enlarge the frequency range in
which information about the rate can be transmitted through depressing synapses.
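Equation 5.3.8 is straightforward to evaluate. The short sketch below (our own helper, with a hypothetical name) implements it and recovers the limits discussed in the text: for ρ = 0 and M = 1 it reduces to νsat = 1/(Uτv), and it grows with either the number of contacts M or the synchrony ρ.

```python
def nu_sat_variance(U, tau_v, M=1, rho=0.0, C=1):
    """Saturation frequency nu'_sat of the current variance, eq. (5.3.8):
    nu'_sat = (1/(U*tau_v)) * (1 + U*(M-1)/(1-U/2) + U*rho*(C-1)*M/(1-U*rho/2)).
    U: release probability, tau_v: recovery time (s), M: contacts per fiber,
    rho: zero-lag cross-correlation, C: number of pre-synaptic neurons."""
    return (1.0 / (U * tau_v)) * (1.0
            + U * (M - 1) / (1.0 - U / 2.0)
            + U * rho * (C - 1) * M / (1.0 - U * rho / 2.0))
```

For the parameters of figure 5.9 (U = 1, τv = 0.5 s), the mono-synaptic case gives ν′sat = νsat = 2 Hz, whereas M = 150 pushes ν′sat to 598 Hz, which is why σw² keeps carrying rate information far beyond the saturation of the mean.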
5.4 An interesting experiment
One of the main results of this chapter is that synaptic short-term depression can filter
out cross-correlations if the input rate ν is large enough to provoke the saturation of the
recovery machinery. This result makes a simple and direct prediction which can be tested
in a simple experiment. First, we need to choose a neural system in which synaptic short-
term depression has been observed. The experiment has to be performed in a slice, where
we need to record from an individual neuron under voltage clamp. With this configuration
we could register the current entering the cell. At the same time, we should stimulate
extracellularly a bundle of fibers which are known to project onto the patched cell (this
is particularly easy in the hippocampus, for example, where one can ablate the CA3 area
from the slice, stimulate the Schaffer collateral fibers and record from a neuron in CA1).
The stimulation would be a train of Poisson electrical pulses which would excite a large
pool of proximal fibers, creating a set of Poisson spike trains impinging on the cell in a very
synchronous manner. Thus, synchrony is present because of the nature of the stimulation,
that is, because all the afferent activity is generated by the same external pulses. Then we
would record the afferent current for different stimulation frequencies, R. With those data,
we would compute the mean and the variance of the incoming current as a function of R.
The prediction would then be the following: the mean of the current has to be a monotonic
function of R that saturates to a certain value. On the contrary, the variance of the current
will show a non-monotonic behavior with the stimulation frequency R. In particular, for
high enough rates, when the mean current is saturating at its upper bound, the variance should
decrease as R is increased. We could go a step further and, in the case where this afferent
current had an average below threshold (this could be adjusted by diminishing the amplitude
of the extracellular stimulus so that a smaller number of fibers is excited), register the spiking
activity of this neuron. In that case, we should observe a non-monotonic behavior of the firing
rate νout as the stimulation frequency R is increased, demonstrating that the signal R is being
transmitted by the variance of the current, which decreases as a function of R. Because high
stimulation frequencies are necessary to reach the saturating regime, one should take care of
possible adaptation occurring in the afferent axons or in the target cell.
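The predicted saturation of the mean follows from the steady state of the vesicle-depletion model of chapter 2: under Poisson stimulation at rate R, the occupancy of a contact settles at 1/(1 + U R τv), so the mean release rate per contact is U R/(1 + U R τv), growing monotonically and saturating at 1/τv. The sketch below is our own illustrative helper encoding this (the formula is the standard stationary solution of the single-vesicle model, assumed here to match the chapter 2 model).

```python
def mean_release_rate(R, U=0.5, tau_v=0.5):
    """Stationary release rate (Hz) of one depressing contact driven by a
    Poisson train at rate R (Hz): vesicle occupancy 1/(1 + U*R*tau_v)
    times the release attempt rate U*R. Saturates at 1/tau_v."""
    return U * R / (1.0 + U * R * tau_v)
```

Note that at the saturation frequency R = 1/(Uτv) the release rate already equals half its asymptotic value 1/τv; above that point the mean current carries little information about R, while, as argued above, the variance still does.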
5.5 Conclusions
In this chapter we have explored the impact of short-term synaptic depression on the
output rate of a neuron when the EPSCs created by the afferent spikes have a certain degree of
temporal synchrony. This spatial correlation among the release times was introduced by two
equivalent means: i) by generalizing the mono-synaptic connection, in which a pre-synaptic
neuron makes only a single synaptic contact with the target neuron, to a model in which
the pre-synaptic neurons make a number M of contacts, where M can be any integer;
ii) by means of a cross-correlation at zero time-lag among the spike times of different cells.
Performing the study analytically first allowed us to extract deeper conclusions and to better
understand the implications of the model. For example, we concluded that two different
features of the input, synchrony and multiple contacts, happen to give a qualitatively
equivalent description of the afferent current if the input spike trains have Poisson statistics:
they both effectively increase the magnitude of the fluctuations of the current, σw², and the
magnitude of the negative exponential correlations introduced by the synaptic recovery of
vesicles, α. This enhancement of the current variance σw² was in agreement with previous
work [Salinas and Sejnowski, 2000, Moreno et al., 2002]. The new effects exposed here
are the result of a non-trivial interaction between the positive correlations and synaptic
depression. They can be summarized as follows:
1. Saturation. The most noticeable effect of short-term depression is that it imposes
a saturation of the mean current µ, i.e., as the input rate increases, the mean current
tends asymptotically to a finite value µlim. Many implications of this effect have
been discussed previously [Tsodyks and Markram, 1997, Abbott et al., 1997]. In the
present work we have been interested in the idea that this saturation might lead to a
sub-threshold mean current (eq. 5.3.1). Thus, given a population of neurons, if its
size is not very big, or its connections are not very strong, the mean current it
projects to other neurons will be sub-threshold, regardless of the pre-synaptic firing rate.
A subthreshold regime has commonly been proposed under the excitation-inhibition
balance hypothesis [Shadlen and Newsome, 1994, van Vreeswijk and Sompolinsky,
1996, Tsodyks and Sejnowski, 1995, Amit and Brunel, 1997, Renart, 2000]. In this
scenario, both the excitatory and the inhibitory activity of the network are closely
balanced, so that the fluctuations of the current acquire maximal importance. We are
not proposing that synaptic depression is an equivalent mechanism to the balanced
current, because the two have very different properties. We would just like to
remark that both share the important common feature that the mean current remains
sub-threshold, allowing the variance to play a central role in driving the neuron's re-
sponse. The interesting issue of how this important constraint of saturation alters the
dynamics of a network, for instance during persistent activity [Wang, 2001], is the aim
of our future work.
2. Depression, a filter of spatial correlations. Since we have explored examples in which
the current was confined to be sub-threshold, the scaling up of the fluctuations due to
positive input correlations had a big impact, increasing the output rate. However, the
most interesting feature found was that this positive correlation was wiped out by the
vesicle dynamics as soon as the input rate became large enough. In the first chapter
we repeated many times the idea that the release statistics become Poisson when
the input rate goes beyond saturation, no matter what the input statistics were. This
happens because, in saturation, the synaptic response statistics are dictated more by
the recovery of the vesicle than by the input statistics. In the present chapter we have
simply taken that idea a little further to realize that, since the recovery of vesicles is
an independent process at each synaptic bouton, saturation must also impose a spatial
de-correlation. And this is exactly what happens when a cross-correlated input crosses
a population of depressing synapses at a very high firing rate: the cross-correlations
are filtered away, implying that the enhancement of the fluctuations due to synchrony
is eliminated, resulting in a non-monotonic behavior of the current variance σw².
3. Information transmission at high rates. We have shown how the output in the sub-
threshold regime can be tuned by the input fluctuations. Signaling by means of the
current variance is not a new idea, and it has been shown to have important advantages
over standard transmission by means of the mean current, such as a much
faster response by the output neuron [Silberberg et al., 2002, Moreno et al., 2002]. In
this work we have proven that the variables σw² and α converge to their asymptotic
values much more slowly than the mean current µ. This result suggests that
transmission of information may occur at high rates at which µ does not change,
but α and σw² are still modulated by the input rate.
4. Non-monotonic response function. The response function of a LIF neuron whose
input is positively correlated and whose synapses are depressing can be non-monotonic
if µlim τm < θ, that is, if the mean current is always subthreshold. The modulation of
σw² can be detected by the output neuron even when µ > θ, but the decrease is always
much more subtle than in the subthreshold regime (data not shown). We have shown,
however, that whenever the maximum is prominent, it is always located at low input
rates (< 10 Hz). This implies that this simple effect cannot be used in a straightforward
manner to build a response function tuned to a preferred input rate. Nevertheless,
the combination of a decreasing transfer function with an increasing one may give
rise to a peaked response function at a subsequent stage of processing. Moreover,
this atypical decreasing behavior prompts the question of whether single neurons,
with the help of dynamical synapses, may perform more complex computations than
simple integration. Here we have demonstrated that a simple combination of positive
correlations plus depression gives, in a stationary situation, a complex behavior of the
output; but the addition of more realistic elements, like auto-correlations among spikes
within a single spike train, facilitation, or the more interesting question of the response
to a time-varying input, may permit a single neuron to perform non-trivial tasks, which
should be studied in future work.
5. Cross-correlations vs auto-correlations. When synaptic dynamics are not considered,
the effect of positive cross- or auto-correlations in the input is qualitatively the same:
they rescale the fluctuations of the afferent current [Moreno et al., 2002]. However,
in this thesis we have shown that this is not the case when STD is considered. In the
first two chapters we worked out the idea that STD can transform the auto-correlations
by eliminating redundant spikes. Choosing adequate values of the parameters,
this can be achieved in an efficient way in terms of information transmission. Be-
sides, auto-correlations modify the synaptic transfer function by boosting the satura-
tion frequency. Cross-correlations, on the other hand, do not alter this function because
their effect does not appear in the first order statistics of the current. Moreover, cross-
correlations are filtered by the depressing synapses when the spike trains saturate them.
Although considering STD when integrating a current with cross- or auto-correlations
can lead to de-correlation in both cases, the mechanism is different in each case.
Appendix A
The exponentially correlated input
In this appendix we will show that any renewal process with exponential correlations has an ISI distribution ρisi(t) equal to the linear sum of two exponentials (see eq. 2.2.3).
Our starting point is the assumption that the input spikes are modeled as renewal processes with exponential auto-correlations among the spike times. This is formalized by setting the correlation function C(t) equal to

C(t) = ν δ(t) + (Aν/τc) e^{−t/τc} + ν²    (A.0.1)
Notice that the un-connected version of the two-point correlation function has been used. The parameter A measures the area under the exponential (in units of the rate ν), while τc is its decay time constant. ν is the input spike rate, defined as the inverse of the mean inter-spike interval: ν ≡ 1/⟨ISI⟩. Now, what is the physical meaning of the area under the exponential? Does it depend on the input parameters ν and τc, or is it an independent degree of freedom? This question has a simple answer if one recalls the relation between the correlation function and the variability of the spike count (see section 4.5.2.4). If we define the random variable N(T) as the number of spikes in a time window T, its variance Var[N(T)] is related to the correlation function by means of (see e.g. [Gabbiani and Koch, 1998])

Var[N(T)] = ∫₀ᵀ ∫₀ᵀ (C(t − t′) − ν²) dt dt′    (A.0.2)
At the same time we know that the mean ofN(T ) is νT . The Fano factor of the spike count
is defined as the ratio between its variance and its mean
FT ≡V ar[N(t)]
< N(T ) >(A.0.3)
Thus, computing the integrals in eq.A.0.2, substituting them in the expression ofFT and
taking the limit of a large time windowT →∞, one obtains
F = 2A + 1 (A.0.4)
In the large-T limit, for a renewal process, the Fano factor is related to the coefficient of variation of the ISIs, CV_isi, defined as the ratio between the standard deviation and the mean of the ISIs:

CV_isi ≡ √(⟨ISI²⟩ − ⟨ISI⟩²) / ⟨ISI⟩    (A.0.5)

This relation reads [Gabbiani and Koch, 1998]

CV²_isi ≃ F_T    (A.0.6)

Finally, we can express the area A under the exponential of the correlation function as a function of CV_isi, which is simply

A = (CV²_isi − 1) / 2    (A.0.7)
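The chain F ≃ CV² and A = (F − 1)/2 can be checked numerically. The sketch below is not part of the original text and its parameters are arbitrary: it estimates the Fano factor of a gamma renewal process (shape k, for which CV² = 1/k exactly) over large counting windows and recovers the area A = (CV² − 1)/2.

```python
import random
import statistics

# Monte Carlo check of F ~= CV^2 for a renewal process (large windows).
# Illustrative parameters: gamma-distributed ISIs with shape k = 4 give
# CV^2 = 1/4, so the area A = (CV^2 - 1)/2 should be close to -3/8.
random.seed(0)
k, rate = 4, 40.0            # gamma shape and rate; mean ISI = k/rate
T, n_windows = 50.0, 400     # window length and number of windows

counts = []
for _ in range(n_windows):
    t, n = 0.0, 0
    while True:
        t += random.gammavariate(k, 1.0 / rate)   # draw one ISI
        if t > T:
            break
        n += 1
    counts.append(n)

fano = statistics.variance(counts) / statistics.mean(counts)
cv2 = 1.0 / k                 # exact CV^2 of a gamma-k renewal process
A_est = (fano - 1.0) / 2.0    # estimate of the correlation area A
```

For a negatively correlated train (CV < 1) the estimated A comes out negative, as the constraint A.0.27 below anticipates.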
A direct relationship can be established between the correlation function C(t) and the p.d.f. of the ISIs, ρisi(t). If we assume that ρisi(t) is defined to be zero for t < 0, the relation can be written as follows:

C(t) = ν ( δ(t) + ρisi(t) + ∫₀ᵗ dx ρisi(x) ρisi(t − x) + ∫₀ᵗ dx ∫₀ᵗ dy ρisi(x) ρisi(y) ρisi(t − x − y) + … )    (A.0.8)

This infinite series represents all the possible ways of finding two spikes at times zero and t, namely: ν δ(t) is the joint probability density of finding a spike at time zero and the same spike at time t. That is why the distribution δ(t) is centered at time zero, showing that if the second time t is zero the probability is one, and otherwise it is null. The second term, ν ρisi(t), represents the joint probability density of finding a spike at time zero and another one at time t with zero spikes between them. The third term stands for the joint p.d.f. of having a spike at time zero and another one at time t with exactly one spike between them, and so on. Each of these terms is the convolution of n functions ρisi(t) (n = 0, 1, 2, …), so that the previous formula can be expressed as

C(t) = ν δ(t) + ν Σ_{n=1}^{∞} (∗ρisi(t))ⁿ    (A.0.9)
where the asterisk ∗ denotes the convolution operation, and (∗ρisi(t))ⁿ the convolution of n functions ρisi(t). To perform this calculation explicitly we resort to a well-known property of the convolution operation, namely that the Laplace transform [Gradshtein et al., 1980] of a convolution of functions is the product of the Laplace transforms of those functions. We therefore take the Laplace transform of the previous equation, obtaining:
C(s) = ν + ν Σ_{n=1}^{∞} (ρisi(s))ⁿ = ν Σ_{n=0}^{∞} (ρisi(s))ⁿ    (A.0.10)

The last sum is simply a geometric series, which can be explicitly summed, resulting in

C(s) = ν / (1 − ρisi(s))    (A.0.11)
Now, independently of this, we can compute the Laplace transform of C(t) (defined in equation A.0.1), resulting in

C(s) = ν + Aν/(1 + τc s) + ν²/s    (A.0.12)

We can then substitute this expression into eq. A.0.11 and, after some algebra, we obtain an expression for the Laplace transform of the ISI p.d.f.:

ρisi(s) = (s(Aν + ν²τc) + ν²) / (s²ντc + s(Aν + ν²τc + ν) + ν²)    (A.0.13)
Denoting by β1 and β2 the roots of the denominator of this fraction, we can rewrite ρisi(s) as

ρisi(s) = β1(1 − ε)/(β1 + s) + β2 ε/(β2 + s)    (A.0.14)

where the three parameters [β1, β2, ε] are complicated functions of the input parameters [ν, τc, CV_isi], namely

ε = (1/2) · (2τcν + CV² + 1 − √λ)(2τcν + CV² − 3 − √λ) / [ 4τc²ν² + 4τcν(CV² − 3) − 2τcν√λ + (CV² + 1)² − √λ(CV² + 1) ]    (A.0.15)

β1 = 4ν / (2τcν + CV² + 1 − √λ)    (A.0.16)

β2 = (1/4) (2τcν + CV² + 1 − √λ) / τc    (A.0.17)

where the auxiliary variable λ is

λ ≡ 4τc²ν² + 4τcν CV² − 12τcν + (CV² + 1)²    (A.0.18)

The final step is to take the inverse Laplace transform of ρisi(s) given in eq. A.0.14. We obtain the announced result:

ρisi(t) = (1 − ε)β1 e^{−β1 t} + ε β2 e^{−β2 t} ,  t > 0    (A.0.19)
This p.d.f. displays a symmetry: it remains invariant under the transformation β1 ↔ β2, ε ↔ 1 − ε. Because of this, the mapping of eqs. A.0.15-A.0.18 is only one of the two possible variable choices. The inverse mapping, however, has no such ambiguity, and it reads

ν = β1β2 / ((1 − ε)β2 + εβ1)    (A.0.20)

CV² = 2((1 − ε)β2² + εβ1²) / ((1 − ε)β2 + εβ1)² − 1    (A.0.21)

τc = 1 / ((1 − ε)β2 + εβ1)    (A.0.22)
But are the three parameters [ν, τc, CV_isi] independent? In other words, does ρisi(t) represent a well-defined probability distribution function for any choice of [ν, τc, CV_isi]? The answer is no. The renewal character of the process constrains the space of input parameters, because the function ρisi(t) must satisfy a few conditions in order to be a well-defined p.d.f. Thus, we must find the restrictions on [ν, τc, CV_isi] by mapping the allowed values of [β1, β2, ε], which in turn are obtained by formulating the conditions that ρisi(t) must hold. These conditions are simply: i) to be integrable, ii) to have norm one, and iii) to be non-negative in the range t > 0, that is, ρisi(t) ≥ 0 for all t > 0. The first and second conditions hold by construction by just ensuring that the decay rates of the exponentials are non-negative: β1, β2 ≥ 0. The condition ensuring that ρisi(t) is always positive or zero can be broken up into two equivalent statements: a) that ρisi(t) is non-negative at the origin, ρisi(t = 0) ≥ 0, and b) that the tail of the function is also positive¹. The second statement is equivalent to saying that the coefficient of the slowest exponential (the one with the smallest β) is positive. Thus, let us assume without any loss of generality (due to the symmetry in ρisi(t)) that β2 > β1. These two conditions are then formalized by the following inequalities:
(1 − ε)β1 + εβ2 ≥ 0    (A.0.23)

(1 − ε) > 0    (A.0.24)

After some manipulations, both inequalities can be summarized as

−β1/(β2 − β1) < ε < 1   if β2 > β1    (A.0.25)

If we now take these inequalities and use the transformations given by equations A.0.20-A.0.22, we arrive at

ν, τc, CV_isi ≥ 0    (A.0.26)
¹It can be rigorously shown that if a linear combination of two exponentials takes a non-negative value at the origin and converges to zero from above, then it is positive for all positive t.
τc > (1 − CV²)/(2ν)   if CV < 1    (A.0.27)

Thus, for positively correlated inputs (CV_isi > 1) any positive value of ν and τc is allowed, whereas for negative correlations (CV_isi < 1) the correlation time constant must exceed a certain fraction of the mean ISI.
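The inverse mapping can be verified by direct sampling. The following sketch (not part of the original derivation; the parameter values are arbitrary illustrations satisfying A.0.25) draws ISIs from the two-exponential mixture A.0.19 and compares the sample rate and CV² with eqs. A.0.20 and A.0.21.

```python
import random

# Numerical check of the inverse mapping (A.0.20)-(A.0.21): draw ISIs from
# rho_isi(t) = (1-eps)*b1*exp(-b1 t) + eps*b2*exp(-b2 t) and compare the
# sample statistics with the formulas. Illustrative parameters.
random.seed(0)
b1, b2, eps = 5.0, 50.0, 0.3
n = 200_000

# mixture sampling: with prob. eps use the fast exponential, else the slow one
isis = [random.expovariate(b2 if random.random() < eps else b1)
        for _ in range(n)]

mean_isi = sum(isis) / n
var_isi = sum((x - mean_isi) ** 2 for x in isis) / n
nu_mc, cv2_mc = 1.0 / mean_isi, var_isi / mean_isi ** 2

# predictions from (A.0.20) and (A.0.21)
denom = (1 - eps) * b2 + eps * b1
nu_th = b1 * b2 / denom
cv2_th = 2 * ((1 - eps) * b2 ** 2 + eps * b1 ** 2) / denom ** 2 - 1
```

With these values CV² > 1, i.e., the sampled train corresponds to a positively correlated input.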
Appendix B
The computation of ρiri(t|ν) for a general renewal input process

In this appendix we will show how to obtain the expression of the p.d.f. of the inter-response intervals (IRIs), ρiri(t|ν), given a general renewal input ρisi(t|ν), for the model with a single synaptic contact described in section 2.2.1.
This synaptic model is completely described by two functions:

i) The release probability (we adopt the univesicular release hypothesis, so only one vesicle can be released at a time, see section 1.2.4) when the ready-for-release pool (RRP) has N vesicles:

pr(N) = U Θ(N − 1) ,  N = 0 … N0    (B.0.1)

which is the step-like function introduced in section 2.2.1, where N0 denotes the maximum size of the RRP. Notice as well that we have already substituted the initial parameter Nth (see 2.2.1) by one. As explained in section 2.2.3, considering Nth different from one is only relevant during transient states. In this appendix we consider the stationary situation and can therefore set Nth = 1.
ii) The probability of refilling n docking sites during a time window ∆ when N were empty:

Prec(n, ∆|N) = (N choose n) (1 − e^{−∆/τv})ⁿ (e^{−∆/τv})^{N−n}    (B.0.2)

This expression relies on the assumption that the refilling of each empty docking site occurs independently at a constant rate 1/τv. It is a binomial distribution of parameters N (the number of empty sites) and 1 − e^{−∆/τv} (the refilling probability of a single docking site).
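The binomial form of Prec is easy to check with a short simulation. In the sketch below (parameter values are illustrative, not from the text), each of N empty sites refills after an independent exponential time of mean τv, and the number refilled within ∆ is compared with the binomial predictions for the mean and for the probability of no refill.

```python
import random
import math

# Sanity check of eq. (B.0.2): with each of N empty docking sites refilling
# independently at rate 1/tau_v, the number refilled within a window Delta
# is binomial with p = 1 - exp(-Delta/tau_v). Illustrative parameters.
random.seed(0)
N, tau_v, delta, trials = 5, 0.8, 0.4, 100_000
p = 1.0 - math.exp(-delta / tau_v)

refilled = [sum(random.expovariate(1.0 / tau_v) <= delta for _ in range(N))
            for _ in range(trials)]

mean_mc = sum(refilled) / trials          # should approach N * p
p0_mc = refilled.count(0) / trials        # P(n = 0) should approach (1-p)^N
```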
Figure B.1: Diagram showing the nomenclature and the logic of the terms in which ρiri(t) is expanded.
We now need to compute the probabilities ℘N(t) of having N vesicles ready in the RRP at time t. We have already described, in chapter 2 section 2.2.3, a method to obtain these quantities for a general renewal process in the stationary state. We will therefore assume that the stationary values ℘ss_N (N = 0 … N0) are known.

To compute ρiri(∆|ν), we expand it in the following way:

ρiri(∆|ν) = ρ(1)iri(∆|ν) + ρ(2)iri(∆|ν) + ρ(3)iri(∆|ν) + …    (B.0.3)
where ρ(i)iri(∆|ν) is the p.d.f. of having an IRI of length ∆ composed of i consecutive ISIs. Figure B.1 shows a diagram illustrating the meaning of the first three terms and the variables used in their calculation. The first one reads

ρ(1)iri(∆|ν) = Σ_{N=0}^{N0} ℘̂ss_N Σ_{n1=0}^{N0−N+1} Prec(n1, ∆|N0 − N + 1) ρisi(∆|ν) pr(N − 1 + n1)
The structure of this first term may give us some intuition about the others. It can be described as the probability of finding a second release at time ∆, given that there was one at time zero and that no spike failed to trigger a response within the window ∆. Term by term it can be explained as:

i) Σ_{N=0}^{N0} ℘̂ss_N : the summed probabilities of having N vesicles upon the arrival of a spike at time zero, given that it elicits a release. The hat above ℘̂ss_N stands for the conditioning,
and it can be solved using the Bayes rule:
℘̂ss_N = ℘ss_N pr(N) / Σ_{k=0}^{N0} ℘ss_k pr(k)    (B.0.4)
ii) Σ_{n1=0}^{N0−N+1} Prec(n1, ∆|N0 − N + 1): the summed probabilities that n1 empty sites are recovered in the time window ∆, when N0 − N + 1 were empty at time 0⁺.

iii) ρisi(∆|ν): the probability that a new spike, after the one which arrived at time zero, comes at time ∆.

iv) pr(N − 1 + n1): the probability that a release occurs, given that at time ∆ there were N − 1 + n1 vesicles in the RRP (there were N at time 0⁻, N − 1 at time 0⁺ and N − 1 + n1 at time ∆).
Let us now write the second term of the expansion to see how these summations are nested one in another:

ρ(2)iri(∆|ν) = Σ_{N=0}^{N0} ℘̂ss_N ∫₀^∆ dt1 ρisi(t1|ν) Σ_{n1=0}^{N0−N+1} Prec(n1, t1|N0 − N + 1) × (1 − pr(N − 1 + n1)) ρisi(∆ − t1|ν) × Σ_{n2=0}^{N0−N+1−n1} Prec(n2, ∆ − t1|N0 − N + 1 − n1) pr(N − 1 + n1 + n2)
The structure of this term is similar to that of ρ(1)iri(∆|ν). However, we now need to consider the occurrence of an unsuccessful spike at a time t1 < ∆, and we must integrate over this time. Thus, we have added the probability that the spike at t1 does not trigger any release: (1 − pr(N − 1 + n1)). The recovery of vesicles now has to be handled in two stages: n1 empty sites are occupied before t1, and n2 are refilled between t1 and ∆.
These terms quickly become complicated as the number of ISIs in ∆ increases. It is therefore not feasible to sum the complete series before substituting the functions pr(N) and Prec(n, ∆|N) defined by the synaptic model. Other simple models of release pr(N) have been tested at this point. For instance, a linear release function pr(N) = U N allows the calculation of two or three further terms, but we could not sum the complete series. An exponential release function makes things even harder. Thus, we introduce our synaptic model by substituting the functions pr(N) and Prec(n, ∆|N) of equations B.0.1 and B.0.2. We need to recall here the strategy adopted to perform the sum of expansion B.0.3 (as introduced in section 2.2.3): we divide the synaptic channel into two stages, namely i) a purely random channel which decimates the input with probability U, giving a decimated version of it, ρisi(t) → ρd_isi(t); and ii) the vesicle dynamics, which takes the diluted input and gives the synaptic responses: ρd_isi(t) → ρiri(t). This division of the problem into two steps simplifies the calculation enormously since now a spike may fail to provoke the release of a vesicle only if it finds the RRP empty. This important point allows us to reduce the number of possible trajectories the system may follow from time 0 to ∆.
Let us start with the term ρ(1)iri(∆|ν). Substituting pr(N) and Prec(n, ∆|N) into the expression for ρ(1)iri(∆|ν) above, we obtain

ρ(1)iri(∆|ν) = ρd_isi(∆|ν) Σ_{N=0}^{N0} ℘̂ss_N Σ_{n1=0}^{N0−N+1} (N0−N+1 choose n1) (1 − e^{−∆/τv})^{n1} (e^{−∆/τv})^{N0−N+1−n1} Θ(N + n1 − 2)    (B.0.5)
First, the probability ℘̂ss_{N=0} equals zero because, if at time zero the RRP is empty, there is no way to produce the first release (and the conditioning states that it happened). The second thing to notice is that if at time zero N > 1, then the Heaviside function is simply one for all n1, and the recovery probabilities sum up to one. When at time zero N = 1, only the case in which no vesicle is recovered (n1 = 0) vanishes because of Θ(N + n1 − 2), so we can sum the recovery probabilities up to one and at the end subtract the probability that n1 = 0:

ρ(1)iri(∆|ν) = ρd_isi(∆|ν) ( Σ_{N=2}^{N0} ℘̂ss_N + ℘̂ss_1 (1 − e^{−N0∆/τv}) ) = ρd_isi(∆|ν) (1 − ℘̂ss_1 e^{−N0∆/τv})    (B.0.6)
In the rest of the terms ρ(i)iri (i > 1) only the case N = 1 survives because, since there have to be failures, we need the RRP to be empty at the intermediate times ti. After some manipulation the second term reads

ρ(2)iri(∆|ν) = ∫₀^∆ dt1 ρd_isi(t1|ν) ρd_isi(∆ − t1|ν) ℘̂ss_1 e^{−N0 t1/τv} (1 − e^{−N0(∆−t1)/τv})
Following the same methodology, and denoting the initial time as t0 = 0, we can write the expression of the j-th term:

ρ(j)iri(∆|ν) = ∫₀^∆ Π_{i=1}^{j−1} ( dti ρd_isi(ti − ti−1|ν) e^{−N0(ti−ti−1)/τv} ) × ρd_isi(∆ − t_{j−1}|ν) ℘̂ss_1 (1 − e^{−N0(∆−t_{j−1})/τv})    (B.0.7)
In order to perform the integrals we observe that each term is composed of the convolution of j functions ρd_isi(t), and make use of the well-known property of the Laplace transform [Gradshtein et al., 1980] which says that the transform of the convolution of two functions is the product of their transforms, that is,

L[ ∫₀^∆ ρ(t) ρ(∆ − t) dt ] = L[ρ(t)] L[ρ(t)] = ρ(s)²    (B.0.8)

We also need the property

L[ ρ(t) e^{−t/τ} ] = ρ(s + 1/τ)    (B.0.9)

which can easily be proven from the definition of the Laplace transform. With these two properties we can now transform each term ρ(j)iri(∆|ν), obtaining:
ρ(1)iri(s) = ρd_isi(s) − ℘̂ss_1 ρd_isi(s + N0/τv)    (B.0.10)

ρ(2)iri(s) = ℘̂ss_1 [ ρd_isi(s) ρd_isi(s + N0/τv) − (ρd_isi(s + N0/τv))² ]    (B.0.11)

...

ρ(j)iri(s) = ℘̂ss_1 [ ρd_isi(s) (ρd_isi(s + N0/τv))^{j−1} − (ρd_isi(s + N0/τv))^{j} ]    (B.0.12)
Finally, we can sum the Laplace transform of the expansion:

ρiri(s) = Σ_{j=1}^{∞} ρ(j)iri(s) = ρd_isi(s) − ℘̂ss_1 (1 − ρd_isi(s)) Σ_{j=1}^{∞} (ρd_isi(s + N0/τv))^{j} = ρd_isi(s) − ℘̂ss_1 (1 − ρd_isi(s)) ρd_isi(s + N0/τv) / (1 − ρd_isi(s + N0/τv))    (B.0.13)
The last step has been performed by observing that the last sum is simply a geometric series. Now we can express the conditioned ℘̂ss_1 in terms of the non-conditioned probabilities (see eq. 2.2.19):

℘̂ss_{N=1} ≡ ℘ss(1) / (℘ss(1) + ℘ss(2) + … + ℘ss(N0))    (B.0.14)

We have thus reached the expression given in equation 2.2.18. It gives the Laplace transform of the p.d.f. of the IRIs, ρiri, for a general renewal input, in terms of the transform of the p.d.f. of the diluted input version, ρd_isi(s). The link between this decimated function and the original ρisi(t) was derived in section 2.2.3 and is given by equation 2.2.21.
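The first stage of this two-step decomposition, the purely random channel, is easy to illustrate numerically. The sketch below (not part of the original text; parameters are arbitrary) thins a Poisson train with probability U and checks that the diluted train is again Poisson with rate νU (mean ISI 1/(νU) and CV close to one).

```python
import random
import statistics

# Illustration of the "purely random channel" stage: each input spike is
# kept with probability U. For a Poisson input of rate nu, the decimated
# train should again be Poisson with rate nu*U. Illustrative parameters.
random.seed(0)
nu, U, n_spikes = 30.0, 0.4, 200_000

t, kept = 0.0, []
for _ in range(n_spikes):
    t += random.expovariate(nu)      # next Poisson input spike
    if random.random() < U:          # spike survives the decimation
        kept.append(t)

d_isis = [b - a for a, b in zip(kept, kept[1:])]
mean_d = statistics.mean(d_isis)
cv_d = statistics.stdev(d_isis) / mean_d
```

For a non-Poisson renewal input the thinned ISI statistics are instead given by equation 2.2.21.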
Appendix C
The calculation of the population distribution D(U, N0)
In this appendix we will derive the distribution of the release probability U and the number of docking sites N0 across a population of synapses. This distribution fulfills two constraints derived from measurements in hippocampal CA3-CA1 synapses, namely: i) the marginal distribution of U follows a Gamma function of order two, Γλ(U) (where Murthy et al. [1997] report λ = 7.4); ii) the population average of U for a fixed N0, ⟨U⟩_{R(U|N0)}, follows an exponential dependence on N0: ⟨U⟩_{R(U|N0)} = 1 − (1 − pv)^{N0} ≃ a N0 (where Murthy et al. [2001] report pv = 0.055 and Hanse and Gustafsson [2001a] report pv ≃ 0.3 − 0.7).
First, the joint distribution D(U, N0) is expressed in the following way:

D(U, N0) = f(N0) Rq(U|N0)    (C.0.1)

Now we make the ansatz that the conditional distribution Rq(U|N0) is a Gamma function of order N0 + 1 and parameter q:

Rq(U|N0) = (q^{N0+1}/N0!) U^{N0} e^{−qU}    (C.0.2)

Thus, the first constraint implies:

Γλ(U) ≡ λ² U e^{−λU} = Σ_{N0=1}^{∞} f(N0) Rq(U|N0)    (C.0.3)
and expanding both sides of the equality in powers of U we obtain:

Σ_{k=0}^{∞} [ (−λ)^{k+2} / k! ] U^{k+1} = Σ_{j=0}^{∞} [ (d^{j}/dU^{j} Σ_{N0=1}^{∞} f(N0) Rq(U|N0))|_{U=0} / j! ] U^{j}    (C.0.4)
Identifying coefficients order by order (making j = k + 1) we have

(−λ)^{k+2} / k! = Σ_{N0=1}^{∞} f(N0) (d^{k+1} Rq(U|N0)/dU^{k+1})|_{U=0} / (k+1)! ,  k = 0, 1, … ∞    (C.0.5)
But the n-th derivative of the function Rq(U|N0) reads

d^{n} Rq(U|N0)/dU^{n} |_{U=0} = (q^{N0+1}/N0!) (n!/(n−N0)!) (−q)^{n−N0}   if N0 ≤ n,  and 0 otherwise    (C.0.6)
Inserting this into eq. C.0.5 we obtain, for each value of k = 0, 1, … ∞, a simple first-order equation with k + 1 distribution coefficients f(N0). These can be obtained in a recurrent manner from:

(−λ)^{k+2} / k! = Σ_{N0=1}^{k+1} f(N0) (−)^{k+1−N0} q^{k+2} / (N0! (k+1−N0)!) ,  k = 0, 1, … ∞    (C.0.7)

which after some manipulation reads

(λ/q)^{k+2} = Σ_{N0=1}^{k+1} [ (−)^{N0−1} k! / (N0! (k+1−N0)!) ] f(N0) ,  k = 0, 1, … ∞    (C.0.8)

and can be written, using the new variable j = k + 1, as

(λ/q)^{j+1} = Σ_{N0=1}^{j} (j choose N0) ((−)^{N0−1}/j) f(N0) ,  j = 1 … ∞    (C.0.9)
We show the first three values of j:

f(1) = (λ/q)²    (j = 1)

f(1) − f(2)/2 = (λ/q)³    (j = 2)

f(1) − f(2) + f(3)/3 = (λ/q)⁴    (j = 3)

...
These equations form an infinite triangular system of linear equations. We can easily obtain the values of f(N0) for N0 = 1, 2, … N0^max, where the cutoff N0^max is chosen in such a way that the probability Σ_{N=N0^max}^{∞} f(N) is smaller than a given tolerance. As can
also be deduced from the system, the probabilities f(N0) are polynomials in (λ/q) of degree N0 + 1.
To obtain the value of q we make use of the second constraint, which yields

⟨U⟩_{R(U|N0)} = ∫₀^∞ dU U Rq(U|N0) = (N0 + 1)/q    (C.0.10)

All that is left to do is to take the fit to the data, ⟨U⟩_{R(U|N0)} = 1 − (1 − pv)^{N0} [Murthy et al., 2001], and set the following equality:

(N0 + 1)/q ≃ 1 − (1 − pv)^{N0}    (C.0.11)
Although this equality seems hard to fulfill, since pv has a very small value (∼ 0.05) the r.h.s. can be expanded up to first order and the slopes of both sides can then be identified. Figure 2.5 shows that, for the experimental values found by Murthy et al. [1997, 2001], the approximation is rather good. Once we have set the value of q, taking a value for λ, we can obtain the probabilities f(N0) for all N0 up to the cutoff N0^max.
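The triangular system can be solved with a few lines of code. The sketch below uses the reported λ = 7.4 and the slope identification q = 1/pv with pv = 0.055; both numerical choices should be treated as illustrative. It implements the recursion C.0.9 and checks that the resulting f(N0) behave as a probability distribution.

```python
from math import comb

# Recursive solution of the triangular system (C.0.9). Illustrative
# parameters: lambda = 7.4 (Murthy et al., 1997) and q = 1/p_v from the
# slope identification of eq. (C.0.11), with p_v = 0.055.
lam, pv = 7.4, 0.055
q = 1.0 / pv
r = lam / q

f = {1: r ** 2}                                    # j = 1 equation
for j in range(2, 16):
    # (lam/q)^(j+1) = sum_{n=1}^{j} C(j,n) (-1)^(n-1)/j * f(n); solve for f(j)
    partial = sum(comb(j, n) * (-1) ** (n - 1) / j * f[n] for n in range(1, j))
    f[j] = (-1) ** (j - 1) * j * (r ** (j + 1) - partial)

total = sum(f.values())    # should be close to 1 for a large enough cutoff
```

The cutoff at N0 = 15 keeps the truncated tail probability below one percent for these parameter values.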
Appendix D
Computation of the output statistics for
the exponentially correlated input when
N0 = 1
D.1 Computation of the p.d.f. of the IRIs ρiri(t)
In this appendix we will compute the final exact expression of the probability distribution function of the inter-response intervals (IRIs) for the particular case of: i) a single docking site, i.e., the maximum size of the ready-for-release pool (RRP) is one (N0 = 1); and ii) an input given by the renewal process with exponential auto-correlation function defined in appendix A. Thus our starting point is the expression of the Laplace transform of the p.d.f. of the IRIs (ρiri) as a function of the distribution of the decimated version of the input, ρd_isi(s), given by equation B.0.13 derived in appendix B, which we now rewrite:

ρiri(s) = ρd_isi(s) − ℘̂ss_1 (1 − ρd_isi(s)) ρd_isi(s + N0/τv) / (1 − ρd_isi(s + N0/τv))    (D.1.1)
We also need equation 2.2.21, which gives ρd_isi(s) in terms of the original input ρisi(t), and which we also rewrite here:

ρd_isi(s) = U ρisi(s) / (1 − (1 − U) ρisi(s))

We now take eq. D.1.1 and set N0 = 1. The probability ℘̂ss_1 naturally becomes one (see eq. 2.2.19). We then take the expression of ρd_isi(s) given by the second of these equations and introduce it into the first. After some simple algebra we obtain an expression for ρiri in terms of the original distribution of input ISIs:

ρiri(s) = U [ρisi(s) − ρisi(s + 1/τv)] / ( [1 − (1 − U) ρisi(s)] [1 − ρisi(s + 1/τv)] )    (D.1.2)
All we need to do now is to take the expression of ρisi(s) derived in appendix A,

ρisi(s) = (s(Aν + ν²τc) + ν²) / (s²ντc + s(Aν + ν²τc + ν) + ν²)    (D.1.3)

and to introduce it into D.1.2. After some manipulations, this yields

ρiri(s) = U ( s²(Aτcτv + ντc²τv) + s(Aτc + ντc(2τv + τc)) + ν(τc + τv) ) / ( (sτv + 1)(sτcτv + τc + τv)(s²τc + s(1 + UA + ντcU) + νU) )    (D.1.4)
We must now invert this transform. This is done by decomposing it into four fractions, each with a single pole in the variable s. Each of the four poles gives the decay time of one exponential in ρiri(∆). After some algebra we arrive at the desired expression, a sum of four exponentials (which we showed in eq. 2.3.6):

ρiri(∆) = (U/τv) ( C1 e^{s1∆} + C2 e^{s2∆} + C3 e^{−∆/τv} + C4 e^{−∆/τ1} )    (D.1.5)
where the coefficients Ci are

Ci = [ si (A/(τcτ2) + ν/τ1 − UE2²/2) + ν/(τcτ1) − E2νU/τc ] / Di    (D.1.6)

Di = si ( −E1 (1/τv + 1/τ1 − E1) + 2/(τvτ1) − 2Uν/τc ) + UνE1/τc − 2 (1/τv + 1/τ1) Uν/τc + E1/(τvτ1)
for i = 1, 2. The other two read
C3 = A τv τc⁻¹ (1 − Uντ1 − UτvA/τc)⁻¹    (D.1.7)

C4 = −ν τv (1 − Uντv − Uτ2/τc)⁻¹    (D.1.8)
The decay time constants of the four exponentials come from the four poles of equation D.1.4. They are the vesicle recovery time τv, the new time constant

τ1 ≡ τvτc/(τc + τv)    (D.1.9)

and the inverses of the two solutions s1, s2 of the equation

s² + s(1/τc + UA/τc + νU) + νU/τc = 0    (D.1.10)

which read

s1 = −E1/2 + (1/2)√(E1² − 4Uν/τc)    (D.1.11)

s2 = −E1/2 − (1/2)√(E1² − 4Uν/τc)    (D.1.12)
Besides, we have defined the following auxiliary variables:

τ2 ≡ τvτc/(τc − τv)    (D.1.13)

E1 ≡ τc⁻¹ + U (A/τc + ν)    (D.1.14)

E2 ≡ A/τc + ν    (D.1.15)

Finally, to show the dependence of all variables on the physical parameters, we write here again the expression of the area A in terms of the coefficient of variation CV_isi (see appendix A):

A = (CV²_isi − 1)/2    (D.1.16)
D.2 Computation of the correlation function of the IRIs
We will now obtain the output autocorrelation function of the synaptic responses. Since for N0 = 1 the train of releases is a renewal process (see section 2.2.3), we can relate the p.d.f. of the IRIs, ρiri(t), to the auto-correlation function of the responses, Cr(t), by means of eq. A.0.11, originally derived for the input spike train but now applied to the output train:

Cr(s) = νr / (1 − ρiri(s))    (D.2.1)

However, to simplify the calculation, we will compute not the correlation Cr(t) itself but the conditional rate of responses, Ĉr(t) (see eq. 2.2.5), which equals the correlation function without the Dirac delta term, normalized by the rate νr. Its Laplace transform is

Ĉr(s) = Cr(s)/νr − 1 = ρiri(s) / (1 − ρiri(s))    (D.2.2)
All we need to do now is substitute equation D.1.4 here. After some algebra, we reach

Ĉr(s) = U Q1/(Q2 Q3)    (D.2.3)

where

Q1 = 2ντc + 2νs²τc²τv + 2As²τcτv + 4ντcτv s + 2ντv + 2Aτc s + 2ντc² s

Q2 = s⁻¹ (sτc + 1)⁻¹

Q3 = 2τc s²τv² + 4τcτv s + 2τv²ντcU s + 2τv²AU s + 2τv² s + 2τv + 2τvντcU + 2τv²νU + 2τc + 2τvAU
The denominator Q2 Q3 has four roots, one of them s = 0. This means that, when inverting the transform, we get a linear combination of three exponentials plus a constant term, namely

Ĉr(t) = K1 e^{−t(1/τv − s1)} + K2 e^{−t(1/τv − s2)} + K3 e^{−t/τc} + K4    (D.2.4)

where the coefficients of the exponentials are

K1 = − (s1τc + 1) s1 (s1τv − s2τv − 1) / [ (s1 − s2)(s1τv − 1)(s1τcτv − τc + τv) ]

K2 = − (s2τc + 1) s2 (s2τv − s1τv − 1) / [ (s2 − s1)(s2τv − 1)(s2τcτv − τc + τv) ]

K3 = (τv − τc)(s1τc + 1)(s2τc + 1) / [ (−τc + τv + s2τcτv)(s1τcτv − τc + τv) ]

K4 = (τc + τv) s1 s2 / [ (s2τv − 1)(s1τv − 1) ]
and s1 and s2 are the same as in eqs. D.1.11-D.1.12. We have thus obtained the expression for the conditional rate of the responses given in eq. 2.3.7. With this expression we are able to compute the rate of releases νr and the coefficient of variation of the IRIs, CViri.
D.3 Release firing rate νr and coefficient of variation CViri
Once we have the IRI distribution and the autocorrelation function of the output train, we can easily calculate the firing rate by merely taking the limit t → ∞ of the conditional rate Ĉr(t), since it has to converge to the non-conditioned rate if things are well defined. Thus, the response rate νr equals the constant coefficient K4 of Ĉr(t). Using the expressions of s1 and s2 in terms of the input parameters (eqs. D.1.11-D.1.12) and simplifying the result, we obtain

νr = νU / ( 1 + τvνU + τvU (CV² − 1)/(2(τv + τc)) )    (D.3.1)

which is the expression given in eq. 2.3.8, introduced in section 2.2.3.
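Equation D.3.1 can be tested by simulation in the Poisson case (CV_isi = 1), where the correlation term vanishes and νr = νU/(1 + τvνU). The sketch below is an illustration with arbitrary parameter values, not part of the original text: it simulates the N0 = 1 contact driven by a Poisson train and compares the measured release rate with the formula.

```python
import random

# Monte Carlo check of eq. (D.3.1) for Poisson input (CV_isi = 1), where
# nu_r = nu*U/(1 + tau_v*nu*U). Illustrative parameters.
random.seed(0)
nu, U, tau_v, T = 50.0, 0.5, 0.5, 2000.0

t, next_ready, releases = 0.0, 0.0, 0
while True:
    t += random.expovariate(nu)           # next input spike (Poisson)
    if t > T:
        break
    # a spike releases with probability U, but only if the vesicle is docked
    if t >= next_ready and random.random() < U:
        releases += 1                     # release; the docking site empties
        next_ready = t + random.expovariate(1.0 / tau_v)

nu_r_mc = releases / T
nu_r_th = nu * U / (1.0 + tau_v * nu * U)
```

Testing the full CV_isi ≠ 1 case would additionally require sampling the two-exponential renewal input of appendix A.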
To calculate the coefficient of variation of the postsynaptic responses we use the relationship established in appendix A between the correlation function of a renewal process and its CV, eqs. A.0.2, A.0.3 and A.0.6. There, it was expressed as [Gabbiani and Koch, 1998]:
(CV²iri − 1)/2 = lim_{T→∞} (1/(2νrT)) ∫₀ᵀ ∫₀ᵀ (C*r(t − t′) − νr²) dt dt′ = ∫₀^∞ (Ĉr(t) − νr) dt = K1 (1/τv − s1)⁻¹ + K2 (1/τv − s2)⁻¹ + K3 τc    (D.3.2)
All we need to do now is to replace the expressions of the coefficients Ki and of the roots si, and do a good deal of algebra. The final expression of the coefficient of variation of the responses reads

CV²iri = H / (τc + τv + τvUντc + τvUA + τv²Uν)²    (D.3.3)

where

H = τv⁴U²ν² + 2τv³U²ν²τc + τv²U²A² + 4τv²U²νAτc + 2UAτv² + τv² + τc² + τv²U²ν²τc² + 2τvU²A²τc + 4τvUAτc + 2τvU²νAτc² + 2τvτc + 2UAτc²

and A is the area under the input exponential autocorrelation, which equals A = (CV²isi − 1)/2.
Appendix E
Computation of the conditioned
probability 〈pv(β|α)〉
In this appendix we will compute the probability that a spike reaching the β-th synaptic terminal finds a vesicle ready for release, given that at the α-th contact, which belongs to the same pre-synaptic neuron, there is one. This probability is averaged over the input ensemble (hence the angle brackets), so that it does not depend on the times of the afferent spikes.

By direct application of the Bayes rule we find that

⟨pv(β|α)⟩ = ⟨pv(α, β)⟩ / ⟨pv(β)⟩    (E.0.1)

where ⟨pv(α, β)⟩ is the joint probability that both contacts have their docking sites occupied, and ⟨pv(β)⟩ is the probability that a single contact (regardless of the index) is recovered (computed in section 4.4.1). So, in order to obtain the conditional probability ⟨pv(β|α)⟩, we need to calculate the joint probability ⟨pv(α, β)⟩ of finding the two contacts filled.
At any time t, the state of the two-contact system is fully described by the three probabilities ℘2(t) ≡ ⟨pv(α, β)⟩, ℘1(t) and ℘0(t). These three functions stand for the probabilities of finding both, only one, or none of the two vesicles ready upon the arrival of a spike at time t. Note that ℘1(t) is not equal to ⟨pv(β)⟩, because the former represents the system with one contact recovered and one empty, whereas the latter considers only a single contact. To derive the expression of all these probabilities, we must establish the system of differential equations which governs their dynamics.
Figure E.1 shows the probability flow diagram of the system of two contacts, like the one shown before to illustrate the single-contact case (fig. 4.2). The state of the system at time t is defined by the number of vesicles available for release, which can be two, one or none¹. We define the transition probabilities T[m → n; t] (n, m = 0, 1, 2) as the probabilities
¹Do not confuse this system with the synaptic model of a single contact with N0 vesicle docking sites described in section 2.2.1. Here we consider a system made up of a group of contacts (two in this calculation), each of them holding one or zero available vesicles. The dynamics of the two systems differ mainly because now more than one vesicle can be released at the same time, whereas in the single-contact model, even if there are N > 1 vesicles ready for release, at most one finally fuses with the membrane.

Figure E.1: Diagram of the temporal evolution of a system composed of two synaptic contacts belonging to the same pre-synaptic neuron. At time t the state of the system can be 2 contacts ready for release with probability ℘2(t), 1 contact ready with probability ℘1(t), or zero contacts ready with probability ℘0(t). A time step dt later, a transition may have occurred: the release of one or two vesicles, or the recovery of one vesicle, depending on the state of the system at time t. (Note: Pi(t) in the figure represents ℘i(t) in the text.)

that the system switches from m vesicles at time t to a new state with n vesicles at time t + dt. Transitions between states occur by means of two processes: the arrival of a spike with subsequent release, or the recovery of a vesicle at any of the contacts. The probability that a spike arrives at both contacts equals νdt, while the probability that a vesicle is recovered depends on the number of empty sites: if both boutons are empty we have two recovery Poisson processes acting at the same time (solid green line in fig. E.1), so the probability is twice as large as when only one contact is empty (dashed green line in fig. E.1). Thus, the
recovery transitions read

T[0 → 1; t] = ℘0(t) · 2dt/τv    (E.0.2)

T[1 → 2; t] = ℘1(t) · dt/τv    (E.0.3)

T[0 → 2; t] = ℘0(t) (dt/τv)²    (E.0.4)
On the other hand, the release transitions (red lines in fig. E.1) depend on the arrival of a spike and on the release probability U:

T[2 → 1; t] = ℘2(t) ν dt 2U(1 − U)    (E.0.5)

T[2 → 0; t] = ℘2(t) ν dt U²    (E.0.6)

T[1 → 0; t] = ℘1(t) ν dt U    (E.0.7)
Naturally, the transitions in which the system state does not change (remaining transitions, blue lines in fig. E.1) are obtained from the normalization requirement that, from any state, some transition (including remaining in the same state) must occur:

T[2 → 2; t] = ℘2(t) − T[2 → 1; t] − T[2 → 0; t]    (E.0.8)

T[1 → 1; t] = ℘1(t) − T[1 → 0; t] − T[1 → 2; t]    (E.0.9)
Thus, the probabilities at time t + dt can be obtained through

℘(n; t + dt) = Σ_m T[m → n; t] = T[n → n; t] + Σ_{m≠n} T[m → n; t] = ℘(n; t) − Σ_{m≠n} T[n → m; t] + Σ_{m≠n} T[m → n; t] ,  n, m = 0, 1, 2    (E.0.10)

Taking the term ℘(n; t) to the left side, dividing by dt and taking the limit dt → 0, one reaches a three-dimensional differential system. But using the normalization condition

℘(0; t) + ℘(1; t) + ℘(2; t) = 1 ,  ∀t    (E.0.11)

we can get rid of one of the three probabilities, reaching the two-dimensional differential system
d℘1(t)/dt = −℘1(t) (νU + 3/τv) + 2℘2(t) [Uν(1 − U) − 1/τv] + 2/τv
                                                                  (E.0.12)
d℘2(t)/dt = ℘1(t)/τv − ℘2(t) Uν(2 − U)
Setting the derivatives equal to zero and solving the linear system, we obtain the stationary
state solution, which reads

℘ss1 = Uντv (2 − U) / {(1 + Uντv) [1 + Uντv (1 − U/2)]}           (E.0.13)

℘ss2 = 1 / {(1 + Uντv) [1 + Uντv (1 − U/2)]}                      (E.0.14)
where the superindex ss stands for stationary state. Finally, we can compute the condi-
tioned probability 〈pv(β|α)〉 of finding a vesicle at the β-th contact when a vesicle has been
observed at the α-th contact, at any time in the stationary state2

〈pv(β|α)〉 = 〈pv(α, β)〉/〈pv(β)〉 = ℘ss2/〈pv(β)〉 = 1/[1 + Uντv (1 − U/2)]       (E.0.15)
2The reader should note that we use observations at any time and at the time of arrival of a spike
interchangeably. This ambiguity is not accidental: for Poisson spike trains, the probability that the vesicle
is docked at an arbitrary time t is the same as at the arrival of a spike. This follows from the well-known
memoryless property of the Poisson process, by which an event can occur with the same constant probability
at any time.
Appendix F
Computation of the conditioned
probability 〈pv(j|i)〉
In this appendix we compute the probability that a spike reaching the j-th synaptic
terminal finds a vesicle ready for release, given that at the i-th contact, which receives
spikes from a different neuron whose activity is cross-correlated with that of the j-th neuron,
there is already one. The calculation follows closely that of Appendix E, where the
same probability was derived for contacts belonging to the same neuron. In this sense, the
present calculation is a generalization of the previous one: setting the synchronization
parameter ρ = 1 makes it equivalent to the situation in which both contacts belong to the
same neuron.
The system is again described by the three probabilities ℘2(t), ℘1(t) and ℘0(t), and the
flux diagram which describes its evolution is the same as in the previous case, fig. E.1. Be-
cause the recovery dynamics have not been altered, the three recovery transitions T[0 → 1; t],
T[1 → 2; t] and T[0 → 2; t] are the same as in equations E.0.2, E.0.3 and E.0.4, respectively.
The release transitions, however, have changed because of the new configuration of the in-
coming fibers (compare figs. 4.6 and 4.5), reading
T[2 → 1; t] = ℘2(t) [2ν dt (1 − ρ) U + ν dt ρ 2U(1 − U)]          (F.0.1)

T[2 → 0; t] = ℘2(t) ν dt ρ U²                                     (F.0.2)

T[1 → 0; t] = ℘1(t) ν dt U                                        (F.0.3)
The most significant change occurs in the transition T[2 → 1; t], in which the first term on
the right-hand side equals the probability that only one spike reaches one of the terminals
in the interval (t, t + dt) and succeeds in triggering a release, while the second term is
the probability that spikes hit the two synapses at the same time but only one of them
triggers a release.
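A quick numerical check of these transitions, with illustrative parameter values: at ρ = 1 the release terms of (F.0.1)–(F.0.3) must collapse onto the same-neuron transitions (E.0.5)–(E.0.7), and for any ρ the total release rate out of state 2 must equal νU(2 − Uρ), the coefficient that appears in the differential system derived next.

```python
# Release parts of transitions (F.0.1)-(F.0.3), per unit of occupation
# probability (i.e. the factors multiplying p2(t) or p1(t)).
# Parameter values are illustrative.
U, nu, dt = 0.5, 40.0, 1e-4

def release_probs(rho):
    t21 = 2*nu*dt*(1 - rho)*U + nu*dt*rho*2*U*(1 - U)   # (F.0.1)
    t20 = nu*dt*rho*U**2                                # (F.0.2)
    t10 = nu*dt*U                                       # (F.0.3)
    return t21, t20, t10

# rho = 1: recovers the same-neuron transitions (E.0.5)-(E.0.7)
t21, t20, t10 = release_probs(1.0)
assert abs(t21 - nu*dt*2*U*(1 - U)) < 1e-12
assert abs(t20 - nu*dt*U**2) < 1e-12

# for any rho, the total release rate out of state 2 is nu*U*(2 - U*rho)
for rho in (0.0, 0.3, 0.7, 1.0):
    t21, t20, _ = release_probs(rho)
    assert abs((t21 + t20)/dt - nu*U*(2 - U*rho)) < 1e-9
print("limits verified")
```

At ρ = 0 the two fibers never fire together and the double-release transition (F.0.2) vanishes, as expected.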
Now the probabilities at time t + dt are obtained from the probabilities a time step earlier
by means of

℘(n; t + dt) = T[n → n; t] + Σm≠n T[m → n; t]                     (F.0.4)
             = ℘(n; t) − Σm≠n T[n → m; t] + Σm≠n T[m → n; t]

and, reorganizing these expressions, taking the limit dt → 0 and making use of the normaliza-
tion condition eq. E.0.11, one obtains the following system of differential equations
d℘1(t)/dt = −℘1(t) (νU + 3/τv) + 2℘2(t) [Uν(1 − Uρ) − 1/τv] + 2/τv
                                                                  (F.0.5)
d℘2(t)/dt = ℘1(t)/τv − ℘2(t) Uν(2 − Uρ)
We take the system to the stationary state (d℘i(t)/dt = 0), and the solution reads

℘ss1 = Uντv (2 − Uρ) / {(1 + Uντv) [1 + Uντv (1 − Uρ/2)]}         (F.0.6)

℘ss2 = 1 / {(1 + Uντv) [1 + Uντv (1 − Uρ/2)]}                     (F.0.7)
Finally, we compute the conditioned probability 〈pv(j|i)〉 of finding a vesicle at the j-th
synapse when a vesicle has been observed at the i-th synapse, in the stationary state

〈pv(j|i)〉 = ℘ss2/〈pv(j)〉 = 1/[1 + Uντv (1 − Uρ/2)]                (F.0.8)
One can now check that, if ρ = 1, we recover expression 4.4.6 concerning two contacts
of the same neuron.
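Equation (F.0.8) can also be validated by a direct Monte Carlo simulation of the two contacts, modeling the cross-correlated fibers as a common Poisson train of rate ρν plus an independent train of rate (1 − ρ)ν per fiber. A sketch in Python, with illustrative parameter values:

```python
import random

# Monte Carlo check of the conditioned probability (F.0.8).
# Correlated inputs: a common Poisson train of rate rho*nu drives both
# fibers, plus an independent train of rate (1-rho)*nu per fiber.
# Parameter values are illustrative, not taken from the thesis.
random.seed(7)
U, nu, tau_v, rho = 0.5, 40.0, 0.2, 0.5
dt, steps = 1e-3, 1_000_000        # 1000 s of simulated time

n = [1, 1]                         # vesicle present at contacts j and i
t_i = t_both = 0                   # time with vesicle at i / at both
for _ in range(steps):
    common = random.random() < rho*nu*dt
    spike_j = common or random.random() < (1.0 - rho)*nu*dt
    spike_i = common or random.random() < (1.0 - rho)*nu*dt
    for k, spike in enumerate((spike_j, spike_i)):
        if spike and n[k] == 1 and random.random() < U:
            n[k] = 0               # release with probability U
        elif n[k] == 0 and random.random() < dt/tau_v:
            n[k] = 1               # recovery of an empty docking site
    t_i += n[1]
    t_both += n[0]*n[1]

estimate = t_both/t_i              # <p_v(j|i)> from the simulation
x = U*nu*tau_v
theory = 1.0/(1.0 + x*(1.0 - U*rho/2.0))   # eq. (F.0.8)
print(round(estimate, 3), round(theory, 3))
assert abs(estimate - theory) < 0.03
```

The simulation samples the occupancies at all times rather than only at spike arrivals, which is legitimate for Poisson inputs by the memoryless argument of footnote 2 in Appendix E.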
Bibliography
E. D. Adrian. The impulses produced by sensory nerve endings: Part I.J. Physiol. (Lond.),
61:49–72, 1926.
G. Werner and V. B. Mountcastle. Neural activity in mechanoreceptive cutaneous afferents:
Stimulus-response relations, Weber functions, and information transmission.J. Neurophys-
iol., 28:359–397, 1965.
D.J. Tolhurst, J.A. Movshon, and A.F. Dean. The statistical reliability of signals in single
neurons in cat and monkey visual cortex.Vision Res., 23:775–785, 1983.
D.J. Tolhurst. The amount of information transmitted about contrast by neurons in the cats
visual cortex.Vis. Neurosci., 2:409–413, 1989.
K H Britten, M N Shadlen, W T Newsome, and J A Movshon. The analysis of visual
motion: a comparison of neuronal and psychophysical performance.J. Neurosci., 12:4745–
4765, 1992.
M. J. Tovee, E. T. Rolls, A. Treves, and R. P. Bellis. Information encoding and the response
of single neurons in the primate temporal visual cortex.J. Neurophysiol., 70:640–654,
1993.
C. Koch and I. Segev. The role of single neurons in information processing.Nature Neuro-
science, 3:1171–1177, 2000.
H. B. Barlow. Possible principles underlying the transformation of sensory messages. In
W. Rosenblith, editor,Sensory Communication, page 217. M.I.T. Press, Cambridge MA,
1961.
J. J. Atick. Could information theory provide an ecological theory of sensory processing?
Network: Comput. Neural Syst., 3:213–251, 1992.
J.-P. Nadal and N. Parga. Nonlinear neurons in the low-noise limit: a factorial code maxi-
mizes information transfer.Network: Computation in Neural Systems, 5:565–581, 1994.
Y. Dan, J. J. Atick, and R. C. Reid. Efficient coding of natural scenes in the lateral geniculate
nucleus: experimental test of a computational theory.J Neurosci, 16:3351–3362, 1996.
J.-P. Nadal, N. Brunel, and N. Parga. Nonlinear feedforward networks with stochastic
outputs: infomax implies redundancy reduction.Network: Computation in Neural Systems,
9:1–11, 1998.
M. S. Goldman, P. Maldonado, and L. F. Abbott. Redundancy reduction and sustained
firing with stochastic depressing synapses.J Neurosci., 22:584–91, 2002.
C. E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27:379–423,
1948. URL http://cm.bell-labs.com/cm/ms/what/shannonday/paper.htm.
A. Borst and F. E. Theunissen. Information theory and neural coding.Nature Neuroscience,
2:947–958, 1999.
L. F. Abbott, J. A. Varela, K. Sen, and S. B. Nelson. Synaptic depression and cortical gain
control. Science, 275:179–80, 1997.
M. V. Tsodyks and H. Markram. The neural code between neocortical pyramidal neurons
depends on neurotransmitter release probability.Proc Natl Acad Sci USA, 94:719–723,
1997.
John E. Lisman. Bursts as a unit of neural information: making unreliable synapses reliable.
TINS, 20:38–43, 1997.
W. Senn, I. Segev, and M. Tsodyks. Reading neuronal synchrony with depressing synapses.
Neural Computation, 10:815–819, 1998.
F. S. Chance, S. B. Nelson, and L. F. Abbott. Synaptic depression and the temporal response
characteristics of v1 cells.J. Neurosci., 12:4785–99, 1998a.
W. Maass and A. M. Zador. Dynamic stochastic synapses as computational units.Neural
Computation, 11(4):903–917, 1999.
V Matveev and X-J Wang. Differential short-term synaptic plasticity and transmission of
complex spike trains: to depress or to facilitate?Cereb Cortex., 10:1143–53, 2000a.
Thomas Natschlager and Wolfgang Maass. Computing the optimally fitted spike train for a
synapse.Neural Comp., 13:2477–2494, 2001.
W. Maass and H. Markram. Synapses as dynamic memory buffers.Neural Networks, 15:
155–161, March 2002.
G. Fuhrmann, I. Segev, H. Markram, and M. Tsodyks. Coding of temporal information by
activity-dependent synapses.J. Neurophysiol, 87:140–148, 2002.
J. de la Rocha, A. Nevado, and N. Parga. Information transmission by stochastic synapses
with short-term depression: neural coding and optimization. Neurocomputing, 44:85–90,
2002.
D. J. Amit and M. V. Tsodyks. Quantitative study of attractor neural network retrieving at
low spike rates: I. substrate–spikes, rates and neuronal gain.Network, 2:259–273, 1991.
M. N. Shadlen and W. T. Newsome. Noise, neural codes and cortical organization.Curr.
Op. Neurobiology, 4:569–579, 1994.
M. N. Shadlen and W. T. Newsome. The variable discharge of cortical neurons: implica-
tions for connectivity, computation, and information coding.J. Neurosci., 18:3870–3896,
1998.
M. F. Bear, B. W. Connors, and M. A. Paradiso. Neuroscience: Exploring the Brain.
Williams & Wilkins, 1996.
KE Sorra and KM Harris. Occurrence and three-dimensional structure of multiple synapses
between individual radiatum axons and their target pyramidal cells in hippocampal area ca1.
J Neurosci, 13(9):3736–48, 1993.
Thomas Schikorski and Charles F. Stevens. Quantitative Ultrastructural Analysis of Hip-
pocampal Excitatory Synapses.J. Neurosci., 17(15):5858–5867, 1997. URLhttp:
//www.jneurosci.org/cgi/content/abstract/17/15/5858 .
B. Walmsley.Prog. Neurobiol., 36:391–423, 1991.
J. del Castillo and B. Katz. Quantal components of the end-plate potential.J. Physiol., 124:
560–573, 1954a.
B. Katz and R. Miledi. The role of calcium in neuromuscular facilitation.J. Physiol., 195:
481–492, 1968.
H. Held. Die centrale Gehörleitung. Arch. Anat. Physiol., pages 201–248, 1893.
D.K. Ryugo, M.M. Wu, and T. Pongstaporn. Activity-related features of synapse morphol-
ogy: a study of endbulbs of held.J. Comp. Neurol., 365(1):141–158, 1996.
Bruce Walmsley, Francisco J. Alvarez, and Robert E. W. Fyffe. Diversity of structure and
function at mammalian central synapses.Trends in Neurosciences, 21(2):81–88, 1998.
N.A. Hessler, A.M. Shirke, and R. Malinow. The probability of transmitter release at a
mammalian central synapse.Nature, 366:569–572, 1993.
C. Rosenmund, J.D. Clements, and G.L. Westbrook. Nonuniform probability of glutamate
release at a hippocampal synapse.Science, 262:754–757, 1993.
C. Allen and C. F. Stevens. An evaluation of causes for unreliability of synaptic transmis-
sion. Proc. Natl. Acad. Sci. USA, 91:10380–10383, 1994.
C. F. Stevens and Y. Wang. Changes in reliability of synaptic function as a mechanism for
plasticity. Nature, 371:704–707, 1994.
I. A. Boyd and A. R. Martin. The end-plate potential in mammalian muscle.J. Physiol.
Lond., 132:74–91, 1956.
B. Katz. The Release of Neural Transmitter Substances. Liverpool University Press, 1969.
H.C. Tuckwell. Introduction to Theoretical Neurobiology II. Cambridge University Press,
Cambridge, UK., 1988.
D.K. Smetters and A. Zador. Synaptic transmission: noisy synapses and noisy neurons.
Current Biology, 6(10):1217–1218, 1996.
O Paulsen and P Heggelund. The quantal size at retinogeniculate synapses determined
from spontaneous and evoked epscs in guinea-pig thalamic slices.J Physiol (Lond), 480:
505–511, 1994.
O Paulsen and P Heggelund. Quantal properties of spontaneous epscs in neurones of the
guinea-pig dorsal lateral geniculate nucleus.J Physiol (Lond), 496:759–772, 1996.
MC Bellingham, R Lim, and B Walmsley. Developmental changes in epsc quantal size and
quantal content at a central glutamatergic synapse in rat.J Physiol (Lond), 511:861–869,
1998.
KJ Stratford, K Tarczy-Hornoch, KA Martin, NJ Bannister, and JJ Jack. Excitatory synaptic
inputs to spiny stellate cells in cat visual cortex.Nature, 382:258–261, 1996.
William B Levy and Robert A. Baxter. Energy-Efficient Neuronal Computation via Quan-
tal Synaptic Failures. J. Neurosci., 22(11):4746–4755, 2002. URLhttp://www.
jneurosci.org/cgi/content/abstract/22/11/4746 .
A. Zador. The impact of synaptic unreliability on the information transmitted by spiking
neurons.J. Neurophysiol., 79:1219–1229, 1998.
H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyra-
midal neurons.Nature, 382:807–810, 1996a.
Harold L. Atwood and Shanker Karunanithi. Diversification of synaptic strength: presy-
naptic elements.Nature Reviews Neuroscience, 3:497–516, 2002.
K. L. Magleby. Short-term changes in synaptic efficacy. In G.M. Edelman, W.E. Gall, and
W.M. Cowan, editors, Synaptic Function, pages 21–56. John Wiley and Sons, New York,
1987.
S.A. Fisher, T.M. Fischer, and T.J. Carew. Multiple overlapping processes underlying short-
term synaptic enhancement.Trends Neurosc., 20:170–177, 1997.
Robert S. Zucker. Short term synaptic plasticity.Annu. Rev. Neuroscience., 12:13–31, 1989.
Anthony M. Zador and Lynn E. Dobrunz. Dynamic synapses in the cortex.Neuron, 19:
1–4, 1997.
Henrique von Gersdorff and J. Gerard G. Borst. Short-term plasticity at the calyx of held.
Nature Reviews Neuroscience, 3:53–64, 2002.
Robert S. Zucker and Wade G. Regehr. Short-term synaptic plasticity.Annu. Rev. Physiol.,
64:355–405, 2002.
A.W. Liley and K.A.K. North. An electrical investigation of effects of repetitive stimulation
on mammalian neuromuscular junction. J. Neurophysiol., 16:509–527, 1953.
J.I. Hubbard. Repetitive stimulation of the mammalian neuromuscular junction, and the
mobilization of transmitter.Journal of Physiology, 169:641–662, 1963.
C. F. Stevens and Y. Wang. Facilitation and depression at single central synapses.Neuron,
14:795–802, 1995.
Robert S. Zucker. Exocytosis: A molecular and physiological perspective.Neuron, 17:
1049, 1996.
E Neher. Vesicle pools and ca2+ microdomains: New tools for understanding their roles in
neurotransmitter release.Neuron, 20:389, 1998.
R. Schneggenburger, T. Sakaba, and E. Neher. Vesicle pools and short-term synaptic de-
pression: lessons from a large synapse.Trends in Neurosciences, 25(4):206–212, 2002.
L. E. Dobrunz and C. F. Stevens. Heterogeneity of release probability, facilitation, and
depletion at central synapses.Neuron, 18:995–1008, 1997.
Sibylle Weis, Ralf Schneggenburger, and Erwin Neher. Properties of a Model of Ca++-
Dependent Vesicle Pool Dynamics and Short Term Synaptic Depression.Biophys.
J., 77(5):2418–2429, 1999. URLhttp://www.biophysj.org/cgi/content/
abstract/77/5/2418 .
V Matveev and X-J Wang. Implications of all-or-none synaptic transmission and short-term
depression beyond vesicle depletion: A computational study.J. Neurosci., 20:1575–1588,
2000b.
Julia Trommershauser. A semi-microscopic model of synaptic transmission and plasticity.
Master's thesis, Georg-August-Universität zu Göttingen, 2000. URL
http://www.theorie.physik.uni-goettingen.de/~trommer/written.html.
Christian Rosenmund and Charles F. Stevens. Definition of the readily releasable pool of
vesicles at hippocampal synapses.Neuron, 16:1197, 1996.
T. Schikorski and CF. Stevens. Morphological correlates of functionally defined synaptic
vesicle populations.Nat Neurosci., 4(4):391–395, 2001.
D. Zenisek, J. A. Steyer, and W. Almers. Transport, capture and exocytosis of single synap-
tic vesicles at active zones.Nature, 406:849–854, 2000.
L. E. Dobrunz. Release probability is regulated by the size of the readily releasable vesicle
pool at excitatory synapses in hippocampus.Int. J. Dev. Neuroscience, 730:1–12, 2002.
V N Murthy and C F Stevens. Synaptic vesicles retain their identity through the endocytic
cycle. Nature, 392(6675):497–501, 1998.
Jian-Yuan Sun, Xin-Sheng Wu, and Ling-Gang Wu. Single and multiple vesicle fusion
induce different rates of endocytosis at a central synapse.Nature, 417:555–559, 2002.
C. F. Stevens and T. Tsujimoto. Estimates for the pool size of releasable quanta at a single
central synapse and for the time required to refill the pool.PNAS, 92:846–849, 1994.
Henry Markram, Yun Wang, and Misha Tsodyks. Differential signaling via the same axon
of neocortical pyramidal neurons. PNAS, 95:977–979, 1998a.
Lu-Yang Wang and Leonard K. Kaczmarek. High-frequency firing helps replenish the
readily releasable pool of synaptic vesicles.Nature, 394:384–388, July 1998.
Gerald T. Finnerty, Langdon S. E. Roberts, and Barry W. Connors. Sensory experience
modifies the short-term dynamics of neocortical synapses.Nature, 400:367–371, 1999.
Juan A. Varela, Sen Song, Gina G. Turrigiano, and Sacha B. Nelson. Differential Depres-
sion at Excitatory and Inhibitory Synapses in Visual Cortex.J. Neurosci., 19(11):4293–
4304, 1999. URLhttp://www.jneurosci.org/cgi/content/abstract/
19/11/4293 .
Carl C.H. Petersen. Short-Term Dynamics of Synaptic Transmission Within the Excitatory
Neuronal Network of Rat Layer 4 Barrel Cortex.J Neurophysiol, 87(6):2904–2914, 2002.
URL http://jn.physiology.org/cgi/content/abstract/87/6/2904 .
J. Varela, K. Sen, J. Gibson, J Fost, LF Abbott, and SB Nelson. A quantitative description
of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex.J
Neuroscience, 17:7926–7940, 1997.
Jeremy S. Dittman and Wade G. Regehr. Calcium dependence and recovery kinetics of
presynaptic depression at the climbing fiber to purkinje cell synapse.J. Neurosci., 18:
6147–6162, 1998.
CF Stevens and JF Wesseling. Activity-dependent modulation of the rate at which synaptic
vesicles become available to undergo exocytosis.Neuron, 20:1243–1253, 1998.
E Neher and T Sakaba. Estimating transmitter release rates from postsynaptic current fluc-
tuations.J Neurosci., 21:9638–54, 2001.
Jeremy S. Dittman, Anatol C. Kreitzer, and Wade G. Regehr. Interplay between Facili-
tation, Depression, and Residual Calcium at Three Presynaptic Terminals.J. Neurosci.,
20(4):1374–1385, 2000. URLhttp://www.jneurosci.org/cgi/content/
abstract/20/4/1374 .
L. E. Dobrunz, E. P. Huang, and C. F. Stevens. Very short-term plasticity in hippocampal
synapses.Proc. Natl. Acad. Sci. USA, 94:14843–14847, 1997.
Eric Hanse and Bengt Gustafsson. Release Dependence to a Paired Stimulus at a Synaptic
Release Site with a Small Variable Pool of Immediately Releasable Vesicles.J. Neurosci.,
22(11):4381–4387, 2002. URLhttp://www.jneurosci.org/cgi/content/
abstract/22/11/4381 .
SF Hsu, GJ Augustine, and MB Jackson. Adaptation of ca(2+)-triggered exocytosis in
presynaptic terminals.Neuron, 17(3):501–12, 1996.
Mathew V. Jones and Gary L. Westbrook. The impact of receptor desensitization on fast
synaptic transmission.Trends in Neurosciences, 19(3):96–101, March 1996.
J. del Castillo and B. Katz. Statistical factors involved in neuromuscular facilitation and
depression.J. Physiol., 124:574–85, 1954b.
Anirudh Gupta, Yun Wang, and Henry Markram. Organizing principles for a diversity of
GABAergic interneurons and synapses in the neocortex. Science, 287:273–278, 2000.
CM Hempel, KH Hartman, X-J Wang, GG Turrigiano, and SB Nelson. Multiple forms of
short-term plasticity at excitatory synapses in rat medial prefrontal cortex.J. Neurophysiol.,
83:3031–3041, 2000.
FR Edwards, SJ Redman, and B Walmsley. Statistical fluctuations in charge transfer at Ia
synapses on spinal motoneurones.J Physiol (Lond), 259(3):665–688, 1976. URLhttp:
//www.jphysiol.org/cgi/content/abstract/259/3/665 .
A Triller and H Korn. Transmission at a central inhibitory synapse. iii. ultrastructure of
physiologically identified and stained terminals.J Neurophysiol, 48:708–736, 1982.
S. J. Redman. Quantal analysis of synaptic potentials in neurons of the central nervous
system.Physiological Reviews, 70:165–198, 1990.
G Tong and CE Jahr. Multivesicular release from excitatory synapses of cultured hippocam-
pal neurons.Neuron, 12:51–59, 1994.
C. Auger, S. Kondo, and A. Marty. Multivesicular release at single functional synaptic
sites in cerebellar stellate and basket cells. J Neurosci, 18:4532–4547, 1998.
T.G. Oertner, B.L. Sabatini, E.A. Nimchinsky, and K. Svoboda. Facilitation at single
synapses probed with optical quantal analysis.Nat Neurosci., 5(7):657–664, 2002.
H. Markram, J. Lubke, M. Frotscher, A. Roth, and B. Sakmann. Physiology and anatomy
of synaptic connections between thick tufted pyramidal neurones in the developing rat neo-
cortex. J. Physiol., 500:409–440, 1997a.
V Murthy, T Sejnowski, and C F Stevens. Heterogeneous release properties of visualized
individual hippocampal synapses.Neuron, 18:599–612, 1997.
W. Senn, M. Schneider, and B. Ruf. Activity-dependent development of axonal and den-
dritic delays or, why synaptic transmission should be unreliable.Neural Computation, 14
(3):583–620, 2002a.
Michael J. O’Donovan and John Rinzel. Synaptic depression: a dynamic regulator of synap-
tic communication with varied functional roles.Trends in Neurosciences, 20:431–433,
1997.
Frances S. Chance, Sacha B. Nelson, and L. F. Abbott. Synaptic depression and the tem-
poral response characteristics of v1 cells.Journal of Neuroscience, 18(12):4785–4799,
1998b.
P. Adorjan and K. Obermayer. Contrast adaptation in simple cells by changing the transmit-
ter release probability. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors,Advances in
Neural Information Processing Systems NIPS, volume 11, pages 76–82. MIT Press, 1999.
T. Natschlager, W. Maass, and A. Zador. Efficient temporal processing with biologically
realistic dynamic synapses.Network: Computation in Neural Systems, 12:75–87, 2001.
W Senn, M Schneider, and B. Ruf. Activity-dependent development of axonal and dendritic
delays, or, why synaptic transmission should be unreliable.Neural Comput., 14(3):583–
619, 2002b.
H. Markram and M. V. Tsodyks. Redistribution of synaptic efficacy: A mechanism to gen-
erate infinite synaptic input diversity from a homogeneous population of neurons without
changing absolute synaptic efficacies.J. Physiology, 90:229–232, 1996b.
Henry Markram, Dimitri Pikus, Anirudh Gupta, and Misha Tsodyks. Potential for multiple
mechanisms, phenomena and algorithms for synaptic plasticity at single synapses.Neu-
ropharmacology, 37:489–500, 1998b.
D.J. Jr. Hagler and Y. Goda. Properties of synchronous and asynchronous release during
pulse train depression in cultured hippocampal neurons.J. Neurophysiol., 85:2324–2334,
2001.
Eric Hanse and Bengt Gustafsson. Vesicle release probability and pre-primed pool at gluta-
matergic synapses in area ca1 of the rat neonatal hippocampus.The Journal of Physiology,
531(2):481–493, 2001a.
Eric Hanse and Bengt Gustafsson. Factors explaining heterogeneity in short-term synap-
tic dynamics of hippocampal glutamatergic synapses in the neonatal rat.The Journal of
Physiology, 531(1):141–149, 2001b.
M. Raastad, J.F. Storm, and P. Anderson. Putative single quantum and single fiber excita-
tory postsynaptic currents show similar amplitude range and variability in rat hippocampal
slices.Eur. J. Neursci., 4:113–117, 1992.
AM Thomson and J Deuchars. Synaptic interactions in neocortical local circuits: dual
intracellular recordings in vitro. Cereb. Cortex, 7(6):510–522, 1997. URLhttp:
//cercor.oupjournals.org/cgi/content/abstract/7/6/510 .
V Murthy, T Schikorski, C F Stevens, and Zhu Yongling. Inactivity produces increases in
neurotransmitter release and synapse size.Neuron, 32:673–682, 2001.
Ling-Gang Wu and J. Gerard G. Borst. The reduced release probability of releasable vesi-
cles during recovery from short-term synaptic depression.Neuron, 23:821, 1999.
AM Thomson. Activity-dependent properties of synaptic transmission at two classes of
connections made by rat neocortical pyramidal axons in vitro.J Physiol (Lond), 502(1):
131–147, 1997. URLhttp://www.jphysiol.org/cgi/content/abstract/
502/1/131 .
T.C. Südhof. The synaptic vesicle cycle revisited. Neuron, 28:317–320, 2000.
Walter Senn, Henry Markram, and Misha Tsodyks. An algorithm for modifying neuro-
transmitter release probability based on pre- and postsynaptic spike timing.Neural Comp.,
13:35–67, 2001.
W. Softky and C. Koch. The highly irregular firing of cortical cells is inconsistent with
temporal integration of random EPSPs. J. Neurosci., 13:334–350, 1993.
C. van Vreeswijk. Using renewal neurons to transmit information.European Biophysics
Journal, 29:245, 2000.
D.R. Cox. Renewal Theory. John Wiley, New York, 1962.
G Mandl. Coding for stimulus velocity by temporal patterning of spike discharges in visual
cells of cat superior colliculus.Vis. Res., 33:1451–1475, 1993.
E. Zohary, M. N. Shadlen, and W. T. Newsome. Correlated neuronal discharge rate and its
implication for psychophysical performance.Nature, 370:140–143, 1994.
W Bair, C Koch, W Newsome, and K Britten. Power spectrum analysis of bursting cells of
area mt in the behaving monkey.J. Neurosci., 14:2870–2892, 1994.
Andre A. Fenton and Robert U. Muller. Place cell discharge is extremely variable during
individual passes of the rat through the firing field.PNAS, 95(6):3182–3187, 1998. URL
http://www.pnas.org/cgi/content/abstract/95/6/3182 .
E.R. Kandel and W.A. Spencer. J. Neurophysiol., 24:243–259, 1961.
J.B. Ranck.Exp. Neurol., 41:462–531, 1973.
R. J. Baddeley, L. F. Abbott, M. Booth, F. Sengpiel, T. Freeman, E. A. Wakeman, and E. T.
Rolls. Responses of neurons in primary and inferior temporal visual cortices to natural
scenes.Proc. R. Soc. Lond. Ser. B, 264(1389):1775–1783, 1997.
H. Markram, J. Lubke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by
coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997b.
C. van Vreeswijk. Information transmission with renewal neurons.Neurocomputing, 39:
417–422, 2001.
M. S. Goldman, S. B. Nelson, and L. F. Abbott. Decorrelation of spike trains by synaptic
depression.Neurocomputing, 26:147–153, 1999.
M. S. Goldman.Computational Implications of Activity-Dependent Neuronal Processes.
Ph. D. Thesis. Harvard University., 2000.
R. E. Blahut.Principles and Practice of Information Theory. Addison-Wesley, Cambridge,
MA, 1988.
B. Roy Frieden.Physics from Fisher Information: A Unification. Cambridge University
Press, 1999.
F. Gabbiani and C. Koch. Principles of spike train analysis. In C. Koch and I. Segev,
editors, Methods in Neuronal Modeling: From Ions to Networks. MIT Press, 1998.
T. M. Cover and J. A. Thomas.Elements of information theory. John Wiley, New York,
1991.
J.P. Nadal. Information theoretic approach to neural coding and parameter estimation: a
perspective. In R. Rao, B. Olshausen, and M. Lewicki, editors, Statistical Theories of the
Brain. MIT Press, 2000.
N. Brunel and J. P. Nadal. Mutual information, fisher information and population coding.
Neural Comp., 10:1731–57, 1998.
A. Treves, S. Panzeri, E. T. Rolls, M. Booth, and E. A. Wakeman. Firing rate distributions
and efficiency of information transmission of inferior temporal cortex neurons to natural
visual stimuli.Neural Computation, 11:601–632, 1999.
J.J. Rissanen. Fisher information and stochastic complexity.IEEE Transactions on Infor-
mation Theory, 42(1):40 –47, January 1996.
WB Levy and RA Baxter. Energy efficient neural codes.Neural Comp., 8(3):531–
543, 1996. URLhttp://neco.mitpress.org/cgi/content/abstract/8/
3/531 .
S B Laughlin, R R de Ruyter van Steveninck, and J C Anderson. The metabolic cost of
neural information. Nature Neuroscience, 1:36–41, 1998.
Vijay Balasubramanian, Don Kimber, and Michael J. Berry II. Metabolically Efficient
Information Processing. Neural Comp., 13(4):799–815, 2001. URL
http://neco.mitpress.org/cgi/content/abstract/13/4/799.
Gonzalo Garcia de Polavieja. Errors drive the evolution of biological signalling to costly
codes. J. Theor. Biol., 214:657–664, 2002.
William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery, editors.
Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press,
2nd edition, February 2002.
H. Markram. A network of tufted layer 5 pyramidal neurons.Cereb. Cortex, 7(6):523–533,
1997. URL http://cercor.oupjournals.org/cgi/content/abstract/
7/6/523 .
E. Riva Sanseverino, L. F. Agnati, M. G. Maioli, and C. Galletti. Maintained activity of
single neurons in striate and non-striate areas of the cat visual cortex.Brain Res., 54:225,
1973.
M. Abeles.Local Cortical Circuits. An Electrophysiological Study. Springer-Verlag, 1982.
C. R. Legendy and M. Salcman. Bursts and recurrences of bursts in the spike trains of
spontaneously active striate cortex neurons.J. Neurophysiol., 53(926), 1985.
William Feller. An Introduction to Probability Theory and Its Applications, volume 1. John
Wiley and Sons, 3rd edition, 1950.
F. Rieke, D. Warland, R. R. de Ruyter van Steveninck, and W. Bialek.Spikes: exploring
the neural code. MIT Press, Cambridge, MA, 1996.
Peter Dayan and Larry F. Abbott. Theoretical Neuroscience: Computational and Mathe-
matical Modeling of Neural Systems. MIT Press, 1st edition, 2001.
Misha V. Tsodyks, Klaus Pawelzik, and Henry Markram. Neural networks with dynamic
synapses.Neural Comp., 10:821–835, 1998.
Misha Tsodyks, Asher Uziel, and Henry Markram. Synchrony generation in recurrent
networks with frequency-dependent synapses.J. Neurosci., 20:RC50, 2000.
J Reutimann, S Fusi, W Senn, V Yakovlev, and E Zohary. A model of expectation effects
in inferior temporal cortex.Neurocomputing, 2001.
Alex Loebel and Misha Tsodyks. Computation by ensemble synchronization in recurrent
networks with synaptic depression. Journal of Computational Neuroscience, 13(2):111–124,
2002.
Lovorka Pantic, Joaquin J. Torres, Hilbert J. Kappen, and Stan C.A.M. Gielen. Associative
memory with dynamic synapses.Neural Computation, 14(12), 2002.
L. E. Dobrunz and C. F. Stevens. Response of hippocampal synapses to natural stimulation
patterns. Neuron, 22:157–166, 1999.
Mircea Steriade.The Intact and Sliced Brain. MIT Press, 2001.
V. Braitenberg and A. Schuz. Anatomy of the Cortex: Statistics and Geometry. Springer-
Verlag, Berlin, 1991.
W. Softky. Simple codes versus efficient codes.Curr. Opin. Neurobiol., 5:239–245, 1995.
Stefano Panzeri, Rasmus S. Petersen, Simon R. Schultz, Michael Lebedev, and Mathew E.
Diamond. The role of spike timing in the coding of stimulus location in rat somatosensory
cortex.Neuron, 29:769–777, March 2001.
F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek.Spikes: exploring the
neural code. MIT Press, Cambridge, MA, USA, 1997.
W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. Reading a neural
code.Science, 252:1854–1857, 1991.
R. C. deCharms and M. M. Merzenich. Primary cortical representation of sounds by the
coordination of action potentials.Nature, 381:610–613, 1996.
M. Abeles, H. Bergman, I. Gat, I. Meilijson, E. Seidemann, N. Tishby, and E. Vaadia. Cor-
tical activity flips among quasi-stationary states. Proc. Nat. Acad. Sci. USA, 92:8616–8620,
1995.
V. N. Murthy and E. E. Fetz. Synchronization of neurons during local field potential os-
cillations in sensorimotor cortex of awake monkeys.J. Neurophysiol., 76(6):3968–3982,
1996.
E. Vaadia, I. Haalman, M. Abeles, H. Bergman, Y. Prut, H. Slovin, and A. Aertsen. Dy-
namics of neuronal interactions in monkey cortex in relation to behavioural events.Nature,
373:515–518, 1995.
Y Prut, E Vaadia, H Bergman, I Haalman, H Slovin, and M Abeles. Spatiotemporal struc-
ture of cortical activity: properties and behavioral relevance.J. Neurophysiol., 79:2857–74,
1998.
C deCharms and A Zador. Neural representations and the cortical code.Ann. Review of
Neurosci., 23:613–847, 2000.
CE. Carr. Processing of temporal information in the brain.Annu. Rev. Neurosci., 16:223–
243, 1993.
JJ. Hopfield. Pattern recognition computation using action potential timing for stimulus
representation. Nature, 376:33–36, 1995.
JJ. Hopfield. Transforming neural computations and representing time. Proc. Natl. Acad.
Sci. USA, 93:15440–44, 1996.
C. M. Gray and D. A. McCormick. Chattering cells: Superficial pyramidal neurons con-
tributing to the generation of synchronous oscillations in the visual cortex.Science, 274:
109–113, 1996.
W. Singer. Neuronal synchrony: A versatile code for the definition of relations?Neuron,
24:49–65, 1999.
E. Salinas and T. J. Sejnowski. Correlated neuronal activity and the flow of neural
information. Nature Reviews Neuroscience, 2:539–550, 2001.
O. Bernander, C. Koch, and M. Usher. The effect of synchronized inputs at the single
neuron level. Neural Comput., 6:622–641, 1994.
V. N. Murthy and E. E. Fetz. Effects of input synchrony on the firing rate of a three-
conductance cortical neuron model. Neural Comput., 6:1111–1126, 1994.
C. F. Stevens and A. Zador. Input synchrony and the irregular firing of cortical neurons.
Nature Neuroscience, 1(3):210–217, 1998.
Jianfeng Feng and David Brown. Impact of correlated inputs on the output of the integrate-
and-fire model. Neural Computation, 12:671–692, 2000.
S. M. Bohte, H. Spekreijse, and P. R. Roelfsema. The effects of pair-wise and higher-order
correlations on the firing rate of a postsynaptic neuron. Neural Comput., 12:153–179, 2000.
E. Salinas and T. J. Sejnowski. Impact of correlated synaptic input on output firing rate and
variability in simple neuronal models. J. Neurosci., 20:6193–6209, 2000.
Michael Rudolph and Alain Destexhe. Correlation detection and resonance in neural
systems with distributed noise sources. Physical Review Letters, 86:3662–3665, 2001.
A. Kuhn, S. Rotter, and A. Aertsen. Correlated input spike trains and their effects on the
response of a leaky integrate-and-fire neuron. Neurocomputing, 44–46:121–126, 2002.
R. Moreno, J. de la Rocha, A. Renart, and N. Parga. Response of spiking neurons to
correlated inputs. Physical Review Letters, 89(28):288101, 2002.
W. Rall and I. Segev. Functional possibilities for synapses on dendrites and dendritic spines.
In G. Edelman and J. D. Cowan, editors, Synaptic Function, pages 605–636, New York,
1987. John Wiley.
I. Segev and W. Rall. Excitable dendrites and spines: earlier theoretical insights elucidate
recent direct observations. Trends Neurosci, 21(11):453–460, 1998.
E. De Schutter. Using realistic models to study synaptic integration in cerebellar Purkinje
cells. Rev Neurosci, 10(3–4):233–245, 1999.
J. C. Magee. Dendritic integration of excitatory synaptic input. Nat Rev Neurosci,
1(3):181–190, 2000.
Thomas Natschläger. Efficient Computation in Networks of Spiking Neurons: Simulation
and Theory. PhD thesis, Technische Universität Graz, 1999.
M. Abeles. Corticonics: Neural circuits of the cerebral cortex. Cambridge University
Press, Cambridge, 1991.
A. U. Larkman, J. J. Jack, and K. J. Stratford. Quantal analysis of excitatory synapses in rat
hippocampal CA1 in vitro during low-frequency depression. J Physiol, 505:457–471, 1997.
C. Stricker, A. C. Field, and S. J. Redman. Statistical analysis of amplitude fluctuations in
EPSCs evoked in rat CA1 pyramidal neurones in vitro. J Physiol (Lond), 490(2):419–441,
1996. URL http://www.jphysiol.org/cgi/content/abstract/490/2/419.
B. Katz. Neural transmitter release: from quantal secretion to exocytosis and beyond. The
Fenn Lecture. J Neurocytol, 25(12):677–686, 1996.
Santiago Ramón y Cajal. Histology of the Nervous System of Man and Vertebrates. Oxford
University Press, 1995.
H. von Gersdorff, R. Schneggenburger, S. Weis, and E. Neher. Presynaptic depression at a
calyx synapse: the small contribution of metabotropic glutamate receptors. J. Neurosci.,
17:8137–8146, 1997.
Laurence O. Trussell. Modulation of transmitter release at giant synapses of the auditory
system. Current Opinion in Neurobiology, 12(4):400–404, August 2002. URL
http://www.sciencedirect.com/science/article/B6VS3-468D8Y1-1/1/cb2dbf04e3530071ec8a3adf70f2cc02.
H. Korn, D. S. Faber, Y. Burnod, and A. Triller. Quantal analysis and synaptic efficacy in
the CNS. Trends Neurosci, 4:125–130, 1984.
L. M. Ricciardi. Diffusion Processes and Related Topics in Biology. Springer-Verlag,
Berlin, 1977.
D. J. Amit and N. Brunel. Model of global spontaneous activity and local structured
activity during delay periods in the cerebral cortex. Cerebral Cortex, 7:237–252, 1997.
X.-J. Wang. Synaptic reverberation underlying mnemonic persistent activity. Trends in
Neurosciences, 24:455–463, 2001.
A. Rauch, G. La Camera, H. R. Lüscher, W. Senn, and S. Fusi. Neocortical pyramidal cells
respond as integrate-and-fire neurons to in vivo-like input currents. In press, 2002.
N. Berretta and R. S. Jones. A comparison of spontaneous EPSCs in layer II and layer IV-V
neurons of the rat entorhinal cortex in vitro. J Neurophysiol, 76(2):1089–1100, 1996. URL
http://jn.physiology.org/cgi/content/abstract/76/2/1089.
M. V. Tsodyks and T. J. Sejnowski. Rapid state switching in balanced cortical network
models. Network, 6:111, 1995.
C. van Vreeswijk and H. Sompolinsky. Chaos in neural networks with balanced excitatory
and inhibitory activity. Science, 274:1724–1726, 1996.
Alfonso Renart. Models of Multi-Areal Cortical Processing. PhD thesis, Universidad
Autónoma de Madrid, 2000.
G. Silberberg, M. Bethge, M. Tsodyks, H. Markram, and K. Pawelzik. Submitted,
10:815–819, 2002.
I. S. Gradshteyn, I. M. Ryzhik, and A. Jeffrey. Table of Integrals, Series, and Products.
Academic Press, Inc., London, 1980.
List of Figures
1.1 Electron micrograph of a synapse from the stratum radiatum in CA1 in the
hippocampus of an adult mouse . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Picture of the morphology of a synapse. . . . . . . . . . . . . . . . . . . 3
2.1 Release probability as a function of the number of vesicles in the RRP. . . 16
2.2 Schematic picture of the synaptic dynamics seen as a system composed of
two pools of vesicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Examples of input spike trains with exponential autocorrelations . . . . . . 21
2.4 Population of M neurons making single contacts onto a target cell . . . . . 28
2.5 Model of the population distribution of synaptic parameters, D(U, N0) . . . 31
2.6 Synaptic transfer function: the synaptic response rate νsr vs the input rate ν.
Squared coefficient of variation of the IRIs, CV²iri, vs τv . . . . . . . . . . 33
2.7 Why do correlated trains saturate for higher input rates than Poisson trains? 36
2.8 Connected conditional rate Ccr(∆) of the synaptic responses . . . . . . . . 38
2.9 Distribution of synaptic responses ρiri(∆|ν) for several values of the number
of docking sites N0 and different recovery rates 1/τv . . . . . . . . . . . . 39
2.10 Synaptic response rate νsr vs the input rate ν for different values of N0 . . . 41
3.1 Relative reconstruction error ε versus input rate . . . . . . . . . . . . . . . 60
3.2 Information I(∆; ν) and information rate R(∆; ν) = νr I(∆; ν) versus input
rate ν . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.3 Input Gamma distributions f(ν) . . . . . . . . . . . . . . . . . . . . . . . 64
3.4 Information I(∆; ν) versus the mean input rate ν, for three different input
distributions f(ν) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.5 Optimization of the recovery time constant τv regarding the Fisher
information per response Jsr . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.6 Optimization of the recovery time constant τv, and comparison of the Fisher
information per response Jsr, per unit time J, and the mutual information
I(∆; ν) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.7 Analysis of the mutual information I(∆; ν) and entropies H(∆) and ⟨H(∆|ν)⟩ν
as a function of τv (small τv scale) . . . . . . . . . . . . . . . . . . . . . . 72
3.8 Analysis of the mutual information I(∆; ν) and entropies H(∆) and ⟨H(∆|ν)⟩ν
as a function of τv (large τv scale) . . . . . . . . . . . . . . . . . . . . . . 73
3.9 Optimization of τv regarding the Fisher information per response, for differ-
ent values of the readily releasable pool size N0 . . . . . . . . . . . . . . . 75
3.10 Optimization of T regarding the Fisher information per response, for dif-
ferent values of the readily releasable pool size N0 (where τv in each case
varies such that τv = N0 T) . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.11 Optimization of τv and T regarding the mutual information I(∆; ν), for
different values of the readily releasable pool size N0 . . . . . . . . . . . . 77
3.12 Optimal recovery time τopt as a function of the input rate ν . . . . . . . . . 78
3.13 Optimal recovery time τopt of the Fisher information per response, as a func-
tion of the input CVisi . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.14 Optimal recovery time τopt of the Fisher information per response, as a func-
tion of the time scale τc of the input correlations . . . . . . . . . . . . . . . 80
3.15 Optimal recovery time τopt of the Fisher information per response, as a func-
tion of the number of docking sites N0 . . . . . . . . . . . . . . . . . . . . 81
3.16 Optimal recovery time τopt of the mutual information I(∆; ν), as a function
of the number of docking sites N0 . . . . . . . . . . . . . . . . . . . . . . 82
3.17 Optimization of the recovery time constant τv regarding the mutual informa-
tion per unit energy I(∆; ν) . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.18 Optimization of the recovery time constant τv regarding the Fisher informa-
tion per unit energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.19 Analysis of the transfer function νr(ν) for two values of the release proba-
bility U . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.20 Optimization of the release probability, U, regarding the Fisher information
per second, when the output code is the number of responses . . . . . . . . 88
3.21 3-D plot of the optimal release probability, Uopt, as a function of ν and τv,
when the input is Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.22 Optimal release probability, Uopt, as a function of ν, when the input is
correlated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.23 Optimization of the release probability regarding the mutual information
I(∆; ν) for different values of the input correlations magnitude CV and the
recovery time constant τv . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.24 Optimization of the population distribution D(U, N0, τv) . . . . . . . . . . 94
4.1 Model of a single synaptic contact . . . . . . . . . . . . . . . . . . . . . . 110
4.2 Diagram of the temporal evolution of a system composed of a single synaptic
contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.3 Illustration of the different ways two neurons may be connected. . . . . . 113
4.4 Schematic picture of the calyx of Held synapse. . . . . . . . . . . . . . . 114
4.5 Schematic description of a model connection with two synaptic contacts
(M = 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.6 Schematic description of two synaptic contacts belonging to different pre-
synaptic neurons whose activity is correlated . . . . . . . . . . . . . . . . 118
5.1 Two examples showing the simulation of the membrane potential evolution
of a LIF neuron in a sub-threshold balanced regime (top) and in a supra-
threshold situation (bottom) . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.2 Current variance σ²w (per contact) as a function of the number of contacts M
and the input rate ν when CM is kept fixed . . . . . . . . . . . . . . . . . 142
5.3 Correlation magnitude α as a function of the number of contacts M and the
input rate ν . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.4 Current parameters as a function of ν for several values of (C, M) where
CM is invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.5 Position νmax of the variance maximum and the ratio σ²w(νmax)/σ²lim as a
function of the number of contacts M and the input rate ν, when the product
CM is held constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.6 Current parameters as a function of ν for several values of the correlation ρ
where all the C = 3750 neurons make a mono-synaptic connection M = 1 . 148
5.7 Current variance σ²w (per contact) as a function of the number of contacts M
and the input rate ν when MJ = J′ is kept fixed . . . . . . . . . . . . . . . 149
5.8 Current parameters as a function of the input rate ν for several values of
(M, J) where MJ = J′ is constant . . . . . . . . . . . . . . . . . . . . . . 151
5.9 Numerical results and theoretical prediction of the non-monotonic response
of a LIF neuron for input current examples with different Ms, while CM is
held constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.10 Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (I) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.11 Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (II) . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.12 Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (III) . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.13 Evolution of the afferent spikes, synaptic releases, total afferent current and
membrane potential (IV).. . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.14 Numerical results and theoretical prediction of the non-monotonic response
of a LIF neuron for examples of input current with different Ms, while MJ
is held constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
B.1 Diagram showing the nomenclature and logic of the terms in which ρiri(t) is
expanded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
E.1 Diagram of the temporal evolution of a system composed of two synaptic
contacts belonging to the same pre-synaptic neuron . . . . . . . . . . . . . 190
List of Tables
2.1 Parameters and functions of the model of a pre-synaptic terminal. . . . . . 42
2.2 Parameters and functions of the population of synapses distribution. . . . . 43
2.3 Parameters and functions used to model the input spike statistics . . . . . . 44
2.4 Parameters and functions used to model the synaptic response statistics. . 45