Biol Cybern (2006) 94: 33–45
DOI 10.1007/s00422-005-0023-y

ORIGINAL PAPER

Cengiz Günay · Anthony S. Maida

A stochastic population approach to the problem of stable recruitment hierarchies in spiking neural networks

Received: 15 June 2005 / Accepted: 9 September 2005 / Published online: 10 November 2005
© Springer-Verlag 2005

Abstract Synchrony-driven recruitment learning addresses the question of how arbitrary concepts, represented by synchronously active ensembles, may be acquired within a randomly connected static graph of neuron-like elements. Recruitment learning in hierarchies is an inherently unstable process. This paper presents conditions on parameters for a feedforward network to ensure stable recruitment hierarchies. The parameter analysis is conducted by using a stochastic population approach to model a spiking neural network. The resulting network converges to activate a desired number of units at each stage of the hierarchy. The original recruitment method is modified first by increasing feedforward connection density for ensuring sufficient activation, then by incorporating temporally distributed feedforward delays for separating inputs temporally, and finally by limiting excess activation via lateral inhibition. The task of activating a desired number of units from a population is performed similarly to a temporal k-winners-take-all network.

Keywords Population dynamics · Temporal binding · Synchrony-driven recruitment learning · Spiking neuron · Stable equilibrium · k-WTA

1 Introduction

Layered feedforward and recurrent networks are known to be successful in pattern recognition tasks (Hinton et al., 1986). With these networks, information is represented monolithically in fully distributed form. Consequently, these nets are less suited for more general tasks requiring the representation of structured knowledge than nets employing localist, or modularly distributed, representations (Feldman, 1990; von der Malsburg, 1995). Localist nets, however, have often been criticized for unreliability due to lack of redundancy (loss of a neuron causes loss of the corresponding concept) and for having implausible brain-like representations due to combinatorial explosion (Page, 2000).

C. Günay (B) · A. S. Maida
Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504-4330 USA
E-mail: [email protected]
Dept. of Biology, Emory University, Atlanta, GA 30322, USA

A. S. Maida
Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504-4330 USA
Institute of Cognitive Science, University of Louisiana at Lafayette, Lafayette, LA 70504-3772 USA

A synchrony-driven recruitment learning method for acquiring new representations in localist nets addresses these criticisms (Feldman, 1990; Shastri and Ajjanagadde, 1993; Valiant, 1994; Shastri, 2001; Günay and Maida, 2005). This method allows recruitment of sets of neuron-like units, called neuroids, to redundantly represent new conjunctions of existing concepts in a random graph. These nets are more reliable, because each concept is represented redundantly with a randomly distributed, but compact, set of neuroids (on the order of tens). The combinatorial infeasibility criticism is addressed by using binding with temporal correlation to drive the recruitment process. Temporal binding proposes to dynamically represent concepts using temporal synchrony in order to prevent the combinatorial explosion associated with static binding methods (von der Malsburg, 1994, 1995).

The temporal binding proposal has been criticized for cognitive incompleteness due to its transient nature (Shastri and Ajjanagadde, 1993; O'Reilly et al., 2003). When considering the full range of cognitive tasks, it is necessary to address the question of how long-term storage is formed. The methods can be combined by using temporal binding to trigger recruitment for creating permanent memories (Valiant, 1994, 2000; Shastri, 1999, 2001; Günay and Maida, 2003, 2005). Once recruited, a neuroid will be reactivated only by the inputs that initially caused its recruitment, and it will ignore other inputs. The neuroid can be said to have "memorized" the inputs. A novel concept is thus formed by a set of recruited neuroids, drawn from the free neuroids in the network, that redundantly represent the same inputs. Novel conjunctive concepts can be created by simultaneously activating existing concepts in the network. This scheme, while not forming dynamic representations, still offers economy in the amount of storage by keeping only the concepts to which the cognitive system has been exposed so far (Feldman, 1990; Valiant, 1994; Page, 2000; Günay and Maida, 2003).

However, recruitment learning is prone to instability when a chain of concepts is recruited in cascade, as seen in Fig. 1. The statistical variance inherent in the recruitment method causes increasing perturbations to the recruited set size, and thus instability (Valiant, 1994). We previously proposed a boost-and-limit algorithm to improve recruitment stability, and verified the applicability of this method with a software model in a spiking neuroidal net simulator (Günay and Maida, 2001, 2005). In that model, excess recruitment candidates were rejected to enforce a stable recruitment level. The present work proposes a biologically supported mechanism that may serve to implement the previously proposed boost-and-limit method in neural hardware.

The boost-and-limit method, sketched in Fig. 2, works by first increasing the statistical expectation of the recruited set size. This limits the probability of under-recruitment. Then, to control this increase, negative feedback is applied by setting a hard limit on the size of the recruitment set. This limits the possibility of over-recruitment. We propose a biological model with similar function, using both the variable delays inherent in cortical networks and lateral inhibitory effects between principal neurons as the negative feedback. In this model, the initially synchronized spike volley intended to cause recruitment is assumed to be subject to varying delays in the individual spikes. The delays are caused by spikes travelling through axons of slightly varying lengths and by dendritic placement of synapses. The background noise in cortical networks interacts with membrane charge time and thereby can affect the timing of a spike emitted by the postsynaptic neuron. The time to charge up the membrane to the threshold depends on the current membrane voltage achieved by the background noise. This adds another source of uncertainty to the spike time. These varying delays in the spike arrival times cause the destination neuroids to fire and become recruited in a temporally dispersed sequence. During this process, we propose using the local lateral inhibition as a mechanism that saturates to fully inhibiting the localized area after enough neuroids are recruited. This is possible if each recruited neuroid emits a delayed lateral inhibitory signal within a properly connected structure. In other words, recruitment causes the neuroid to fire (as proposed by Valiant) and emit a lateral inhibitory spike (our proposal), thereby slowing down further recruitment. In this work, we assume that neuroids are capable of projecting both excitatory and inhibitory synapses.

Fig. 1 A hierarchical recruitment scenario. The circles indicate the set of neuroids that represent each concept, whereas the large ellipses indicate projection sets of these neuroids. The number of neuroids in intersections of projections varies in an unstable manner when recruitment is used repetitively

Fig. 2 Basic structure of the boost-and-limit mechanism. Boosting signifies increased connectivity between A and B. Limiting applies to the size of the recruited set via negative feedback (possibly lateral inhibition)

In Sect. 3, we describe the stochastic population approach that we employ to study the properties of the proposed boost-and-limit mechanism within the context of an integrate-and-fire (I/F) neuroidal network. We are interested in variations of the expected population size of recruited neuroids with respect to perturbations to the input population size. We first introduce a feedback control system which models recruitment using activated synaptic populations. The expected size of the recruited neuroid set has an equilibrium point r_o. In Sect. 4 we confirm that in the original open-loop recruitment system, the equilibrium point is unstable. That is, for perturbations in the size of the input set, the size of the recruited set diverges from the equilibrium point. We also verify the population model results with Monte Carlo simulations.

In the closed-loop system of our model in Sect. 5, with excitatory and inhibitory synaptic populations, the equilibrium point becomes stable, but the model is prone to oscillations. Section 5.2 describes the low-pass filter that is required at the output of this model to prevent the unwanted oscillations in the activity level, which are an artifact of the population model. Monte Carlo simulations, in Sect. 5.3, verify the stability of the feedback model and exhibit no oscillations. Using Monte Carlo simulations, the effect of changing the feedback delay on stability results is analyzed in Sect. 5.4. In Sect. 6, we extend the recruitment process to allow an arbitrary number l of synapses, instead of just two, for improved biological realism.

The model allows choosing the desired recruitment size r_o for representing concepts, and the number of neuroids per localized area N, arbitrarily. Another free parameter of the model is the feedforward excitatory connection density. This is calculated for a given r_o and N according to the definition of recruitment learning. The connection density can also be adjusted with a positive gain constant λ. The choice of λ affects the rate of convergence to the stable equilibrium point. Given these parameters, we can calculate the required lateral inhibitory projection density for stable recruitment in hierarchies.

2 Related work

The problem of recruitment instability was originally explored by Valiant (1994) in the so-called vicinal algorithms in randomly connected graphs. Valiant proposed that hierarchical recruitment with a depth of three to four levels can be achieved if the parameters of the system are chosen appropriately, based on the work of Gerbessiotis (1993, 2003). This study assumed a replication factor of r = 50 neuroids for representing each concept. It was also assumed that the total number of neuroids in the system was large, approaching infinity, which is reasonable given the large number of principal neurons (pyramidal cells) in the cerebral cortex. Gerbessiotis (2003) provided a rigorous formalism of the expected recruited set size in random graphs. Gerbessiotis (1998) also showed that a constructed graph can guarantee the replication factor to be a constant r only for a graph with 3r vertices (neuroids), and not for a graph with an arbitrary size.

Our earlier work (Günay and Maida, 2005) suggested that the instability becomes graver when the total number of neuroids in the network is low (e.g., on the order of hundreds). In our case, networks that are divided into localized areas with a small number of neuroids are interesting because they are better suited for computer simulation. Even though the mammalian brain contains a large number of neurons in total, there are smaller substructures where our analysis could be applied. For instance, cortical areas and microcolumns are possible candidates.

Levy (1996) presented a hippocampal-like model for sequence prediction which used a recurrent network having random asymmetric connectivity. They analyzed parameters statistically, in a manner similar to our work, to find appropriate neural threshold and weight parameters for maintaining stable activity levels (Minai and Levy, 1993, 1994). Their model differs from ours in having excitatory feedback connections, and employing a rate model with discrete time steps, unlike the continuous spiking model used in the present work. Yet, our model lacks effects of spike rates and variable thresholds since these are of secondary importance in our framework (Günay and Maida, 2003). Previous work using statistical analysis on network parameters goes back to Amari (1974). This kind of analysis is closely related to mean-field methods in statistical physics.

Shastri (2001) modeled recruitment learning based on the biological phenomena of long-term potentiation (LTP) and long-term depression (LTD) with idealized I/F neurons. Then, assuming that recruitment learning is employed in the interactions between the entorhinal cortex of the medial temporal lobe and the dentate gyrus of the hippocampal formation, he calculated the probability of finding a sufficient number of recruitment candidates according to the anatomical properties of these structures and a suitable selection of parameters. Shastri also extended the recruitment learning method to: (1) allow multiple redundant connections from each of the input concepts, which makes the method more robust; and (2) allow a recruited neuron to take part in representing other concepts, which increases the concept capacity of a network containing a finite number of units.

Diesmann et al. (1999) and Tetzlaff et al. (2002) have analyzed the boundary conditions under which propagation of synchronized spike packets within a feedforward subgraph is possible. These architectures are composed of feedforward layers of I/F neurons, with each neuron having convergent inputs from the previous layer. Diesmann et al. found that for a synchronized spike packet to propagate undisturbed, there are lower bounds on the size of the packet and the connection density. Tetzlaff et al. found that there are also upper bounds, above which meaningful propagation of activity is no longer possible. Their results were shown for fully connected feedforward networks with stationary background activity. Tetzlaff et al. also point out that feedback from inhibitory populations could potentially stabilize the network to enable controlled synchrony over a larger range of network parameters. Our networks use inhibitory feedback and do not have background activity.

Other research relevant to the controlled propagation of synchronous activity is by van Rossum et al. (2002) and Litvak et al. (2003). The primary issue addressed by this research is whether rate code, coupled with a high-input regime and associated background activity, is a viable representation of the neural code. van Rossum et al., extending earlier results of Shadlen and Newsome (1994, 1998), provide simulations showing that, in the presence of noisy background current, firing rates propagate rapidly and linearly through a deeply layered feedforward network. Furthermore, they find that background noise is essential but does not lead to deterioration of the propagated activity. Litvak et al. are skeptical that rate code in the presence of background activity is possible at all. They point out that van Rossum et al. had to fine-tune their noise parameters to obtain propagation of rate code, and that each neuron fired in an almost periodic manner. Such firing patterns are not commonly observed in biological cortical activity. For the purposes of the present paper, the relevant issue is whether controlled propagation is possible, and on this point the answer is in the affirmative.

2.1 Relation to winner-take-all mechanisms

In the brain, neural firings result in stereotypical action potentials (APs) with constant magnitude. However, the firing times and spike rates carry important information (Gerstner, 1999). In our model, since all neuroids are assumed to fire once or only a few times during the period of analysis, the time-to-first-spike is the most significant variable. In this sense, our model can be considered as a winner-take-all (WTA) mechanism (Feldman and Ballard, 1982) if the winners are chosen according to temporal precedence, similar to the work of Indiveri (2000). Specifically, our model is a k-WTA, because it allows roughly k winners to fire and be recruited, where k is the number of neuroids redundantly representing a concept. It is also a soft WTA, which sorts k real-valued outputs according to the magnitude of the corresponding real-valued inputs, in contrast to a hard WTA whose outputs are binary (Maass, 2000). Regarding the computational power of WTA networks, Maass (2000) showed that networks that use lateral inhibition as a WTA mechanism have the same universal computing power as layered feedforward networks. Using WTA networks in a biological context goes back to Elias and Grossberg (1975).

Shastri (2001) suggested that the set of recruited neurons should be controlled by a soft WTA mechanism, without actually implementing it. Knoblauch and Palm (2001) use a terminating inhibition similar to Shastri's model for ensuring the retrieval of only a single item from a spiking associative network.

There is much previous work on WTA networks in the context of associative attractor networks with symmetrical connections, studied by Hopfield (1982). Recent work includes WTA networks (e.g., Tymoshchuk and Kaszkurewicz, 2003) and k-WTA networks (e.g., Calvert and Marinov, 2000) which feature stable equilibrium points, based on other results on global stability in neural networks (Kaszkurewicz and Bhaya, 1994; Arik, 2002). These differ from our approach because they employ fully recurrently connected networks and use sigmoidal rate-coded neurons. They also require iteration until converging to a stable solution, and some may suffer from local minima. WTAs built with competitive learning networks, such as ours, are superior to Hopfield-type networks because they do not have the local minima problem (Urahama and Nagao, 1995).

3 Components of the model framework

We start with a simple model of the recruitment process at a destination area B, caused by inputs from an initial area A, where A and B are disjoint sets of neuroids. The reason for choosing this two-area model is to assess the stability in the size of the recruited set when the process is repeated. The input R_A, which represents a concept of interest, is some subset of neuroids in area A that projects to area B. Inputs from the neuroids in set R_A in area A^(1), which project to area B^(1), cause a set R_B^(1) of neuroids to be recruited (see Fig. 3b). This process can be repeated by assuming the set R_B^(1) is in an area A^(2) and it is used to recruit a new R_B^(2) set in the next area B^(2). We wish to show that the variance of |R_B^(k)| is sufficiently small after k iterations using our proposed method. We call this the variance problem.

Generalized recruitment Notice that we employ a generalized recruitment learning method to generate an output set from a single input set (see Fig. 3b). This general solution can later be transformed into specific networks requiring multiple inputs. Recruitment learning is originally designed to require two input sets to be activated for creating an output set (Feldman, 1982, 1990; Valiant, 1994). Synchronous activation from two input sets indicating a temporal binding causes the neuroids receiving inputs from both sets to be recruited, as seen in Fig. 3a. Recruitment learning requires the connections between the source and the destination to form a random graph, with the connection probability chosen so that the recruited set is equal in size to the input sets. Here we adjust the probability for the random connections such that a set R_A causes the recruitment of a set R_B of equal size, r_A = |R_A| ≐ |R_B| = r_B.

Fig. 3 Type of recruitment learning employed. a Original recruitment requires two inputs to create an output concept. b We generalize recruitment to require a single set of neuroids to create an output set of neuroids by adjusting the connection probability. The set s_B represents the set of activated synapses in area B

The approximate probability of having exactly one excitatory connection from a neuroid in A to a neuroid in B is given by

$$p^+_{AB} = \sqrt{\frac{\lambda}{N_B(r_o - 1)}}\,, \tag{1}$$

where 0 < λ ≤ N_B(r_o − 1) is the amplification factor, N_B = |B| is the total number of neuroids in B, and r_o = r_A = r_B is the desired size of the neuroid set representing a concept. Equation 1 ensures that the expected size of a recruited set has an equilibrium point r_B = r_o when r_A = r_o and λ = 1. The derivation of this property, and its accuracy, is described in Appendix A.1. However, ensuring the expectation does not solve the variance problem in hierarchical learning.
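As a quick numerical check (ours, not part of the original analysis), Eq. 1 can be evaluated for the parameter values used later in Sect. 3; the function below is a minimal sketch:

```python
import math

def p_ab(lam, n_b=100, r_o=10):
    """Feedforward connection probability of Eq. 1."""
    return math.sqrt(lam / (n_b * (r_o - 1)))

# Values quoted in Sect. 3: lambda = 1 gives ~0.03, lambda = 20 gives ~0.15.
print(p_ab(1.0), p_ab(20.0))  # 0.0333..., 0.1490...
```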

The boost-and-limit mechanism To address the variance problem, we propose a boost-and-limit mechanism that keeps the recruited set size r_B in a stable equilibrium. This mechanism assumes an increased connection density (by manipulating λ) between the source A and destination B to ensure sufficient recruitment at B, and then dynamically limits the recruitment with a negative feedback component (controlled by the current value of r_B) projecting via lateral inhibition within B.

Purpose of temporally distributed delays It is reasonable to propose that the negative feedback is applied via lateral inhibition when recruited neuroids in B fire. This ensures that neuroids are actually recruited before feedback is applied. The negative feedback should inhibit further recruitment candidates after the desired number of neuroids is recruited. Assuming that the initial input is a single synchronous spike volley from the set R_A, that the delays between A and B are uniform, and that there is no background noise, the recruitment process in B becomes instantaneous. This does not leave time for the inhibitory feedback mechanism to sense the controlling quantity r_B due to delays (see Fig. 4a). However, if the recruitment process is temporally dispersed, then the inhibitory feedback strengthens continuously with increasing number of recruited neuroids. This continues until a balance is reached between the input excitation and lateral inhibition to yield a desired recruitment level as in Fig. 4b, assuming the feedback is fast enough. A realistic dispersion of recruited neuroids can be achieved if the connections between A and B have slightly varying delays. We model these delays with a normal distribution having mean µ_AB and standard deviation σ_AB.¹

Fig. 4 The cases with (a) and without (b) noisy delays and their effect on the controllability of the recruitment process

The instantaneous spike rate of activity originating from A and received by excitatory synapses in B, n_AB(t), is given by

$$n_{AB}(t) = r_A\, p^+_{AB}\, N_B\, G(\mu_{AB}, \sigma_{AB}; t)\,, \tag{2}$$

where r_A = |R_A|, the normal distribution is given by the Gaussian kernel $G(\mu, \sigma; t) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{\mu - t}{\sigma}\right)^2\right]$, and p^+_{AB} is defined in Eq. 1.
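For concreteness, Eq. 2 can be evaluated directly; the sketch below (ours) uses the Sect. 3 parameter values and the λ = 1 connection probability:

```python
import math

def gaussian_kernel(mu, sigma, t):
    """Gaussian kernel G(mu, sigma; t) modeling the delay distribution."""
    return math.exp(-0.5 * ((mu - t) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def n_ab(t, r_a=10, p_ab=0.0333, n_b=100, mu_ab=20.0, sigma_ab=5.0):
    """Instantaneous excitatory spike rate received in area B (Eq. 2)."""
    return r_a * p_ab * n_b * gaussian_kernel(mu_ab, sigma_ab, t)

print(n_ab(20.0))  # peak rate of the dispersed volley, at t = mu_AB
```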

A non-leaky spike integrator A spiking neuron model is employed, which causes incoming action potentials at the excitatory synapses to have a prolonged excitatory postsynaptic potential (EPSP) on the somatic membrane component of the neuroids. In this model, we assume the decay constant of the membrane is larger than the transmission delays, or no decay is present at all. Therefore, all incoming spikes to a neuroid cause a constant EPSP, and the EPSPs are accumulated over the course of the recruitment process, which is roughly a few tens of milliseconds (namely, the interval [0, 40] ms in the analyses below).

The recruitment process The main variable of interest r_B(t) is the total number of recruited neuroids in area B until time t.

¹ Variable delays can also be caused by background noise and higher spike thresholds. To model this, the time to reach threshold would be determined not only by incoming spike arrival times, but also by a random variable representing background noise.

According to the definition of recruitment learning (Valiant, 1994), if a neuroid receives two or more spikes at its excitatory synapses, it is recruited and therefore emits an AP. Therefore, the threshold of each neuroid is adjusted to fire at the sum of two input EPSPs. Statistical methods can be used to estimate the number of recruited neuroids under a spatially uniform synapse distribution. It is desired that this number asymptotically approach a maximum level r̄_B, and exhibit a stable equilibrium at a fixed point r_A = r_B = r_o. That is, for perturbations to the input size r_A, the variation of r_B should converge to the fixed point when the process is repeated. This method can be extended to a larger recruitment threshold that requires an arbitrary number of EPSPs before firing (see Sect. 6).

It is assumed in the present model that all effects of the activation caused by excitatory synapses may be disrupted at the soma of a neuroid when an inhibitory synapse is activated, if the effect of the inhibitory synapse is divisive rather than subtractive (Koch et al., 1983). In other words, an inhibitory synapse positioned on the axon hillock (spike initiation zone) of a neuron can act as a veto mechanism for arbitrary excitatory input.

The following sections describe a model to achieve the stable equilibrium described above. In the analyses, the system parameters are chosen as N_B = 100 neuroids, r_o = 10 neuroids, µ_AB = 20 ms, and σ_AB = 5 ms, unless otherwise indicated. The time window for the recruitment process is taken as the period [0, 40] ms. The λ values used range from unity, λ = 1, where p^+_{AB} = 0.03, to various degrees of amplified connectivity (e.g., λ = 20 gives p^+_{AB} = 0.15).

4 The open-loop population model

For comparing the performance of the proposed model, we first look at the open-loop system characteristics without the negative feedback. This scenario is similar to the method originally described by Valiant (1994), except that we use the generalized recruitment method defined above.

Using the rate of feedforward excitatory spikes n_AB defined in (2), we represent the number s_B of activated excitatory synapses in area B as

$$s_B(t) = \int_0^t n_{AB}(\tau)\, d\tau\,. \tag{3}$$

Given s_B(t), the number of recruited neuroids in area B, r_B(t), can be obtained by using a statistical expectation operator. Using the synapse count s and N = N_B, the probability of a neuroid receiving two or more of the excitatory synapses is defined as

$$p^* = \sum_{k=2}^{s} \binom{s}{k} \frac{(N-1)^{s-k}}{N^s} = 1 - \sum_{k=0}^{1} \binom{s}{k} \frac{(N-1)^{s-k}}{N^s} = 1 - \left(\frac{N-1}{N}\right)^{s-1} \frac{N+s-1}{N}\,. \tag{4}$$


Fig. 5 Comparing the population approximation of r_B(s) in Eq. 5 to a Monte Carlo simulation of a network with the same parameters (r_o = 10, N = 100), averaged over 50 runs. The number of recruitments in the open-loop system changes almost linearly with the number of activated excitatory synapses s

Thus, r_B(t) can be given as the expected number of neuroids recruited in B,

$$r_B(s) = p^* N = N - \left(\frac{N-1}{N}\right)^{s-1} (N + s - 1)\,. \tag{5}$$

To test the quality of this population approximation, we compare it to Monte Carlo simulations in Fig. 5. To assess the behavior of r_B(s) with respect to time and changes in the input r_A, one can observe s_B(t), since these two quantities are directly related by Eq. 5.
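A comparison in the spirit of Fig. 5 takes only a few lines. In this sketch (ours), each of the s activated synapses is assumed to land on one of the N neuroids uniformly at random, matching the probability model of Eq. 4:

```python
import random

def r_b_expected(s, n=100):
    """Expected number of recruited neuroids, Eq. 5."""
    return n - ((n - 1) / n) ** (s - 1) * (n + s - 1)

def r_b_monte_carlo(s, n=100, runs=50):
    """Empirical count of neuroids receiving >= 2 of s uniformly placed synapses."""
    total = 0
    for _ in range(runs):
        hits = [0] * n
        for _ in range(s):
            hits[random.randrange(n)] += 1
        total += sum(1 for h in hits if h >= 2)
    return total / runs

for s in (20, 40, 60):
    print(s, r_b_expected(s), r_b_monte_carlo(s))  # expected vs. empirical
```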

We can estimate the upper limit asymptote, or steady-state value, of s_B by

$$\bar{s}_B = \lim_{t\to\infty} s_B(t) \simeq r_A\, p^+_{AB}\, N\,, \tag{6}$$

since $\lim_{t\to\infty} \int_0^t G(\mu_{AB}, \sigma_{AB}; \tau)\, d\tau \simeq 1$ when µ_AB > 2σ_AB, and the term r_A p^+_{AB} N is constant. Using this, the final expected number of recruited neuroids r̄_B, as a function of the number of activated input neuroids r_A, becomes

$$\bar{r}_B(r_A) = N - \left(\frac{N-1}{N}\right)^{r_A p^+_{AB} N - 1} \left(N + r_A p^+_{AB} N - 1\right). \tag{7}$$

r̄_B is the maximum of r_B(t) for this recruitment process, since all the spikes initiated at area A have reached area B.

If we want to find the value of the amplification factor λ for a desired fixed point r̄_B = r_o, there is no simple analytic solution of Eq. 7. Therefore, we define the function

$$g(\lambda) = N - \left(\frac{N-1}{N}\right)^{r_A p^+_{AB} N - 1} \left(N + r_A p^+_{AB} N - 1\right) - \bar{r}_B$$

and numerically find the λ value that makes g(λ) = 0. For r_o = 10, we get λ = 2.5681.²
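The root of g(λ) can be found with a standard bracketing solver; the sketch below (ours) reproduces the quoted value for r_o = 10 and N = 100:

```python
import math
from scipy.optimize import brentq

N, RO = 100, 10

def g(lam, r_a=RO, r_b_target=RO):
    """g(lambda) of Sect. 4: steady-state recruitment of Eq. 7 minus the target."""
    s = r_a * math.sqrt(lam / (N * (RO - 1))) * N     # s = r_A p+_AB N (Eq. 6)
    return N - ((N - 1) / N) ** (s - 1) * (N + s - 1) - r_b_target

print(brentq(g, 0.1, 100.0))  # ~2.5681
```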

Once λ is known, the return map r̄_B(r_A) in Eq. 7 can be plotted, as seen in Fig. 6. Numerical integrations of r_B(t) obtained from Eqs. 5 and 3 are plotted in Fig. 7 for three choices of r_A, which verifies Fig. 6. As expected, perturbations to the value of r_A cause r_B to diverge from the desired value r_o = 10, which is the reason for recruitment instability.

Fig. 6 Return map of the change in the r value from r_A to r_B of the open-loop system according to Eq. 7. Note that the chosen fixed point with the parameters indicated in the plot is unstable

Fig. 7 Open-loop simulation for the change in the size of the recruited set r_B with different selections for the size of the input set r_A

² Earlier, it was claimed for the open-loop case that λ = 1 should give the desired fixed point. However, here λ compensates for the O(p³) term ignored in Eq. 1, discussed in Appendix A.1.

5 The closed-loop inhibitory feedback population model

We introduce a model which has two populations of synapses, s_P and s_N, for excitatory and inhibitory synapses, respectively. The block schema of this closed-loop model is given in Fig. 8. The purpose of the delayed inhibitory feedback is to control the recruitment instability observed in the open-loop system.

The model is given by the following equations. The number of activated excitatory synapses is given by

$$s_P(t) = \int_0^t n_{AB}(\tau)\, d\tau\,, \tag{8}$$

similar to s_B in Eq. 3. The number of activated inhibitory synapses caused by the feedback is given by

$$s_N(t) = p_{BB}\, N\, r_B(t - \mu_{BB})\,, \tag{9}$$

where µ_BB is the feedback delay, p_BB is the lateral inhibitory connectivity parameter, and N ≡ N_B for simplicity. It is assumed that the inhibitory synapses have a multiplicative effect and thus can veto all excitatory inputs to the neuroid.

Fig. 8 Block schema of the closed-loop synaptic population model. In the diagram, r_A is the neuroid count of the input R_A in [neuroids]; s_P, s_N are the activated excitatory and inhibitory synapse counts in B, respectively, in [synapses]; and r_B is the recruited/fired neuroid count in B in [neuroids]. Note that the input r_A is a scalar indicating the magnitude of a one-time synchronous input to the system, whereas the other quantities are functions of time t

The expected number of recruited neuroids r_B can be calculated probabilistically, similar to the method followed in Sect. 4. The probability of having one or more inhibitory synapses on a neuron is

$$p^*_N = \frac{N^{s_N} - (N-1)^{s_N}}{N^{s_N}} = 1 - \left(\frac{N-1}{N}\right)^{s_N}.$$

Then, the expected number of recruited neuroids r_B can be calculated as that of having two or more activated excitatory synapses and no activated inhibitory synapses,

$$r_B = p^*(1 - p^*_N)N\,, \tag{10}$$

where p^* is defined in Eq. 4. Expanding, it becomes

$$r_B(t) = \left[N - \left(\frac{N-1}{N}\right)^{s_P(t)-1} \left(N + s_P(t) - 1\right)\right] \left(\frac{N-1}{N}\right)^{s_N(t)}. \tag{11}$$

As before, the initial conditions are taken as r_B(t) = 0 for t ≤ 0.

To allow further analysis, Eq. 11 can be written in the recursive form

$$r_B(t) = \left[N - \left(\frac{N-1}{N}\right)^{s_P(t)-1} \left(N + s_P(t) - 1\right)\right] \left(\frac{N-1}{N}\right)^{p_{BB} N r_B(t - \mu_{BB})}. \tag{12}$$

If the value of r_B is projected to its steady state, its behavior can be observed by changing r_A to find the return map of the iterative hierarchical recruitment process as in Sect. 4. Assuming T is sufficiently large, we define the steady-state value of r_B(t) as

$$\bar{r}_B = r_B(t)\big|_{t>T}\,,$$

and the steady-state value of s_P(t) as s̄_P = s̄_B ≃ r_A p^+_{AB} N from Eq. 6. First, we define

$$\chi(\lambda) = N - \left(\frac{N-1}{N}\right)^{r_A p^+_{AB} N - 1} \left(N + r_A p^+_{AB} N - 1\right).$$

Then, we can express the steady state of Eq. 12 as

$$\bar{r}_B = \chi \left(\frac{N-1}{N}\right)^{p_{BB} N \bar{r}_B} \tag{13}$$

for t > T + µ_BB.

A corollary of Eq. 13 is that the lateral inhibitory connectivity parameter p_BB can be calculated in terms of the other network parameters for a given fixed point r_o. To calculate p_BB and r̄_B numerically, we define the function

$$h(p_{BB}, \bar{r}_B) = \chi \left(\frac{N-1}{N}\right)^{p_{BB} N \bar{r}_B} - \bar{r}_B\,, \tag{14}$$

and look for its zeros. p_BB can be found for r̄_B = r_o and given λ. For λ = 20 and r_o = 10, we get p_BB = 0.14739.
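The zero of h can be found the same way; this sketch (ours) reproduces p_BB ≈ 0.14739 for λ = 20 and then iterates the return map of Eq. 13 to illustrate the stable fixed point:

```python
import math
from scipy.optimize import brentq

N, RO, LAM = 100, 10, 20.0
P_AB = math.sqrt(LAM / (N * (RO - 1)))   # Eq. 1

def chi(r_a):
    """Open-loop steady-state recruitment (the chi term) for input size r_A."""
    s = r_a * P_AB * N
    return N - ((N - 1) / N) ** (s - 1) * (N + s - 1)

def h(p_bb, r_a=RO, r_b=RO):
    """h(p_BB, r_B) of Eq. 14 at the fixed point r_A = r_B = r_o."""
    return chi(r_a) * ((N - 1) / N) ** (p_bb * N * r_b) - r_b

p_bb = brentq(h, 0.0, 1.0)
print(p_bb)  # ~0.14739

# Return map: solve Eq. 13 for r_B given r_A, then feed the result back in.
def r_b_of(r_a):
    return brentq(lambda r: chi(r_a) * ((N - 1) / N) ** (p_bb * N * r) - r, 0.0, N)

r = 15.0  # perturbed input size
for _ in range(5):
    r = r_b_of(r)
    print(r)  # approaches r_o = 10 over iterations
```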

Plots of Eq. 13 in Fig. 9 confirm the behavior of r̄_B and show the effects of several parameters on the convergence speed. These results show that the system behaves as desired by exhibiting a stable equilibrium.

5.1 Oscillations in activity levels

Numeric integration of the model Eqs. 8–11 indicates that the assumed steady state is not reached, due to undesired oscillation of the recruitment level r_B. The oscillatory behavior in the system is expected from control system analysis, as delays in feedback often result in instabilities. However, conventional control system tools for analysis of oscillations do not apply to our non-linear model (Phillips and Harbor, 1991).

The oscillations appear because r_B can change infinitely fast; i.e., the rate of change of recruitment, dr_B/dt, has no upper bound. To stop the oscillations, it is reasonable to suggest that the model of a physical system must have an upper bound for a parameter like this. A natural candidate for imposing an upper bound is a low-pass filter, which slows down the rate of change.

Fig. 9 Plots of the return map of r̄_B(r_A) for iterative application of the recruitment described here. Note that the indicated fixed point at r̄_B(10) = 10 is stable. Increasing the amplification factor λ, which affects the value of p^+_{AB} from Eq. 1, results in a more flattened curve, and thus a faster convergence to a stable fixed point

5.2 Low-pass filter

In order to prevent oscillations, we tried, without success, the methods of using a decaying inhibitory synapse population as a low-pass filter, and of applying excitatory feedback to balance the unstable equilibrium created by the inhibitory feedback in previous work (Günay, 2003). Applying a non-decaying low-pass filter on r_B, as seen in the block diagram of Fig. 10, stops the oscillations. The low-pass filter attenuates high-frequency components of the original r_B signal. The cut-off frequency is inversely proportional to the RC constant of the low-pass circuit shown in the figure.

Fig. 10 Block schema of the low-pass filter on r_B for preventing oscillations in the dipolar synaptic population model

Fig. 11 Effect of a low-pass filter on the oscillations. The plot features the population model, with the uniform-delay feedback case and a low-pass filter on r_B. It can be seen that the gain of the oscillation in the original model (a) is eliminated in the case when the filter is applied (b)

The dynamics of the circuit is represented by the differential equation

$$\frac{d\tilde{r}_B(t)}{dt} = \frac{r_B(t) - \tilde{r}_B(t)}{\tau}\,,$$

where τ = RC is the time constant of the circuit, r_B is the recruitment level as previously defined by Eq. 11, and r̃_B is the filtered output. Simulations plotted in Fig. 11 indicate that the circuit attenuates the unwanted oscillations.
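A forward-Euler integration of the filtered closed-loop model (Eqs. 8, 9 and 11) illustrates the smoothing. This is our sketch; the step size and τ = 1 ms are illustrative choices rather than values from the paper, and we assume the feedback taps the filtered level, consistent with the filter placement in Fig. 10:

```python
import math

N, RO, LAM = 100, 10, 20.0
P_AB = math.sqrt(LAM / (N * (RO - 1)))          # Eq. 1
P_BB, MU_AB, SIG_AB, MU_BB = 0.14739, 20.0, 5.0, 1.0
DT, TAU, R_A = 0.01, 1.0, 10                    # ms; DT and TAU are assumptions

def phi(x):
    """Standard normal CDF, used to integrate the Gaussian delay kernel."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

steps = int(40.0 / DT)
r_hist = [0.0] * steps       # filtered recruitment level over time
r_tilde = 0.0
for i in range(steps):
    t = i * DT
    s_p = R_A * P_AB * N * phi((t - MU_AB) / SIG_AB)           # Eq. 8 integrated
    delayed = r_hist[i - int(MU_BB / DT)] if t >= MU_BB else 0.0
    s_n = P_BB * N * delayed                                    # Eq. 9
    r_b = (N - ((N - 1) / N) ** (s_p - 1) * (N + s_p - 1)) \
          * ((N - 1) / N) ** s_n                                # Eq. 11
    r_tilde += DT * (r_b - r_tilde) / TAU                       # low-pass filter
    r_hist[i] = r_tilde

print(r_hist[-1])  # should settle near r_o = 10 without ringing
```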

5.3 Monte Carlo simulations

Finally, we ran Monte Carlo simulations for both the open- and closed-loop cases. The results are given in Fig. 12. The simulations verify both that the size of recruitment is unstable in the open-loop case, and that it is stable in the closed-loop case. However, the closed-loop simulations indicate that the calculated value of p_BB from Eq. 14 for the fixed point r_o = 10 gives a higher than expected steady-state value. This is due to the simplification of the population model in modeling the time course of the recruitment process. The population model tends to oscillate and find its optimal steady state; however, in real neural simulations such as the Monte Carlo simulations, the number of recruited neuroids can only increase and thus will always provide a slightly higher number than expected. Even though we acknowledge that this indicates a possible point for future improvement in the model, it does not affect the results on stability. In order to achieve the desired fixed point, p_BB needs to be chosen higher than originally calculated with Eq. 14.

Fig. 12 Monte Carlo simulations of networks with parameters emulating the open-loop (left) and closed-loop (right) models, averaged over 50 runs


Fig. 13 Monte Carlo simulations of networks with different feedback delays, averaged over 50 runs. The three r_A values are the same as in the legend of Fig. 12

5.4 The effect of feedback inhibition speed on stability

The above stability analysis, based on the return map, does not address the effect of feedback delays, since it only uses the steady-state values. To test the model's response with varying feedback delays, we ran the Monte Carlo simulations shown in Fig. 13. The figure shows networks with 1, 3 and 5 ms feedback delays µ_BB. As the feedback delay increases, both the fixed-point value of r_B, when r_A = r_o = 10, and the speed of convergence to the stable point change. For µ_BB = 5 ms, the network is only asymptotically stable. For larger delays, the controlling feedback becomes too slow to drive the size of the recruited set towards the fixed point.

6 Recruitment caused by l or more synapses

So far, we considered only two or more synapses to be sufficient for recruitment. Considering that real neurons receive on the order of 10,000 synapses and a constant input of background noise, and that the effects of individual synaptic inputs are small, two inputs may not be realistic to fire a neuron. To make a more realistic model, we extend the population model definition to allow an arbitrary number l of synaptic inputs to cause a neuroid to fire and be recruited.

6.1 Feedforward excitation

In a population of N neurons, the probability of having l or more active synapses on a neuron, out of s active excitatory synapses, is

$$p^* = \sum_{k=l}^{s} \binom{s}{k} \frac{(N-1)^{s-k}}{N^s} = 1 - \sum_{k=0}^{l-1} \binom{s}{k} \frac{(N-1)^{s-k}}{N^s}\,. \tag{15}$$

Fig. 14 Open-loop simulation for the change in the size of the recruited set r_B with different selections for the size of the input set r_A, for l = 10 inputs

Fig. 15 Return map of r̄_B(r_A) in the open-loop model for l = 10 inputs

Then, the expected number of recruited neurons becomes

$$r_B(s, l) = p^* N = N - \sum_{k=0}^{l-1} \binom{s}{k} \frac{(N-1)^{s-k}}{N^{s-1}}\,.$$


Fig. 16 Plots of the return map of r̄_B(r_A) for iterative application of the recruitment described here. The case with l = 2 and ζ = 20 replicates results from Fig. 9. When l = 10, r_B is stable only if the combined multiplicity factor ζ, which affects the value of f_AB from Eq. 18, is sufficiently large

We rearrange it to reduce exponents for simulations:

$$r_B(s, l) = N - \sum_{k=0}^{l-1} \binom{s}{k} \left(\frac{N-1}{N}\right)^{s-k} \frac{1}{N^{k-1}}\,. \tag{16}$$
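Equation 16 can be evaluated for real-valued s using the gamma-based generalized binomial coefficient; a minimal sketch (ours):

```python
from scipy.special import binom  # accepts real-valued s

def r_b(s, l, n=100):
    """Expected number of recruited neuroids with an l-synapse threshold (Eq. 16)."""
    return n - sum(binom(s, k) * ((n - 1) / n) ** (s - k) / n ** (k - 1)
                   for k in range(l))

# s ~ 53.4 is the open-loop steady-state synapse count at the Sect. 4 fixed point.
print(r_b(53.4, 2))  # ~10, agreeing with the two-synapse case
```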

Here, we use a different formula for s,

$$s(t) = r_A\, f_{AB}\, N_B \int_0^t G(\mu_{AB}, \sigma_{AB}; \tau)\, d\tau\,, \tag{17}$$

where the normal distribution is given by the Gaussian kernel $G(\mu, \sigma; t) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{\mu - t}{\sigma}\right)^2\right]$, and f_AB is the projection factor from area A to B. The latter is defined as

$$f_{AB} = \sqrt{\varphi}\, p_{AB}\,, \tag{18}$$

where φ = (ρν)², with ρ the average number of synapses between two neurons and ν the average number of spikes in a synchronized spike train. In the following figures we use the combined multiplicity factor ζ = φλ to indicate variation from the original projection factor.

Thus, the steady-state value of s becomes

$$\bar{s} = r_A\, f_{AB}\, N\,. \tag{19}$$

This can be used to get the steady-state value of r_B as r̄_B = r_B(s̄, l). To find appropriate values of ζ for a fixed point r̄_B = r_o, we define the function

$$g_{ff}(\zeta, \bar{r}_B) = r_B(\bar{s}, l) - \bar{r}_B \tag{20}$$

and find its zeros numerically. Given r_o = 10, we find the following ζ values for each l:

l = 2 ⇒ ζ = 2.56
l = 3 ⇒ ζ = 11.021
l = 10 ⇒ ζ = 349.91
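These ζ values can be reproduced by solving g_ff(ζ, r_o) = 0 numerically. In this sketch (ours), s̄ is expressed via Eq. 19 using f_AB = √(ζ/(N(r_o − 1))), which follows from Eqs. 1 and 18 with ζ = φλ:

```python
import math
from scipy.optimize import brentq
from scipy.special import binom

N, RO, R_A = 100, 10, 10

def r_b(s, l):
    """Open-loop expected recruitment with an l-synapse threshold (Eq. 16)."""
    return N - sum(binom(s, k) * ((N - 1) / N) ** (s - k) / N ** (k - 1)
                   for k in range(l))

def g_ff(zeta, l):
    """g_ff of Eq. 20 at the fixed point, with the steady-state s of Eq. 19."""
    s = R_A * math.sqrt(zeta / (N * (RO - 1))) * N
    return r_b(s, l) - RO

for l in (2, 3, 10):
    print(l, brentq(lambda z, l=l: g_ff(z, l), 1.0, 5000.0))
# prints roughly 2.56, 11.02 and 349.9, matching the values above
```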

The time response of r_B in Eq. 16 is given in Fig. 14, and the return map is given in Fig. 15.

6.2 Adding inhibitory feedback

The r_B with inhibitory feedback is defined by substituting Eq. 15 into Eq. 10,

$$r_B(s_P, s_N, l) = p^*(1 - p^*_N)N = \left(N - \sum_{k=0}^{l-1} \binom{s_P}{k} \left(\frac{N-1}{N}\right)^{s_P - k} \frac{1}{N^{k-1}}\right) \left(\frac{N-1}{N}\right)^{s_N}.$$

The steady-state values are defined as

$$\bar{r}_B(l) = r_B(\bar{s}_P, \bar{s}_N, l)\,,$$

where

$$\bar{s}_N = p_{BB}\, N\, \bar{r}_B$$

and s̄_P = s̄ as in Eq. 19. Return-map simulation results with varying ζ and l values are given in Fig. 16. The p_BB values are calculated according to the selection of the ζ and l parameters.
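The p_BB that places the closed-loop steady state at r_o can then be solved for a chosen ζ and l. This sketch (ours) checks the l = 2, ζ = 20 case, which per Fig. 16 replicates the two-input results of Sect. 5:

```python
import math
from scipy.optimize import brentq
from scipy.special import binom

N, RO, R_A = 100, 10, 10

def r_b_open(s, l):
    """Open-loop expected recruitment with an l-synapse threshold (Eq. 16)."""
    return N - sum(binom(s, k) * ((N - 1) / N) ** (s - k) / N ** (k - 1)
                   for k in range(l))

def p_bb_for(zeta, l):
    """p_BB putting the closed-loop steady state at r_B = r_o for given zeta, l."""
    s_p = R_A * math.sqrt(zeta / (N * (RO - 1))) * N   # Eq. 19
    gap = lambda p: r_b_open(s_p, l) * ((N - 1) / N) ** (p * N * RO) - RO
    return brentq(gap, 0.0, 1.0)

print(p_bb_for(20.0, 2))  # ~0.14739, as in Sect. 5
```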

6.3 Discussion

With Figs. 15 and 16, we verified that in the case of requiring l inputs, the population model behaves similarly to the case with two inputs. The time course of activity in the feedback case is not shown because of the oscillations. Also, Monte Carlo simulation results are not shown, since the behavior was already explored in the two-input case.

The l-input model is desirable because it exhibits a higher threshold. This makes the neuroids more accurate representations of real neurons and also makes them noise resilient. This model can potentially be used when background noise is present, whereas the two-input model would easily fail.


7 Conclusions

The research used a stochastic population approach to model the dynamics of the recruitment process. In particular, this paper explored the boost-and-limit approach (Günay, 2003) as a mechanism to maintain the stability of the size of recruitment sets as they cross cortical areas iteratively. This approach primarily proposes that temporally dispersed spike times and the use of inhibitory feedback can help stabilize the size of propagating recruitment sets. Other manipulations studied were the inter-area excitatory connectivity, within-area inhibitory connectivity, recruitment threshold, and feedback delay. It was found that if, in the closed-loop model, inhibitory feedback was fast enough, then the size of such recruited sets could be maintained at a desired size, r_o, in the presence of perturbations to the input set size, thereby solving the variance problem in recruitment hierarchies. Small networks were studied, consisting of 100 neurons per layer with the desired recruitment set size held to 10 neurons. For the open-loop case of only feedforward propagation, we established, using both the stochastic population model and Monte Carlo methods, that the recruitment set size is unstable. Then, as the main result of this paper, we showed that stability can be achieved by the use of lateral inhibitory feedback in the output layer, using both the population model and Monte Carlo simulations.

Having achieved stability, the proposed model allows predicting system parameters that enable stable recruitment. It allows estimating the lateral inhibitory feedback connectivity parameter p_BB in terms of the free parameters of the network. These free parameters are: the replication factor r_o, the number of neuroids N_B at a localized area, and the amplification factor λ for the feedforward excitatory connectivity between areas. The model also shows how the choice of λ affects the speed of convergence to a desired replication factor. The speed of inhibitory feedback is important in achieving stability. For the example network, stability could not be achieved if the feedback delay was longer than 5 ms.

Ultimately, this approach serves to maintain controlled propagation of synchrony across layers. Existing studies of controlled synfire activity use layered feedforward networks in the presence of background noise, but without lateral inhibition (Diesmann et al., 1999; Tetzlaff et al., 2002; Litvak et al., 2003). In our networks, the temporally dispersed transmission delays tend to disrupt the synchrony of inputs to later layers. That is, although the output of layer A is synchronized, the output of layer B is less synchronized because of the temporally dispersed transmission delays. Presumably, the window of synchrony widens at each stage of the hierarchy. Spiking neurons, however, are known to act as band-pass filters because they only fire when incoming spikes are within a short time interval (Diesmann et al., 1999). Thus, the effects of spikes which stray too far from the center of the synchrony window are lost. This stops the synchrony window from widening infinitely. Even though the lost spikes may decrease the size of the spike packet, we showed that for variations in input the output size will always converge to the desired fixed point as the process is repeated. Thus, because of resynchronization effects, temporally dispersed delays may not disrupt recruitment of hierarchies. Additional studies are needed to verify this issue.

It is known from other studies (Diesmann et al., 1999; Tetzlaff et al., 2002) that there are lower and upper bounds on the number of neurons or connection density in layers of feedforward networks to propagate synfire chains in the presence of noise. Our networks were not studied in the presence of background noise. However, we can predict that the network will operate properly under background noise because of its convergence property. We extended our model to require an arbitrary number of spikes (such as 10–40) before reaching firing threshold. This demonstrated that the model can exhibit noise resilience that was not possible with the model which required only two spikes to fire. Further simulations are required to verify the conditions of noise resilience.

In summary, further research needs to: (1) study noise resilience of spike propagation; (2) explore larger-scale simulations to determine the robustness of our results in those contexts; and (3) improve the quality of population estimations to match the results of Monte Carlo simulations more closely.

Acknowledgements The authors would like to thank Anca Doloc-Mihu, Benjamin Rowland, and Mehmet Kerem Muezzinoglu for helpful comments and suggestions. We thank the anonymous referee for pointing out relevant work and useful comments. This work was partially funded by the University Doctoral Fellowship of the University of Louisiana at Lafayette granted to author C.G.

A Appendix

A.1 Connection probability in the generalized recruitment scenario

Here, we show the derivation of Eq. 1, which is the required connection probability p^+_{AB} of a neuroid in area A to a neuroid in area B as shown in Fig. 3. This derivation is adapted from the probability calculation for recruitment of conjunctions of two inputs by Valiant (1994). We assume the size of the input set is |R_A| = r_o, which is the desired replication factor. We use p ≡ p^+_{AB} and r ≡ r_o here for simplicity.

First, we look at the probability of a neuroid in B being recruited. Recruitment requires two connections from active sources. Since we only have one input set R_A, a recruitment candidate in B needs to have two connections from neuroids in set R_A. The probability of the candidate being in the projection set of R_A can be calculated from its probability of not having connections to any neuroid in set R_A, (1 − p)^r. Thus, its probability of being in the projection set becomes 1 − (1 − p)^r. Since we require the candidate neuroid to receive projections from at least two neuroids in R_A, we can define the probability of a neuroid in B being recruited as

$$p^* = \left(1 - (1-p)^r\right)\left(1 - (1-p)^{r-1}\right). \tag{21}$$

Expanding Eq. 21, we get

$$p^* = 1 - (1-p)^{r-1}(2 - p) + (1-p)^{2r-1}\,,$$


where the power terms can be expanded using the binomial theorem. Assuming p ≪ 1, we can ignore terms with p^k for k ≥ 3:

$$p^* = 1 - \left(1 - (r-1)p + \frac{(r-1)(r-2)}{2}p^2 + O(p^3)\right)(2 - p) + \left(1 - (2r-1)p + \frac{(2r-1)(2r-2)}{2}p^2 + O(p^3)\right).$$

Simplifying, we get

$$p^* = -rp^2 + r^2p^2 + O(p^3)\,. \tag{22}$$

The desired property of recruitment is to yield an output set with the same size as the input set. The expected size of the recruited set R_B is

$$|R_B| = p^* N \simeq (-rp^2 + r^2p^2)N\,.$$

We aim to satisfy the equality p^* N = r. Solving for p, we get

$$p \simeq \frac{1}{\sqrt{N(r-1)}}$$

with an error decreasing with O(p³) as N increases. Adding an amplification factor √λ similar to Valiant's proposals yields the definition in Eq. 1.
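A numeric check of this approximation (our sketch): evaluating the exact p* of Eq. 21 at the p of Eq. 1 with λ = 1 gives p*N noticeably below r, which is the O(p³) error that the amplification factor compensates for (cf. footnote 2):

```python
import math

N, R = 100, 10
p = 1.0 / math.sqrt(N * (R - 1))                        # Eq. 1 with lambda = 1
p_star = (1 - (1 - p) ** R) * (1 - (1 - p) ** (R - 1))  # exact Eq. 21
print(p_star * N)             # ~7.56, rather than the desired r = 10
print((R**2 - R) * p**2 * N)  # second-order approximation of Eq. 22: exactly 10.0
```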

References

Amari S (1974) A method of statistical neurodynamics. Kybernetik 14:201–215

Arik S (2002) A note on the global stability of dynamical neural networks. IEEE Trans Circ Syst I: Fundam Theory Appl 49(4):502–504

Calvert BD, Marinov CA (2000) Another k-winners-take-all analog neural network. IEEE Trans Neural Netw 11(4):829–838

Diesmann M, Gewaltig M-O, Aertsen A (1999) Stable propagation of synchronous spiking in cortical neural networks. Nature 402:529–533

Elias SA, Grossberg S (1975) Pattern formation, contrast control, and oscillations in the short term memory of shunting on-center off-surround networks. Biol Cybern 20:69–98

Feldman JA (1982) Dynamic connections in neural networks. Biol Cybern 46:27–39

Feldman JA (1990) Computational constraints on higher neural representations. In: Schwartz EL (ed) Computational neuroscience, system development foundation benchmark series, chap 13. MIT Press, Cambridge, pp 163–178

Feldman JA, Ballard DH (1982) Connectionist models and their properties. Cogn Sci 6:205–254

Gerbessiotis AV (1993) Topics in parallel and distributed computation. PhD thesis, The Division of Applied Sciences, Harvard University, Cambridge, Massachusetts

Gerbessiotis AV (1998) A graph-theoretic result for a model of neural computation. Discrete Appl Math 82:257–262

Gerbessiotis AV (2003) Random graphs in a neural computation model. Int J Comput Math 80:689–707

Gerstner W (1999) Spiking neurons. In: Maass W, Bishop CM (eds) Pulsed neural networks, chap 1. MIT Press, Cambridge, pp 3–54

Günay C (2003) Hierarchical learning of conjunctive concepts in spiking neural networks. PhD thesis, Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504-4330, USA

Günay C, Maida AS (2001) The required measures of phase segregation in distributed cortical processing. In: Proceedings of the international joint conference on neural networks, Washington, DC, vol 1, pp 290–295

Günay C, Maida AS (2003) Temporal binding as an inducer for connectionist recruitment learning over delayed lines. Neural Netw 16(5–6):593–600

Günay C, Maida AS (2005) Using temporal binding for hierarchical recruitment of conjunctive concepts over delayed lines. Neurocomputing (in print)

Hinton GE, McClelland JL, Rumelhart DE (1986) Distributed representations. In: Rumelhart DE, McClelland JL, the PDP Research Group (eds) Parallel distributed processing: explorations in the microstructure of cognition, Foundations, vol 1. MIT Press, Cambridge, pp 77–109

Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational properties. Proc Natl Acad Sci 79:2554–2558

Indiveri G (2000) Modeling selective attention using a neuromorphic analog VLSI device. Neural Comput 12(12):2857–2880

Kaszkurewicz E, Bhaya A (1994) On a class of globally stable neural circuits. IEEE Trans Circuits Syst I: Fundam Theory Appl 41(2):171–174

Knoblauch A, Palm G (2001) Pattern separation and synchronization in spiking associative memories and visual areas. Neural Netw 14:763–780

Koch C, Poggio T, Torre V (1983) Nonlinear interactions in a dendritic tree: localization, timing and role in information processing. Proc Natl Acad Sci 80:2799–2802

Levy WB (1996) A sequence predicting CA3 is a flexible associator that learns and uses context to solve hippocampal-like tasks. Hippocampus 6:576–590

Litvak V, Sompolinsky H, Segev I, Abeles M (2003) On the transmission of rate codes in long feedforward networks with excitatory-inhibitory balance. J Neurosci 23(7):3006–3015

Maass W (2000) On the computational power of winner-take-all. Neural Comput 12(11):2519–2536

Minai AA, Levy WB (1993) The dynamics of sparse random networks. Biol Cybern 70:177–187

Minai AA, Levy WB (1994) Setting the activity level in sparse random networks. Neural Comput 6:85–99

O'Reilly RC, Busby RS, Soto R (2003) Three forms of binding and their neural substrates: alternatives to temporal synchrony. In: Cleeremans A (ed) The unity of consciousness: binding, integration and dissociation. Oxford University Press, Oxford

Page M (2000) Connectionist modelling in psychology: a localist manifesto. Behav Brain Sci 23(4):443–467

Phillips CL, Harbor RD (1991) Feedback control systems, 2nd edn. Prentice Hall, New Jersey

Shadlen MN, Newsome WT (1994) Noise, neural codes and cortical organization. Curr Opin Neurobiol 4:569–579

Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci 18(10):3870–3896

Shastri L (1999) Recruitment of binding and binding-error detector circuits via long-term potentiation. Neurocomputing 26–27:865–874

Shastri L (2001) Biological grounding of recruitment learning and vicinal algorithms in long-term potentiation. In: Wermter S, Austin J, Willshaw DJ (eds) Emergent neural computational architectures based on neuroscience, Lecture Notes in Computer Science, vol 2036. Springer, Berlin Heidelberg New York, pp 348–367

Shastri L, Ajjanagadde V (1993) From simple associations to systematic reasoning: a connectionist representation of rules, variables, and dynamic bindings using temporal synchrony. Behav Brain Sci 16(3):417–451

Tetzlaff T, Geisel T, Diesmann M (2002) The ground state of cortical feed-forward networks. Neurocomputing 44–46:673–678

Tymoshchuk P, Kaszkurewicz E (2003) A winner-take-all circuit based on second order Hopfield neural networks as building blocks. In: Hasselmo M, Wunsch DC (eds) Proceedings of the international joint conference on neural networks. Portland, Oregon, pp 891–896

Urahama K, Nagao T (1995) K-winners-take-all circuit with O(N) complexity. IEEE Trans Neural Netw 6(3):776–778

Valiant LG (1994) Circuits of the mind. Oxford University Press, Oxford

Valiant LG (2000) A neuroidal architecture for cognitive computation. J ACM 47(5):854–882

van Rossum M, Turrigiano G, Nelson S (2002) Fast propagation of firing rates through layered networks of noisy neurons. J Neurosci 22(5):1956–1966

von der Malsburg C (1994) The correlation theory of brain function. In: Domany E, van Hemmen JL, Schulten K (eds) Models of neural networks, vol 2: Physics of neural networks, chap 2. Springer, Berlin Heidelberg New York, pp 95–120. Originally appeared as a Technical Report at the Max-Planck Institute for Biophysical Chemistry, Göttingen, 1981

von der Malsburg C (1995) Binding in models of perception and brain function. Curr Opin Neurobiol 5:520–526