Biophysics Lecture (Braun) - Neural Networks

Overview:
- Anatomy of Neuronal Networks
- Formal Neural Networks
- Are they realistic?
- Oscillations and Phase Locking
- Mapping problem: Kohonen Networks

Neural Networks

Nice books to start reading: e.g. Manfred Spitzer, Geist im Netz.

Brick-like textbooks:
- From Neuron to Brain, John G. Nicholls, Bruce G. Wallace, Paul A. Fuchs, A. Robert Martin
- Principles of Neural Science, Eric R. Kandel, James H. Schwartz, Thomas M. Jessell
- Models of Neural Networks I-III, Domany, van Hemmen, Schulten, Springer 1991/1995


Neuroanatomy

Most of the brain does NOT consist of neurons: there are about 10-50 times more glia (Greek: "glue") cells in the central nervous tissue of vertebrates.

The function of glia is not understood in full detail, but their active role in signal transduction in the brain is probably small.

Electrical and chemical synapses allow for excitatory or inhibitory stimulation. They most often sit at the dendritic tree, but some also at the surface of the neuron.

In many neuron types, these inputs can trigger an action potential in the axon, which makes connections with other dendrites.

However, it was only recently found that action potentials also travel back into the dendritic tree, a crucial prerequisite for learning.

From: Principles of Neural Science, Kandel, Schwartz, Jessell, 1991


Neuroanatomy

The brain consists of about 10^11 neurons, divided into approx. 10,000 cell types with highly diverse functions.

The cortex, the outer "skin" of the brain, appears to be very similar all over the brain; only more detailed analysis reveals specialization in different regions of the cortex.

Most of the brain volume consists of "wires": the white matter of the brain.

From: Principles of Neural Science, Kandel, Schwartz, Jessell, 1991


Cortex Layers

The cortex is organized into layers, which are numbered from I to VI.

Different types of cells are found in the layers. The layer structure differs for different parts of the brain.


I. Molecular layer: few scattered neurons, extensions of apical dendrites and horizontally oriented axons.

II. External granular layer: small pyramidal neurons and numerous stellate neurons.

III. External pyramidal layer: predominantly small and medium-sized pyramidal neurons and non-pyramidal neurons.

Layers I-III are the main target, and layer III the principal source, of intercortical connections.

From: Principles of Neural Science, Kandel, Schwartz, Jessell, 1991

[Figure: connections between cortex, thalamus, and motor areas]

IV. Internal granular layer: stellate and pyramidal neurons. Main target of input from the thalamus.

V. Internal pyramidal layer: large pyramidal neurons and interneurons. Source of motor-related signals.

VI. Multiform layer: contains few large pyramidal neurons and many small spindle-like pyramidal and multiform neurons. Source of connections to the thalamus.


Neuronal Signals

A typical synapse delivers about 10-30 pA into the neuron. In many cases, this means that it increases the membrane voltage at the cell body by about 0.2-1 mV.

Therefore, many synaptic inputs have to happen synchronously to trigger an action potential; a rough estimate follows below.
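As a rough worked estimate (assuming, for illustration, that about 15 mV of depolarization is needed to reach the firing threshold and that each input contributes about 0.5 mV, the middle of the range above):

N ≈ 15 mV / 0.5 mV per input ≈ 30 synchronous inputs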


From: Principles of Neural Science, Kandel, Schwartz, Jessell, 1991


Dendritic Spines: Inputs for Synapses

Excitatory synapses often form at spines, which are bulges of the dendritic membrane.

Although much is unknown, they probably act as local diffusion reservoirs for calcium signals and change their shape upon learning.


Dendritic Logics

The interplay of currents along the dendritic tree can be intricate and allows the neuronal network to implement various logical operations (left):

A: Inhibitory synapses can veto more distal excitatory synapses: output = [e3 and not (i3 or i2 or i1)] or [e2 and not (i2 or i1)] or [e1 and not i1].

B: Branches can overcome the inhibitory effects, for example [e5 and not i5] and not i7.

So the assumption that a dendritic tree performs a simple addition is very simplistic. Case A is written out as Boolean logic below.
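A minimal sketch of case A as plain Boolean logic (the synapse names e1-e3, i1-i3 follow the formula above; reducing each synapse to "active or not" is of course itself a simplification):

```python
def dendritic_output_A(e1, e2, e3, i1, i2, i3):
    """Case A: each inhibitory synapse vetoes all excitatory
    synapses that sit more distally on the same path."""
    return ((e3 and not (i3 or i2 or i1))
            or (e2 and not (i2 or i1))
            or (e1 and not i1))

# Distal excitation e3 alone fires the output ...
assert dendritic_output_A(False, False, True, False, False, False)
# ... but the proximal inhibitory synapse i1 vetoes it.
assert not dendritic_output_A(False, False, True, True, False, False)
```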


From: The Synaptic Organization of the Brain, Gordon M. Shepherd, 1998



Sparse Firing

Experimentally, one can excite long trains of action potentials (top).

Thus, for a long time, the average firing rate was taken as the main parameter of neural networks.


From: Principles of Neural Science, Kandel, Schwartz, Jessell, 1991

Fast spiking is not the normal mode of operation for most neurons in the brain. Typically, neurons fire sparsely, where each action potential counts (below).


Simple Model: Associative Memory

McCulloch and Pitts simplified neuronal signalling to two states: neurons i = 1..N are either in state Si = -1 or Si = +1, i.e. they are silent or fire an action potential.

In the simplest model of an associative memory, the neurons are connected among themselves with a coupling strength matrix Jij. It contains the "strength" or synaptic weight of the connections between the neurons.

Assume that the dendrites of neuron i simply add the incoming signals. The internal signal hi of the neuron is then the matrix product of the incoming neuronal states Sj, hi = Σj Jij Sj (summing over the common index).

In the simplest form, neuron i fires if hi is positive: Si = sign[hi].

This update can be performed with time lags, sequentially, or in parallel, and defines the dynamics of the neuronal net.


[Figure: output neurons i connected to input neurons j via the coupling strength matrix Jij]

Sout = t(J Sin)

The dynamics follow from feeding the output back as input (OUT = IN):

t(h) = sign[h],  Si(t + Δt) = sign[hi(t)]
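A minimal sketch of this update dynamics (the synchronous parallel update is one of the variants mentioned above; the helper names are ours):

```python
import numpy as np

def update(J, S):
    """One parallel update step: S_i(t + dt) = sign(h_i), with h = J S."""
    h = J @ S
    return np.where(h >= 0, 1, -1)

def run_to_fixpoint(J, S, max_steps=100):
    """Iterate the dynamics until the state stops changing."""
    for _ in range(max_steps):
        S_new = update(J, S)
        if np.array_equal(S_new, S):
            break
        S = S_new
    return S
```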



The dynamics have a number of well-defined fixed points. By setting Jij, activity patterns can be memorized and dynamically retrieved.

You want to memorize patterns ξμ in the network. The recipe for doing this is reminiscent of an old postulate in neuroscience.

Hebb postulated in 1949: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Both proportionalities are still present in the learning rule for Jij below.

Pattern μ for neuron i: ξ_i^μ = ±1, with probability (1 ± a)/2 for the states ±1 (so a is the mean pattern activity).

Learning the patterns with a Hebbian learning rule leads to:

Jij = [2 / (N(1 - a²))] Σ_{μ=0}^{q} ξ_i^μ (ξ_j^μ - a)
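A minimal sketch of this learning rule (patterns stored as rows of a ±1 NumPy array; the zero diagonal is a common convention, assumed here):

```python
import numpy as np

def hebbian_weights(patterns, a=0.0):
    """J_ij = 2/(N(1 - a^2)) * sum_mu xi_i^mu * (xi_j^mu - a),
    for patterns of shape (q, N) with entries +-1 and mean activity a."""
    q, N = patterns.shape
    J = 2.0 / (N * (1.0 - a**2)) * patterns.T @ (patterns - a)
    np.fill_diagonal(J, 0.0)  # no self-coupling (common convention)
    return J
```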



Images of size I x I are often used to show the memorizing capability of neural networks. The image is then the pattern vector with length I^2, and the coupling strength matrix J has a size of I^2 x I^2.

For example, we store 8 letters with I=10 using N=100 neurons and a coupling matrix of 100x100 weights. Retrieval from highly noisy input is possible, but shows some artefacts (F, G).

Retrieval is performed by starting at the noisy pattern and following the neuronal update dynamics to its fixed point, as in the sketch below.
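An end-to-end sketch of storage and retrieval (random ±1 patterns stand in for the letters; with 8 patterns in 100 neurons the load stays below the 0.14·N capacity discussed next, so retrieval typically succeeds):

```python
import numpy as np

rng = np.random.default_rng(0)
N, q = 100, 8                                # 100 neurons, 8 "letters"
patterns = rng.choice([-1, 1], size=(q, N))

J = patterns.T @ patterns / N                # Hebbian rule for a = 0
np.fill_diagonal(J, 0.0)

noisy = patterns[0].copy()                   # corrupt one stored pattern
flip = rng.choice(N, size=N // 4, replace=False)
noisy[flip] *= -1

S = noisy                                    # run dynamics to the fixed point
for _ in range(100):
    S_new = np.where(J @ S >= 0, 1, -1)
    if np.array_equal(S_new, S):
        break
    S = S_new

print("overlap with stored pattern:", S @ patterns[0] / N)  # ~1.0
```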

The capacity of a fully connected formal neural network scales with N: the number of patterns that can be stored is about 0.14·N. Thus, in the above network, we can store about 14 letters.

An associative memory with the same number of synapses as the brain (10^15) could store about 0.14 · 10^7.5 ≈ 5x10^6 different patterns, since a fully connected network with 10^15 synapses has N = 10^7.5 neurons.

But the connections in the brain are much morecomplex.


From: Models of Neural Networks I, Domany, van Hemmen, Schulten, Springer 1995

Capacity q of a fully connected network: μ = 1…q with q ≈ 0.14·N.


Hopfield: Analogy to Spin Glasses

J.J. Hopfield showed in 1982 that formal neural networks are analogous to spin glasses.

A spin glass is an amorphous material in which spins are fixed in a 3D matrix. The spins can be oriented up or down.

The magnetic field from each spin influences the other spins. This "crosstalk" between spins is described by a coupling strength matrix J.

Such a spin glass is described by the Hamiltonian H given below. The fixed points are now simply the ground states of the system, to which the dynamics converge.

The analogy made neuronal networksmore accessible to theoretical physicists.

Hopfield's correspondence: neural networks = spin glasses, with the coupling strength matrix J playing the role of the spin-spin coupling.

Hamiltonian for spin glasses:

H = -(1/2) Σ_{i≠j} Jij Si Sj
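A small numerical sketch of this connection (assuming symmetric couplings with zero diagonal, under which the energy H never increases during asynchronous updates; the random J is illustrative, not a learned matrix):

```python
import numpy as np

def energy(J, S):
    """Spin-glass Hamiltonian H = -1/2 sum_{i != j} J_ij S_i S_j
    (assuming the diagonal of J is zero)."""
    return -0.5 * S @ J @ S

rng = np.random.default_rng(1)
N = 50
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                      # symmetric couplings
np.fill_diagonal(J, 0.0)
S = rng.choice([-1, 1], size=N)

prev = energy(J, S)
for i in rng.permutation(N):           # asynchronous updates
    S[i] = 1 if J[i] @ S >= 0 else -1
    e = energy(J, S)
    assert e <= prev + 1e-12           # energy is non-increasing
    prev = e
print("ground-state candidate energy:", prev)
```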


Towards realistic neurons

Synapses of real neural networks show intrinsic noise. For example, chemical synapses either release a synaptic vesicle or they don't ("quantal" noise).

Noise is implemented in neuronal networks via a probabilistic function t(h), with t being the probability of finding the output neuron in the state Si = +1.

As expected, noise does not change the properties of neural networks dramatically.

As everywhere in biophysics, the inclusion of noise in a model is a good test of its robustness.


Sout = t(J Sin)

Deterministic: t(h) = sign[h]

With randomness: Prob[t(h) = +1] = (1 + tanh[βh - Θ]) / 2

From: Gerstner, Ritz, van Hemmen, Biol. Cybern. 68, 363-374 (1993)
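A minimal sketch of the noisy update rule (the vectorized form and the parameter values are assumptions of this sketch; for β → ∞ it recovers the deterministic rule):

```python
import numpy as np

def stochastic_update(J, S, beta=2.0, theta=0.0, rng=None):
    """One noisy parallel step: P(S_i = +1) = (1 + tanh(beta*h_i - theta))/2."""
    if rng is None:
        rng = np.random.default_rng()
    h = J @ S
    p_plus = 0.5 * (1.0 + np.tanh(beta * h - theta))
    return np.where(rng.random(len(S)) < p_plus, 1, -1)
```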


Towards realistic neurons

Until now, we have assumed instantaneous propagation of signals in neural networks. This is not the case; typical delays are on the 5-20 ms time scale.

Delays lead to new dynamics of the network and can trigger oscillations (left).

We will discuss a compelling model which uses these delays in the following; a toy example is sketched below.


From: Models of Neural Networks I, Domany, van Hemmen, Schulten, Springer 1995
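As a toy sketch of how a delay alone can produce oscillations (a single neuron inhibiting itself through a delayed connection; the delay length is an illustrative assumption):

```python
import numpy as np

delay = 4                               # synaptic delay in update steps
steps = 20
S = np.ones(steps + delay, dtype=int)   # +1 initial history

# Delayed self-inhibition: S(t) = sign(-S(t - delay)).
# The delay turns the dynamics into an oscillation of period 2*delay.
for t in range(delay, steps + delay):
    S[t] = -S[t - delay]

print(S[delay:])   # alternating blocks of +1 and -1, each `delay` long
```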

Sparse Firing and Oscillations


Phase Locking and Pattern Recognition

One old theory of pattern recognition is the so-called "grandmother cell" proposal. It assumes that partial patterns converge onto one cell, and if that cell fires, the grandmother is seen. However, this approach has severe problems:
- What happens if this cell dies?
- There is not much experimental evidence.
- "Combinatorial explosion": any combination of patterns would require a novel grandmother cell, far more than even the brain can hold.

The detection of patterns by cell groups acting as associative memory does not have these problems. Noisy signals can still be detected, and the model is robust against the death of single cells.

However, there are two major problems:
- How should the pattern be read out?
- "Superposition catastrophe": a superposition of patterns is not recognized, since it acts as a novel pattern.

[Figure: A. recognition by a "grandmother cell"; B. recognition by cell groups; the superposition catastrophe]


[A series of figures followed, from: A biologically motivated and analytically soluble model of collective oscillations in the cortex. Gerstner W, Ritz R, van Hemmen JL, Biol. Cybern. 1993;68(4):363-74.]


Towards realistic neurons: Temporal Learning

How does the Hebbian learning paradigm hold up against experiments?

The neurons before and after a synaptic connection are excited externally with different time delays. The efficiency of the synapse is recorded before and after the learning protocol.

This allows one to infer the time resolution and direction of the learning increment ΔJij for a synapse (left).

These results would surely have pleased Hebb. Indeed, the precise timing of neuronal firing modulates the learning of a synapse with a very high time resolution, on the ms time scale.


Hebbian in time:

ΔJij = (1/NT) ∫_0^T eij(t) Si(t) Sj(t - τ) dt

[Figure: shapes of the learning window eij(t)]

From: L. F. Abbott and Sacha B. Nelson, Nature Neuroscience Suppl., 3:1178 (2000)
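A sketch of such a time-resolved learning window (the exponential shape and the time constants are illustrative assumptions modeled on the spike-timing-dependent plasticity curves reported by Abbott & Nelson; the lecture's eij(t) is not specified in detail here):

```python
import numpy as np

def stdp_window(dt, A_plus=1.0, A_minus=1.0, tau=20.0):
    """Illustrative learning window: the synapse is strengthened if the
    presynaptic spike precedes the postsynaptic one (dt > 0), weakened
    otherwise. dt = t_post - t_pre in ms, tau = decay constant in ms."""
    return np.where(dt > 0,
                    A_plus * np.exp(-dt / tau),
                    -A_minus * np.exp(dt / tau))

print(stdp_window(np.array([5.0, -5.0])))  # [strengthen, weaken]
```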


Temporal Patterns

If we start from a distribution of axonal lengths, different synapses transmit information with both a time delay and a strength.

This can actually be used to extend the associative memory of networks into the temporal domain: a sequence of patterns can be stored. If triggered, a movie of patterns is generated (left); a minimal sketch follows below.

From: Retrieval of spatio-temporal sequence in asynchronous neural network, Hidetoshi Nishimori and Tota Nakamura, Physical Review A, 41, 3346-3354 (1990)
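A minimal sketch of sequence storage with asymmetric couplings (the rule Jij ∝ Σ_μ ξ_i^{μ+1} ξ_j^μ that maps each pattern onto its successor is the standard sequence-generating variant of Hebbian learning; the synchronous update and the parameters are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
N, q = 200, 5
xi = rng.choice([-1, 1], size=(q, N))      # a cyclic sequence of patterns

# Asymmetric couplings: J_ij = (1/N) sum_mu xi^{mu+1}_i xi^mu_j
J = sum(np.outer(xi[(m + 1) % q], xi[m]) for m in range(q)) / N

S = xi[0].copy()                           # trigger with the first pattern
for step in range(2 * q):
    print(step, np.round(xi @ S / N, 2))   # overlap with each stored pattern
    S = np.where(J @ S >= 0, 1, -1)        # one parallel update -> next pattern
```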


Sensory Maps: Kohonen Network

Finding: sensory maps are found in the brain with a high degree of large-scale organization.

Problem: how does the brain map and wire similar outputs next to each other, although there is no master to order things?


Approach: Kohonen assumed a "winner takes all" approach, in which the direct neighbors of the winner profit from a mapping and more distant ones are punished. With this, a simple algorithm (below) generates beautiful sensory maps.

Disadvantage: we can only guess the real microscopic algorithm behind this approach, since it appears that we need a master to determine the winner.


Kohonen Network Algorithm

Input stimulus vector v with index l; synaptic weight Ja,l from V to A; target map location a.

Step 0: Initialization. Synaptic weights Ja,l = random.

Step 1: Stimulus. Choose a stimulus vector v.

Step 2: Find winner. Find the stimulation winner, i.e. the location a whose weight vector has minimal distance from the stimulus v:

|v - Ja| ≤ |v - Ja'|  for all a'

Step 3: Adaptation. Move the weights of the winner a (and of its surrounding, weighted by the neighborhood function ha,a') towards the stimulus v:

Ja'(new) = Ja'(old) + ε · ha,a' · [v - Ja'(old)]

and go to Step 1. This converges towards a mapping in which the input of location a is Σl Ja,l vl, and v → a with |v - Ja| minimal.
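A compact sketch of the algorithm (a 1D chain of map locations with a Gaussian neighborhood h is a common choice, assumed here; ε, σ, and the step count are illustrative):

```python
import numpy as np

def kohonen(stimuli, n_map=20, eps=0.1, sigma=2.0, steps=5000,
            rng=np.random.default_rng(3)):
    """Self-organizing map: a 1D chain of n_map locations a, each carrying
    a weight vector J[a] that lives in stimulus space."""
    J = rng.random((n_map, stimuli.shape[1]))        # Step 0: random weights
    positions = np.arange(n_map)
    for _ in range(steps):
        v = stimuli[rng.integers(len(stimuli))]      # Step 1: pick a stimulus
        a = np.argmin(np.sum((J - v) ** 2, axis=1))  # Step 2: find the winner
        h = np.exp(-(positions - a) ** 2 / (2 * sigma ** 2))
        J += eps * h[:, None] * (v - J)              # Step 3: adapt
    return J

# Usage: map stimuli from a 2D square onto the 1D chain -- the converged
# chain snakes through the square (cf. the 2D-to-1D example below).
stimuli = np.random.default_rng(4).random((1000, 2))
J = kohonen(stimuli)
```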


Kohonen Example: 2D to 2D mapping

Example: the input vector v is the logarithmic amplitude of two microphones which record a sound in a 2D space.

We start with random weights J.

The Kohonen algorithm results in a map that reflects the arrangement of the sound locations in 2D space.

The Kohonen map has memorized the neighborhood information in its synaptic weights.


Kohonen Example: 2D to 1D mapping

Quite impressive is the Kohonen mapping between different dimensions, in this case between locations in 2D and a 1D receptive Kohonen map.

The mapping problem in this case is similar to the traveling salesman problem.