Computing with physics
From biological to artificial intelligence and back again

Mihai A. Petrovici
Electronic Vision(s), Kirchhoff Institute for Physics, University of Heidelberg
Computational Neuroscience Group, Department of Physiology, University of Bern



Bio-inspired artificial intelligence

Why spikes?

Sampling-based Bayesian computation

Petrovici & Bill et al. (2013, 2016), Leng & Petrovici et al. (2016, 2018), Kungl et al. (in prep.)
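The sampling picture can be made concrete with a minimal Gibbs sampler for a two-unit Boltzmann distribution — a deliberately simplified stand-in for the spike-based sampling networks of the cited work; all parameters here are illustrative, not taken from the slides:

```python
import itertools
import math
import random

# Two binary units with biases b and symmetric coupling w (illustrative values).
b = [0.5, -0.3]
w = 0.8  # coupling W_12 = W_21

def energy(z):
    return -(b[0] * z[0] + b[1] * z[1] + w * z[0] * z[1])

# Analytic target distribution p(z) ∝ exp(-E(z)).
states = list(itertools.product([0, 1], repeat=2))
Z = sum(math.exp(-energy(s)) for s in states)
p_target = {s: math.exp(-energy(s)) / Z for s in states}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Gibbs sampling: each unit is resampled from its conditional
# p(z_i = 1 | z_j) = sigmoid(b_i + w * z_j).
random.seed(42)
z = [0, 0]
counts = {s: 0 for s in states}
n_samples = 200_000
for _ in range(n_samples):
    for i in (0, 1):
        j = 1 - i
        z[i] = 1 if random.random() < sigmoid(b[i] + w * z[j]) else 0
    counts[tuple(z)] += 1

p_empirical = {s: counts[s] / n_samples for s in states}
for s in states:
    print(s, round(p_target[s], 3), round(p_empirical[s], 3))
```

The empirical state frequencies converge to the Boltzmann distribution, which is the operational meaning of "computing with samples" in these networks.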

Whence stochasticity?

Stochasticity in (dedicated) discrete systems

Stochasticity in continuous systems

Injection of stochasticity

Embedded stochasticity

(Pseudo)randomness in deterministic spiking networks

Jordan et al. (2017)

Dold & Bytschok & Petrovici et al. (2018)

Stochasticity from function: Bayesian inference & dreaming

Dold & Bytschok & Petrovici et al. (2018), Kungl et al. (in prep.)

Physical stochastic computation without noise

Leng & Petrovici et al. (2016, 2018), Zenk (2018), Kungl et al. (in prep.)

Superior mixing in spiking networks

Biological mechanisms for superior mixing

Leng & Petrovici et al. (2016, 2018), Korcsak-Gorzo & Baumbach & Petrovici et al. (in prep.)


cortical oscillations

short-term synaptic plasticity

Which way is down?

What’s wrong with Euclidean gradient descent?

Rescaling the synaptic parametrization, $w_{\mathrm{syn}} \to \alpha\, w_{\mathrm{syn}}$, rescales the Euclidean gradient update:

$\Delta w_{\mathrm{syn}} = -\eta\, \frac{\partial C(\alpha\, w_{\mathrm{syn}})}{\partial w_{\mathrm{syn}}} = -\eta\, \alpha\, \frac{\partial C}{\partial w_{\mathrm{syn}}}$

The natural gradient removes this dependence on the arbitrary parametrization by preconditioning with the metric $G$:

$\nabla_{\mathrm{n}} = G(\boldsymbol{w})^{-1}\, \nabla_{\mathrm{e}}$
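A one-dimensional toy example (my own, with an illustrative quadratic cost) shows both the problem and the fix: a Euclidean gradient step depends on how the weight is parametrized, while preconditioning with the metric induced by the reparametrization makes the step invariant:

```python
# Toy cost C(w) = (w - 2)^2 on a single synaptic weight.
def dC_dw(w):
    return 2.0 * (w - 2.0)

eta = 0.1
w0 = 3.0

# Euclidean gradient step taken directly in w.
dw_direct = -eta * dC_dw(w0)

# Reparametrize v = w / alpha (so w = alpha * v) and take the same
# Euclidean step in v. Mapped back to w, the step now depends on alpha.
alpha = 2.0
v0 = w0 / alpha
dC_dv = alpha * dC_dw(alpha * v0)        # chain rule
dw_euclidean = alpha * (-eta * dC_dv)    # = -eta * alpha**2 * dC/dw

# Natural gradient: precondition with the metric G = (dw/dv)**2 = alpha**2
# induced by the reparametrization. The step in w becomes invariant.
G = alpha ** 2
dw_natural = alpha * (-eta * dC_dv / G)  # = -eta * dC/dw

print(dw_direct, dw_euclidean, dw_natural)  # -0.2, -0.8, -0.2
```

The natural-gradient step in the rescaled coordinates lands exactly where the direct step does, independent of $\alpha$.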

Kreutzer et al. (in prep.)

Synaptic plasticity as natural gradient descent

local learning:

$\Delta_{\mathrm{n}} w = \eta \int_0^T \left[ \gamma_0\, \frac{\left( y^* - \phi(V) \right) \phi'(V)}{\phi(V)}\, \boldsymbol{x}^{\epsilon}\, \boldsymbol{r} - \left( \gamma_1 + \gamma_2\, \boldsymbol{w} \right) \right] dt$

Chen et al. (2013), Häusser et al. (2001), Loewenstein et al. (2011)

Kreutzer et al. (in prep.)

Hierarchical networks and the credit-assignment problem

propagate and assign error

reduce cost (gradient descent)

?

Lagrangian mechanics

fundamental principle in
→ mechanics
→ geometrical optics
→ electrodynamics
→ quantum mechanics
→ …
→ neurobiology?

principle of (least) stationary action

$\delta \int dt\; L(\boldsymbol{q}, \dot{\boldsymbol{q}}) = 0$

$\frac{\partial L}{\partial q_i} - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} = 0$

Euler-Lagrange equations of motion
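Stationarity can be checked numerically. The following pure-Python sketch (my own illustrative discretization) uses a harmonic oscillator with unit mass and stiffness: perturbing the true trajectory away from the solution of the Euler-Lagrange equation only increases the discretized action over this short interval:

```python
import math

# Harmonic oscillator with m = k = 1: L(q, qdot) = qdot**2/2 - q**2/2.
# q(t) = cos(t) solves the Euler-Lagrange equation q'' = -q; over a
# short interval its action is not just stationary but minimal.
T = 1.0
n = 2000
dt = T / n
ts = [i * dt for i in range(n + 1)]

def action(q):
    """Discretized action: midpoint rule for both velocity and position."""
    S = 0.0
    for i in range(n):
        qdot = (q[i + 1] - q[i]) / dt
        qmid = 0.5 * (q[i] + q[i + 1])
        S += (0.5 * qdot ** 2 - 0.5 * qmid ** 2) * dt
    return S

q_true = [math.cos(t) for t in ts]

def perturbed(eps):
    # Add eps * sin(pi * t / T): this vanishes at both endpoints,
    # so the boundary conditions stay fixed.
    return [q + eps * math.sin(math.pi * t / T) for q, t in zip(q_true, ts)]

S0 = action(q_true)
for eps in (-0.1, 0.05, 0.1):
    print(eps, action(perturbed(eps)) - S0)  # every difference is positive
```

The second variation dominates once the first variation vanishes, so the action difference grows quadratically in the perturbation amplitude.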

$E(\boldsymbol{u}) = \sum_i \frac{1}{2} \left\| \boldsymbol{u}_i - \boldsymbol{W}_i \bar{\boldsymbol{r}}_{i-1} \right\|^2 + \beta\, \frac{1}{2} \left\| \boldsymbol{u}_N - \boldsymbol{u}_N^{\mathrm{tgt}} \right\|^2$

$\frac{\partial L}{\partial \bar{\boldsymbol{u}}_i} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\bar{\boldsymbol{u}}}_i} = 0$

$\dot{\boldsymbol{W}}_i = -\eta\, \frac{\partial E}{\partial \boldsymbol{W}_i}$

$\tau \dot{\boldsymbol{u}}_i = \boldsymbol{W}_i \boldsymbol{r}_{i-1} - \boldsymbol{u}_i + \boldsymbol{e}_i$ → neuron dynamics!

$\bar{\boldsymbol{e}}_i = \bar{\boldsymbol{r}}_i'\, \boldsymbol{W}_{i+1}^{T} \left( \boldsymbol{u}_{i+1} - \boldsymbol{W}_{i+1} \bar{\boldsymbol{r}}_i \right)$ → error backprop!

$\dot{\boldsymbol{W}}_i = \eta \left( \boldsymbol{u}_i - \boldsymbol{W}_i \bar{\boldsymbol{r}}_{i-1} \right) \bar{\boldsymbol{r}}_{i-1}^{T}$ → Urbanczik-Senn learning rule!
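A toy two-layer rate network (sizes, constants, and the simple Euler integration are my own choices, not from the slides) sketches how these three ingredients interact: relaxation under the neuron dynamics with top-down errors and output nudging, followed by the local weight update:

```python
import numpy as np

rng = np.random.default_rng(0)

phi = np.tanh
def phi_prime(u):
    return 1.0 - np.tanh(u) ** 2

# Two-layer network (illustrative sizes): input r0 -> hidden u1 -> output u2.
r0 = np.array([0.5, -1.0])
u_tgt = np.array([0.3, -0.2])            # target output potentials
W1 = 0.5 * rng.standard_normal((3, 2))
W2 = 0.5 * rng.standard_normal((2, 3))

tau, dt, beta, eta = 1.0, 0.1, 1.0, 0.2

def relax(W1, W2, n_steps=200):
    """Let potentials settle under tau*du_i = W_i r_{i-1} - u_i + e_i."""
    u1 = np.zeros(3)
    u2 = np.zeros(2)
    for _ in range(n_steps):
        r1 = phi(u1)
        e2 = beta * (u_tgt - u2)                      # output nudging
        e1 = phi_prime(u1) * (W2.T @ (u2 - W2 @ r1))  # backpropagated error
        u1 += dt / tau * (W1 @ r0 - u1 + e1)
        u2 += dt / tau * (W2 @ r1 - u2 + e2)
    return u1, u2

def output_error(W1, W2):
    # Free (beta = 0) prediction: u2 = W2 phi(W1 r0).
    return np.linalg.norm(W2 @ phi(W1 @ r0) - u_tgt)

err_before = output_error(W1, W2)
for _ in range(300):                                  # training epochs
    u1, u2 = relax(W1, W2)
    # Local plasticity: dW_i ∝ (u_i - W_i r_{i-1}) r_{i-1}^T.
    W1 += eta * np.outer(u1 - W1 @ r0, r0)
    W2 += eta * np.outer(u2 - W2 @ phi(u1), phi(u1))
err_after = output_error(W1, W2)
print(err_before, err_after)  # the free prediction error shrinks
```

With the output nudged toward the target (β > 0), the purely local updates successively reduce the free prediction error — the credit-assignment behavior the derivation promises.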


Dold et al. (Cosyne abstracts 2019, paper in prep.)

Lagrangian mechanics for neuronal networks

$\boldsymbol{u} = \bar{\boldsymbol{u}} - \tau \dot{\bar{\boldsymbol{u}}}, \qquad L(\bar{\boldsymbol{u}}, \dot{\bar{\boldsymbol{u}}})$

energy 𝐸𝐸

$\beta = 0,\ \dot{\boldsymbol{W}} = 0 \;\Rightarrow\; E = 0$

$\beta \neq 0,\ \dot{\boldsymbol{W}} = 0 \;\Rightarrow\; E \neq 0$

$\beta \neq 0,\ \dot{\boldsymbol{W}} \neq 0 \;\Rightarrow\; E = 0$

Sacramento et al. (2017), Dold et al. (Cosyne abstracts 2019, paper in prep.)

Biophysical implementation

$\boldsymbol{r}(t) \approx \boldsymbol{\varphi}\!\left( \boldsymbol{u}(t + \tau) \right)$

• local representation of errors for plausible synaptic plasticity

• prospective coding for continuous dynamics
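Prospective coding admits a compact sanity check: for a low-pass membrane $\tau \dot{u} = x - u$, the first-order look-ahead $u + \tau \dot{u}$ equals the input $x$ exactly, so a rate $\varphi(u + \tau \dot{u})$ tracks the drive without the membrane lag. A toy numpy sketch (parameters illustrative):

```python
import numpy as np

tau, dt = 0.2, 1e-3
ts = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * ts)            # presynaptic drive

# Low-pass membrane: tau * du/dt = x - u (forward Euler).
u = np.zeros_like(ts)
for k in range(len(ts) - 1):
    u[k + 1] = u[k] + dt / tau * (x[k] - u[k])

# Prospective potential: first-order look-ahead u + tau * du/dt.
udot = np.gradient(u, dt)
u_prosp = u + tau * udot

# After the initial transient, u lags and attenuates x, while the
# prospective potential tracks x almost exactly.
mask = ts > 0.5
lag_err = np.max(np.abs(u[mask] - x[mask]))
prosp_err = np.max(np.abs(u_prosp[mask] - x[mask]))
print(lag_err, prosp_err)
```

The residual prospective error is pure discretization noise, which is why prospective coding can keep the network dynamics effectively instantaneous.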

Bio-inspired artificial intelligence

Some of the neural networks behind our neural networks

Dominik Dold, Elena Kreutzer, Akos Kungl, Andi Baumbach, Luziwei Leng, Oliver Breitwieser, Jakob Jordan, Joao Sacramento, Agnes Korcsak-Gorzo, Walter Senn, Johannes Schemmel, Karlheinz Meier