Computing with physics
From biological to artificial intelligence and back again
Mihai A. Petrovici
Electronic Vision(s), Kirchhoff Institute for Physics
University of Heidelberg
Computational Neuroscience Group, Department of Physiology
University of Bern
Sampling-based Bayesian computation
Petrovici & Bill et al. (2013, 2016), Leng & Petrovici et al. (2016, 2018), Kungl et al. (in prep.)
Whence stochasticity?
Stochasticity in (dedicated) discrete systems
Stochasticity in continuous systems
Injection of stochasticity
Embedded stochasticity
Dold & Bytschok & Petrovici et al. (2018)
Stochasticity from function: Bayesian inference & dreaming
Dold & Bytschok & Petrovici et al. (2018), Kungl et al. (in prep.)
Physical stochastic computation without noise
Leng & Petrovici et al. (2016, 2018), Zenk (2018), Kungl et al. (in prep.)
Superior mixing in spiking networks
Biological mechanisms for superior mixing
Leng & Petrovici et al. (2016, 2018), Korcsak-Gorzo & Baumbach & Petrovici et al., in prep.
• cortical oscillations
• short-term synaptic plasticity
What's wrong with Euclidean gradient descent?
Euclidean gradient descent:
$\Delta w_{\mathrm{syn}} = -\eta\, \frac{\partial C}{\partial w_{\mathrm{syn}}}$

Natural gradient:
$\nabla_{\mathrm{n}} = G(w)^{-1}\, \nabla_{\mathrm{e}}$
(the Euclidean gradient corrected by the inverse of the metric $G(w)$ on parameter space)
Kreutzer et a l., in prep.
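The contrast can be made concrete with a small numerical sketch (my own illustration, not from the talk; the quadratic cost and the choice of the Hessian as a stand-in for the metric $G$ are assumptions):

```python
import numpy as np

# Toy comparison (illustrative): Euclidean vs. natural gradient descent
# on an ill-conditioned quadratic cost C(w) = 0.5 * w^T H w.
# The Hessian H serves here as a stand-in for the metric G(w).

H = np.diag([100.0, 1.0])        # strongly anisotropic curvature
G_inv = np.linalg.inv(H)         # inverse metric

def grad(w):
    return H @ w                 # Euclidean gradient of C

w_euc = np.array([1.0, 1.0])
w_nat = np.array([1.0, 1.0])
for _ in range(50):
    # Euclidean step: the learning rate is limited by the steepest direction
    w_euc = w_euc - 0.01 * grad(w_euc)
    # natural step: G^-1 whitens the curvature, all directions contract equally
    w_nat = w_nat - 0.5 * G_inv @ grad(w_nat)

print(np.linalg.norm(w_euc), np.linalg.norm(w_nat))
```

With the metric taken into account, every direction converges at the same rate; the Euclidean update crawls along the shallow direction of the cost landscape.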
Synaptic plasticity as natural gradient descent
local learning: $\Delta w = \eta \int_0^T \gamma_0 \left( y^\ast - \varphi(u) \right) \varphi'(u)\, \cdots \,\mathrm{d}t$, with additional scaling factors $\gamma_1$, $\gamma_2$
Chen et al. (2013), Häusser et al. (2001), Loewenstein et al. (2011)
Kreutzer et al., in prep.
Hierarchical networks and the credit-assignment problem
propagate and assign error
reduce cost (gradient descent)
?
Lagrangian mechanics
fundamental principle in:
• mechanics
• geometrical optics
• electrodynamics
• quantum mechanics
• …
• neurobiology?
principle of (least) stationary action
$\delta \int \mathrm{d}t\; L(q, \dot q) = 0$

$\frac{\partial L}{\partial q} - \frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot q} = 0$
Euler-Lagrange equations of motion
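As a sanity check of the formalism (standard textbook material, not from the slides): for a point mass in a harmonic potential, the Euler-Lagrange equation reduces to Newton's law.

```latex
L(q, \dot q) = \tfrac{1}{2} m \dot q^{2} - \tfrac{1}{2} k q^{2}
\qquad\Longrightarrow\qquad
\frac{\partial L}{\partial q}
- \frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot q}
= -\,k q - m \ddot q = 0
\quad\Longleftrightarrow\quad
m \ddot q = -\,k q .
```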
$E(u) = \sum_i \tfrac{1}{2} \big\| u_i - W_i\, \bar r_{i-1} \big\|^2 + \beta\, \tfrac{1}{2} \big\| u_N - u_N^{\mathrm{tgt}} \big\|^2$
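To connect the energy to the dynamics that follow, one can differentiate it with respect to a hidden-layer voltage (a hedged sketch, assuming the energy has the per-layer mismatch form $E = \sum_i \tfrac{1}{2}\|u_i - W_i \bar r_{i-1}\|^2$ with $\bar r_i = \varphi(u_i)$): the gradient splits into a bottom-up mismatch and a top-down error term.

```latex
\frac{\partial E}{\partial u_i}
= \underbrace{\left( u_i - W_i\, \bar r_{i-1} \right)}_{\text{bottom-up mismatch}}
\;-\;
\underbrace{\bar r_i' \odot W_{i+1}^{T}
\left( u_{i+1} - W_{i+1}\, \bar r_i \right)}_{\text{top-down error } \bar e_i}
```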
$\frac{\partial L}{\partial u_i} - \frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot u_i} = 0$
$\dot W_i = -\eta\, \frac{\partial E}{\partial W_i}$

$\tau \dot u_i = W_i\, \bar r_{i-1} - u_i + e_i$ → neuron dynamics!
$\bar e_i = \bar r_i' \odot W_{i+1}^{T} \big( u_{i+1} - W_{i+1}\, \bar r_i \big)$ → error backprop!
$\dot W_i = \eta \big( u_i - W_i\, \bar r_{i-1} \big)\, \bar r_{i-1}^{T}$ → Urbanczik-Senn learning rule!
Dold et al. (Cosyne abstracts 2019, paper in prep.)
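The three equations above can be put together in a minimal simulation (my own sketch, not the authors' implementation; the network size, nonlinearity, input, target, and all parameter values are assumptions for illustration):

```python
import numpy as np

# Toy sketch: a two-layer rate network with leaky voltage dynamics, a
# backpropagated error term, and an Urbanczik-Senn-style local weight update:
#   tau * du_i/dt = W_i r_{i-1} - u_i + e_i          (neuron dynamics)
#   e_i  = r_i' * W_{i+1}^T (u_{i+1} - W_{i+1} r_i)  (error backprop)
#   dW_i/dt = eta * (u_i - W_i r_{i-1}) r_{i-1}^T    (Urbanczik-Senn rule)

rng = np.random.default_rng(1)
phi = np.tanh
dphi = lambda u: 1.0 - np.tanh(u) ** 2

x = np.array([0.8, -0.5])               # fixed input rates
u_tgt = np.array([0.3])                 # target output voltage
W1 = rng.standard_normal((3, 2))        # input -> hidden weights
W2 = 0.5 * rng.standard_normal((1, 3))  # hidden -> output weights
u1, u2 = np.zeros(3), np.zeros(1)

dt, tau, eta, beta = 0.1, 1.0, 0.1, 1.0
for _ in range(5000):
    r1 = phi(u1)
    e2 = beta * (u_tgt - u2)                    # nudging at the output
    e1 = dphi(u1) * (W2.T @ (u2 - W2 @ r1))     # backpropagated error
    u1 = u1 + dt / tau * (W1 @ x - u1 + e1)
    u2 = u2 + dt / tau * (W2 @ r1 - u2 + e2)
    W1 = W1 + dt * eta * np.outer(u1 - W1 @ x, x)
    W2 = W2 + dt * eta * np.outer(u2 - W2 @ r1, r1)

print(abs(u2[0] - u_tgt[0]))   # residual |u2 - u_tgt| after learning
```

As plasticity reduces the bottom-up mismatch, the output voltage settles on the target and the error terms vanish, consistent with the energy being driven to zero.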
Lagrangian mechanics for neuronal networks
$u = \bar u + \tau\, \dot{\bar u}$, Lagrangian $L(\bar u, \dot{\bar u})$
energy $E$
$\beta = 0,\ \dot W = 0 \;\Rightarrow\; E = 0$ (free phase: the network relaxes to a self-consistent state)
$\beta \neq 0,\ \dot W = 0 \;\Rightarrow\; E \neq 0$ (nudging towards a target creates errors)
$\beta \neq 0,\ \dot W \neq 0 \;\Rightarrow\; E = 0$ (plasticity absorbs the errors into the weights)
Sacramento et al. (2017), Dold et al. (Cosyne abstracts 2019, paper in prep.)
Biophysical implementation
$r_i \approx \varphi\!\left( u_i + \tau \dot u_i \right)$
• local representation of errors for plausible synaptic plasticity
• prospective coding for continuous dynamics
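The effect of prospective coding can be checked numerically (an illustrative sketch of my own, assuming a leaky-integrator membrane and a sinusoidal drive): reading out $u + \tau \dot u$ instead of $u$ compensates the lag of leaky integration.

```python
import numpy as np

# Sketch: prospective coding. A leaky membrane low-pass filters its input,
# so the instantaneous voltage lags and attenuates a time-varying drive.
# The prospective readout u + tau * du/dt undoes this filtering.

tau, dt = 1.0, 0.01
t = np.arange(0.0, 10.0, dt)
inp = np.sin(t)                        # time-varying drive

u = np.zeros_like(t)
for k in range(1, len(t)):
    # leaky integration: tau * du/dt = -u + inp
    u[k] = u[k - 1] + dt / tau * (-u[k - 1] + inp[k - 1])

du = np.gradient(u, dt)
code_inst = u                          # instantaneous code: lags and attenuates
code_prosp = u + tau * du              # prospective code: u + tau * du/dt

# tracking error after the initial transient has decayed
err_inst = np.max(np.abs(code_inst[500:] - inp[500:]))
err_prosp = np.max(np.abs(code_prosp[500:] - inp[500:]))
```

Since the membrane equation is $\tau \dot u = -u + \mathrm{inp}$, the prospective quantity $u + \tau \dot u$ equals the input exactly in continuous time; the residual error in the sketch is pure discretization error.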