
Bayesian Computing with Spikes

A. Baumbach, M. A. Petrovici, I. Bytschok, D. Stöckel, L. Leng, O. Breitwieser, J. Schemmel, K. Meier

Heidelberg University, Kirchhoff-Institute for Physics

Sampling with Spiking Neurons

You are able to raed this snetence, even thgouh mnay of the wdros are mieepsllsd and yuor biarn has to dcdiee which wrod was maent to be wtreitn.

The fact that you can read the opening sentence is a sign of the brain’s ability to infer knowledge based on partial, ambiguous or even false information. Any such decision can be thought of as a probabilistic process. Here, the letters used are correct, but their order is scrambled.

Evidence has been accumulating that the brain’s internal representation of both its surroundings and its decisions is sample-based (e.g. [1]). An interesting research question therefore is: how can sample-based probabilistic inference be implemented in a network of spiking neurons?

In the LIF sampling framework [2, 3], a neuron represents a binary random variable. The network samples from a distribution that is shaped by its synaptic connections (weights) and external inputs (biases).
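
In the abstract domain, the distribution being sampled is a Boltzmann distribution over binary states, p(z) ∝ exp(½ zᵀWz + bᵀz) [2]. The following is a minimal software sketch of that target, a Gibbs sampler; the function and its parameters are illustrative, not the authors’ code.

```python
import numpy as np

def gibbs_sample(W, b, n_steps, seed=0):
    """Sketch: Gibbs sampling from p(z) ~ exp(0.5 z^T W z + b^T z).

    W: symmetric weight matrix with zero diagonal; b: bias vector.
    Each binary unit plays the role of one (abstract) sampling neuron.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    z = rng.integers(0, 2, size=n)               # random initial state
    samples = np.empty((n_steps, n), dtype=int)
    for t in range(n_steps):
        for i in range(n):
            u = W[i] @ z + b[i]                  # local field of unit i
            z[i] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples
```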

Accelerated Neuromorphic Sampling

The LIF model is the de facto standard for neuromorphic hardware platforms (here: the Spikey chip, with 384 neurons and 100k synapses) [4].

Replacing single units with small subnetworks makes the network robust to the parameter restrictions and noise inherent in analog hardware [5]. The hardware speed-up factor of 10⁴ with respect to biological real time provides a massive advantage for computation with sampled distributions.

Generative and Discriminative Models

Adding hidden layers allows the representation of arbitrarily complex distributions. Immediate applications lie, for example, in the field of pattern recognition.

The LIF sampling framework enables the application of various learning algorithms. With a local learning rule, small networks of LIF neurons can achieve high classification performance (97% on MNIST [6]).
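
The poster does not spell out the rule. As an illustration, the sketch below assumes a contrastive, correlation-based update of the kind used for Boltzmann machines, where each weight change depends only on the joint activity of the two units it connects, which is what makes the rule local:

```python
import numpy as np

def local_update(W, b, z_data, z_model, lr=0.01):
    """Hedged sketch of a local, contrastive learning step (assumption:
    a Boltzmann-machine-style rule, not necessarily the poster's).

    z_data:  (batch, n) binary samples with training data clamped.
    z_model: (batch, n) binary samples from the free-running network.
    """
    dW = z_data.T @ z_data / len(z_data) - z_model.T @ z_model / len(z_model)
    db = z_data.mean(axis=0) - z_model.mean(axis=0)
    np.fill_diagonal(dW, 0.0)                    # keep zero self-coupling
    return W + lr * dW, b + lr * db
```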

Additionally, these networks are inherently generative models of the learned data. This is essential for, e.g., pattern completion. While classical machine-learning networks often have difficulties with such tasks, spiking networks can profit from biological mechanisms such as short-term plasticity to boost their performance.
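
In this picture, pattern completion amounts to clamping the known units and resampling the rest, so that the free units are drawn from p(z_free | z_clamped). A sketch under the same assumptions as above:

```python
import numpy as np

def complete_pattern(W, b, z_partial, clamped, n_steps=1000, seed=1):
    """Sketch: sample the free units conditioned on the clamped ones.

    z_partial: full state vector with the known entries filled in.
    clamped:   indices of units whose values are held fixed.
    """
    rng = np.random.default_rng(seed)
    z = np.array(z_partial, dtype=int)
    free = [i for i in range(len(b)) if i not in set(clamped)]
    for _ in range(n_steps):
        for i in free:
            u = W[i] @ z + b[i]
            z[i] = rng.random() < 1.0 / (1.0 + np.exp(-u))
    return z
```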

References

[1] J. Fiser, P. Berkes, G. Orbán, and M. Lengyel, “Statistically optimal perception and learning: from behavior to neural representations,” Trends in Cognitive Sciences, vol. 14, no. 3, pp. 119–130, 2010.

[2] M. A. Petrovici, J. Bill, I. Bytschok, J. Schemmel, and K. Meier, “Stochastic inference with spiking neurons in the high-conductance state,” Phys. Rev. E, vol. 94, p. 042312, Oct 2016.

[3] M. A. Petrovici, Form Versus Function: Theory and Models for Neuronal Substrates. Springer, 2016.

[4] T. Pfeil, A. Grübl, S. Jeltsch, E. Müller, P. Müller, M. A. Petrovici, M. Schmuker, D. Brüderle, et al., “Six networks on a universal neuromorphic computing substrate,” Front. Neurosci., vol. 7, p. 11, 2012.

[5] M. A. Petrovici, D. Stöckel, I. Bytschok, J. Bill, T. Pfeil, J. Schemmel, and K. Meier, “Fast sampling with neuromorphic hardware,” in Advances in Neural Information Processing Systems (NIPS), vol. 28, 2015.

[6] L. Leng, M. A. Petrovici, R. Martel, I. Bytschok, O. Breitwieser, J. Bill, et al., “Spiking neural networks as superior generative and discriminative models,” in Cosyne Abstracts 2016, (Salt Lake City, USA), 2016.

[7] J. Schemmel, J. Fieres, and K. Meier, “Wafer-scale integration of analog neural networks,” in 2008 IEEE International Joint Conference on Neural Networks, pp. 431–438, IEEE, 2008.

Ensemble Phenomena

LIF sampling networks share the same Hamiltonian (energy function) with well-studied systems from solid-state physics.

The Ising model, used to describe magnetic phenomena, is one such example. In the Ising model, the two-state units represent electron spins. Interactions between these spins correspond to synaptic weights in a neural network.

An exact parameter translation from the LIF domain to the Ising domain can be formulated, allowing the emulation of various well-studied phenomena from statistical physics, such as the Curie law or hysteresis, with networks of spiking neurons.
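
The poster does not give the translation itself. As a hedged illustration, the first step is the textbook change of variables between binary units z ∈ {0, 1} and Ising spins s ∈ {−1, +1} via s = 2z − 1; the full LIF-domain translation would additionally absorb the neuron parameters, which is omitted here:

```python
import numpy as np

def boltzmann_to_ising(W, b):
    """Map E(z) = -0.5 z^T W z - b^T z with z in {0,1} onto the Ising
    energy E(s) = -0.5 s^T J s - h^T s via s = 2z - 1 (up to a constant).
    Textbook identity only; not the poster's full LIF-to-Ising translation.
    """
    J = W / 4.0
    h = b / 2.0 + W.sum(axis=1) / 4.0
    return J, h
```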

[Figure: mean activity (0 to 1) vs. temperature (10⁻² to 10⁰), following the Curie law ⟨∑zᵢ⟩ = tanh(CB/T).]

The arguably most interesting ensemble phenomena happen around so-called critical points, where ensembles undergo phase transitions, altering their macroscopic behavior. In ferromagnetic materials, such a transition happens at the Curie temperature, above which they become paramagnetic.
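
In the Boltzmann picture, temperature enters as p(z) ∝ exp(−E(z)/T), i.e. by scaling weights and biases with 1/T. A sketch of such a temperature scan, reusing the gibbs_sample sketch from above (an illustration, not the poster’s measurement protocol):

```python
import numpy as np

def mean_activity(W, b, T, n_steps=2000):
    """Mean network activity at temperature T, discarding a burn-in phase."""
    samples = gibbs_sample(W / T, b / T, n_steps)   # sketch defined above
    return samples[n_steps // 2:].mean()
```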

Intriguingly, equivalent LIF networks show no such phase transition. The reason lies in the long-tailed shape of synaptic interaction kernels in biological neural networks.

[Figure: mean activity vs. log(T), two panels, for the abstract model and for the equivalent LIF network.]

This calls into question the use of oversimplified abstract models for predicting spiking network behavior.

[Figure: interaction strength over time for an Ising spin vs. a neuron; the neuronal interaction kernel is long-tailed.]

Outlook

The physical emulation of LIF sampling networks in large-scale accelerated analog systems [7] enables a multitude of applications:

▶ Fast pattern recognition and completion
▶ Long-term learning
▶ Emulation of statistical physics phenomena
▶ (Quantum) annealing solutions of NP-hard problems

http://www.kip.uni-heidelberg.de/vision/publications/ [email protected]
