
Network tuning by genetic algorithms

Håkon Enger, Tom Tetzlaff, and Gaute T. Einevoll

Dept. of Mathematical Sciences and Technology (IMT), Norwegian University of Life Sciences (UMB), Ås, Norway

Introduction

◮ Neuronal network parameters:
  – network architecture
  – dynamics of synapses and single cells

◮ Biologically realistic systems:
  – high-dimensional parameter space
  – constrained only to some extent

◮ Genetic algorithms provide a potential solution to this problem

◮ Questions:
  – To what extent can network parameters be determined by fitting the population statistics of neural activity?
  – What's a good fit strategy?
  – What's the precision of the fit result?
  – How important are individual parameters for the functional performance of the model?

Balanced Random Network Model

[Network schematic: an external population X of Poisson sources drives the excitatory (E) and inhibitory (I) populations with relative strength η; E and I are LIF populations with threshold θ and membrane time constant τ_m; excitatory connections have weight J, inhibitory connections weight −gJ.]

A Network model: Multi-population random network with fixed in-degrees (Brunel, 2000)

Populations    E (excitatory): LIF neurons (see B), size $N_E$
               I (inhibitory): LIF neurons (see B), size $N_I$
               X (external): Poisson point processes with rate $\eta\nu_\theta$, size $K_E(N_E + N_I)$
Connectivity   EE, IE: random convergent $K_E \to 1$, excitatory synapses
               EI, II: random convergent $K_I \to 1$, inhibitory synapses (see C)
               EX, IX: non-overlapping $K_E \to 1$, excitatory synapses (see C)
Parameters     Population sizes $N_{\{E,I\}}$, in-degrees $K_{\{E,I\}} = \epsilon N_{\{E,I\}}$, connectivity $\epsilon$, relative external drive $\eta$
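As an illustration of the "random convergent $K \to 1$" rule, here is a minimal connectivity sketch (population sizes are assumed examples, and this is not the NEST routine used for the actual simulations):

```python
import numpy as np

def random_convergent(n_pre, n_post, k, rng):
    """Random convergent 'K -> 1' connectivity: every postsynaptic
    neuron draws exactly k presynaptic partners, so in-degrees are
    fixed while out-degrees fluctuate (Brunel, 2000)."""
    return [(int(pre), post)
            for post in range(n_post)
            for pre in rng.choice(n_pre, size=k, replace=False)]

rng = np.random.default_rng(1)
NE, NI, eps = 800, 200, 0.1            # illustrative sizes, not the poster's
KE, KI = int(eps * NE), int(eps * NI)  # fixed in-degrees

ee = random_convergent(NE, NE, KE, rng)  # excitatory inputs onto E
ii = random_convergent(NI, NI, KI, rng)  # inhibitory inputs onto I
```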

B Neuron model: Leaky integrate-and-fire (LIF) neuron (Lapicque, 1907; Tuckwell, 1988)

Spike emission          Neuron $k \in [1, N_E + N_I]$ fires at all times $\{t_{j_k} \,|\, V_k(t_{j_k}) = \theta,\ j_k \in \mathbb{N}\}$
Subthreshold dynamics   $\tau_m \dot{V}_k = -V_k + R I_k(t)$ if $\forall j_k:\ t \notin (t_{j_k}, t_{j_k} + \tau_{\text{ref}}]$,
                        with total synaptic input current $I_k(t) = \sum_l \sum_{j_l} i_{kl}(t - t_{j_l})$ (see C)
Reset + refractoriness  $V_k(t) = V_{\text{reset}}$ if $\exists j_k:\ t \in (t_{j_k}, t_{j_k} + \tau_{\text{ref}}]$
Parameters              Membrane time constant $\tau_m$, membrane resistance $R$, spike threshold $\theta$, reset potential $V_{\text{reset}}$, refractory period $\tau_{\text{ref}}$
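For intuition, a minimal forward-Euler integration of these dynamics (parameter values are assumed for illustration only; the actual simulations used NEST):

```python
import numpy as np

# assumed illustrative parameters, not the poster's values
tau_m, R = 20.0, 80.0        # ms, MOhm
theta, V_reset = 20.0, 0.0   # mV
tau_ref, dt = 2.0, 0.1       # ms

def lif_trace(I, V0=0.0):
    """Integrate tau_m * dV/dt = -V + R*I(t) with threshold and reset."""
    V, t_last, spikes = V0, -np.inf, []
    Vs = np.empty(len(I))
    for i, I_t in enumerate(I):
        t = i * dt
        if t - t_last <= tau_ref:          # refractory: clamp to reset
            V = V_reset
        else:
            V += dt / tau_m * (-V + R * I_t)
            if V >= theta:                 # spike emission at threshold
                spikes.append(t)
                t_last, V = t, V_reset
        Vs[i] = V
    return Vs, spikes

Vs, spikes = lif_trace(np.full(5000, 0.3))  # constant 0.3 nA drive
```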

C Synapse model: Static current synapse with α-function shaped PSC

PSC kernel         $i_{kl}(t + d) = \begin{cases} J_{kl}\, e\, \tau_s^{-1}\, t\, e^{-t/\tau_s} & t > 0 \\ 0 & \text{else} \end{cases}$
Synaptic weights   $J_{kl} = \begin{cases} J & \text{if synapse } kl \text{ exists and is excitatory} \\ -gJ & \text{if synapse } kl \text{ exists and is inhibitory} \\ 0 & \text{else} \end{cases}$
Parameters         Excitatory synaptic weight $J$, relative strength $g$ of inhibition, synaptic time constant $\tau_s$, synaptic delay $d$
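The α-shaped kernel is easy to evaluate directly; this short sketch (with assumed weight, time constant and delay) confirms that the PSC peaks at $t = \tau_s$ with amplitude $J_{kl}$:

```python
import numpy as np

def psc_alpha(t, J_kl, tau_s):
    """Alpha-function PSC kernel: J_kl * e * (t/tau_s) * exp(-t/tau_s)
    for t > 0, zero otherwise; peaks at t = tau_s with value J_kl."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, J_kl * np.e * (t / tau_s) * np.exp(-t / tau_s), 0.0)

t = np.linspace(0.0, 5.0, 501)               # ms
i = psc_alpha(t - 1.5, J_kl=0.1, tau_s=0.5)  # spike at 0, assumed delay d = 1.5 ms
assert abs(i.max() - 0.1) < 1e-3             # peak amplitude equals J_kl
```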

D Spike-train analysis

Spike trains                       $s_k(t) = \sum_{j_k} \delta(t - t_{j_k})$
Population-averaged firing rate    $r_0 = \langle s_k(t) \rangle_{k,t}$
Coefficient of variation of the inter-spike interval
    $\mathrm{CV} = \Bigl\langle \sqrt{\langle T_{j_k}^2 \rangle_{j_k} - \langle T_{j_k} \rangle_{j_k}^2} \,\big/\, \langle T_{j_k} \rangle_{j_k} \Bigr\rangle_k$ with $T_{j_k} = t_{j_k+1} - t_{j_k}$
Population-averaged spike-train coherence    $\kappa(\omega) = C(\omega)/P(\omega)$
    with cross-spectrum $C(\omega) = \mathcal{F}_\tau\bigl[\langle s_k(t)\, s_l(t + \tau) \rangle_{k,\, l \neq k,\, t}\bigr](\omega)$
    and power spectrum $P(\omega) = \mathcal{F}_\tau\bigl[\langle s_k(t)\, s_k(t + \tau) \rangle_{k,t}\bigr](\omega)$
Total coherence                    $\kappa_0 = \int_{-\omega_{\max}}^{\omega_{\max}} \mathrm{d}\omega\, \kappa(\omega) \approx \Delta\omega \sum_{\omega = -\omega_{\max}}^{\omega_{\max}} \kappa(\omega)$

Description of the model and the spike-train analysis. Blue-marked parameters are varied during the optimisation.
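These statistics are straightforward to estimate from recorded spike times. Below is a minimal sketch for the rate and the CV (the coherence additionally requires spectral estimation, omitted here; the toy data are assumed for illustration):

```python
import numpy as np

def population_stats(spike_trains, T_sim):
    """Population-averaged rate r0 and CV from a list of per-neuron
    spike-time arrays (times in seconds, simulation length T_sim)."""
    rates = [len(st) / T_sim for st in spike_trains]
    cvs = []
    for st in spike_trains:
        isi = np.diff(np.sort(st))       # inter-spike intervals T_jk
        if len(isi) > 1:
            cvs.append(isi.std() / isi.mean())
    return np.mean(rates), np.mean(cvs)

# usage with toy Poisson-like spike trains (CV close to 1)
rng = np.random.default_rng(0)
trains = [np.cumsum(rng.exponential(0.2, size=50)) for _ in range(100)]
r0, cv = population_stats(trains, T_sim=10.0)
```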

[Figure: average rate (Hz), CV and total coherence (Hz) over the 2-dimensional $(g, \eta)$ parameter space, other parameters kept constant. ⋆ marks the reference point $g = 8$, $\eta = 1.25$, $J_{\text{psp}} = 0.1$ mV, $\theta = 20$ mV, $\tau_s = 0.01$ ms.]


Variability of measures

◮ Sources of variability in measured states:
  – due to recording from only a selection of neurons in the network
  – due to statistical fluctuations in network structure

The statistical fluctuations may be studied by varying the seed of the pseudorandom number generator used to construct the network.
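A hedged sketch of how such seed-induced variability can be quantified (`run_network` is a hypothetical stand-in for the full NEST simulation, not part of the poster's code):

```python
import numpy as np

def seed_variability(run_network, params, seeds):
    """Mean and standard deviation of a measured statistic (e.g. the
    rate r0) across network realizations built from different seeds.

    `run_network(params, seed)` is a hypothetical wrapper around the
    actual simulation that returns the statistic of interest."""
    values = np.array([run_network(params, s) for s in seeds])
    return values.mean(), values.std(ddof=1)

# usage: mean_r0, sd_r0 = seed_variability(run_network, params, range(20))
```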

[Figure: standard deviation and relative standard deviation of the rate, CV and total coherence as a function of the number of neurons observed ($10^2$–$10^4$), comparing the effect of varying the random seed with that of varying the set of observed neurons.]

Alternative cost functions

The cost function used in the optimization algorithm should be dimensionless. There are two natural ways of achieving this:

– Scaling by target value: $f(\mathbf{p}) = \sum_i (s_i(\mathbf{p}) - t_i)^2 / t_i^2$
– Scaling by natural target variability: $f(\mathbf{p}) = \sum_i (s_i(\mathbf{p}) - t_i)^2 / \Delta t_i^2$

($\mathbf{p}$: parameter vector, $s_i(\mathbf{p})$: measured state, $t_i$: target state, $\Delta t_i$: standard deviation of target state.)
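Both scalings are one-liners; a minimal sketch (the function names and example values are illustrative):

```python
import numpy as np

def cost_scaled_by_target(s, t):
    """f(p) = sum_i (s_i - t_i)^2 / t_i^2  (dimensionless)."""
    s, t = np.asarray(s), np.asarray(t)
    return np.sum((s - t) ** 2 / t ** 2)

def cost_scaled_by_variability(s, t, dt):
    """f(p) = sum_i (s_i - t_i)^2 / (Delta t_i)^2."""
    s, t, dt = np.asarray(s), np.asarray(t), np.asarray(dt)
    return np.sum((s - t) ** 2 / dt ** 2)

# usage with the three observables (rate, CV, total coherence)
f = cost_scaled_by_variability([5.1, 0.8, 1.9],   # measured s_i(p)
                               [5.0, 0.75, 2.0],  # targets t_i
                               [0.2, 0.05, 0.3])  # target std devs
```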

[Figure: cost landscapes over the $(g, \eta)$ plane for both scalings, "cost scaled by target" and "cost scaled by exp. error".]

Local cost landscapes

◮ Observation: shallow valley near the minimum.
  – Indicates insensitivity to certain parameter combinations (Gutenkunst et al., 2007).

◮ Exploration of the full cost landscape is not practical for high-dimensional parameter spaces.

◮ Instead, find the local curvature of the cost landscape by studying the Hessian $H_{jk} = \partial^2 f / \partial p_j \partial p_k$ of the cost function.

[Figure: the two cost landscapes with ellipses showing isocontour lines of the quadratic approximation of the cost function found by analysis of the Hessian (Gutenkunst et al., 2007).]

Because of the intrinsic statistical fluctuations of the measured quantities, the data used to calculate the Hessian are noisy. The noise is amplified when calculating derivatives, since this involves subtracting two data points of nearly equal magnitude. To obtain the Hessian used for the above plot, we therefore averaged over 200 different network realizations at each point; a sketch of this procedure follows.
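A hedged sketch of such a noise-averaged central-difference Hessian estimate (`cost` is a hypothetical wrapper that simulates one network realization and returns the cost; step sizes `h` are assumed inputs):

```python
import numpy as np

def averaged_hessian(cost, p, h, n_realizations=200):
    """Central-difference estimate of H_jk = d^2 f / dp_j dp_k, with
    the noisy cost averaged over many network realizations per point.

    `cost(p, seed)` is a hypothetical single-realization cost
    evaluation; h is a vector of finite-difference step sizes."""
    def f(q):  # realization-averaged cost at parameter point q
        return np.mean([cost(q, seed) for seed in range(n_realizations)])

    n = len(p)
    H = np.zeros((n, n))
    for j in range(n):
        for k in range(j, n):
            e_j = np.zeros(n); e_j[j] = h[j]
            e_k = np.zeros(n); e_k[k] = h[k]
            H[j, k] = (f(p + e_j + e_k) - f(p + e_j - e_k)
                       - f(p - e_j + e_k) + f(p - e_j - e_k)) / (4 * h[j] * h[k])
            H[k, j] = H[j, k]  # the Hessian is symmetric
    return H
```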

Acknowledgements

We acknowledge support by the NOTUR and eScience programs of the Research Council of Norway. All network simulations were carried out with the neural simulation tool NEST (see http://www.nest-initiative.org).


Genetic algorithm

◮ Evaluating the cost function is computationally very expensive.
◮ We must therefore use a minimal number of individuals per generation and a strategy which converges quickly to the minimum.

We use a genetic algorithm with 20 individuals in each generation, and generational replacement except for an elitist rule whereby the best individual from the previous generation is kept. We employ roulette-wheel selection with linear ranking and crossover mating. The mutation probability is 0.01 per bit and the selective pressure is 2.0. A minimal sketch of this scheme is given below.
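A hedged sketch of one generation of this scheme, assuming a bitstring encoding of the parameter vector (this is an illustration of the stated settings, not the actual optimization code; the cost evaluation is left abstract):

```python
import numpy as np

POP, P_MUT, PRESSURE = 20, 0.01, 2.0   # the poster's GA settings
rng = np.random.default_rng(42)

def linear_ranking_probs(costs):
    """Selection probabilities from linear ranking with selective
    pressure s = 2.0: the best individual gets s/N, the worst (2-s)/N."""
    n = len(costs)
    ranks = np.argsort(np.argsort(costs))  # 0 = best (lowest cost)
    return (PRESSURE - (2 * PRESSURE - 2) * ranks / (n - 1)) / n

def next_generation(pop, costs):
    """Generational replacement with elitism: keep the best individual,
    fill the rest via roulette-wheel selection, one-point crossover
    and per-bit mutation. pop is a (POP, n_bits) boolean array."""
    probs = linear_ranking_probs(costs)
    children = [pop[np.argmin(costs)].copy()]  # elitist rule
    while len(children) < POP:
        i, j = rng.choice(POP, size=2, p=probs)  # roulette wheel
        cut = rng.integers(1, pop.shape[1])      # one-point crossover
        child = np.concatenate([pop[i][:cut], pop[j][cut:]])
        child ^= rng.random(len(child)) < P_MUT  # flip bits w.p. 0.01
        children.append(child)
    return np.array(children)

# usage: pop = rng.random((POP, n_bits)) < 0.5; costs from decoding
# each bitstring into a parameter vector and evaluating the cost.
```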

◮ Results from repeated searches using different initial generations:

2-dimensional parameter space

[Figure: $(g, \eta)$ cost landscapes for both cost-function scalings, overlaid with the optimization results.]

Figures show the best result from 50 optimizations using a genetic algorithm.

5-dimensional parameter space

Results and correlations:

Parameter   Result
g           14 ± 5
η           2.3 ± 0.6
J_psp       0.2 ± 0.1 mV
θ           29 ± 8 mV
τ_s         0.5 ± 0.2 ms

[Figure: matrix of correlation coefficients between the fitted parameters (color scale from −1.0 to 1.0).]

Variations in measured quantities:

[Figure: histograms of the rate (Hz), CV and total coherence (Hz) across fit results.]

Blue: target variability; Green: best results from the genetic algorithm using the cost function scaled by target value; Red: best results using the cost function scaled by experimental error.

◮ Comparing network activity from the “best” and “worst” result of the optimization:

[Figure: spike rasters (neuron index vs. time, 4000–5000 ms) for the best and the worst result from the GA.]

Conclusion

◮ Intrinsic variability in the network state ⇒ inevitable variability in fitted parameters
◮ Different observables have different precision
◮ The cost landscape in the vicinity of minima is often shallow along certain directions, i.e. similar performance for different parameter combinations
◮ The choice of cost function affects the accuracy of the genetic algorithm

References

Brunel N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8(3): 183–208.

Gutenkunst RN, et al. (2007). Universally Sloppy Parameter Sensitivities in Systems Biology Models. PLoS Comput Biol 3(10): e189.

Lapicque L (1907). Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J Physiol Pathol Gen 9: 620–635.

Tuckwell HC (1988). Introduction to Theoretical Neurobiology, vol. 1. Cambridge University Press.
