
J Comput Neurosci, DOI 10.1007/s10827-010-0305-9

The variance of phase-resetting curves

G. Bard Ermentrout · Bryce Beverlin II · Todd Troyer · Theoden I. Netoff

Received: 19 May 2010 / Revised: 1 December 2010 / Accepted: 14 December 2010. © Springer Science+Business Media, LLC 2011

Abstract  Phase resetting curves (PRCs) provide a measure of the sensitivity of oscillators to perturbations. In a noisy environment, these curves are themselves very noisy. Using perturbation theory, we compute the mean and the variance of PRCs for arbitrary limit cycle oscillators when the noise is small. Phase resetting curves and phase dependent variance are fit to experimental data; for comparison, the variance is also fit using an ad-hoc method. The theoretical, phase dependent variance curves match both simulations and experimental data significantly better than the ad-hoc method. A dual cell network simulation is compared to predictions using the analytical phase dependent variance estimation presented in this paper. We also discuss how entrainment of a neuron to a periodic pulse depends on the noise amplitude.

Keywords  Phase resetting · Noise · Neural oscillators · Variance · Synchrony

Action Editor: Charles Wilson

G. B. Ermentrout, Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA. e-mail: [email protected]

B. Beverlin II, Department of Physics, University of Minnesota, Minneapolis, USA. e-mail: [email protected]

T. Troyer, Department of Biology, University of Texas at San Antonio, San Antonio, Texas, USA. e-mail: [email protected]

T. I. Netoff (corresponding author), Department of Biomedical Engineering, University of Minnesota, Minneapolis, USA. e-mail: [email protected]

1 Introduction

Phase resetting curves (PRCs) have become increasingly popular as a tool to study response properties of neural and other biological oscillators (Winfree 1967; Forger and Paydarfar 2004). The PRC quantifies how the timing of a perturbation shifts the timing of the rhythm, and PRCs have been widely studied in many biological systems (Ariaratnam and Strogatz 2001; Guevara and Glass 1982). The experimental measurement of these curves is associated with some degree of noisiness in the data (Reyes and Fetz 1993; Stoop et al. 2000; Galan et al. 2005; Netoff et al. 2005a). For example, if a neuron is injected with sufficient constant current to cause it to fire repetitively, the distribution of interspike intervals (ISIs) is often quite broad (Abouzeid and Ermentrout 2009). This means the measurement of phase can also be quite broad, thus producing a noisy PRC. Previously, we characterized this noise in experimentally measured PRCs using an ad-hoc fit function for the variance of the PRC and found the variance to be phase dependent (Netoff et al. 2005b). In a recent paper, Ermentrout and Saunders showed that phase-dependent noise may create noise-induced bifurcations in the PRC for a simple model neuron when compared to a deterministic system (Ermentrout and Saunders 2006). Furthermore, it has been shown that phase-dependent variance could affect


the dynamics of coupled oscillators, in the sense that a stable synchronous state with a flat variance could be rendered unstable by applying a greater variance at zero phase (Ly and Ermentrout 2010).

In this note, we use perturbation methods to determine the phase-dependence of the variance for arbitrary phase-resetting curves. We show that the variance is not a simple function of the PRC but, rather, a functional involving the PRC, its derivative, and the integral of its square. We fit PRCs and ad-hoc variance curves to experimental data obtained from hippocampal pyramidal neurons using a dynamic patch clamp technique. Individually fit PRCs are used to calculate analytical variance functions as presented in this paper. To demonstrate the effect of using the analytical phase dependent variance compared to an ad-hoc variance fit, we construct a dual cell network simulation. We find that the analytical variance presented in this paper predicts the synchrony of the dual cell network better than a flat (constant, independent of phase) variance.

2 Derivation

The Phase Resetting Curve (PRC) of an oscillator characterizes how the timing of the oscillation is shifted as a function of the timing of a perturbation. There are many different techniques for experimentally measuring PRCs (Torben-Nielsen et al. 2010); many of them involve defining the time of a spike to be the zero phase, applying a brief current pulse after a fixed time $\tau$, and measuring how this shifts the time of the next event. If a current $I_0$ is injected for a short length of time, $w$, then the injected charge is just $wI_0$ and the resulting voltage jumps by $\beta = wI_0/C$, where $C$ is the capacitance. Then the PRC, $P(\beta, \tau)$, is parameterized by the stimulus magnitude and the time of the pulse and is defined by

$$P(\beta, \tau) = T - \hat{T}(\beta, \tau),$$

where $T$ is the natural period of the oscillator and $\hat{T}(\beta, \tau)$ is the time of the next spike. The natural period $T$ can be viewed as resulting from a stimulus with zero magnitude: $\hat{T}(0, \tau) = T$. Thus, $P(0, \tau) = 0$. As long as $P$ is sufficiently smooth, it follows for small $\beta$ that $P(\beta, \tau) \approx \Delta(\tau)\beta$. The function $\Delta(\tau)$, which arises in any theory of small perturbations of limit cycles, has units of msec/mV and is called the infinitesimal PRC, or iPRC. It should be noted that in experiments it may only be possible to inject current to perturb the cell, so the phase may not be manipulated directly. In the remainder of this paper, we will assume that we are in the linear range and that the PRC is proportional to the iPRC. The iPRC can be estimated experimentally by letting the stimulus approach an infinitesimal conductance (Preyer and Butera 2005) or current (Netoff et al. 2005a); in this limit the two are equivalent (Achuthan and Canavier 2009). We will refer to the PRC when the stimulus is of finite amplitude and duration, and to $\Delta$, the iPRC, when we do the derivation.
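As a concrete check of the units (using, for illustration, the stimulus parameters later applied to the Hodgkin–Huxley model in Fig. 3), a $w = 0.5$ msec pulse of $I_0 = 2\ \mu\mathrm{A/cm^2}$ into a capacitance of $C = 1\ \mu\mathrm{F/cm^2}$ gives

$$\beta = \frac{wI_0}{C} = \frac{(0.5\ \mathrm{msec})(2\ \mu\mathrm{A/cm^2})}{1\ \mu\mathrm{F/cm^2}} = 1\ \mathrm{mV}.$$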

2.1 Perturbations and the distribution of phase

We start with an arbitrary differential equation:

$$\frac{dX}{dt} = F(X)$$

and assume that there is an orbitally stable limit cycle solution, $X_0(t)$, with period $T$. We define the phase of the oscillation, $\theta(t)$, as a circular variable representing how far the oscillation has progressed along its limit cycle, i.e. we write $X_0(t) = X_0(\theta(t))$. The phase is often dimensionless and defined on $[0, 1)$ or $[0, 2\pi)$. We will view the phase as a time-like variable defined on the interval $[0, T)$.

We now introduce a general time-dependent perturbation $G(t, X(t))$, which includes both the pulse stimulus delivered for computing the PRC and the background noise:

$$\frac{dX}{dt} = F(X(t)) + G(t, X(t)) \qquad (1)$$

If $G$ is sufficiently small, then we can retain phase coordinates (Kuramoto 1984), with $X(t) = X_0(\theta(t))$ and

$$\frac{d\theta}{dt} = 1 + Z(\theta) \cdot G(t, X_0(\theta(t))).$$

$Z(\theta)$ is the solution to a certain linear differential equation (called the adjoint equation) and can be easily found numerically for any limit cycle. In general, the function $G$ perturbs multiple components of the state vector $X(t)$; each component of the vector-valued function $Z(\theta)$ represents the iPRC for perturbations applied to that component. For example, the voltage component of $Z(\theta)$ is equal to $\Delta(\theta)$, the iPRC for voltage perturbations resulting from brief current pulses.
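To illustrate how $Z(\theta)$ can be found numerically, the following sketch integrates the adjoint equation $dZ/dt = -J(X_0(t))^{\mathsf T}Z$ backward in time, which makes the periodic adjoint solution attracting, and then imposes the normalization $Z(t)\cdot F(X_0(t)) = 1$. The model here (a planar oscillator with a unit-circle limit cycle) and all names are illustrative assumptions, not the systems used in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative model: a planar oscillator whose limit cycle is the unit
# circle traversed at unit angular velocity, so the answer is checkable.
def F(x):
    u, v = x
    r2 = u * u + v * v
    return np.array([u * (1 - r2) - v, v * (1 - r2) + u])

def J(x):  # Jacobian of F
    u, v = x
    return np.array([[1 - 3 * u * u - v * v, -2 * u * v - 1],
                     [-2 * u * v + 1, 1 - u * u - 3 * v * v]])

T = 2 * np.pi                                      # period of the limit cycle
x0 = lambda t: np.array([np.cos(t), np.sin(t)])    # limit cycle solution

# Adjoint equation dZ/dt = -J(x0(t))^T Z. Integrated backward in time,
# the periodic adjoint solution is attracting, so an arbitrary initial
# condition converges to it (up to scale).
rhs = lambda t, z: -J(x0(t)).T @ z
sol = solve_ivp(rhs, [5 * T, 0.0], [0.0, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-9)

# Sample one period and impose the normalization Z(t) . F(x0(t)) = 1,
# which makes d(theta)/dt = 1 along the unperturbed cycle.
ts = np.linspace(0.0, T, 200)
Z = np.array([sol.sol(t) / (sol.sol(t) @ F(x0(t))) for t in ts])
# For this model, Z(t) approaches (-sin t, cos t), the exact iPRC.
```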

We can simplify the above phase model substantially if we consider instantaneous current pulses (Dirac delta functions) with a specified magnitude, $\beta$, and additive noise, $\varepsilon\xi(t)$, applied only to the voltage equation, where $\xi(t)$ is zero-mean white noise (formally, the derivative of a Wiener process). Then the voltage


perturbations can be written $G_V(t, X_0(\theta(t))) = \varepsilon\xi(t) + \beta\delta(t - \tau)$ and

$$\frac{d\theta}{dt} = 1 + \left[\varepsilon\xi(t) + \beta\delta(t - \tau)\right]\Delta(\theta) + O(\varepsilon^2). \qquad (2)$$

Higher order terms come from the Itô correction (e.g., in the case of white noise, $(\varepsilon^2/2)\Delta'(\theta)\Delta(\theta)$) (Ito 1946; Kloeden and Platen 1992; Gardiner 2004). We assume that the noise amplitude $\varepsilon$ is small, and formally expand $\theta(t)$ as a series in orders of $\varepsilon$:

$$\theta(t) = \theta_0(t) + \varepsilon\theta_1(t) + \cdots.$$

Note that $\theta_0(t)$ represents the deterministic component of the phase and $\theta_1(t)$ is the noise-induced deviation. Substituting this into Eq. (2) and gathering terms to first order, we obtain:

$$\frac{d\theta_0}{dt} = 1 + \beta\delta(t - \tau)\Delta(\theta_0) \qquad (3)$$

$$\frac{d\theta_1}{dt} = \xi(t)\Delta(\theta_0(t)) + \beta\delta(t - \tau)\Delta'(\theta_0(t))\,\theta_1(t). \qquad (4)$$

Here, we have approximated $\Delta(\theta(t))$ by $\Delta(\theta_0(t) + \varepsilon\theta_1(t))$ and expanded in a Taylor series to get Eq. (4).

Integrating Eq. (3), we obtain

$$\theta_0(t) = t + \beta H(t - \tau)\Delta(\tau) \qquad (5)$$

where the step function $H(x)$ arises as the integral of the Dirac delta function. We next integrate $\theta_1(t)$:

$$\theta_1(t) = \int_0^t \xi(s)\Delta(\theta_0(s))\,ds + \beta\Delta'(\tau)H(t - \tau)\,\theta_1(\tau^-).$$

We use $u(t^-)$ to denote the limit of $u(x)$ as $x$ approaches $t$ from below, and have used the fact that $\theta_0(\tau^-) = \tau$. For $t$ less than the time of the stimulus $\tau$, $\theta_1(t)$ is just given by

$$\theta_1(t) = \int_0^t \xi(s)\Delta(s)\,ds.$$

This integral can be used to evaluate the term $\theta_1(\tau^-)$ in the second part of the equation. We can now write

$$\theta_1(t) = \int_0^t \xi(s)\Delta(\theta_0(s))\,ds + H(t - \tau)\,\beta\Delta'(\tau)\int_0^\tau \xi(s)\Delta(s)\,ds. \qquad (6)$$

The first integral represents the integrated noise perturbations, and the last term represents the direct effect of the stimulus, which depends on the integral of the noise at the time of the stimulus. For $t > \tau$, we can evaluate the step functions and rewrite this as

$$\theta_1(t) = \left(1 + \beta\Delta'(\tau)\right)\int_0^\tau \xi(s)\Delta(s)\,ds + \int_\tau^t \xi(s)\Delta(s + \beta\Delta(\tau))\,ds. \qquad (7)$$

Note that for $t > \tau$ the time dependence of $\theta_1(t)$ is confined to the upper limit of the second integral. The first integral is the drift in the phase due to noise prior to the perturbation, plus the deviation in the resetting caused by that drift; the second integral is the noise-driven drift after the perturbation.

2.2 Mean and variance of the interspike interval

Recall that in the noiseless case we defined the PRC by $P(\beta, \tau) = T - \hat{T}(\beta, \tau)$, where $\hat{T}(\beta, \tau)$ is the perturbed interspike interval, defined as the time at which the phase equals $T$: $\theta(\hat{T}) = T$. For mathematical convenience we make the assumption that $\tau + \beta\Delta(\tau) < T$, which guarantees that the neuron will not spike instantaneously upon receiving the perturbation. In the presence of noise, the interspike interval is stochastic, and we can define the mean and variance of the PRC as the mean and variance of $T - \hat{T}(\beta, \tau)$. To compute these statistics, we expand $\hat{T}$ in orders of $\varepsilon$ and use our previous expansion of phase to obtain

$$\theta_0(T_0 + \varepsilon T_1) + \varepsilon\theta_1(T_0 + \varepsilon T_1) + O(\varepsilon^2) = T.$$

Here, we have approximated $\hat{T} = T_0 + \varepsilon T_1$. We evaluate $t$ in Eq. (5) at $\hat{T} = T_0 + \varepsilon T_1$ since we need order-$\varepsilon$ accuracy. We evaluate $t = \hat{T} = T_0$ in Eq. (6) since $\theta_1$ is already order $\varepsilon$.

Gathering terms of the same order, we obtain:

$$T_0 + \beta\Delta(\tau) = T \qquad (8)$$

$$T_1 + \theta_1(T_0) = 0 \qquad (9)$$

Therefore, $T_0 = T - \beta\Delta(\tau)$ and $T_1 = -\theta_1(T_0)$. Since the noise $\xi(s)$ has zero mean, the mean PRC is equal to $T - T_0 = \beta\Delta(\tau)$, the same as in the noiseless case. Thus, the effect of the noise on the mean PRC is negligible, at least up to first order in $\varepsilon$. Letting $E[X]$ denote the expected value of the random variable $X$, the variance of the PRC is equal to $E[(\hat{T} - T - E[\hat{T} - T])^2] = \varepsilon^2 E[T_1^2] = \varepsilon^2 E[\theta_1(T_0)^2]$. In squaring $\theta_1(T_0)$ we note that $\xi(s)$ in the first and second integrals of Eq. (7) refer to non-overlapping intervals in time. If we assume


the noise is white, then $E[\xi(s)\xi(s')] = \delta(s - s')$, and we obtain the main result of this paper:

$$\mathrm{Var}(\tau) = \varepsilon^2\left(\left[1 + \beta\Delta'(\tau)\right]^2 \int_0^\tau \Delta^2(s)\,ds + \int_\tau^{T - \beta\Delta(\tau)} \Delta^2(s + \beta\Delta(\tau))\,ds\right). \qquad (10)$$

Equation (10) is appealing in that, when $\beta = 0$, it provides an expression for the variance of the interspike interval of the noisy oscillator:

$$\mathrm{Var}_{\mathrm{ISI}} = \varepsilon^2 \int_0^T \Delta^2(s)\,ds.$$

Once the mean iPRC, $\Delta(t)$, is known, this can be used to estimate the strength of the noise, $\varepsilon^2$. This means that there are no free parameters in Eq. (10).

We refer to this method as the phase dependent variance, or PDV. This approach can be generalized to include stimuli and noise correlations having finite duration; however, this introduces several mathematical complications that are beyond the scope of this paper. The assumptions of fast noise and short pulses provide reasonable approximations to many experimental situations. In the remainder of this paper, we compare Eq. (10) to numerically computed statistics for equations of the form of Eq. (2), for the Hodgkin–Huxley equations, and to experimental data.
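For readers who wish to evaluate Eq. (10) numerically, a minimal sketch follows. It assumes a phase defined on $[0, T)$, an iPRC supplied as a callable, and uses trapezoidal quadrature with a finite-difference derivative; all names are ours, not from any code accompanying the paper.

```python
import numpy as np

def pdv(tau, beta, eps, iprc, T, n=2000):
    """Phase dependent variance of the PRC, Eq. (10).

    tau  : time of the stimulus within the cycle (0 <= tau < T)
    beta : stimulus magnitude (voltage jump, mV)
    eps  : noise amplitude
    iprc : callable, the infinitesimal PRC Delta(theta) on [0, T)
    """
    h = 1e-6 * T
    dprc = (iprc(tau + h) - iprc(tau - h)) / (2 * h)   # Delta'(tau)

    # first term: variance accumulated before the pulse, rescaled by the pulse
    s1 = np.linspace(0.0, tau, n)
    term1 = (1 + beta * dprc) ** 2 * np.trapz(iprc(s1) ** 2, s1)

    # second term: variance accumulated after the (shifted) pulse
    s2 = np.linspace(tau, T - beta * iprc(tau), n)
    term2 = np.trapz(iprc(s2 + beta * iprc(tau)) ** 2, s2)

    return eps ** 2 * (term1 + term2)

# example: a type II iPRC, Delta(theta) = sin(2*pi*theta), period T = 1
Delta = lambda th: np.sin(2 * np.pi * th)
print(pdv(tau=0.3, beta=0.2, eps=0.1, iprc=Delta, T=1.0))
```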

2.3 Underlying sources of phase dependent variance

Greater insight into the sources affecting PRC variance can be obtained by changing variables in the second integral and assuming that the perturbation is small, so that $[1 + \beta\Delta'(\tau)]^2 \approx 1 + 2\beta\Delta'(\tau)$. Then

$$\mathrm{Var}(\tau) \approx \varepsilon^2\left(\left[1 + 2\beta\Delta'(\tau)\right]\int_0^\tau \Delta^2(s)\,ds + \int_{\tau + \beta\Delta(\tau)}^{T} \Delta^2(s)\,ds\right)$$

$$= \varepsilon^2\left(\int_0^T \Delta^2(s)\,ds + 2\beta\Delta'(\tau)\int_0^\tau \Delta^2(s)\,ds - \int_\tau^{\tau + \beta\Delta(\tau)} \Delta^2(s)\,ds\right).$$

The first term is just the variance of the unperturbed interspike interval distribution. The second term is equal to the derivative of the PRC, $\beta\Delta'(\tau)$, multiplied by the variance of the phase at the time of the stimulation, $\varepsilon^2\int_0^\tau \Delta^2(s)\,ds$. Thus, the stimulus acts to compress or expand the variance, depending on the sign of the slope of the PRC. This can be understood by considering a positive perturbation given at a time when the PRC is increasing. In

this case, trajectories that are phase advanced relative to the mean phase will experience a more positive phase shift than average, whereas trajectories that are phase delayed will experience a less positive shift. This is illustrated in Fig. 1. This will result in an increase in the latent phase variance. Conversely, for a positive pulse given when the PRC is decreasing, trajectories that are phase advanced will get a less positive shift than trajectories that are phase delayed, causing the latent phase variance to decrease.

To understand the third term, consider a case where the mean PRC at the time of the stimulus is positive, $\beta\Delta(\tau) > 0$. The pulse will cause an overall phase advance, reducing the variance by the amount that would have accumulated for the phases that are 'skipped over' due to the perturbation. If the pulse causes a phase delay, then $\tau + \beta\Delta(\tau) < \tau$ and the sign of the integral will be reversed.

[Figure 1 here: the map from $\theta(i)$ to $\theta(i+1)$, plotted for phases 0.6–1.]

Fig. 1 Illustration of how the PRC affects variance. Immediately following an action potential, the phase is very well defined. However, because neurons are noisy, as time progresses the phase of the neuron becomes more uncertain, indicated by a probability distribution. When the synaptic input is applied, the distribution is mapped through the PRC to determine the new phase and then integrated until the end of the phase to determine the final distribution. Here we illustrate a synaptic input applied late in the phase, at 0.8 (note that phase from 0.6–1 is plotted), where the uncertainty is illustrated as a Gaussian distribution around the mean phase. The mean and points one standard deviation above and below the mean are mapped through the PRC to illustrate the distribution after the stimulus. The original distribution is plotted with dotted lines on the y-axis for comparison. If the slope of the map, $1 + \mathrm{PRC}'(\tau)$, is greater than 1, the distribution is widened after the stimulus; if it is less than 1, the distribution will contract.


The final term represents additional variance that accumulates as the phases between $\tau + \beta\Delta(\tau)$ and $\tau$ are 'replayed.'

3 Comparison to simulations and experiments

In order to test the asymptotic theory for the phase dependent variance of Eq. (10), we simulate a variety of models, ranging from simple phase models to biophysical models such as the Hodgkin–Huxley equations. In each case, we start the model equations on their limit cycle at an initial condition that corresponds to the zero phase. For biophysical models, we use the point at which $V(t)$ crosses 0, since, with noise, the peak of the action potential cannot be accurately determined. We add white noise of a specified magnitude. Small brief pulses of current are given at different times during the cycle and the time of the next crossing is found. We run each set of stimuli 1,000 times. We compute the PRC by subtracting the resulting list of crossing times from the unforced, noise-free period. Then, for each stimulus time, we compute the mean shift and the standard deviation. These are plotted in the figures. Captions give the magnitude of the noise as well as the shape of the perturbing stimulus.
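A minimal sketch of this Monte Carlo protocol for the phase model of Eq. (2), using an Euler–Maruyama discretization, might look as follows; the current pulse is applied as an instantaneous jump $\beta\Delta(\tau)$, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def prc_trials(tau, beta, eps, iprc, T=1.0, dt=1e-4, ntrials=1000):
    """Monte Carlo PRC samples for the phase model of Eq. (2).

    Euler-Maruyama on d(theta) = dt + eps*iprc(theta)*dW, with the
    current pulse applied as an instantaneous jump beta*iprc(theta)
    at time tau. Returns the sampled phase shifts T - T_hat.
    """
    shifts = np.empty(ntrials)
    for k in range(ntrials):
        theta, t, kicked = 0.0, 0.0, False
        while theta < T:
            if (not kicked) and t >= tau:
                theta += beta * iprc(theta)     # delta-function pulse
                kicked = True
            theta += dt + eps * iprc(theta) * np.sqrt(dt) * rng.standard_normal()
            t += dt
        shifts[k] = T - t                       # PRC sample for this trial
    return shifts

Delta = lambda th: np.sin(2 * np.pi * th)       # type II iPRC, T = 1
samples = prc_trials(tau=0.3, beta=0.2, eps=0.1, iprc=Delta)
print(samples.mean(), samples.std())            # compare with Eq. (10)
```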

The simplest comparison of the theory with simulations is to solve Eq. (2) and compute the relevant statistics for simple PRCs. Two standard PRCs are $-\sin\theta$ (Type II) and $1 - \cos\theta$ (Type I), as they arise near bifurcations to limit cycles (Brown et al. 2004). Figure 2(a), (b) shows that the phase dependence of the standard deviation (the square root of the variance) is quite different for the two types of PRC. The match with Monte Carlo simulations (1,000 simulations at each of 50 different phase points) is excellent. Figure 2(c) shows that the relationship between the standard deviation and the mean (the shape of the actual PRC) is not simple. That is, for example, it is not the case that the standard deviation is maximal where the slope of the PRC is maximal. Figure 2(d) shows the shape of the standard deviation as the PRC is transformed from type I to type II via $\Delta(\theta) = A(r)(r\sin 2\pi\theta + (1 - r)(1 - \cos 2\pi\theta))$, where $A(r)$ is chosen so that the integral of $\Delta^2(\theta)$ is constant. Recall that this integral sets the variance of the ISI. Type II PRCs have a minimum variance near the onset of the spike and a maximum about halfway through the cycle. Type I PRCs have a nearly constant variance except for a dip three quarters of the way through the cycle. Figure 3 shows the theory applied to the Hodgkin–Huxley model for the squid axon. A constant current (10 μA/cm²) is applied to the model to generate approximately 60 Hz oscillations.

[Figure 2 here: panels (a)–(d), standard deviation versus time, SD versus mean, and SD for $r$ from 0 to 1.]

Fig. 2 The standard deviation (square root of the variance) for two examples of Eq. (2), comparing simulation of 1,000 trials with Eq. (10). (a) $\Delta(\theta) = \sin 2\pi\theta$ and (b) $\Delta(\theta) = 1 - \cos 2\pi\theta$. (Insets show the two PRCs.) (c) Plot of standard deviation versus the mean. (d) Plot of the standard deviation for a series of PRCs of the form $A(r)(r\sin 2\pi\theta + (1 - r)(1 - \cos 2\pi\theta))$, where $A(r)$ is chosen so the $L^2$ norm is constant. Stimulus consists of noise with ε = 0.1 and a rectangular pulse of amplitude 2 and width $0.125/2\pi$.

A white noise stimulus is added to the voltage equation with variance 0.0625 mV²/msec. To compute the PRC, perturbations of magnitude 2 μA/cm² for 0.5 msec are applied at a series of 50 times during the cycle. We compute the time of the zero crossings of the potential to estimate the effect of a perturbation. Figure 3(a) shows the raw data for 100 of the 1,000 trials along with the mean PRC and the PRC with no noise; the latter two curves are nearly indistinguishable. Using the noise-free PRC as an approximation to $\Delta(\theta)$, we apply Eq. (10) to estimate the SD. Figure 3(b) shows the SD of the data in panel (a) (points) compared to the theory (smooth curve). There is a small discrepancy at the half-cycle point, where the SD is larger than the theory predicts, but the overall shape and magnitude are very similar.

3.1 Application of phase dependent variance to real neuronal data

We have previously shown a phase dependence of the variance in spike advance from neuronal data collected from hippocampal excitatory pyramidal neurons (Netoff et al. 2005b). In this section, we compare the accuracy of describing the phase dependent variance of a real neuron using the analytical phase dependent variance versus an ad-hoc method we have used previously.


[Figure 3 here: panels (a) and (b), time shift (msec) and s.d. (msec) versus time (msec).]

Fig. 3 Hodgkin–Huxley model. The standard HH model is injected with 10 μA/cm² current and ε = 0.25 mV/ms$^{1/2}$ white noise. The stimulus consists of a 0.5 ms current pulse with amplitude 2 μA/cm². (Since the capacitance is 1 μF/cm², the total voltage shift is 1 mV.) Zero crossings of the voltage determine the timing. (a) Distribution of the time shifts for 100 trials at 50 time points during the cycle. Superimposed are the PRC computed for the deterministic system (no noise) and the average PRC computed from a simulation with 1,000 trials; note that the two curves are indistinguishable. (b) Standard deviation determined from the 1,000 trials (jagged dotted line) and standard deviation from Eq. (10) (solid line).

The two phase dependent variance functions are fit to the residuals of the PRC fit using maximum likelihood estimation. The accuracy of the fit to the variance is measured using a $\chi^2$ metric.

3.1.1 Fitting phase dependent variance distribution to neuronal data

In order to validate our theory with real neuronal data, we fit the variance function (Eq. (10)) to the data with a maximum likelihood fit. If the assumed variance is too large, many points will be close to the mean, but the probability density at the mean will be low, so the total likelihood will be small. If the assumed variance is too small, too many points will look like outliers and the total likelihood will again be small. When the coefficients of our function estimating the variance fit the data, the likelihood is maximized. Therefore, we can use a gradient descent method to maximize the likelihood and find the coefficients of our phase dependent variance functions (Harris and Stocker 1998). For the analytical phase dependent variance, ε and β are estimated by maximizing the likelihood. The likelihood is computed from the probability of observing the data given a function for the expected distribution: the probability of observing a particular set of residuals is the product of the probabilities of the observation at each point. If the residuals at a particular phase are assumed to be Gaussian, the total probability can be calculated as:

$$L(\varepsilon, \beta) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_i(\theta\,|\,\varepsilon, \beta)}\, \exp\!\left(-\frac{(y_i - \Delta(\theta_i))^2}{2\,\sigma_i(\theta\,|\,\varepsilon, \beta)^2}\right) \qquad (11)$$

where $\sigma_i(\theta\,|\,\varepsilon, \beta)$ is the expected standard deviation at phase $\theta_i$ given the coefficients of the phase dependent variance, ε and β, and $y_i - \Delta(\theta_i)$ are the residuals of the PRC. For numerical reasons, this equation is usually calculated as the sum of the logs of the probabilities:

$$\ell = -\frac{N}{2}\log(2\pi) - \sum_{i=1}^{N}\log\!\left(\sigma_i(\theta\,|\,\varepsilon, \beta)\right) - \frac{1}{2}\sum_{i=1}^{N}\left(\frac{y_i - \Delta(\theta_i)}{\sigma_i(\theta\,|\,\varepsilon, \beta)}\right)^2, \qquad (12)$$

where $\sigma^2$ is the variance. Thus, we fit both the ad-hoc and the analytical model by maximizing the log-likelihood $\ell$, which provides us with the values for ε and β in Eq. (10).
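A sketch of this fit, assuming hypothetical arrays `theta` and `resid` of stimulus phases and PRC residuals, a fitted iPRC `Delta`, and the `pdv` function from the sketch in Section 2.2; we minimize the negative log-likelihood of Eq. (12) with scipy (names and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, theta, resid, iprc, T=1.0):
    """Negative of Eq. (12); resid[i] = y_i - Delta(theta_i)."""
    eps, beta = params
    if eps <= 0:
        return np.inf
    sigma = np.sqrt([pdv(t, beta, eps, iprc, T) for t in theta])
    return (np.sum(np.log(sigma))
            + 0.5 * np.sum((resid / sigma) ** 2))   # constant term dropped

fit = minimize(neg_log_likelihood, x0=[0.1, 0.2],
               args=(theta, resid, Delta), method="Nelder-Mead")
eps_hat, beta_hat = fit.x
```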

3.1.2 Application to experimental data

In this section we test the analytical phase dependent variance against experimentally obtained PRCs measured from pyramidal neurons in the hippocampal formation. Its accuracy is compared to that of an ad-hoc phase dependent variance function we have used in a previous paper (Netoff et al. 2005b).

The data are pulled from the Netoff lab's experimental database, which contains phase response curves from hundreds of neurons collected for various experiments (Pervouchine et al. 2006; Netoff et al. 2005a, b). Details of the experimental protocols are published in those papers. Briefly, neurons are patch clamped using


the whole-cell patch clamp technique. PRCs are measured using a dynamic patch clamp running in real-time Linux (Dorval et al. 2001). The dynamic clamp is used to (1) control the spiking rate of the neuron using a closed-loop spike rate controller, (2) deliver current pulses to simulate synaptic conductance, and (3) record the pre- and post-stimulus ISIs to create a PRC. In all cases, the neuron is controlled to fire at 10 Hz by adjusting the applied current. The phase response curve is measured using a stimulus waveform resembling a synaptic conductance. The synaptic conductance waveform is an alpha function. The synaptic current is time dependent and is calculated as:

$$\alpha = A\left(e^{-t/\tau_f} - e^{-t/\tau_r}\right)(V_m - E_{syn}), \qquad (13)$$

where $A$ controls the stimulus amplitude, $\tau_r$ and $\tau_f$ are the characteristic rise and fall times of the synaptic inputs, with values of 2.61 msec and 6.23 msec, respectively, $E_{syn} = 0$ is the reversal potential of the synapse, and $V_m$ is the membrane voltage of the cell. Synaptic input is applied at a randomly selected phase on every 6th period and the resulting period recorded. Phase advances caused by synaptic inputs are fit with the following polynomial to estimate the PRC:

$$\Delta(\theta) = \theta(\theta - 1)\left(a_3\theta^3 + a_2\theta^2 + a_1\theta\right).$$

This function forces the PRC to be zero at $\theta = 0$ and $\theta = 1$. The polynomial is fit to the data with a least squares fit.
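Because the constraint is built into the basis functions, the fit reduces to ordinary linear least squares. A sketch, assuming hypothetical data arrays `theta` and `advance` of stimulus phases and measured phase advances:

```python
import numpy as np

# basis functions theta*(theta-1)*theta^k, k = 3, 2, 1, so the fitted
# PRC automatically vanishes at theta = 0 and theta = 1
X = np.column_stack([theta * (theta - 1) * theta**k for k in (3, 2, 1)])
(a3, a2, a1), *_ = np.linalg.lstsq(X, advance, rcond=None)

Delta = lambda th: th * (th - 1) * (a3 * th**3 + a2 * th**2 + a1 * th)
```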

To estimate the phase dependent variance, we subtract our fitted PRC function from each point to obtain the residuals. The variance functions are fit to the residuals by maximizing the likelihood, as described above. The analytical phase dependent variance function is compared to an ad-hoc method used to fit the variance in an earlier publication. An example of the standard deviation for the ad-hoc fit and the phase dependent fit is given in Fig. 4 for comparison. In the ad-hoc method, we assume that the error is a random walk, such that the variance increases linearly and the standard deviation increases as the square root of time. The standard deviation as a function of phase is calculated using the function:

$$\sigma(\theta) = Y + Z\sqrt{1 - \theta}, \qquad (14)$$

where $Y$ and $Z$ are fit by maximizing the likelihood function described in Eq. (12).

Residual values and a standard deviation fit for a real neuron are shown in Fig. 5. Although the fits do not seem particularly compelling, the phase dependent variance fits the residual data better than the ad-hoc method.

[Figure 4 here: SD versus phase for the theoretical and ad hoc fits; inset: PRC versus phase.]

Fig. 4 Comparison of this paper's theoretical estimate of the standard deviation using Eq. (10) to the ad hoc function of Eq. (14). The polynomial used to fit the PRC is $(-1.5967x^3 + 4.2555x^2 - 3.7951x + 0.5483)\,x(x - 1)$. Coefficients for the ad-hoc function are $0.0297 + 0.0710\sqrt{1 - x}$. Inset shows the PRC used to compute the standard deviations.

In this example, the ad-hoc method over-estimates the variance while the analytical method under-estimates it.

Phase dependent variance is estimated for PRCs measured from 74 different neurons in the hippocampal formation. The accuracy of the phase dependent variance is quantified using a reduced chi-squared statistic:

$$\chi^2_{\mathrm{reduced}} = \frac{1}{N - 1}\sum_{i=1}^{N}\left(\frac{y_i - \Delta(\theta_i)}{\sigma(\theta_i)}\right)^2,$$

where $\sigma^2$ is the variance of the PRC fit (Plackett 1983). The reduced $\chi^2$ value is the average of the squared residuals divided by the expected variance; $\chi^2 = 1$ if the estimate of the variance as a function of phase is ideal. For the 74 cells, the average reduced $\chi^2$ value for the ad-hoc variance method is 1.7697, while the value for the analytical form is nearly ideal at 0.9992. Equation (10) has a lower $\chi^2$ value than the ad-hoc method in 63 of 74 cells ($p = 1.07 \times 10^{-5}$ using Welch's t-test) (Welch 1947). Because the $\chi^2$ value is much closer to 1 on average and represents most data sets better than the ad-hoc method, we conclude that the phase dependent variance method is more accurate in describing the residuals of experimental data.
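Computing this statistic is straightforward once a fitted standard-deviation function is in hand; a sketch, reusing the hypothetical `theta` and `resid` arrays from the fitting sketch above:

```python
import numpy as np

def reduced_chi2(theta, resid, sigma):
    """Reduced chi-squared of PRC residuals against a fitted
    standard-deviation function sigma(theta); 1 is ideal."""
    z = resid / sigma(theta)
    return np.sum(z ** 2) / (len(resid) - 1)
```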

3.2 Dual cell simulations

In this section we test whether accounting for the phase dependent variance actually provides any significant improvement over using phase independent noise in a simulation. To answer this, we simulate a coupled


0 20 40 60 80 100−20

−10

0

10

20

30

40

50

Spi

ke ti

me

adva

nce

(mse

c)

Time since last spike (msec)0 0.2 0.4 0.6 0.8 1

−0.2

−0.15

−0.1

−0.05

0

0.05

0.1

0.15

0.2

Phase

Res

idua

ls (

phas

e)

(b)(a)

Fig. 5 PRC and phase dependent variance fits. (a) Experimentaldata collected from an excitatory pyramidal neuron in hippocam-pal brain slice with period of approximately 100 ms. A polynomialin solid line has been fit to the data with a 5th order polynomialconstrained at the beginning and end of the phase. (b) Residual

values from comparing raw data to the polynomial fit PRC. Thesolid line is the standard deviation fit using the phase dependentvariance method of this paper, Eq. (10) and the dashed line is thesquare root ad-hoc variance

pair of excitatory neurons using a Hodgkin–Huxley-like conductance-based model as described by Golomb and Amitai (1997). This model has the functional form:

$$\frac{dV}{dt} = \frac{1}{C}\left(I_{Na} + I_{NaP} + I_{Kdr} + I_{KA} + I_{Kslow} + I_L + I_s + \mathrm{Noise}\right)$$

where $C$ is the membrane capacitance, $V$ is the voltage difference between intracellular and extracellular space, $I_s$ is the synaptic current (an alpha function input as described in Eq. (13)), and the remaining terms are the ionic membrane currents: sodium (Na), persistent sodium (NaP), delayed rectifier potassium (Kdr), A-type potassium (KA), slow potassium (Kslow), and leak (L), plus a noise component, which is used to create spike time variability. The parameters used for the currents and membrane capacitance are as described by Golomb and Amitai (1997). The noise applied is of the same form assumed in Eq. (10).

When one neuron fires an action potential, a synaptic input is applied to the other neuron. Here we define an action potential as a positive zero-crossing of the voltage. This full scale network simulation is compared to simulations performed using an iterative PRC model, where the PRC and the variance are fit to data taken directly from the Golomb–Amitai model. In the iterative model, at each firing of a neuron in the network, the phase of the post-synaptic neuron is advanced according to the PRC, with a noise term whose amplitude is determined by the phase dependent variance equation. For both the full Golomb–Amitai model simulation and the two reduced PRC model

simulations (one with a constant, phase independent variance and the other with a phase dependent variance), a histogram of the spike time differences is made.

[Figure 6 here: normalized spike counts versus phase for the three simulations ("Var const", "Var phase", "GA Cell").]

Fig. 6 Histogram of spike counts as a function of oscillation phase for dual cell network simulations. Comparison of dual cell network simulations where the PRC determines the cell's phase advance to dual cell network simulations using full scale GA model neurons. Gray points are the result of a dual cell simulation using the polynomial fit PRC from experimental data with a flat variance based on the standard deviation of all residuals from the fit. X-marks are the result of using the experimentally fit PRC with the phase dependent variance of Eq. (10). Black points represent the full scale GA cell model's dual cell simulation. The simulation with phase dependent variance more accurately predicts the results of the GA model.


The results of the three simulations can be seen in Fig. 6. In the simulation where the phase dependent variance is accounted for, the iterated PRC model reproduces the full Golomb–Amitai model simulation much more accurately than the iterative model simulation using phase independent noise.
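A sketch of the reduced, iterative two-cell model: each neuron carries a phase that advances at unit rate; when one crosses threshold it resets and kicks the other through the fitted PRC plus a Gaussian noise term whose standard deviation is either flat or taken from Eq. (10). `Delta`, `pdv`, and all parameter values are the hypothetical objects from the earlier sketches, standing in for the fits to the Golomb–Amitai model.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_cell_sim(prc, sd, T=1.0, dt=1e-3, tmax=500.0):
    """Iterative PRC model of two pulse-coupled cells. prc(phase) is
    the phase advance produced by a synaptic input; sd(phase) is the
    standard deviation of the noise added to that advance (flat, or
    from Eq. (10)). Returns phase differences at each spike of cell 0."""
    phase = np.array([0.0, 0.37 * T])          # arbitrary initial phases
    diffs, t = [], 0.0
    while t < tmax:
        phase += dt
        for i in (0, 1):
            if phase[i] >= T:                  # cell i spikes and resets
                phase[i] -= T
                j = 1 - i
                phase[j] += prc(phase[j]) + sd(phase[j]) * rng.standard_normal()
                phase[j] = min(max(phase[j], 0.0), T)
                if i == 0:
                    diffs.append(phase[1] - phase[0])
        t += dt
    return np.array(diffs)

flat_sd = lambda th: 0.05                      # flat (phase independent) control
pdv_sd = lambda th: np.sqrt(pdv(th, beta=0.2, eps=0.1, iprc=Delta, T=1.0))
diffs_flat = two_cell_sim(Delta, flat_sd)
diffs_pdv = two_cell_sim(Delta, pdv_sd)        # compare the two histograms
```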

4 Neuronal entrainment as a function of ε and β

In this section we use the example of a neuron, or a population of uncoupled neurons, entrained to a periodic stimulus to illustrate how synchrony depends on the variance. An example PRC is used with a phase dependent variance calculated using Eq. (10). We determine how the entrainment depends on the parameters of the phase dependent variance by varying the amplitude of the noise, ε, and the amplitude of the stimulus pulse, β. A transfer operator is used to map a neuron's phase probabilistically into the next cycle. By analyzing this map, we can predict the steady state probability distribution of a neuron, or of a population of neurons over infinite time, as well as determine how quickly the neuron or population approaches this steady state solution. The purpose of this section is to provide a more intuitive understanding of Eq. (10).

4.1 Estimating synchrony in a stochastic system

For a periodically stimulated oscillator with PRC $\Delta(\theta)$, stimulated at period $P$, the new phase after each stimulus can be calculated as $\theta_{i+1} = \theta_i + T - P - \Delta(P - \theta_i)$. If the period of the stimulus and the neuron are the same, $T = P$, this equation becomes $\theta_{i+1} = \theta_i - \Delta(T - \theta_i)$. If the change in the phase is calculated from cycle to cycle, this is the H-function for a single oscillator with a periodic input: $H(\theta_i) = \theta_{i+1} - \theta_i = -\Delta(T - \theta_i)$ (Neu 1979; Goel and Ermentrout 2002). The fixed points of the system, where the neuron phase-locks to the stimulus, occur where the H-function crosses zero, indicating that there is no change in phase from one cycle to the next. The stable fixed points are those where the slope of the H-function at the zero crossing is negative.

This H-function is a map that determines how the phase changes from one stimulus cycle to the next for deterministic systems. In noisy systems, the phase advance is probabilistic, with trajectories that start at a single phase $\theta$ mapped onto a distribution of phases $\rho(\theta) = \mathcal{N}(\theta - \Delta(T - \theta), \sigma(\theta))$, where $\mathcal{N}(\mu, \sigma)$ represents a normal distribution with mean $\mu$ and standard deviation $\sigma$. This mapping can be used to define a Frobenius–Perron operator $\mathcal{P}$ that maps the density of phases at iteration $n$ onto the density of phases at iteration $n + 1$: $\rho_{n+1}(\theta) = \mathcal{P}\rho_n(\theta)$. This operator is linear, and by discretizing phase and taking a piecewise linear approximation to the densities $\rho(\theta)$, we can approximate the transfer operator by a transition matrix, which we also call $P$. The matrix $P$ has all positive entries and, because it conserves probability, the columns of $P$ sum to 1. As a result, the largest eigenvalue of $P$ is real and has magnitude 1. The corresponding eigenvector represents the steady state distribution of the dynamical system obtained from the iterated mapping, $\rho_n(\theta) = P^n\rho_0(\theta)$.

[Figure 7 here: top, PRC and H-function; center, the stochastic map; bottom and left, the phase distributions before and after the map.]

Fig. 7 Stochastic map. This figure illustrates the effect of a stimulus pulse on the phase distribution of a neuron or population of neurons. The top panel shows an arbitrary PRC (dashed line) with its corresponding H-function, $H(\theta)$ (solid line), as a function of phase. The central panel is a graphical illustration of the function that calculates the probability distribution of the neuron after the stimulus (y-axis) given the probability distribution prior to the stimulus (x-axis). The center of probability lies along the line of identity, to which the H-function is added, and the width of the probability distribution is determined by Eq. (10). The line of identity is indicated as a thin black line along the diagonal. Two examples of a priori phase distributions are shown: one is a Dirac delta function (vertical line in bottom panel), representing the probability distribution of the phase of a single neuron, or a synchronous population; the other is a uniform distribution (dotted line in bottom panel), representing an asynchronous population evenly distributed across the phases at the time the first stimulus is applied. The distribution after applying the map (left panel) indicates the probability distribution after the stimulus. The result of the stochastic map is that the delta function is mapped to a Gaussian and the flat distribution is no longer flat.


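A sketch of this construction: discretize phase into bins, build a column-stochastic transition matrix from the Gaussian kernel above, and take the leading eigenvector as the steady state distribution. `Delta` is the hypothetical PRC from the earlier sketches, and `sigma` may be any standard-deviation function, e.g. the square root of Eq. (10).

```python
import numpy as np

def transition_matrix(iprc, sigma, T=1.0, nbins=200):
    """Column-stochastic matrix approximating the Frobenius-Perron
    operator of the stochastic map theta -> theta - Delta(T - theta)
    plus Gaussian noise of s.d. sigma(theta), on a circle of length T."""
    th = (np.arange(nbins) + 0.5) * T / nbins
    P = np.zeros((nbins, nbins))
    for j, t in enumerate(th):                 # column j: starting phase t
        mu = t - iprc(T - t)                   # deterministic map (H-function)
        d = (th - mu + T / 2) % T - T / 2      # circular distance to the mean
        w = np.exp(-0.5 * (d / sigma(t)) ** 2)
        P[:, j] = w / w.sum()                  # columns sum to 1
    return P

P = transition_matrix(Delta, lambda t: 0.05)
vals, vecs = np.linalg.eig(P)
k = np.argmax(np.abs(vals))                    # leading eigenvalue (= 1)
steady = np.real(vecs[:, k])
steady /= steady.sum()                         # steady state phase density
```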

Figure 7 shows the stochastic map corresponding to a PRC equal to $\Delta(\theta) = 0.1\,\theta(1 - \theta)^{0.6}$. The gray scale shows the magnitude of the elements of $P$. The solid lines show the map applied to an initial state consisting of a delta function centered at a phase of 0.6. This delta function is mapped to a Gaussian distribution with its mean shifted to a phase slightly smaller than 0.6. Similarly, we can start with a population of unconnected neurons distributed uniformly in phase and iterate the distribution through the transfer function (dashed lines). The output distribution is not perfectly uniform, but slightly heavier at the negative zero crossings of the H-function and where the variance of the PRC is lower. In both cases, the phase distribution is attracted to the phase where the H-function crosses zero with a negative slope. In this example, the H-function has two zeros, one at each end of the phase. The crossing at $\theta = 0$ has a negative slope while the crossing at $\theta = 1$ has a positive slope. This indicates that the system is asymptotically stable from one side and unstable from the other, predicting that the neuron will synchronize to the stimulus as long as the stimulus is leading in phase; but the moment the neuron is pushed by noise into the leading position, the two will phase slip until the stimulus is leading again. This results in a sharp peak on the right hand side of the stimulus and a wider peak at the preceding phase. To show how the distributions change smoothly in time, we have included a movie that can be downloaded and viewed from http://neuralnetoff.umn.edu/public/PDV.

The transfer operator can be applied iteratively to a distribution of neurons to determine the distribution after $n$ stimuli, $\rho_{i+n}(\theta) = P^n\rho_i(\theta)$. In Fig. 8 we show how the Dirac delta function and the uniform distribution are iteratively mapped through the transfer function 2, 8, 32 and 64 times. The steady state solution of the phase distribution, for either the single neuron or the population, is the eigenvector with the largest eigenvalue of the transfer operator; this is plotted as the infinite time solution. Over subsequent stimuli, the distribution starting from the Dirac delta function or from the uniform distribution approaches the steady state solution. It does not matter whether the initial distribution is a delta function or a uniform distribution: the final solution is the same.

4.2 Synchrony dependency on ε and β

A neuron or population of neurons will synchronize to a periodic stimulus depending on the amplitude of the noise in the neuron, ε, and the amplitude of the stimulus, β.

[Figure 8 here: top, H-function; middle, spike distribution; bottom, population distribution, each after 2–64 iterations and at steady state.]

Fig. 8 Iteration of phase distributions through the stochastic map. Top panel shows $H(\theta)$ for two cycles, where the negatively sloped roots predict a synchronous solution at the corresponding phase. Middle panel, labeled "Spike Dist.", illustrates an initial distribution starting as a delta function mapped through the stochastic map for 1 through 64 iterations. The bottom panel, labeled "Pop. Dist.", represents a population of neurons starting with a uniform initial distribution mapped through the stochastic map. Two consecutive iterations are plotted next to each other so that zero phase is plotted in the center of the plot. The thick darkest line labeled "inf" is the infinite-time solution, computed from the eigenvector of the largest eigenvalue of the stochastic map.

How strongly a population of neurons synchronizes to a periodic stimulus can be determined by measuring the peak amplitude of the eigenvector with the largest eigenvalue. By varying these parameters we can determine how they affect the ability of the neuron to synchronize to the periodic stimulus.

In Fig. 9 the noise amplitude ε of the neuron is varied while the strength of the stimulus, β, is kept constant. Not surprisingly, as the noise amplitude is increased, the peak gets smeared out and the peak amplitude decreases. How quickly the neuron approaches the steady-state solution can also be determined from the transition matrix. The largest eigenvalue is always one, so the rate of convergence to the steady state is determined by the magnitude of the second eigenvalue, $|E_2|$. The closer $|E_2|$ is to one, the more slowly its mode decays; it competes with the first eigenvector, resulting in a longer time for the system to converge to steady state. By measuring $|E_2|$ as a function of ε, it can be seen that $|E_2|$ gets smaller as the noise amplitude


gets larger; this indicates that the system converges to the steady state faster in higher noise conditions. If the second eigenvalue $E_2$ is complex, the system will oscillate as it approaches the steady state solution. The angle $\angle E_2$ determines the frequency of the oscillation as it converges. It is difficult to show this oscillation in a static figure, but it can be seen in a movie that can be downloaded and viewed from http://neuralnetoff.umn.edu/public/PDV. As ε increases, $\angle E_2$ increases, indicating that the oscillations increase in frequency.
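These diagnostics come directly from the eigendecomposition of the transition matrix built in the sketch above, e.g.:

```python
import numpy as np

vals = np.linalg.eigvals(P)
vals = vals[np.argsort(-np.abs(vals))]       # sort by modulus; vals[0] ~ 1
E2 = vals[1]
rate = np.abs(E2)                            # closer to 1 -> slower convergence
freq = np.abs(np.angle(E2)) / (2 * np.pi)    # oscillations per stimulus cycle
```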

Similarly, Fig. 10 shows how synchrony depends on the amplitude of the stimulus, β, while the noise amplitude ε is kept constant. As β is decreased, the strength of the entrainment weakens and synchrony decreases.

[Figure 9 here: panels (a)–(c), synchrony, $|E_2|$, and frequency versus ε; panel (d), population density versus phase for ε = 0.002, 0.005, 0.01.]

Fig. 9 Synchrony as a function of neuronal noise amplitude ε. The eigenvector with the largest eigenvalue indicates the steady state phase distribution after an infinite number of stimuli. The bottom panel (d) shows the distribution of the stimulus phase for three values of ε with β = 1. As ε increases, noise smears the peak out and synchrony decreases. Panel (a) shows how the peak amplitude of the eigenvector changes as a function of ε. Panel (b) shows how the rate of convergence to the steady state solution depends on ε. The rate at which the population converges to the steady state solution is determined by the magnitude of the second eigenvalue, $|E_2|$; the closer $|E_2|$ is to one, the slower the system converges. As ε increases, $|E_2|$ decreases, indicating that the population converges to the steady state behavior faster with higher noise. Panel (c) shows the frequency of the oscillation of the population as it converges to the steady state solution; this is determined by the angle of $E_2$. As ε is increased, the angle of $E_2$ increases, indicating that the frequency of the oscillation increases.

[Figure 10 here: panels (a)–(c), synchrony, $|E_2|$, and frequency versus β; panel (d), population density versus phase for β = 1, 0.6, 0.28, 0.1.]

Fig. 10 Synchrony as a function of stimulus amplitude, β. (a) As β increases (with ε = 0.01), synchrony increases, measured by the peak amplitude of the eigenvector with the largest eigenvalue of the transition matrix. (b) The modulus of the second eigenvalue, $|E_2|$, varies non-monotonically with β, with an extremum near β = 0.45; the oscillator converges most quickly at intermediate stimulation amplitudes. (c) The phase angle of $E_2$ determines the frequency of oscillations on the way to convergence, which decreases with increasing β. (d) Eigenvectors for four different values of β. As the stimulus amplitude decreases, the noise term dominates and the population density becomes more spread across the oscillator's phase.

Surprisingly, the rate at which the network approaches the steady state phase distribution peaks at an intermediate value of β, indicating that the system converges fastest at intermediate values of β. As β decreases, the frequency at which the system oscillates as it approaches steady state, as measured by $\angle E_2$, increases.

In summary, as we decrease the noise, ε, or increase the stimulus strength, β, synchrony will be stronger. However, with higher noise and moderate stimulus amplitude, the population will converge to its steady state behavior faster.

5 Discussion

In this paper we have shown that noisy neural oscillators show a predictable phase-dependent variance in their PRCs that is primarily dependent only on the


PRC itself. We use a perturbation method to estimate the change in phase due to the noisy input. With this calculation, we are able to estimate many statistical parameters, such as the effects on the mean and, in particular, the variance of the PRC. We have used this perturbation calculation in other papers (Ly and Ermentrout 2009). There is no reason why a similar calculation could not be applied to more realistic forms of noise, such as colored noise arising from synapses, and, as we noted, to more realistic perturbations. The advantage of the white noise approach is that the autocorrelation is a delta function and, thus, the integrals are easy to evaluate.

Our calculations are predicated on the idea that the magnitude of the noise and of the perturbations is small, because the calculations are based on an asymptotic theory. Thus, they may be of limited use in realistic experimental settings. Indeed, in the high noise case, the notion of phase becomes more difficult to define precisely. On the other hand, numerical simulations, which work no matter what the size of the perturbation or the stimulus, are of limited value for improving understanding of the very general phenomenon of phase-dependent variance. Analytic calculation, even in somewhat unrealistic limits, can help fill in that understanding. Since the analytic calculations are approximations, they are valid up to some limit; the limit is the maximum amplitude such that we remain in the linear regime.

By examining the variance of phase resetting curves of real neurons, we have shown that the result of this paper characterizes the residual data quite well. The noisy nature of these periodically firing cells provides an ideal application of the phase dependent variance theory, which fits significantly better than average-variance and ad-hoc methods such as Eq. (14). Not only does our theory bode well for characterizing individual noisy cell dynamics, it also provides insight into describing network behavior. In modeled dual cell networks, the phase dependent variance produces behavior which closely follows biologically relevant Hodgkin–Huxley-style conductance models such as the Golomb–Amitai model with several ionic currents. One could easily apply this theory to much larger networks to analyze the effect of including a phase dependence of the noise.

The noise amplitude ε and stimulus amplitude β in Eq. (10) affect the dynamics of a stimulated cell. A neuron's ability to synchronize to a periodic stimulus is likely to be important to network dynamics, especially those involved in diseases such as epilepsy and Parkinson's disease. It is interesting that, by increasing the noise amplitude, the cell converges to the synchronous solution more

rapidly, while the population distribution of the stimulated cell becomes sharper as the noise amplitude is decreased. In a sense, the noise allows the system to reach the steady state solution faster by providing more variability in the phase of oscillation, while keeping greater variability in synchrony. This has important implications for the strength of the noise in a neural system: too much noise spreads the synchronous solution out, while too little noise causes the system to take longer to reach the synchronous solution. When varying the stimulus amplitude β, we find greater synchrony and an increased rate of convergence with greater pulse amplitude. Although the results of varying β may be as intuitively expected, it must be balanced against the noise amplitude, depending on the structure of the network and the firing rate of the individual neurons, when considering the application to an entire system of cells.

We believe the theory of this paper will be more useful for experiments and simulations than previously used methods of estimating the variance of a noisy neuronal phase resetting curve, which have been largely based on averages or assumptions about the general shape of such curves. Until now, an ad-hoc method, developed after noticing that the variance of experimentally measured PRCs follows the general form of Eq. (14), has been used to describe the phase dependent variance. By using the analytical expression developed in this paper, Eq. (10), we are now able to characterize the phase dependent variance of a phase resetting curve with much greater accuracy. The main result of this paper will provide a useful tool for describing phase resetting curves in greater detail, for both experiment and simulation.

Acknowledgements We would like to acknowledge NSF, NSFCAREER Award, and University of Minnesota Grant-in-Aid.

References

Abouzeid, A., & Ermentrout, B. (2009). Type-II phase resetting curve is optimal for stochastic synchrony. Physical Review E, 80, 011911.

Achuthan, S., & Canavier, C. C. (2009). Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. The Journal of Neuroscience, 29(16), 5218–5233.

Ariaratnam, J. T., & Strogatz, S. H. (2001). Phase diagram for the Winfree model of coupled nonlinear oscillators. Physical Review Letters, 86, 4278–4281.

Brown, E., Moehlis, J., & Holmes, P. (2004). On the phase reduction and response dynamics of neural oscillator populations. Neural Computation, 16, 673–715.

Dorval, A. D., Christini, D. J., & White, J. A. (2001). Real-time Linux dynamic clamp: A fast and flexible way to construct virtual ion channels in living cells. Annals of Biomedical Engineering, 29, 897–907.

Ermentrout, B., & Saunders, D. (2006). Phase resetting and coupling of noisy neural oscillators. Journal of Computational Neuroscience, 20, 179–190.

Forger, D. B., & Paydarfar, D. (2004). Starting, stopping, and resetting biological oscillators: In search of optimum perturbations. Journal of Theoretical Biology, 230, 521–532.

Galan, R. F., Ermentrout, G. B., & Urban, N. N. (2005). Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Physical Review Letters, 94, 158101.

Gardiner, C. W. (2004). Handbook of stochastic methods for physics, chemistry and the natural sciences. Springer Series in Synergetics (Vol. 13). Berlin: Springer.

Goel, P., & Ermentrout, B. (2002). Synchrony, stability, and firing patterns in pulse-coupled oscillators. Physica D, 163(3), 191–216.

Golomb, D., & Amitai, Y. (1997). Propagating neuronal discharges in neocortical slices: Computational and experimental study. Journal of Neurophysiology, 78, 1199–1211.

Guevara, M. R., & Glass, L. (1982). Phase locking, period doubling bifurcations and chaos in a mathematical model of a periodically driven oscillator: A theory for the entrainment of biological oscillators and the generation of cardiac dysrhythmias. Journal of Mathematical Biology, 14, 1–23.

Harris, J. J., & Stocker, H. (1998). Handbook of mathematics and computational science. New York: Springer.

Ito, K. (1946). On a stochastic integral equation. Proceedings of the Japan Academy, 22, 32–35.

Kloeden, P. E., & Platen, E. (1992). Numerical solution of stochastic differential equations. Applications of Mathematics (Vol. 23). Berlin: Springer.

Kuramoto, Y. (1984). Chemical oscillations, waves, and turbulence. Dover Publications.

Ly, C., & Ermentrout, G. B. (2009). Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. Journal of Computational Neuroscience, 26, 425–443.

Ly, C., & Ermentrout, G. B. (2010). Coupling regularizes individual units in noisy populations. Physical Review E, 81, 11911.

Netoff, T. I., Acker, C. D., Bettencourt, J. C., & White, J. A. (2005a). Beyond two-cell networks: Experimental measurement of neuronal responses to multiple synaptic inputs. Journal of Computational Neuroscience, 18, 287–295.

Netoff, T. I., Banks, M. I., Dorval, A. D., Acker, C. D., Haas, J. S., Kopell, N., et al. (2005b). Synchronization in hybrid neuronal networks of the hippocampal formation. Journal of Neurophysiology, 93, 1197–1208.

Neu, J. C. (1979). Coupled chemical oscillators. SIAM Journal on Applied Mathematics, 37(2), 307–315.

Pervouchine, D. D., Netoff, T. I., Rotstein, H. G., White, J. A., Cunningham, M. O., Whittington, M. A., et al. (2006). Low-dimensional maps encoding dynamics in entorhinal cortex and hippocampus. Neural Computation, 18, 2617–2650.

Plackett, R. L. (1983). Karl Pearson and the chi-squared test. International Statistical Review, 51, 59–72.

Preyer, A., & Butera, R. (2005). Neuronal oscillators in Aplysia californica that demonstrate weak coupling in vitro. Physical Review Letters, 95(13), 138103.

Reyes, A. D., & Fetz, E. E. (1993). Effects of transient depolarizing potentials on the firing rate of cat neocortical neurons. Journal of Neurophysiology, 69, 1673–1683.

Stoop, R., Schindler, K., & Bunimovich, L. A. (2000). Neocortical networks of pyramidal neurons: From local locking and chaos to macroscopic chaos and synchronization. Nonlinearity, 13, 1515–1529.

Torben-Nielsen, B., Uusisaari, M., & Stiefel, K. (2010). A comparison of methods to determine neuronal phase-response curves. Frontiers in Neuroinformatics, 4(6).

Welch, B. L. (1947). The generalization of 'Student's' problem when several different population variances are involved. Biometrika, 34, 28–35.

Winfree, A. T. (1967). Biological rhythms and the behavior of populations of coupled oscillators. Journal of Theoretical Biology, 16, 15–42.

