Complex dynamics in a two-compartment neuronal model.
Undergraduate research project report
Matthew Hennessy and Gregory M. Lewis (supervisor)
Faculty of Science, University of Ontario Institute of Technology,
2000 Simcoe Street North,
Oshawa, ON, L1H 7K4, Canada.
Abstract
A neuron is a complex living cell, with intricate ion dynamics and a vast system of treelike branches called dendrites. Mathematical models of a neuron often group the soma and dendrites into a single compartment, thereby simplifying the governing dynamical system. This, however, does not explicitly consider the spatial extent of the dendrites and assumes that electric potential is transferred from the dendrites to the soma instantaneously. In our model, we treat the soma and dendrites as separate compartments, allowing the transfer of potential to be non-instantaneous. By varying the influence of the dendrites on the soma, we have shown that the dendrites are responsible for the generation of diverse neural dynamics, all of which are characterized by the normal form of a codimension three degenerate Bogdanov-Takens bifurcation.
keywords: two-compartment Wang-Buzsaki model, effect of dendrites on neural dynamics, transition from type I to type II excitability, degenerate Bogdanov-Takens bifurcation, saddle-node on an invariant circle (SNIC) bifurcation, center manifold reduction, numerical approximation of normal form coefficients
Contents
1 Introduction
  1.1 Neuron Anatomy
  1.2 A Mathematical Approach
  1.3 Dynamical Systems Theory
  1.4 Outline
2 Neural Modelling
  2.1 Somatic Potential
  2.2 Voltage Dependent Conductances
  2.3 A Two Compartment Neuron
  2.4 Model Equations
3 Analytical Methods
  3.1 Linear Stability Analysis
  3.2 Center Manifold Reduction for Bogdanov-Takens Bifurcation
  3.3 Bogdanov-Takens Normal Form
4 Numerical Methods
  4.1 Pseudoarclength Continuation
  4.2 Continuation of Saddle-node and Bogdanov-Takens Bifurcations
  4.3 Software Packages
5 Dynamics
  5.1 Neuron Behaviour
  5.2 Without Dendritic Influence
  5.3 With Dendritic Influence
  5.4 Dynamical Transitions Via Higher Codimension Bifurcations
    5.4.1 Homoclinic Bifurcations
    5.4.2 The Degenerate Bogdanov-Takens Bifurcation
6 Conclusion
1 Introduction
The nervous system is perhaps the most complex and important system in our body. It is
responsible for relaying messages about muscle movement and sensory input, allowing us to
interact with and perceive the world around us. The nervous system is mainly composed
of massive networks of interconnected cells called neurons. The study of individual neurons
is therefore of great interest, because understanding their behaviour can help explain how
they work together within the larger network.
1.1 Neuron Anatomy
A neuron can be decomposed into three main parts: the soma, the dendrites, and the axon.
The soma is the main body of the neuron, with a semipermeable cell membrane housing
the nucleus of the cell. The dendrites form a vast treelike structure that extends from the
soma. Dendrites are responsible for receiving synaptic input (neurotransmitters) from other
neurons. The axon of a neuron is a long cable like structure that ends in the axon terminal.
The axon terminal is responsible for the release of neurotransmitters that are received by
the dendrites of other neurons. A diagram of a neuron can be seen in Figure 1.
The large branching structures of the dendrites and axon terminal allow each neuron to
have connections to thousands of other neurons, forming a massive communication web.
Neurons communicate via synapses, which are triggered by an electrical impulse in the axon
terminal. The electrical impulse in the axon terminal releases neurotransmitters that bind to
receptor sites on the dendrites of another neuron. The buildup of excitatory neurotransmitters
on the dendrite can cause an action potential, which is a large spike in the voltage across
the cell membrane. This electrical impulse can travel along the dendrites to the axon
terminals, where other synapses can be located, allowing the propagation of information across
the network.
1.2 A Mathematical Approach
To capture the essential dynamics of the electrical impulse propagation along a single neuron,
mathematical equations can be used. However, the complicated physiological structure of a
neuron produces equations that are difficult to analyze. The potential difference across the
cell membrane of the neuron is dependent upon both space and time, so a physiologically
accurate neuron model would be governed by partial differential equations (PDEs). PDEs are
difficult to analyze, both analytically and numerically. To overcome such difficulties, the neuron can
be discretized through a process called compartmentalization (Figure 2). When a neuron
is compartmentalized, it is broken up into discrete segments called compartments. The
Figure 1: A diagram of a neuron. The three main parts of a neuron are the soma, dendrites, and axon.
individual compartments have no spatial dependence, so their voltage depends only on time,
which allows them to be governed by ordinary differential equations (ODEs). In general, the
analysis of a system of ODEs is much easier than that of a system of PDEs.
The process of compartmentalization allows the neuron to be modelled using spatially
independent compartments. The more compartments a model has, the more physiologically
realistic it becomes. However, large compartment models can be extremely difficult to
analyze, and thus it may be difficult to uncover the underlying mechanisms that lead to the
specific neuron dynamics. Thus, it is more practical to analyze models with a small number
of compartments. Typical mathematical models of a neuron consist of only a single
compartment, which means that the axon, soma, and dendrites are grouped together in a lone
compartment with only a single voltage. In this case, some important features of a neuron are
neglected. For example, during an action potential, the electrical impulse propagates along
the length of the dendrite to the soma and then possibly to the axon terminal. This cannot
occur in a one compartment model because the dendrites, soma, and axon are modelled by
a uniform voltage that has no spatial dependence.
The focus of this report is to analyze a neuron model with two compartments. One
compartment will model the soma, whereas the other will be a very rudimentary model of the
dendrites. Our model will incorporate some of the basic features of a real neuron, including
the transfer of potential from one compartment to another, which models the propagation of
potential from the dendrites to the soma. By varying the influence of the dendritic compartment
on the somatic compartment, we wish to determine and analyze the impact of the dendritic
compartment on the overall dynamics of the neuron. This is a problem we will address using
bifurcation analysis.
Figure 2: The discretization of a neuron. From left to right: a full neuron, an eight compartment neuron, a four compartment neuron, a single compartment neuron. Each compartment has a unique voltage. A full neuron model is the most complex, whereas a single compartment neuron model is the simplest.
1.3 Dynamical Systems Theory
Most dynamical systems, including the neuron system, cannot be solved analytically. There-
fore, alternative methods must be used to extract information about the dynamics of the
system. In general, these methods deal with examining the qualitative structure of solutions.
The most basic of these methods is linear stability analysis (section 3.1), where the equations
are linearized about a known steady state equilibrium solution and the eigenvalues are
computed. The eigenvalues determine the type and stability of the equilibrium solution, thus
describing the local system dynamics. However, if the system depends on some parameters,
it is often important to know how these equilibrium solutions evolve under the variation of
one or more parameters. This is where bifurcation analysis is particularly useful.
Bifurcation analysis allows a known solution to be tracked under parameter variation,
providing information about system dynamics over a wide range of parameters. More
importantly, bifurcation analysis can detect the critical parameter values where
new solutions emerge or where existing solutions are destroyed. The point which coincides
with the critical parameter values is called a bifurcation point. Knowing the location of a
bifurcation point is of great importance because it marks the transition from one regime of
dynamics to another. The exact type of bifurcation and the nature of the dynamical
transition that occurs can depend on many things, e.g., the number of eigenvalues with zero real part of
the linearization about a steady state solution, or the values of the normal form coefficients
(section 3.3). The collision between a steady state solution and a periodic solution is called a
global bifurcation. This is because a local analysis of a steady state solution will reveal nothing
about the existence of a nearby periodic solution. For reasons such as these, determining
the global dynamics of a system requires a bit more work.
Some of the most powerful tools of bifurcation theory are the Center Manifold and Normal
Form theorems, which can be used to prove the existence and stability of specific types of
solutions in some neighbourhood of a bifurcation point. By applying these theorems at
a bifurcation point, the full dynamical system can be reduced to a simpler system with
the same qualitative dynamics. This system is called the topological normal form. For
parameter values close to the bifurcation point, the dynamics of the full system of equations
are the same as those of the normal form. Therefore, the solution structure of the normal
form is qualitatively similar to the full system, and the normal form can be used to deduce
information about the dynamics of the full system. An important point, however, is that
neither theorem can be used to determine the size of the neighbourhood where the normal form
and full system have qualitatively similar dynamics.
1.4 Outline
In this report, we analyze a two compartment model of a neuron. In particular, we are
interested in how the second compartment, which models the dendrites, affects the overall
dynamics of the neuron. To do this, we will use aspects of analytical and numerical bifurca-
tion theory. In the second section of this report a detailed explanation of the mathematical
model of a neuron will be given. Section 3 deals with the analytical methods that are used
in the analysis, while Section 4 gives an explanation of the primary numerical methods that
are used. Section 5 presents the dynamics we are observed, and the report ends with a
conclusion.
2 Neural Modelling
The mathematical modelling of a neuron is an interesting combination of analytic derivations
using physical principles and empirical methods using accurately determined experimental
data. By applying these methods to each compartment, a system of ordinary differential
equations can be derived which models the voltage in that compartment. For our purposes,
we are only considering a neuron with two compartments: a somatic compartment and a
dendritic compartment. In this section we derive the equations that govern the potential
difference across the soma and the diffusion of ions across its cell membrane. Furthermore,
we discuss how the dendrites are modelled and present the specific neuron equations that
are used in the report.
2.1 Somatic Potential
Like many other cells in the body, the soma of a neuron consists of a semipermeable cell
membrane immersed in an ionic solution. A difference in the internal and external (intracellular
and extracellular) somatic ion concentrations will cause ions to diffuse across the
cell membrane which, in turn, creates a measurable potential difference and electric field (see
Figure 3). Because the cell membrane has the ability to separate electric charge, it can be
modelled as a capacitor. The potential difference across the soma is the difference between
the internal and external potentials,
VS = Vi − Ve. (1)
An equilibrium potential will be reached when the electric field balances the diffusion. In fact,
the equilibrium potential for an arbitrary ion A can be computed using Nernst’s equation,
\[ E_A = \frac{RT}{nF} \ln\left(\frac{[A]_e}{[A]_i}\right), \tag{2} \]
), (2)
where R is the universal gas constant, T is the absolute temperature, n is the charge of
the ion, F is Faraday’s constant, and [A]i,e are the internal and external ion concentrations,
respectively. When the soma is not in electrostatic equilibrium, the movement of ions across
the membrane generates an electric current, with the cell membrane acting as a resistor.
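The Nernst potential (2) is straightforward to evaluate numerically. The sketch below assumes illustrative mammalian potassium concentrations; the function name and the values are our own choices, not taken from the report.

```python
import math

R = 8.314      # universal gas constant, J/(mol K)
F = 96485.0    # Faraday's constant, C/mol

def nernst_potential(conc_ext, conc_int, charge, temperature=310.0):
    """Equilibrium potential of equation (2), returned in millivolts."""
    return 1000.0 * (R * temperature) / (charge * F) * math.log(conc_ext / conc_int)

# Illustrative mammalian potassium concentrations (mM): ~5 outside, ~140 inside.
E_K = nernst_potential(conc_ext=5.0, conc_int=140.0, charge=1)
print(f"E_K = {E_K:.1f} mV")  # close to -89 mV at 37 degrees C
```

Note that reversing the concentration gradient flips the sign of the equilibrium potential, as the logarithm suggests.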
The capacitor and resistor like properties of the cell membrane allow it to be modelled
as a simple electronic circuit (Figure 4). Using well established laws from electromagnetism,
we can derive a differential equation that models the potential difference across the soma.
As mentioned above, the cell membrane acts like a capacitor, and thus we assume that the
voltage VS across the cell membrane satisfies the equation
CVS = Q, (3)
Figure 3: Differences between the internal concentration [A]i and external concentration [A]e of an ion cause that ion to diffuse across the cell membrane, creating a potential difference. For illustrative purposes, we used an arbitrary ion A.
Figure 4: Modelling the cell membrane as an electric circuit with a capacitor and resistor. The resistor represents the total resistance to all ions which diffuse across the membrane.
where C is the capacitance of the membrane, Q is the electric charge, and VS is the potential
difference across the soma as defined above. Differentiating this expression, and realizing
that the change in charge per unit time is current by definition, we have
\[ C \frac{dV_S}{dt} = \frac{dQ}{dt} \equiv I. \tag{4} \]
The current which is generated by the process of ion diffusion is assumed to have a linear
relationship with the somatic potential. Therefore, the current-voltage relationship takes the
well known form V = IR, with R being the resistance. In terms of an arbitrary ion A, the
current generated by diffusion is
IA = gA (VS − EA) , (5)
where EA is the equilibrium potential calculated using the Nernst equation (2), and
\[ g_A \equiv \frac{1}{R_A} \]
is the ion specific membrane conductance. The membrane conductance is the inverse of
membrane resistance. The total amount of generated current due to ion diffusion will be
denoted as Iion, where
\[ I_{\mathrm{ion}} = \sum_i g_i \,(V_S - E_i), \tag{6} \]
where the sum is over all ions. In most neuron models, Iion contains the leakage term
gL (VS − EL) which represents small time independent currents that act collectively to bring
the somatic potential to an equilibrium potential.
Now that we have the necessary equations which govern the individual components of the
circuit, we can derive an equation that governs the entire circuit, and ideally, the membrane
potential. Kirchhoff’s law states that the sum of the current entering a junction must equal
the sum of the current exiting a junction, so the equation of state for the cell membrane is
\[ C \frac{dV_S}{dt} + I_{\mathrm{ion}} = 0. \tag{7} \]
In both experimental and theoretical analysis, an external current is usually injected into
the soma to stimulate ion diffusion. In this case, the equation for membrane potential takes
the form
\[ C \frac{dV_S}{dt} + I_{\mathrm{ion}} = I_{\mathrm{app}}, \tag{8} \]
where Iapp is the applied current. A more detailed derivation of the membrane potential
equation can be found in [1] and [7].
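To see equation (8) in action, consider a purely passive membrane in which Iion contains only the leak term gL(VS − EL). A minimal forward-Euler sketch; the applied current and parameter values are illustrative choices only.

```python
# Forward-Euler integration of C dV/dt = Iapp - gL (V - EL),
# i.e. equation (8) with only a leak current in Iion.
C = 1.0      # membrane capacitance, uF/cm^2
g_L = 0.1    # leak conductance, mS/cm^2
E_L = -65.0  # leak reversal potential, mV
I_app = 1.0  # applied current, uA/cm^2 (illustrative)

dt = 0.01                      # time step, ms
V = E_L                        # start at the leak reversal potential
for _ in range(int(500 / dt)): # integrate for 500 ms (many time constants)
    V += dt * (I_app - g_L * (V - E_L)) / C

# The passive membrane settles at V = EL + Iapp / gL.
print(f"steady state V = {V:.2f} mV")  # -55.00 mV
```

The membrane time constant here is C/gL = 10 ms, so 500 ms is ample time to converge.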
2.2 Voltage Dependent Conductances
In modelling the current that is generated by a single species of ion diffusing across the cell
membrane, we assumed that the equation for modelling the current takes the form
IA = gA (VS − EA) ,
with gA being the conductance of the membrane. This equation alone, however, is unrealistic
because of the biological interpretation of membrane conductance. Ions diffuse across the
cell membrane by traversing through open channels or gates. Therefore, the conductance of
the cell membrane is related to how many channels are open and available for ions to move
through. The state of the channels, and thus the conductance of the membrane, typically
depend on the somatic potential difference. To model the state of the channels, a statistical
approach is taken and the membrane conductance is written as
\[ g_A = \bar{g}_A P_A, \qquad 0 \leq P_A \leq 1, \tag{9} \]
where ḡA is the maximum conductance, attained if every channel were open, and PA is the voltage
dependent probability of finding an open channel.
The probability PA is usually a monomial of an activation variable a and inactivation
variable b, which takes the form
\[ P_A = a^x b^y. \tag{10} \]
The exponents x and y are integers which are generally chosen to best fit experimental
data. Statistically, the activation variable is the probability that any channel is open, while
the inactivation variable is the probability that any channel is closed. Therefore, if we can
find an equation that accurately models the (in)activation variables, we have an accurate
description of the membrane conductance and the current being generated by ion diffusion.
The equations which model the evolution of the (in)activation variables are based on the
premise that new channels open at a rate proportional to the probability of finding a closed
channel and vice-versa. Therefore, the (in)activation variables change according to
\[ \frac{dc}{dt} = \alpha_c (1 - c) - \beta_c c, \tag{11} \]
where c is an activation or inactivation variable, αc = αc(VS) and βc = βc(VS) are the
voltage dependent opening and closing rates, respectively. It is in the opening and closing
rates where the differences between activation variables and inactivation variables arise. The
voltage dependence of an activation variable is opposite to that of an inactivation variable.
This will become apparent below. It is instructive to write (11) as
\[ \tau_c \frac{dc}{dt} = c_\infty - c, \tag{12} \]
where
\[ \tau_c = \frac{1}{\alpha_c + \beta_c} \tag{13} \]
is a voltage dependent time constant and
\[ c_\infty = \frac{\alpha_c}{\alpha_c + \beta_c} \tag{14} \]
is the voltage dependent steady state function. The steady state function models the long
term behavior of the (in)activation variable c, which comes from the fact that if
\[ \frac{dc}{dt} \to 0, \quad c \to c_\infty. \]
It is possible that the activation of a certain ion might be much faster than the activation of
the others, in which case the activation variable is substituted by the steady state function
in order to simplify and reduce the system of ordinary differential equations.
Like the exponents of the (in)activation variables in the membrane conductance, the
opening and closing rate functions are chosen to fit experimental data. This was first
accomplished by Hodgkin and Huxley in their Nobel Prize winning model of the squid axon
[6]. In their model, the sodium conductance is written as
\[ g_{Na} = \bar{g}_{Na} m^3 h, \]
with m and h being the respective activation and inactivation variables. The functions
\[ \alpha_m = \frac{0.1\,(V + 25)}{\exp[(V + 25)/10] - 1}, \qquad \beta_m = 4 \exp(V/18), \]
\[ \alpha_h = 0.07 \exp(V/20), \qquad \beta_h = \frac{1}{\exp[(V + 30)/10] + 1}, \]
are fit to experimental data to model the opening and closing rates. It can be seen that the
voltage dependences of the opening and closing rates are indeed opposite for activation
and inactivation variables.
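Equations (13) and (14) can be computed directly from the quoted Hodgkin-Huxley rates for the sodium activation variable m. A small sketch, using the original Hodgkin-Huxley voltage convention exactly as printed above; the helper names are ours.

```python
import math

def alpha_m(V):
    # Opening rate for sodium activation, as quoted above.
    return 0.1 * (V + 25.0) / (math.exp((V + 25.0) / 10.0) - 1.0)

def beta_m(V):
    # Closing rate for sodium activation, as quoted above.
    return 4.0 * math.exp(V / 18.0)

def tau_m(V):
    """Voltage dependent time constant, equation (13)."""
    return 1.0 / (alpha_m(V) + beta_m(V))

def m_inf(V):
    """Voltage dependent steady state function, equation (14)."""
    return alpha_m(V) / (alpha_m(V) + beta_m(V))

for V in (-50.0, 0.0, 50.0):
    print(f"V = {V:6.1f}:  m_inf = {m_inf(V):.3f}  tau_m = {tau_m(V):.3f}")
```

Note the removable singularity in alpha_m at V = −25; evaluating exactly there would require taking the limiting value, which is 1.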
2.3 A Two Compartment Neuron
As mentioned above, the modelling of a neuron with two compartments is more
physiologically reasonable than one compartment models. In our neuron model, one compartment will
model the soma, while the second compartment will be a very basic model of the dendrites.
The dendrites are modelled as a single compartment that is coupled to the soma. This is
the simplest model possible that can capture the spatial extent of the dendrites. The
separate compartment allows the dendrites to have their own voltage which can be transferred
Figure 5: A schematic of a two compartment neuron. The soma and dendrites are coupled by the parameter gC, which acts as the conductivity between the two compartments. Arrows represent currents and the direction in which they flow. Currents include an externally applied current Iapp, current generated by the diffusion of sodium and potassium ions Iion, leakage currents IL and ID, and current entering the soma from the dendrites and vice versa, Id/s.
to the soma, which is one of the fundamental features of a neuron that is neglected in one
compartment models.
The somatic compartment will model the potential difference across the cell membrane of
the soma and contain the ion dynamics. In particular, we model the diffusion of sodium and
potassium ions across the cell membrane. To stimulate ion diffusion, we add an externally
applied current, Iapp. We also include a passive leak current, IL, which acts to bring the
somatic potential to its natural equilibrium potential. Finally, we add the current Id/s, which
models the flow of current from the dendrites to the soma or vice-versa. This current does
not exist in one compartment models because the soma and dendrites are grouped together
into a single compartment with the same voltage.
The dendritic compartment is responsible for modelling the potential difference across
the cell membrane of the dendrites. We assume that the dendritic compartment is “leaky”,
so we include the passive leak current, ID, which serves the same purpose as the leak current
in the somatic compartment. Furthermore, we include the same current Id/s as in the somatic
compartment, but with the opposite sign, ensuring that this term has opposite effects on the
somatic and dendritic compartments. The currents in the neuron model are summarized in
Figure 5.
The equations of state for the two compartments take the general form
\[ C \frac{dV_S}{dt} = -I_{\mathrm{ion}} - I_L - I_{d/s} + I_{\mathrm{app}}, \tag{15} \]
\[ C \frac{dV_D}{dt} = -I_D + I_{d/s}, \tag{16} \]
where
\[ I_{d/s} = g_C (V_S - V_D). \tag{17} \]
The parameter gC acts as the conductance between the two compartments. By varying this
value, we can change how much of an influence the dendritic compartment has on the somatic
compartment and the overall dynamics of the neuron. For this reason, gC will be one of the
primary parameters we vary in the bifurcation analysis.
2.4 Model Equations
The neuron model that we use throughout the analysis is a slightly modified version of
the Wang-Buzsaki model [11]. In particular, the synaptic current is removed, since we are only
interested in the dynamics of a single neuron, and is replaced with the dendritic coupling term. The
differential equation governing the potential difference across the dendritic compartment is
also introduced. The conductances and (in)activation variables are modelled very closely
to the Hodgkin-Huxley equations, the main difference being the usage of the steady state
function for sodium activation. The equations modelling the flow of current in the soma-dendrite
system are
\[ C \frac{dV_S}{dt} = -g_{Na} m_\infty^3 h \,(V_S - E_{Na}) - g_K n^4 (V_S - E_K) - g_L (V_S - E_L) - g_C (V_S - V_D) + I_{\mathrm{app}}, \]
\[ C \frac{dV_D}{dt} = -g_D (V_D - E_D) + g_C (V_S - V_D). \tag{18} \]
The (in)activation variables for ion conductance are governed by
\[ m_\infty = \frac{\alpha_m}{\alpha_m + \beta_m}, \]
\[ \frac{dh}{dt} = \phi \left[ \alpha_h (1 - h) - \beta_h h \right], \tag{19} \]
\[ \frac{dn}{dt} = \phi \left[ \alpha_n (1 - n) - \beta_n n \right], \]
with the opening and closing rates given by
\[ \alpha_m = \frac{-0.1\,(V_S + 35)}{\exp[-0.1(V_S + 35)] - 1}, \qquad \beta_m = 4 \exp[-(V_S + 60)/18], \]
\[ \alpha_h = 0.07 \exp[-(V_S + 58)/20], \qquad \beta_h = \frac{1}{\exp[-0.1(V_S + 28)] + 1}, \]
\[ \alpha_n = \frac{-0.01\,(V_S + 34)}{\exp[-0.1(V_S + 34)] - 1}, \qquad \beta_n = 0.125 \exp[-(V_S + 44)/80]. \]
Numerical values for the parameters can be found in Table 1. The parameters listed without
a value are the primary parameters used in the bifurcation analysis; depending on the type
of analysis, their values may vary continuously.
An important property of the model equations is that when gC = 0, the neuron model is
reduced to a one compartment neuron model. The somatic potential becomes the potential
C      1      µF/cm²
Iapp   -      µA/cm²
gL     0.1    mS/cm²
EL     −65    mV
gNa    35     mS/cm²
ENa    55     mV
φ      5      unitless
gK     9      mS/cm²
EK     −90    mV
gD     -      mS/cm²
ED     −65    mV
gC     -      mS/cm²
Table 1: Numerical values for the parameters used in the neuron model. Parameters with no value can vary and will be the primary parameters we explore in the analysis.
of the single compartment, whereas the dendritic potential becomes irrelevant and makes an
exponentially fast approach to its equilibrium potential, ED. This property allows us to see
how a one compartment model changes when a simple dendritic compartment is added.
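The full model (18)-(19) translates into code almost verbatim. The sketch below integrates the four state variables with forward Euler; Iapp, gD, and gC are left free in Table 1, so the choices here (and the initial condition) are purely illustrative.

```python
import math

# Fixed parameters from Table 1.
C, phi = 1.0, 5.0
g_Na, E_Na = 35.0, 55.0
g_K, E_K = 9.0, -90.0
g_L, E_L = 0.1, -65.0
E_D = -65.0
# Free parameters in the report -- illustrative values only.
I_app, g_D, g_C = 1.0, 0.1, 0.1

def rates(VS):
    """Opening and closing rates from the model equations."""
    a_m = -0.1 * (VS + 35.0) / (math.exp(-0.1 * (VS + 35.0)) - 1.0)
    b_m = 4.0 * math.exp(-(VS + 60.0) / 18.0)
    a_h = 0.07 * math.exp(-(VS + 58.0) / 20.0)
    b_h = 1.0 / (math.exp(-0.1 * (VS + 28.0)) + 1.0)
    a_n = -0.01 * (VS + 34.0) / (math.exp(-0.1 * (VS + 34.0)) - 1.0)
    b_n = 0.125 * math.exp(-(VS + 44.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def rhs(state):
    """Right-hand side of equations (18)-(19)."""
    VS, VD, h, n = state
    a_m, b_m, a_h, b_h, a_n, b_n = rates(VS)
    m_inf = a_m / (a_m + b_m)
    dVS = (-g_Na * m_inf**3 * h * (VS - E_Na)
           - g_K * n**4 * (VS - E_K)
           - g_L * (VS - E_L)
           - g_C * (VS - VD) + I_app) / C
    dVD = (-g_D * (VD - E_D) + g_C * (VS - VD)) / C
    dh = phi * (a_h * (1.0 - h) - b_h * h)
    dn = phi * (a_n * (1.0 - n) - b_n * n)
    return (dVS, dVD, dh, dn)

# Forward-Euler integration from near rest for 100 ms.
state = (-64.0, -64.0, 0.78, 0.09)
dt = 0.01
for _ in range(int(100 / dt)):
    k = rhs(state)
    state = tuple(s + dt * ks for s, ks in zip(state, k))
print("V_S after 100 ms:", round(state[0], 2), "mV")
```

Setting g_C = 0 decouples the compartments and recovers the one compartment model, as noted above.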
3 Analytical Methods
It is not uncommon for the governing dynamical systems that result from mathematical
models to be so complex that an analytic solution cannot be found. In spite of this, there
are powerful alternative analytical methods that can be used to deduce information about
the dynamics of a system.
The first method we discuss, linear stability analysis, involves linearizing about a
steady state equilibrium solution and computing the eigenvalues. Computing the eigenvalues
of an n dimensional system requires solving a polynomial of order n. In general, if n is small
enough, the eigenvalues can be computed analytically.
The second and third methods, Center Manifold reduction and Normal Form theory, are
used to transform the dynamical system into a new system that is as simple as possible,
yet qualitatively the same as the full system. Furthermore, these methods provide analytic
criteria for deducing the types of solutions that exist near a bifurcation point.
3.1 Linear Stability Analysis
Linear stability analysis is perhaps the most basic way to analyze a system of differential
equations. The main idea behind linear stability analysis is that the dynamics of a nonlinear
system of differential equations near a hyperbolic equilibrium can be determined from a
linearization about that equilibrium solution. The dynamics of the full nonlinear system
will depend upon the linear stability of equilibrium solutions, which is determined by the
eigenvalues of the Jacobian matrix. Solutions will diverge from an unstable equilibrium,
whereas a stable equilibrium will be attracting and solutions will converge to it. More
precisely, consider the system of differential equations
\[ \dot{x} = F(x), \qquad x \in \mathbb{R}^n, \tag{20} \]
where the dot denotes differentiation with respect to time. Steady state equilibrium solutions
of (20) will be points which satisfy
F(x) = 0. (21)
Denoting an equilibrium solution as x0, and making a Taylor expansion of (20) about x0,
\[ \dot{x} = F(x_0) + \frac{\partial F(x_0)}{\partial x} (x - x_0) + \mathcal{O}(|x|^2), \tag{22} \]
where O(|x|^2) represents terms of order two and higher. The Taylor expansion can be
simplified by realizing that
F(x0) = 0. (23)
If the nonlinear terms are neglected, the resulting linear approximation to (20) is
\[ \dot{x} = L (x - x_0), \tag{24} \]
where
\[ L \equiv \frac{\partial F(x_0)}{\partial x} \tag{25} \]
is the Jacobian matrix. The stability of x0 can be determined from the eigenvalues of L.
That is, the stability is determined from the roots of the characteristic polynomial
det (L− λI) = 0, (26)
where λ is an eigenvalue and I is the n×n identity matrix. If there are any eigenvalues with
positive real part, the equilibrium solution is unstable. Why is this? If, for simplicity, we
assume that each eigenvalue of L has algebraic multiplicity of one, then the solution to the
linear approximation is
\[ x(t) = x_0 + \sum_{i=1}^{n} c_i \Phi_i e^{\lambda_i t}, \tag{27} \]
where ci is a constant of integration and Φi is an eigenvector satisfying
LΦi = λiΦi. (28)
If there are any eigenvalues with a positive real part, then for some initial conditions close
to x0,
\[ \lim_{t \to \infty} x(t) \neq x_0, \tag{29} \]
and the solution diverges from the steady state equilibrium.
However, if all the eigenvalues have a negative real part, then for all initial conditions
close to x0, the solution approaches and remains at the stable equilibrium, i.e.,
\[ \lim_{t \to \infty} x(t) = x_0. \tag{30} \]
An important point is that these limits only apply to the linear approximation of (20), and
we are interested in the dynamics of the full nonlinear system. If we assume that the equi-
librium solution is hyperbolic, that is, there are no eigenvalues with zero real part, then by
the Hartman-Grobman theorem the dynamics of the full nonlinear system are equivalent
to the dynamics of the linear approximation in some neighborhood of the equilibrium
solution. Therefore, all that is needed to deduce the local dynamics of (20) near a hyperbolic
equilibrium solution is to compute the eigenvalues of the Jacobian evaluated at that point.
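The recipe above, finding an equilibrium, forming the Jacobian (25), and examining the eigenvalues, is easy to automate. A sketch with a finite-difference Jacobian, applied to a damped oscillator whose origin is a stable equilibrium; the example system is ours, not the neuron model.

```python
import numpy as np

# Linear stability via the Jacobian: linearize about an equilibrium and
# inspect the eigenvalues.  The planar system used here, a damped
# oscillator v' = w, w' = -v - 0.5 w, is an illustrative example only.
def F(x):
    v, w = x
    return np.array([w, -v - 0.5 * w])

def jacobian(F, x0, eps=1e-6):
    """Central finite-difference approximation of the Jacobian matrix."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (F(x0 + dx) - F(x0 - dx)) / (2.0 * eps)
    return J

x0 = np.zeros(2)  # equilibrium: F(x0) = 0
eigvals = np.linalg.eigvals(jacobian(F, x0))
stable = bool(np.all(eigvals.real < 0.0))
print("eigenvalues:", eigvals)
print("equilibrium is", "stable" if stable else "unstable")
```

The same finite-difference routine applies unchanged to the four dimensional neuron system.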
3.2 Center Manifold Reduction for Bogdanov-Takens Bifurcation
The Center Manifold theorem provides an elegant method in which the dimensionality of a
dynamical system can be reduced to the number of eigenvalues with zero real part. For the
Bogdanov-Takens bifurcation, we assume that the system of differential equations
\[ \dot{x} = F(x, \mu), \qquad x \in \mathbb{R}^n, \; \mu \in \mathbb{R}^p, \tag{31} \]
can be rewritten at the critical parameter values µ = µ0 as
\[ \dot{x} = Lx + N(x), \tag{32} \]
where Lx is the linear part of (31) and N(x) contains higher order nonlinear terms.
Furthermore, we assume that x = 0 is an equilibrium solution of (32) and L has a zero eigenvalue of
algebraic multiplicity two, but not necessarily of geometric multiplicity two. In fact,
generically, we would expect the geometric multiplicity to be one. In this case, there will be a set
of linearly independent generalized eigenvectors such that
LΦ1 = 0, (33)
LΦ2 = Φ1. (34)
These eigenvectors define the center eigenspace Ec, which is the space spanned by all the
eigenvectors which correspond to eigenvalues with zero real part. In particular,
Ec = span {Φ1, Φ2} .
Similarly, the stable eigenspace Es is defined as the space spanned by eigenvectors
corresponding to eigenvalues with negative real part. We will assume that the center eigenspace
Ec and the stable eigenspace Es compose the eigenspace of L, that is, there are no eigenvalues
with positive real part. The adjoint eigenvectors satisfy
\[ L^* \Phi_2^* = 0, \tag{35} \]
\[ L^* \Phi_1^* = \Phi_2^*, \tag{36} \]
where L∗ is defined as¹
\[ \langle L u, v \rangle = \langle u, L^* v \rangle \quad \forall\, u, v \in \mathbb{R}^n, \]
with ⟨·, ·⟩ denoting the standard inner product in Rn. We choose normalization constants
such that the eigenvectors and adjoint eigenvectors have the property that
\[ \langle \Phi_i, \Phi_j^* \rangle = \begin{cases} 0 & \text{if } i \neq j, \\ 1 & \text{if } i = j. \end{cases} \]
¹For a real matrix, A∗ = Aᵀ.
We write x as a linear combination of the eigenvectors
\[ x = w_1 \Phi_1 + w_2 \Phi_2 + \Psi, \qquad \Psi = \sum_{i=3}^{n} c_i \Phi_i \in E^s, \tag{37} \]
and define the projection of x onto the center eigenspace Ec to be
\[ P x = \langle x, \Phi_1^* \rangle \Phi_1 + \langle x, \Phi_2^* \rangle \Phi_2. \tag{38} \]
Applying this projection to (32) with x written as in (37), we obtain a two dimensional
system for the center eigenspace variables w1 and w2,
\[ \dot{w}_1 = w_2 + \langle N(x), \Phi_1^* \rangle, \tag{39} \]
\[ \dot{w}_2 = \langle N(x), \Phi_2^* \rangle. \tag{40} \]
A differential equation for Ψ can be found by using the fact that Ψ = (I− P ) x, where I is
the identity matrix. Applying (I− P ) to (32), the resulting equation is
\[ \dot{\Psi} = L \Psi + (I - P) N(x). \tag{41} \]
Thus far, we have obtained a new system (39) - (41) that is equivalent to the original system
(31). However, we wish to decouple (39) and (40) from (41) by writing N(x) solely in terms
of the center eigenspace variables. In doing so, we invoke the Center Manifold theorem.
Given that the necessary assumptions on the full system (31) are satisfied, then by the
Center Manifold theorem there is a two dimensional differentiable center manifold W cloc for
(32) that exists in a neighborhood of x = 0, µ = µ0,
W^c_loc = {x = w1Φ1 + w2Φ2 + H(w1, w2), ‖w1Φ1 + w2Φ2‖ < δ}, (42)
where H : Ec → Es, ‖·‖ is the Euclidean norm in Rn, and δ is sufficiently small. The local
center manifold W^c_loc has the properties that it is locally invariant, tangent to Ec at the
bifurcation point x = 0, µ = µ0, and locally exponentially attracting. The last property
implies that the dynamics of the full system (31) are “attracted” to the center manifold.
Thus, if a system of differential equations can be obtained that describes the dynamics on
the center manifold, that system will also describe the dynamics of the full system in some
neighbourhood of the bifurcation point. In fact, this is possible because the center manifold
is a function of the center eigenspace variables (see Figure 6 for a schematic). Therefore, the
two dimensional system for the center eigenspace variables, (39) and (40), can be transformed
into a two dimensional system that describes the dynamics on the center manifold. This two
dimensional system of equations, which we’ll call the center manifold equations, will have
Figure 6: A schematic of the center manifold. The dynamics of the full n-dimensional system are attracted
to the two-dimensional center manifold, W^c_loc. The function H(w1, w2) maps a point in the center
eigenspace onto the center manifold. A two-dimensional system for the center eigenspace variables, w1 and
w2, can be transformed into a two-dimensional system describing the dynamics on the center manifold,
which are qualitatively similar to the dynamics of the full system.
the same qualitative dynamics as the full n dimensional system of equations in some region
close to the bifurcation point. The Center Manifold theorem does not however, state how
large this region is.
In order to transform (39) and (40) into the center manifold equations, we use the fact
that on the center manifold
Ψ = H (w1, w2) (43)
which allows us to write
x = w1Φ1 + w2Φ2 + H(w1, w2).
Using the above equality, we can write (39) and (40) as
ẇ1 = w2 + ⟨N(w1Φ1 + w2Φ2 + H(w1, w2)), Φ∗1⟩, (44)
ẇ2 = ⟨N(w1Φ1 + w2Φ2 + H(w1, w2)), Φ∗2⟩. (45)
These equations are the center manifold equations, and depend only on w1 and w2. However,
in order to determine N(x), we need to determine H(w1, w2); we will find Taylor coefficients
of H(w1, w2), which will lead to Taylor coefficients of N(x). Writing N(x) as a power series
in w,
N(x) = N20w1² + N11w1w2 + N02w2² + O(|w|³) (46)
and substituting into (39) and (40), the center manifold equations take the form
ẇ1 = w2 + G20w1² + G11w1w2 + G02w2² + O(|w|³), (47)
ẇ2 = Ḡ20w1² + Ḡ11w1w2 + Ḡ02w2² + O(|w|³), (48)
where Gij = ⟨Nij, Φ∗1⟩ and Ḡij = ⟨Nij, Φ∗2⟩. The computation of Nij will be discussed below.
The next step is to write N(w1Φ1 + w2Φ2 + H(w1, w2)) as in (46). Making a Taylor
expansion of H(w1, w2) and noting that H(w1, w2) = O(|w|²) as a result of the center
manifold being tangent to Ec,
H(w1, w2) = H20w1² + H11w1w2 + H02w2² + O(|w|³). (49)
Therefore,
x = w1Φ1 + w2Φ2 + H20w1² + H11w1w2 + H02w2² + O(|w|³) (50)
and
N(x) = N(w1Φ1 + w2Φ2 + H20w1² + H11w1w2 + H02w2² + O(|w|³)). (51)
To determine Hij, recall that Ψ = H(w1, w2). Differentiating this with respect to time,
Ψ̇ = (∂H(w1, w2)/∂w1) ẇ1 + (∂H(w1, w2)/∂w2) ẇ2, (52)
substituting (47), (48), (49), and collecting like terms,
Ψ̇ = 2H20w1w2 + H11w2² + O(|w|³). (53)
However, due to the invariance of the center manifold, (53) must also satisfy (41). Writing
Ψ = H(w1, w2) as in (49) and N(x) as in (46),
Ψ̇ = [LH20 + (I − P)N20] w1² + [LH11 + (I − P)N11] w1w2
   + [LH02 + (I − P)N02] w2² + O(|w|³). (54)
Equating coefficients of (53) and (54), the following singular systems for the quadratic Hij
coefficients are obtained,
LH20 + (I− P )N20 = 0, (55)
LH11 + (I− P )N11 = 2H20, (56)
LH02 + (I− P )N02 = H11. (57)
The vectors Nij are obtained by writing the problem dependent N(x) as in (46) with x as
in (50) and computing the coefficients of the w1^i w2^j terms.
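The solution of the singular systems (55)-(57) can be sketched numerically. In the following illustration (not the neuron model), the linear part L and the quadratic coefficients Nij are hypothetical; each singular system is solved through a bordered matrix that remains nonsingular even though L is not.

```python
import numpy as np

# Illustrative 3D setting (not the neuron model): center eigenspace spanned
# by e1, e2 (double-zero block), stable eigenspace by e3 (eigenvalue -1).
L = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0]])
phi1, phi2 = np.eye(3)[0], np.eye(3)[1]           # eigenvectors (33)-(34)
psi1, psi2 = np.eye(3)[0], np.eye(3)[1]           # adjoint eigenvectors (35)-(36)
Pm = np.outer(phi1, psi1) + np.outer(phi2, psi2)  # matrix of the projection (38)

def solve_singular(rhs):
    """Solve L @ H = rhs with L singular, via a bordered (nonsingular)
    matrix; the extra row pins down the component along the null direction."""
    M = np.block([[L, psi2.reshape(3, 1)],
                  [psi1.reshape(1, 3), np.zeros((1, 1))]])
    return np.linalg.solve(M, np.append(rhs, 0.0))[:3]

# Hypothetical quadratic coefficients of N(x); only their stable-eigenspace
# components survive the (I - P) projection.
N20 = np.array([0.3, -0.2, 1.0])
N11 = np.array([0.1, 0.4, -2.0])
N02 = np.array([0.0, 0.0, 0.5])

I3 = np.eye(3)
H20 = solve_singular(-(I3 - Pm) @ N20)            # eq. (55)
H11 = solve_singular(2 * H20 - (I3 - Pm) @ N11)   # eq. (56)
H02 = solve_singular(H11 - (I3 - Pm) @ N02)       # eq. (57)

assert np.allclose(L @ H20 + (I3 - Pm) @ N20, 0)
assert np.allclose(L @ H11 + (I3 - Pm) @ N11, 2 * H20)
assert np.allclose(L @ H02 + (I3 - Pm) @ N02, H11)
print("H20, H11, H02 satisfy the singular systems (55)-(57)")
```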
3.3 Bogdanov-Takens Normal Form
Performing a center manifold reduction greatly simplifies a dynamical system at a bifurcation
point, but the Taylor expansion of the center manifold produces lengthy chains of nonlinear
terms. Normal form theory allows us to simplify the center manifold equations as much
as possible through a series of near-identity coordinate transformations that eliminate
all the unnecessary nonlinear terms. By unnecessary, we mean terms that can be
removed without altering the qualitative structure of solutions. That is, the existence and
stability of solutions are preserved. The beauty of normal form theory lies in the fact that
the essential nonlinear terms are determined entirely from the linear part of the system, and
the simplification of terms of order n does not affect the structure of lower order terms.
We assume that the dynamical system
ẋ = F(x), x ∈ Rn (58)
has an equilibrium solution at x = 0 and has the Jacobian
L = ∂F(0)/∂x. (59)
Then as in [12], the normal form theorem states that the dynamical system (58) can be
transformed into the system
ẇ = Jw + F2(w) + . . . + Fn−1(w) + O(|w|ⁿ) (60)
by a series of analytic coordinate changes, where J is the Jordan canonical form of L and Fi
are terms of order i that could not be eliminated. We call (60) the normal form equation.
In the case of a Bogdanov-Takens bifurcation, after applying the center manifold theorem,
the linear part of the center manifold equations (47) and (48) is the same as its Jordan form,
L = J = [ 0  1 ]
        [ 0  0 ].  (61)
Using J, we can now transform the center manifold equations into the normal form equations
(60). The details of the normal form derivation of a Bogdanov-Takens bifurcation can be
found in both [8] and [12], so we will not present them here. However, we will present
the resulting normal form equations. Given a system as in (58) that has the Jordan form
given in (61) and satisfies a nondegeneracy condition, there is a series of analytic coordinate
transformations which reduce (58) into the normal form
ẇ1 = w2, (62)
ẇ2 = µ1 + µ2w2 + a2w1² + b2w1w2, (63)
where µ1 and µ2 are parameters and a2 and b2 are normal form coefficients. The nondegeneracy
condition is that a2b2 ≠ 0 at the bifurcation point w = 0, µ = 0, meaning neither of
the quadratic terms can vanish. The values a2 and b2 can be computed using the explicit
equations derived in [8] or [9]. If we take (58) to be the center manifold equations (47)-(48),
then the normal form coefficients can be expressed as
a2 = ⟨N20, Φ∗2⟩, (64)
b2 = ⟨N11, Φ∗2⟩ + 2⟨N20, Φ∗1⟩, (65)
where Φ∗1 and Φ∗2 are computed as in (35) and (36). This allows us to compute the normal
form coefficients directly using vectors derived from the full dynamical system (31), without
having to explicitly compute the center manifold equations, and subsequently reduce them
to normal form.
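As a sanity check of (64)-(65), the sketch below applies them to a planar system of our own choosing (not the neuron model) that is already in Bogdanov-Takens form, so the coefficients are known in advance. The quadratic Taylor coefficients Nij are extracted with central finite differences.

```python
import numpy as np

# Planar illustration: x1' = x2, x2' = x1**2 + x1*x2 is already in
# Bogdanov-Takens form, so (64)-(65) should return a2 = b2 = 1.
L = np.array([[0.0, 1.0], [0.0, 0.0]])
phi1, phi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi1, psi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # adjoints, normalized

def N(x):
    # Nonlinear part of the vector field
    return np.array([0.0, x[0] ** 2 + x[0] * x[1]])

# Quadratic Taylor coefficients of N along the center eigenspace, via
# central differences (exact here, since N is purely quadratic).
h = 1e-3
g = lambda w1, w2: N(w1 * phi1 + w2 * phi2)
N20 = (g(h, 0) + g(-h, 0)) / (2 * h ** 2)
N02 = (g(0, h) + g(0, -h)) / (2 * h ** 2)
N11 = (g(h, h) - g(h, -h) - g(-h, h) + g(-h, -h)) / (4 * h ** 2)

a2 = N20 @ psi2                      # eq. (64)
b2 = N11 @ psi2 + 2 * (N20 @ psi1)   # eq. (65)
print(a2, b2)
```

For this example the center manifold is the whole plane, so H(w1, w2) plays no role; in the full n-dimensional setting the Nij would be evaluated with x as in (50).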
If the nondegeneracy condition is violated, then the Bogdanov-Takens point is part of a
higher codimension bifurcation, called a degenerate Bogdanov-Takens bifurcation. Roughly
speaking, the codimension of a bifurcation relates to the number of conditions that must
be satisfied for that bifurcation to take place. Generally, one parameter must be tuned to
satisfy each condition, and a bifurcation of codimension n is a single point in an n-dimensional
parameter space. The detection of higher codimension bifurcations is important because the
dynamics described by their normal forms account for more of the dynamics seen in the full
dynamical system.
In the case that a2 = 0, b2 ≠ 0, which is the case we are interested in, the bifurcation
is codimension three and the normal form equation takes an extended form. It is believed,
but not completely proven [9], that the degenerate Bogdanov-Takens bifurcation takes the
normal form
ẇ1 = w2, (66)
ẇ2 = µ1 + µ2w1 + µ3w2 + a3w1³ + b2w1w2 + b′3w1²w2, (67)
where
b′3 = b3 − 3b2a4/(5a3).
The higher order normal form coefficients a3, b3, and a4 can be computed using the formulas
in [9], where aij and bij are given by
aij = i!j!⟨Nij, Φ∗1⟩, (68)
bij = i!j!⟨Nij, Φ∗2⟩. (69)
The degenerate Bogdanov-Takens bifurcation exists in three different types, each with
its own unique set of dynamics. The exact type of the bifurcation and its dynamics can
be determined by certain properties of the normal form coefficients. For example, a3 > 0
corresponds to the saddle type. The two other types are called the focus and elliptic type.
The degenerate Bogdanov-Takens bifurcation is not an isolated case where a bifurcation
can have multiple types. In fact, it is quite common for a bifurcation to have different types,
depending on the values of the normal form coefficients. However, once these coefficients are
computed, there are analytic conditions and expressions that can be used to determine the
exact type of bifurcation and the nature of solutions which exist close to that point.
4 Numerical Methods
In the previous section, we discussed elegant analytical methods that can be used at a
bifurcation point to deduce information about the dynamics of a system. However, dynamical
systems are often too complex for a bifurcation analysis to be done analytically. In some
cases, an equilibrium solution, which is the starting point of a bifurcation analysis, cannot
be found analytically. Thus, we need efficient and accurate numerical methods for finding
equilibrium solutions and detecting any bifurcations that might occur as this equilibrium
solution is varied in one or more parameters.
We will first discuss pseudoarclength continuation, which is an efficient predictor-corrector
method that can be used to track how an equilibrium solution evolves when one parameter
is varied. The ideas of pseudoarclength continuation form the basis for the continuation of
saddle-node and Bogdanov-Takens bifurcations, where an equilibrium solution with special
properties is tracked under the variation of two or three parameters.
4.1 Pseudoarclength Continuation
Suppose that we have found an equilibrium solution x = x0, µ = µ0, to the equation
ẋ = F(x, µ), x ∈ Rn, µ ∈ Rp,
and wish to compute a locus of equilibrium solutions by varying a parameter. That is, we
wish to compute the solution curves of
F(x, µ) = 0, µ ∈ R (70)
with only one active parameter µ. The general way to accomplish this is by using a two step
predictor and corrector method. First, a prediction is made of where the new equilibrium
solution lies, and then this point is corrected to ensure that (70) is satisfied to within a given
tolerance. For a brief introduction to numerical continuation using predictor and corrector
methods, see [4] and [8].
Pseudoarclength continuation is a relatively simple and effective method that continues
an equilibrium solution by approximating the arclength along the solution curve defined by
(70) using a small step along its tangent vector. This step along the tangent vector
yields a predictor, and the correction can be found, e.g., by looking for a solution to
(70) along a vector normal to the tangent (see Figure 7). In particular, the parameter µ is
allowed to vary during the correction as well.
Given two known equilibrium solutions, (x0, µ0), (x1, µ1), and the tangent vector t0 to
Figure 7: Pseudoarclength continuation. Using a tangent vector t to a known equilibrium solution x1, a
predictor x̃ can be computed. This point is corrected by looking along a vector n that is normal to the
tangent vector t, yielding a new equilibrium solution x2.
(70) at (x0, µ0), the tangent vector at (x1, µ1) can be found by solving the system
[ Fx(x1, µ1)   Fµ(x1, µ1) ] [ t1(x) ]   [ 0 ]
[ t0(x)ᵀ       t0(µ)ᵀ     ] [ t1(µ) ] = [ 1 ],  (71)
where Fx is the Jacobian of F with respect to phase variables and Fµ is the Jacobian with
respect to active parameters. The tangent vector has been separated into its components
along the phase variables t(x) and active parameters t(µ). The last equation in (71) is required
to preserve the orientation of the curve. From the new tangent vector t1, we can compute a
predictor
x̃ = x1 + (t1(x) / ‖t1‖) ∆s, (72)
µ̃ = µ1 + (t1(µ) / ‖t1‖) ∆s, (73)
where ∆s is the size of the step taken along the tangent vector.
The corrector is found by looking for a solution to (70) along a vector n that is normal
to the tangent t. The normal vector is given by
n(x, µ) = [ x − x̃ ]
          [ µ − µ̃ ],  (74)
where x̃ and µ̃ are the predicted values (72) and (73). Thus, the corrector (next equilibrium
solution) can be found by solving the extended system
F(x, µ) = 0,
⟨t, n(x, µ)⟩ = 0.  (75)
The Jacobian of the extended system (75) is given by
[ Fx(x, µ)   Fµ(x, µ) ]
[ t(x)ᵀ      t(µ)ᵀ    ].  (76)
System (75) can be solved using an iterative process such as Newton’s Method, using the
Jacobian (76).
Bifurcations that may occur along a branch of equilibrium solutions can be detected by
monitoring the eigenvalues of Fx at each point on the curve. In particular, a bifurcation
may occur when an eigenvalue of Fx crosses the imaginary axis (i.e. the real part of the
eigenvalue changes sign).
We finish the section with a remark on the size of the step (stepsize) taken along the
tangent vector. The stepsize is important in the determination of bifurcation points along
the computed curves. If the stepsize is too large, a bifurcation point might be jumped
over. Furthermore, the corrector might fail to converge. However, if the stepsize is taken
too small, then the computations may become inefficient. To help resolve these issues, an
adaptive stepsize can be used. If convergence is fast and the stepsize is below an upper
bound, it can be increased. Likewise, if convergence takes many iterations or fails, and the
stepsize is above a lower bound, it can be decreased.
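The ideas above can be collected into a minimal sketch. The test problem F(x, µ) = µ + x² (our own choice, not the neuron model) has a fold at the origin, where continuation in µ alone would fail; pseudoarclength continuation rounds the fold without difficulty.

```python
import numpy as np

# Scalar test problem with a fold at (x, mu) = (0, 0): the equilibrium
# branch is x = +/- sqrt(-mu).
def F(x, mu):   return mu + x ** 2
def Fx(x, mu):  return 2 * x
def Fmu(x, mu): return 1.0

def step(x, mu, t0, ds, newton_iters=10):
    # Tangent system (71): [Fx Fmu] t1 = 0 with orientation <t0, t1> = 1.
    A = np.array([[Fx(x, mu), Fmu(x, mu)], t0])
    t1 = np.linalg.solve(A, np.array([0.0, 1.0]))
    t1 /= np.linalg.norm(t1)
    # Predictor (72)-(73)
    xp, mup = x + t1[0] * ds, mu + t1[1] * ds
    # Corrector: Newton on the extended system (75); mu varies as well.
    for _ in range(newton_iters):
        n = np.array([xp - x - t1[0] * ds, mup - mu - t1[1] * ds])
        G = np.array([F(xp, mup), t1 @ n])
        J = np.array([[Fx(xp, mup), Fmu(xp, mup)], t1])
        d = np.linalg.solve(J, -G)
        xp, mup = xp + d[0], mup + d[1]
    return xp, mup, t1

x, mu, t = 1.0, -1.0, np.array([-1.0, 0.0])  # start on the branch
branch = [(x, mu)]
for _ in range(60):
    x, mu, t = step(x, mu, t, ds=0.05)
    branch.append((x, mu))

assert max(abs(F(x, mu)) for x, mu in branch) < 1e-8
assert min(x for x, _ in branch) < -0.5   # the continuation rounded the fold
print("continued through the fold; sign of Fx flips along the branch")
```

Along the way the eigenvalue of Fx (here simply 2x) changes sign at the fold, which is exactly the monitoring criterion described above.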
4.2 Continuation of Saddle-node and Bogdanov-Takens Bifurcations
Now suppose that we have found a saddle-node bifurcation (Fx has one zero eigenvalue) or
a Bogdanov-Takens bifurcation (Fx has two zero eigenvalues) and we want to continue this
special point in a higher dimensional parameter space using the method of pseudoarclength
continuation as described above. Two free parameters are needed to compute a curve of
saddle-node bifurcations, µ ∈ R2, while three free parameters are needed to continue a
Bogdanov-Takens point, µ ∈ R3. We will construct the system defining a saddle-node
bifurcation first, and then extend it to define Bogdanov-Takens bifurcations. For detailed
derivations of the expressions to follow, see [4] and [8].
One possible way to compute a curve of saddle-node bifurcations is to continuously solve
the system
F(x, µ) = 0,
det (Fx(x, µ)) = 0, (77)
〈t, n(x, µ)〉 = 0.
The problem with this method is that the derivatives of the determinant have to be com-
puted numerically, even if all the other derivatives are known analytically. Instead of using
det(Fx(x, µ)) as a function that vanishes at a zero eigenvalue, we can define a test function
s(x, µ) that also vanishes at a zero eigenvalue. Thus, the system we need to solve is
F(x, µ) = 0,
s(x, µ) = 0, (78)
〈t, n(x, µ)〉 = 0,
where s(x, µ) can be obtained by solving the system
[ Fx(x, µ)   Φ∗ ] [ q ]   [ 0 ]
[ Φᵀ         0  ] [ s ] = [ 1 ].  (79)
In the above system, Φ and Φ∗ are the eigenvector and adjoint eigenvector corresponding to
the zero eigenvalue of Fx at the previously computed saddle-node point. It should be noted
that the matrix in (79) is of full rank, even when Fx(x, µ) is of rank n − 1.
To compute the derivatives of s(x, µ), we use the formulas provided by [4]. Letting z
denote a phase variable or parameter,
sz = −⟨w, Fxz(x, µ)q⟩,  (80)
where q is computed in (79) and w is computed by solving
[ Fx(x, µ)   Φ∗ ]ᵀ [ w ]   [ 0 ]
[ Φᵀ         0  ]  [ s ] = [ 1 ].  (81)
Using these expressions, the Jacobian of the system (78) is
[ Fx(x, µ)   Fµ(x, µ) ]
[ sxᵀ        sµᵀ      ]
[ t(x)ᵀ      t(µ)ᵀ    ].  (82)
The tangent vector at a point (xi, µi) can be found by solving
[ Fx(xi, µi)   Fµ(xi, µi) ] [ ti(x) ]   [ 0 ]
[ sxᵀ          sµᵀ        ] [ ti(µ) ] = [ 0 ]
[ ti−1(x)ᵀ     ti−1(µ)ᵀ   ]            [ 1 ],  (83)
using the previously computed tangent vector ti−1.
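The behaviour of the test function s(x, µ) can be illustrated on a small hypothetical family of Jacobians (not the neuron model); the sketch below solves the bordered system (79) and shows that s vanishes, and changes sign, exactly where the Jacobian loses rank.

```python
import numpy as np

# Hypothetical one-parameter family: Fx(mu) has eigenvalues mu and -1,
# so a zero eigenvalue occurs exactly at mu = 0.
def Fx(mu):
    return np.array([[mu, 1.0],
                     [0.0, -1.0]])

# Eigenvector and adjoint eigenvector for the zero eigenvalue at mu = 0,
# as computed at the "previously found" saddle-node point.
phi = np.array([1.0, 0.0])   # Fx(0) @ phi = 0
psi = np.array([1.0, 1.0])   # Fx(0).T @ psi = 0

def s_of(mu):
    # Bordered system (79); the bordered matrix stays nonsingular even
    # where Fx itself drops rank.
    M = np.block([[Fx(mu), psi.reshape(2, 1)],
                  [phi.reshape(1, 2), np.zeros((1, 1))]])
    return np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))[-1]

assert abs(s_of(0.0)) < 1e-12          # s vanishes at the saddle-node
assert s_of(0.1) * s_of(-0.1) < 0.0    # and changes sign through it
print("bordered test function s tracks the zero eigenvalue")
```

Unlike det(Fx), the derivatives of s follow from (80) without differentiating a determinant numerically, which is the point of the construction.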
The system for computing a curve of Bogdanov-Takens points is a natural extension
of the system defined for saddle-node bifurcations. For the saddle-node, we defined a test
function s(x, µ) that vanishes when Fx(x, µ) has a zero eigenvalue. Extending this idea,
we can define two test functions s1(x, µ) and s2(x, µ) that both vanish when Fx(x, µ) has a
double zero eigenvalue. The defining system for continuing a Bogdanov-Takens point is then
F(x, µ) = 0,
s1(x, µ) = 0,
s2(x, µ) = 0, (84)
〈t, n(x, µ)〉 = 0.
Before going into details on computing s1 and s2, we define some nomenclature. As in (33)
- (36), the eigenvectors and adjoint eigenvectors of Fx satisfy
FxΦ1 = 0,  FxΦ2 = Φ1,
F∗xΦ∗2 = 0,  F∗xΦ∗1 = Φ∗2,
where the dependence of Fx on x and µ has been dropped for clarity. We also define the
matrix M as
M ≡ [ Fx(x, µ)   Φ∗2 ]
    [ Φ1ᵀ        0   ].  (85)
The test functions s1(x, µ) and s2(x, µ) can be computed by solving the systems
M [ q1 ]   [ 0 ]
  [ s1 ] = [ 1 ],  (86)
M [ q2 ]   [ Φ1 ]
  [ s2 ] = [ 0  ],  (87)
and their derivatives are given by
s1z = −⟨w1, Fxz(x, µ)q1⟩,  (88)
s2z = −⟨w2, Fxz(x, µ)q1⟩ − ⟨w1, Fxz(x, µ)q2⟩,  (89)
where z is a phase variable or parameter. The vectors w1 and w2 can be computed by solving
Mᵀ [ w1 ]   [ 0 ]
   [ s1 ] = [ 1 ],  (90)
Mᵀ [ w2 ]   [ Φ∗2 ]
   [ s2 ] = [ 0   ].  (91)
Using these derivatives, the Jacobian of the defining system (84) can be computed and used
to construct the tangent vector system.
4.3 Software Packages
The continuation of equilibrium solutions, saddle-node bifurcations, and Bogdanov-Takens
bifurcations are only a small part of numerical bifurcation analysis. There is also the de-
tection and continuation of other local bifurcations, such as a Hopf bifurcation. One topic
we haven’t even mentioned is the continuation of periodic solutions and the detection of
their bifurcations. Therefore, it would be useful if there were easy to use software packages
that could perform such continuations. Luckily, there are. In this report, we used two such
packages: AUTO [3] and MATCONT [2].
AUTO is a UNIX based software package implemented in the C and Python programming
languages. It can continue and detect codimension one bifurcations of steady state equilibria
and periodic solutions. Further, AUTO allows the continuation of periodic solutions with
infinite period. Such solutions are called homoclinic orbits. While continuing homoclinic
orbits, AUTO can detect several types of bifurcations, and allows these bifurcations to be
continued in two or more parameters.
MATCONT is a user friendly MATLAB based software package, with full graphical user
interfaces. In addition to continuing and detecting codimension one bifurcations, MATCONT
can compute the normal form coefficients and detect higher codimension bifurcations, such
as the Bogdanov-Takens bifurcation. In recently updated versions, MATCONT has become
well equipped with tools to analyze homoclinic orbits and their bifurcations.
5 Dynamics
In this section we discuss the dynamics of the two compartment neuron. We begin with
an explanation of the two main states of a neuron: a resting state and a state of
repetitive firing. We then present the results obtained
from performing a bifurcation analysis on the neuron model. We first show the bifurcation
structure when there is no dendritic influence, and then show the new bifurcation structure
with dendritic influence. We compare and discuss the changes, and then show how higher
dimensional bifurcation diagrams can be used to explain the dynamical transitions that
occur.
5.1 Neuron Behaviour
Before going into the bifurcation analysis, it is useful to discuss the states in which our
model neuron can exist. One of the states is a quiescent state, where the potential difference
across the cell membrane depolarizes (becomes less negative) until a threshold potential is
reached and an action potential occurs. Recall that an action potential is a large spike in
the membrane potential. Following this action potential is a large hyperpolarization, where
the membrane potential becomes very negative. The neuron then slowly recovers and rests
at an equilibrium potential. This behaviour can be seen in Figure 8.
The second state in which the neuron can exist is a repetitive firing state. Instead of
the neuron recovering to an equilibrium potential, there is sufficient current being injected
into the soma that the potential continuously depolarizes above the threshold potential,
generating multiple action potentials and hyperpolarizations. Figure 9 shows this type of
behaviour. Both Figure 8 and Figure 9 are numerical solutions to the neuron equations (18)
- (19) that were computed using MATLAB.
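Since equations (18)-(19) are not reproduced in this section, the following sketch uses the classic FitzHugh-Nagumo model as a stand-in excitable system (it is not the report's two-compartment model) to illustrate the same two states: quiescence for small applied current, repetitive firing for larger applied current.

```python
# FitzHugh-Nagumo stand-in (not the report's model): v is a membrane-like
# variable, w a recovery variable, I the applied current.
def fhn(v, w, I, a=0.7, b=0.8, eps=0.08):
    return v - v ** 3 / 3 - w + I, eps * (v + a - b * w)

def spike_count(I, t_end=400.0, dt=0.01, transient=100.0, thresh=1.0):
    """Integrate with forward Euler and count upward threshold crossings
    after an initial transient."""
    v, w, spikes, prev = -1.0, 1.0, 0, -1.0
    for k in range(int(t_end / dt)):
        dv, dw = fhn(v, w, I)
        v, w = v + dt * dv, w + dt * dw
        if (k + 1) * dt > transient and prev <= thresh < v:
            spikes += 1
        prev = v
    return spikes

print(spike_count(0.0))   # quiescent: settles to a stable steady state
print(spike_count(0.5))   # repetitive firing: a stable periodic orbit
```

The only difference between the two calls is the applied current, mirroring the observation below that only Iapp differs between Figures 8 and 9.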
There are mathematical analogues to the equilibrium potential and the repetitive firing
that are reached after some time has passed. The equilibrium potential that is approached
in a quiescent neuron corresponds to a stable steady state solution of the neuron equations.
Because it is stable, if the solution is perturbed from the equilibrium value, it will return
to the equilibrium value. More specifically, the solution is attracting. This explains why
the somatic potential eventually approaches this equilibrium voltage. The repetitive firing
state can be considered a stable periodic solution. A periodic solution is fundamentally
different from a steady state solution. A steady state solution is a single value, whereas
a periodic solution cycles through the same values forever. This can be seen in Figure 9,
where after a short time, the somatic potential cycles through the same values. Like steady
state solutions, periodic solutions can be stable or unstable, and can attract (or repel) other
Figure 8: A neuron in a quiescent state. On the x-axis is time in milliseconds and on the y-axis is the
somatic membrane potential in millivolts. The neuron undergoes a brief action potential and
hyperpolarization, and recovers to an equilibrium potential. This equilibrium potential can be considered a
stable steady state solution to the neuron equations.
Figure 9: Repetitive firing. There is a sufficiently large amount of applied current being injected into
the soma that after the hyperpolarization phase, the potential depolarizes above the threshold and another
action potential occurs. This corresponds to the mathematical neuron approaching a stable periodic solution.
nearby solutions. This can also be seen in Figure 9. It takes about two action potentials for
the solution to approach the true periodic solution.
As mentioned above, repetitive firing occurs when there is enough current being injected
into the soma that the membrane potential depolarizes above the spiking threshold instead
of slowly recovering to an equilibrium voltage. Therefore, the neuron should change from
a quiescent state to a repetitive firing state if the amount of applied current is increased.
This is indeed what happens, both in experiments and in our neuron model. In fact, the
only difference between the equations used to produce Figure 8 and Figure 9 is the value
of Iapp. With this information, it is reasonable to expect that there is a critical value
of the applied current where both the physiological and mathematical neuron undergo a
transition from quiescence to repetitive firing. This critical value and transition in neuron
dynamics corresponds to a bifurcation. Performing a bifurcation analysis would be useful in
determining this critical value and the particular bifurcation it corresponds to. Moreover, a
bifurcation analysis might also reveal additional regions of new neuron dynamics.
5.2 Without Dendritic Influence
We will first consider a neuron that is not under the influence of the dendritic compartment,
that is, we set gC = 0. As mentioned above, this also corresponds to a one compartment
model. To determine information about how the neuron responds to a wide range of ap-
plied currents, a one parameter bifurcation analysis will be performed with Iapp being the
primary bifurcation parameter. This analysis should reveal when the transition from
quiescence to repetitive firing occurs and which bifurcation is responsible.
A useful way to understand a bifurcation analysis is to construct a bifurcation diagram.
In the one parameter case, the bifurcation diagram usually has the active parameter on the
x-axis and a variable of the dynamical system on the y-axis. A vertical line on the diagram
(i.e. a fixed value of the parameter) indicates which types of solutions exist, their stability,
and where they are located for that given parameter value. By changing the parameter value,
the diagram shows how these solutions evolve and the role they play in any bifurcations which
may occur.
The one parameter bifurcation diagram for the neuron without dendritic influence can
be seen in Figure 10. On the x-axis is the applied current, Iapp, and the variable we chose for
the y-axis is the somatic potential, VS. As the diagram shows, the one compartment neuron
model undergoes many bifurcations. There are two saddle-node bifurcations (one of them is
part of the SNIC, see below), which give the curve of steady state equilibria its ‘S’ shape.
More important, however, are the saddle-node on invariant cycle (SNIC) bifurcation and
Figure 10: A bifurcation diagram for a neuron without dendritic influence. There are distinct regions of
quiescent and repetitive firing behaviour. The transitions from quiescence to repetitive firing occur via a
SNIC bifurcation or a supercritical Hopf bifurcation.
the supercritical Hopf bifurcation, which not only mark the transitions from quiescence to
repetitive firing, but separate these behaviours into distinct regions of applied current values.
A saddle-node bifurcation is a local bifurcation which occurs when two steady-state equi-
librium solutions coalesce to a single equilibrium and then annihilate each other. Another
common name for this bifurcation is a fold bifurcation. This is because it looks as if the
steady state equilibrium is ‘folding’ back onto itself. At the bifurcation point, the Jacobian
has a single zero eigenvalue, and thus there exists a one dimensional center manifold. How-
ever, the saddle-node bifurcation can be part of a more complicated global bifurcation, called
a saddle-node on invariant cycle (SNIC) bifurcation. The SNIC bifurcation begins with two
steady-state equilibria, as in the regular saddle-node bifurcation, except in a SNIC some of
the stable manifolds of one equilibrium are connected to the unstable manifolds of the other.
These connections are global properties called heteroclinic connections. As the two equilib-
ria move closer to each other, they coalesce into a single equilibrium, again, the same as a
regular saddle-node. A major difference now occurs at the bifurcation point of a SNIC. The
previously existing heteroclinic connections cause the center manifold to form a closed loop,
creating a periodic solution that has zero frequency. The low frequency can be attributed to
the single steady equilibrium that lies on the periodic solution. As the bifurcation continues,
Figure 11: Progression of a SNIC bifurcation (shown for Iapp = −1, 0.3, and 5). The xi represent steady
state equilibrium solutions, W^c is the center manifold, and C represents a periodic solution with nonzero
frequency. The values of the applied current are taken from the bifurcation diagram in Figure 10.
the single equilibrium vanishes, causing the periodic solution to have a nonzero, but very
small, frequency. Figure 11 illustrates the SNIC bifurcation. It is important to emphasize
that the SNIC and saddle-node are locally equivalent. Therefore, a local bifurcation analysis
portrays a SNIC as a regular saddle-node bifurcation; the distinguishing global properties
of a SNIC are not seen. A SNIC bifurcation can be detected by the numerical
continuation of a periodic solution: one would look for the termination of a periodic solution
with low frequency at a saddle-node bifurcation. Another method would be to look for the
heteroclinic connections which bind two colliding equilibria.
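The zero-frequency character of the SNIC can be illustrated with the standard scalar normal form of a saddle-node on a circle (an illustration of the mechanism, not the neuron model): the period of the emerging cycle diverges as the bifurcation is approached.

```python
import numpy as np

# On the circle, theta' = I - cos(theta): for I < 1 two equilibria sit on
# the invariant cycle; at I = 1 they collide; for I > 1 a periodic solution
# exists with period T(I) = 2*pi / sqrt(I**2 - 1), which diverges as I -> 1+
# (arbitrarily low firing frequency).
def period(I, n=200001):
    theta = np.linspace(0.0, 2 * np.pi, n)
    f = 1.0 / (I - np.cos(theta))
    # trapezoid rule for T = integral of d(theta) / theta'
    return np.sum((f[:-1] + f[1:]) / 2) * (theta[1] - theta[0])

for I in (1.5, 1.1, 1.01):
    print(I, period(I), 2 * np.pi / np.sqrt(I ** 2 - 1))
```

The numerically integrated period matches the analytic formula and grows without bound as I decreases toward the bifurcation value, which is exactly the Type I signature discussed next.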
The dynamics of the SNIC bifurcation, in particular the transition from quiescence
to repetitive firing with arbitrarily low firing rates, classify the neuron as Type I
excitable. The classification of neurons (actually, axons) based on their excitability originates
from a paper published by Hodgkin in 1948 [5]. In this paper, Hodgkin uses experimental
data as a premise to separate axons into two distinct classes based primarily on their firing
rates at the onset of repetitive firing. Class I axons begin repetitive firing with arbitrarily
low firing rates whereas Class II axons begin repetitive firing with a characteristic (nonzero)
frequency. As mathematical analogues to Class I and Class II axons/neurons, Rinzel and
Ermentrout [10] introduced Type I and Type II excitable neurons. Like Class I neurons,
a Type I neuron begins repetitive firing with arbitrarily low frequencies, thus implying a
transition from quiescence to repetitive firing via a homoclinic bifurcation (see below), such
as a SNIC. For a Type II neuron, the transition would be due to a bifurcation involving
periodic solutions of nonzero frequency, such as a Hopf or a saddle-node of cycles (see below).
Using the classification scheme established by Rinzel and Ermentrout, we can use the neuron's
excitability type to qualitatively describe the neuron's response to an applied current.
Another important bifurcation is the supercritical Hopf bifurcation, which can be thought
of as the point where repetitive firing ends. In a Hopf bifurcation, the Jacobian of the
system has a pair of complex conjugate eigenvalues with zero real part, creating a two
dimensional center manifold. On this center manifold lies a small periodic solution that
either grows or shrinks depending on the direction in which the bifurcation parameter is
varied. The stability of the emerging periodic solution can be determined by normal form
like coefficients called Lyapunov coefficients. A supercritical Hopf gives rise to stable periodic
solutions and a subcritical Hopf creates unstable periodic solutions. In addition to the birth
of periodic solutions, the steady state equilibrium which undergoes the bifurcation changes
stability. This is a result of the eigenvalues crossing the imaginary axis: the real part of
the complex conjugate pair of eigenvalues changes sign. Of all the bifurcations which create
or destroy periodic solutions, the Hopf bifurcation is the simplest to detect and analyze
because it is purely local, unlike other bifurcations such as the SNIC. In addition, the small
periodic solution which is born can be well approximated and used as a starting point for
numerical continuation of periodic solutions. This allows bifurcations of periodic solutions
to be detected, which sheds more light on the overall dynamics of a system or model.
Physiologically, the Hopf bifurcation which occurs for relatively large values of Iapp reflects
the fact that too much applied current will force the ions into electrostatic equilibrium,
preventing the neuron from firing repeatedly. This behaviour is not limited to the Type
I neuron seen in our model without any influence from the dendritic compartment. Hopf
bifurcations, both subcritical and supercritical, can terminate repetitive firing in Type I and
Type II neurons.
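The birth of a small stable periodic solution at a supercritical Hopf can be illustrated with the Hopf normal form (again an illustration, not the neuron model): for µ > 0 the cycle has amplitude √µ, and its sign-definite cubic term plays the role of the first Lyapunov coefficient.

```python
import numpy as np

# Supercritical Hopf normal form in Cartesian coordinates:
#   x' = mu*x - y - x*(x**2 + y**2),  y' = x + mu*y - y*(x**2 + y**2).
# For mu > 0 the origin is unstable and a stable periodic solution of
# amplitude sqrt(mu) is born.
def final_radius(mu, x=0.01, y=0.0, dt=0.001, t_end=200.0):
    for _ in range(int(t_end / dt)):
        r2 = x * x + y * y
        dx = mu * x - y - x * r2
        dy = x + mu * y - y * r2
        x, y = x + dt * dx, y + dt * dy   # forward Euler step
    return np.hypot(x, y)

mu = 0.25
print(final_radius(mu), np.sqrt(mu))  # radius approaches sqrt(mu) = 0.5
```

Unlike the SNIC, the emerging cycle here has a nonzero frequency at onset, which is the Type II signature mentioned above.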
5.3 With Dendritic Influence
We now leave the one compartment neuron and focus on the dynamics of the two com-
partment model. To start the analysis, the same one parameter bifurcation diagram will
be produced with the applied current being the active parameter. In this one parameter
bifurcation analysis, we use gC = gD = 2 for the dendrite related parameters. The resulting
bifurcation diagram can be seen in Figure 12.
From the bifurcation diagram, it is clear that the two compartment neuron has different
dynamics, especially at the onset of repetitive firing. A major difference is that no saddle-node bifurcations occur and the curve of steady state solutions is relatively flat. The absence of saddle-node bifurcations also means that no SNIC occurs; therefore, a different bifurcation must be
responsible for the transition from quiescence to repetitive firing. Indeed, stable repetitive
firing occurs via a saddle-node of cycles on the branch of unstable cycles that is born from
the subcritical Hopf bifurcation.
[Figure 12: plot of VS [mV] versus Iapp [µA/cm2], with the subcritical Hopf, supercritical Hopf, saddle-node of cycles, and region of bistability labelled; legend: stable steady state, unstable steady state, stable periodic, unstable periodic.]
Figure 12: The bifurcation diagram for a neuron with dendritic influence. The regions of quiescence and repetitive firing are similar, but the dynamics near the onset of repetitive firing are quite different. In this case, repetitive firing begins via a saddle-node of cycles and there is a region of bistability.
[Figure 13: three panels at Iapp = 13, Iapp = 13.9, and Iapp = 14.2.]
Figure 13: Progression of a saddle-node of cycles. Values of the applied current are taken from the bifurcation diagram in Figure 12.
In phase space, which is the space spanned by all the variables of the system of differential equations2, periodic solutions are closed orbits, or cycles. A saddle-node of cycles
is the periodic analogue to a regular saddle-node bifurcation. In a saddle-node of cycles,
two periodic solutions move closer together until they coalesce into a single periodic solution
and then annihilate each other, similar to the two steady state solutions annihilating each
other in a regular saddle-node bifurcation. Figure 13 shows a schematic diagram. An
important difference between the SNIC bifurcation and the saddle-node of cycles scenarios is
the frequency of the periodic solution at transition from quiescence to repetitive firing. The
cycles that are born at a SNIC bifurcation have arbitrarily low frequencies. Consequently,
the frequency at onset of repetitive firing is arbitrarily low. This corresponds to Type I
excitability. At a saddle-node of cycles, however, the cycles have nonzero frequency, and
thus, at the onset of repetitive firing, the neuron fires with a characteristic frequency. This
corresponds to a Type II excitable neuron. Below we will discuss in detail the transition
from a Type I excitable neuron to a Type II excitable neuron that occurs as the parameter
gC is increased.
In the Type II neuron, the saddle-node of cycles and the subcritical Hopf bifurcation
have a large impact on the behaviour of the neuron near the onset of repetitive firing. As
mentioned above, the stability of a steady state solution changes at a Hopf bifurcation.
Thus, the single steady state solution remains stable until the subcritical Hopf bifurcation.
Furthermore, the stable periodic solution, which corresponds to the repetitive firing, exists
until the saddle-node of cycles. Because of the subcritical nature of the Hopf bifurcation,
the saddle-node of cycles occurs at a lower value of applied current than the Hopf bifurca-
tion. Therefore, in the region between the saddle-node of cycles and the subcritical Hopf
bifurcation, a stable steady state solution coexists with a stable periodic solution. This re-
gion is called bistable because of the coexisting stable solutions. The coexistence of a stable
steady state solution and a stable periodic solution in the region of bistability implies that
2For the neuron equations, phase space is the {VS, h, n, VD} space.
[Figure 14: plot of VD(0) [mV] versus VS(0) [mV], both ranging from −60 to −40.]
Figure 14: Basins of attraction for the stable steady state and stable periodic solution. If the initial values of the somatic and dendritic potentials are in the blue region, the neuron will approach the stable steady state solution, corresponding to quiescence. If the initial values of the potentials are in the red region, the neuron will approach a stable periodic solution, corresponding to repetitive firing. Thus, the state of the neuron is dependent on the initial conditions. The initial values for the activation variables are h(0) = n(0) = 0.2.
the neuron has the possibility to exist in either a quiescent state or repetitive firing state for
a given value of the applied current. If the value of the applied current is in this region of
bistability, the behaviour of the neuron can be determined by the type of stable solution that
is approached in the long time limit. This generally depends on the initial state, or initial
conditions, of the neuron. The set of all initial conditions that are attracted to a particular
stable solution is called a basin of attraction (see Figure 14).
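A basin-of-attraction picture like Figure 14 can be computed by integrating the model from a grid of initial conditions and recording which attractor each trajectory reaches. The sketch below does this for a planar amplitude-equation toy model that is bistable between a stable equilibrium and a stable limit cycle; the system, parameter values, and classification threshold are illustrative stand-ins for the four dimensional neuron equations.

```python
import numpy as np

MU, OMEGA = -0.2, 1.0   # bistable: stable origin coexists with a stable cycle

def rhs(v):
    # planar system r' = r*(MU + r^2 - r^4), written in Cartesian form,
    # with rotation rate OMEGA
    x, y = v
    r2 = x * x + y * y
    a = MU + r2 - r2 * r2
    return np.array([a * x - OMEGA * y, OMEGA * x + a * y])

def attractor(x0, y0, dt=0.02, t_end=100.0):
    """Integrate from (x0, y0); report 'rest' if the trajectory settles at the
    equilibrium and 'cycle' if it settles on the stable limit cycle."""
    v = np.array([x0, y0], float)
    for _ in range(int(t_end / dt)):
        v = v + dt * rhs(v)          # forward Euler is enough for a sketch
    return 'rest' if np.hypot(*v) < 0.3 else 'cycle'

# classify a coarse grid of initial conditions, as in Figure 14
grid = [(x0, y0) for x0 in np.linspace(-1, 1, 5) for y0 in np.linspace(-1, 1, 5)]
basins = [(p, attractor(*p)) for p in grid]
```

Initial conditions inside the unstable cycle (radius about 0.53 for these parameters) fall into the basin of the rest state; those outside approach the firing (cycle) state.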
Another interesting feature that results from the bistability is hysteresis, where the tran-
sition from quiescence to repetitive firing occurs at a different value of the applied current
depending on whether the applied current is being increased or decreased. If the neuron
begins in a quiescent state with Iapp < 13.9, and the applied current is increased, the neuron
will remain in the quiescent state until the subcritical Hopf, where the neuron enters a repet-
itive firing state. Likewise, if the neuron begins in a repetitive firing state and the applied
current is decreased, the neuron will remain in a repetitive firing state until the saddle-node
of cycles. Because the Hopf bifurcation and saddle-node of cycles occur at different values
of the applied current, the quiescence to repetitive firing transition can occur at different
values of the applied current. This type of behaviour can be seen in Figure 15.
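The hysteresis loop in Figure 15 can be reproduced qualitatively with a quasi-static parameter sweep: relax the system at each parameter value, carry the final state forward, and compare increasing versus decreasing sweeps. The sketch below does this for the radial normal form of a subcritical Hopf with a stabilizing quintic term, r' = r(β + r² − r⁴), which has the same bistable structure (rest state, unstable cycle, stable cycle); all names and values are illustrative, not the neuron model itself.

```python
import numpy as np

def settle(r, beta, dt=0.02, t_end=400.0):
    """Relax the firing amplitude under dr/dt = r*(beta + r^2 - r^4),
    the radial equation of a subcritical Hopf with quintic saturation."""
    for _ in range(int(t_end / dt)):
        r = r + dt * r * (beta + r * r - r**4)
    return r

def sweep(betas, noise=0.05):
    """Quasi-statically sweep the parameter, carrying the state forward.
    The small 'noise' floor stands in for perturbations off the rest state."""
    amps, r = [], 0.0
    for b in betas:
        r = settle(max(r, noise), b)
        amps.append(r)
    return amps

betas = np.linspace(-0.3, 0.1, 41)
up = sweep(betas)                 # increasing 'applied current'
down = sweep(betas[::-1])         # decreasing
```

On the way up the amplitude jumps near β = 0 (the subcritical Hopf); on the way down the firing state persists until β = −1/4 (the saddle-node of cycles), so the two sweeps disagree throughout the bistable window.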
5.4 Dynamical Transitions Via Higher Codimension Bifurcations
As we have seen, the dynamics of a neuron with and without dendritic influence are very
different, especially at the onset of repetitive firing. We now wish to investigate, in detail,
how the dynamics of the neuron without dendritic influence transform into the dynamics of
a neuron with dendritic influence. One possible way to do this would be to make small incre-
ments in gC and then compute the corresponding one dimensional bifurcation diagram with
Iapp being the active parameter. A major problem with this approach is that codimension
two bifurcations, bifurcations which are single points in two dimensional parameter space,
would most likely be missed because the second parameter, gC, would be changing in
discrete increments. This problem can be overcome by the construction of a two parameter
bifurcation diagram.
Given a value of gC, a one parameter bifurcation diagram can be constructed that depicts
codimension one bifurcations and their corresponding value of Iapp. These codimension one
bifurcations can be used as a starting point for a two parameter continuation with gC and Iapp
both being the active parameters, allowing the codimension one bifurcations that occur in one
parameter bifurcation diagrams to be tracked as the dendritic influence, gC, is continuously
changing. The resulting two parameter continuation traces out a curve of codimension
one bifurcations in a two dimensional parameter space. Along these curves, codimension
two bifurcations might be detected. Similar to how codimension one bifurcations mark a
[Figure 15: plot of VS [mV] versus Iapp [µA/cm2].]
Figure 15: Hysteresis in a two compartment neuron model. The arrows show the stable solution that will be approached as the applied current is changing. Vertical arrows show the jump from one type of stable solution to another, corresponding to the quiescence to repetitive firing transition. Depending on whether the applied current is increasing or decreasing, this transition occurs at different values of the applied current.
dynamical transition in the time series of a system, codimension two bifurcations mark the
point where there is a transition in the one parameter bifurcation structure. The overall
result of the two parameter continuation can be visualized in a two parameter bifurcation
diagram, with the two active parameters on the axes. Our two parameter bifurcation diagram
will have the applied current on the x-axis and the dendritic influence on the y-axis. By
holding the value of gC fixed, the two parameter bifurcation diagram will show what types
of codimension one bifurcations take place and the value of Iapp where they occur. With
this information, one can infer what the corresponding one parameter bifurcation diagram
will look like for that particular value of gC. This can be done for any value of gC that is
contained in the two parameter diagram. In summary, a two parameter continuation and
the construction of a two parameter bifurcation diagram are extremely useful in depicting
the continuous evolution of the one parameter bifurcation diagrams while gC is changing.
Furthermore, any codimension two bifurcations which occur represent points where the one
parameter bifurcation diagrams are undergoing a transition from one form to another, thus
implying further changes in the dynamics of a neuron.
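The mechanics of a two parameter continuation can be illustrated on a scalar caricature. A saddle-node of f(x; Iapp, gC) satisfies the extended system f = 0, ∂f/∂x = 0; solving it by Newton's method for each value of the second parameter, seeded with the previous solution, traces out the fold curve. The sketch below does this for the toy right-hand side f(x; I, g) = I + g·x − x³ (our own stand-in, not the neuron equations).

```python
import numpy as np

def extended(z, g):
    # extended system for a saddle-node of f(x; I, g) = I + g*x - x^3:
    #   f(x, I) = 0  and  df/dx = g - 3*x^2 = 0, with unknowns z = (x, I)
    x, I = z
    return np.array([I + g * x - x**3, g - 3 * x**2])

def extended_jac(z, g):
    # Jacobian of the extended system with respect to (x, I)
    x, _ = z
    return np.array([[g - 3 * x**2, 1.0],
                     [-6 * x,       0.0]])

def continue_fold(g_values, z0):
    """Track a curve of saddle-node bifurcations as g varies, using the
    previous fold point as the Newton starting guess for the next one."""
    curve, z = [], np.array(z0, float)
    for g in g_values:
        for _ in range(20):                           # Newton iteration
            z = z - np.linalg.solve(extended_jac(z, g), extended(z, g))
        curve.append((g, z[0], z[1]))                 # (g, x_fold, I_fold)
    return curve

fold_curve = continue_fold(np.linspace(3.0, 0.5, 26), z0=(1.0, -2.0))
```

For this toy system the fold curve is I = −2(g/3)^(3/2); the two fold branches (x > 0 and x < 0) meet at a cusp at (I, g) = (0, 0), mirroring the cusp point in Figure 16. Continuation software automates the same idea for full models.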
The two parameter bifurcation diagram that we obtain, with gD = 2, is shown in Figure
16. The diagram illustrates the evolution of familiar codimension one bifurcations, such as
saddle-nodes, Hopf bifurcations, and the saddle-node of cycles. In addition, there are curves
of homoclinic bifurcations. Although homoclinic bifurcations will be discussed in more detail
below, we will give a brief description here. A homoclinic bifurcation can be thought of as
the point where a periodic solution collides with a steady state solution, causing the periodic
solution to end with an infinite period (zero frequency). The SNIC bifurcation that was
observed in the neuron without dendritic influence is classified as a homoclinic bifurcation.
In the diagram, the bifurcations of codimension two are depicted by points. We will see that
the normal form of each codimension two bifurcation accounts for certain codimension one
bifurcations seen in the two parameter diagram and the individual one parameter bifurcation
diagrams.
The first codimension two bifurcation that will be discussed is the cusp bifurcation. As
can be seen from the two parameter bifurcation diagram, the cusp bifurcation occurs when
two saddle-node bifurcations coalesce with each other. Like the saddle-node bifurcation, a
cusp also has a zero eigenvalue. However, the saddle-node bifurcation has a nonzero quadratic
normal form coefficient. This term vanishes in the cusp bifurcation and the normal form
becomes cubic. An analysis of the normal form equation reveals that there should be a
curve of saddle-node bifurcations that emerge from the cusp bifurcation, which is indeed
what occurs in the neuron system. However, the normal form of the cusp does
not say anything about the other bifurcations which occur along the curve of saddle-node
[Figure 16: plot of gC [mS/cm2] versus Iapp [µA/cm2] showing curves of saddle-node, Hopf, homoclinic, and saddle-node of cycles bifurcations; labelled points: BT, Cusp, and two NCHSN points, with GH and a further NCHSN point visible in the inset.]
Figure 16: A two parameter bifurcation diagram, with the applied current, Iapp, and the soma/dendrite conductance, gC, being the active parameters. The inset is the same figure over a wider range of parameters.
bifurcations. This is a result of the Center Manifold and Normal Form theorems only being
valid in a neighbourhood close to the cusp bifurcation.
The cusp bifurcation separates regions of parameter space where either two saddle-node
bifurcations exist or no saddle-node bifurcations exist. In the region where there are two
saddle-node bifurcations, the one parameter bifurcation diagram will have a curve of steady
state equilibria that ‘folds’ back on itself twice, thus creating an ‘S’ shaped curve. Furthermore, the small region between the saddle-node bifurcations will contain three coexisting
steady state equilibria. These dynamics occur in the one parameter bifurcation diagram
of a neuron without dendritic influence (Figure 10). Consider now the region where no
saddle-node bifurcations exist. The one parameter bifurcation diagram will have a relatively
flat curve of steady state equilibria, and by the normal form of the cusp, there will only
be one steady state solution for a given parameter value (a given value of Iapp). These are
the dynamics seen in the neuron with dendritic influence (Figure 12). Therefore, the cusp
bifurcation represents an important transition in the steady state bifurcation structure of a
two compartment neuron, denoting the point where the curve of steady state equilibria loses
its ‘S’ shape.
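This transition can be checked directly on the cusp normal form ẋ = β1 + β2·x − x³: inside the wedge 27β1² < 4β2³ there are three equilibria (the 'S' shaped regime), while outside there is only one (the flat regime). A small sketch (illustrative, not the neuron model):

```python
import numpy as np

def n_equilibria(b1, b2):
    """Number of real equilibria of the cusp normal form x' = b1 + b2*x - x^3."""
    roots = np.roots([-1.0, 0.0, b2, b1])   # roots of -x^3 + b2*x + b1
    return int(np.sum(np.abs(roots.imag) < 1e-9))

def inside_cusp_region(b1, b2):
    """True inside the cuspidal wedge, where three equilibria coexist."""
    return 27.0 * b1**2 < 4.0 * b2**3

three = n_equilibria(0.0, 1.0)    # inside the wedge: 'S' shaped branch
one = n_equilibria(0.0, -1.0)     # outside the wedge: flat branch
```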
The next codimension two bifurcation that will be discussed is the Bogdanov-Takens bi-
furcation, denoted as BT on the two parameter diagram. As discussed already, the Bogdanov-
Takens bifurcation occurs when the Jacobian has a double zero eigenvalue. In terms of the
curve of saddle-nodes, the Bogdanov-Takens bifurcation is a point where the dimensionality
of the center manifold increases. Thus, the normal form becomes two dimensional, as pre-
viously discussed. The normal form not only contains a curve of saddle-node bifurcations,
but it also contains a curve of Hopf bifurcations and homoclinic bifurcations. The Hopf bi-
furcations that emerge from the Bogdanov-Takens point correspond to the subcritical Hopf
bifurcation which occurs in the Type II neuron with dendritic influence. The origin of the
homoclinic bifurcation is a bit more complicated. The homoclinics which emerge from the
Bogdanov-Takens bifurcation are in the region below the cusp, therefore two saddle-node
bifurcations exist and the steady state solution curve is ‘S’ shaped. Furthermore, the Hopf
bifurcations which emerge from the Bogdanov-Takens are always in between the saddle-node
bifurcations and occur on the bottom of the ‘S’. The periodic solutions which emerge from this Hopf would otherwise grow until a saddle-node of cycles, but they cannot get around the bend caused by the ‘S’ shape of the steady state solution curve, and so they collide with a steady state solution. This collision causes the periodic solution to acquire an infinite period, the trademark of a homoclinic bifurcation. These complicated dynamics can
be seen in Figure 17, which is a one parameter bifurcation diagram that takes place in the
region between the Bogdanov-Takens and cusp bifurcation. In particular, the value of gC is
0.9.
[Figure 17: plot of VS [mV] versus Iapp [µA/cm2], with the saddle-node of cycles, SNIC, subcritical Hopf, and homoclinic bifurcations labelled.]
Figure 17: A close up of the one parameter bifurcation diagram with gC = 0.9. This value of gC is in a region between the Bogdanov-Takens and cusp bifurcations. The subcritical Hopf and homoclinic bifurcation are born from the Bogdanov-Takens bifurcation.
This one parameter bifurcation diagram not only reveals the subcritical Hopf and homoclinic
bifurcations which emerge from the Bogdanov-Takens, but also provides information about how
the dynamics of the Type I neuron without dendritic influence (gC = 0) transform into the
dynamics of the Type II neuron with dendritic influence (gC = 2). This is possible because
the one parameter bifurcation diagram with gC = 0.9 contains many of the features seen
in the one parameter bifurcation diagrams when gC = 0 and gC = 2. The curve of ‘S’
shaped steady state solutions and the SNIC bifurcation are features of the neuron without
dendritic influence, but the subcritical Hopf, saddle-node of cycles, and region of bistability
are dynamics of the neuron with dendritic influence. Also, if the curve of steady states lost
its ‘S’ shape, the periodic solutions that emerge from the subcritical Hopf might not end in
a homoclinic bifurcation, but rather they would connect to the periodic solutions involved in
the SNIC, creating a bifurcation diagram exactly like Figure 12. Instead of thinking about
the dynamics of a neuron with and without dendrites, we can think of the dynamics in terms
of a Type I or Type II excitable neuron. The coexisting SNIC bifurcation and saddle-node
of cycles, along with the other features described above, indicate that the neuron is close
to the point where its excitability undergoes a transition from one type to another. The
bifurcations which are responsible for the initiation of this transition, in particular, the
subcritical Hopf and the homoclinic, are a direct result of the Bogdanov-Takens bifurcation.
Therefore, the Bogdanov-Takens bifurcation plays a large role in the excitability of a
neuron, even though it is not the defining point where the transition between excitability
type occurs. However, if the intermediate dynamics that occur during the transition are
neglected (such as when gC = 0.9), then the Bogdanov-Takens bifurcation does separate
regions of Type I neurons and Type II neurons. For regions of parameter space below the
Bogdanov-Takens bifurcation, the neuron is Type I excitable, whereas regions above are
Type II excitable. Therefore, knowing where a Bogdanov-Takens bifurcation occurs is not
only valuable for determining how the neuron responds to an applied current for a given set
of parameter values, but it also provides an approximate location as to where this type of
neural response changes.
Along the curve of Hopf bifurcations which emerge from the Bogdanov-Takens bifurca-
tion, another codimension two bifurcation takes place, the generalized Hopf (GH) bifurcation.
Above we discussed the supercritical Hopf bifurcation, which has a negative first Lyapunov
coefficient and gives birth to stable periodic solutions. We also discussed the subcritical Hopf
bifurcation, which has a positive Lyapunov coefficient and gives birth to unstable periodic
solutions. A generalized Hopf occurs as the first Lyapunov coefficient of a regular Hopf
(subcritical or supercritical) goes to zero and then changes sign. Thus, the generalized Hopf
is defined as a point that has a pair of complex conjugate eigenvalues with zero real part
and has a first Lyapunov coefficient with a value of zero. The stability of the periodic solu-
tions which emerge from the generalized Hopf can be determined by computing the second
Lyapunov coefficient. The normal form of the generalized Hopf contains a curve of Hopf
bifurcations. As the Hopf bifurcation passes through the generalized Hopf, it becomes the
other type, for example, a supercritical Hopf becomes a subcritical Hopf. More importantly
however, the normal form contains a curve of saddle-node of cycles and a region of bistabil-
ity involving a stable periodic solution and a stable steady state solution. The introduction
of a saddle-node of cycles classifies the neuron as Type II excitable because it provides a mechanism by which repetitive firing can begin with a nonzero frequency. Once the saddle-node of cycles has occurred, there exists a region of bistability where the neuron can exist
in either a quiescent or repetitive firing state. Again we emphasize that these important
neural dynamics are a result of the generalized Hopf bifurcation. However, the generalized
[Figure 18: plot of VS [mV] versus Iapp [µA/cm2], with two supercritical Hopf bifurcations labelled.]
Figure 18: The one parameter bifurcation diagram with gC = 5.40, which corresponds to a region above the generalized Hopf point. For this value of gC the neuron is still Type II excitable but repetitive firing begins via a supercritical Hopf. Note that there is no longer a region of bistability and the neuron has distinct regions of quiescence and repetitive firing.
Hopf point can also be thought of as a point which terminates these dynamics. As seen in the
two parameter bifurcation diagram, the region above the generalized Hopf bifurcation has no
saddle-node of cycles and also has no bistability. Instead, the neuron has two supercritical
Hopf bifurcations and distinct regions of firing behaviour because of the absence of bista-
bility (see Figure 18). However, if the value of gC is too high, the two supercritical Hopf
bifurcations coalesce and vanish, eliminating any regions of repetitive firing. In this region,
the only solutions which exist are steady state solutions which means the neuron can only
exist in a state of quiescence. It should be noted that the coalescing of the two Hopf bifurcations
is not in any way related to the generalized Hopf bifurcation that occurs nearby.
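The role of the generalized Hopf point can be made concrete with the truncated amplitude equation r' = r(β1 + β2·r² − r⁴), where β2 plays the role of the first Lyapunov coefficient and the second Lyapunov coefficient has been normalized to −1. Nonzero cycles correspond to positive roots of β1 + β2·u − u² with u = r²; counting them reproduces the saddle-node of cycles and the bistable window on the subcritical side. A sketch with illustrative parameter values:

```python
import numpy as np

def cycle_radii(b1, b2):
    """Limit-cycle radii of r' = r*(b1 + b2*r^2 - r^4).
    Nonzero cycles solve b1 + b2*u - u^2 = 0 with u = r^2 > 0."""
    disc = b2 * b2 + 4.0 * b1
    if disc < 0:
        return []                                   # no cycles at all
    u = np.array([(b2 - np.sqrt(disc)) / 2.0,
                  (b2 + np.sqrt(disc)) / 2.0])
    return list(np.sqrt(u[u > 0]))

# subcritical side (b2 > 0): two cycles for -b2^2/4 < b1 < 0, which merge in
# a saddle-node of cycles at b1 = -b2^2/4; supercritical side (b2 < 0): at
# most one cycle and no saddle-node of cycles
two_cycles = cycle_radii(-0.1, 1.0)
no_cycles = cycle_radii(-0.3, 1.0)
one_cycle = cycle_radii(0.1, -1.0)
```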
5.4.1 Homoclinic Bifurcations
Thus far, we have discussed the local codimension two bifurcations and their implications for
the dynamics of the neuron. However, we have only briefly discussed the global bifurcations
that can occur and the curves of homoclinic bifurcations that occur in the two parameter
bifurcation diagram. Furthermore, we have not yet discussed the exact
point where the neuron changes from Type I excitable to Type II excitable or why the SNIC
bifurcations in Figure 10 (gC = 0) and Figure 17 (gC = 0.90) occur at different saddle-node
bifurcations. Before we attempt to explain this behaviour and how it relates to the two
parameter bifurcation diagram, we will provide some useful background information on the
theory of homoclinic bifurcations.
We begin with a mathematical definition. Let x(t) be a solution to the dynamical system
ẋ = F(x), x ∈ Rn, and let x0 denote a steady state equilibrium of this system. The solution
x(t) will be a homoclinic orbit if

lim_{t→−∞} x(t) = lim_{t→+∞} x(t) = x0.
That is, the equilibrium solution will be approached forwards in time and backwards in time.
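A concrete example may help. The planar system ẋ = y, ẏ = x − x² (a standard textbook example, not the neuron model) has a saddle at the origin and an explicit homoclinic orbit x(t) = (3/2) sech²(t/2). The sketch below verifies both defining properties numerically:

```python
import numpy as np

def x_hom(t):
    # explicit homoclinic orbit of x'' = x - x^2 (i.e. x' = y, y' = x - x^2)
    return 1.5 / np.cosh(t / 2.0) ** 2

# the orbit approaches the equilibrium x0 = 0 as t -> +inf and as t -> -inf
assert x_hom(30.0) < 1e-10 and x_hom(-30.0) < 1e-10

# check that x(t) really solves x'' = x - x^2, using central differences
t = np.linspace(-10.0, 10.0, 2001)
h = t[1] - t[0]
x = x_hom(t)
x_dd = (x[2:] - 2 * x[1:-1] + x[:-2]) / h**2   # numerical second derivative
residual = np.max(np.abs(x_dd - (x[1:-1] - x[1:-1] ** 2)))
```

The residual is at the level of the finite-difference error, confirming that the curve is a genuine orbit that begins and ends at the equilibrium.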
Although this may seem abstract, we have already seen such an orbit. At the bifurcation
point of a SNIC (Figure 11), the center manifold forms a homoclinic orbit. Thus, a homoclinic orbit can be thought of as a periodic orbit on which a steady state equilibrium lies. Generally, a homoclinic orbit is created either in a SNIC or in a homoclinic bifurcation
where a periodic solution grows until it collides with a steady state solution, exactly as in
Figure 17. However, the homoclinic orbit in a SNIC and the one generated in Figure 17
differ in one subtle way: the type of steady state equilibrium that is approached as t → ±∞.
The steady state equilibrium in a SNIC is non-hyperbolic; it has a zero eigenvalue. However,
the homoclinic orbit in Figure 17 has a hyperbolic steady state equilibrium: all eigenvalues are nonzero. Thus, homoclinic orbits are divided into two classes: homoclinic orbits to hyperbolic equilibria and homoclinic orbits to non-hyperbolic equilibria. We’ll refer to
the latter as a homoclinic orbit to saddle-node. Curves of homoclinic orbits to saddle-node
bifurcations (SNICs) can be seen in the two parameter bifurcation diagram as the red curves
of homoclinics that lie on top of the curve of saddle-node bifurcations. The other curves
of homoclinic bifurcations in between the saddle-node bifurcations are homoclinic orbits to
hyperbolic equilibria.
As a result of the different types of steady state equilibria from which homoclinic orbits can
begin and end, the shape of the orbit itself differs slightly between homoclinic orbits
to hyperbolic equilibria and homoclinic orbits to saddle nodes. First consider a homoclinic
orbit to hyperbolic equilibrium. Let
λ1 < λ2 < . . . < λcs < 0 < λcu < . . . < λn−1 < λn
be the spectrum of eigenvalues at the hyperbolic steady state equilibrium solution. The two
eigenvalues λcs and λcu are the central stable and central unstable eigenvalues, respectively.
They are the negative and positive eigenvalues with the smallest magnitude. Using these
eigenvalues, we can construct an eigenvalue problem in which the central stable and central
unstable eigenvectors satisfy
LΦcs = λcsΦcs, LΦcu = λcuΦcu,
where, as usual,

L = ∂F(x0)/∂x.
These eigenvectors determine the direction in which the homoclinic orbit departs from and
approaches the steady state equilibrium. The homoclinic orbit departs from and returns to
the equilibrium solution along directions tangent to the central unstable and stable eigenvec-
tors, respectively. Because the central stable and central unstable eigenvectors are linearly
independent, the homoclinic orbit will form a non-smooth loop, see Figure 19 (a).
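Identifying λcs, λcu and their eigenvectors from a computed Jacobian is straightforward with any eigenvalue routine. A sketch (the 4×4 matrix is an arbitrary stand-in for the Jacobian of the neuron equations at a saddle, assuming a real spectrum):

```python
import numpy as np

def central_pair(L):
    """Return the (central stable, central unstable) eigenpairs of a hyperbolic
    saddle: the negative and positive eigenvalues of smallest magnitude,
    together with their eigenvectors."""
    lam, V = np.linalg.eig(L)
    lam, V = lam.real, V.real                 # assume a real spectrum here
    neg = [i for i in range(len(lam)) if lam[i] < 0]
    pos = [i for i in range(len(lam)) if lam[i] > 0]
    i_cs = min(neg, key=lambda i: abs(lam[i]))
    i_cu = min(pos, key=lambda i: abs(lam[i]))
    return (lam[i_cs], V[:, i_cs]), (lam[i_cu], V[:, i_cu])

# toy Jacobian with spectrum {-3, -0.5, 0.8, 2}: lambda_cs = -0.5, lambda_cu = 0.8
L = np.diag([-3.0, -0.5, 0.8, 2.0])
(l_cs, phi_cs), (l_cu, phi_cu) = central_pair(L)
```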
In the case of a homoclinic orbit to saddle-node, there is no central stable or central
unstable eigenvalue: the zero eigenvalue automatically has the smallest magnitude. Thus,
instead of having two linearly independent central eigenvectors, the homoclinic orbit is
tangent to the center eigenvector given by

LΦc = 0.
In fact, the homoclinic orbit is the center manifold and the center eigenvector spans the center
eigenspace; therefore, the homoclinic orbit (center manifold) must be tangent to the center
eigenvector (center eigenspace) by the Center Manifold theorem. Because the homoclinic
orbit departs and returns along the same direction, the homoclinic orbit is smooth, see
Figure 19 (b).
The two parameter bifurcation diagram shows the homoclinic orbits to hyperbolic equi-
libria and homoclinic orbits to saddle-nodes being separated by the codimension two non-
central homoclinic to saddle-node (NCHSN) bifurcation. At this point, the homoclinic orbit
still departs and returns to a saddle-node bifurcation, but instead of departing or returning
along the center eigenvector, it departs or returns along a non-central eigenvector. In the
saddle-node case, the non-central eigenvectors satisfy
LΦnc = λΦnc, λ ≠ 0.
[Figure 19: two schematic phase portraits, (a) and (b), showing orbits through the equilibrium x0 with eigenvectors Φcs, Φcu, and Φc.]
Figure 19: (a) A homoclinic orbit to hyperbolic equilibrium. The vectors Φcs and Φcu are the central stable and central unstable eigenvectors, respectively. These vectors are linearly independent and cause the homoclinic orbit to form a nonsmooth loop. (b) A homoclinic orbit to saddle-node. The vector Φc is the center eigenvector corresponding to the zero eigenvalue. Unlike the homoclinic orbit to hyperbolic equilibrium, the orbit of a homoclinic to saddle-node is smooth. In both figures, x0 represents the steady state equilibrium solution.
Thus, a non-central eigenvector is any eigenvector except the one which corresponds to a
central eigenvalue, which in this case is the zero eigenvalue. Similar to a homoclinic orbit
to hyperbolic equilibrium, the center eigenvector, Φc, and the non-central eigenvector, Φnc,
are linearly independent and the homoclinic orbit becomes non-smooth.
The non-central homoclinic to saddle-node bifurcation we see in the two parameter bifurcation diagram begins with a regular homoclinic orbit to saddle-node. As the parameters are
varied, the homoclinic orbit moves closer to a non-central eigenvector as it makes its return
along the center eigenvector. When the parameters reach the critical values, the homoclinic
orbit makes the switch and returns along the non-central eigenvector. At this point, the
non-central homoclinic to saddle-node bifurcation occurs. As the parameters are varied even
more, we move past the saddle-node bifurcation and two hyperbolic equilibria are born, both
with central stable and central unstable eigenvectors. One of these hyperbolic equilibria will
have a homoclinic orbit. Figure 20 provides a schematic of this non-central homoclinic to
saddle-node bifurcation.
Using the non-central homoclinic to saddle-node bifurcation, we can explain how the
SNIC seen in Figure 10 is related to the SNIC seen in Figure 17. As gC is increased from
zero, a non-central homoclinic to saddle-node bifurcation occurs and the SNIC seen in Figure
10 becomes a homoclinic orbit to hyperbolic equilibrium. This new homoclinic orbit occurs at
one of the three hyperbolic equilibria that exist in between the two saddle-node bifurcations,
as seen in Figure 21. As gC is increased further, another non-central homoclinic bifurcation
occurs and a new SNIC is born that is located on the other saddle-node bifurcation, just like
[Figure 20: three schematic phase portraits for µ < µcritical, µ = µcritical, and µ > µcritical.]
Figure 20: The progression of a non-central homoclinic to saddle-node bifurcation. General parameters of the system are represented by µ ∈ R2 and the bifurcation occurs at µcritical. Steady state equilibria are denoted by xi. Center, central stable, central unstable, and non-central eigenvectors are denoted by Φc, Φcs, Φcu, and Φnc, respectively.
in Figure 17.
An important feature that results from the SNIC moving from the rightmost to the
leftmost saddle-node bifurcation is the existence of a small region of bistability that emerges
right after the non-central homoclinic bifurcation. This region of bistability can be seen in
Figure 21. However, this region of bistability can exist without the saddle-node of cycles,
although it will be very small. Regardless of the size of the region, the existence of bistability
without the saddle-node of cycles implies that bistability is not a feature exclusive to Type II
neurons.
We now address the question of how exactly the neuron goes from being Type I excitable
to Type II excitable. As discussed already, the saddle-node of cycles is crucial for a Type II
neuron as they provide the mechanism that allows for the onset of repetitive firing to occur
with a nonzero frequency. Thus, any bifurcation which is responsible for the creation or
destruction of the saddle-node of cycles could be a candidate for the bifurcation responsible
for this change in neural excitability. An obvious point where this occurs is the generalized
Hopf bifurcation, but our previous analysis shows that the neuron is Type II excitable on
both sides of the bifurcation (before and after it takes place). There is one more bifurcation
involving the saddle-node of cycles, although it is not shown explicitly on the two parameter
bifurcation diagram. Although it might appear on the two parameter bifurcation diagram
that the curve of saddle-node of cycles ends at the non-central bifurcation, it actually continues
beyond that point, moving closer and closer to the homoclinic to hyperbolic bifurcations.
The saddle-node of cycles and the homoclinic to hyperbolic bifurcation become so close together that they can
hardly be distinguished in a one parameter bifurcation diagram. This is the case in Figure 21.
[Figure 21: plot of VS versus Iapp, with the saddle-node of cycles, the homoclinic to hyperbolic, and the coinciding subcritical Hopf and homoclinic to hyperbolic bifurcations labelled.]
Figure 21: One parameter bifurcation diagram with gC = 0.86, showing the homoclinic orbit to hyperbolic equilibrium that exists as the SNIC moves from the rightmost to leftmost saddle-node bifurcation. The equilibrium solution that lies on the homoclinic orbit is marked by a black dot.
Eventually, these two bifurcations coalesce and the saddle-node of cycles vanishes. However,
the homoclinic orbit to hyperbolic equilibrium still remains and the neuron now begins
repetitive firing with vanishing frequency. This bifurcation is responsible for the transition
from Type I excitability to Type II excitability. Although this is technically the bifurcation
where the neural excitability changes, it does not play a large role in changing the one
parameter bifurcation structure, which is an important part of describing the dynamics
of the neuron. In fact, the excitability of a neuron is often characterized not only by the
frequency of repetitive firing at onset, but also by the one parameter bifurcation diagram.
A generic Type I neuron has the bifurcation diagram seen with gC = 0, Figure 10. A generic
Type II neuron has a bifurcation diagram that looks like Figure 12, the diagram obtained
with gC = 2. As mentioned above, the bifurcation that really separates these two diagrams
is the Bogdanov-Takens bifurcation. For this reason, the Bogdanov-Takens bifurcation has
become more closely associated with the excitability transition than the bifurcation involving
the saddle-node of cycles and homoclinic orbit to hyperbolic equilibrium. Nevertheless, it is
still interesting and important to know the actual bifurcation that is responsible for the birth
of repetitive firing with nonzero frequency and for the transition in neural excitability.
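The signature of Type I excitability, a firing frequency that vanishes at onset, can be illustrated with a minimal sketch that is independent of the present model: the "theta" neuron, a canonical model for the SNIC bifurcation. The equation and parameter values below are illustrative and are not taken from this report.

```python
# Sketch: Type I excitability in the theta neuron, a canonical model for the
# SNIC bifurcation.  For applied current I > 0 the phase rotates periodically
# (each rotation is one spike); the period is an integral over one rotation,
# and the firing frequency vanishes as I -> 0+.
import numpy as np
from scipy.integrate import quad

def firing_period(I):
    """Period of dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * I, for I > 0."""
    integrand = lambda theta: 1.0 / ((1.0 - np.cos(theta)) + I * (1.0 + np.cos(theta)))
    period, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return period

for I in [1.0, 0.1, 0.01, 0.001]:
    print(f"I = {I:6.3f}   frequency = {1.0 / firing_period(I):.4f}")
```

The integral evaluates to pi/sqrt(I), so the frequency scales as sqrt(I) near onset. A Type II neuron instead begins firing at a nonzero frequency, as in the saddle-node of cycles scenario described above.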
5.4.2 The Degenerate Bogdanov-Takens Bifurcation
So far, we have discussed all of the codimension two bifurcations that we have observed and
related them to the dynamics of a neuron. Furthermore, these codimension two bifurcations
occur as the amount of influence the dendritic compartment has on the somatic
compartment is varied. Thus, we have obtained a reasonable understanding of the role
the dendritic compartment plays in the overall dynamics of the neuron. However, there is
still one important parameter related to the dendritic compartment that has not
yet been analyzed. Up to this point, the leakage conductance of the dendrites has been
held constant at gD = 2. Because the dynamics of the dendritic compartment, and thus
the dynamics of the somatic compartment, are dependent on this value, it is important to
determine what role this parameter has on the overall neural dynamics. This can also be
accomplished using bifurcation theory. However, instead of computing a series of two pa-
rameter bifurcation diagrams, we will perform a three parameter continuation of a single
codimension two bifurcation using gC, gD, and Iapp as the active parameters. The bifurcation
chosen for continuation is the Bogdanov-Takens bifurcation, because it
essentially separates Type I neurons from Type II neurons and represents the point where
the neuron is undergoing a large dynamical transition. Furthermore, because the dynamics
on either side of the Bogdanov-Takens bifurcation are relatively well known, we can deduce
the neuron dynamics over a wide range of dendrite related parameters. Figure 22 shows the result of the continuation.
[Figure 22 plot: gD [mS/cm2] versus gC [mS/cm2]; the curve of Bogdanov-Takens bifurcations, with the degenerate Bogdanov-Takens point marked, separates the Type I excitable and Type II excitable regions.]
Figure 22: A projection of a curve of Bogdanov-Takens bifurcations which separates Type I neurons from Type II neurons. On the x-axis is the conductance between the somatic and dendritic compartments and on the y-axis is the leakage conductance of the dendritic compartment.
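Numerical continuation of a Bogdanov-Takens point, as performed for instance by MATCONT [2], amounts to solving an extended "defining system": the equilibrium conditions together with the requirements that the Jacobian have zero trace and zero determinant (a double-zero eigenvalue). As a hedged illustration, using the BT normal form rather than the neuron model, the defining system can be solved directly with a generic root finder:

```python
# Sketch: the defining system for a Bogdanov-Takens point, illustrated on the
# BT normal form  x' = y,  y' = b1 + b2*x + x**2 + x*y  (not the neuron model).
# A BT point is an equilibrium where the Jacobian has a double-zero eigenvalue,
# i.e. both its trace and its determinant vanish.
import numpy as np
from scipy.optimize import fsolve

def defining_system(u):
    x, y, b1, b2 = u
    f = y                               # equilibrium condition for x'
    g = b1 + b2 * x + x**2 + x * y      # equilibrium condition for y'
    trace = x                           # tr J, with J = [[0, 1], [b2 + 2*x + y, x]]
    det = -(b2 + 2.0 * x + y)           # det J
    return [f, g, trace, det]

bt_point = fsolve(defining_system, [0.5, -0.3, 0.2, 0.1])
print(bt_point)  # for this normal form the BT point sits at the origin
```

In a genuine continuation, the two unfolding parameters are replaced by two of the model parameters (here gC, gD, or Iapp) and the curve of solutions is traced as the third parameter varies.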
As mentioned above, the Bogdanov-Takens bifurcation can be thought of as the point
where the neuron is undergoing a transition in excitability. Thus, the projection of the curve
of Bogdanov-Takens bifurcations onto the (gC, gD) plane depicts the relationship between
the excitability of the neuron and the parameters related to the dendritic compartment. For
low values of gD, the conductance between the soma and dendrites must be quite high for
the transition to occur. As gD decreases, a critical value is reached below which the neuron
cannot be Type II excitable for any value of gC. However, for values of gD > 2, the transition occurs
for relatively low values of gC, indicating that perhaps the dendritic leakage conductance can
be used to induce Type II excitability in the neuron.
At one point along the Bogdanov-Takens bifurcation curve seen in Figure 22, the non-
degeneracy conditions are violated and we have a codimension three degenerate Bogdanov-
Takens bifurcation. It arises when the cusp bifurcation and the regular Bogdanov-Takens
bifurcation in the two parameter bifurcation diagram (Figure 16) coalesce as gD is decreased.
This bifurcation is characterized by a vanishing quadratic term in the normal form equations.
Recall that the normal form of the cusp bifurcation has no quadratic term and the condition
for a regular Bogdanov-Takens bifurcation is that both of the quadratic terms are present.
When the two bifurcations coalesce, the quadratic term of the Bogdanov-Takens normal form
is annihilated. To determine which type of the three possible degenerate Bogdanov-Takens
bifurcations occurs, a center manifold reduction is performed and the normal form coeffi-
cients are computed. We find that this degenerate Bogdanov-Takens bifurcation is of the
focus type. Similar to how the normal form of each codimension two bifurcation accounted
for certain codimension one bifurcations seen in the two parameter bifurcation diagram, the
normal form of a codimension three bifurcation can account for certain codimension two
bifurcations. Remarkably, the normal form of the focus type degenerate Bogdanov-Takens
bifurcation includes all of the codimension two bifurcations observed in the two parameter
bifurcation diagram for the neuron. Therefore, the codimension two bifurcations we
have observed, together with their associated neural dynamics, are all related to each other and
organized through the codimension three degenerate Bogdanov-Takens bifurcation.
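For reference, the normal forms involved can be written out explicitly. The following is a sketch based on standard references [8, 9], with generic coefficients a, b, c, d; the coefficient values computed for this model are not reproduced here.

```latex
% Regular Bogdanov-Takens bifurcation: both quadratic coefficients nonzero.
\dot{x} = y, \qquad
\dot{y} = \beta_1 + \beta_2 x + a x^2 + b x y, \qquad a\,b \neq 0.

% Degenerate (codimension three) case: the quadratic coefficient a vanishes,
% and cubic terms enter the unfolding.
\dot{x} = y, \qquad
\dot{y} = \beta_1 + \beta_2 x + \beta_3 y + c x^3 + d x^2 y.
```

The saddle, focus, and elliptic subcases are distinguished by the sign of c and the relative magnitude of d; the precise genericity conditions are given in [8, 9].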
6 Conclusion
In this report, we have analyzed the dynamics of a two compartment neuron model. One
compartment represents the potential difference across the cell membrane of the soma, while
the other compartment is a simple model of the potential difference across the dendritic
structure. Because each compartment has its own potential, we can simulate the propagation
of potential from the dendrites to the soma, a process which occurs in a real neuron but
is neglected in one compartment models. Using the analytical and numerical methods of
bifurcation theory, we have shown that the addition of a separate dendritic compartment
has a large impact on the overall neural dynamics.
The one parameter bifurcation analysis has shown that the influence of a dendritic com-
partment can change the excitability of a neuron, which is a qualitative characterization of
how the neuron responds to an externally applied current. In particular, for a given set of
parameter values, the one compartment neuron will begin repetitive firing with arbitrarily
low firing rates, and when the influence of the second compartment is introduced, repetitive
firing can begin with a nonzero frequency. In this latter case, there is a region of bistability
in which quiescence and repetitive firing coexist as stable states. Consequently,
hysteresis is observed, where the quiescence to repetitive firing transition occurs at a different
value of the applied current depending on whether the applied current is being increased or
decreased.
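The hysteresis mechanism can be sketched with a minimal, model-independent example; the equation and parameter values below are illustrative stand-ins, not the neuron model. A one-dimensional bistable system swept quasi-statically up and then down in its input jumps between branches at two different parameter values.

```python
# Sketch of hysteresis in the bistable system  x' = I + x - x**3  (an
# illustrative stand-in, not the neuron model).  For |I| < 2/(3*sqrt(3)) two
# stable equilibria coexist; sweeping I up and then down, the jump between
# branches happens at different values of I.
import numpy as np

def sweep(I_values, x0, dt=0.01, steps=5000):
    """Quasi-static sweep: relax x' = I + x - x**3 at each value of I."""
    x, trace = x0, []
    for I in I_values:
        for _ in range(steps):
            x += dt * (I + x - x**3)    # forward Euler relaxation
        trace.append(x)
    return np.array(trace)

I_grid = np.linspace(-1.0, 1.0, 201)
x_up = sweep(I_grid, x0=-1.0)               # increasing sweep, start on lower branch
x_down = sweep(I_grid[::-1], x0=1.0)[::-1]  # decreasing sweep, start on upper branch

I_jump_up = I_grid[np.argmax(x_up > 0)]      # where the upward jump occurs
I_jump_down = I_grid[np.argmax(x_down > 0)]  # the state stays high much longer
print(I_jump_up, I_jump_down)  # the two transition values differ: hysteresis
```

The saddle-node bifurcations of this toy system sit at I = ±2/(3√3) ≈ ±0.385, so the two sweeps jump at roughly those two distinct values; in the neuron the same role is played by the saddle-node of cycles and the bifurcation that terminates repetitive firing.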
The origin of these dynamics can be explained by codimension two bifurcations and
their normal forms. We found that there are two main codimension two bifurcations that
characterize the important dynamics of the neuron. One is the generalized Hopf bifurcation.
From this bifurcation emerges a family of cycles that leads to a saddle-node of cycles. In
addition, the generalized Hopf gives rise to the region of bistability that is observed in
the one parameter bifurcation diagrams. More important, however, is the detection of
the Bogdanov-Takens bifurcation which, in general, corresponds to the point where the neuron
begins to change its excitability.
By performing a numerical continuation of the Bogdanov-Takens bifurcation, we have
determined the relationship between the excitability of a neuron and the dendritic compart-
ment. We have found that the amount of coupling needed between the somatic compartment
and dendritic compartment for the excitability transition to occur has a large dependence on
the value of the dendritic leakage conductance. If the leakage conductance is too low there
is never an excitability transition in the neuron. However, once the leakage conductance is
larger than this critical value, the transition can occur and the amount of coupling that is
required can be greatly reduced by a further increase in the leakage conductance. Thus, it
appears that the value of the dendritic leakage conductance is critical in determining whether
or not an excitability transition occurs as the coupling between the somatic compartment
and dendritic compartment is increased. Furthermore, the leakage conductance can be used
to induce the excitability transition in a neuron if it does not already exist.
In addition to providing insight into the excitability of a neuron, the curve of Bogdanov-
Takens bifurcations also reveals the existence of a codimension three degenerate Bogdanov-Takens
bifurcation. By computing the equations on the center manifold at this point and determining
the normal form, we have found that the normal form of this bifurcation includes all of
the codimension two bifurcations and the dynamics we have observed in the neuron model.
Thus, the dynamics of the neuron that result from the addition of the dendritic compartment
are characterized and organized about this single bifurcation of codimension three.
References
[1] Peter Dayan and L. F. Abbott. Theoretical Neuroscience. MIT Press, 2005.
[2] A. Dhooge, W. Govaerts, and Yu. A. Kuznetsov. MATCONT: A MATLAB package for
numerical bifurcation analysis of ODEs. ACM Transactions on Mathematical Software,
2003.
[3] E. J. Doedel and J. P. Kernevez. Software for Continuation Problems in Ordinary
Differential Equations with Applications. Caltech Preprint.
[4] Willy J. F. Govaerts. Numerical Methods for Bifurcations of Dynamical Equilibria.
Society for Industrial and Applied Mathematics, 2000.
[5] A. L. Hodgkin. The local electric changes associated with repetitive action in a non-
medullated axon. Journal of Physiology, 1948.
[6] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and
its application to conduction and excitation in nerve. Journal of Physiology, 1952.
[7] James Keener and James Sneyd. Mathematical Physiology. Springer Science+Business
Media Inc., 1998.
[8] Yu. A. Kuznetsov. Elements of Applied Bifurcation Theory. Springer-Verlag, NY, 2nd
edition, 1998.
[9] Yu. A. Kuznetsov. Practical computation of normal forms on center manifolds at degen-
erate Bogdanov-Takens bifurcations. International Journal of Bifurcation and Chaos,
2004.
[10] John Rinzel and Bard Ermentrout. Analysis of Neural Excitability and Oscillations,
chapter 7, pages 251–289.
[11] Xiao-Jing Wang and Gyorgy Buzsaki. Gamma oscillation by synaptic inhibition in a
hippocampal interneuronal network model. The Journal of Neuroscience, 1996.
[12] S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer-
Verlag, NY, 1990.