Research Portfolio

Yixin Guo

Department of Mathematics

Drexel University

Contents

1. Curriculum Vitae
2. Research Statements*
3. Citation list of journal articles
4. Journal articles*

*All papers and the research statement contain multiple color figures. To view the color figures, please use the pdf files on my webpage www.math.drexel.edu/~yixin


Curriculum Vitae

PERSONAL DATA

Name: Yixin Guo
Current Position: Assistant Professor
Office Address: Department of Mathematics, Drexel University, Philadelphia, PA 19104
Telephone: (215)-895-2581 (office)
Fax: (215)-895-1582 2212
Electronic Mail: [email protected]

ACADEMIC BACKGROUND

Ph.D., Mathematics, August 2003

University of Pittsburgh, Pittsburgh, Pennsylvania, USA. Dissertation Title: Existence and Stability of Standing Pulses in Neural Networks. Ph. D. advisor: Carson Chow

M.A., Mathematics, April 2000
University of Pittsburgh, Pittsburgh, Pennsylvania, USA.

B.S., Mathematics, July 1990
Heilongjiang University, Harbin, Heilongjiang Province, P. R. China.

Summer Program on Methods in Computational Neuroscience
Marine Biological Laboratory (MBL), Woods Hole, MA, July 31 – August 27, 2005.

RESEARCH INTERESTS

• Computational Neuroscience, Mathematical Biology. Dynamical Systems. Ordinary and Partial Differential Equations.

RESEARCH GRANTS

• Yixin Guo, National Science Foundation, DMS-1226180, Closed-loop Deep Brain Stimulation, Synchrony breaking and Chimera State, $164,996, September 2012 – August 2015.

PUBLICATIONS (PEER-REVIEWED JOURNALS)

• Yixin Guo, Existence and Stability of Traveling Fronts in a Lateral Inhibition Neural Network, to appear in SIAM Journal on Applied Dynamical Systems, 2012.

• Yixin Guo and Jonathan Rubin, Multi-site Stimulation of Subthalamic Nucleus Diminishes Thalamocortical Relay Error in a Biophysical Network Model. Neural Networks, Elsevier. Volume 24, Issue 6, August 2011, Pages 602-616. Special Issue: Neurocomputational Models of Brain Disorders.

• Guo Y, Park C, Rong M, Worth RM, Rubchinsky LL. Modulation of thalamocortical relay by basal ganglia in Parkinson’s disease and dystonia. BMC Neuroscience 2011, 12(Suppl 1):P275.

• Yang D G, Guo Y. Entrainment of a thalamocortical neuron to periodic sensorimotor signals. BMC Neuroscience 2011, 12(Suppl 1):P135.

• Yixin Guo, Jonathan Rubin, Cameron McIntyre and David Terman. Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model, Journal of Neurophysiology, 99, 1477-1492, January 2, 2008.

• Yixin Guo and Carson C. Chow. Existence and Stability of Standing Pulses in Neural Networks: I Existence, SIAM Journal on Applied Dynamical Systems Vol 4, 217-248, 2005.

• Yixin Guo and Carson C. Chow. Existence and Stability of Standing Pulses in Neural Networks: II Stability, SIAM Journal on Applied Dynamical Systems Vol 4, 249-281, 2005.

• Yixin Guo, Wenbo Qu and Shuyan Sun. Convergence Sequences in Local Convex Spaces (in Chinese), Daqing Petroleum Institute Journal Vol. 20, No. 2, June 1996.


PUBLICATIONS (SUBMITTED AND IN PREPARATION)

• Dennis Guang Yang and Yixin Guo. Localized states in 1-D homogeneous neural field models with general coupling and firing rate functions. Submitted, 2012.

• Yixin Guo, Choongseok Park, Min Rong, Robert M. Worth, Leonid L. Rubchinsky, Thalamocortical relay modulation by basal ganglia in Parkinson’s disease and dystonia. Submitted, 2012.

• Yixin Guo and Dennis Guang Yang, Entrainment of a thalamocortical neuron to periodic sensorimotor signals. In preparation, 2012.

PROFESSIONAL EXPERIENCE

• Assistant Professor, Department of Mathematics, Drexel University, Philadelphia, PA, April 2006 – Present. Maternity leave (tenure clock stopped from September 2009 to August 2010).

• Visiting Assistant Professor, Department of Mathematics, Harvey Mudd College, CA, July 2005 – March 2006.

• Postdoctoral Researcher at Mathematical Biosciences Institute (MBI), and the Laboratory of Research on Attention and Rhythmicity at the Department of Psychology, The Ohio State University, OH, USA. September 2004 –June 2005. Construction of mathematical models on Parkinson’s disease and Deep Brain Stimulation. Construction of mathematical models on timing and rhythm in attention and memory.

• Visiting Assistant Professor, Department of Mathematics, The Ohio State University, OH, USA. Postdoctoral researcher at Mathematical Biosciences Institute (MBI), The Ohio State University, OH, USA. September 2003-August 2004. Teaching: designed and instructed Differential Equations. Fall 2003 and Spring 2004. Research: constructed mathematical models on Parkinson’s disease and Deep Brain Stimulation.

• Department of Mathematics, University of Pittsburgh, PA, USA, August 1997-August 2003.
  o Research Assistant, May 2000-August 2003.
  o Teaching Assistant and Instructor, August 1997-May 2000.

• Department of Mathematics, Heilongjiang University, P.R. China, July 1990-August 1997.
  o Lecturer, July 1995-August 1997.
  o Instructor, undergraduate advisor, July 1990-May 1995.

AWARDS AND HONORS

• Yixin Guo, Drexel University, Antelo Devereux award for Young Faculty, Modeling Parkinson’s Disease and Deep Brain Stimulation, $10,000, 2008.

• Financial Support from CIRM (Centre International de Rencontres Mathématiques) for local expenses to give a talk at the Workshop on Spatio-temporal evolution equations and neural fields. October 2011.

• Drexel University Faculty International Travel Award of $700 to attend the Twentieth Annual Computational Neuroscience Meeting, Stockholm, Sweden, July 23-28, 2011.

• Financial support of $1800 from the International Center of Mathematical Sciences (ICMS) at University of Edinburgh for giving an invited talk at the Mathematical Neuroscience Workshop, April 11-13, 2011, Edinburgh, UK.

• Funding for summer program on Methods in Computational Neuroscience at the Marine Biological Laboratory, Woods Hole, MA, July 31 – August 27, 2005.


INVITED TALKS AT CONFERENCES AND SYMPOSIA

• Invited talk at The 9th AIMS Conference on Dynamical Systems, Differential Equations and Applications, Orlando, FL, July 1-5, 2012.

• Invited talk at the Workshop on Spatio-temporal evolution equations and neural fields. Funded by CIRM (Centre International de Rencontres Mathématiques) for accommodation. October 24-28, 2011, Marseille, France.

• Invited talk at the Mathematical Neuroscience Workshop at ICMS, April 11-13, 2011, Edinburgh, UK. Fully funded by ICMS (The International Center for Mathematical Sciences).

• Invited talk at the 2nd International Conference on Cognitive Neurodynamics, November 16-18, 2009. Hangzhou, China.

• Frontiers in Applied and Computational Mathematics, NJ, June 5-7, 2009.

• Invited talk, AIMS International Conference on Dynamical Systems, Differential Equations and Applications, May 18-21, 2008, University of Texas at Arlington, TX.

• Invited talk, International Conference on Cognitive Neurodynamics, November 17-21, 2007, Shanghai, P. R. China.

• Invited Talk, SIAM Conference on Applications of Dynamical Systems (DS07), May 28-June 1, 2007, Snowbird, Utah.

• Invited talk, The Annual Computational Neuroscience Meeting, Neuronal Patterns of Parkinson's Disease and Deep Brain Stimulation, Madison, WI, July 21, 2005.

• Invited talk, AMS 2004 Fall Eastern Section Meeting, Pittsburgh, PA, November 6-8, 2004.

INVITED COLLOQUIA AND SEMINAR TALKS AT UNIVERSITIES

• Colloquium talk on Standing Patterns of firing rate models and Working Memory, Lehigh University, Department of Mathematics, October 29, 2008.

• Multi-site Local Field Potential stimulation to Restore Thalamocortical Relay fidelity, Mathematical Neuroscience Seminar, Center for Mathematical Biosciences, Indiana University Purdue University Indianapolis, September 12, 2008.

• Modeling Parkinson’s disease and Brain Stimulations, HRL lab, Malibu, CA, September 4, 2008.

• Dean’s seminar, Drexel University, October 3, 2007, Philadelphia, PA.

• Invited talk, New Jersey Institute of Technology, April 17, 2007, Newark, NJ.

• Center for Neurodegenerative Disease Research, University of Pennsylvania, invited talk, Modeling Parkinson’s Disease and Deep Brain Stimulation, Philadelphia, PA, August 3, 2006.

• The Department of Neurobiology and Anatomy, Drexel University, invited talk, Modeling Parkinson’s Disease and Deep Brain Stimulation, Philadelphia, PA, June 9, 2006.

• Invited talk, SIAM chapter at the Department of Mathematics, Drexel University, Modeling Neural Circuits, Philadelphia, PA, April 18, 2006.

• Colloquium talk on Modeling Parkinson's Disease and Deep Brain Stimulation, Department of Mathematics, Drexel University, April 19, 2005.

• Colloquium talk on Modeling Parkinson's Disease and Deep Brain Stimulation, Harvey Mudd College, Department of Mathematics, February 15, 2005.

• Research lectures for undergraduate students: I. Working Memory and Standing Pulses; II. Parkinson's Disease and Deep Brain Stimulation, Harvey Mudd College, February 16, 2005.

• Mathematical Biosciences Institute (MBI) Postdoctoral Seminar, research talk, April 15, 2004.

• University of Wisconsin, Green Bay, invited talk, April 12, 2004.

• Tufts University, Department of Mathematics, invited talk, March 22, 2004.

• Applied Analysis seminar, University of Pittsburgh, Department of Mathematics, research talk, September 2001.


CONTRIBUTED TALKS AND POSTERS AT CONFERENCES AND SYMPOSIA

• Twentieth Annual Computational Neuroscience Meeting, Stockholm, Sweden, July 23-28, 2011.

• SIAM Conference on Applications of Dynamical Systems, Snowbird, UT, May 22-26, 2011.

• The 40th Annual Meeting of the Society for Neuroscience, San Diego, CA, November 12-16, 2010.

• The Annual Computational Neuroscience Meeting, poster, Madison, WI, July 20, 2005.

• The Annual Meeting of the Society for Mathematical Biology, contributed talk, Ann Arbor, MI, July 25-28, 2004.

• SIAM Conference on Life Science, Portland, OR, poster, July 11-14, 2004.

• SIAM Conference on Applications of Dynamical Systems, Snowbird, UT, poster, May 27-31, 2003.

• Gordon Research Conference: Theoretical Biology and Biomathematics, Tilton, NH, poster, June 9-14, 2002.

• The First SIAM Conference on Life Science, Boston, MA, poster, March 6-8, 2002.

• Science 2001: A Research Odyssey, Pittsburgh, PA, poster, September 12-14, 2001.

REFEREE

• NSF panel review, 2008.

• Peer-reviewed journals: Journal of Mathematical Biology, SIAM Journal on Applied Mathematics, Journal of Computational Neuroscience, Physica D, SIAM Journal on Applied Dynamical Systems, Journal of Dynamics and Differential Equations, Dynamics of Partial Differential Equations.

COURSES TAUGHT

• Drexel University: Graduate level: Ordinary Differential Equations, Mathematical Neuroscience. Undergraduate level: Partial Differential Equations, Ordinary Differential Equations, Calculus I, Calculus III, Linear Algebra, Fundamentals of Mathematics, Precalculus, Probability and Statistics I, and Numerical Analysis II.

• Harvey Mudd College: Mathematical Biology.

• The Ohio State University: Ordinary Differential Equations.

STUDENT THESIS COMMITTEES

• Linge Bai, Department of Computer Science, Drexel University.

• Amrit Misra, Patrick Ganzer, Marissa Powers (from Karen Moxon Lab in the School of Biomedical Engineering); Walter Hinds, Honghui Zhang (from Joshua Jacobs Lab in the School of Biomedical Engineering).

• Svitlana Zhuravytska, defended on May 26, 2011, Department of Mathematics, Drexel University.

• Amal Aafif, defended on June 27, 2007, Department of Mathematics, Drexel University.

SERVICE

• Drexel University: Judge for CoAS Research Day, April 3, 2012; judge for student posters at the Biomed Talent and Technology Showcase, November 2, 2010; Meet and Greet (College of Arts and Sciences); University Open House (2006 and 2008); Convocation.

• Departmental Committee: Tenure-track Faculty Search Committee (2012-2013); Graduate Committee (2011-2012); Teaching Faculty Evaluation Committee (2010-2011); Graduate Committee (2008-2009); Departmental Computer Committee (2007-2008); Graduate Committee (2006-2007 fall quarter), Faculty Hiring Committee (2006-2007).


RESEARCH STATEMENT

Yixin Guo

I focus on two directions in my research. The first is modeling Parkinson’s disease (PD) and other movement disorders, deep brain stimulation (DBS) and other brain stimulation, moving toward study and application using human data. The second is to investigate the spatiotemporal patterns of the neural field model that are related to working memory and other brain functions.

1 Modeling Parkinson’s disease and other movement disorders, deep brain stimulation and other brain stimulations

1.1 Past research

In the normal state, the thalamocortical (TC) neurons in the thalamus serve to relay excitatory input from sensorimotor cortex while they are targeted by the inhibitory output from the internal segment of the globus pallidus (GPi) in the basal ganglia. In parkinsonian conditions, a TC neuron fails to respond to excitatory cortical inputs faithfully in a one-to-one fashion, namely with one TC voltage spike for each input pulse. The TC cell either fires multiple spikes or no spike at all in response to a single cortical excitatory signal [37, 39]. TC relay failure may be responsible for motor symptoms of Parkinson’s disease or other movement disorders. In the past decade, DBS, delivered through a surgically implanted electrode to the subthalamic nucleus (STN), has become a widely used therapeutic option for the treatment of Parkinson’s disease and other neurological disorders. Although the conventional DBS that delivers an ongoing stream of high-frequency current pulses to the stimulation target has shown remarkable therapeutic success, neither the mechanisms for the effectiveness of DBS nor possible improvements on the drawbacks of the conventional DBS are fully addressed. I have done computational studies investigating the mechanisms underlying the effectiveness of deep brain stimulation (DBS) and improving the conventional DBS to overcome its significant drawbacks. As experimental investigations of DBS mechanisms or development of new stimulation protocols are prohibited on humans, or are too costly to be performed on non-human primates, computational study is the necessary first step toward clinical application.

1.1.1 Investigating DBS working mechanism

In our previous paper [37], we built a data-driven computational model of a TC neuron to probe why the conventional DBS is therapeutically effective. The model of the TC relay neuron is conductance-based, Hodgkin-Huxley type, with multiple ionic currents. We incorporated GPi spike trains recorded from normal control monkeys, and from parkinsonian monkeys with or without DBS, as the source of inhibitory inputs to our model TC neurons. We tested how biologically observed changes in GPi neuronal activity affect TC signal transmission, both in a single model TC cell and in a heterogeneous population of model TC cells. TC relay fidelity was evaluated using either a periodic or a stochastic train of external excitatory stimuli applied to the same model TC cells that receive the recorded inhibitory synaptic inputs from GPi. Our results show that there is a significant decline in the ability of the TC cells to relay the excitatory stimuli when they are exposed to GPi signals recorded under parkinsonian conditions in the absence of DBS, relative to GPi data recorded from normal monkeys. Moreover, relay effectiveness is restored to non-parkinsonian levels by GPi signals recorded under parkinsonian conditions in the presence of therapeutic DBS. Our computational studies show that GPi firing patterns produced in parkinsonian conditions without DBS are, more generally, rhythmic or bursty inhibitory signals with correlations in burst timing across cells. We also found that improvement in TC relay can be achieved either by smearing out the arrival times of correlated, bursty inhibitory GPi signals or by replacing the bursty inhibitory GPi inputs with a tonic, high-frequency pattern; as in parkinsonian conditions with therapeutic DBS, TC relay fidelity may be achieved by the latter.
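For concreteness, the sketch below shows one simple way a relay error index of this kind can be computed from a stimulus train and the TC spike times it evokes. The response window, the treatment of misses versus bursts, and the toy spike trains are illustrative assumptions, not the published definition used in [37].

```python
import numpy as np

def relay_error_index(stim_times, tc_spike_times, window=10.0):
    """Illustrative relay error index: the fraction of excitatory stimuli that are
    NOT answered by exactly one TC spike within `window` ms of stimulus onset."""
    stim_times = np.asarray(stim_times, dtype=float)
    tc_spike_times = np.asarray(tc_spike_times, dtype=float)
    errors = 0
    for t in stim_times:
        # count TC spikes falling in the response window after this stimulus
        n_resp = np.sum((tc_spike_times >= t) & (tc_spike_times < t + window))
        if n_resp != 1:          # a miss (0 spikes) or a burst (>1 spike) counts as an error
            errors += 1
    return errors / len(stim_times)

# toy usage: periodic stimuli every 20 ms; the first response is missed and the
# fourth stimulus evokes a double spike, so 2 of 10 stimuli are relay errors
stim = np.arange(0.0, 200.0, 20.0)
tc = np.sort(np.concatenate([stim[1:] + 2.0, [stim[3] + 4.0]]))
print(relay_error_index(stim, tc))   # -> 0.2
```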

The significant advance in our computational model was to incorporate experimentally recorded GPi firing patterns in the exploration of the mechanism underlying the efficacy of DBS. Even though we adopted a relatively simple TC cell model, our study lays the groundwork for further exploration in network models that consider the interactions of TC cells with the globus pallidus as well as with other brain areas.

1.1.2 Multi-site delayed feedback stimulation (MDFS) using a biophysical model [39]

The conventional DBS has achieved remarkable success in relieving motor symptoms in PD patients. Even though it has several advantages compared with the ablative surgeries pallidotomy (destroying part of GPi) or thalamotomy (destroying part of the thalamus) in the patient’s brain, it also has significant drawbacks:

• “Dumb” stimulation [5]: The conventional DBS relies on an external force determined by parameters such as the type of stimulation (monopolar or bipolar), voltage, frequency (Hz), and pulse width (in ms). Such a form of stimulation is considered “dumb” because the external force is not guided by the changes in the brain’s electrical activity relevant to the disorder being treated.

• Laborious DBS parameter tuning: It may be a laborious and difficult task to tune the DBS parameters to gain optimal treatment efficacy for a given stage of a disease, especially in patients with movement disorders for whom it may take months to see the therapeutic effect of DBS.

• High energy cost and invasive surgery to replace the battery [4]: Replacing the battery of the pulse generator requires surgery that opens the chest wall. Energy efficient stimulation is always desirable to prolong battery life and reduce the number of invasive surgeries.

Multi-site Delayed Feedback Stimulation (MDFS) was first suggested as an alternative to the conventional DBS in [41, 42, 43, 72, 79]. The MDFS protocol suggested in these papers has at least two advantages. First, it is noticeably “smarter” in that it is adjusted to the brain’s own electrical signals. Second, the energy required to administer such stimulation can be maintained at a lower level. Although these earlier works contributed toward an excellent idea, the outcomes of MDFS are greatly limited by the choice of model and the focus of study. First, they all used either non-biophysical phase models or simplified 2-dimensional reduced models that are very limited in describing the neuronal behavior of the basal ganglia. Second, in [42, 41, 43, 72, 79], only a self-coupled excitatory population representative of the excitatory STN neurons in the basal ganglia was considered. This synchronization mechanism is unfaithful to the anatomy [40] of the basal ganglia and the pathology of PD. Furthermore, the computational work in [42, 41, 43, 72, 79] reported a desynchronizing effect on the abnormal synchrony of neuron ensembles, but their MDFS neither breaks the bursting pattern nor reduces the elevated burst rate, which are important characteristics of GPi and STN activity in PD. Most importantly, none of the work in [42, 41, 43, 72, 79] incorporated any criteria to evaluate the downstream neuronal behavior in the thalamus that is relevant to the clinical effectiveness of MDFS.

Rubin and I [39] have recently reported the first work on STN stimulation in the form of MDFS with a biophysically detailed basal ganglia network model based on Rubin and Terman’s model [73]. In MDFS, we calculated the local field potential (LFP) of the stimulation target population and fed back the filtered and delayed LFP signal into the same ensemble through multiple stimulation sites that have different time delays (a schematic sketch of this feedback loop follows the list of issues below). In [39], both the inhibitory populations GPe and GPi and the excitatory population STN are included. See Fig. 1 for the connections between populations. There is no self-coupling among the excitatory STN neurons. Therefore the synchronized bursting clusters in STN are not induced by strong excitatory self-coupling as in previous work [41, 42, 43, 72, 79]. Such a setup is consistent with the basal ganglia anatomy, in which there is no synaptic coupling among excitatory STN neurons [40]. We demonstrated that MDFS applied to the STN population not only breaks the pathological synchrony, but also eliminates the bursting patterns present in STN neurons. The reduction of the average firing rate is a natural result of the burst elimination. We further evaluated the outcome of MDFS by looking at the TC relay fidelity. Our results show that MDFS restores the TC relay ability by desynchronization and burst elimination in a parkinsonian basal ganglia-thalamocortical network. Even though we had some success in this first attempt, the following issues can be further improved:

Figure 1: Neuronal structure in the network model. Arrows labelled with a ‘-’ sign represent inhibitory synaptic connections or inputs. Arrows labelled with a ‘+’ sign are excitatory synaptic connections or inputs. All dashed arcs with arrows represent stimulation currents. The tail of the dashed arc is where the stimulation current is gathered. The head of the dashed arrow points to the stimulation target population. The two blowups show the random topological structures of the stimulation targets, either STN or GPi. STN and GPe (boxes shaded by light grey) with blue arrows represent the STN–GPe loop.

• Only MDFS administered on STN was considered in [39]. In clinical practice, both STN and GPi stimulation are commonly used for PD patients. GPi as the stimulation target should also be considered in further computational study.

• The network in [39] is too small, and the spatial structure of the neurons is too rigid, to be representative of the real basal ganglia circuit. There are only tens of neurons in the basal ganglia network including the GPe, STN, and GPi nuclei (16 of each type). Little is known about the geometric structure of the STN neurons. However, the assumption of perfect symmetry in STN neurons and stimulation sites eliminates many possible variations in the administration of MDFS.

• We have shown that MDFS of STN diminishes TC relay error by desynchronization and burst elimination in [39]. However, we did not investigate the extent to which desynchronization and rate reduction resulted in faithful TC relay. Measures of desynchronization and burst reduction should be developed to evaluate the outcome of MDFS.
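The following minimal sketch illustrates the MDFS feedback loop described above: a crude LFP proxy (the population-average potential) is recorded at every step, delayed by a site-specific amount, and fed back as a stimulation current to the neurons assigned to each site. The single-compartment dynamics, gains, and delays are placeholder assumptions, not the calibrated biophysical network of [39].

```python
import numpy as np

n_neurons, n_sites = 16, 4
dt, t_end = 0.05, 200.0                      # ms
delays = np.array([2.0, 4.0, 6.0, 8.0])      # one feedback delay per stimulation site (ms)
gain = 0.5                                   # feedback strength (illustrative)
site_of = np.arange(n_neurons) % n_sites     # assign each neuron to a stimulation site

n_steps = int(t_end / dt)
v = -65.0 + np.random.randn(n_neurons)       # stand-in membrane potentials (mV)
lfp_hist = np.zeros(n_steps)                 # recorded LFP history used for delayed feedback

for k in range(n_steps):
    lfp_hist[k] = v.mean()                   # crude LFP proxy: population-average potential
    i_stim = np.zeros(n_neurons)
    for s, d in enumerate(delays):           # each site feeds back its own delayed LFP copy
        j = k - int(d / dt)
        if j >= 0:
            i_stim[site_of == s] = -gain * (lfp_hist[j] - lfp_hist[:k + 1].mean())
    # placeholder single-compartment dynamics; the real model would use the
    # conductance-based STN equations of [39] here
    v += dt * (-(v + 65.0) / 10.0 + i_stim + np.random.randn(n_neurons))
```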

1.2 Planned study

Based on our previous work, we would like to pursue four studies given in the following four subsections.

1.2.1 Closed-loop DBS

The goal in this study is to develop energy efficient and “smarter” closed-loop DBS protocols, guided by changes in neuronal activities specific to the disorders being treated. We will first build a large-scale, biologically faithful computational network consisting of hundreds of neurons in each nucleus and a wide range of geometric spatial structures in the stimulation target population. We will then develop and test several novel closed-loop DBS protocols that apply MDFS on various targets in the network based on feedback signals collected within the basal ganglia. We aim to find a plausible MDFS administration with appropriate choices of multiple stimulation sites and parameters. We will develop new quantitative measures for the outcome of MDFS, in addition to the TC error index of the prior work, to quantify desynchronization and burst rate in the stimulated population.

1.2.2 Chimera state

A mathematical chimera is a state in which an array of oscillators splits into two co-existing sub-populations: one coherent and synchronized with a unique frequency, and the other incoherent and desynchronized with distributed frequencies [1, 2, 44, 46, 47, 49, 50, 55, 58, 59, 60, 76, 77]. Such a state was first noticed by Kuramoto and his colleagues [45, 46] while simulating the complex Ginzburg-Landau equation with non-local coupling.

Computational models of uncoupled GPi neurons in the basal ganglia exhibit a wide range of behavior including synchronization, desynchronization, and clustering [39, 73, 78]. The coexistence of synchrony and asynchrony, which is the chimera state, has not been shown. We will study the emergence of chimera states in GPi neurons in the basal ganglia.

We focus on the GPi ensemble for the following reasons. First, GPi is the major output from the basal ganglia directly connected to TC cells in the thalamus. Abnormal synchrony and firing patterns of GPi neurons compromise TC relay to motor cortex, which may be responsible for the motor symptoms in PD and other movement disorders [31, 32, 33]. If we find ways to break the synchrony and/or reduce the burst rate in GPi neurons, we can restore TC relay ability. Second, finding the chimera state in GPi populations in a parkinsonian network could be a significant breakthrough in developing systematic methods for synchrony breaking and burst rate reduction, as chimera states are considered a natural transition between synchrony and asynchrony [56, 58]. The important factors in the emergence of chimera states may help to develop mild and effective DBS in movement disorders characterized by abnormal neuronal firing patterns and pathological synchrony.

We hypothesize that the initial states of the GPi oscillators are the key to breaking the full synchrony, to eliminating the uniform bursting pattern, and, consequently, to finding the chimera states in the ensemble. As Motter [56] and Omel’chenko et al. [58] suggested, in an ensemble of coupled oscillators, synchronization and desynchronization depend on the intrinsic properties, the coupling structure of the oscillators, and the initial state of each oscillator. The synchrony in the GPi population is neither due to the coupling between GPi oscillators (there is no synaptic coupling among GPi neurons) nor due to their intrinsic properties. The GPi synchrony is entrained by the upstream structure, the STN-GPe loop, in which both STN and GPe are in synchronous bursting clusters. The main focus of our investigation will be on the initial states, which refer to the initial conditions of the five nonlinear differential equations for the GPi oscillators.

Our goal is to pin down the initial conditions of GPi neurons that can break the full synchrony by settling GPi oscillators into different stable limit cycles while maintaining the parkinsonian STN-GPe common input to GPi. We will reduce the 5-D GPi oscillator to a 2-D map that can assist us in estimating such initial conditions. The fixed points of the 2-D map correspond to periodic solutions of the original 5-D system. Using fixed points of the 2-D map, we will identify stable limit cycles of different periods, stable limit cycles with various numbers of spikes in each cycle, and stable subthreshold oscillations. These different scenarios correspond to periodic patterns in which a GPi neuron fires a single spike, double spikes (triplets or even more spikes), or a subthreshold spike during each period. We will use the fixed points of the 2-D map to estimate the initial conditions of the original 5-D system with which we will be able to find all the different types of coexisting periodic patterns.

We propose a two-step approach to find chimera states in the synchronous GPi ensemble. We will split the whole ensemble of N neurons into two subgroups with P and Q neurons, where P + Q = N. First, we break the synchrony and eliminate uniform bursts in one subgroup, say P, by making its neurons settle in different limit cycles. Second, for a subpopulation of P that is synchronously settled in the same limit cycle, we will apply the localized desynchronization method of Danzl et al. [16], which will effectively desynchronize this subpopulation of P. We may repeatedly apply the second step to all synchronous subpopulations in P. Eventually, neurons in group P will become asynchronous with various periodic firing patterns. We may further make a connection with the study of the relation between the TC relay error index EI, the synchrony level R, and the burst rate rb. We can choose P to reach the levels of R and rb necessary for a TC relay error lower than that in the parkinsonian state. As we vary P, we may be able to provide theoretical explanations of the transition between normal and parkinsonian states through changes in synchrony and burst rate.
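As an illustration of the quantities involved, the sketch below gives one possible operational definition of a synchrony level R (a Kuramoto-style order parameter built from spike phases) and a burst rate rb (bursts per second detected from short inter-spike intervals). These specific definitions are assumptions made for illustration; the measures adopted in the planned study may differ.

```python
import numpy as np

def synchrony_R(spike_trains, t):
    """Kuramoto-style order parameter: each neuron's phase advances linearly
    between its consecutive spikes; R = |mean of exp(i*phase)| at time t."""
    phases = []
    for spikes in spike_trains:
        spikes = np.asarray(spikes, dtype=float)
        k = np.searchsorted(spikes, t) - 1
        if 0 <= k < len(spikes) - 1:
            phases.append(2 * np.pi * (t - spikes[k]) / (spikes[k + 1] - spikes[k]))
    if not phases:
        return 0.0
    return float(np.abs(np.mean(np.exp(1j * np.array(phases)))))

def burst_rate(spikes, t_total, isi_thresh=10.0):
    """Bursts per second: count groups of spikes separated by inter-spike
    intervals shorter than isi_thresh (ms); t_total is the recording length in ms."""
    isi = np.diff(np.asarray(spikes, dtype=float))
    if isi.size == 0:
        return 0.0
    in_burst = isi < isi_thresh
    # a burst starts wherever a short ISI follows a long one (or the train start)
    n_bursts = np.sum(in_burst & ~np.r_[False, in_burst[:-1]])
    return 1000.0 * n_bursts / t_total
```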

Corresponding to the two steps in finding chimera states, we may suggest a new DBS protocol that can administer two types of stimulation signals. One signal is a shock wave that resets the initial states of a subgroup of GPi neurons into different stable states. The second signal consists of mild phase-correcting impulses with minimal energy through charge-balanced optimization [16].

The planned studies given in sections 1.2.1 and 1.2.2 will be funded by NSF, DMS from September 2012 to August 2015. I will recruit one or two Ph.D. students partially funded by this grant to work on these two projects.

1.2.3 Study TC relay using human patient data

One limitation of our previous data-driven TC relay model [37] is the lack of human data. Supported by the Antelo Devereux Award for Young Faculty at Drexel University, I have formed a collaboration with medical doctor Robert Worth and Leonid Rubchinsky’s research group at Indiana University Purdue University Indianapolis (IUPUI). My collaborators provided me with human GPi data from Parkinson’s and dystonic patients. Our goal is to compare the relay responses of TC neurons under GPi inhibition from both dystonia and Parkinson’s patients.

Dystonia is a widespread neurological disorder characterized by sustained muscle contractions, involuntary repetitive movements and abnormal posture. Although the exact pathophysiological mechanism remains unknown, dystonia is also marked by the presence of oscillatory activity that may affect the efficiency of TC relay and thus contribute to symptoms in the dystonic condition.

We use a computational model of thalamocortical relay, modulated by real data recorded in the GPi of parkinsonian and dystonic patients, to explore the differences and similarities between parkinsonian and dystonic thalamocortical relay. The use of real pallidal recordings in the computational model may be a substantial advantage. The real data will allow us to capture the response of thalamocortical relay not only to bursting in a specified frequency range, but to real pallidal activity with its specific complex temporal structure.

1.2.4 Map reduction of the TC relay model

The goal of this study is to find all the dynamical states of TC relay responses to bursting GPi inhibition of various burst durations and inter-burst intervals. This is joint work with postdoc Dennis Guang Yang. We will study the 3D conductance-based model of the single thalamocortical (TC) neuron given in [37] in response to sensorimotor signals. In particular, we focus on the entrainment of the system to periodic signals that alternate between ‘on’ and ‘off’ states lasting for times Ton and Toff, respectively. By exploiting invariant sets of the system and their associated invariant fiber bundles that foliate the phase space, we reduce the 3D Poincaré map to the composition of two 2D maps and also simplify the two components of the 2D maps to a uniform shift and a uniform decay. Then, based on these 2D maps, we analyze the bifurcations of the entrained limit cycles as the parameters Ton and Toff vary.
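For concreteness, the periodic 'on'/'off' sensorimotor drive parameterized by Ton and Toff can be generated as in the short sketch below; the amplitude and durations shown are illustrative placeholders rather than values from [37].

```python
import numpy as np

def on_off_signal(t, t_on=5.0, t_off=20.0, amplitude=1.0):
    """Periodic excitatory drive: 'on' for t_on ms, then 'off' for t_off ms."""
    period = t_on + t_off
    return amplitude * ((np.asarray(t, dtype=float) % period) < t_on).astype(float)

t = np.arange(0.0, 100.0, 0.1)     # ms
s = on_off_signal(t)               # drive to be applied to the TC relay model
```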


2 Spatiotemporal patterns of the neural field model

The second direction of my research concerns spatiotemporal patterns of the following neural field model, also called the firing rate model, for neural networks with non-saturating gain and various network architectures.

\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{-\infty}^{\infty} w(x-y)\, f(u(y,t))\, dy, \qquad (1)

where u(x, t) is the synaptic input to neurons located at position x ∈ (−∞,∞) at time t ≥ 0, and it represents the level of excitation or amount of input to a neural element. The coupling function w(x) determines the connections between neurons.

Our primary goal is to investigate the existence and stability of standing and traveling patterns of (1). The long-term goal is to explore thoroughly the dynamics of the firing rate model, probe the transition between standing and traveling patterns, and map out the bifurcations of spatiotemporal patterns.

2.1 Previous results

2.1.1 Standing pulses

• Existence and stability of single-bump standing pulses

Experiments on delayed response tasks find that a specific set of neurons in the prefrontal cortex become activated by a transient visual cue. They fire at a rate above their spontaneous levels while the memory of the cue location is being held in mind for several seconds and then return to baseline levels after the memory is no longer needed [7, 8, 12, 25, 26, 27, 28, 29, 69, 71]. How are these neurons able to persist in an active state during the working memory period without external stimulus? Previous work probed this question using one-population firing rate models with saturating gain functions, which imply that neurons start to fire when their inputs exceed threshold and saturate to their maximum rate quickly [3, 15, 64, 63, 66, 84]. However, experimental evidence has shown that persistently active neurons fire at rates far below their saturated maximum and that the firing rate increases linearly with input [9, 10, 11, 12, 13, 26, 30, 53, 54, 61, 70, 71, 81, 82]. To resolve the conflict between the analytical studies and the experimental observations, we have fully analyzed the existence and stability of 1-bump standing pulses of the firing rate model (1) with the non-saturating piecewise linear gain (2) and lateral inhibition connectivity (3) [34, 35, 36].

f(u) = (\alpha(u - u_T) + 1)\,\Theta(u - u_T), \qquad (2)

where \Theta is the Heaviside step function, \alpha is the gain, and u_T is the threshold.

w(x) = Ae^{-a|x|} - e^{-|x|}, \quad \text{where } a, A > 1. \qquad (3)

A standing pulse solution, the so-called N-bump (N ≥ 1) solution u(x) of (1), satisfies the following equilibrium equation of the dynamical system (1):

u(x) = \int_{-\infty}^{\infty} w(x-y)\, f(u(y))\, dy \quad \text{for all } x \in \mathbb{R}. \qquad (4)

In [35], we showed the existence of 1-bump standing pulses of (4). We applied the Fourier transform to decompose the convolution that appears in the equilibrium equation (4) and to obtain a fourth order ordinary differential equation (ODE) including singular terms arising from the discontinuity in the gain function. A one-bump solution of (4) corresponds to a homoclinic orbit of the boundary value problem of the ODE. From the fourth order ODE, we constructed different 1-bump solutions. For a fixed gain α and synaptic coupling, we found two one-bump solutions, a “large” one that is tall and wide and a “small” one that is short and narrow. In [36], we proved that the large 1-bump standing pulse is stable, and the small one is unstable. We also found that the firing rate network has stable 1-bump standing pulses when inhibition dominates excitation and the gain is not too large.
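A minimal numerical sketch of this setting is given below: the model (1) with the piecewise linear gain (2) and lateral inhibition coupling (3) is discretized on a truncated domain, integrated forward in time from a localized initial profile, and the residual of the equilibrium equation (4) is checked at the end. The parameter values are illustrative, not the ones analyzed in [35, 36]; depending on them the network settles either to the rest state u ≡ 0 or to a standing bump, both of which satisfy (4).

```python
import numpy as np

L, N = 20.0, 801
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
A, a = 2.0, 2.0                                    # coupling parameters, A, a > 1 (illustrative)
alpha, uT = 0.5, 0.3                               # gain and threshold (illustrative)

# kernel matrix so that (w * f(u))(x_i) ~ sum_j W[i, j] f(u_j)
W = (A * np.exp(-a * np.abs(x[:, None] - x[None, :]))
     - np.exp(-np.abs(x[:, None] - x[None, :]))) * dx
f = lambda u: (alpha * (u - uT) + 1.0) * (u > uT)  # non-saturating piecewise linear gain (2)

u = 1.5 * np.exp(-x ** 2)                          # localized initial profile
dt = 0.05
for _ in range(4000):                              # forward Euler integration of (1)
    u += dt * (-u + W @ f(u))

residual = np.max(np.abs(u - W @ f(u)))            # how closely the final state solves (4)
print(f"max residual of equilibrium equation (4): {residual:.2e}")
```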


• Multi-bump standing pulses

Multi-bump solutions (N ≥ 2) of (4) have been studied mainly using numerical techniques [15, 21, 48]. The common approach of these previous works is to transform (4) into an equivalent ODE and use numerical continuation to discover a rich family of N-bump solutions and their bifurcations, although their ODEs may differ due to their choices of coupling and gain functions. As Laing et al. pointed out in [48], a rigorous proof of multi-bump solutions of the equivalent ODE is already a challenging problem. A rigorous proof of the existence of multi-bump solutions of the integral equation (4) with nonspecific coupling and gain functions is an even more challenging task.

More recently, Faugeras et al. have studied the local and global structure of stationary solutions using a very general coupling function with mild hypotheses, such as square integrability, and a smooth sigmoidal gain that is infinitely differentiable [19, 20, 80]. While their functional analysis approach did open up a new direction to study the neural field model, they deviated from the classic Amari model (4) by relaxing the infinite domain of the neural field model to a connected and compact domain that is a subset of R [20, 80]. Their methods and results are not applicable to (4) with a non-compact infinite domain.

The objective of the manuscript [83] is to establish the existence of stationary multi-bump solutions of (4) using even, bounded and Lipschitz continuous coupling functions and a general class of gain functions. Our strategy involves building a nonlinear map based on Newton’s method, proving this map is a contraction in an appropriate neighborhood centered at a special function which we call a reference solution, and showing that a fixed point of the nonlinear map is a stationary multi-bump solution of (4).

Our work advances previous results on multi-bump solutions [15, 21, 48] in the following aspects. First, we prove the existence of multi-bump solutions of (4) with much more general kernel and gain functions, whereas all previous multi-bump solutions of (4) were obtained numerically for specific kernels and gain functions. Secondly, we make a connection between our rigorous proof and a multi-bump solution computed numerically in [48], named a reference solution in our proof. We use a reference solution as the center of the neighborhood where the contraction converges to a unique fixed point that is a multi-bump solution of (4) with general coupling and gain functions.

2.1.2 Traveling fronts in a lateral inhibition network

The firing rate model (1) has rich dynamics. When the stability of a standing pulse is lost, it is expected that there is a transition from standing patterns to traveling waves, such as traveling fronts or traveling pulses. Neuronal waves have been observed in in vitro experiments and in sensory processing such as visual [6, 67, 68, 75], somatosensory [22, 57, 62] and olfactory [23, 24, 51, 52]. Neurological disorders such as epilepsy are also characterized by propagating activity across the cortex [14].

Inspired by Ermentrout and McLeod’s and Zhang’s previous studies on traveling fronts [18, 84], we have investigated the existence and stability of traveling fronts of the firing rate model (1). We extend their work in two aspects. First, we show the existence of non-monotonic traveling front solutions of (1) with the ‘Mexican hat’ type of lateral inhibition coupling and non-saturating piecewise linear gain, whereas Ermentrout and McLeod [18] and Zhang [84] considered only excitatory coupling functions. Second, we extend Zhang’s approach using the integral Evans function to lateral inhibition networks with the Heaviside gain. We show that there is no eigenvalue with positive real part for traveling fronts existing in the parameter ranges we consider. Since Zhang’s integral Evans function approach cannot be extended to the stability analysis of traveling fronts with non-zero gain, we further investigate numerically the stability of the lateral inhibition network with non-zero gain. Results in this study will appear in the SIAM Journal on Applied Dynamical Systems [38].

2.2 Future plan

I plan to pursue the following series of studies on the firing rate model of neural networks.


2.2.1 Stability of traveling fronts of non-zero gain

We did not analyze the stability of traveling front solutions of (1) with non-zero gain in [38]. Since we can no longer apply Zhang’s integral Evans function approach, we will develop a different analytical approach to handle the linear stability analysis for the piecewise linear gain function with non-zero gain. It is expected that both the analytic derivation and the numerical computation in finding the essential spectrum and point spectrum are significantly more complicated compared with the integral Evans function approach [84].

2.2.2 Asymmetric multi-bump standing pulses

Dennis Guang Yang and I have developed a method of geometric construction of N-bump standing pulses (N ≥ 3) of (4) with the ‘Mexican hat’ coupling (3) and piecewise linear gain (2) using the same 4-dimensional ODE system given in [35]. We are interested in a different method of constructing multi-bump standing pulses. We could use the same approach as in [35] to construct N-bump solutions with N ≥ 3. However, the process would be tedious, and it is more difficult to solve the system of algebraic equations needed to construct an N-bump, especially for asymmetric standing pulses, because the number of equations grows quickly as the number of bumps increases. Moreover, it is difficult to show the coexistence of N-bump solutions (N = 1, 2, 3, ...).

In the geometric construction, we build a map F going from the local unstable manifold of the zero equilibrium state to its stable manifold (shown in Figure 2).

Figure 2: Construction of map F. (The figure shows the interface Σ, where u = u_T, the level set L of the conserved energy, the stable and unstable manifolds W^s and W^u of the zero equilibrium, the eigendirections ξ_2 (λ = a), ξ_3 (λ = −1) and ξ_4 (λ = −a), and the intersection points P^u_1, P^s_1 and P^s_2.)

The 3-D interface Σ in Figure 2 is where u = u_T. The colored surface in Figure 2 is the intersection of Σ and L, where L is the level set of the conserved energy. The global stable and unstable manifolds of the zero fixed point, W^s and W^u, intersect transversally in Σ ∪ L. Then we can pin down the point where W^s and W^u meet each other. Our preliminary study shows that this geometric construction not only gives us all the symmetric and asymmetric N-bump standing pulses but also provides a clear view of how they coexist. Interestingly, there is a horseshoe structure when the region R is stretched and folded under the map F and its inverse F^{-1}, where R is the region enclosed by the projections of P^u_1, the first intersection of W^u with Σ, and P^s_1, the first intersection of W^s with Σ, onto the 2-D plane in the interface Σ. See Figure 3 for an example of 3-bump pulses.

Based on our preliminary investigation, we would like to do a thorough study and finish all the proofs of the geometric construction and the horseshoe structure.


Figure 3: The coexistence of symmetric and asymmetric 3-bump standing pulses occurs at the intersections of the blue curve and the red curve. In total there are 2^3 = 8 symmetric and asymmetric 3-bump standing pulses: 4 are symmetric 3-bump pulses and the other 4 are asymmetric ones. (The figure shows the images F(a), F(b), F(c), F(d) and F(R) in the plane spanned by the eigendirections ξ_2 (λ = a) and ξ_3 (λ = −1).)

2.2.3 Traveling Pulses in an Excitatory Network with Negative Feedback

Neurons in propagating pulses are recruited because of the excitatory interactions. Nevertheless, the excited state of cells in the brain does not persist indefinitely. Inhibition can disrupt any type of wave propagation. In the absence of synaptic inhibition, intrinsic ionic processes within the cell such as adaptation can also repolarize the network. Experimental evidence has shown that inhibition is necessary for the initiation and termination of traveling pulses [66]. Moreover, inhomogeneities in velocity in different cortical areas were discovered experimentally. At each recruitment cycle, a group of inhibitory neurons is excited and, in turn, inhibits a new group of excitatory cells. After these excitatory neurons rebound from hyperpolarization, they recruit a new group of cells. To fully understand the circuitry mechanism for spatiotemporal traveling pulses, we must include inhibition in the network.

Amari considered a neural field consisting of excitatory and inhibitory cells [3]. Besides the recurrent connection within the excitatory group, the excitatory neuron at location x only excites the inhibitory neurons at place x. Inhibitory neurons merely inhibit the excitatory ones. Amari had the following field equations

\tau u_t(x,t) = -u(x,t) + \int w_{ee}(x-y)\, H(u(y,t))\, dy - \int w_{ie}(x-y)\, H(v(y,t))\, dy + h_1, \qquad (5)

\tau v_t(x,t) = -v(x,t) + w_{ei}(x)\, H(u(x,t)) + h_2, \qquad (6)

with u(x, t) and v(x, t) being the excitatory and inhibitory input, respectively; h_1 is the threshold for excitation, the Heaviside function H(u(x,t)) = 1 if u > h_1 and H(u(x,t)) = 0 otherwise, and the threshold for the inhibitory firing rate H(v(x,t)) is h_2. Amari used the same time constant for both inhibition and excitation. It has been argued that in a two-population continuous model, a difference in time constants is necessary to obtain traveling pulses [17, 64].

One can generalize Amari’s system to a two-population model with excitatory firing rate function f_e and inhibitory gain function f_i:

u_t(x,t) = -u(x,t) + \int w_{ee}(x-y)\, f_e(u(y,t))\, dy - \int w_{ie}(x-y)\, f_i(v(y,t))\, dy, \qquad (7)

v_t(x,t) = \varepsilon\bigl[-\beta v(x,t) + w_{ei}(x)\, f_e(u(x,t))\bigr], \qquad (8)

where β is the decay of the negative feedback, and ε results from rescaling the inhibitory and excitatory time constants.


Pinto studied a simpler system in which both the inhibition and the effect of excitation on inhibitory neurons are linear [64]. He also assumed that the decay of negative feedback is weak (i.e., β = 0) and used the Heaviside function for the excitatory firing rate. Then system (7) and (8) becomes

u_t(x,t) = -u(x,t) + \int w_{ee}(x-y)\, H(u(y,t))\, dy - v(x,t), \qquad (9)

v_t(x,t) = \varepsilon u(x,t). \qquad (10)

This model can be viewed as a one-population network of excitation with negative feedback, which could come from synaptic depression, spike frequency adaptation, or another intrinsic slow process.
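A minimal simulation sketch of system (9)-(10) is shown below: space is discretized, the convolution is replaced by a kernel matrix, and the equations are stepped forward with Euler's method from a localized region of excitation. The kernel amplitude, threshold, and ε are illustrative choices rather than the parameters of [64]; whether the excitation propagates as a pulse or dies out depends on them.

```python
import numpy as np

Lx, N = 40.0, 601
x = np.linspace(-Lx, Lx, N)
dx = x[1] - x[0]
theta, eps = 0.2, 0.05                                        # firing threshold and feedback rate

w_ee = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :])) * dx    # purely excitatory kernel
H = lambda u: (u > theta).astype(float)                       # Heaviside firing rate

u = np.where(np.abs(x + 30.0) < 2.0, 1.0, 0.0)                # localized excitation near the left edge
v = np.zeros(N)
dt = 0.05
for _ in range(6000):                                         # forward Euler in time
    du = -u + w_ee @ H(u) - v                                 # equation (9)
    dv = eps * u                                              # equation (10)
    u, v = u + dt * du, v + dt * dv

active = x[u > theta]
print("region above threshold:", (active.min(), active.max()) if active.size else "none")
```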

Both Amari and Pinto sought traveling pulse solutions u(x − ct) with velocity c as defined in the previous section. Such solutions represent propagating regions of excitation where the excitatory population is above the threshold u_h on a finite spatial interval (0, a). Amari showed that there is a traveling pulse solution [3]. It was not clear whether such a traveling pulse is unique. Pinto showed that there are two traveling pulses, one with fast speed and a wider pulse, the other slow and narrow. Pinto arrived at the conclusion that the fast pulse is stable and the slow one is unstable by numerical simulations. Later, Zhang rigorously proved the linear stability of the fast traveling pulse [85]. In [65], Pinto et al. studied system (9) and (10) with one more term −βv(x, t) added to (10). They demonstrated the existence of two traveling pulse solutions when neurons have a single stable state. They also showed the existence of stationary pulse solutions and a single traveling pulse solution. They constructed the Evans function to probe the stability of the standing and traveling pulses. All this work was based on the Heaviside gain function. Sandstede later revisited the construction of the Evans function for the same model [74]. He rigorously showed the reduction of the Evans function constructed in [65]. His proof generalized the derivation of the Evans function for the class of Heaviside gain functions.

We will examine traveling pulses in a more general system with the biologically realistic gain function f_e(u) = (α(u − u_h) + 1)H(u − u_h).

For mathematical simplification, we will assume that the inhibitory to excitatory connection in (7) is independent of the distance between neurons and that the firing rate for inhibition is purely linear. One can simplify (8) by assuming w_ei(x) is constant. Without loss of generality, one can set w_ei = 1. We will obtain the following system

u_t(x,t) = -u(x,t) + \int w_{ee}(x-y)\, f_e(u(y,t))\, dy - v(x,t), \qquad (11)

v_t(x,t) = \varepsilon\bigl[-v(x,t) + \beta f_e(u(x,t))\bigr]. \qquad (12)

One could probe whether system (11) and (12) has a traveling pulse using singular perturbation [17, 64]. However, such a construction will only identify one solution, if there are any. When there is more than one traveling pulse, such as in Pinto’s study with a fast stable one and a slow unstable one, the singular perturbation construction will certainly miss one. One may think it is harmless to miss the unstable traveling pulse, but it is just as crucial to identify any unstable solutions to understand the dynamics of the system. Therefore, we will study the system by transforming the integro-differential equations (11) and (12) into a system of higher order ordinary differential equations. After rigorously showing that solutions of system (11) and (12) solve the derived ODE system and vice versa, we can analyze the existence and stability of traveling patterns of the ODE system using theories in ODEs and dynamical systems.

References

[1] Daniel M. Abrams, Rennie Mirollo, Steven H. Strogatz, and Daniel A. Wiley. Solvable model for chimera states of coupled oscillators. Phys. Rev. Lett., 101(8):084103, Aug 2008.

[2] Daniel M. Abrams and Steven H. Strogatz. Chimera states for coupled oscillators. Phys. Rev. Lett., 93(17):174102, Oct 2004.


[3] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybernetics, 27:77–87, 1977.

[4] Russell J. Andrews. Neuroprotection at the nanolevel - Part II: Nanodevices for neuromodulation - deep brain stimulation and spinal cord injury. Ann. N.Y. Acad. Sci., 1122:185–196, 2007.

[5] Russell J. Andrews. Neuromodulation: Advances in the next five years. Ann. N.Y. Acad. Sci., 1199:204–211, 2010.

[6] A. Arieli, A. Sterkin, A. Grinvald, and A. Aertsen. Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science, 273:1868–1871, 1996.

[7] A. Baddeley. Working Memory. Oxford University Press, 1986.

[8] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA, 92:3844–3848, 1995.

[9] C. D. Brody, A. Hernandez, A. Zainos, and R. Romo. Timing and neural encoding of somatosensory parametric working memory in macaque prefrontal cortex. Cerebral Cortex, 13:1196–1207, 2003.

[10] C. D. Brody, R. Romo, and A. Kepecs. Basic mechanisms for graded persistent activity: discrete attractors, continuous attractors and dynamic representations. Current Opinion in Neurobiology, 13:204–211, 2003.

[11] M. Camperi and X.-J. Wang. A model of visuospatial working memory in prefrontal cortex: recurrent network and cellular bistability. J. Comp. Neurosci., 5:383–405, 1998.

[12] C. L. Colby, J.-R. Duhamel, and M. E. Goldberg. Oculocentric spatial representation in parietal cortex. Cerebral Cortex, 5:470–481, 1995.

[13] A. Compte, C. Constantinidis, J. Tegnér, S. Raghavachari, M. V. Chafee, P. S. Goldman-Rakic, and X.-J. Wang. Temporally irregular mnemonic persistent activity in prefrontal neurons of monkeys during a delayed response task. J. Neurophysiol., 90:3441–3454, 2003.

[14] B. W. Connors and Y. Amitai. Generation of epileptiform discharge by local circuits of neocortex. In Epilepsy: Models, Mechanisms, and Concepts, P. A. Schwartzkroin, ed., Cambridge University Press, Cambridge, UK, pages 388–423, 1993.

[15] S. Coombes, G. J. Lord, and M. R. Owen. Waves and bumps in neuronal networks with axo-dendritic synaptic interactions. Phys. D, 178:219–241, 2003.

[16] P. Danzl, A. Nabi, and J. Moehlis. Charge-balanced spike timing control for phase models of spiking neurons. Discrete and Continuous Dynamical Systems, 28(4):1413–1435, 2010.

[17] G. B. Ermentrout. Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys., 61:353–430, 1998.

[18] G. B. Ermentrout. Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students. SIAM, 2002.

[19] L. Evans. Partial differential equations. American Mathematical Society, 1998.

[20] Olivier Faugeras, Romain Veltz, and Francois Grimbert. Persistent neural states: Stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks. Neural Computation, 21:147–187, 2009.


[21] G. Faye, J. Rankin, and P. Chossat. Localized states in an unbounded neural field equation with smooth firing rate function: a multi-parameter analysis. Journal of Mathematical Biology, page 36, 2012.

[22] I. Ferezou, S. Bolea, and C. C. Petersen. Visualizing the cortical representation of whisker touch: voltage-sensitive dye imaging in freely moving mice. Neuron, 50:617–629, 2006.

[23] W. J. Freeman and J. M. Barrie. Analysis of spatial patterns of phase in neocortical gamma EEGs in rabbit. J. Neurophysiol., 84:1266–1278, 2000.

[24] R. W. Friedrich and S. I. Korsching. Combinatorial and chemotopic odorant coding in the zebrafish olfactory bulb visualized by optical imaging. Neuron, 18:737–752, 1997.

[25] S. Funahashi and M. Inoue. Neuronal interactions related to working memory processes in the primate prefrontal cortex revealed by cross-correlation analysis. Cerebral Cortex, 10:535–551, 2000.

[26] S. Funahashi, C. J. Bruce, and P. S. Goldman-Rakic. Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. J. Neurophys., 61:331–349, 1989.

[27] J. Fuster and G. Alexander. Neuron activity related to short-term memory. Science, 173:652–654, 1971.

[28] P. S. Goldman-Rakic. Circuitry of primate prefrontal cortex and regulation of behavior by representational memory. In: Handbook of physiology. The nervous system. Bethesda, MD: American Physiological Society, V (Plum F., ed.):373–417, 1987.

[29] P. S. Goldman-Rakic. Cellular basis of working memory. Neuron, 14:477–485, 1995.

[30] S. Grossberg and D. Levine. Some developmental and attentional biases in the contrast enhancement and short-term memory of recurrent neural networks. Journal of Theoretical Biology, 53:341–380, 1975.

[31] R. Guillery and S. M. Sherman. The role of thalamus in the flow of information to the cortex. Philos Trans R Soc Lond B Biol Sci, 357:1695–1708, 2002a.

[32] R. Guillery and S. M. Sherman. The thalamus as a monitor of motor outputs. Philos Trans R Soc Lond B Biol Sci, 357:1809–1821, 2002b.

[33] R. Guillery and S. M. Sherman. Thalamic relay functions and their role in corticocortical communication: generalizations from the visual system. Neuron, 33:163–175, 2002c.

[34] Y. Guo. Existence and stability of standing pulses in neural networks. PhD thesis, University of Pittsburgh, 2003.

[35] Y. Guo and C. C. Chow. Existence and stability of standing pulses in neural networks: I. existence. SIAM J. on Applied Dynamical Systems, 4(2):217–248, 2005.

[36] Y. Guo and C. C. Chow. Existence and stability of standing pulses in neural networks: II. stability. SIAM J. on Applied Dynamical Systems, 4(2):249–281, 2005.

[37] Y. Guo, J. E. Rubin, C. C. McIntyre, J. L. Vitek, and D. Terman. Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model. J Neurophysiol, 99:1477–1492, 2008.

[38] Yixin Guo. Existence and stability of traveling fronts in a lateral inhibition neural network. SIAM J. on Applied Dynamical Systems, to appear, 2012.

[39] Yixin Guo and Jonathan E. Rubin. Multi-site stimulation of subthalamic nucleus diminishes thalamocortical relay errors in a biophysical network model. Neural Networks, 24(6):602–16, 2011.


[40] C. Hamani, J. A. Saint-Cyr, J. Fraser, M. Kaplitt, and A. M. Lozano. The subthalamic nucleus in the context of movement disorders. Brain, 127(1):4–20, 2004.

[41] C. Hauptmann, O. O. Omel'chenko, O. V. Popovych, Y. Maistrenko, and P. A. Tass. Control of spatially patterned synchrony with multisite delayed feedback. Phys. Rev. E, 76(6):066209, Dec 2007.

[42] C. Hauptmann, O. Omel'chenko, O. V. Popovych, Y. Maistrenko, and P. A. Tass. Desynchronizing the abnormally synchronized neural activity in the subthalamic nucleus: a modeling study. Expert Review of Medical Devices, 4(5):633–650, Sep 2008.

[43] C. Hauptmann, O. Popovych, and P. A. Tass. Effectively desynchronizing brain stimulation based on a coordinated delayed feedback stimulation via several sites: a computational study. Biological Cybernetics, 93:463–470, 2005.

[44] Christoph Kirst, Theo Geisel, and Marc Timme. Sequential desynchronization in networks of spiking neurons with partial reset. Phys. Rev. Lett., 102(6):068101, Feb 2009.

[45] Yoshiki Kuramoto and Dorjsuren Battogtokh. Coexistence of coherence and incoherence in nonlocally coupled phase oscillators. Nonlinear Phenomena in Complex Systems, 5(4):380–385, 2002.

[46] Yoshiki Kuramoto and Shin-ichiro Shima. Rotating spirals without phase singularity in reaction-diffusion systems. Prog. Theor. Phys., 150:115–125, 2003.

[47] Yoshiki Kuramoto, Shin-ichiro Shima, Dorjsuren Battogtokh, and Yuri Shiogai. Mean-field theory revives in self-oscillatory fields with non-local coupling. Progress of Theoretical Physics, 161:127–143, 2006.

[48] C. R. Laing, W. C. Troy, B. Gutkin, and G. B. Ermentrout. Multiple bumps in a neuronal model of working memory. SIAM J. of Applied Math., 63(1):62–97, 2002.

[49] Carlo R. Laing. Chimera states in heterogeneous networks. CHAOS, 19:013113, 2009.

[50] Carlo R. Laing. Chimeras in networks of planar oscillators. Phys. Rev. E, 81(6):066221, Jun 2010.

[51] Y. W. Lam, L. B. Cohen, M. Wachowiak, and M. R. Zochowski. Odors elicit three different oscillations in the turtle olfactory bulb. J. Neurosci., 20:749–762, 2000.

[52] Y. W. Lam, L. B. Cohen, and M. R. Zochowski. Odorant specificity of three oscillations and the DC signal in the turtle olfactory bulb. Eur. J. Neurosci., 17:436–446, 2003.

[53] P. E. Latham, B. G. Richmond, P. G. Nelson, and S. Nirenberg. Intrinsic dynamics in neuronal networks. I. Theory. J. Neurophysiol., 83(2):808–827, 2000.

[54] P. E. Latham, B. G. Richmond, S. Nirenberg, and P. G. Nelson. Intrinsic dynamics in neuronal networks. II. Experiment. J. Neurophysiol., 83(2):828–835, 2000.

[55] Erik A. Martens. Bistable chimera attractors on a triangular network of oscillator populations. Phys. Rev. E, 82(1):016216, Jul 2010.

[56] Adilson E. Motter. Nonlinear dynamics: Spontaneous synchrony breaking. Nat Phys, 6(3):164–165, 2010.

[57] M. A. Nicolelis, L. A. Baccala, R. C. Lin, and J. K. Chapin. Sensorimotor encoding by synchronous neural ensemble activity at multiple levels of the somatosensory system. Science, 268:1352–1358, 1995.

[58] Oleh E. Omel'chenko, Yuri L. Maistrenko, and Peter A. Tass. Chimera states: The natural link between coherence and incoherence. Phys. Rev. Lett., 100(4):044105, Jan 2008.


[59] Oleh E. Omel'chenko, Matthias Wolfrum, and Yuri L. Maistrenko. Chimera states as chaotic spatiotemporal patterns. Phys. Rev. E, 81(6):065201, Jun 2010.

[60] Edward Ott and Thomas M. Antonsen. Low dimensional behavior of large systems of globally coupled oscillators. CHAOS, 18:037113, 2008.

[61] B. Pesaran, J. S. Pezaris, M. Sahani, P. P. Mitra, and R. A. Andersen. Temporal structure in neuronal activity during working memory in macaque parietal cortex. Nat. Neurosci., 5:805–811, 2002.

[62] C. C. Petersen, T. T. Hahn, M. Mehta, A. Grinvald, and B. Sakmann. Interaction of sensory responses with spontaneous depolarization in layer 2/3 barrel cortex. PNAS, 100:13638–13643, 2003.

[63] J. D. Pinto and G. B. Ermentrout. Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses. SIAM J. Appl. Math., 62:226–243, 2001.

[64] J. D. Pinto and G. B. Ermentrout. Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J. Appl. Math., 62:206–225, 2002.

[65] J. D. Pinto, K. R. Jackson, and C. E. Wayne. Existence and stability of traveling pulses in a continuous neuronal network. SIAM J. Appl. Dynam. Syst., 4:954–984, 2005.

[66] J. D. Pinto, S. L. Patrick, W. C. Huang, and B. W. Connors. Initiation, propagation and termination of epileptiform activity in rodent neocortex in vitro involve distinct mechanisms. J. Neurosci., 25:8131–8140, 2005.

[67] J. C. Prechtl, T. H. Bullock, and D. Kleinfeld. Direct evidence for local oscillatory current sources and intracortical phase gradients in turtle visual cortex. PNAS, 97:877–882, 2000.

[68] J. D. Prechtl, L. B. Cohen, B. Pesaran, P. P. Mitra, and D. Kleinfeld. Visual stimuli induce waves of electrical activity in turtle cortex. PNAS, 94:7621–7626, 1997.

[69] S. G. Rao, G. Williams, and P. S. Goldman-Rakic. Isodirectional tuning of adjacent interneurons and pyramidal cells during working memory: evidence for microcolumnar organization in PFC. J. Neurophysiol., 81:1909–1916, 1999.

[70] A. Renart, N. Brunel, and X-J Wang. Mean-field theory of recurrent cortical networks: from irregularly spiking neurons to working memory. In Computational Neuroscience: A Comprehensive Approach, J. Feng, ed., CRC Press, Boca Raton, 2003.

[71] R. Romo, C. D. Brody, A. Hernandez, and L. Lemus. Neuronal correlates of parametric working memory in the prefrontal cortex. Nature, 399:470–473, 1999.

[72] M. G. Rosenblum and A. S. Pikovsky. Delayed feedback control of collective synchrony: An approach to suppression of pathological brain rhythms. Phys. Rev. E, 70(4):041904, Oct 2004.

[73] J. E. Rubin and D. Terman. High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model. J. Comput. Neurosci., 16:211–235, 2004.

[74] B. Sandstede. Evans functions and nonlinear stability of travelling waves in neuronal network models. International Journal of Bifurcation and Chaos, 17:2693–2704, 2007.

[75] D. M. Senseman and K. A. Robbins. Modal behavior of cortical neural networks during visual processing. J. Neurosci., 19(10):RC3, 1999.

[76] Jane H. Sheeba, V. K. Chandrasekar, and M. Lakshmanan. Chimera and globally clustered chimera: Impact of time delay. Phys. Rev. E, 81(4):046203, Apr 2010.


[77] Shin-ichiro Shima and Yoshiki Kuramoto. Rotating spiral waves with phase-randomized core in nonlocally coupled oscillators. Phys. Rev. E, 69(3):036213, Mar 2004.

[78] D. Terman, J. E. Rubin, A. C. Yew, and C. J. Wilson. Activity patterns in a model for the subthalamopallidal network of the basal ganglia. J. of Neurosci., 22(7):2963–2976, 2002.

[79] N. Tukhlina, M. Rosenblum, A. Pikovsky, and J. Kurths. Feedback suppression of neural synchrony by vanishing stimulation. Physical Review E, 75:011918, 2007.

[80] R. Veltz and O. Faugeras. Local/global analysis of the stationary solutions of some neural field equations. SIAM J. Appl. Dyn. Syst., 9:954–998, 2010.

[81] X-J Wang. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J. Neuroscience, 19(21):9587–9603, 1999.

[82] X-J Wang. Synaptic reverberation underlying mnemonic persistent activity. Trends in Neurosci., 24:455–463, 2001.

[83] G. D. Yang and Y. Guo. Localized states in 1-D homogeneous neural field models with general coupling and firing rate functions. Submitted, 2012.

[84] L. Zhang. On stability of traveling wave solutions in synaptically coupled neuronal networks. Differential and Integral Equations, 16(5):513–536, 2003.

[85] L. Zhang. Existence, uniqueness and exponential stability of traveling wave solutions of some integral differential equations arising from neuronal networks. J. Differential Equations, 197:162–196, 2004.


Citations of journal articles by Yixin Guo and collaborators, up to August 30, 2012

• Yixin Guo. Existence and stability of traveling fronts in a lateral inhibition neural network. SIAM J. on Applied Dynamical Systems, to appear, 2012. It has not yet been cited. Journal impact factor is 1.79.

• Yixin Guo and Jonathan E. Rubin. Multi-site stimulation of subthalamic nucleus diminishes thalamocortical relay errors in a biophysical network model. Neural Networks, 24(6):602–616, 2011. This paper is cited 4 times by other authors. Journal impact factor is 2.182.

• Y. Guo, J. E. Rubin, C. C. McIntyre, J. L. Vitek, and D. Terman. Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model. J Neurophysiol, 99:1477-1492, 2008. This paper is cited 53 times by other authors. Journal impact factor is 3.648.

• Y. Guo and C.C. Chow. Existence and stability of standing pulses in neural networks: I. existence. SIAM J. on Applied Dynamical Systems, 4(2):217-248, 2005. This paper is cited 43 times by other authors. Journal impact factor is 1.79.

• Y. Guo and C.C. Chow. Existence and stability of standing pulses in neural networks: II. stability. SIAM J. on Applied Dynamical Systems, 4(2):249-281, 2005. This paper is cited 49 times by other authors. Journal impact factor is 1.79.


Peer-reviewed Journal articles (In the following order)

• Yixin Guo, Existence and Stability of Traveling Fronts in a Lateral Inhibition Neural Network, to appear on SIAM Journal on Applied Dynamical Systems, 2012.

• Yixin Guo and Jonathan Rubin, Multi-site Stimulation of Subthalamic Nucleus Diminishes Thalamocortical Relay Error in a Biophysical Network Model. Neural Networks, Elsevier. Volume 24, Issue 6, August 2011, Pages 602-616. Special Issue: Neurocomputational Models of Brain Disorders.

• Yixin Guo, Jonathan Rubin, Cameron McIntyre and David Terman. Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model, Journal of Neurophysiology, 99, 1477-1492, January 2, 2008.

• Yixin Guo and Carson C. Chow. Existence and Stability of Standing Pulses in Neural Networks: I Existence, SIAM Journal on Applied Dynamical Systems Vol 4, 217-248, 2005.

• Yixin Guo and Carson C. Chow. Existence and Stability of Standing Pulses in Neural Networks: II Stability, SIAM Journal on Applied Dynamical Systems Vol 4, 249-281, 2005.

• Guo Y, Park C, Rong M, Worth RM, Rubchinsky LL. Modulation of thalamocortical relay by basal ganglia in Parkinson’s disease and dystonia. BMC Neuroscience 2011, 12(Suppl 1):P275.

• Yang D G, Guo Y. Entrainment of a thalamocortical neuron to periodic sensorimotor signals. BMC Neuroscience 2011, 12(Suppl 1):P135.


EXISTENCE AND STABILITY OF TRAVELING FRONTS IN A LATERAL INHIBITION NEURAL NETWORK

YIXIN GUO∗

Abstract. We consider the existence and stability of traveling front solutions of a neural network consisting of a single layer of neurons synaptically connected by lateral inhibition. For a specific 'Mexican Hat' coupling function, the existence condition for traveling fronts can be reduced to the solution of an algebraic system. Our work extends the existence of traveling fronts of the classic Amari model by considering a non-saturating piecewise linear gain function. We further establish an analytic method to investigate the linear stability of traveling front solutions in the Heaviside gain case.

Key words. neural field model, integro-differential equation, traveling front, neural network, existence, linear stability

1. Introduction. Neuronal waves such as traveling fronts and pulses have been observed in in vitro experiments using thin brain slices [3, 7, 8, 31, 43, 45, 55]. When inhibition is blocked or dramatically reduced by pharmacological manipulation, traveling front phenomena, in which cells stay in the excited state, can occur. It has been reported that beta oscillations (10–45 Hz) propagate as waves across the motor cortex as monkeys plan and execute an instructed-delay reaching task [4, 25, 40, 41, 42, 51]. Beta waves are observed not only across the sensorimotor cortex but also during behavioral tasks that require increased attention and active participation [11, 35, 36, 37]. The waves predominantly travel in one of two oppositely oriented directions [48]. Even though different cortical areas [2, 19, 23, 24, 32, 33, 38, 39, 46, 47, 52] show different preferences for travel direction, these dominant propagation directions within a cortical area are consistent across time. Human neurological disorders such as epilepsy are also characterized by propagating activity across the cortex [9]. Epileptiform discharges studied in slices of neocortex show horizontal propagation in opposite directions. This evidence justifies the use of a one-dimensional model to study neuronal wave propagation.

The coarse-grained averaged activity of a neural network can be described by the following neural field equation [1, 13, 26, 53, 54]

τ ∂u(x, t)/∂t = −u(x, t) + ∫_{−∞}^{∞} w(x − y) f(u(y, t)) dy,    (1.1)

where u(x, t) is the synaptic input to neurons located at position x ∈ (−∞, ∞) at time t ≥ 0; it represents the level of excitation or amount of input to a neural element. The coupling function w(x) determines the connections between neurons. The nonnegative, monotonically non-decreasing gain function f(u) denotes the firing rate at x at time t. We can set the synaptic decay time τ to unity without loss of generality.
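As a rough illustration of the dynamics governed by (1.1), the following Python sketch integrates a discretized version of the equation with a Heaviside gain and an exponential coupling; the parameter values and the forward Euler time-stepping are illustrative assumptions, not specifications taken from this paper.

```python
# Minimal numerical sketch of the neural field equation (1.1), assuming a
# Heaviside firing rate and an exponential coupling; parameters are
# illustrative only.
import numpy as np

def simulate_field(L=40.0, N=801, T=20.0, dt=0.01, A=1.0, h=0.3):
    x = np.linspace(-L, L, N)
    dx = x[1] - x[0]
    w = 0.5 * A * np.exp(-A * np.abs(x))           # coupling kernel on the grid
    u = np.where(x < 0.0, 0.0, 0.6)                # step-like initial condition
    for _ in range(int(T / dt)):
        f = (u > h).astype(float)                  # Heaviside gain
        conv = dx * np.convolve(f, w, mode="same") # approximates the integral term
        u += dt * (-u + conv)                      # forward Euler with tau = 1
    return x, u

if __name__ == "__main__":
    x, u = simulate_field(h=0.3)
    # the level set u = h drifts at a roughly constant speed, i.e. a front
    print("approximate front location:", x[np.argmax(u > 0.3)])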

Ermentrout and McLeod [14] considered a network of a single population of excitatory neurons distributed on the one-dimensional real line. Such a synaptically coupled network can be described by the integral equation

u(x, t) = ∫_{0}^{∞} h(s) ds ∫_{−∞}^{∞} w(x − y) f(u(y, t − s)) dy,    (1.2)

∗Department of Mathematics, Drexel University, Philadelphia, PA 19104, [email protected]


where u(x, t), w, and f(u) are defined as in equation (1.1), and h(s), defined on [0, ∞), is the time course of activity resulting from a single presynaptic spike. h(s) is generally non-negative and monotonically decreasing; its integral on [0, ∞) is normalized to 1, and sh(s) is integrable on [0, ∞). One special case is h(s) = e^{−s}, which gives (1.1) after differentiating (1.2). Ermentrout and McLeod considered a nonnegative, bounded, even and piecewise smooth coupling function. Their gain function f(u) is continuously differentiable and monotonically increasing, such that f(u) − u has precisely three zeros at u = 0, a, 1 with 0 < a < 1. Using a continuation argument, Ermentrout and McLeod showed that (1.2) has a unique and asymptotically stable monotonic traveling front solution that connects the two stable constant solutions u ≡ 0 and u ≡ 1.

Zhang investigated the existence and stability of traveling front solutions for a system of integro-differential equations whose limiting case is (1.1) [56]. While his goal was to find the explicit Evans function for the system of integro-differential equations, we only mention his derivation of the integral Evans function E(λ) for the limiting scalar equation (1.1), as it is most relevant to the current paper. To solve the eigenvalue problem resulting from linearizing (1.1) around a traveling front u(ξ), with ξ the moving coordinate, Zhang first solved an intermediate homogeneous eigenvalue problem, then used the method of variation of parameters to find the explicit form of the eigenfunction φ(ξ). For φ(ξ) to remain bounded, the eigenvalue λ must satisfy E(λ) = 0, where E(λ) is an analytic complex function called the Evans function. Using the Evans function E(λ), he further proved that the real part of the eigenvalue λ cannot be nonnegative, except for the simple eigenvalue λ = 0, which is associated with translational invariance. Therefore, he concluded that traveling fronts of (1.1) are linearly stable.

Ermentrout and McLeod [14] and Zhang [56] considered the same type of coupling function. The differences between their work lie in the gain function and in their approaches to proving the existence of stable traveling fronts. Ermentrout and McLeod aimed to show that stable traveling fronts exist for a general class of gain functions. They carried out a continuation argument in which they moved from their general problem to one that was already proven to have traveling fronts of zero velocity, with special h_0(s) = e^{−s}, w_0(x) = (1/2)e^{−|x|} and f_0 [20, 21]. Then they showed that there continued to exist a unique traveling front solution of nonzero velocity as they continuously changed h, w, and f within a neighborhood of h_0, w_0, and f_0. Zhang took the Heaviside gain function, which does not belong to the general class of gain functions that Ermentrout and McLeod defined, so the homotopy argument of Ermentrout and McLeod no longer applies. Zhang's approach focused on working with differential equations resulting from the integro-differential equations, and the eigenvalue problem derived after perturbing the traveling front also consisted of differential equations. His major contribution was to associate the spectrum of the eigenvalue problem, not only for (1.1) but also for the system of integro-differential equations, with the zeros of an analytic complex Evans function. The stability of traveling fronts can then be determined by the zeros of the Evans function, which gained its name from the series of papers by John W. Evans [15, 16, 17, 18]. In general, as shown in [49, 50, 56], it is very difficult to find the Evans function, especially for a system of integro-differential equations.

Inspired by Ermentrout and McLeod's and Zhang's previous studies, we aim to extend their work in two aspects. First, we show the existence of non-monotonic traveling front solutions of (1.1) with a 'Mexican Hat' type of lateral inhibition coupling and a non-saturating piecewise linear gain. Second, we extend Zhang's approach using the integral Evans function to lateral inhibition networks with the Heaviside gain. We show that there is no eigenvalue with positive real part for traveling fronts existing in the parameter ranges we consider. Since Zhang's integral Evans function approach cannot be extended to the stability analysis of traveling fronts with non-zero gain, we further investigate numerically the stability of the lateral inhibition network with a non-saturating gain function. A different analytical approach can work in this case. However, with non-zero gain, both the analytic derivation and the numerical computation of the essential spectrum and point spectrum are significantly more complicated and beyond the scope of this paper. We will handle the linear stability analysis for the piecewise linear gain function in a forthcoming paper.

This paper is organized as follows. We first introduce the setup of the neural network, including the coupling, the gain function, and the traveling fronts we consider, in Section 2. Then, in Section 3, we study the existence of monotonic and non-monotonic traveling fronts with both positive and negative speed in an excitatory network with nonnegative coupling. We explain the equivalence between the integro-differential equation (1.1) and a higher order ODE. Section 4 focuses on the existence of non-monotonic traveling fronts of lateral inhibition networks, in which the coupling function w(x) is no longer nonnegative. In both existence studies, we use a non-saturating piecewise linear gain function, which differs from and extends most existing work using the Heaviside gain [6, 10, 22, 56, 57].

We devote the last section to the linear stability analysis of traveling front solutions in a lateral inhibition network with the special Heaviside gain function. In subsection 5.1, we extend Zhang's Evans function derivation to a lateral inhibition network with a 'Mexican Hat' coupling function. Then we carry out a numerical investigation of the traveling fronts of the neural field equation (1.1). We compare the traveling front solutions from simulating (1.1) with those we obtain from solving the higher order ODE derived in Section 4.

All numerical and symbolic calculations in this paper are done using Mathematica, XPPAUT [12] and MATLAB.

2. Neural network equations and traveling fronts. We study a neural network (1.1) with both a purely excitatory connection and a lateral inhibition coupling function w(x), for which excitatory connections dominate for proximal neurons and inhibitory connections dominate for distal neurons. The gain function is piecewise linear, with the Heaviside gain as its special case when the gain is zero. We give further details about the coupling and gain function in the following section.

2.1. The coupling and the gain function. We consider coupling functions that satisfy the following:

C1 w(x) = w(−x) for all x ∈ R,
C2 w(x) ∈ C(R) ∩ C^∞(R \ 0),
C3 w(x), w″(x), and w^(iv)(x) are continuous on R,
C4 w′(x) and w‴(x) have discontinuities at x = 0 such that w′(0⁻) = M_1 = −w′(0⁺) and w‴(0⁻) = M_2 = −w‴(0⁺), where M_1, M_2 > 0 are positive constants,
C5 w(x) and its derivatives → 0 as x → ±∞.

In this paper, we consider two coupling functions, representing excitatory connection and lateral inhibition connection.


The excitatory coupling function w(x) is even, nonnegative, and piecewise smooth, of the form

w(x) = (A/2) e^{−A|x|},   where A ≤ 1.    (2.1)

The lateral inhibition coupling function is given by

w(x) = A e^{−a|x|} − e^{−|x|},    (2.2)

where 1 < a < √A, and 1 < A = 1.5a.

Besides properties C1–C5, the lateral inhibition coupling function (2.2) is positive on an interval (−x_0, x_0) with finite x_0; negative on (−∞, −x_0) ∪ (x_0, ∞); and has a unique minimum x_m on R⁺ such that x_m > x_0. w(x) is decreasing on (0, x_m] and strictly increasing on (x_m, ∞). An example of the connection function is shown in Fig. 2.1.

Fig. 2.1. Lateral inhibition connection function with A = 2.8, a = 2.6.

The gain function we consider in this paper is

f(u) = (α(u − h) + 1) Θ(u − h),    (2.3)

where Θ(u − h) is the Heaviside function such that

Θ(u − h) = 1 if u > h, and 0 otherwise.    (2.4)

Fig. 2.2. Piecewise-linear gain function.

The gain function (2.3) does not saturate and has gain α. We consider gain −1 < α < 1 and threshold 0.15 < h < 0.85. The gain function (2.3) turns into the Heaviside function with zero gain when α = 0 (see Fig. 2.2). Such a piecewise linear gain function was previously used to study standing pulses in [27, 28, 29]. Later work by Botelho et al. [5] and Murdock [34] also used a piecewise linear gain function.
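A short Python sketch of the two ingredients just defined follows; the specific values of a, h, and α are illustrative, and the only claim checked is the normalization W_0 = 1 that results from setting A = 1.5a.

```python
# Sketch of the lateral inhibition coupling (2.2) and the piecewise linear
# gain (2.3); parameter values are illustrative assumptions.
import numpy as np

def mexican_hat(x, a=1.5):
    A = 1.5 * a                                   # A = 1.5a normalizes W0 to 1
    return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))

def gain(u, h=0.6, alpha=0.5):
    # f(u) = (alpha*(u - h) + 1) * Theta(u - h); reduces to Heaviside when alpha = 0
    return (alpha * (u - h) + 1.0) * (u > h)

# quick check that the coupling integrates to W0 ~ 1 under A = 1.5a
x = np.linspace(-50, 50, 200001)
print(np.trapz(mexican_hat(x), x))
```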

2.2. The traveling front solution. We rewrite the neural field equation (1.1) in the traveling coordinate ξ = x − ct, where c is the traveling velocity,

−c u′(ξ) = −u(ξ) + ∫_{−∞}^{∞} w(ξ − η) f(u(η)) dη.    (2.5)

We are interested in traveling front solutions that connect the two constant solutions of (2.5), which are u_1 = 0 and u_2 = (1 − αh)W_0 / (1 − αW_0), where W_0 = ∫_{−∞}^{∞} w(x) dx. In the excitatory network, when w(x) = (A/2)e^{−A|x|}, W_0 is normalized to 1. In the lateral inhibition network, when w(x) = A e^{−a|x|} − e^{−|x|}, we set A = 1.5a to normalize W_0. Therefore, we look for a traveling front that connects 0 and a_2 = (1 − αh)/(1 − α).

Since network (2.5) is invariant under translation, the traveling front can cross the threshold h at any finite value of ξ. Without loss of generality, we assume that u(0) = h, u < h on (−∞, 0), and u > h on (0, +∞), where h is the threshold. We define the traveling front solution as:

Definition 2.1. Traveling front solution:

u(ξ) := { > h if ξ ∈ (0, ∞);  = h if ξ = 0;  < h if ξ ∈ (−∞, 0) },    (2.6)

such that u, u′, and u″ are bounded and continuous on R. The n-th order (n ≥ 3) derivatives of u are continuous everywhere except at ξ = 0. u(ξ) also satisfies the following:

lim_{ξ→∞} u(ξ) = a_2 = (1 − αh)/(1 − α),   and   lim_{ξ→−∞} u(ξ) = 0,

lim_{ξ→±∞} u′(ξ) = lim_{ξ→±∞} u″(ξ) = lim_{ξ→±∞} u^(n)(ξ) = 0,   n ≥ 3.

When α = 0, the traveling velocity satisfies −A ≤ c ≤ A in the excitatory network, and −1/a ≤ c ≤ 1/a in the lateral inhibition network.

In general, using the piecewise linear gain (2.3) with α ≠ 0, we cannot solve equation (2.5) to obtain a closed form solution directly. The only exception is when (2.3) turns into the Heaviside function with α = 0. Our strategy to find traveling fronts is to convert (2.5) into an ODE that obeys a set of matching conditions at ξ = 0 and then solve the ODE.

3. Traveling fronts in an excitatory network. We first study the existence of traveling front solutions in a pure excitatory network with exponential coupling function w(x) = (A/2)e^{−A|x|}.

3.1. ODE derivation from (2.5) using the excitatory coupling. In this section, we show that the solution of (2.5) satisfies a third order ODE. We introduce a differentiation method to convert (2.5) into a higher order ODE.


Rewrite (2.5) as

−c u′(ξ) = −u(ξ) + ∫_{−∞}^{ξ} w(ξ − η) f(u(η)) dη + ∫_{ξ}^{∞} w(ξ − η) f(u(η)) dη.    (3.1)

Differentiating both sides of (3.1) with respect to ξ using the Leibniz rule,

−c u″(ξ) = −u′(ξ) + w(0) f(u(ξ)) + ∫_{−∞}^{ξ} w′(ξ − η) f(u(η)) dη − w(0) f(u(ξ)) + ∫_{ξ}^{∞} w′(ξ − η) f(u(η)) dη,

−c u″(ξ) = −u′(ξ) + ∫_{−∞}^{ξ} w′(ξ − η) f(u(η)) dη + ∫_{ξ}^{∞} w′(ξ − η) f(u(η)) dη.    (3.2)

u″(ξ) is continuous everywhere on R, but it is not smooth. We further differentiate (3.2) for ξ ≠ 0; u‴(ξ) has a jump at ξ = 0, which is indicated in the following equation:

−c u‴(ξ) = −u″(ξ) + w′(0⁺) f(u(ξ)) + ∫_{−∞}^{ξ} w″(ξ − η) f(u(η)) dη − w′(0⁻) f(u(ξ)) + ∫_{ξ}^{∞} w″(ξ − η) f(u(η)) dη.    (3.3)

Notice that equations (3.1)–(3.3) hold for both the excitatory and lateral inhibition coupling functions.

For the excitatory coupling function,

w′(0⁺) = lim_{x→0⁺} w′(x) = −A w(0) = −A²/2,   w′(0⁻) = lim_{x→0⁻} w′(x) = A w(0) = A²/2,

we have

−c u‴(ξ) = −u″(ξ) + ∫_{−∞}^{ξ} w″(ξ − η) f(u(η)) dη + ∫_{ξ}^{∞} w″(ξ − η) f(u(η)) dη − A² f(u(ξ))
         = −u″(ξ) + ∫_{−∞}^{∞} w″(ξ − η) f(u(η)) dη − A² f(u(ξ)).

For the excitatory coupling w(x) = (A/2)e^{−A|x|}, w″ = A²w; therefore,

−c u‴(ξ) = −u″(ξ) + A² ∫_{−∞}^{∞} w(ξ − η) f(u(η)) dη − A² f(u(ξ)).    (3.4)

We replace the integral in (3.4) by u(ξ) − c u′(ξ) (from (2.5)) to obtain the following ODE,

c u‴(ξ) − u″(ξ) − A²c u′(ξ) + A² u(ξ) = A² f(u(ξ)).    (3.5)

Notice that the right hand side of ODE (3.5) contains the Heaviside function within the gain function f(u(ξ)), which indicates the jump in u‴(ξ) at ξ = 0. The matching condition at ξ = 0 for the excitatory network is as simple as

u(0⁺) = u(0⁻),   u′(0⁺) = u′(0⁻),   u″(0⁺) = u″(0⁻).    (3.6)

Therefore, the traveling front solution of (2.5) with pure excitatory coupling should satisfy ODE (3.5) along with the set of matching conditions (3.6).
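The step from (3.4) to (3.5) relies on the identity w″ = A²w for the exponential kernel. The short symbolic sketch below (not part of the paper's argument) checks this identity and the related ODE (3.7) for x > 0.

```python
# Symbolic sanity check, a sketch only: for x > 0 the excitatory kernel
# w(x) = (A/2) e^{-A x} satisfies w'' = A^2 w, the identity used to pass
# from (3.4) to (3.5), and it also solves the kernel ODE (3.7).
import sympy as sp

x, A, c = sp.symbols("x A c", positive=True)
w = sp.Rational(1, 2) * A * sp.exp(-A * x)        # w(x) restricted to x > 0

print(sp.simplify(sp.diff(w, x, 2) - A**2 * w))   # -> 0, so w'' = A^2 w
ode37 = sp.diff(w, x, 3) - sp.diff(w, x, 2)/c - A**2*sp.diff(w, x) + A**2*w/c
print(sp.simplify(ode37))                         # -> 0, so (3.7) holds for x != 0
```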


On the other hand, we can show that the solution of the third order ODE (3.5) with its matching conditions (3.6) is also a solution of (2.5) if the coupling function is of the exponential form w(x) = (A/2)e^{−A|x|}. By Lemma 7.1, proved in Appendix 7.1, such a coupling function satisfies the ODE

w‴ − w″/c − A²w′ + (A²/c)w = 0,   x ≠ 0.    (3.7)

Suppose that u is the solution of (3.5); we will show that u also satisfies (2.5). We multiply ODE (3.5) by w = w(ξ − η), for fixed ξ ≠ η, to obtain

w(u‴ − u″/c − A²u′ + A²u/c) = A²wf(u)/c.    (3.8)

Let ζ = ξ − η, and set w_ζ = dw/dζ, w_ζζ = d²w/dζ², and w_ζζζ = d³w/dζ³. Then we have

w u‴ − A²w u′ = (w u″ + w_ζ u′ + w_ζζ u − A²w u)′ + w_ζζζ u − A²w_ζ u,
w u″ = (w u′ + w_ζ u)′ + w_ζζ u,

where ()′ = d()/dη, here and in the rest of this subsection.

Using (3.7), (3.8) can be written as

(w u″ + w_ζ u′ + w_ζζ u − A²w u)′ − (1/c)(w u′ + w_ζ u)′ = A²wf(u)/c.    (3.9)

Integrating (3.9) with respect to η from −∞ to ∞,

∫_{−∞}^{∞} [(w u″ + w_ζ u′ + w_ζζ u − A²w u)′ − (1/c)(w u′ + w_ζ u)′] dη = ∫_{−∞}^{∞} A²wf(u)/c dη.    (3.10)

Let J(ξ, η) = w u″ − w_ζ u′ + w_ζζ u − A²w u − (1/c)w u′ + (1/c)w_ζ u. Then (3.10) becomes

∫_{−∞}^{∞} J_η(ξ, η) dη = ∫_{−∞}^{∞} A²wf(u)/c dη,
∫_{−∞}^{ξ} J_η(ξ, η) dη + ∫_{ξ}^{∞} J_η(ξ, η) dη = ∫_{−∞}^{∞} A²wf(u)/c dη,
J(ξ, ξ⁻) − J(ξ, −∞) + J(ξ, ∞) − J(ξ, ξ⁺) = ∫_{−∞}^{∞} A²wf(u)/c dη,
[w_ζ(0⁺) − w_ζ(0⁻)] u′ + (1/c)[w_ζ(0⁺) − w_ζ(0⁻)] u = ∫_{−∞}^{∞} A²f(u)w/c dη,
−A² u′ + (A²/c) u = ∫_{−∞}^{∞} A²wf(u)/c dη,
−c u′ = −u + ∫_{−∞}^{∞} w f(u) dη.

Krisner showed that standing pulse solutions of a fourth order ODE are also solutions of an integral equation, using an integration technique [30]. We use a similar technique to show that the solution of ODE (3.5) is also a solution of the integro-differential equation (2.5). However, our derivation is not a simple imitation of Krisner's proof. There are major differences in the assumptions on the coupling and the gain function. One difference lies in the assumption on the coupling function w. In Krisner's proof, the first and second order derivatives of his coupling functions are both continuous; the coupling functions belong to C²(R) ∩ C^∞(R \ 0). In our derivation, it is not necessary for w to have a continuous second order derivative; our coupling functions belong to C(R) ∩ C^∞(R \ 0). Secondly, the ODEs with which Krisner worked were derived using continuous, smooth gain functions, so he did not need to deal with any discontinuity in his ODE. In our case, the piecewise linear gain function is discontinuous at the threshold. This discontinuity is passed to the higher order derivatives of the traveling front solution, and we must carefully handle the discontinuity appearing in each step of our derivation, as further demonstrated in Section 4.2. Finally, the solutions considered in [30] were standing pulses; therefore Krisner worked with a fourth order ODE and the integral equation u(x) = ∫_{−∞}^{∞} w(x − y)f(u(y)) dy, which gives stationary solutions of the neural field equation (1.1). Our ODEs are completely different from the one used in [30], since we are interested in traveling patterns. The ODE for a traveling front is third order in the excitatory network, and the ODE for a traveling front in the lateral inhibition network is fifth order, as given in Section 4.1.

3.2. Traveling fronts with negative and positive speed. If the gain function f is piecewise linear, a traveling front satisfies the following ODE

(c/A²)u‴ − (1/A²)u″ − cu′ + u = (α(u − h) + 1)Θ(u − h),    (3.11)

where Θ(u − h) is the Heaviside function given in (2.4). We write this ODE as two ODEs by separating its domain:

cu‴ − u″ − A²cu′ + A²u = A²[α(u − h) + 1],   ξ > 0,    (3.12)
cu‴ − u″ − A²cu′ + A²u = 0,   ξ < 0.    (3.13)

The characteristic values for (3.13) are ±A and 1/c; it is easy to write out the traveling front solution on (−∞, 0). The characteristic equation for ξ > 0 is

f(λ) = cλ³ − λ² − A²cλ + A²(1 − α).    (3.14)

Suppose the three characteristic values are λ1, λ2, and λ3. None of the characteristic values can be zero, because λ1λ2λ3 = A²(α − 1)/c and A²(1 − α)/c ≠ 0 with α < 1. Furthermore, if λ1, λ2, λ3 are all real, we know their signs and the explicit form of the traveling front by Lemma 7.2 in the Appendix. Lemma 7.3 in the Appendix provides the solution when λ2 and λ3 are complex conjugates. Based on these features of the characteristic values, we have the following solutions for (3.12) and (3.13):

Case E1: According to Lemma 7.2(a), when c > 0, λ1 < 0, λ2 > 0, and λ3 > 0:

u(ξ) = c1 e^{λ1ξ} + a_2,   ξ > 0,
u(ξ) = c2 e^{Aξ} + c3 e^{ξ/c},   ξ < 0,

where a_2 = (1 − αh)/(1 − α).

Case E2: According to Lemma 7.2(b), when c < 0, λ1 < 0, λ2 < 0, and λ3 > 0:

u(ξ) = c1 e^{λ1ξ} + c2 e^{λ2ξ} + a_2,   ξ > 0,
u(ξ) = c3 e^{Aξ},   ξ < 0.

Case E3: According to Lemma 7.3(a), when c > 0, λ1 < 0, λ2 = p + iq, and λ3 = p − iq with p > 0:

u(ξ) = c1 e^{λ1ξ} + a_2,   ξ > 0,
u(ξ) = c2 e^{Aξ} + c3 e^{ξ/c},   ξ < 0.

Case E4: According to Lemma 7.3(b), when c < 0, λ1 > 0, λ2 = p + iq, and λ3 = p − iq with p < 0:

u(ξ) = c1 e^{pξ} cos(qξ) + c2 e^{pξ} sin(qξ) + a_2,   ξ > 0,
u(ξ) = c3 e^{Aξ},   ξ < 0.

We distinguish the four cases by using the discriminant ∆ of the characteristic polynomial (3.14). The explicit form of ∆ is given by (7.3) in Lemma 7.4. When ∆ > 0, (3.14) has three real roots. When ∆ < 0, two roots of (3.14) are complex conjugates and one root is real. Fig. 3.1B shows the AUTO plot of the discriminant ∆ for various values of α and h. In all four cases, the dimension of the unstable (stable) manifold of the constant state u(ξ) = 0 plus the dimension of the stable (unstable) manifold of the constant state u(ξ) = a_2 is always three. If the unstable manifold of u(ξ) = 0 is one-dimensional, then the stable manifold of the constant state u(ξ) = a_2 has dimension two, as in Cases E2 and E4. In Cases E1 and E3, the unstable manifold of u(ξ) = 0 has dimension two, and the stable manifold of u(ξ) = a_2 has dimension one. Cases E1 to E3 give monotonic traveling fronts. The front in Case E4 is not monotonic.
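A small numerical sketch of this classification follows: it computes the roots of the cubic (3.14) for given parameters and reports whether all roots are real (Cases E1/E2) or a complex pair occurs (Cases E3/E4). The parameter values passed in are illustrative, not results from the paper.

```python
# Sketch: classify the roots of the cubic characteristic polynomial (3.14),
# f(lambda) = c*l^3 - l^2 - A^2*c*l + A^2*(1 - alpha).
import numpy as np

def classify_cubic(c, A, alpha):
    coeffs = [c, -1.0, -A**2 * c, A**2 * (1.0 - alpha)]
    roots = np.roots(coeffs)
    n_real = int(np.sum(np.abs(roots.imag) < 1e-10))
    # three real roots -> Cases E1/E2; one real + complex pair -> Cases E3/E4
    label = "all real" if n_real == 3 else "one real, two complex"
    return roots, label

print(classify_cubic(c=0.3, A=1.0, alpha=0.2))
print(classify_cubic(c=-0.3, A=1.0, alpha=0.6))
```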

Fig. 3.1. A: Each curve is the traveling velocity c vs. the gain α. Different curves are for different values of the threshold h. The grey part of each curve represents Cases E3 and E4, when the characteristic polynomial (3.14) has complex roots. The black part of each curve is for Cases E1 and E2, when the characteristic polynomial (3.14) has all real roots. B: Each curve plots the discriminant ∆ of the characteristic polynomial (3.14) for various α. Curves 1 to 5 have threshold values decreasing from 0.7 to 0.3. The black part is where ∆ > 0, and grey shows the negative part of ∆. In both A and B, we set the parameter in the coupling function to A = 1.

We apply the explicit forms of the front solution to the following set of matching conditions (3.15)–(3.17) at ξ = 0, and combine them with the three equations that λ1, λ2, λ3 satisfy, to obtain the system

u(0⁺) − u(0⁻) = 0,    (3.15)
u′(0⁺) − u′(0⁻) = 0,    (3.16)
u″(0⁺) − u″(0⁻) = 0,    (3.17)
λ1 + λ2 + λ3 = 1/c,    (3.18)
λ1λ2 + λ2λ3 + λ1λ3 = −A²,    (3.19)
λ1λ2λ3 = −A²(1 − α)/c.    (3.20)

The derivation of the matching conditions (3.15)–(3.17) across the threshold point at ξ = 0 is straightforward. It is obvious that u(ξ) and u′(ξ) are continuous. When we rewrite (2.5) as (3.1), we can differentiate both sides of (3.1) with respect to ξ, using the Leibniz rule for integrals with variable limits, to reach (3.2). It is then obvious from (3.2) that u″(ξ) is continuous at ξ = 0.

The system of six equations (3.15)–(3.20) contains six unknowns: c, c1 (or c3), c2, λ1, λ2, and λ3. In Cases E2 and E4, c3 = h and c1 is unknown; in Cases E1 and E3, c1 = h − a_2, so c3 is unknown. We solve system (3.15)–(3.20) to obtain values for these six unknowns in Mathematica. Then we have the solution u(ξ) explicitly. For a fixed set of parameters A, h, and α, as long as we have one set of values of c, c1 (or c3), c2, λ1, λ2, and λ3, we can trace the traveling velocity as we vary one parameter. We use AUTO in XPPAUT to trace the stationary solution of system (3.15)–(3.20) to achieve this task. Fig. 3.1A shows the AUTO results on traveling speed c vs. the gain α. A is fixed for all the curves. We first fix the value of h and vary α to plot one curve. Then we change the value of h and vary α again to plot another curve. The threshold value h differs across the curves, as shown in the legends of Fig. 3.1A and Fig. 3.1B.
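Any general-purpose root finder can stand in for the Mathematica step described above. The sketch below sets up the Case E1 version of (3.15)–(3.20) and hands it to scipy; the parameter values and initial guess are illustrative assumptions, and a returned solution must still be checked against the sign conditions of Case E1 before it is accepted as a front.

```python
# Sketch of the algebraic system (3.15)-(3.20) for Case E1 (c > 0), solved
# with a generic root finder instead of Mathematica/AUTO.
import numpy as np
from scipy.optimize import fsolve

A, h, alpha = 1.0, 0.4, 0.2
a2 = (1.0 - alpha * h) / (1.0 - alpha)
c1 = h - a2                                    # known coefficient in Case E1

def residual(z):
    c, c2, c3, l1, l2, l3 = z
    return [
        c2 + c3 - h,                           # (3.15): u(0+) = u(0-) = h
        c1 * l1 - (A * c2 + c3 / c),           # (3.16): u'(0+) = u'(0-)
        c1 * l1**2 - (A**2 * c2 + c3 / c**2),  # (3.17): u''(0+) = u''(0-)
        l1 + l2 + l3 - 1.0 / c,                # (3.18)
        l1*l2 + l2*l3 + l1*l3 + A**2,          # (3.19)
        l1*l2*l3 + A**2 * (1.0 - alpha) / c,   # (3.20)
    ]

guess = [0.3, 0.2, 0.2, -1.0, 0.5, 1.5]        # illustrative starting point
print(fsolve(residual, guess))
```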

4. Traveling fronts in a lateral inhibition network. We continue to use the non-saturating piecewise linear gain function (2.3) with a jump at u = h (see Fig. 2.2). The coupling is now the 'Mexican Hat' function (2.2). The traveling front solution u(ξ) that connects 0 and a_2 = (1 − αh)/(1 − α) is defined the same way as in Section 2.2.

We first derive the ODE from the integro-differential equation (2.5). Then we verify that the solution of the derived ODE is also a solution of (2.5) with the 'Mexican Hat' coupling function. We study the existence of traveling fronts by solving the ODE along with a set of matching conditions at ξ = 0.

4.1. Derivation of the ODE and matching conditions. Again, we rewrite the integro-differential equation (2.5) as (3.1) and differentiate with respect to ξ twice to obtain (3.3). We use w′(0⁺) = 1 − aA = −w′(0⁻); then

−cu‴ = −u″ + 2(1 − aA)f(u) + ∫_{−∞}^{ξ} w″(ξ − η)f(u(η)) dη + ∫_{ξ}^{∞} w″(ξ − η)f(u(η)) dη.    (4.1)

It is obvious that, at ξ = 0, u‴ has a jump discontinuity that comes from the discontinuity of f(u) when it crosses the threshold. When we differentiate (4.1) with respect to ξ one more time, the Dirac delta function δ appears in u^(iv) due to the discontinuity in u‴(ξ). The jump in u‴(ξ) at ξ = 0 is given in Lemma 7.7 in Appendix 7.2. We consider both u^(iv) and u^(v) to be well defined, by using the Dirac delta function and its distributional derivative.

−cu^(iv) = −u‴ + 2(1 − aA) df(u)/dξ + w″(0⁺)f(u) + ∫_{−∞}^{ξ} w‴(ξ − η)f(u(η)) dη − w″(0⁻)f(u) + ∫_{ξ}^{∞} w‴(ξ − η)f(u(η)) dη,

−cu^(iv) = −u‴ + 2(1 − aA) df(u)/dξ + ∫_{−∞}^{ξ} w‴(ξ − η)f(u(η)) dη + ∫_{ξ}^{∞} w‴(ξ − η)f(u(η)) dη,    (4.2)

where

df(u)/dξ = αu′Θ(u − h) + (α(u − h) + 1)δ(ξ).    (4.3)

Θ is the Heaviside function defined in (2.4), and δ is the Dirac delta function. We differentiate (4.2) to obtain

−cu^(v) = −u^(iv) + 2(1 − aA) d²f(u)/dξ² + w‴(0⁺)f(u) + ∫_{−∞}^{ξ} w^(iv)(ξ − η)f(u(η)) dη − w‴(0⁻)f(u) + ∫_{ξ}^{∞} w^(iv)(ξ − η)f(u(η)) dη,

−cu^(v) = −u^(iv) + 2(1 − aA) d²f(u)/dξ² + 2(1 − a³A)f(u) + ∫_{−∞}^{∞} w^(iv)(ξ − η)f(u(η)) dη.    (4.4)

The integral term in (4.4) is

∫_{−∞}^{∞} w^(iv)(ξ − η)f(u(η)) dη = a⁴ ∫_{−∞}^{∞} A e^{−a|ξ−η|} f(u(η)) dη − ∫_{−∞}^{∞} e^{−|ξ−η|} f(u(η)) dη.    (4.5)

Replacing the two integrals on the right hand side of (4.5) by their expressions (7.6) and (7.7), given in Lemma 7.6 in Appendix 7.2,

∫_{−∞}^{∞} w^(iv)(ξ − η)f(u(η)) dη = −(a² + 1)cu‴ + (a² + 1)u″ + a²cu′ − a²u + 2a(A − a)f.    (4.6)

Using (4.6), (4.4) then gives the following fifth order ODE

cu^(v) − u^(iv) − (a² + 1)cu‴ + (a² + 1)u″ + a²cu′ − a²u = 2a(a − A)f + 2(1 − aA) d²f(u)/dξ²,    (4.7)

with

d²f(u)/dξ² = αu″Θ(u − h) + 2αu′δ(ξ) + (α(u − h) + 1)δ′(ξ),    (4.8)

where δ′ is the first derivative of the Dirac delta function, defined as the distributional limit lim_{ε→0} [δ(ξ + ε) − δ(ξ)]/ε.

u^(iv)(ξ) also has a jump discontinuity at ξ = 0. We provide a detailed derivation of the jump in Lemma 7.7 in Appendix 7.2. Lemma 7.7 shows only one way of deriving the jump conditions. One can also integrate equations (4.2) and (4.7) on (−ε, ε), a small neighborhood around ξ = 0, respectively, to obtain the jumps in u‴ and u^(iv) at ξ = 0.

We can write ODE (4.7) as the following fifth order ODEs on two separate intervals, along with a set of matching conditions:

cu^(v) − u^(iv) − (a² + 1)cu‴ + (a² + 1)u″ + a²cu′ − a²u = F(ξ),   ξ > 0,    (4.9)
cu^(v) − u^(iv) − (a² + 1)cu‴ + (a² + 1)u″ + a²cu′ − a²u = 0,   ξ < 0,    (4.10)

where F(ξ) = 2a(a − A)f − 2(1 − aA) d²f/dξ², and d²f/dξ² = αu″ (ξ > 0). At ξ = 0, u satisfies the following matching conditions:

u(0⁺) = h,    (4.11)
u(0⁻) = h,    (4.12)
u′(0⁺) − u′(0⁻) = 0,    (4.13)
u″(0⁺) − u″(0⁻) = 0,    (4.14)
u‴(0⁺) − u‴(0⁻) = (2/c)(aA − 1),    (4.15)
u^(iv)(0⁺) − u^(iv)(0⁻) = (2/c²)(aA − 1) + (2/c)(aA − 1) df/dξ|_{ξ=0⁺},    (4.16)

where f = f(u) is the gain function and df/dξ|_{ξ=0⁺} = αu′(0⁺).

The part of the front above the threshold h satisfies (4.9), and the part below the threshold satisfies (4.10). At ξ = 0, where u(ξ) crosses the threshold, the traveling front must satisfy conditions (4.11)–(4.16). We explained conditions (4.11)–(4.14) in Section 3.2. Conditions (4.15) and (4.16) are shown in Lemma 7.7 in Appendix 7.2.

We have the above set of matching conditions at ξ = 0 because we fixed the threshold point, where u crosses h, at ξ = 0 in all the calculations in this paper, although a traveling front does not necessarily cross the threshold at ξ = 0, due to translational invariance. One can see this from the derivation of the ODE and the matching conditions. The threshold point can be any finite value ξ* such that u(ξ*) = h. Then the matching conditions can be generalized to

u‴(ξ*) = (2/c)(aA − 1)Θ(u(ξ*) − h),
u^(iv)(ξ*) = (2/c)(aA − 1)Θ(u(ξ*) − h) + (2/c)(aA − 1) df/dξ|_{ξ=ξ*},

where Θ is the Heaviside function and df/dξ is defined as in (4.3), with the Dirac delta function δ(ξ − ξ*) in place of δ(ξ). We do not need to write out matching conditions for u, u′ and u″, because they are continuous everywhere. Their matching conditions are the same as (4.11)–(4.14), except that each '0' should be replaced by 'ξ*'.

4.2. Solution of ODE (4.7) satisfies the integro-differential equation (2.5). In this section, we show that the front solution u(ξ) of ODE (4.7) in the lateral inhibition network is also a solution of the integro-differential equation (2.5). We multiply both sides of ODE (4.7) by w = w(ξ − η), for fixed ξ ≠ η, to obtain a new ODE. The left hand side (LHS) and the right hand side (RHS) of the new ODE are the following:

LHS :≡ w(cu^(v) − u^(iv) − c(a² + 1)u‴ + (a² + 1)u″ + a²cu′ − a²u),    (4.17)
RHS :≡ 2a(a − A)wf(u) + 2(1 − aA)w d²f(u)/dη².    (4.18)

We will integrate the LHS and then the RHS of the new ODE to reach the integro-differential equation (2.5), with help from Lemmas 7.5, 7.8 and 7.9, given in Appendix 7.2.

Replacing wu′, wu″, wu‴, wu^(iv), and wu^(v) in the LHS using (7.10)–(7.14), given by Lemma 7.8, and reorganizing the terms, we have

LHS = c(wu^(iv) + w_ζu‴ + w_ζζu″ + w_ζζζu′ + w_ζζζζu)′ − (wu‴ + w_ζu″ + w_ζζu′ + w_ζζζu)′    (4.19)
 − c(a² + 1)(wu″ + w_ζu′ + w_ζζu)′ + (a² + 1)(wu′ + w_ζu)′ + a²c(wu)′
 + u(cw_ζζζζζ − w_ζζζζ − c(a² + 1)w_ζζζ + (a² + 1)w_ζζ + a²cw_ζ − a²w),

where ()′ = d()/dη, ζ = ξ − η, and w with n subscripts ζ represents the n-th order derivative of w with respect to ζ, in (4.19) and in the rest of the current subsection.

By Lemma 7.5 in Appendix 7.2, the last term in (4.19) is zero. We rewrite the LHS as

LHS = J_η(ξ, η) + K_η(ξ, η),

where

J_η(ξ, η) = c(w_ζζu″ + w_ζζζu′ + w_ζζζζu)′ − (w_ζu″ + w_ζζu′ + w_ζζζu)′ − c(a² + 1)(wu″ + w_ζu′ + w_ζζu)′ + (a² + 1)(wu′ + w_ζu)′ + a²c(wu)′,

K_η(ξ, η) = c(wu^(iv) + w_ζu‴)′ − (wu‴)′.

We integrate J_η(ξ, η) with respect to η on R,

∫_{−∞}^{∞} J_η(ξ, η) dη = ∫_{−∞}^{ξ} J_η(ξ, η) dη + ∫_{ξ}^{∞} J_η(ξ, η) dη
 = J(ξ, ξ⁻) − J(ξ, −∞) + J(ξ, ∞) − J(ξ, ξ⁺)
 = (w_ζζζ(0⁺) − w_ζζζ(0⁻))(cu′ − u) − (w_ζ(0⁺) − w_ζ(0⁻))u″ − (a² + 1)(w_ζ(0⁺) − w_ζ(0⁻))(cu′ − u).

Therefore,

∫_{−∞}^{∞} J_η(ξ, η) dη = −2ac(a − A)u′ + 2a(a − A)u − 2(1 − aA)u″.    (4.20)

We also integrate K_η(ξ, η) with respect to η on R,

∫_{−∞}^{∞} K_η(ξ, η) dη = ∫_{−∞}^{∞} (cwu^(iv) − wu‴)′ dη + ∫_{−∞}^{∞} c(w_ζu‴)′ dη.

Using (7.15) and (7.16) in Lemma 7.9,

∫_{−∞}^{∞} K_η(ξ, η) dη = 2(1 − aA)u″ + 2(1 − aA)(A − 1)(df/dη|_{ξ⁺} − df/dη|_{ξ⁻})    (4.21)
 − 2(1 − aA)²(f|_{ξ⁺} + f|_{ξ⁻}) − 2(1 − aA) ∫_{−∞}^{∞} w″f(u) dη,

where f|_{ξ⁺} = f(u(ξ⁺)), f|_{ξ⁻} = f(u(ξ⁻)), df/dη|_{ξ⁺} = df(u)/dη (ξ⁺), and df/dη|_{ξ⁻} = df(u)/dη (ξ⁻).

Combining (4.20) and (4.21), we have

∫_{−∞}^{∞} LHS dη = −2ac(a − A)u′ + 2a(a − A)u + 2(1 − aA)(A − 1)(df/dη|_{ξ⁺} − df/dη|_{ξ⁻})    (4.22)
 − 2(1 − aA)²(f|_{ξ⁺} + f|_{ξ⁻}) − 2(1 − aA) ∫_{−∞}^{∞} w_ζζ f(u) dη.

Integrating the RHS,

∫_{−∞}^{∞} RHS dη = 2a(a − A) ∫_{−∞}^{∞} wf(u) dη − 2(1 − aA) ∫_{−∞}^{∞} w d²f(u)/dη² dη.    (4.23)

We apply integration by parts twice to the integral in the second term of the right hand side of (4.23), and use dw/dη = −w_ζ, d(w_ζ)/dη = −w_ζζ:

∫_{−∞}^{∞} w d²f(u)/dη² dη = ∫_{−∞}^{ξ} w d²f(u)/dη² dη + ∫_{ξ}^{∞} w d²f(u)/dη² dη
 = w (df(u)/dη)|_{−∞}^{ξ⁻} + ∫_{−∞}^{ξ} w_ζ (df(u)/dη) dη + w (df(u)/dη)|_{ξ⁺}^{∞} + ∫_{ξ}^{∞} w_ζ (df(u)/dη) dη
 = w(0)(df/dη|_{ξ⁻} − df/dη|_{ξ⁺}) + w_ζ f(u)|_{−∞}^{ξ⁻} + w_ζ f(u)|_{ξ⁺}^{∞} + ∫_{−∞}^{∞} w_ζζ f(u) dη
 = −2(A − 1)(df/dη|_{ξ⁺} − df/dη|_{ξ⁻}) + 2(1 − aA)(f|_{ξ⁺} + f|_{ξ⁻}) + ∫_{−∞}^{∞} w_ζζ f(u) dη,

so that

∫_{−∞}^{∞} RHS dη = 2a(a − A) ∫_{−∞}^{∞} wf(u) dη + 2(1 − aA)(A − 1)(df/dη|_{ξ⁺} − df/dη|_{ξ⁻})    (4.24)
 − 2(1 − aA)²(f|_{ξ⁺} + f|_{ξ⁻}) − 2(1 − aA) ∫_{−∞}^{∞} w_ζζ f(u) dη.

We equate ∫_{−∞}^{∞} LHS dη with ∫_{−∞}^{∞} RHS dη, given in (4.22) and (4.24), and cancel the identical terms that appear on both sides to obtain

−c u′(ξ) = −u(ξ) + ∫_{−∞}^{∞} w(ξ − η) f(u(η)) dη,

which is the integro-differential equation (2.5).
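As a practical cross-check of this equivalence, a constructed front can be substituted back into (2.5) numerically. The sketch below measures the sup-norm residual of (2.5) on a grid; the demo profile is only a placeholder (it is not a front computed from the ODEs of Sections 3–4), so its residual is not expected to vanish.

```python
# Sketch of a numerical consistency check against equation (2.5).
import numpy as np

def front_residual(xi, u, c, w, f):
    dxi = xi[1] - xi[0]
    du = np.gradient(u, dxi)                       # centered-difference u'
    conv = dxi * np.convolve(f(u), w(xi), mode="same")
    return np.max(np.abs(-c * du + u - conv))      # sup-norm residual of (2.5)

if __name__ == "__main__":
    xi = np.linspace(-40, 40, 4001)
    w = lambda x: 2.25 * np.exp(-1.5 * np.abs(x)) - np.exp(-np.abs(x))  # (2.2), a = 1.5
    f = lambda u: (u > 0.6).astype(float)                                # alpha = 0
    u_demo = 0.5 * (1.0 + np.tanh(xi))             # placeholder profile, not a true front
    print(front_residual(xi, u_demo, c=0.1, w=w, f=f))
```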

4.3. Characteristic values and solution forms for (4.9) and (4.10). The characteristic values for (4.10) are λ = 1/c, ±1, ±a. To have a solution that converges to 0 at −∞, the solution of (4.10) must be in one of the following forms:

u(ξ) = c1 e^{ξ/c} + c2 e^{aξ} + c3 e^{ξ},   ξ < 0, c > 0,    (4.25)
u(ξ) = c4 e^{aξ} + c5 e^{ξ},   ξ < 0, c < 0.    (4.26)

The solution form of (4.9) depends on the roots of the following characteristic equation, a polynomial of degree five:

cλ⁵ − λ⁴ − c(a² + 1)λ³ + (a² + 1 + 2α(1 − aA))λ² + ca²λ − (a² + 2αa(a − A)) = 0.    (4.27)

The characteristic values λ1, λ2, λ3, λ4, and λ5 of (4.27) always satisfy the following set of equations:

P1(λ1, λ2, λ3, λ4, λ5) = 1/c,    (4.28)
P2(λ1, λ2, λ3, λ4, λ5) = −(a² + 1),    (4.29)
P3(λ1, λ2, λ3, λ4, λ5) = [2α(aA − 1) − a² − 1]/c,    (4.30)
P4(λ1, λ2, λ3, λ4, λ5) = a²,    (4.31)
P5(λ1, λ2, λ3, λ4, λ5) = [a² + 2αa(a − A)]/c,    (4.32)

where

P1(λ1, λ2, λ3, λ4, λ5) = λ1 + λ2 + λ3 + λ4 + λ5,
P2(λ1, λ2, λ3, λ4, λ5) = λ1(λ2 + λ3 + λ4 + λ5) + λ2(λ3 + λ4 + λ5) + λ3(λ4 + λ5) + λ4λ5,
P3(λ1, λ2, λ3, λ4, λ5) = λ1λ2(λ3 + λ4 + λ5) + (λ1λ3 + λ2λ3)(λ4 + λ5) + λ4λ5(λ1 + λ2 + λ3),
P4(λ1, λ2, λ3, λ4, λ5) = λ1λ2λ3(λ4 + λ5) + λ1λ4λ5(λ2 + λ3) + λ2λ3λ4λ5,
P5(λ1, λ2, λ3, λ4, λ5) = λ1λ2λ3λ4λ5.

The roots of polynomial (4.27) depend on the front speed c. We cannot solve (4.27) without knowing c, and there are no explicit expressions for the roots of a fifth degree polynomial. However, we can determine the structure of the characteristic values using the discriminant ∆ of polynomial (4.27). When ∆ is positive, there could be either five real characteristic values or four complex and one real characteristic value. When ∆ < 0, three of the roots are real and two are complex. When ∆ = 0, repeated roots occur. See Lemma 7.10. We list all the relevant scenarios in the following Cases L1 to L7.

Positive velocity c > 0: u(ξ) = c1 e^{ξ/c} + c2 e^{aξ} + c3 e^{ξ} on ξ ∈ (−∞, 0). On ξ ∈ (0, ∞), u(ξ) has the following forms:

Case L1: All five λ real, with λj < 0 (j = 1, 2) and λj > 0 (j = 3, 4, 5):

u(ξ) = d1 e^{λ1ξ} + d2 e^{λ2ξ} + d0,   ξ > 0,

where d0 = 2(1 − αh)(A − a)/(a + 2α(a − A)) is a constant. Notice that d0 = a_2 = (1 − αh)/(1 − α) after applying A = 1.5a.

Case L2: Three real λ with λ1,2 < 0, λ3 > 0, and λ4,5 = l ± ir (l > 0); then

u(ξ) = d1 e^{λ1ξ} + d2 e^{λ2ξ} + d0,   ξ > 0.

Case L3: One real positive λ and four complex λ, such that λ1,2 = p ± iq, λ3 > 0, and λ4,5 = l ± ir with p < 0, l > 0; then

u(ξ) = d1 e^{pξ} cos(qξ) + d2 e^{pξ} sin(qξ) + d0,   ξ > 0.

Negative velocity c < 0: u(ξ) = c1 e^{aξ} + c2 e^{ξ} on ξ ∈ (−∞, 0). On ξ ∈ (0, ∞), u(ξ) has the following forms:

Case L4: Five real λ such that λ1,2,3 < 0 and λ4,5 > 0; then

u(ξ) = d1 e^{λ1ξ} + d2 e^{λ2ξ} + d3 e^{λ3ξ} + d0,   ξ > 0.

Case L5: Three real and two complex λ with λ1,2 = l ± ir (l < 0), λ3 < 0, and λ4,5 > 0; then

u(ξ) = d1 e^{lξ} cos(rξ) + d2 e^{lξ} sin(rξ) + d3 e^{λ3ξ} + d0,   ξ > 0.

Case L6: One real and four complex λ such that λ1,2 = p ± iq, λ3 < 0, and λ4,5 = l ± ir with p < 0 and l > 0; then

u(ξ) = d1 e^{pξ} cos(qξ) + d2 e^{pξ} sin(qξ) + d3 e^{λ3ξ} + d0,   ξ > 0.

Case L7: When ∆ = 0, there are repeated roots of (4.27). This case only happens at a few isolated values of α, and it may occur for both c > 0 and c < 0.

• If c > 0, ∆ = 0 is the transition between either Case L1 and L2, or Case L2 and L3. The repeated roots are λ1 = λ2 < 0. The solution form is

u(ξ) = d1 e^{λ1ξ} + d2 ξ e^{λ1ξ} + d0,   ξ > 0,
u(ξ) = c1 e^{ξ/c} + c2 e^{aξ} + c3 e^{ξ},   ξ < 0.

• If c < 0, ∆ = 0 is the transition between either Case L4 and Case L5, or Case L5 and Case L6. The repeated roots are real, λ4 = λ5 > 0. The solution form is the same as in Case L5.

When c = 0, there is a stationary front. We use the following ODEs to calculate stationary fronts:

−u^(iv) + (a² + 1)u″ − a²u = F(ξ),   ξ > 0,
−u^(iv) + (a² + 1)u″ − a²u = 0,   ξ < 0.

For the range of α and h we consider, such a stationary front exists for some α and h values. See Fig. 4.2A, in which we trace the traveling speed using AUTO. For instance, the plot of c crosses zero for curve 4 when h = 0.65, α = 0.715.

We choose the appropriate form for u(ξ) on ξ ∈ (−∞, 0) and ξ ∈ (0, ∞). Then we apply the set of matching conditions (4.11)–(4.16) across ξ = 0 and combine with (4.28)–(4.32) to obtain a system of eleven algebraic equations. We solve the system for c, λ1,...,5, the unknown coefficients d's in u(ξ) for ξ > 0, and the unknown coefficients c's in u(ξ) for ξ < 0.
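The case selection above only requires the root structure of (4.27) at a given speed, which is easy to inspect numerically. The sketch below counts the real roots of (4.27) for illustrative values of c and α (with a = 1.5, A = 2.25 as used in the figures); the values passed in are assumptions for demonstration, not results from the paper.

```python
# Sketch: compute the roots of the quintic characteristic polynomial (4.27)
# and count how many are real, which distinguishes Cases L1-L7.
import numpy as np

def quintic_roots(c, alpha, a=1.5, A=2.25):
    coeffs = [c, -1.0, -c*(a**2 + 1), a**2 + 1 + 2*alpha*(1 - a*A),
              c*a**2, -(a**2 + 2*alpha*a*(a - A))]
    roots = np.roots(coeffs)
    n_real = int(np.sum(np.abs(roots.imag) < 1e-9))
    return roots, n_real

roots, n_real = quintic_roots(c=0.1, alpha=0.05)
print(n_real, "real roots;", roots)
```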

Example. Applying the matching conditions to Case L3, we have the following system:

d0 + d1 = h,
c1 + c2 + c3 = h,
d1 p + d2 q = a c1 + c2 + c3/c,
d1 p² + 2 d2 p q − d1 q² = a² c1 + c2 + c3/c²,
d1 p³ + 3 d2 p² q − 3 d1 p q² − d2 q³ = a³ c1 + c2 + c3/c³ + 2(aA − 1)/c,
d1 p⁴ + 4 d2 p³ q − 6 d1 p² q² − 4 d2 p q³ + d1 q⁴ = a⁴ c1 + c2 + c3/c⁴ + D,
2p + λ3 + 2l = 1/c,
(p² + q²) + 2pλ3 + 4pl + 2λ3 l + (l² + r²) = −(a² + 1),
(p² + q²)λ3 + 2(p² + q² + 2pλ3)l + (2p + λ3)(l² + r²) = [2α(aA − 1) − a² − 1]/c,
2(p² + q²)λ3 l + (p² + q² + 2pλ3)(l² + r²) = a²,
(p² + q²)λ3(l² + r²) = [a² + 2αa(a − A)]/c,    (4.33)

where D = (2/c²)(aA − 1) + (2/c)(aA − 1) df/dξ|_{ξ=0⁺} = 2(aA − 1)/c² + 2(aA − 1)α(a c1 + c2 + c3/c)/c. In this system of 11 equations, the 11 unknowns are the front speed c, the coefficients d1, d2, c1, c2, c3, and the characteristic values p, q, λ3, l, and r. For fixed threshold h, A, and a, we use Mathematica to solve this system of algebraic equations. We then have the explicit forms of u(ξ) for both ξ < 0 and ξ > 0, which give the traveling front solutions (examples shown in Fig. 4.1A–G). See Lemma 7.11 for the systems of equations for Cases L1, L2 and Cases L4–L7. We previously used a similar approach to construct the standing pulse solutions of the neural field equation [28, 29]. We can also trace solutions of system (4.33) using AUTO, which gives us a continuous curve of traveling speed c and ∆ vs. α (see Fig. 4.1H). In Fig. 4.1A–G, we show several examples of the traveling fronts with various speeds for different values of α and fixed h = 0.6. Fig. 4.1I is the AUTO plot of ∆ vs. α. Different color segments show various forms of the roots of the characteristic polynomial (4.27). For example, the colored segment between (α1, α2) is Case L1, when (4.27) has all five real roots with positive speed c. For the segment between (α0, α4), (4.27) has four complex roots and one real root with c < 0, which is Case L6.

Fig. 4.1. In all the figures a = 1.5, A = 2.25, h = 0.6. A–G are examples of traveling front solutions. A: α = −0.5; B: α = 0.04692; C: α = 0.08; D: α = 0.5; E: α = 0.58; F: α = 0.89; G: α = 0.89771974. The traveling front shown in G touches the threshold 0.6 at ξ = 9.3336. For α > 0.89771974, there is no valid traveling front solution because part of u(ξ) for ξ > 0 goes below the threshold value h = 0.6. The horizontal dashed lines in A–G and M show the threshold value h = 0.6. H: front speed c vs. α. The corresponding α values and speeds c are marked on H for the fronts in the order A, B, C, D, E, F (all marked by solid dots) and G (marked by a cross) from left to right. The corresponding α and ∆ are marked in the same fashion on I. I: the curve shows the discriminant ∆ vs. α. In H–M, α0 = 0.559339, marked by the vertical dash-dot line in H, I and M, is the transition point from positive speed to negative speed, where a stationary front exists. On (−1, α0), c > 0; on (α0, α4), c < 0. α1 = −0.66048, α2 = 0.06986, α3 = 0.08522, α4 = 0.89771974. On (−1, α1) and (α2, α3), ∆ < 0 and c > 0, and Case L2 occurs; ∆ > 0 and c > 0 on α ∈ (α1, α2), where Case L1 occurs; ∆ > 0 and c > 0 on α ∈ (α3, α0), where Case L3 occurs; ∆ > 0 and c < 0 on α ∈ (α0, α4), where Case L4 occurs. The vertical dotted line in H and I is where α = 0. The horizontal dotted line in H–L shows either speed c = 0 or discriminant ∆ = 0. J is a blow-up view of the right box on I; it shows ∆ moving toward 0, and G (marked by a cross) is the point where the traveling front ceases to exist. K is an extended view of ∆ when α ∈ (−1, −0.45); the traveling front continues to exist on this interval. L is a blow-up view of the left box on I; it shows ∆ < 0 when α ∈ (α2, α3). Finally, M gives the smallest value ub of the traveling front u(ξ) for ξ > 0 when α ∈ (α0, α4). Notice that u(ξ) oscillates and then converges to d0 (see the example of such a traveling front in G). At α = 0.89771974, ub reaches the threshold h = 0.6. Traveling fronts cease to exist thereafter.

−1 −0.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8 1

−0.6

−0.4

−0.2

0

0.2

0.4

0.6

0.8

α

c

11

11: h=0.2

10

10: h=0.3

9

9: h=0.4

8

8: h=0.45

7

7: h=0.5

6

6: h=0.55

5

5: h=0.6

4

4: h=0.65

3

3: h=0.7

2

2: h=0.75

1

1: h=0.8

−1 −0.8−0.6−0.4−0.2 0 0.2 0.4 0.6 0.8 1−2000

0

2000

4000

6000

8000

α

Δ

←11

11: h=0.2

10→

10: h=0.3

←9

9: h=0.4

←8

8: h=0.45

7→

7: h=0.5

6→

6: h=0.55

5

5: h=0.6

4

4: h=0.65

3

3: h=0.7

2

2: h=0.75

←1

1: h=0.8

1/a

A B

−1/a

Fig. 4.2. A: Plots of traveling speed c vs. the gain α. The threshold value for curve 1 to curve11 decreases from 0.8 to 0.2. We only consider the traveling front solutions in the range α ∈ (−1, 1),and h ∈ (0.15, 0.85). B: Plots of the discriminant ∆ of the characteristic polynomial (4.27). We use∆ to determine the structure of the characteristic values (Case L1 to L7 in section 4.3). In both Aand B, black part on each curve, the characteristic polynomial (4.27) has all five real roots; Greypart on each curve, (4.27) has either two or four complex roots.

In Fig. 4.2, we trace the traveling speed for various threshold and gain values.In Fig. 4.2A, each curve with a fixed threshold value depicts the traveling velocity cas the gain α varies. The threshold value decreases from 0.8 to 0.2 in the order fromcurve 1 to curve 11. The eleven curves in Fig. 4.2B show the discriminant of thecharacteristic polynomial (4.27) corresponding to the velocity curves in 4.2A.

4.4. Traveling fronts when the gain α = 0.We look at the traveling front solution of the lateral inhibition network with α = 0

carefully in this section because we will analyze the linear stability of such travelingfronts in Section 5. When the gain is zero, traveling fronts satisfy the same ODE

u(v) − u(iv) − (a2 + 1)cu′′′ + (a2 + 1)u′′ + a2cu′ − a2u = 0,(4.34)

on both ξ < 0 and ξ > 0.The characteristic values of ODE (4.34) are 1

c , ±1, and ±a. If the traveling ve-locity c = − 1

a or 1a , ODE (4.34) has repeated characteristic values with dependent

characteristic vectors. To simplify the situation in stability analysis, we rule out thedegenerate case by adding restriction − 1

a < c < − 1a . This condition can be trans-

ferred into the condition we impose on the threshold value in the gain function (2.3)in Section 2.1. We use AUTO on XPPUT to find the appropriate range of thresholdh that can eliminate the degenerate case. We first follow the steady state of alge-braic equation system (7.22) (Case L1 with all real λ, and detailed equations given inLemma 7.11 in Appendix 7.2) as we vary the values for h. Starting from the followingsolution of (7.22) with h = 0.6,

(c, c1, c2, c3, d1, d2, λ1, λ2, λ3, λ4, λ5) =

(0.094627, 1.748132,−1.104518,−0.043614,−1.313553, 0.913553,−1.5,−1, 10.567764, 1, 1.5),

Page 41: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

19

0 0.15 0.3 0.5 0.7 0.85 1−1

−0.6667

−0.3

0

0.3

0.6667

1

h

c

1/a

−1/a

Fig. 4.3. A = 2.25, a = 1.5, α = 0. AUTO plot of c vs. h. It provides the range of h in whichODE (4.34) has no repeated characteristic values.

we trace the velocity as we increase h from 0.6 until h = 0.85 at which point c hitsthe value 1

a . We then decrease h until the velocity hits 0. For c < 0, we must usethe algebraic system (7.23) given in Lemma 7.11 in Appendix 7.2 and start from asolution of (7.23) to trace the velocity. As c reach − 1

a , h = 0.15. Then we set therange for threshold as (0.15, 0.85) to eliminate degenerate cases. The plot of c vs h isshown in Fig. 4.3.

5. Stability of traveling fronts with the gain α = 0.We perturb the traveling front u0(ξ) by ευ(ξ) with small ε > 0 and υ(ξ) =

eγξφ(ξ), where γ ∈ C. After linearizing around the front solution u0(ξ), we derive theeigenvalue equation:

γφ(ξ) = c∂φ(ξ)∂ξ

− φ(ξ) +w(ξ)φ(0)u′0(0)

+ α

∫ ∞−∞

w(ξ − η)Θ(u0(η)− h)φ(η)dη,(5.1)

where c is the traveling speed, u′0(0) is the derivative of the front solution at ξ = 0, andΘ is the heaviside function. φ belongs to the set BC1(R,C) = φ: both φ and φ′ arebounded, continuous and complex-valued integrable functions defined on (−∞,+∞).This derivation is independent of the choice of coupling function w(x). However, it isspecific for piecewise linear gain function as describes in Section 2.1.

In this paper, we focus on stability analysis of the traveling fronts when the gainα = 0. The essential spectrum is the straight line −1 − ics, where s ∈ R [44, 56].This line lies entirely on the left half of the complex plane. We can exclude any insta-bility from the essential spectrum. The point spectrum captures all of the stabilityproperties. We first extend the integral Evans function approach from Zhang’s paper[56, 57] to a lateral inhibition network in Section 5.1. One important observation isthat the integral Evans function approach cannot be applied to stability analysis oftraveling fronts with non-zero gain (α 6= 0.) We will only investigate the stability oftraveling fronts for α 6= 0 numerically.

Page 42: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

20 Yixin Guo

When α = 0, the eigenvalue problem (5.1) becomes

γφ(ξ) = c∂φ(ξ)∂ξ

− φ(ξ) +w(ξ)φ(0)u′0(0)

(5.2)

We define a linear operator L: BC1(R,C)→ BC0(R,C) such as

Lφ(ξ) = c∂φ(ξ)∂ξ

− φ(ξ) +w(ξ)φ(0)u′0(0)

,

where BC0(R,C) = φ: φ is bounded, continuous and complex-valued integrablefunctions defined on (−∞,+∞). We write L as L = L1 +N , where

L1φ(ξ) = c∂φ(ξ)∂ξ

− φ(ξ), and Nφ(ξ) =w(ξ)φ(0)u′0(0)

Due to the compactness of operator N , the continuous spectrum of L is the same asthat of L1. The essential spectrum of operator L1 is the straight line −1− ics, wheres ∈ R, by an easy calculation using Fourier Transform [44, 56].

5.1. Construction of the integral Evans function.Zhang gave detailed derivation of an integral form of Evans function for zero gain

(Heaviside gain function) in [56]. His derivation is suitable for traveling fronts of theneural field equation with two conditions. One is that the coupling w(x) must bean even, nonnegative and piecewise smooth function because he only considered anexcitatory network. Second condition is that the traveling velocity is positive. Sinceour lateral inhibition coupling is not nonnegative, and we also consider traveling frontswith negative speed, we cannot directly use his results to compute the Evans function.We follow his approach to generate the integral Evans function and more constraintsthat result from the lateral inhibition coupling.

Lemma 5.1. (a). The zero eigenvalue of linear operator L with c > 0 satisfiesthe following

E(γ) = 1− 1cu′0(0)

∫ ∞0

w(s)e−1+γc sds = 0;(5.3)

its non-zero eigenvalues, if any, satisfy both (5.3) and the following equation

(1 + γ

c)2 =

a(A− a)aA− 1

.(5.4)

(b). The zero eigenvalue of linear operator L with c < 0 satisfies

E(γ) = 1 +1

cu′0(0)

∫ 0

−∞w(s)e−

1+γc sds = 0,(5.5)

and non-zero eigenvalues of L, if any, satisfy both (5.4) and (5.5). E(γ) is called theEvans function.

Proof is given in the Appendix.Corollary 5.2. (a). The traveling front with c > 0 satisfies the following

u′0(0) =A(c+ 1)− (ac+ 1)

(ac+ 1)(c+ 1).(5.6)

Page 43: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

21

where u′0(0) is the derivative of the traveling front at ξ = 0.(b). When c < 0,

u′0(0) =(ac− 1)−A(c− 1)

(ac− 1)(c− 1).(5.7)

Proof is given in the Appendix.Remark: From Lemma 5.1 and Corollary 5.2, u′0(0) cannot have 0 in its de-

nominator or in the form of 00 . Both situations never occur since we restrict our

traveling velocity −1 < − 1a < c < 1

a < 1 as explained in Section 4.4. u′0(0) cannotbe zero either from the Evans function. According to (5.6), u′0(0) = 0, only whenc = 1−A

A−a = 10.5a − 3, (using A = 1.5a), a quantity less than −1, which conflicts c > 0.

We have the similar confliction from (5.7). Therefore u′0(0) is never zero.

Corollary 5.3. The Evans function E(γ) 6= 0, for non-zero γ with Re(γ) ≥ 0.Moreover, zero is the only eigenvalue of L for both c > 0 and c < 0.

Proof is given in the Appendix.Lemma 5.4. The eigenvalue γ = 0 is simple.We omit the proof. Interested readers can refer to the proof of Lemma 2.6 by

Zhang in [56].We extend Zhang’s approach using the integral Evans function to the class of

lateral inhibition type of coupling. We show that there is no eigenvalue γ with positivereal part for traveling fronts existing in the parameter ranges we consider. Therefore,the traveling fronts in a lateral inhibition network with zero gain are linearly stable.

5.2. Numerical investigation of the neural field equation.To further support the existence of stable traveling fronts with zero gain, we

numerically integrate the following neural field equation (5.8) using the Euler Method.

∂u(x, t)∂t

= −u(x, t) +∫ ∞−∞

w(x− y)f(u(y, t))dy,(5.8)

where w is the lateral inhibition coupling and f(u) is the piecewise linear gain function.In order to numerically simulate (5.8), we must choose a finite spatial domain for

x large enough so that the boundaries does not have significant effect on the travelingfront solution within the time steps we integrate. In all the simulations, we choose thespatial domain as [1, 100] and discretize it using dx = 0.03. We use 500 or 800 Eulersteps for the time variable t. The initial condition at t = 0 to start the integration isthe following for all the examples shown in this section,

u(x, 0) =

−0.3 if x ∈ [0, 54),1.5 if x ∈ [54, 100].

(5.9)

We first numerically simulated (5.8) with α = 0 since we already know that suchtraveling fronts of (5.8) are stable from Section 5.1. After the initial square front givenin 5.9 evolves into the stable traveling front, we compare the profile of the travelingfront obtained from simulating (5.8) at t = 500 with the one from solving the ODEsusing the same values of parameter A, a, and h. Examples are given in Fig. 5.1 and5.2. The left figure in Fig. 5.1 is the top view of traveling front u(x, t) with negativevelocity (the front travels from higher value of x to lower value of x in the spatial

Page 44: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

22 Yixin Guo

Fig. 5.1. A = 2.25, a = 1.5, h = 0.2, α = 0, traveling velocity c = −0.468374946. Left: topview of the traveling front. x is the spatial variable, and t is the time variable. Color representsthe height of the front. The front crosses the threshold in the slanted narrow region in the middle.The front travels from right to left in spatial domain (higher x value to lower x value). Right:Green solid line is the profile of the traveling front u(x, 500) from simulating the integro-differentialequation (5.8). u(x, 500) is shifted in spatial variable x so that the profile crosses the threshold at0. Red dash-dotted line is the traveling front calculated using ODE (4.9) and (4.9) with matchingconditions (4.11)–(4.16). The horizontal dash line is the threshold level 0.2.

Fig. 5.2. A = 2.25, a = 1.5, h = 0.75, α = 0, traveling velocity c = 0.333333. Left: top view ofthe traveling front. x is the spatial variable, and t is the time variable. Color represents the heightof the front. The front crosses the threshold in the slanted narrow region in the middle. The fronttravels from left to right in spatial domain (lower x value to higher x value). Right: Green solidline is the profile of the traveling front u(x, 500) from simulating the integro-differential equation(5.8). u(x, 500) is shifted in spatial variable x so that the profile crosses the threshold at 0. Reddash-dotted line is the traveling front calculated using ODE (4.9) and (4.9) with matching conditions(4.11)–(4.16). The horizontal dash line is the threshold level 0.75.

domain). In the right figure, we plot the profile of the front, u(x, 500), using greensolid line. We shift the front profile to make the threshold point to be at x = 0.We also plot the traveling front calculated using ODEs (4.9) and (4.10) with theirmatching conditions in red dash-dotted line. The red front lies on top of the greenone. They agree to each other very well. In Fig. 5.2, we show a traveling front withpositive velocity (the front travels from lower value of x to higher value of x in the

Page 45: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

23

spatial domain). All parameters and traveling velocity are given in the caption ofeach figure.

Fig. 5.3. A = 2.25, a = 1.5, h = 0.2, α = 0.8, traveling velocity c = −0.33364844. Left: topview of the traveling front. x is the spatial variable, and t is the time variable. Color represents theheight of the front. The front crosses the threshold around the middle line. The front travels fromleft to right in spatial domain (lower x value to higher x value). Right: Green solid line is the profileof the traveling front u(x, 800) from simulating the integro-differential equation (5.8). u(x, 800) isshifted in spatial variable x so that the profile crosses the threshold at 0. Red dash-dotted line is thetraveling front calculated using ODE (4.9) and (4.9) with matching conditions (4.11)–(4.16). Thehorizontal dash line is the threshold level 0.2. Remark: the initial state (IS) needs more time toconverge to the stable front since the IS deviates too much from the stable state.

Fig. 5.4. A = 2.25, a = 1.5, h = 0.8, α = −0.3, traveling velocity c = 0.609948. Left: top viewof the traveling front. x is the spatial variable, and t is the time variable. Color represents the heightof the front. The front crosses the threshold around the middle line. The front travels from rightto left in spatial domain (higher x value to lower x value). Right: Green solid line is the profileof the traveling front u(x, 500) from simulating the integro-differential equation (5.8). u(x, 500) isshifted in spatial variable x so that the profile crosses the threshold at 0. Red dash-dotted line is thetraveling front calculated using ODE (4.9) and (4.9) with matching conditions (4.11)–(4.16). Thehorizontal dash line is the threshold level 0.8.

We further carry out numerical investigation of the stability in the case α 6= 0.In Fig. 5.3 (α = −0.3) and 5.4 (α = 0.8), the left figures are simulation resultsfrom equation (5.8). The right figures show the comparison between the fronts from

Page 46: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

24 Yixin Guo

the integro-differential equation and the ODEs with its matching conditions. Furtherdetails are provided in the captions of the figures.

As the derivation of the integral Evans function can only be applied to the Heavi-side gain, we probe linear stability of traveling fronts of non-zero gain using numericalsimulations. We conjecture that such traveling fronts are linearly stable within theparameter ranges we consider. Although further analysis should be carried out to val-idate our conjecture, we think such analysis is beyond the scope of the current paper.In a forthcoming paper, we will handle the linear stability analysis with non-zero gainusing an approach completely different from Zhang’s.

6. Discussion. In this paper, we study the traveling fronts of a populationneural network model (1.1) with a lateral inhibition coupling and piecewise lineargain function. In the first half of this paper, we show the existence of traveling frontsolutions of the integro-differential equation (2.5) for both zero and non-zero gains.We use an equivalent higher order ODE with a set of matching conditions resultingfrom the discontinuity of the gain function across the threshold. Then the prooffor the existence of a traveling front of (2.5) becomes a proof for the existence of aheteroclinic orbit of the ODE. We derive a system of multiple algebraic equations byapplying the matching conditions to the solutions of the ODEs across the thresholdpoint ξ = 0. From this system, we are able to construct different traveling frontsolutions. We previously use a similar ODE approach to prove the existence of single-and double-bump standing pulses [28, 29]. Zhang has shown a closed form expressionof the traveling front solution with an unique velocity for zero gain (Heaviside gainfunction) [56, 57]. Unfortunately, his approach will not work for non-zero gain ofpiecewise linear function with −1 < α < 1. Our indirect ODE approach not only giveus the explicit form of traveling front solutions, it also allow us to track the solutionsusing AUTO in XPPAUT. Consequently, we can further explore the shape of thefronts and the parameter range in which traveling fronts exist.

In the last section, we focus on linear stability analysis of traveling fronts withzero gain (α = 0.) We construct the integral Evans function using the eigenvalueequation. The advantage of the integral Evans function is that it can give generalstability criteria such as the results stated in Section 5.1. Therefore we can make astrong claim on the linear stability of all existing traveling fronts in the parameterranges we consider. The downside of the derivation of the integral Evans functionis that it cannot be extended to the case of non-zero gain. We need to developother analytic and numerical techniques that are completely different from Zhang’sapproach. One possible approach is to transfer the eigenvalue equation (5.1) to anODE system that can be studied using existing theory in ODE and dynamical systems.However, it will not be a similar study as finding the traveling fronts in Section 4.The analytical derivation and computation will be much more complicated due to thefollowing reasons. First, the essential spectrum is no longer a simple straight line. Itshould be a curve that could create instability by partially crossing the imaginary axison the complex plane. We need a much thorough study on the essential spectrum inwhich we will need other integral transforms, such as the Hilbert Transform. Second,the characteristic structure of the equivalent ODEs of (5.1) that is dependent onthe eigenvalue γ is much more complex. And there is no explicit way to representthose characteristic values except numerical computation. This will also make it verydifficult to rule out the degenerate cases when repeated eigenvalue values occur withlinearly dependent eigenfunctions. Finally, even though we can derive the Evansfunction using the equivalent ODEs, such Evans function may be in such a long and

Page 47: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

25

messy form that it is almost impossible to write down its expression. We may haveto numerically compute across a discretized x-y grid on the right half of the complexplane to investigate any possible point spectrum. The complex analytical derivationand heavy numerical computation for the stability of non-zero gain is beyond the scopeof this paper. We will study the stability analysis of non-zero gain in a forthcomingpaper.

7. Appendix.

7.1. Lemmas on the existence of traveling fronts in excitatory net-works.

Lemma 7.1. The excitatory coupling function w(x) = A2 e−A|x| (A ≤ 1) satisfies

the following ODE on R \ 0,

w′′′ − w′′

c−A2w′ +

A2w

c= 0 for all x 6= 0.(7.1)

Proof. Since w′(x) is continuous everywhere except at x = 0, we use the deriva-tives w′(x), w′′(x), and w′′′(x) on two separate domains x ∈ (0,∞) and x ∈ (−∞, 0)to obtain

w′′′ − w′′

c− a2w′ +

A2w

c=

(−A3w − A2w

c+A3w + A2w

c= 0 if x ∈ (0,∞)

A3w − A2wc−A3w + A2w

c= 0 if x ∈ (−∞, 0)

(7.2)

Remark: both w′(x) and w′′′(x) are discontinuous at x = 0. w′(0+) = −Aw(0) =−A

2

2 , w(0−) = Aw(0) = A2

2 . w′′(x) is continuous at x = 0. We also point out that (7.1)

is not the only ODE w satisfies. For example, w also satisfies ODE w′′ −A2w = 0.

Lemma 7.2. With three real roots, (a) (3.14) has two positive and one negativeroots when the traveling speed c > 0. (b) (3.14) has one positive and two negative rootswhen c < 0.

Proof. (a). In the case when λ1, λ2, λ3 ∈ R and the velocity c > 0, by Descartes’Sign Rule, f(λ) has two positive roots since there are two sign change of the coefficientsof f(λ). The coefficients of f(−λ) has only one sign change, therefore f(λ) has twopositive and one negative roots.

(b). Similarly, f(λ) has one positive and two negative roots if c < 0.

Lemma 7.3. When f(λ) has complex roots, let λ1 be the real root, λ2 and λ3 bethe complex conjugate pair such that λ2 = p+ qi. we have the following two cases:

(a). λ1 < 0, and p > 0 for c > 0.(b). λ1 > 0, and p < 0 for c < 0.Proof. (a). For c > 0, because λ1λ2λ3 = A2(α−1)

c < 0, and λ2λ3 = p2 + q2 > 0,λ1 must be negative. Since λ1 + λ2 + λ3 = λ1 + 2p = 1

c > 0, p must be positive.

(b). For c < 0, from λ1λ2λ3 = A2(α−1)c > 0, and λ2λ3 = p2 + q2 > 0, λ1 > 0.

From λ1 + λ2 + λ3 = λ1 + 2p = 1c < 0, P < 0.

Lemma 7.4. When 0 < α < 1, the cubic equation (3.14) has three real roots.Proof. For a cubic equation in the form x3+Bx2+Cx+D = 0, if the discriminant

∆ = B2C2 − 4C3 − 4B3D − 27D2 + 18BCD > 0, then the cubic equation has threedistinct real roots.

Page 48: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

26 Yixin Guo

Calculate ∆ for (3.14)

∆ = 4A6 +4(1− α)A2

c4+A4[1 + 18(1− α)− 27(1− α)2]

c2,(7.3)

where the first term is positive. If the parabola 1 + 18(1 − α) − 27(1 − α)2 in thesecond term is positive, then it is guaranteed that ∆ > 0. This parabola is positive if

+6− 2√

39

< α <6 + 2

√3

9.

Since we only consider α < 1, ∆ > 0 if 6−2√

39 < α < 1. Therefore, (3.14) has three

real roots when 0 < α < 1.Remark: Lemma 7.4 shows that the roots of cubic equation (3.14) are always real

no matter what the velocity or the values of other parameters are as long as α > 0.For α < 0, there may be three real roots or only one real root depending on the valuesof other parameters. See Fig. 3.1. The black part of each curve is for ∆ > 0 (threereal roots). The grey part, which only occurs for negative α, is for ∆ < 0 (only onereal root).

7.2. Lemmas on the existence of traveling fronts in lateral inhibitionnetworks.

The lateral inhibition coupling function w(x) = Ae−a|x|−e−|x| satisfies all the con-ditions C1–C5 listed in Section 2.1. w′(x) and w′′′(x) are continuous everywhere ex-cept at x = 0, such that w′(0−) = aA−1 = −w′(0+), w′′′(0−) = a3A−1 = −w′′′(0+).Further more, following lemma 7.5 and 7.6 hold for lateral inhibition coupling.

Lemma 7.5. The lateral inhibition coupling function w(x) = Ae−a|x|− e−|x| withA > 1, a > 1, satisfies the following differential equations on R \ 0 :

cw(v) − w(iv) − c(a2 + 1)w′′′ + (a2 + 1)w′′ + a2cw′ − a2w = 0, x 6= 0.(7.4)

Proof.

w(x) =

(Ae−ax − e−x x ∈ (0,∞)

Aeax − ex x ∈ (−∞, 0), w′(x) =

(−aAe−ax + e−x x ∈ (0,∞)

aAeax − ex x ∈ (−∞, 0),

w′′(x) =

(a2Ae−ax − e−x x ∈ (0,∞)

a2Aeax − ex x ∈ (−∞, 0), w′′′(x) =

(−a3Ae−ax + e−x x ∈ (0,∞)

a3Aeax − ex x ∈ (−∞, 0),

w(iv)(x) =

(a4Ae−ax − e−x x ∈ (0,∞)

a4Aeax − ex x ∈ (−∞, 0), w(v)(x) =

(−a5Ae−ax + e−x x ∈ (0,∞)

a5Aeax − ex x ∈ (−∞, 0),

From all the derivatives, we obtain the following:

w(v) − (a2 + 1)w′′′ + a2w′ =((−a5A+ a5A+ a3A− a3A)e−ax + (1− a2 − 1 + a2)e−x = 0 x ∈ (0,∞)

(a5A− a5A− a3A+ a3A)e−ax + (−1 + a2 + 1− a2)e−x = 0 x ∈ (−∞, 0).

Page 49: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

27

Similarly, we can show that w(iv) − (a2 + 1)w′′ + a2w = 0, for x 6= 0. Then

c(w(v) − (a2 + 1)w′′′ + a2w′)− (w(iv) − (a2 + 1)w′′ + a2w) = 0, x 6= 0.(7.5)

We rearrange the terms in (7.5) to obtain (7.4).

Lemma 7.6. In the lateral inhibition network with w(x) = Ae−a|x| − e−|x|, andpiecewise linear gain f(u) defined in (2.3), the following equalities are true:

∫ ∞−∞

Ae−a|ξ−η|f(u(η))dη =cu′(ξ)− cu′′′(ξ)− u(ξ) + u′′(ξ)− 2(1− aA)f(u(ξ))

a2 − 1

(7.6)

∫ ∞−∞

e−|ξ−η|f(u(η))dη =ca2u′(ξ)− cu′′′(ξ)− a2u(ξ) + u′′(ξ)− 2(1− aA)f(u(ξ))

a2 − 1

(7.7)

Proof. From (2.5),

−cu′(ξ) = −u(ξ) +∫ ∞−∞

(Ae−a|x| − e−|x|)f(u(η))dη(7.8)

From (4.4),

−cu′′′(ξ) = −u′′(ξ) +∫ ∞−∞

(a2Ae−a|x| − e−|x|)f(u(η))dη(7.9)

(7.9)-(7.8), and a2(7.8)+(7.8) gives

(a2 − 1)

Z ∞−∞

Ae−a|ξ−η|f(u(η))dη = c(u′(ξ)− u′′′(ξ))− u(ξ) + u′′(ξ)− 2(1− aA)f(u(ξ))

(a2 − 1)

Z ∞−∞

e−|ξ−η|f(u(η))dη = c(a2u′(ξ)− u′′′(ξ))− a2u(ξ) + u′′(ξ)− 2(1− aA)f(u(ξ)).

Lemma 7.7. At the threshold point ξ = 0, the traveling front u(ξ) satisfies thefollowing matching conditions.

u(0+) = h

u(0−) = h

u′(0+)− u′(0−) = 0u′′(0+)− u′′(0−) = 0

u′′′(0+)− u′′′(0−) =2c

(aA− 1)

uiv(0+)− uiv(0−) =2c2

(aA− 1) +2c

(aA− 1)df

dξ|ξ=0+

where dfdξ |ξ=0+ = αu′(0+).

Proof. The matching condition for u, u′, and u′′ are trivial. Since

−cu′′′(ξ) = −u′′(ξ) + w′(0−)f(u(ξ))− w′(0+)f(u(ξ)) +

Z ∞−∞

w′′(ξ − η)f(u(η))dη,

Page 50: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

28 Yixin Guo

we have

−cu′′′(0+) = −u′′(0+) + w′(0+)− w′(0−) +∫ ∞−∞

w′′(ξ − η)f(u(η))dη

−cu′′′(0−) = −u′′(0−) +∫ ∞−∞

w′′(ξ − η)f(u(η))dη.

Therefore

u′′′(0+)− u′′′(0−) =w′(0+)− w′(0−)

−c=

2(aA− 1)c

.

Similarly

−c[uiv(0+)− uiv(0−)

]= −

[u′′′(0+)− u′′′(0−)

]+[w′(0+)− w′(0−)

] dfdξ|ξ=0+ ,

that is

uiv(0+)− uiv(0−) =2(aA− 1)

c2+

2c

(aA− 1)df

dξ|ξ=0+ .

In the following Lemma 7.8 and 7.9, We suppose that u(η) is the solution ofODE (4.7), w = w(ξ − η) is the lateral inhibition coupling, and w with n number ofsubscript ζ represents the n–th order derivative of w with respect to ζ = ξ − η. Let()′ = d()

dη . u(iv) and u(v) both are well defined by using delta dirac function and its

derivative in their representation. Then the following two lemmas are true.

Lemma 7.8. For ξ 6= η, we have the following equalities:

wu′ =(wu)′ + wζu(7.10)

wu′′ =(wu′ + wζ)′ + wζζu(7.11)

wu′′′ =(wu′′ + wζu′ + wζζu) + wζζζu(7.12)

wu(iv) =(wu′′′ + wζu′′ + wζζu

′ + wζζζu)′ + wζζζζu(7.13)

wu(v) =(wu(iv) + wζu′′′ + wζζu

′′ + wζζζu′ + wζζζζu)′ + wζζζζζu(7.14)

Proof. We only show (7.14) here. (7.10)–(7.13) can be shown similarly.

wu(v) =(wu(iv) + wζu′′′ + wζζu

′′ + wζζζu′ + wζζζζu)′ + wζζζζζu

=− wζu(iv) + wu(v) − wζζu′′′ + wζu(iv) − wζζζu′′ + wζζu

′′′ − wζζζζu′+wζζζu

′′ − wζζζζζu+ wζζζζu′ + wζζζζζu.

Lemma 7.9. The two terms in∫∞−∞Kη(ξ, η)dη are:

Z ∞−∞

(cwu(iv) − wu′′′)′dη = 2(1− aA)(A− 1)(df

dη|ξ+ −

df

dη|ξ−),

(7.15)

Page 51: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

29

Z ∞−∞

c(wζu′′′)′dη = −2(1− aA)2(f |ξ+ + f |ξ−) + 2(1− aA)u′′ − 2(1− aA)

Z ∞−∞

w′′f(u)dη.

(7.16)

Proof.Z ∞−∞

(cwu(iv) − wu′′′)′dη =

Z ξ

−∞(cwu(iv) − wu′′′)′dη +

Z ∞ξ

(cwu(iv) − wu′′′)′dη

= w(0+)(cu(iv)(ξ−)− u′′′(ξ−))− w(0−)(cu(iv)(ξ+)− u′′′(ξ+))

= w(0)−c(u(iv)(ξ+)− u(iv)(ξ−)) + (u′′′(ξ+)− u′′′(ξ−))

Note that u(iv) and u′′′ have jump discontinuity at ξ where u(ξ) = h. According tothe matching condition where u crosses the threshold h,

u′′′(ξ+)− u′′′(ξ−) =2

c(aA− 1)Θ(u(ξ)− h),(7.17)

u(iv)(ξ+)− u(iv)(ξ−) =2

c2(aA− 1)Θ(u(ξ)− h) +

2

c(aA− 1)(

df

dη|ξ+ −

df

dη|ξ−)

u(iv)(ξ+)− u(iv)(ξ−) =1

c(u′′′(ξ+)− u′′′(ξ−)) +

2

c(aA− 1)(

df

dη|ξ+ −

df

dη|ξ−)(7.18)

c (7.18) +(7.17) givesZ ∞−∞

(cwu(iv) − wu′′′)′dη = 2(1− aA)(A− 1)(df

dη|ξ+ −

df

dη|ξ−)(7.19)

Z ∞−∞

c(wζu′′′)′dη =

Z ξ

−∞c(wζu

′′′)′dη +

Z ∞ξ

c(wζu′′′)′dη

= c(wζ(0+)u′′′(ξ−)− wζ(0−)u′′′(ξ+)) = c(1− aA)(u′′′(ξ−) + u′′′(ξ+))

Using −cu′′′ = −u′′ + 2(1− aA)f +∫∞−∞ w′′f(u)dη,Z ∞

−∞c(wζu

′′′)′dη = (1− aA)u′′(ξ−)− 2(1− aA)f |ξ− −Z ∞−∞

w′′fdη+

u′′(ξ+)− 2(1− aA)f |ξ+ −Z ∞−∞

w′′f(u)dη

= 2(1− aA)u′′ − 2(1− aA)2(f |ξ+ + f |ξ−)− 2(1− aA)

Z ∞−∞

w′′f(u)dη.

Lemma 7.10. For the characteristic equation

cλ5 − λ4 − c(a2 + 1)λ3 + (a2 + 1 + 2α(1− aA))λ2 + ca2λ− (a2 + 2αa(a−A)) = 0,(7.20)

there are either 5 real characteristic values or four complex with one real characteristicvalues when its discriminant ∆ > 0; there are two complex and three real characteristicvalues when ∆ < 0. There are repeated roots when ∆ = 0.

Proof. The discriminant for a fifth degree polynomial is ∆ = a(2n−2)n

∏i<j(ri −

rj)2, where an is the coefficient of the leading term and r1, r2, ... are the roots of thepolynomial.

Page 52: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

30 Yixin Guo

For equation (7.20),

∆ = c8Yi<j<5

(λi − λj)2(7.21)

It is obvious that when all λ are real, ∆ > 0. When four are complex and one real,∆ is still positive due to its symmetry.

The next possibility is two complex and three real roots of (7.20). Let supposeλ1 and λ2, without loss of generality, are the complex conjugate pair, then (λ1 − λ2)is the only complex factor with only imaginary part in 7.21. All other factors are realor form complex conjugate pairs therefore the product of them is real and positive.Hence ∆ < 0 since (λ1 − λ2)2 is negative.

It is obvious that there are repeated roots when ∆ = 0.Lemma 7.11. Derivation of systems of equations for Case L1, L2 and case L4–L7

listed in section 4.3.

Proof. In the following derivation, d0 = 2(1−αh)(A−a)a+2α(a−A) , and d0 = a2 = 1−αh

1−α with

A = 1.5a. D = 2(aA−1)c2 + 2(aA−1)α(c1a+c2+

c3c )

c .

Positive speed c > 0: u(ξ) = c1e1c ξ + c2e

−aξ + c3e−ξ, on ξ ∈ (−∞, 0].

Case L1: All five real λ with λj < 0 (j = 1,2), and λj (j = 3,4,5),

u(ξ) = d1eλ1ξ + d2e

λ2ξ + d0, ξ > 0, then

8>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>:

d1 + d2 + d0 = h

c1 + c2 + c3 = h

d1λ1 + d2λ2 = ac1 + c2 +c3c

d1λ21 + d2λ

22 = a2c1 + c2 +

c3c2

d1λ31 + d2λ

22 = a3c1 + c2 +

c3c3

+2(aA− 1)

c

d1λ41 + d2λ

42 = a4c1 + c2 +

c3c4

+D

λ1 + λ2 + λ3 + λ4 + λ5 =1

c

λ1(λ2 + λ3 + λ4 + λ5) + λ2(λ3 + λ4 + λ5) + λ3(λ4 + λ5) + λ4λ5 = −(a2 + 1)

λ1λ2(λ3 + λ4 + λ5) + (λ1λ3 + λ2λ3)(λ4 + λ5) + λ4λ5(λ1 + λ2 + λ3) = M

λ1λ2λ3(λ4 + λ5) + λ1λ4λ5(λ2 + λ3) + λ2λ3λ4λ5 = a2

λ1λ2λ3λ4λ5 =a2 + 2α(a−A)

c

(7.22)

where M = 2α(aA−1)−a2−1c

.

Case L2: Three real λ with λ1,2 < 0, λ3 > 0 and λ4,5 = l ± ir (l > 0), then

u(ξ) = d1eλ1ξ + d2e

λ2ξ + d0, ξ > 0, then

Page 53: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

31

d1 + d2 + d0 = h

c1 + c2 + c3 = h

d1λ1 + d2λ2 = ac1 + c2 +c3c

d1λ21 + d2λ

22 = a2c1 + c2 +

c3c2

d1λ31 + d2λ

22 = a3c1 + c2 +

c3

c3+

2(aA− 1)

c

d1λ41 + d2λ

42 = a4c1 + c2 +

c3c4

+D

λ1 + λ2 + λ3 + 2l =1

c

λ1(λ2 + λ3 + 2l) + λ2(λ3 + 2l) + 2lλ3 + (l2 + r2) = −(a2 + 1)

λ1λ2(λ3 + 2l) + 2l(λ1λ3 + λ2λ3) + (l2 + r2)(λ1 + λ2 + λ3) = M

2lλ1λ2λ3 + (l2 + r2)(λ1λ2 + λ1λ3 + λ2λ3 = a2

λ1λ2λ3(l2 + r2) =a2 + 2α(a−A)

c

Case L3: One real positive λ and four complex λ, s. t. λ1,2 = p± iq, λ3 > 0, andλ4,5 = l ± ir with p < 0, l > 0, then

u(ξ) = d1epξ cos(qξ) + d2e

pξ sin qξ + d0, ξ > 0.

The system of equations (4.33) is listed in 4.3.

Negative speed c < 0: u(ξ) = c1eaξ + c2e

ξ, on ξ ∈ (−∞, 0].Case L4: Five real λ s. t. λ1,2,3 < 0, and λ4,5 > 0, then

u(ξ) = d1eλ1ξ + d2e

λ2ξ + d3eλ3ξ + d0, ξ > 0, then

8>>>>>>>>>>>>>>>>>>>>>>>>>><>>>>>>>>>>>>>>>>>>>>>>>>>>:

d1 + d2 + d3 + d0 = h

c1 + c2 = h

d1λ1 + d2λ2 + d3λ3 = ac1 + c2

d1λ21 + d2λ

22 + d3λ

23 = a2c1 + c2

d1λ31 + d2λ

22 + d3λ

33 = a3c1 + c2+

d1λ41 + d2λ

42d3λ

43 = a4c1 + c2 +D

λ1 + λ2 + λ3 + λ4 + λ5 =1

c

λ1(λ2 + λ3 + λ4 + λ5) + λ2(λ3 + λ4 + λ5) + λ3(λ4 + λ5) + λ4λ5 = −(a2 + 1)

λ1λ2(λ3 + λ4 + λ5) + (λ1λ3 + λ2λ3)(λ4 + λ5) + λ4λ5(λ1 + λ2 + λ3) = M

λ1λ2λ3(λ4 + λ5) + λ1λ4λ5(λ2 + λ3) + λ2λ3λ4λ5 = a2

λ1λ2λ3λ4λ5 =a2 + 2α(a−A)

c

(7.23)

Case L5: Three real λ with λ1,2 > 0, λ3 < 0 and λ4,5 = l ± ir (l < 0), then

u(ξ) = d1elξ cos rξ + d2e

lξ sin rξ + d3eλ3ξ + d0, ξ > 0, then

Page 54: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

32 Yixin Guo8>>>>>>>>>>>>>>>>>>>>>>>>>>>>><>>>>>>>>>>>>>>>>>>>>>>>>>>>>>:

d1 + d3 + d0 = h

c1 + c2 = h

d1l + d2r + d3λ3 = ac1 + c2

d1l2 + +2d2lr − d1r

2 + d3λ23 = a2c1 + c2 +

c3c2

d1l3 + +3d2l2r − 3d1lr2 − d2r3 + d3λ

33 = a3c1 + c2 +

2(aA− 1)

c

d1l4 + 4d2l3r − 6d1l2r2 − 4d2lr3 + d1r4 + d3λ

42 = a4c1 + c2 +D

λ1 + λ2 + λ3 + 2l =1

c

λ1(λ2 + λ3 + 2l) + λ2(λ3 + 2l) + 2lλ3 + (l2 + r2) = −(a2 + 1)

λ1λ2(λ3 + 2l) + 2l(λ1λ3 + λ2λ3) + (l2 + r2)(λ1 + λ2 + λ3) = M

2lλ1λ2λ3 + (l2 + r2)(λ1λ2 + λ1λ3 + λ2λ3 = a2

λ1λ2λ3(l2 + r2) =a2 + 2α(a−A)

c

Case L6: One real negative and four complex λ s.t. λ1,2 = p ± iq , λ3 > 0, andλ4,5 = l ± ir with p < 0, and l > 0, then

u(ξ) = d1epξ cos qξ + d2e

pξ sin qξ + d3eλ3ξ + d0, ξ > 0, then8>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>:

d0 + d1 + d3 = h

c1 + c2 = h

d1p+ d2q + d3λ3 = ac1 + c2 +c3c

d1p2 + 2d2pq − d1q

2 + d3λ23 = a2c1 + c2 +

c3c2

d1p3 + 3d2p

2q − 3d1pq2 − d2q

3 + d3λ33 = a3c1 + c2 +

2(aA− 1)

c

d1p4 + 4d2p

3q − 6d1p2q2 − 4d2pq

3 + d1q4 + d3λ

43 = a4c1 + c2 +D

2p+ λ3 + 2l =1

c

(p2 + q2) + 2pλ3 + 4pl + 2λ3l + (l2 + r2) = −(a2 + 1)

(p2 + q2)λ3 + 2(p2 + q2 + 2pλ3)l + (2p+ λ3)(l2 + r2) = M

2(p2 + q2)λ3l + (p2 + q2 + 2pλ3)l2 + r2) = a2

(p2 + q2)λ3(l2 + r2) =a2 + 2α(a−A)

c

Case L7: When ∆ = 0, there are repeated roots of (4.27). This case only happensat a few isolated values of α, and it may occur for both c > 0 and c < 0.

• If c > 0, ∆ = 0 is the transition between either Case L1 and L2, or Case L2and L3. The repeated roots are λ1 = λ2 < 0. The solution from is

u(ξ) = d1eλ1ξ + d2ξe

λ1ξ + d0, ξ > 0,

u(ξ) = c1e1c ξ + c2e

−aξ + c3e−ξ, ξ < 0.

• If c < 0, ∆ = 0 is the transition between either Case L4 and Case L5, or CaseL5 and Case L6. The repeated roots are real λ4 = λ5 > 0. The solution formis the same as Case L5.

We omit the equations in L7.

Page 55: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

33

7.3. Lemmas on the stability of traveling fronts with α = 0.

Proof of Lemma 5.1Proof. Proof of part (a): The eigenvalue problem for operator L1 is

c∂φ(ξ)∂ξ

− (γ + 1)φ(ξ) = 0

with solution φ1(γ, ξ) = e1+γc ξ. The solution of the adjoint equation is φ2(γ, ξ) =

e−1+γc ξ. Notice that φ1(γ, ξ)φ2(γ, ξ) = 1.Using the method of variation of parameters, we assume the solution of (5.2) is

φ(ξ) = φ(γ, ξ) = φ1(γ, ξ)H(γ, ξ), and plug it into the eigenvalue problem (5.2), wehave

∂H

∂ξ= −w(ξ)φ(γ, 0)

cu′0(0)φ2(γ, ξ)(7.24)

Integrating from ξ to ∞

H(γ, ξ) = H(γ) +φ(γ, 0)cu′0(0)

∫ ∞ξ

w(s)φ2(γ, s)ds

where H(γ) is an appropriate complex constant. Then

φ(γ, ξ) = φ1(γ, ξ)(H(γ) +

φ(γ, 0)cu′0(0)

∫ ∞ξ

w(s)φ2(γ, s)ds).

which gives

φ1(γ, ξ)H(γ) = φ(γ, ξ)− φ1(γ, ξ)φ(γ, 0)cu′0(0)

∫ ∞ξ

w(s)φ2(γ, s)ds.(7.25)

Set ξ = 0 in (7.25), and notice φ1(γ, 0) = 1,

H(γ) = φ(γ, 0)(

1− 1cu′0(0)

∫ ∞0

w(s)φ2(γ, s)ds).

γ is an eigenvalue with eigenfunction φ(γ, ξ) for operator L such that Lφ = γφ (weomit the straightforward verification that Lφ = γφ.) For φ(γ, ξ) to remain bounded,the following function E(γ) must be zero,

E(γ) = 1− 1cu′0(0)

∫ ∞0

w(s)φ2(γ, s)ds.

We call E(γ) as the Evans function following Zhang’s paper [56]. Due to translationinvariance of the traveling front, γ = 0 is an eigenvalue. For the zero eigenvalue, itscorresponding eigenfunction is simply

φ(γ, ξ) = φ1(γ, ξ)φ(γ, 0)cu′0(0)

∫ ∞ξ

w(s)φ2(γ, s)ds, φ(γ, ξ) ∈ BC1(R,C),(7.26)

where γ = 0 satisfies E(γ) = 0.

Page 56: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

34 Yixin Guo

Non-zero eigenvalues of operator L not only satisfies E(γ) = 0, it also obeysone more constraint to maintain the boundedness of its corresponding eigenfunction(7.26). Use the ‘Mexican Hat’ coupling w(x), then for ξ < 0,

φ(γ, ξ) = φ1(γ, ξ)φ(γ, 0)

cu′0(0)

Z ∞ξ

w(s)φ2(γ, s)ds

= φ1(γ, ξ)φ(γ, 0)

cu′0(0)

„Z 0

ξ

w(s)φ2(γ, s)ds+

Z ∞0

w(s)φ2(γ, s)ds

«=

Aφ(γ, 0)

cu′0(0)(a+ 1+γc

)eaξ − φ(γ, 0)

cu′0(0)(a+ 1+γc

)eξ

− φ1(γ, ξ)φ(γ, 0)

cu′0(0)

A

a− 1+γc

+A

a+ 1+γc

− 1

1− 1+γc

− 1

1 + 1+γc

!

=Aφ(γ, 0)

cu′0(0)(a+ 1+γc

)eaξ − φ(γ, 0)

cu′0(0)(a+ 1+γc

)eξ

− φ1(γ, ξ)2φ(γ, 0)

cu′0(0)

aA

a2 − ( 1+γc

)2− 1

1− ( 1+γc

)2

!.

For φ(γ, ξ) to remain bounded,

aA

a2 − ( 1+γc )2

− 11− ( 1+γ

c )2= 0,(7.27)

which gives

(1 + γ

c)2 =

a(A− a)aA− 1

.(7.28)

We omit the trivial calculation that φ(γ, ξ) remains bounded on ξ > 0. Therefore,non-zero eigenvalues of a traveling front, if any, must satisfy both E(γ) = 0 and(7.28). Its corresponding eigenfunction is (7.26) with constraint (7.28).

Proof of part (b) for traveling fronts with negative velocity is similar. There aretwo differences compared with the proof of part (a). One is that we integrate (7.24)from −∞ to ξ. The other is that the form of the eigenfunction is the following

φ(γ, ξ) = −φ1(γ, ξ)φ(γ, 0)cu′0(0)

∫ ξ

−∞w(s)φ2(γ, s)ds, φ(γ, ξ) ∈ BC1(R,C),(7.29)

which is different from (7.26). The constraint for non-zero eigenvalue and eigenfunc-tion is still the same as (7.28).

Remark: The eigenfunction for non-zero eigenvalue (c > 0) can be expressedexplicitly as

φ(γ, ξ) =

1

cu′0(0)

(− Aa− 1+γ

c

eaξ + 11− 1+γ

c

eξ)

if ξ ∈ (−∞, 0),1

cu′0(0)

(A

a+ 1+γc

e−aξ − 11+ 1+γ

c

e−ξ)

if ξ ∈ (0,∞).

with

φ(γ, 0+)− φ(γ, 0−) =aA

a2 − ( 1+γc )2

− 11− ( 1+γ

c )2.(7.30)

Page 57: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

35

By (7.27), φ(γ, 0+)− φ(γ, 0−) = 0. Hence, the condition for the continuity of φ(γ, ξ)at ξ = 0 is the same as the condition for φ(γ, ξ) to remain bounded on ξ ∈ (−∞,∞).This is also true for traveling fronts with c < 0.

Proof of Lemma 5.2Proof. For traveling fronts with c > 0:γ = 0 is an eigenvalue because translation invariance of the traveling front. There-

fore, E(0) = H(0) = 0. Then

1cu′0(0)

∫ ∞0

w(s)φ2(γ, s)ds = 1.

Calculate the integration using w(s) = Ae−a|s| − e−|s|, we have cu′0(0) = Acac+1 −

cc+1 ,

which is (5.6) after simplification.For traveling fronts with c < 0, similar calculation will give (5.7).

Proof of Corollary 5.3Proof. For traveling fronts with c > 0:The explicit form of Evans function E(γ) is

E(γ) =cu′0(0)(a+ 1+γ

c )(1 + 1+γc )−A(1 + 1+γ

c ) + (a+ 1+γc )

cu′0(0)(a+ 1+γc )(1 + 1+γ

c )

Simplify the numerator and set it equal to zero, we have the quadratic equation of γ

u′0γ2 + (1−A+ u′0(2 + c+ ac))γ + (u′0(ac+ 1)(c+ 1)−A(c+ 1) + (ac+ 1)) = 0,

(7.31)

where u′0 = u′0(0). Since u′0(ac+ 1)(c+ 1) = A(c+ 1)− (ac+ 1) by Lemma 5.2, (7.31)is the same as

u′0γ2 + (1−A+ u′0(2 + c+ ac))γ = 0.(7.32)

Obviously, E(γ) has a zero root, which is the zero eigenvalue due to translation in-variance, and a nonzero root

γ =A− 1u′0(0)

− (2 + c+ ac).

Using (5.6) for u′0(0),

γ =(ac+ 1)2 −A(c+ 1)2

A(c+ 1)− (ac+ 1)

=− (√A− a)c+ (

√A− 1)(

√A+ a)c+ (

√A+ 1)

(A− a)c+ (A− 1).

Since a <√A, A > 1, and 0 < c < 1

a , we can see that γ is a negative quantity.Therefore, E(γ) 6= 0 for Re(γ) > 0.

The non-zero γ = A−1u′0(0)

− (2 + c + ac) obtained by solving E(γ) = 0 does notsatisfy condition (5.4) given in Lemma 5.1, therefore, it fails to be an eigenvalue. Wecan conclude that zero is the only eigenvalue of L with positive velocity.

Page 58: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

36 Yixin Guo

For traveling fronts with c < 0, the non-zero γ that satisfies

E(γ) = 1 +1

cu′0(0)

∫ 0

−∞w(s)e−

1+γc sds = 0,

is

γ =A− 1u′0(0)

− (2− c− ac)

=(a−

√A)c+ (

√A− 1)(a+

√A)c− (

√A+ 1)

(a−A)c−A(c− 1)(7.33)

where a <√A, A > 1, and − 1

a < c < 0, the denominator and the first factor inthe numerator of (7.33) are positive. The second factor in the numerator is negative.Therefore, it is a negative value. Again, this γ value does not satisfy condition (5.4).It is not an eigenvalue. Hence, zero is the only eigenvalue of L with c < 0.

Acknowledgments. The author would like to thank Dennis Guang Yang for illu-minating discussion and helpful comments on the manuscript. The author is thankfulto the anonymous referee who provide valuable suggestions that make the presentationof this paper better.

REFERENCES

[1] S. Amari, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cyber-netics, 27 (1977), pp. 77–87.

[2] A Arieli, A. Sterkin, A Grinvald, and A. Aertsen, Dynamics of ongoing activity: explana-tion of the large variability in evoked cortical responses, Science, 273 (1996), pp. 1868–1871.

[3] L. Bai, X. Huang, Q. Yang, and J.-Y. Wu, Spatiotemporal patterns of an evoked net-work oscillation in neocortical slices: coupled local oscillators, J. Neurophysiol., 96 (2006),pp. 2528–2538.

[4] S. N. Baker, J. M. Kilner, E. M. Pinches, and R. N. Lemon, The role of synchrony andoscillations in the motor output, Exp. Brain Res., 128 (1999), pp. 109–117.

[5] . Botelho, J. Jamison, and A. Murdock, Single-pulse solutions for oscillatory couplingfunctions in neural networks, Journal of Dynamics and Differential Equations, 20(1) (2008),pp. 165–199.

[6] P Bressloff and S. E. Folias, Front bifurcations in an excitatory neural network, SIAM j.Appl. Math., 65 (2004), pp. 131–151.

[7] Y Chagnac-Amitai and B. W. Connors, Synchronized excitation and inhibition driven byintrinsically bursting neurons in neocortex, J Neurophysiol., 62 (1989), pp. 1149–1162.

[8] R. D. Chervin, P. A. Pierce, and B. W. Connors, Periodicity and directionality in thepropagation of epileptiform discharges across discharges across neocortex, J. Neurophysiol.,60 (1988), pp. 1695–1713.

[9] B. W. Connors and Y. Amitai, Generation of epileptiform discharge by local circuits ofneocortex, in Epilepsy: Models, Mechanisms, and Concepts, P. A. Schwartkroin, ed., Cam-bridge University Press, Cambridge, UK, (1993), pp. 388–423.

[10] S. Coombes, G. J. Lord, and M. R. Owen, Waves and bumps in neuronal networks withaxo-dendritic synaptic interactions, Phys. D, 178 (2003), pp. 219–241.

[11] J. P. Donoghue, J. N. Sanes, N. G. Hatsopoulos, and G. Gaal, Neural discharge andlocal field potential oscillations in primate motor cortex during voluntary movements, J.Neurophysiol., 79 (1998), pp. 159–173.

[12] G. B. Ermentrout, Xppaut, simulation software tool.[13] , Reduction of conductance-based models with slow synapses to neural nets, J. Math.

Biology, 6 (1994), pp. 679–695.[14] G. B Ermentrout, Simulating, Analyzing, and Animating Dynamical Systems: A Guide to

XPPAUT for Researchers and Students, SIAM, 2002.[15] J. W. Evans, Nerve axon equations, i: Linear approximations, Indiana Univ. Math. J., 21

(1972), pp. 877–955.

Page 59: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

37

[16] , Nerve axon equations, ii: Stability at rest, Indiana Univ. Math. J., 22 (1972), pp. 75–90.[17] , Nerve axon equations, iii: Stability of the nerve impulse, Indiana Univ. Math. J., 22

(1972), pp. 577–594.[18] , Nerve axon equations, iv: The stable and unstable impulse, Indiana Univ. Math. J., 24

(1975), pp. 1169–1190.[19] I Ferezou, S. Bolea, and C. C. Petersen, Visualizing the cortical respresentation of whisker

touch: voltage-sensitive dye imaging in freely moving mice, Neuron, 50 (2006), pp. 617–629.

[20] P. C. Fife and B. J. McLeod, The approach of solutions of nonliear diffusion equations totravling front solutions, Arch. Rational Mech. Anal., 65 (1977), pp. 15–40.

[21] , A phase plane discussed of convergence to traveling fronts fro nonlinear diffusion, Arch.Rational Mech. Anal., 75 (1981), pp. 281–314.

[22] S. E. Folias and P.C. Bressloff, Stimulus-locked waves and breathers in an excitatory neuralnetwork, SIAM J. Appl. Math., 65 (2005), pp. 2067–2092.

[23] W. J. Freeman and J. M. Barrie, Analysis of spatial patterns of phase in neocortical gammaEEGs in rabbit, J. Neurophysiol., 84 (2000), pp. 1266–1278.

[24] R. W. Friedrich and Korsching S. I., Combinatorial and chemotopic odorant coding in thezebrafish olfactory bulb visualized by optical imaging, Neuron, 18 (1997), pp. 737–752.

[25] T Gilbertson, E. Lalo, L. Doyle, V. Di Lazzaro, B. Cioni, and P. Brown, Existing motorstate is favored at the expense of new movement during 13-35 Hz oscillatory synchrony inthe human corticospinal system, J. Neurosci., 25 (2005), pp. 7771–7779.

[26] S. Grossberg and D. Levine, Some developmental and attentional biases in the contrastenhancement and short-term memory of recurrent neural networks, Journal of TheoreticalBiology, 53 (1975), pp. 341–380.

[27] Y. Guo, Existence and stability of standing pulses in neural networks, (PhD thesis, Universityof Pittsburgh, 2003).

[28] Y. Guo and C.C. Chow, Existence and stability of standing pulses in neural networks: I.existence, SIAM J. Applied Dynamical Systems, 4 (2) (2005), pp. 217–248.

[29] , Existence and stability of standing pulses in neural networks: I. stability, SIAM J.Applied Dynamical Systems, 4 (2) (2005), pp. 249–281.

[30] E. Krisner, The link between integral equations and higher order ODEs, J. Math. Anal. andApp., 291(1) (2004), pp. 165–179.

[31] N Laaris, G. C. Carlson, and A. Keller, Thalamic-evoked synaptic interactions in barrelcortex revealed by optical imaging, J. Neurosci., 20 (2000), pp. 1529–1537.

[32] Y. W. Lam, L. B. Cohen, M. Wachowiak, and M. R. Zochowski, Odors elicit three differentoscillations in the turtle olfactory bulb, J. Neurosci., 20 (2000), pp. 749–762.

[33] Y. W. Lam, L. B. Cohen, and M. R. Zochowski, Odorant specificity of three oscillations andthe DC signal in the turtle olfactory bulb, Eur. J. Neurosci., 17 (2003), pp. 436–446.

[34] J Angela Hart Murdock, Multi-parameter oscillatory connection functions in neural fieldmodels, Proceedings of the Conference on Fluids and WavesRecent Trends in AppliedAnalysis, 440 (2007).

[35] V. N. Murthy and E. E. Fetz, Coherent 25-to 35-Hz oscillations in the sensorimotor cortexof awake behaving monkeys, PNAS, 89 (1992), pp. 5670–5674.

[36] , Oscillatory activity in sensorimotor cortex of awake monkeys: synchronization of localfield potentials and relation to behavior, 76, J. Neurophysiol. (1996), pp. 3949–3967.

[37] , Synchronization of neurons during local field potential oscillations in sensorimotorcortex of awake monkeys, J. Neurophysiol., 76 (1996), pp. 3968–3982.

[38] M. A. Nicolelis, L. A. Baccala, R. C. Lin, and J. K Chapin, Sensorimotor encodingby synchronous neural ensemble activity at multiple levels of the somatosensory system,Science, 268 (1995), pp. 1352–1358.

[39] C. C. Petersen, Hahn T. T., M Mehta, A. Grinvald, and B Sakmann, Interaction ofsensory responses with spontaneous depolarization in layer 2/3 barrel cortex, PNAS, 100(2003), pp. 13638–13643.

[40] G. Pfurscheller, B Graimann, J. E. Huggins, S. P. Levine, and L. A. Schuh, Spatiotem-poral patterns of beta desynchronization and gamma synchronization in cotricographic dataduring self-paced movement, Clin. Neurophysiol., 114 (2003), pp. 1226–1236.

[41] G. Pfurtscheller, G. Krausz, and C. Neuper, Mechanical stimulation of the fingertip caninduce bursts of beta oscillations in sensorimoter areas, J. Clin. Neurophysiol., 18 (2001),pp. 559–564.

[42] G Pfurtscheller, C Neuper, C Brunner, and F. L. da Silva, Beta rebound after differenttypes of motor imagery in man, Neurosci. Lett., 378 (2005), pp. 156–159.

[43] J. D. Pinto and G. B. Ermentrout, Spatially structured activity in synaptically coupled

Page 60: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

38 Yixin Guo

neuronal networks:1 traveling fronts and pulses, SIAM J. Appl. Math., 62 (2002), pp. 206–225.

[44] J. D. Pinto, K. R. Jackson, and C. E. Wayne, Existence and stability of traveling pulses ina continuous neuronal network, SIAM J. Appl. Dynam. Syst., 4 (2005), pp. 954–984.

[45] J. D. Pinto, S. L Patrick, W. C Huang, and B. W Connors, Initiation, propagation and ter-mination of epileptiform activity in rodent neocortex in vitro involve distinct mechanisms,J. Neurosci., 25 (2005), pp. 8131–8140.

[46] J. C Prechtl, T. H Bullock, and D Kleinfelf, Direct evidence for local oscillatory currentsources and intracortical phase gradients in turtle visual cortex, PNAS, 97 (2000), pp. 877–882.

[47] J. D. Prechtl, L. B. Cohen, B Pesaran, P. P Mitra, and D Kleinfeld, Visual stimuliinduce waves of electrical activity in turtle cortex, PNAS, 94 (1997), pp. 7621–7626.

[48] D. Rubino, Robbinins K. A., and N. G. Hatsopouls, Propagating waves mediate informationtransfer in the motor cortex, Nat. Neurosci., 9 (2006), pp. 1549–1557.

[49] B. Sandstede, Stability of travelling waves, Handbook of Dynamical Systems II (Edited by BFiedler), (2002), pp. 983–1055.

[50] , Evans functions and nonlinear stability of travelling waves in neuronal network models,International Journal of Bifurcation and Chaos, 17 (2007), pp. 2693–2704.

[51] J. N. Sanes and J. P. Donoghue, Oscillations in local field potentials of the primate motorcortex during voluntary movement, PNAS, 90 (1993), pp. 4470–4474.

[52] D. M Senseman and K. A. Robbins, Modal behavior of cortical neural networks during visualprocessing, J. Neurosci., 19(10) (1999), p. RC3.

[53] H. R. Wilson and J. D. Cowan, Excitatory and inhibitory interactions in localized populationsof model neurons, Biophys. J., 12 (1973), pp. 1–24.

[54] , A mathematical theory of the functional dynamics of cortical and thalamic nervoustissue, Kybernetic, 13 (1973), pp. 55–80.

[55] J. Y. Wu, L. Guan, and Y. Tsau, Propagating activation during oscillations and evokedresponses in neocortical slices, J. Neurosci., 19 (1999), pp. 5005–5015.

[56] L. Zhang, On stability of traveling wave solutions in synaptically coupled neuronal networks,Differential and Integral Equations, 16 (5) (2003), pp. 513–536.

[57] , Existence, uniqueness and exponential stability of traveling wave solutions of someintegral differential equations arising from neuronal networks, J. Differential Equations,197 (2004), pp. 162–196.

Page 61: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

Neural Networks 24 (2011) 602–616

Contents lists available at ScienceDirect

Neural Networks

journal homepage: www.elsevier.com/locate/neunet

2011 Special Issue

Multi-site stimulation of subthalamic nucleus diminishes thalamocortical relayerrors in a biophysical network modelYixin Guo a,∗, Jonathan E. Rubin b

a Department of Mathematics, Drexel University, Philadelphia, PA 19104, United Statesb Department of Mathematics and Complex Biological Systems Group, University of Pittsburgh, Pittsburgh, PA 15260, United States

a r t i c l e i n f o

Keywords:Parkinson’s diseaseDeep brain stimulationThalamocortical relayLocal field potentialMulti-site stimulationBasal ganglia model

a b s t r a c t

This paper presents results on a computational study of how multi-site stimulation of the subthalamicnucleus (STN), within the basal ganglia, can improve the fidelity of thalamocortical (TC) relay in aparkinsonian network model. In the absence of stimulation, the network model generates activityfeaturing synchronized bursting by clusters of neurons in the STN and internal segment of the globuspallidus (GPi), as occurs experimentally in parkinsonian states. This activity yields rhythmic inhibitionfrom GPi to TC neurons, which compromises TC relay of excitatory inputs. We incorporate two types ofmulti-site STN stimulation into the network model. One stimulation paradigm features coordinated resetpulses that are on for different subintervals of each period at different sites. The other is based on a filteredversion of the local field potential recorded from the STN population. Our computational results show thatboth types of stimulation significantly diminish TC relay errors; the former reduces the rhythmicity ofthe net GPi input to TC neurons and the latter reduces, but does not eliminate, STN activity. Both types ofstimulation represent promising directions for possible therapeutic usewith Parkinson’s disease patients.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Deep brain stimulation (DBS) is an established clinical inter-vention for Parkinson’s disease (PD), essential tremor, and dys-tonia that is now being explored for use in a variety of disor-ders (reviewed in McIntyre & Hahn, 2010; Wichmann & DeLong,2006). In the PD case, DBS is implemented by an implanted pulsegenerator that delivers an ongoing stream of high frequency cur-rent pulses. Although this form of therapy has achieved remark-able success, improvements in DBS would be desirable in or-der to reduce the associated energy use and need for invasivebattery changes, help patients with symptoms that do not re-spond to current DBS paradigms, and allow for individualized op-timization of DBS strategies (Deuschl et al., 2006; Feng, Shea-Brown, Rabitz, Greenwald, & Kosut, 2007a, 2007b; Hauptmann,Omel’chenko, Popovych,Maistrenko, & Tass, 2007; Rodriguez-Orozet al., 2005; Volkmann, 2004). Efforts to develop such improve-ments are hampered, however, by a lack of theoretical understand-ing of the mechanisms through which DBS achieves its clinicalefficacy.

Parkinson’s disease (PD) and experimental models of parkin-sonism are associated with changes in activity patterns of neurons

∗ Corresponding author. Tel.: +1 215 895 1410; fax: +1 215 895 1582.E-mail address: [email protected] (Y. Guo).

0893-6080/$ – see front matter© 2011 Elsevier Ltd. All rights reserved.doi:10.1016/j.neunet.2011.03.010

in the basal ganglia, including increases in synchrony, firing rates,and bursting activity in the subthalamic nucleus (STN) and in-ternal segment of the globus pallidus (GPi) (Bergman, Wich-mann, Karmon, & DeLong, 1994; Boraud, Bezard, Guehl, Bioulac,& Gross, 1998; Brown et al., 2001; Hurtado, Rubchinsky, Sigvardt,Wheelock, & Pappas, 2005; Levy, Hutchison, Lozano, & Dostro-vsky, 2003; Magnin, Morel, & Jeanmonod, 2000; Nini, Feingold,Slovin, & Bergman, 1995; Raz, Vaadia, & Bergman, 2000; Wich-mann et al., 1999;Wichmann & Soares, 2006). Sincemotor outputsfrom the basal ganglia emanate specifically from the GPi (Alexan-der, Crutcher, & DeLong, 1990; Kelly & Strick, 2004; Middleton &Strick, 2000), it seems likely that changes in GPi activity contributeto the development of parkinsonianmotor complications. Further-more, because motor outputs from GPi target the anterior ventro-lateral nucleus of the thalamus (VLa) (DeVito & Anderson, 1982;Kelly & Strick, 2004; Yoshida, Rabin, & Anderson, 1972), whichserves to relay signals between cortical areas (Guillery & Sherman,2002a, 2002b, 2002c; Haber, 2003), we and other authors havehypothesized that pathological GPi outputs may induce parkinso-nian signs by changing thalamic activity patterns or informationprocessing (Dorval, Kuncel, Birdno, Turner, & Grill, 2010; Dorvalet al., 2008; Garcia, D’Alessandro, Bioulac, & Hammond, 2005;Grill, Snyder, & Miocinovic, 2004; Montgomery & Baker, 2000; Xu,Hashimoto, Zhang, & Vitek, 2008) or, in particular, by compromis-ing thalamocortical (TC) relay (Cagnan et al., 2009; Guo, Rubin,McIntyre, Vitek, & Terman, 2008; Pirini, Rocchiand, Sensi, & Chiari,


2009; Rubin & Terman, 2004). This idea is consistent with the properties of TC neurons, specifically their tendency to fire rebound bursts when exposed to phasic synaptic inhibition, such as that induced by burstiness in the GPi under parkinsonian conditions. According to this viewpoint, DBS achieves its therapeutic efficacy for PD by restoring TC relay fidelity. Importantly, computational models and analysis suggest that returning STN, GPi, and TC activity patterns to their non-parkinsonian states is not necessary for achieving this goal, in as much as a variety of alternative activity patterns can also be associated with successful relay in computational models or alleviation of bradykinesia in human patients with parkinsonism (Dorval et al., 2010; Feng et al., 2007a, 2007b; Guo et al., 2008; Rubin & Terman, 2004).

In this paper, we study a form of STN DBS that has been suggested in the literature as an alternative to standard DBS, namely multi-site STN stimulation with delays between stimulation periods at different stimulation sites (Hauptmann et al., 2007; Hauptmann, Popovych, & Tass, 2005; Tass, 2003). We consider such stimulation with two types of current injection, one using periodic square pulses and another based on a local field potential signal recorded from the STN population (Hauptmann et al., 2007, 2005; Rosenblum & Pikovsky, 2004b; Tukhlina, Rosenblum, Pikovsky, & Kurths, 2007). To perform our investigation, we introduce these stimulation paradigms into a computational model based on our earlier work (Rubin & Terman, 2004; Terman, Rubin, Yew, & Wilson, 2002). This model consists of a small network of conductance-based STN, GPe (external segment of the globus pallidus), GPi, and TC neurons that, by design, generates parkinsonian activity patterns in the absence of stimulation. We simulate the delivery of an excitatory input train to the model TC neurons and compare TC relay performance across simulations without stimulation and simulations with coordinated reset or LFP-based delayed feedback stimulation of various amplitudes and periods. Although the forms of stimulation that we study were introduced previously (Hauptmann et al., 2007, 2005; Tass, 2003), this represents the first work in which they are incorporated into a biophysically-detailed basal ganglia network model and in which their impact on TC relay fidelity is explored. We find that both forms of multi-site stimulation, applied to the STN, regularize GPi outputs and significantly diminish TC relay errors. While both can completely suppress STN activity if introduced with a large enough amplitude, both can also restore TC relay performance without such a drastic effect. Moreover, multi-site delayed feedback stimulation based on the LFP in particular requires relatively small currents (see also Hauptmann et al., 2007, 2005; Tukhlina et al., 2007) and, unlike stimulation with a constant current of similar magnitude, maintains STN responsiveness to its own excitatory inputs (e.g., from the hyperdirect pathway). Hence, these results support the idea that multi-site delayed feedback stimulation of STN merits further consideration as a possible alternative to standard forms of DBS for PD.

2. The network model

We use a network of conductance-based, single-compartment model neurons adapted from earlier work by Rubin and Terman (2004) to explore how well different patterns of subthalamic nucleus stimulation can improve thalamic relay responses in parkinsonian conditions. The model includes neurons in the thalamus and several nuclei of the basal ganglia, namely the internal and external segments of the globus pallidus and the subthalamic nucleus. It is known that the subnetwork consisting of the external segment of the globus pallidus (GPe) and the subthalamic nucleus (STN) can fire in synchronized clusters when the coupling parameters are appropriately tuned (Best, Park, Terman, & Wilson, 2007; Terman et al., 2002). Furthermore, the rhythmic bursting activity of the STN and GPe clusters induces rhythmic bursting of

Fig. 1. Neuronal structure in the network model. Arrows labeled with a '−' sign represent inhibitory synaptic connections. Arrows labeled with a '+' sign are excitatory synaptic connections.

clusters of downstream neurons, in the internal segment of the globus pallidus (GPi). We consider this rhythmic clustered regime as the parkinsonian state and refer to the network in this state as the parkinsonian network.

Our main goal is to study stimulation patterns that can improve the fidelity with which thalamocortical (TC) relay neurons respond to, or relay, excitatory input signals in a parkinsonian network. For the parkinsonian network described in detail in Section 2.2, STN neurons form two synchronized clusters that induce rhythmic and bursty inhibitory GPi activity with strong correlations in burst times among GPi neurons. We show an example of STN clusters, the resulting rhythmic GPi inhibition, and the corresponding TC relay responses to excitatory inputs in Fig. 3. We will explore stimulation patterns that can eliminate the synchronized, clustered activity of STN neurons, which consequently will alter GPi firing in a way that facilitates TC relay responses.

We focus on the impact of two very different STN stimulation paradigms on TC relay performance in the parkinsonian network. One approach is to use coordinated reset stimulation of the STN neurons. The other is to use the local field potential measured from the population of STN neurons to drive closed-loop, delayed feedback stimulation of the STN neurons.

2.1. Model equations for each neuron type

We consider a network consisting of model TC, STN, GPe, and GPi neurons. Neurons in these areas are linked by various excitatory and inhibitory synaptic connections and receive certain external inputs, as depicted in Fig. 1. Both GPi and GPe receive excitatory inputs from STN, and GPe is subject to an inhibitory striatal input that is assumed to be constant in the model. The details of the architecture of connections between individual neurons within each area are discussed in Section 2.2. Next, we describe the conductance-based model equations for each neuron type in the model network in more detail. All specifics of the functions and parameter values used for each type of neuron in the model are given in the Appendix.

Thalamocortical (TC) relay neurons: The model for each TC neuron takes the form

Cm v′ = −IL − INa − IK − IT − IGi→TC + IE    (1)

h′ = (h∞(v) − h)/τh(v)

r′ = (r∞(v) − r)/τr(v).

In system (1), v denotes membrane potential, the evolution of which depends on IL = gL(v − vL), INa = gNa m∞^3(v) h(v − vNa), and IK = gK (0.75(1 − h))^4 (v − vK), which are leak, sodium, and potassium currents, respectively. Here we use a standard reduction in our expression for the potassium current, which decreases the dimensionality of the model by one variable (Rinzel, 1985). The current IT = gT p∞^2(v) r(v − vT) is a low-threshold calcium current,


where r is the inactivation and p∞^2(v) is the activation. Note that reversal potentials are given in mV, conductances in mS/cm², and time constants in ms. In all the neuron models, the membrane capacitance Cm is normalized to 1 µF/cm².

The current IGi→TC in system (1) represents the inhibitory input to the TC neuron model from the GPi, as discussed further in Section 2.4. IE denotes simulated excitatory synaptic signals to the TC neuron. We assume that these are sufficiently strong to induce a spike (in the absence of inhibition) and therefore may represent synchronized inputs from multiple presynaptic cells. We tune the parameters so that the TC cell yields a firing rate of roughly 12 Hz in the absence of inhibitory GPi and excitatory synaptic inputs. The parameter values chosen place the model TC neuron near a transition from silence to spontaneous oscillations. In the model, IE takes the form gE s(v − vE), where gE = 0.018 mS/cm², and s satisfies the equation

s′ = α(1 − s)exc(t) − βs,

where α = 0.8 ms⁻¹ and β = 0.25 ms⁻¹. The function exc(t) controls the onset and offset of the excitation: exc(t) = 1 during each excitatory input, whereas exc(t) = 0 between excitatory inputs. We used periodic exc(t), which was one of the cases considered in previous work (Guo et al., 2008; Rubin & Terman, 2004), where similar results were obtained with periodic and Poisson excitation. Specifically,

exc(t) = H(sin(2π t/p))(1 − H(sin(2π(t + d)/p))),

where the period p = 50 ms and duration d = 5 ms, and where H(x) is the Heaviside step function, such that H(x) = 0 if x < 0 and H(x) = 1 if x > 0. That is, exc(t) = 1 from time 0 up to time d, from time p up to time p + d, from time 2p up to time 2p + d, and so on. A baseline input frequency of 20 Hz is consistent with the high-pass filtering of corticothalamic inputs observed in vivo (Castro-Alamancos & Calcagnotto, 2001); at this input rate, the model TC cells rarely fire spontaneous spikes between inputs.
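
As a concrete illustration of this input model, the following minimal Python/NumPy sketch generates the periodic exc(t) defined above and integrates the gating equation for s by forward Euler; the 0.01 ms step size is an illustrative assumption, and this is a sketch rather than the simulation code used for the results in this paper.

    import numpy as np

    def exc(t, p=50.0, d=5.0):
        """Periodic excitation: a d-ms ON window once per p-ms period, else 0."""
        H = lambda x: np.heaviside(x, 0.0)
        return H(np.sin(2 * np.pi * t / p)) * (1.0 - H(np.sin(2 * np.pi * (t + d) / p)))

    # Forward-Euler integration of s' = alpha*(1 - s)*exc(t) - beta*s
    alpha, beta = 0.8, 0.25            # ms^-1, as given in the text
    dt = 0.01                          # ms; illustrative step size (an assumption)
    t = np.arange(0.0, 500.0, dt)
    s = np.zeros_like(t)
    for i in range(1, len(t)):
        s[i] = s[i - 1] + dt * (alpha * (1.0 - s[i - 1]) * exc(t[i - 1]) - beta * s[i - 1])

    # s rises sharply during each 5 ms input and decays between inputs
    print(f"peak s = {s.max():.3f}, trough s = {s.min():.3f}")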

STN Neurons: The STN voltage equation that we use, of the form

Cm v′ = −IL − INa − IK − IT − ICa − IAHP − IGe→Sn + Istim,

was introduced in Terman et al. (2002). All the currents and corresponding kinetics are the same except that we make some parameter adjustments so that STN firing patterns are more similar to those reported in vivo (Bevan, Jeremy, & Jérôme, 2006; Urbain et al., 2000; Urbain, Rentero, Gervasoni, Renaud, & Chouvet, 2002). IGe→Sn is the inhibitory input current from GPe to STN. Istim is the external stimulation applied to STN, which is either multi-site coordinated reset stimulation (CRS) or multi-site feedback stimulation based on the local field potential (LFP). Different types of stimulation will be discussed further in Sections 3 and 4.

GPe Neurons: The voltage of each model GPe neuron obeys the equation

Cm v′ = −IL − INa − IK − IT − ICa − IAHP − IGe→Ge − ISn→Ge + Iapp,

where IGe→Ge is the inhibitory input from other GPe cells, ISn→Ge is the excitatory input from STN cells, and Iapp is a constant external current that represents hyperpolarizing striatal input to all GPe cells.

GPi Neurons: The voltage equation for each model GPi neuron is similar to that for the GPe neurons, namely

Cm v′ = −IL − INa − IK − IT − ICa − IAHP − ISn→Gi − IGe→Gi + Iappi,

where ISn→Gi represents the excitatory input from STN to GPi, IGe→Gi is the inhibitory input from GPe to GPi, and Iappi is a constant external current that represents hyperpolarizing striatal input to all GPi cells.

Fig. 2. Network architecture. Arrows labeled with '+' and '−' signs represent excitatory and inhibitory connections, respectively. Arrows labeled with 'w+' denote weak excitatory connections.

2.2. Architecture of coupling between individual neurons

As shown previously (Terman et al., 2002), the STN and GPe subnetwork can generate both irregular asynchronous and synchronous activity (Best et al., 2007; Plenz & Kitai, 1999; Terman et al., 2002). Our model includes 16 STN neurons and 16 GPe neurons. We designed the structure of the STN/GPe loop in the model following the work on clustered rhythms in Terman et al. (2002), so that the STN cells will segregate into two rhythmically bursting clusters, with synchronized activity within each cluster.

The detailed structure of connections between STN and GPe neurons is depicted in Fig. 2. Ge and Sn represent subpopulations of GPe and STN neurons, respectively. In the two-cluster case, we can distinguish two subpopulations within each cluster, such that neurons within the same subpopulation provide synaptic inputs to the same targets. We use Kij, where K = Ge or Sn, i = 1, 2, and j = 1, 2, to denote subpopulation j within the ith cluster of type K neurons. For example, the first subpopulation of STN cluster one, Sn11, sends excitation to the first subpopulation of GPe cluster two, Ge21 (Fig. 2, +). The same subpopulation of STN neurons is also weakly coupled with the other half of the same GPe cluster, Ge22 (Fig. 2, w+). Each subpopulation of STN neurons is connected with two GPe subpopulations in an analogous way. Each subpopulation of GPe neurons inhibits one group of STN neurons, as is also illustrated in Fig. 2. Within each GPe subpopulation Geij, there are also local inhibitory connections.

The model also includes 16 GPi neurons, each receiving input from a single corresponding STN neuron. Thus, the rhythmic, bursty, synchronized outputs of each STN cluster induce rhythmic, bursty, synchronized activity in a corresponding group of GPi neurons. These GPi activity patterns mimic those seen experimentally in parkinsonian conditions. The network architecture is set up so that members of each such synchronized GPi group (Gi1 or Gi2) send synaptic inhibition to the same TC neuron, and hence each TC neuron receives a rhythmic inhibitory signal in the parkinsonian network (see Fig. 2), which disrupts the fidelity of TC relay responses to excitatory inputs, as discussed in more detail in Section 2.3.

2.3. TC relay responses and error index

We first define how we evaluate the TC relay fidelity. In the parkinsonian network described in Section 2.2, the synaptic input from GPi to TC (the top trace in Fig. 3(B) and (C)) is rhythmic and bursty. Although the TC neuron responds with a single spike to some of the excitatory inputs that it receives, others elicit either no spikes or multiple spikes (compare middle and bottom traces in Fig. 3(B) and (C)).

In this paper, we quantify relay performance of each TC neuron using a simple error index computed by dividing the total number of errors by the total number of excitatory inputs, namely

error index = (b + m)/n,    (2)


Fig. 3. STN clusters in the parkinsonian network. (A) shows that the 16 model STN neurons form two synchronized clusters. (B) and (C) show the membrane potentials of two TC neurons (bottom traces for each panel) responding to excitatory sensorimotor signals (middle trace), along with the total synaptic input the neuron receives from eight GPi neurons (top trace). Note that the vertical axis label applies to the voltage trace, while the latter two traces are placed and scaled arbitrarily. Also, observe that the total GPi synaptic input is rhythmic and bursty, representing parkinsonian conditions.

where n is the total number of excitatory inputs. In Eq. (2), b denotes the number of excitatory inputs to which a TC neuron gives a bad response consisting of more than one spike, either a burst response (typically) or a single-spike response followed after a delay, but before the next input, by additional spikes. The number m denotes the count of excitatory inputs that are missed by the TC neuron, in the sense that it fails to fire any spikes during a detection window. This definition of errors guarantees that at most one error is counted for each excitatory input. The detection window we use in this paper extends from the beginning of each excitatory input to 12 ms after each input. This error index was first introduced in Rubin and Terman (2004) and was used previously with the same error detection algorithm to quantify how different patterns of inhibitory GPi signals obtained from experimental recordings of normal and parkinsonian monkeys, with and without DBS (Hashimoto, Elder, Okun, Patrick, & Vitek, 2003), affect TC relay responses (Guo et al., 2008).
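
For concreteness, the error index of Eq. (2) can be computed from spike times and input onset times roughly as in the following Python sketch; the function name and the use of the 50 ms input period to delimit "before the next input" are our own assumptions, while the 12 ms detection window follows the text.

    import numpy as np

    def error_index(spike_times, input_times, window=12.0, period=50.0):
        """Eq. (2): (b + m) / n, counting at most one error per excitatory input.

        b: inputs answered with more than one spike before the next input;
        m: inputs with no spike inside the detection window (onset to onset + 12 ms).
        """
        spike_times = np.asarray(spike_times)
        b = m = 0
        for onset in input_times:
            in_window = np.sum((spike_times >= onset) & (spike_times <= onset + window))
            before_next = np.sum((spike_times >= onset) & (spike_times < onset + period))
            if in_window == 0:
                m += 1              # missed input
            elif before_next > 1:
                b += 1              # burst or extra spikes before the next input
        return (b + m) / len(input_times)

    # Example: inputs every 50 ms; one missed input and one burst out of four
    inputs = [0.0, 50.0, 100.0, 150.0]
    spikes = [3.0, 52.0, 54.0, 156.0]      # no spike for the 100 ms input
    print(error_index(spikes, inputs))     # -> 0.5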

2.4. Averaged GPi synaptic input to TC

In our network model, the synaptic input from the GPi to a TC neuron, IGi→TC, comes from a subgroup of GPi neurons. As illustrated in Fig. 2, the subgroup that sends input to TC1 is Gi1, and the subgroup Gi2 connects to TC2. Using vTCj to denote the membrane potential of neuron TCj, this input takes the form

IGij→TCj = gGi (vTCj − vGi) Σ_{k∈Ωj} s^k_Gij,    j = 1, 2.    (3)

Here, each Ωj is an index set for neurons in Gij, while gGi is the constant maximal conductance and vGi is the synaptic reversal potential for inhibition from GPi. Each s^k_Gij satisfies the equation

s′Gi = αGi(1 − sGi)S∞(v) − βGisGi (4)

where S∞(x) = (1 + e^{−(x+57)/2})^{−1} and v represents the membrane potential of the kth GPi neuron from subgroup Gij (in fact, the exact half-activation value of −57 mV in S∞ is not essential, as in our exploratory simulations, the GPi resting potential was always far enough below this value to avoid synaptic activation without threshold crossing). We also define

sg1 ≡ Σ_{k∈Ω1} s^k_Gi1    and    sg2 ≡ Σ_{k∈Ω2} s^k_Gi2,

where sg1 is the top trace in Fig. 3B and sg2 is the top trace in Fig. 3C.

Based on the form of Eq. (4), each s^k_Gij is between 0 and 1, and hence sgi ∈ [0, 8] for each i. In our simulations, we use the variability of the time-average of each sgi as an indicator of GPi rhythmicity. Specifically, we form histograms based on the frequency with which each sgi time course, averaged over 25 ms time windows, takes different values in bins that cover the range [0, 8]. We display 6 bins centered at 1 through 6, respectively, and each represents a subinterval of 1 ms/cm², except that all values less than 1.5 are placed in the 1 bin and all values greater than 5.5 are sorted into the 6 bin. In the parkinsonian network without stimulation, the average sgi values mostly fall into the 1 and 6 bins, as displayed in Fig. 4. This result occurs because GPi firing is rhythmic and bursty (see the top traces in Fig. 3B), such that GPi synaptic output is high during each burst and low between bursts. A few values do fall into the middle bins, due to transitions between bursting and quiescent phases. We shall see that very different results emerge when stimulation is applied to the STN neurons (Fig. 9 in Section 3.2 and Fig. 11 in Section 4.2).
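
The binning just described can be reproduced directly from an sg time course; the following minimal sketch (assuming a uniformly sampled trace, and using a crude square wave only as a stand-in for bursty GPi output) illustrates the 25 ms averaging and the 6-bin histogram.

    import numpy as np

    def sg_histogram(sg, dt, window_ms=25.0):
        """Histogram of windowed time-averages of sg over 6 bins centered at 1..6.

        Values below 1.5 go to the '1' bin and values above 5.5 to the '6' bin,
        matching the binning described in the text.
        """
        samples_per_window = int(round(window_ms / dt))
        n_windows = len(sg) // samples_per_window
        avgs = sg[:n_windows * samples_per_window].reshape(n_windows, -1).mean(axis=1)
        edges = [-np.inf, 1.5, 2.5, 3.5, 4.5, 5.5, np.inf]   # bins centered at 1..6
        counts, _ = np.histogram(avgs, bins=edges)
        return counts   # counts[i] is the number of windows falling in bin i+1

    # Example: a bursty (bimodal) stand-in sg trace spends most windows near 0 or 6
    dt = 0.05
    t = np.arange(0.0, 2000.0, dt)
    sg = 6.0 * (np.sin(2 * np.pi * t / 170.0) > 0)   # crude square-wave placeholder
    print(sg_histogram(sg, dt))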

3. Coordinated reset stimulation

We first investigate whether coordinated reset stimulation (CRS) can improve TC relay performance in the parkinsonian network. In a previous study, we evaluated the ability of a model TC neuron to relay excitatory inputs under the influence of inhibitory GPi signals generated from experimental data (Guo et al., 2008). We found that GPi firing patterns, and corresponding synaptic outputs, produced in parkinsonian conditions without high frequency stimulation of STN switch rhythmically between low and high phases. Under the assumption that all STN neurons receive exactly the same train of high frequency pulses, CRS in which the pulse train is periodically on significantly improved TC relay by inducing tonic, high frequency GPi activity that yielded approximately constant effective synaptic output levels (Guo et al., 2008; Rubin & Terman, 2004). Continual stimulation with strong, high frequency pulses has drawbacks in vivo, however, including relatively large energy requirements and potential for damage to surrounding tissue. Thus, we use our parkinsonian network model to explore whether there is a milder and more efficient stimulation technique that can improve TC relay responses. What we mean by milder is a signal with a smaller amplitude (or intensity) a0 of the pulses delivered to STN cells. Efficient here refers to a form of stimulation, such as coordinated reset stimulation (CRS), that can be on for a certain time interval and then off for a rest interval, rather than applied continuously.


Fig. 4. Histograms of time-averaged sg1 (left) and sg2 (right) in the parkinsonian network, in ms/cm². Both histograms include two dominant bins, centered at 1 and 6, due to the quiescent and bursting phases, respectively, of GPi activity.

Fig. 5. 16 STN cells (solid circles) on a square grid with the center (plus sign) where an electrode can measure the local field potential. The four square boxes are the stimulation sites.

3.1. Methods

The CRS given by the formula

Istimk = a0 h(t) fk(t) fhi(t)    (5)

is applied to STN neurons through four stimulation sites as shown in Fig. 5. The 16 model STN neurons, represented in the figure by solid circles, are arranged in a four by four grid centered at the + sign.

The first row of the square grid contains STN1, STN2, STN3, and STN4, from left to right. STN5 to STN8, STN9 to STN12, and STN13 to STN16 are on the second, third and fourth rows, respectively. We assume the distance d between two adjacent horizontal or vertical STN neurons is 0.1. The four small square boxes in the figure are the stimulation sites, which we number as 1, 2, 3, 4, proceeding clockwise from the upper left. Note that each stimulation site is at the center of the square formed by the four nearest STN cells.

There are several components in Eq. (5). Since H denotes the Heaviside step function, the function h(t) = H(t − l1)(1 − H(t − l2)) equals 1 on (l1, l2) and 0 outside of this interval and thus simply specifies that the overall stimulation period starts at time l1 and stops at time l2. Within this period, stimulation at electrode k is turned on and off as specified by the function fk(t), k = 1, . . . , 4. Each fk(t) is a periodic step function with period 2.5τ0 for a constant parameter τ0. Within each full period is a time interval of length τ0 during which fk(t) = 1, followed by an interval of length 1.5τ0 on which fk(t) = 0. We call these two intervals the ON period and OFF period, respectively, for electrode k. The function fhi(t) = H(sin(ρ1 t) − a1) introduces a train of high frequency pulses (≥100 Hz). The product fk(t)fhi(t) therefore takes the form of a 2.5τ0-periodic function consisting of a train of high frequency pulses delivered for τ0 time units and equal to 0 during 1.5τ0 time units, repeated periodically. Finally, a0 is the amplitude of the stimulation. Since stimulation is distance-dependent, the four neurons directly surrounding site k receive the same stimulation. The stimulation administered at the four different sites has the same overall period 2.5τ0, with the same rate of high-frequency pulse delivery within the ON period, but there are phase shifts between stimulation sites so that the ON periods at the four sites do not coincide. The phase shift between the ON period start times at any two consecutively numbered stimulation sites is fixed at 42.5 ms, which is one fourth of the period of GPi bursting activity in the absence of stimulation. Fig. 6 illustrates the relationships among the stimulation times at the four stimulation sites.
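
To make the structure of Eq. (5) explicit, the following sketch assembles the signal delivered at site k. The parameter values mirror those used elsewhere in the paper (a0 = 48, τ0 = 42.5 ms, ρ1 = 0.93, a1 = 0.7, stimulation window 500–2000 ms), but the implementation itself is only an illustrative reconstruction, not the authors' simulation code.

    import numpy as np

    H = lambda x: np.heaviside(x, 0.0)

    def crs_current(t, k, a0=48.0, tau0=42.5, shift=42.5, l1=500.0, l2=2000.0,
                    rho1=0.93, a1=0.7):
        """Coordinated reset stimulation at site k (Eq. (5)): a0 * h(t) * f_k(t) * f_hi(t).

        h(t):    1 between the overall stimulation onset l1 and offset l2.
        f_k(t):  2.5*tau0-periodic step function, ON for tau0 and OFF for 1.5*tau0,
                 with the ON window of site k delayed by (k-1)*shift.
        f_hi(t): high-frequency pulse train H(sin(rho1*t) - a1).
        """
        h = H(t - l1) * (1.0 - H(t - l2))
        phase = (t - (k - 1) * shift) % (2.5 * tau0)
        f_k = 1.0 * (phase < tau0)
        f_hi = H(np.sin(rho1 * t) - a1)
        return a0 * h * f_k * f_hi

    t = np.arange(0.0, 2500.0, 0.01)
    I_site1 = crs_current(t, k=1)
    print("fraction of time site 1 delivers current:", np.mean(I_site1 > 0))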

We explore effects of stimulation over a region in (τ0, a0) parameter space. The τ0 values used are centered around τ0 = 42.5 ms, which is one fourth of the period of GPi bursting activity in the absence of stimulation. The range of a0 values was selected empirically, spanning roughly from minimal values that give any change in TC relay performance to maximal values at which it is clear that results have saturated. As discussed below, we place extra focus on an area in the τ0–a0 plane where we find that relatively mild and efficient CRS can give good TC relay performance.

3.2. Results

CRS with particular choices of period and amplitude can reduce TC relay error dramatically. One example, generated with τ0 = 42.5 ms and a0 = 48, is shown in Fig. 7. In this case, the STN neurons form two clusters when there is no CRS, from 0 to 500 ms into the simulation. From 500 to 2000 ms, when CRS is on, the two clear STN clusters are no longer there, although there is still structure to the STN firing pattern. The stimulation is turned off again and the STN neurons gradually return to a two cluster firing pattern. Fig. 7(B) and (C) show the synaptic input from GPi to TC (the top trace in both B and C) and the TC voltage trace (middle), illustrating the key result for our focus, namely that the TC spikes are more faithful to the excitatory signals with stimulation on than before or after the stimulation period. Unlike the bimodal distribution of GPi synaptic outputs arising in the parkinsonian case (Fig. 4), the sgi values under stimulation cluster in a few consecutive bins in the middle of their possible range (Fig. 8).

The simulation outcomes illustrated in Figs. 7 and 8, as well as results from many other simulations, suggest that CRS can restore TC relay fidelity by changing the firing pattern of STN neurons. These transitions in activity patterns are typical for simulations across a range of τ0 and a0 values, although the rate of convergence to a steady clustered state after stimulation seems to speed up with larger τ0 and a0 and the precise STN activity patterns


Fig. 6. The periodic step functions f1, f2, f3, and f4 used to administer the four stimulation signals at four sites. f1 and f2 are shown with solid lines, while f3 and f4 are given by dashed lines. f2 is identical to f1 but with a phase shift of 42.5 ms; there is the same phase shift between f2 and f3 and between f3 and f4. Within stimulation ON periods during which fk(t) = 1, the stimulation signal consists of a train of high frequency pulses, given by fhi(t) (not shown here).

Fig. 7. Example of CRS. A: Spike times of 16 STN cells. CRS is on from 500 ms to 2000 ms, with τ0 = 42.5 ms, a0 = 48. Before stimulation, there are two fairly synchronized clusters. During stimulation, the two clusters are no longer there, while they gradually re-emerge after stimulation. B: Relay performance of TC1. C: Relay performance of TC2. In both B and C, the top trace is the GPi synaptic input, the middle trace is the excitatory signal, and the bottom trace is TC voltage (see Fig. 3). TC relay performance improves dramatically during the CRS interval.

Fig. 8. Histograms of sg1 (left) and sg2 (right) with PHFS. Note that values cluster in a small number of central bins.

observed during and after stimulation depend on the times of onset and offset of stimulation and the stimulation parameter values. Although a systematic exploration of these dependencies is tangential to the investigation presented here, we do observe that a complete desynchronization of STN activity is not necessary; some break-up of clusters suffices to yield improvements in TC relay.

The period τ0 and the amplitude a0 of the CRS are the two major parameters that can be used to optimize CRS results. First, we discuss the two extreme cases. The ON period τ0 can be small and very close to the period of the function fhi(t) that delivers the high frequency pulse. In this case, the stimulation is almost the same as the standard deep brain stimulation used clinically;


Fig. 9. Error index values (color coded) for TC1 over a range of stimulation parameters τ0 and a0. Histograms around the outside of the plot show the distribution of sg1 for some particular choices of τ0, a0 (upper left: τ0 = 48.5 ms, a0 = 50; upper right: τ0 = 46.5 ms, a0 = 58; lower left: τ0 = 40.5 ms, a0 = 46; lower right: τ0 = 42.5 ms, a0 = 60).

correspondingly, the TC performance is good for all a0 (see the bottom rows, showing results for τ0 = 16.5, 18.5 ms, in Fig. 9). The other extreme case is when a0 is strong enough to drive STN neurons to completely overcome their pathological bursting pattern (the right columns, where the a0 value is bigger than 65, in Fig. 9). When the stimulation amplitude is higher than 65, the stimulation period is not critical, and overall the error is on the low side, unless the ON period τ0 becomes too large (e.g., 60 ms). For larger τ0, the OFF period 1.5τ0 is long enough to let the STN recover its parkinsonian bursting rhythm between stimulation periods, compromising relay. What we are interested in is finding a regime of optimal TC performance with a relatively weak stimulation amplitude. After running simulations of the computational model for various values of τ0 and a0, we found a region that yields low error index values for both TC cells without excessive a0 values, given by τ0 values near one-fourth of the period of the GPi bursting and a0 between 45 and 60 (rectangular box in Fig. 9).

4. Multi-site delayed feedback stimulation

The second type of stimulation that we apply to our parkinsonian network is multi-site delayed feedback stimulation based on the LFP of the STN population. Similar stimulation has been studied in other neuron models in previous work (Hauptmann et al., 2005; Popovych, Hauptmann, & Tass, 2006; Rosenblum & Pikovsky, 2004b).

There is no clear evidence on how the LFP is related to synaptic and ionic currents of a single neuron. Computational models sometimes simulate LFP by summing the membrane potential changes of all neurons of the network (Ursino & Cara, 2006). Some authors adopt a simple computation as the sum of the absolute values of AMPA and GABA currents of pyramidal cells (Mazzoni, Panzeri, Logothetis, & Brunel, 2008), while others consider a '+' sign for excitatory connections (or currents) and a '−' sign for inhibitory connections (or currents) (Tsirogiannis, Tagaris, Sakas, & Nikita, 2010). For multi-compartmental neuron models, the LFP has been computed as the low-pass filtered extracellular potential generated by the transmembrane currents across all compartments (Pettersen & Einevoll, 2008; Pettersen, Hagen, & Einevoll, 2008; Protopapas, Vanier, & Bower, 1998).

According to current-source density analysis (Holt & Koch, 1998; Leung, 1990; Mitzdorf, 1985; Nunez, 1981), the field potential depends on the linear sum of potentials from sources (currents injected into the extracellular medium) and sinks (current removed from the extracellular medium). The extracellular field potential Φ is governed by the Poisson equation

∇·(σ∇Φ) = Iv    (6)

where σ is the conductivity of the extracellular medium, which is assumed to be isotropic and homogeneous, and Iv is the current source density (CSD) computed as the sum of the membrane currents of the relevant neuronal population. For a single point source in an infinite homogeneous medium, the solution to Eq. (6) is given by

Φ = Re Iv/(4πr)    (7)

where r is the distance from the point source, Iv is the current from that point source, and Re = 1/σ is the constant extracellular resistance.

4.1. Methods

We use the current-source density to compute the local field potential in the center of the extracellular space in which the 16 STN neurons are embedded (Fig. 5). The local field potential, which reflects the activity of all 16 STN cells, can be recorded by an electrode at the center of the population (the + sign in Fig. 5) (Chen et al., 2006; Yoshida et al., 2010). Since we have 16 STN neurons, 16 presynaptic GPe cells, and stimulation administered at four sites, the point sources that we sum include the membrane currents from all 16 STN cells, the stimulation currents, and the inhibitory synaptic currents (with '−' signs) from presynaptic GPe cells. This sum appears in the formula for the LFP,

VLFP(t) = (Re/4π) Σ_{j=1}^{N} Ij/rj,    (8)


Fig. 10. Activity of STN and TC neurons with multi-site delayed feedback stimulation (MDFS). A: The 16 STN neurons form two synchronized clusters before the multi-site feedback stimulation is turned on at 500 ms. During the stimulation period from 500 to 2000 ms, synchronized STN clusters no longer exist. In B and C, the two TC cells respond to excitatory sensorimotor signals (middle trace) faithfully between the stimulation onset and offset due to the change of total GPi synaptic input (top trace). Parameter values for this simulation are a = 0.0025, b = 0.00136, τ = 42.5 ms, µ = 0.0003.

which is based on Eq. (7). In Eq. (8), the constant Re is the extracellular resistance, which is set to 1, and rj is the distance between neuron j and the LFP measuring electrode. Ij is the total current source from neuron j, which consists of ionic currents Ij^ion and external currents Ij^external, including both the stimulation and the synaptic currents from the presynaptic GPe neurons. Hence, we have Ij = Ij^ion + Ij^external, where

Ij^ion = ILj + INaj + IKj + IAHPj + ICaj + ITj    and    Ij^external = −IGe→Snj + Istimj.
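
A schematic implementation of Eq. (8) is given below; it assumes that each neuron's total current and its distance to the recording electrode are already available as arrays, and the placeholder currents and example grid coordinates are ours, not values from the paper.

    import numpy as np

    def lfp(I_total, r, Re=1.0):
        """Point-source approximation of the LFP at the electrode (Eq. (8)).

        I_total : array with each STN neuron's total current I_j = I_j^ion + I_j^external
        r       : array with each neuron's distance to the recording electrode
        """
        return (Re / (4.0 * np.pi)) * np.sum(I_total / r)

    # Example: 4x4 grid with spacing 0.1 and the electrode at the grid center
    xs, ys = np.meshgrid(np.arange(4) * 0.1, np.arange(4) * 0.1)
    r = np.sqrt((xs - 0.15) ** 2 + (ys - 0.15) ** 2).ravel()
    I_total = np.random.uniform(-1.0, 1.0, 16)     # placeholder currents, not model output
    print(lfp(I_total, r))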

The simulated LFP signal VLFP(t) is rescaled and filtered by a low-pass harmonic oscillator to generate the stimulation signal. We then apply the stimulation signal via four sites with time delays (Hauptmann et al., 2007; Hauptmann, Omel'chenko, Popovych, Maistrenko, & Tass, 2008; Hauptmann et al., 2005; Popovych et al., 2006; Rosenblum & Pikovsky, 2004a, 2004b). The low-pass filtering of VLFP(t) is implemented by the equation

x′′ + ax′ + bx = µVLFP(t)    (9)

where a and b are parameters selected to satisfy the condition a² < 4b to guarantee that (9) represents a harmonic oscillator. The parameter µ controls the strength of the stimulation. We first choose the values a = 0.0025 and b = 0.00136 so that the period of the harmonic oscillator matches the natural period of the bursts present in the STN clusters. Later, in Section 4.3, we use various values of a and b to explore how the frequency of the filter can affect the desynchronization of STN clusters.
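
The filter of Eq. (9) can be integrated step by step alongside the network; a minimal sketch (semi-implicit Euler with an illustrative step size, and a sinusoidal stand-in for VLFP) is shown below. With the baseline values a = 0.0025 and b = 0.00136, the filter period 4π/√(4b − a²) is roughly 170 ms, close to the period of the parkinsonian GPi bursting in the model.

    import numpy as np

    def integrate_filter(V_lfp, dt, a=0.0025, b=0.00136, mu=0.0003):
        """Integrate x'' + a x' + b x = mu * V_LFP(t) (Eq. (9)) by semi-implicit Euler."""
        assert a * a < 4.0 * b, "parameters must satisfy the oscillation condition a^2 < 4b"
        x = np.zeros_like(V_lfp)
        y = 0.0                                  # y = x'
        for i in range(1, len(V_lfp)):
            y += dt * (mu * V_lfp[i - 1] - a * y - b * x[i - 1])
            x[i] = x[i - 1] + dt * y
        return x

    dt = 0.05                                    # ms; illustrative step size
    t = np.arange(0.0, 4000.0, dt)
    V_lfp = np.sin(2 * np.pi * t / 170.0)        # stand-in LFP oscillating at the burst period
    x = integrate_filter(V_lfp, dt)
    print(f"filter output amplitude after transient: {np.abs(x[len(x) // 2:]).max():.4f}")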

The stimulation that the jth STN neuron receives from four sites is given by

Istimj = (h(t)/n) Σ_{k=1}^{4} e^{−2 dist(j,k)} xk(t − (k − 1)τ)    (10)

where h(t) defines the stimulus onset and offset times as in Eq. (5), n is the number of STN cells, dist(j, k) is the distance between the jth neuron and the kth stimulation site, and xk(t − (k − 1)τ) is the time delayed signal from Eq. (9) that is delivered at the kth stimulation site. We assume that the STN neurons are arranged in a square grid as shown in Fig. 5 with a distance d = 0.1 between two adjacent horizontal or vertical grid points. We place each stimulation site at the center of a group of four STN cells as seen in Fig. 5. Hence we can calculate the dist(j, k) in the two-dimensional Euclidean space. For example, the distance between STN2 and stimulation site 4 is dist(2, 4) = √(0.25² + 0.5²).
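
Assembling these pieces, the per-neuron feedback current of Eq. (10) can be written as in the sketch below; the distance matrix and the stand-in filter output are placeholders (we do not reproduce the exact grid coordinates here), and the 1/n normalization follows the reconstruction of Eq. (10) above.

    import numpy as np

    def mdfs_current(j, t, x_of_t, dists, tau=42.5, n=16, l1=500.0, l2=2000.0):
        """Eq. (10): distance-weighted, time-delayed feedback from the four sites.

        dists[j-1, k-1] holds dist(j, k), the Euclidean distance between STN
        neuron j and stimulation site k on the grid of Fig. 5.
        """
        if not (l1 < t < l2):              # h(t): overall stimulation window
            return 0.0
        total = 0.0
        for k in range(1, 5):
            total += np.exp(-2.0 * dists[j - 1, k - 1]) * x_of_t(t - (k - 1) * tau)
        return total / n                   # 1/n factor as in the reconstructed Eq. (10)

    # Example with placeholder distances and a dummy filter output
    dists = np.full((16, 4), 0.25)                     # illustrative values only
    x_of_t = lambda t: np.sin(2 * np.pi * t / 170.0)   # stand-in for the filtered LFP
    print(mdfs_current(2, t=1000.0, x_of_t=x_of_t, dists=dists))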

4.2. Results

We found that, when the parameters are properly tuned, multi-site delayed feedback stimulation (MDFS) can suppress the output of STN neurons in our parkinsonian network. TC relay errors are correspondingly reduced dramatically. Fig. 10(A) shows that under multi-site feedback stimulation, the STN neurons do not burst or form synchronized clusters during the stimulation period, from 500 to 2000 ms. Fig. 10(B) and Fig. 10(C) show the improved TC relay performance during stimulation. During the MDFS, the averaged values of sgi, i = 1, 2, are low. Histograms of these values show that these GPi output measures mainly lie in the bin centered at 1 (Fig. 11) due to the suppression of activity in the STN population and hence of synaptic excitation from STN to GPi. The suppression is not due to individual STN neurons or GPe neurons becoming dominant. Rather, it is a population effect. Fig. 12(A) and (B) show the temporal pattern of ionic currents for two representative STN neurons. These currents switch between positive and negative values over time. The sum of ionic currents of STN neurons in one cluster also exhibits positive and negative trends (Fig. 12(C) and (D)). The sum of ionic currents over all 16 STN cells (Fig. 13A) apparently never becomes positive when the stimulation is on, nor does the external current Iexternal (Fig. 13B), which includes both synaptic currents from all GPe cells and the stimulation current (Fig. 13(C) and (D)).

4.3. Multi-site delayed feedback stimulation (MDFS) with different stimulation strengths and periods

We investigated the effectiveness of MDFS with various choices of parameter values in the low-pass filter. Specifically, we fixed a at the small value a = 0.01 to ensure that the oscillation condition a² < 4b would hold for a wide range of b (see Eq. (9)), and then we varied b. We reasoned that a powerful stimulation signal would


Fig. 11. Histograms of sgi values, representing the average strength of GPi synaptic input to each TC cell, during MDFS with parameter values as in Fig. 10. A: sg1 . B: sg2 .

Fig. 12. Currents associated with STN neurons, scaled for use in the delayed feedback signal. In all four panels, MDFS is on from 500 to 2000 ms, with parameter values as in Fig. 10. A: The scaled sum of ionic currents of STN cell 5 in cluster one, i.e., (µRe/4π) Σ_{ion=Na,K,AHP,T,Ca,l} I5^ion/r5. B: The scaled sum of ionic currents of STN cell 8 in cluster two, i.e., (µRe/4π) Σ_{ion=Na,K,AHP,T,Ca,l} I8^ion/r8. C: The scaled sum of the ionic currents of all 8 STN cells in cluster one, (µRe/4π) Σ_{j∈Ω1} Σ_{ion=Na,K,AHP,T,Ca,l} Ij^ion/rj. D: The scaled sum of the ionic currents of all 8 STN cells in cluster two, (µRe/4π) Σ_{j∈Ω2} Σ_{ion=Na,K,AHP,T,Ca,l} Ij^ion/rj.

Fig. 13. Currents associated with STN neurons. In all four figures, delayed feedback stimulation is on from 500 to 2000 ms, with parameter values as in Fig. 10. A: The total ionic currents of all 16 STN neurons (rescaled), i.e., the plot of (µRe/4π) Σ_{j=1:16} Σ_{ion=Na,K,AHP,T,Ca,l} Ij^ion/rj. B: The sum of synaptic currents from all 16 GPe neurons and stimulation currents to all 16 STN neurons (rescaled), i.e., the plot of (µRe/4π) Σ_{j=1:16} (−IGe→Snj + Istimj)/rj. C: The sum of synaptic currents from all 16 GPe neurons to all 16 STN neurons, (µRe/4π) Σ_{j=1:16} (−IGe→Snj)/rj. D: The sum of stimulation currents to all 16 STN neurons, (µRe/4π) Σ_{j=1:16} Istimj/rj. Istimj is given in Eq. (10).

be generated most efficiently when the filter period, 4π/√(4b − a²), is close to the natural period of STN bursting. Thus, we varied b in the interval [0.000035, 0.006], chosen because when b = 0.000035 and b = 0.0055, the period of the filter is approximately twice and one half of the natural period of STN bursting, respectively, and examined network performance over a range of µ values for each fixed b. Both of these extreme b values are marked on Fig. 14. As illustrated in Fig. 14, for each choice of b, there is a corresponding range of µ values for which MDFS gives good TC relay performance. Specifically, the region between the lines in Fig. 14 is the area of b and µ in which TC error is between 0 and


Fig. 14. The error index with MDFS applied for various low-pass filter parameter values, with a = 0.01. The region between the lines is where the combination of b and µ selected gives good TC relay performance (0 ≤ error index ≤ 0.3) without complete suppression of STN activity. Within the gray box, b ∈ (0.00128, 0.00142), the period of the filter is close to the natural period of STN bursting, and µ ∈ (0.00025, 0.00036).

0.3. As the period of the filter decreases, stronger stimulation is required to achieve an error index value of less than 0.3. The area below the good performance region gives error index values above 0.3. When parameters are selected in the area above the good performance region, STN activity is completely suppressed.
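
Since the scan over b is organized by the filter period, it is convenient to evaluate that period directly; a small helper under the baseline parameter values quoted in Section 4.1 is shown below.

    import numpy as np

    def filter_period(a, b):
        """Period of the harmonic filter x'' + a x' + b x = drive: 4*pi / sqrt(4b - a^2)."""
        return 4.0 * np.pi / np.sqrt(4.0 * b - a * a)

    # Baseline values from Section 4.1 give a period close to the parkinsonian
    # GPi bursting period of the model (roughly 170 ms, i.e., 4 x 42.5 ms)
    print(f"{filter_period(0.0025, 0.00136):.1f} ms")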

To further explore relay performance under MDFS stimulation, we ran additional simulations for a selection of vectors in the (µ, a, b) parameter space, respecting the bounds 0.00025 < µ < 0.00036, 0.002 < a < 0.0031, and 0.00128 < b < 0.00142 with a² < 4b. When µ is too small, the stimulation does not desynchronize STN clusters well enough to improve TC relay performance (top row in Fig. 15). We find that effective desynchronization can be obtained, without complete cessation of STN activity, for µ values from 0.00028 to 0.00033, for a range of values of a, b and fixed τ = 42.5 ms. For example, when µ = 0.0003, for 0.00128 < b < 0.00132 and 0.002 < a < 0.0031, the TC error is low (see the right figure in the second row in Fig. 15).

The use of stimulation derived from the LFP signal recorded from STN neurons forms a closed-loop feedback control mechanism. When the stimulation amplitude µ is increased, the suppression of STN neurons is strengthened, and the STN activity can be shut down (such that none of the STN neurons fire any spikes) for a short period of time, after which isolated spikes emerge. In theory, there is no complete shutdown of STN neurons because the LFP is pulled upward toward zero when STN activity drops off, and therefore the stimulation current at each site moves toward zero. This effect results in progressively less suppression, until the STN neurons are released to resume firing. In Fig. 16, all STN neurons are completely shut down from the beginning of the stimulation to more than half way through the stimulation period (about 1400 ms into the simulation, with the stimulation on from 500 to 2000 ms). The corresponding µVLFP(t) and stimulation current increase

Fig. 15. Error index (color-coded) for various stimulation amplitudes µ and stimulation periods that are determined by a and b.


Fig. 16. STN network dynamics and LFP signal for relatively large µ. Parameter values associated with stimulation are as in Fig. 10 except that µ = 0.00031. A: The STN neurons are completely shut down when stimulation starts. B: The loss of STN activity reduced the magnitude of the LFP signal. C: When the stimulation signal becomes small enough, STN neurons are released from suppression. Then STN cells start firing irregular spikes and the LFP signal strengthens, until the stimulation is turned off.

smoothly during the same period until they are very close to zero, releasing the STN cells from suppression. Since we have a finite stimulation duration, a larger µ can prolong the suppression of STN neurons sufficiently that it lasts throughout the whole stimulation period. Correspondingly, the TC relay performance is perfect (e.g., the right figure on the bottom row in Fig. 15) because TC neurons can respond to their excitatory inputs without interference from GPi inhibition. Although we assume that perfect TC relay is desirable, it seems unlikely that elimination of STN and GPi activity would represent an optimal state.

4.4. Comparison with constant negative current stimulation

Given that MDFS restores TC relay fidelity by reducing STN neuron firing, it is reasonable to consider the simpler intervention of applying a constant negative current to STN neurons, without recording and feeding back an LFP signal. We used our model to investigate whether a constant negative stimulation current would work the same as delayed feedback stimulation. We find that there is a narrow range of constant negative current strengths that can induce fairly good TC relay performance without eliminating STN activity. Once the negative current is outside of that range, it either does not have a significant effect on TC relay or it completely suppresses STN neurons. Thus, if extreme STN suppression is to be avoided, then constant negative current stimulation (CNCS) must be much more carefully tuned than MDFS to achieve significantly improved TC relay.

There are also other advantages of MDFS, relative to constant negative current stimulation. First, the MDFS that we describe is a closed-loop feedback control mechanism. Any changes in STN population activity will lead to automatic adjustment of the LFP signal, as discussed in Section 4.3, eliminating the need for manual retuning. Second, we can consider how the suppressed STN neurons will respond to excitatory cortical inputs, such as from the hyperdirect pathway, with constant negative current stimulation (CNCS) and with MDFS, as well as in the absence of DBS, with standard HFS, and with CRS for comparison. To do so, we add an extra term, Icor→Sn, representing excitatory cortical input, to the STN voltage equation, such that it becomes

Cm v′Sn = −IL − INa − IK − IT − ICa − IAHP − IGe→Sn + Istim + Icor→Sn.

Icor→Sn consists of a sequence of square pulses, which we generated using a Poisson process since we are not aware of any data to suggest that another fast time scale structure is present in this input stream. We find that this excitatory input to the STN neurons overwhelms the negative constant stimulus and causes bursty STN activity (with a different frequency compared to the bursts arising in the parkinsonian network). This STN activity significantly compromises TC relay fidelity. In the MDFS case, this excitatory input to the STN induces isolated spikes in STN cells and maintains good TC relay performance. To compare across all of the stimulation (or non-stimulation) types listed above, we performed 40 independent trials of TC relay responses in each case. In each trial with stimulation, the stimulation was turned on from 1000 to 2500 ms. We performed trials with periodic excitatory inputs to TC cells, to match earlier simulations in the paper, as well as with Poisson inputs to TC cells, to allow for a statistical similarity between cortical inputs to different areas. We find that the TC relay error index is always high, near non-stimulation levels, with negative constant stimulation, while it becomes lower for HFS and lower still for CRS and MDFS; see Fig. 17. Although the high frequency pulse train used in HFS is the same as the one used in CRS (function fhi(t) in Section 3.1, with the same parameter values for ρ1 and a1), CRS yields a better relay performance, as seen in Fig. 18, with the same strength a0 of stimulation current. To achieve this better performance with HFS, a0 must be raised to a higher level, which would increase energy consumption and could possibly damage tissue.
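
The cortical input Icor→Sn is just a Poisson-timed train of square pulses; one rough way to generate such a train (the pulse rate, width, and amplitude used here are illustrative assumptions, since the text does not specify them) is sketched below.

    import numpy as np

    def poisson_pulse_train(t_end, rate_hz=20.0, width_ms=5.0, amplitude=1.0,
                            dt=0.05, seed=0):
        """Square pulses whose onset times form a Poisson process (exponential gaps)."""
        rng = np.random.default_rng(seed)
        t = np.arange(0.0, t_end, dt)
        I = np.zeros_like(t)
        onset = rng.exponential(1000.0 / rate_hz)            # first onset, in ms
        while onset < t_end:
            I[(t >= onset) & (t < onset + width_ms)] = amplitude
            onset += rng.exponential(1000.0 / rate_hz)       # next inter-onset interval
        return t, I

    t, I_cor = poisson_pulse_train(t_end=3000.0)
    n_segments = int(np.sum((I_cor[1:] > 0) & (I_cor[:-1] == 0)))
    print(f"{n_segments} pulse segments in 3 s of simulated input")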

4.5. Multi-site delayed feedback stimulation for heterogeneous TC cells

As a final step, we verified that the effectiveness of MDFS at restoring TC neuron relay fidelity is not specific to the baseline parameter values that we use for our model TC neuron. To do so, we generated a population of 40 model TC neurons with heterogeneity in their parameter values by independently selecting gL, gNa, and gT from normal distributions of standard deviations 0.01, 0.05, and 0.08, respectively, centered around their baseline values, as was done in previous work (Guo et al., 2008). We ran simulations in which time courses of GPi inhibition to the TC neurons were produced by the upstream STN-GPe loop and these fixed time


Fig. 17. The error index of a TC neuron on 40 different trials with Poisson excitatory inputs to STN. Whisker plots show mean (red line), 25–75 percentile range (blue box), 95% confidence interval (black), and outliers (red plus signs) for parkinsonian activity without stimulation (PD), constant negative current stimulation (CNCS), high frequency stimulation (HFS), coordinated reset stimulation (CRS), and multi-site delayed feedback stimulation (MDFS), in the case of periodic inputs to the TC neuron (left) and the case of Poisson inputs to the TC neuron (right). In both plots, the stimulation parameters of HFS and CRS are a0 = 48, ρ1 = 0.93 and a1 = 0.7. The ON period for CRS is τ0 = 41.5 ms and the constant phase shift is 42.5 ms. Parameters for MDFS in both plots are µ = 0.00031, a = 0.0024, b = 0.00129. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 18. Error index values for 40 model TC neurons with heterogeneous parameter values. All the baseline parameters of TC neurons are given in the Appendix. The solid squares are error index values derived from the parkinsonian network without stimulation. Open circles are error index values for the same parkinsonian network with CRS. The amplitude of stimulation strength is a0 = 48, and the ON period of CRS is τ0 = 41.5 ms. The pluses are error index values for the parkinsonian network with MDFS. The stimulation strength µ = 0.00031, and the parameters of the low-pass filter are fixed at a = 0.0024, b = 0.00129.

courses were used as synaptic inputs to all members of the heterogeneous TC population. This was done with no applied stimulation as well as with the CRS and MDFS forms of stimulation considered in Fig. 17. All members of the TC population showed significant decreases in error index in the parkinsonian network with CRS or MDFS stimulation, compared to the case without stimulation, as illustrated in Fig. 18. The baseline parameter values used for the TC neurons are given in the Appendix and the stimulation parameters are given in the caption of Fig. 18.
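
The heterogeneous population itself is easy to generate; a short sketch under the stated standard deviations, with the baseline conductances taken from the Appendix, follows (the random seed and the dictionary layout are our own choices).

    import numpy as np

    def sample_tc_parameters(n=40, seed=1):
        """Draw n heterogeneous TC parameter sets: gL, gNa, gT from normal
        distributions centered at their baseline (Appendix) values with the
        standard deviations quoted in the text."""
        rng = np.random.default_rng(seed)
        baseline = {"gL": 0.14, "gNa": 3.0, "gT": 5.0}
        sd = {"gL": 0.01, "gNa": 0.05, "gT": 0.08}
        return [{key: rng.normal(baseline[key], sd[key]) for key in baseline}
                for _ in range(n)]

    population = sample_tc_parameters()
    print(population[0])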

5. Discussion

In this paper, we consider a network of synaptically-connected, conductance-based model neurons from the STN, GPe and GPi in the basal ganglia, based on previous modeling work (Guo et al., 2008; Rubin & Terman, 2004; Terman et al., 2002). The model is tuned to generate activity patterns featuring synchronized, rhythmic bursts fired by clusters of neurons, with different clusters bursting in alternation, which we take to represent a parkinsonian state. Inhibitory outputs from the GPi are rhythmic (Figs. 3 and 4) and target model TC neurons that also receive excitatory input trains. We find that the TC neurons are unable to respond reliably to these inputs, in agreement with earlier theory and simulations (Guo et al., 2008; Rubin & Terman, 2004). Earlier computational studies on alternatives to standard DBS paradigms for Parkinson's disease have identified several promising approaches involving the delivery of stimulation at multiple sites within the STN (reviewed in Hauptmann et al., 2007). We test two such approaches, one involving time-shifted coordinated reset stimulation with a pre-determined pattern and the other involving feedback of filtered LFP signals recorded from within the STN (Hauptmann et al., 2007, 2005; Rosenblum & Pikovsky, 2004b; Tukhlina et al., 2007). Both approaches significantly improve TC relay fidelity, the former by reducing the rhythmicity of the net inhibitory input from GPi to each TC neuron and the latter by reducing STN activity. Thus, both do appear to be worthy of additional consideration for possible therapeutic use with PD patients.

A trivial way to achieve reliable TC relay in our model is to eliminate STN activity. Although LFP-based delayed feedback stimulation does suppress STN firing and prevent bursting, it does not cut out STN activity completely. Indeed, as STN activity wanes, a reduction of the LFP signal results, as noted by previous authors (Hauptmann et al., 2005; Tukhlina et al., 2007), until a balance between the activity and the stimulation signal is achieved. Importantly, the LFP signal is generated directly by the STN network and its inputs, and hence would not need to be fine-tuned by a clinician to achieve its effects, unlike a prescribed suppressive current. Moreover, although the elimination of STN activity and restoration of TC relay can be achieved by an imposed inhibitory stimulation of a similar magnitude to the time-averaged LFP signal, such stimulation yields abnormal STN responses to cortical inputs, as might arrive through the hyperdirect pathway, with an associated compromise of relay, whereas effective relay persists despite cortical inputs under MDFS.

Previous authors have assessed the performance of STN stimulation based on recorded LFP signals in terms of its desynchronizing effect on model neurons. In particular, Hauptmann et al. (Hauptmann et al., 2007) review results showing that, in a network of STN neurons modeled using the conductance-based Morris–Lecar equations, such stimulation greatly reduces an order parameter quantifying phase synchronization without significantly lowering the STN burst rate. Although abnormally high synchronization within various basal ganglia nuclei is correlated with the presence of parkinsonian symptoms, however, a complete understanding of how DBS works requires the elucidation of a causal connection from synchrony to motor outputs. Our work suggests one possibility, namely that synchrony within STN translates into synchrony of GPi activity and outputs to TC neurons, compromising TC relay to motor cortex (Guo et al., 2008; Rubin & Terman, 2004). Part of the novelty of this study lies in the assessment of the effectiveness of LFP-based stimulation in terms of its impact on TC relay, and this metric leads us to the conflicting prediction that effective LFP-based delayed feedback stimulation does alter the STN burst rate; specifically, this stimulation must be strong enough to reduce synchronized STN bursting in order to achieve therapeutic benefit. Importantly, this prediction does not represent a challenge to the practical utility of LFP-based delayed feedback stimulation, since our results indicate that the STN can still respond to inputs in the presence of this stimulation. While additional insights may be gained from future simulations involving larger network models,


a thorough assessment of the relative merits of MCRS, LFP-based delayed feedback stimulation, and other forms of high frequency DBS will require a better understanding of the role of STN activity patterns in motor processing. Meanwhile, combining our results with the finding that simulated standard DBS becomes more effective when it reaches a larger portion of the STN population (Hahn & McIntyre, 2010), we can at least conjecture that the use of multiple stimulation sites will be advantageous as long as bursting is sufficiently reduced or desynchronized in a large enough STN subpopulation.

This perspective is also supported by data and simulations showing that therapeutic STN-DBS reduces bursting in GPi (Hahn & McIntyre, 2010; Hahn et al., 2008) and by computational findings that the temporal profile of the inhibitory conductance from GPi to TC neurons is a key predictor of TC rebound burst firing and relay performance (Cagnan et al., 2009; Dorval et al., 2010; Pirini et al., 2009). One possible conclusion from these and other studies is that the regularity of DBS is essential to its success (Dorval et al., 2010, 2008). In particular, standard STN-DBS more effectively relieved bradykinesia in PD patients when it was regular than when it was irregular, and the introduction of aperiodicity into standard DBS stimulation signals to STN compromised its improvement of TC relay performance (Dorval et al., 2010). Our findings, however, suggest that classifying stimulation patterns by regularity alone may be insufficient for predicting their therapeutic utility. Irregular patterns, such as the MDFS that we consider, may still achieve improved relay, and hence represent candidates for therapeutic application, as long as they change GPi output in a way that allows TC neurons to respond to excitatory inputs reliably. The pathway from GPi to VLa thalamus is part of an anatomically-distinct motor circuit that projects to motor cortical areas (Baron, Wichmann, Ma, & DeLong, 2002; DeLong & Wichmann, 2007; Samuel et al., 1997), which is highly suggestive of a connection between activity in thalamic targets of GPi and parkinsonian symptoms. However, the link from TC responses to cortical and muscular outputs associated with specific parkinsonian signs remains to be investigated in future experimental and computational work.

Finally, it is important to consider the feasibility of multi-site feedback stimulation methods, particularly those incorporating LFP recording and feedback. Previous modeling has suggested that the LFP signal recorded locally within one STN subpopulation may be effectively used to suppress synchrony in another subpopulation (Hauptmann et al., 2005), although again, effects beyond synchronization remain to be considered. MDFS electrodes are under development for testing in the MPTP primate model of parkinsonism (Tass, 2009). LFP signals have been recorded from PD patients during stimulation-related surgery (Chen et al., 2006) and the structure of LFP signals has been linked to the severity of certain parkinsonian symptoms (Chen et al., 2010). These promising developments, along with the desirable properties of MDFS approaches observed in simulations and the need for novel therapeutic approaches that address limitations of current DBS paradigms, suggest that the continued investigation of multi-site delayed feedback stimulation protocols for PD could represent an important direction for future work.

Acknowledgements

JR received support from NSF awards DMS 0716936 and DMS 1021701.

Appendix

In the following text, we use gi to denote conductances in mS/cm2 and vi to denote reversal potentials in mV, where the subscripts i are from the set {L, Na, K, Ca, AHP, T, E, Gi, Ge → Ge, Ge → Sn, Ge → Gi, Sn → Ge, Sn → Gi}. τ, with a subscript or both a superscript and a subscript, is a time constant in units of ms. All α and β with subscripts are rate constants in units of ms−1. Other parameters are constants either without units or with units given in the following text.

Functions for TC neurons in system (1):

m∞(v) = 1/(1 + e−(v+37)/7), p∞(v) = 1/(1 + e−(v+60)/6.2),

h∞(v) = 1/(1 + e(v+41)/4), r∞(v) = 1/(1 + e(v+84)/4),

τh(v) = 1/(ah(v) + bh(v)), τr(v) = 0.4(28 + e−(v+25)/10.5),

ah(v) = 0.128e−(46+v)/18, bh(v) = 4/(1 + e−(23+v)/5).

Parameters for TC neurons:

gL = 0.14, gNa = 3, gK = 5, gT = 5, gE = 0.018, gGi = 0.009, vL = −72, vNa = 50, vK = −90, vT = 90, vE = 0, vGi = −85, p = 50 ms, d = 5 ms, winoff = 12 ms.
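For readers who want to evaluate the TC gating functions above numerically, the following minimal Python sketch transcribes them directly; it is not part of the published code, and the function names are chosen here only for illustration.

    import numpy as np

    # Steady-state activation/inactivation curves for the TC cell in system (1).
    def m_inf(v): return 1.0 / (1.0 + np.exp(-(v + 37.0) / 7.0))
    def p_inf(v): return 1.0 / (1.0 + np.exp(-(v + 60.0) / 6.2))
    def h_inf(v): return 1.0 / (1.0 + np.exp((v + 41.0) / 4.0))
    def r_inf(v): return 1.0 / (1.0 + np.exp((v + 84.0) / 4.0))

    # Voltage-dependent time constants (ms).
    def a_h(v):   return 0.128 * np.exp(-(46.0 + v) / 18.0)
    def b_h(v):   return 4.0 / (1.0 + np.exp(-(23.0 + v) / 5.0))
    def tau_h(v): return 1.0 / (a_h(v) + b_h(v))
    def tau_r(v): return 0.4 * (28.0 + np.exp(-(v + 25.0) / 10.5))

    # Example evaluation near rest:
    print(m_inf(-65.0), h_inf(-65.0), tau_h(-65.0), tau_r(-65.0))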

GPi currents:

IL(v) = gL(v − vL), INa = gNa (m∞(v))³ h (v − vNa), IK = gK n⁴ (v − vK),
IT = gT a∞³(v) r (v − vCa), ICa = gCa s∞²(v) (v − vCa),
IAHP = gAHP (v − vK) ([Ca]/([Ca] + k1)),
ISn→Gi = gSn→Gi sSn→Gi (v − vSn→Gi), where the equation for sSn→Gi is listed under the STN equations, and
IGe→Gi = gGe→Gi sGe→Gi (v − vGe→Gi), where the equation for sGe→Gi is listed under the GPe equations.
Iapp = −1 µA is a constant applied current.

GPi equations and functions:

n′ = φn(n∞(v) − n)/τn(v), h′ = φh(h∞(v) − h)/τh(v), r′ = φr(r∞(v) − r)/τr,
[Ca]′ = ϵ(−ICa − IT − kCa[Ca]),
s′Gi = α(1 − sGi)S∞(v) − βGi sGi, where S∞(v) is given in Section 2.1.
X∞(v) = 1/(1 + e−(v−θX)/σX), where X = m, n, h, r, a, s, and
τX(v) = τ⁰X + τ¹X/(1 + e−(v−θτX)/στX), where X = n, h.

GPi parameters:

gL = 0.1, gNa = 120, gK = 30, gT = 0.5, gCa = 0.1, gAHP = 30, gSn→Gi = 0.5, gGe→Gi = 1,
vL = −55, vNa = 55, vK = −80, vCa = 120, vGe→Gi = −100, vSn→Gi = 0,
τ⁰n = 0.05, τ¹n = 0.27, τ⁰h = 0.05, τ¹h = 0.27, τr = 30,
φr = 1, φn = 0.1, φh = 0.05,
k1 = 30, kCa = 15, ϵ = 0.0001 ms−1,
θr = −70, θm = −37, θn = −50, θh = −58, θa = −57, θs = −35, α = 2, θτn = −40, θτh = −40,
σm = 10, σn = 14, σh = −12, σr = −2, σa = 2, σs = 2, στn = −12, στh = −12,
βGi = 0.08, kCa = 15.

STN currents:

IL, INa, IK, ICa, IAHP are as given above for the GPi neuron, and IT = gT a∞³(v) b∞(r) (v − vCa). The synaptic current from GPe to STN is the following:
IGe→Sn = gGe→Sn Σ_{j∈Λ} s_j,Ge→Sn (v − vGe→Sn), where Λ is a subgroup of GPe cells and the equation for sGe→Sn is provided under the GPe equations.

For the stimulation current Istim, see details in Sections 3.1 and 4.1.

STN equations and functions:

The n, h, r, [Ca] equations and the functions X∞(v), τX(v) are the same as given above for GPi neurons, except that there is no r∞(v) used and we introduce b∞(r) = 1/(1 + e(r−θb)/σb) − 1/(1 + e−θb/σb).


The synaptic input from STN to GPe and GPi is described as:

s′Sn→Ge = αSn→Ge(1 − sSn→Ge) s∞(v − 30) − βSn→Ge sSn→Ge,
s′Sn→Gi = αSn→Gi(1 − sSn→Gi) s∞(v − 30) − βSn→Gi sSn→Gi,
for s∞ defined under the GPi equations and functions.

STN Parameters:

gL = 2.25, gNa = 37.5, gK = 45, gT = 0.5, gCa = 0.5, gAHP = 9, gGe→Sn = 0.9,
vL = −60, vNa = 55, vK = −80, vCa = 140, vGe→Sn = −100,
τ⁰n = 1, τ¹n = 100, τ⁰h = 1, τ¹h = 500, τ⁰r = 7.1, τ¹r = 17.5,
φr = 0.5, φn = 0.75, φh = 0.75,
k1 = 15, kCa = 22.5, ϵ = 5 × 10−5,
θr = −67, θm = −30, θn = −32, θh = −39, θa = −63, θs = −39, θb = 0.25,
θτn = −80, θτh = −57, θτr = 68,
σm = 15, σn = 8, σh = −3.1, σr = −2, σa = 7.8, σs = 8, σb = 0.07,
στn = −26, στh = −3, στr = −2.2,
αSn→Ge = 5, αSn→Gi = 1, βSn→Ge = 1, βSn→Gi = 0.05, wk = 0.45.

The CNCS current is fixed at −15.6.

HFS stimulation: Istim,k = a0 h(t) fhi(t), where ρ1 = 0.93, a1 = 0.7, a0 = 48.

CRS parameters: ρ1 = 0.93, a1 = 0.7, τ = 42.5, a0 ∈ [36, 100], τ0 ∈ [16.5 ms, 60.5 ms].

LFP parameters are all given in the text.

GPe currents:

IL, INa, IK, ICa, IAHP are modeled as given above for the GPi neuron. The synaptic currents to GPe are:
ISn→Ge = gSn→Ge Σ_{j∈Λ} sSn→Ge (v − vSn→Ge), where Λ is a subgroup of STN neurons and the equation for sSn→Ge is given under the STN equations.
IGe→Ge = gGe→Ge Σ_{j∈Λ} sGe→Ge (v − vGe→Ge), where Λ is a subgroup of GPe neurons and the equation for sGe→Ge is the same as that for sGe→Sn below.

Iapp = −1.2 is a constant applied current.

GPe equations and functions:

The n, h, r, [Ca] equations and the functions X∞(v), τX(v) are as given above for the GPi neuron.

The synaptic output from GPe to STN and GPi is described as:

s′Ge→Sn = αGe→Sn(1 − sGe→Sn) s∞(v − 20) − βGe→Sn sGe→Sn,
s′Ge→Gi = αGe→Gi(1 − sGe→Gi) s∞(v − 20) − βGe→Gi sGe→Gi,
for s∞(v) defined under the GPi equations and functions.

GPe parameters:

Most parameters for GPe are the same as those for GPi. We only list those that have different values and the additional ones not present in the GPi model. These are

gSn→Ge = 0.18, gGe→Ge = 0.01, vSn→Ge = 0, vGe→Ge = −80,
αGe→Sn = 2, αGe→Gi = 1, βGe→Sn = 0.04, βGe→Gi = 0.1.

References

Alexander, G. E., Crutcher, M. D., & DeLong, M. R. (1990). Basal ganglia thalamocortical circuits: parallel substrates for motor, oculomotor, ‘‘prefrontal’’ and ‘‘limbic’’ functions. Progress in Brain Research, 85, 119–146.

Baron, M. S., Wichmann, T., Ma, D., & DeLong, M. R. (2002). Effects of transient focal inactivation of the basal ganglia in parkinsonian primates. The Journal of Neuroscience, 22, 592–599.

Bergman, H., Wichmann, T., Karmon, B., & DeLong, M. (1994). The primate subthalamic nucleus. II. Neuronal activity in the MPTP model of parkinsonism. Journal of Neurophysiology, 72, 507–520.

Best, J., Park, C., Terman, D., & Wilson, C. (2007). Transitions between irregular and rhythmic firing patterns in excitatory–inhibitory neuronal networks. Journal of Computational Neuroscience, 23, 217–235.

Bevan, M. D., Jeremy, A. F., & Jérôme, B. (2006). Cellular principles underlying normal and pathological activity in the subthalamic nucleus. Current Opinion in Neurobiology, 16, 621–628.

Boraud, T., Bezard, E., Guehl, D., Bioulac, B., & Gross, C. (1998). Effects of L-dopa on neuronal activity of the globus pallidus externalis (GPe) and globus pallidus internalis (GPi) in the MPTP-treated monkey. Brain Research, 787, 157–160.

Brown, P., Oliviero, A., Mazzone, P., Insola, A., Tonali, P., & Lazzaro, V. D. (2001). Dopamine dependency of oscillations between subthalamic nucleus and pallidum in Parkinson's disease. The Journal of Neuroscience, 21, 1033–1038.

Cagnan, H., Meijer, H. G. E., van Gils, S. A., Krupa, M., Heida, T., Rudolph, M., et al. (2009). Frequency-selectivity of a thalamocortical relay neuron during Parkinson's disease and deep brain stimulation: a computational study. European Journal of Neuroscience, 30, 1306–1317.

Castro-Alamancos, M., & Calcagnotto, M. (2001). High-pass filtering of corticothalamic activity by neuromodulators released in the thalamus during arousal: in vitro and in vivo. Journal of Neurophysiology, 85, 1489–1497.

Chen, C. C., Chan, H. L., Tu, P. H., Lee, S. T., Lu, C. S., & Brown, P. (2010). Complexity of subthalamic 13–35 Hz oscillatory activity directly correlates with clinical impairment in patients with Parkinson's disease. Experimental Neurology, 224(1), 234–240.

Chen, C. C., Pogosyan, A., Zrinzo, U. L., Tisch, S., Limousin, P., Ashkan, K., et al. (2006). Intra-operative recordings of local field potentials can help localize the subthalamic nucleus in Parkinson's disease surgery. Experimental Neurology, 198, 214–221.

DeLong, M. R., & Wichmann, T. (2007). Circuits and circuit disorders of the basal ganglia. Archives of Neurology, 64, 20–24.

Deuschl, G., Schade-Brittinger, C., Krack, P., Volkmann, J., Schäfer, K., Bötzel, K., et al. (2006). A randomized trial of deep-brain stimulation for Parkinson's disease. The New England Journal of Medicine, 355, 896–908.

DeVito, J. L., & Anderson, M. E. (1982). An autoradiographic study of the efferent connections of the globus pallidus in Macaca mulatta. Brain Research, 46, 107–117.

Dorval, A. D., Kuncel, A. M., Birdno, M. J., Turner, D. A., & Grill, W. M. (2010). Deep brain stimulation alleviates parkinsonian bradykinesia by regularizing pallidal activity. Journal of Neurophysiology, 104, 911–921.

Dorval, A., Russo, G., Hashimoto, T., Xu, W., Grill, W., & Vitek, J. (2008). Deep brain stimulation reduced neuronal entropy in the MPTP-primate model of Parkinson's disease. Journal of Neurophysiology, 100, 2807–2818.

Feng, X., Shea-Brown, E., Rabitz, H., Greenwald, B., & Kosut, R. (2007a). Optimal deep brain stimulation of the subthalamic nucleus—a computational study. Journal of Computational Neuroscience, 23(3), 265–282.

Feng, X., Shea-Brown, E., Rabitz, H., Greenwald, B., & Kosut, R. (2007b). Toward closed-loop optimization of deep brain stimulation for Parkinson's disease: concepts and lessons from a computational model. Journal of Neural Engineering, 4, L14–L21.

Garcia, L., D'Alessandro, G., Bioulac, B., & Hammond, C. (2005). High-frequency stimulation in Parkinson's disease: more or less? Trends in Neurosciences, 28, 209–216.

Grill, W., Snyder, A., & Miocinovic, S. (2004). Deep brain stimulation creates an informational lesion of the stimulated nucleus. Neuroreport, 15, 1137–1140.

Guillery, R., & Sherman, S. M. (2002a). The role of thalamus in the flow of information to the cortex. Philosophical Transactions of the Royal Society of London Series B Biological Sciences, 357, 1695–1708.

Guillery, R., & Sherman, S. M. (2002b). The thalamus as a monitor of motor outputs. Philosophical Transactions of the Royal Society of London Series B Biological Sciences, 357, 1809–1821.

Guillery, R., & Sherman, S. M. (2002c). Thalamic relay functions and their role in corticocortical communication: generalizations from the visual system. Neuron, 33, 163–175.

Guo, Y., Rubin, J. E., McIntyre, C. C., Vitek, J. L., & Terman, D. (2008). Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model. Journal of Neurophysiology, 99, 1477–1492.

Haber, S. (2003). The primate basal ganglia: parallel and integrative networks. Journal of Chemical Neuroanatomy, 2, 317–330.

Hahn, P. J., & McIntyre, C. C. (2010). Modeling shifts in the rate and pattern of subthalamopallidal network activity during deep brain stimulation. Journal of Computational Neuroscience, 28(3), 425–441.

Hahn, P. J., Russo, G. S., Hashimoto, T., Miocinovic, S., Xu, W., McIntyre, C. C., et al. (2008). Pallidal burst activity during therapeutic deep brain stimulation. Experimental Neurology, 211(1), 243–251.

Hashimoto, T., Elder, C., Okun, M., Patrick, S., & Vitek, J. (2003). Stimulation of the subthalamic nucleus changes the firing pattern of pallidal neurons. The Journal of Neuroscience, 23, 1916–1923.

Hauptmann, C., Omel'chenko, O., Popovych, O. V., Maistrenko, Y., & Tass, P. A. (2007). Control of spatially patterned synchrony with multisite delayed feedback. Physical Review E, 76, 066209.

Hauptmann, C., Omel'chenko, O., Popovych, O. V., Maistrenko, Y., & Tass, P. A. (2008). Desynchronizing the abnormally synchronized neural activity in the subthalamic nucleus: a modeling study. Expert Review of Medical Devices, 4, 633–650.

Hauptmann, C., Popovych, O., & Tass, P. A. (2005). Effectively desynchronizing brain stimulation based on a coordinated delayed feedback stimulation via several sites: a computational study. Biological Cybernetics, 93, 463–470.

Holt, G. R., & Koch, C. (1998). Electrical interactions via the extracellular potential near cell bodies. Journal of Computational Neuroscience, 6(2), 169–184.

Hurtado, J., Rubchinsky, L., Sigvardt, K., Wheelock, V., & Pappas, C. (2005). Temporal evolution of oscillations and synchrony in GPi/muscle pairs in Parkinson's disease. Journal of Neurophysiology, 93, 1569–1584.


Kelly, R. M., & Strick, P. L. (2004). Macro-architecture of basal ganglia loops with the cerebral cortex: use of rabies virus to reveal multisynaptic circuits. Progress in Brain Research, 143, 449–459.

Leung, L. W. S. (1990). Field potentials in the central nervous system: recording, analysis, and modeling. In Neurophysiological techniques: applications to neural systems. Neuromethods, 15, 277–312.

Levy, R., Hutchison, W., Lozano, A., & Dostrovsky, J. (2003). High-frequency synchronization of neuronal activity in the subthalamic nucleus of parkinsonian patients with limb tremor. The Journal of Neuroscience, 20, 7766–7775.

Magnin, M., Morel, A., & Jeanmonod, D. (2000). Single-unit analysis of the pallidum, thalamus, and subthalamic nucleus in parkinsonian patients. Neuroscience, 96, 549–564.

Mazzoni, A., Panzeri, S., Logothetis, N. K., & Brunel, N. (2008). Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Computational Biology, 4(12), doi:10.137.

McIntyre, C. C., & Hahn, P. (2010). Network perspectives on the mechanisms of deep brain stimulation. Neurobiology of Disease, 38, 329–337.

Middleton, F. A., & Strick, P. L. (2000). Basal ganglia output and cognition: evidence from anatomical, behavioral, and clinical studies. Brain and Cognition, 42, 183–200.

Mitzdorf, U. (1985). Current source-density method and application in cat cerebral cortex: investigation of evoked potentials and EEG phenomena. Physiological Reviews, 65, 37–100.

Montgomery, E., Jr., & Baker, K. (2000). Mechanism of deep brain stimulation and future technical developments. Neurological Research, 22, 259–266.

Nini, A., Feingold, A., Slovin, H., & Bergman, H. (1995). Neurons in the globus pallidus do not show correlated activity in the normal monkey, but phase-locked oscillations appear in the MPTP model of parkinsonism. Journal of Neurophysiology, 74, 1800–1805.

Nunez, P. L. (1981). Electric fields of the brain. New York: Oxford University Press.

Pettersen, K. H., & Einevoll, G. T. (2008). Amplitude variability and extracellular lowpass filtering of neuronal spikes. Biophysical Journal, 784–802.

Pettersen, K. H., Hagen, E., & Einevoll, G. T. (2008). Estimation of population firing rates and current source densities from laminar electrode recordings. Journal of Computational Neuroscience, 24, 291–313.

Pirini, M., Rocchi, L., Sensi, M., & Chiari, L. (2009). A computational approach to investigate different targets in deep brain stimulation for Parkinson's disease. Journal of Computational Neuroscience, 26, 91–107.

Plenz, D., & Kitai, S. (1999). A basal ganglia pacemaker formed by the subthalamic nucleus and external globus pallidus. Nature, 400, 677–682.

Popovych, O. V., Hauptmann, C., & Tass, P. A. (2006). Control of neuronal synchrony by nonlinear delayed feedback. Biological Cybernetics, 95, 69–85.

Protopapas, A., Vanier, M., & Bower, J. M. (1998). Simulating large networks of neurons. In Methods in neuronal modeling: from ions to networks (pp. 461–498).

Raz, A., Vaadia, E., & Bergman, H. (2000). Firing patterns and correlations of spontaneous discharge of pallidal neurons in the normal and tremulous 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine vervet model of parkinsonism. The Journal of Neuroscience, 20, 8559–8571.

Rinzel, J. (1985). Bursting oscillations in an excitable membrane model. In B. Sleeman, & R. Jarvis (Eds.), Ordinary and partial differential equations (pp. 304–316). New York: Springer-Verlag.

Rodriguez-Oroz, M. C., Obeso, J. A., Lang, A. E., Houeto, J.-L., Pollak, P., Rehncrona, S., et al. (2005). Bilateral deep brain stimulation in Parkinson's disease: a multicentre study with 4 years follow-up. Brain, 128, 2240–2249.

Rosenblum, M. G., & Pikovsky, A. S. (2004a). Controlling synchronization in an ensemble of globally coupled oscillators. Physical Review Letters, 92, 114102.

Rosenblum, M. G., & Pikovsky, A. S. (2004b). Delayed feedback control of collective synchrony: an approach to suppression of pathological brain rhythms. Physical Review E, 70, 041904.

Rubin, J. E., & Terman, D. (2004). High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model. Journal of Computational Neuroscience, 16, 211–235.

Samuel, M., Ceballos-Baumann, A. O., Turjanski, N., Boecker, H., Gorospe, A., Linazasoro, G., et al. (1997). Pallidotomy in Parkinson's disease increases supplementary motor area and prefrontal activation during performance of volitional movements: an H2 15O PET study. Brain, 120, 1301–1313.

Tass, P. (2003). A model of desynchronizing deep brain stimulation with a demand-controlled coordinated reset of neural subpopulations. Biological Cybernetics, 89, 81–88.

Tass, P. (2009). Personal communication.

Terman, D., Rubin, J. E., Yew, A. C., & Wilson, C. J. (2002). Activity patterns in a model for the subthalamopallidal network of the basal ganglia. The Journal of Neuroscience, 22(7), 2963–2976.

Tsirogiannis, G. L., Tagaris, G. A., Sakas, D., & Nikita, K. S. (2010). A population level computational model of the basal ganglia that generates parkinsonian local field potential activity. Biological Cybernetics, 102, 155–176.

Tukhlina, N., Rosenblum, M., Pikovsky, A., & Kurths, J. (2007). Feedback suppression of neural synchrony by vanishing stimulation. Physical Review E, 75, 011918.

Urbain, N., Gervasoni, D., Souliere, F., Lobo, L., Rentero, N., Windels, F., et al. (2000). Unrelated course of subthalamic nucleus and globus pallidus neuronal activities across vigilance states in the rat. European Journal of Neuroscience, 12, 3361–3374.

Urbain, N., Rentero, N., Gervasoni, D., Renaud, B., & Chouvet, G. (2002). The switch of subthalamic neurons from an irregular to a bursting pattern does not solely depend on their GABAergic inputs in the anesthetic-free rat. The Journal of Neuroscience, 22, 8665–8675.

Ursino, M., & Cara, G. (2006). Travelling waves and EEG patterns during epileptic seizure: analysis with an integrate and fire network. Journal of Theoretical Biology, 242, 171–178.

Volkmann, J. (2004). Deep brain stimulation for the treatment of Parkinson's disease. Journal of Clinical Neurophysiology, 21, 6–17.

Wichmann, T., Bergman, H., Starr, P., Subramanian, T., Watts, R., & DeLong, M. (1999). Comparison of MPTP-induced changes in spontaneous neuronal discharge in the internal pallidal segment and in the substantia nigra pars reticulata in primates. Experimental Brain Research, 125, 397–409.

Wichmann, T., & DeLong, M. R. (2006). Deep brain stimulation for neurologic disorders. Neuron, 52, 197–204.

Wichmann, T., & Soares, J. (2006). Neuronal firing before and after burst discharges in the monkey basal ganglia is predictably patterned in the normal state and altered in parkinsonism. Journal of Neurophysiology, 95, 2120–2133.

Xu, W., Hashimoto, G., Zhang, T., & Vitek, J. (2008). Subthalamic nucleus stimulation modulates thalamic neuronal activity. The Journal of Neuroscience, 28, 11916–11924.

Yoshida, F., Martinez-Torres, I., Pogosyan, A., Holl, E., Petersen, E., Chen, C. C., et al. (2010). Value of subthalamic nucleus local field potentials recordings in predicting stimulation parameters for deep brain stimulation in Parkinson's disease. Journal of Neurology, Neurosurgery & Psychiatry, 81(8), 885–889.

Yoshida, M., Rabin, A., & Anderson, M. E. (1972). Monosynaptic inhibition of pallidal neurons by axon collaterals of caudatonigral fibers. Experimental Brain Research, 15, 333–347.


Thalamocortical Relay Fidelity Varies Across Subthalamic Nucleus Deep Brain Stimulation Protocols in a Data-Driven Computational Model

Yixin Guo,1,* Jonathan E. Rubin,2,* Cameron C. McIntyre,3 Jerrold L. Vitek,4 and David Terman5

1Department of Mathematics, Drexel University, Philadelphia, Pennsylvania; 2Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania; 3Departments of Biomedical Engineering and 4Neuroscience, Cleveland Clinic, Cleveland; and 5Department of Mathematics, The Ohio State University, Columbus, Ohio

Submitted 27 September 2007; accepted in final form 29 December 2007

Guo Y, Rubin JE, McIntyre CC, Vitek JL, Terman D. Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model. J Neurophysiol 99: 1477–1492, 2008. First published January 2, 2008; doi:10.1152/jn.01080.2007. The therapeutic effectiveness of deep brain stimulation (DBS) of the subthalamic nucleus (STN) may arise through its effects on inhibitory basal ganglia outputs, including those from the internal segment of the globus pallidus (GPi). Changes in GPi activity will impact its thalamic targets, representing a possible pathway for STN-DBS to modulate basal ganglia-thalamocortical processing. To study the effect of STN-DBS on thalamic activity, we examined thalamocortical (TC) relay cell responses to an excitatory input train under a variety of inhibitory signals, using a computational model. The inhibitory signals were obtained from single-unit GPi recordings from normal monkeys and from monkeys rendered parkinsonian through arterial 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine injection. The parkinsonian GPi data were collected in the absence of STN-DBS, under sub-therapeutic STN-DBS, and under therapeutic STN-DBS. Our simulations show that inhibition from parkinsonian GPi activity recorded without DBS compromised TC relay of excitatory inputs compared with the normal case, whereas TC relay fidelity improved significantly under inhibition from therapeutic, but not sub-therapeutic, STN-DBS GPi activity. In a heterogeneous model TC cell population, response failures to the same input occurred across multiple TC cells significantly more often without DBS than in the therapeutic DBS case and in the normal case. Inhibitory signals preceding successful TC relay were relatively constant, whereas those before failures changed more rapidly. Computationally generated inhibitory inputs yielded similar effects on TC relay. These results support the hypothesis that STN-DBS alters parkinsonian GPi activity in a way that may improve TC relay fidelity.

I N T R O D U C T I O N

The delivery of high-frequency stimulation to the subthalamic nucleus (STN) or other target areas, through a surgically implanted electrode, has become a widely used therapeutic option for the treatment of Parkinson's disease (PD) and other neurological disorders (Benabid et al. 2006). The mechanisms underlying the effectiveness of deep brain stimulation (DBS), however, remain unclear and under debate. Multiple studies have shown that pathological rhythmicity emerges in certain subsets of cells within the basal ganglia in parkinsonism (Bergman et al. 1994; Brown et al. 2001; Hurtado et al. 1999, 2005; Levy et al. 2003; Magnin et al. 2000; Nini et al. 1995; Raz et al. 2000). Therefore DBS for PD may work by eliminating or modifying such pathological signals. Initial attempts to address this concept focused on the possibility that DBS blocks neural activity, creating a physiologic lesion (Beurrier et al. 2001; Filali et al. 2004; Magarinos-Ascone et al. 2002; Tai et al. 2003; Welter et al. 2004). According to this theory, suppression of thalamic firing by inhibition from basal ganglia output areas, such as the pallidum, is reduced by DBS, and through this reduction DBS restores the capability of the thalamus to engage in appropriate movement-related activity (Benabid et al. 2001; Benazzouz et al. 2000; Obeso et al. 2000; Olanow and Brin 2001; Olanow et al. 2000).

Recent experimental and computational results, however, suggest that neurons directly downstream from stimulated regions may in fact be activated by DBS (Anderson et al. 2003; Hashimoto et al. 2003; Hershey et al. 2003; Jech et al. 2001; McIntyre et al. 2004; Miocinovic et al. 2006; Paul et al. 2000; Windels et al. 2000, 2003). These results support the alternative idea that DBS works by replacing pathological rhythms with regularized firing activity (Foffani and Priori 2006; Foffani et al. 2003; Garcia et al. 2005; Grill et al. 2004; Meissner et al. 2005; Montgomery and Baker 2000; Vitek 2002). In past theoretical work, we offered a computational implementation of this idea (Rubin and Terman 2004). We used Hodgkin-Huxley-type models of cells in the indirect pathway of the basal ganglia (Terman et al. 2002) to generate inhibitory output trains, which served as synaptic inputs to a model thalamocortical (TC) relay cell. In this previous model system, we assessed TC cell activity under stereotyped representations of normal, parkinsonian, and DBS conditions. Our simulations and analysis demonstrated and explained a mechanism by which pathological oscillatory or bursty inhibition from the internal segment of the globus pallidus (GPi) to TC cells could compromise the fidelity of TC relay of excitatory signals, whereas elimination of the oscillations within this inhibition, even at levels that are elevated relative to normal conditions, could restore TC cells' relay capabilities (Rubin and Terman 2004).

In this study, we use GPi spike trains recorded from normal control monkeys and from parkinsonian monkeys (Hashimoto et al. 2003), with or without DBS of the STN region, as the source of inhibitory inputs to our model TC cells. By doing so, we circumvent the controversy surrounding the effects of DBS at the stimulation site. Within this theoretical framework, we are able to test how biologically observed changes in GPi

* Y. Guo and J. E. Rubin contributed equally to this work.
Address for reprint requests and other correspondence: J. E. Rubin, Dept. of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania 15260 (E-mail: [email protected]).


neuronal activity affect TC signal transmission, both in a single model TC cell and in a heterogeneous population of model TC cells. TC relay fidelity is evaluated using a train of external excitatory stimuli applied to the same model TC cells that receive the recorded inhibitory synaptic inputs from GPi. Our results show that there is a significant decline in the ability of the TC cells to relay the excitatory stimuli when they are exposed to GPi signals recorded under parkinsonian conditions in the absence of DBS or with sub-therapeutic DBS, defined by its failure to induce a therapeutic effect on motor symptoms, relative to GPi data recorded from normal monkeys. Moreover, relay effectiveness is restored to nonparkinsonian levels by GPi signals recorded under parkinsonian conditions in the presence of therapeutic DBS, which induced a measurable improvement in motor symptoms. Interestingly, while response failures across a population of TC cells tend to occur on similar trials in the parkinsonian and sub-therapeutic cases, failures occur asynchronously under therapeutic STN DBS as well as under normal conditions, which would moderate their downstream effect. Finally, to extend these results, we harness a purely computational approach that allows us to systematically vary the rhythmicity and degree of correlation within the inhibitory inputs that TC cells receive. Our results show that moderately increasing the burstiness and correlation of inhibitory spike trains, as might be expected in a transition from normal to parkinsonian conditions, leads to a gradual loss of relay fidelity, while a further transition to tonic high-frequency, highly correlated inhibitory signals, as may occur in clinically effective DBS (Hashimoto et al. 2003), leads to significant restoration of effective relay.

M E T H O D S

Proposed mechanism for DBS effectiveness

In awake states, TC cells serve to relay excitatory inputs (Steriade et al. 1997). The TC population targeted by GPi cells likely is involved in the relay of excitatory inputs between cortical areas (Guillery and Sherman 2002a,b; Haber 2003). The basic idea being explored in this paper is that changes in inhibitory output from the GPi to its target TC cell population affect the relay reliability of these TC cells, defined in terms of the generation of TC activity patterns that match the inputs to TC cells. Specifically, parkinsonian conditions induce oscillations, burstiness, and enhanced correlations in GPi outputs, and these effects are hypothesized to compromise relay fidelity. We further hypothesize that the effectiveness of DBS is due to the replacement of pathological GPi firing patterns with more regular activity. While this regular activity may in fact be overly regular, and may occur at a higher frequency, relative to the activity that occurs in nonparkinsonian states, it nonetheless restores thalamocortical relay reliability. This concept is illustrated schematically in Fig. 1.

These effects on TC relay in parkinsonian and DBS conditions remain to be demonstrated experimentally, but they were shown to arise in a previous, purely computational study (Rubin and Terman 2004) where a possible dynamical mechanism that could yield these results was also explained. The fundamental hypothesis from our original study was that DBS leads to tonic, regular inhibitory input to the TC cells, and this allows the activation and inactivation levels of TC cell membrane ionic currents to equilibrate, such that reliable relay can occur, as long as excitatory inputs are not excessively rapid. During parkinsonian conditions, the inhibitory output of GPi features synchronized oscillations with bursting activity. When a significant increase in the level of inhibition of TC cells associated with such oscillations occurs, a period of re-equilibration of the TC ionic currents ensues. During this time, it is difficult for the TC cells to reliably respond to excitation (Jahnsen and Llinas 1984a). Further, after currents have equilibrated to a high level of inhibition, a relatively abrupt decrease in inhibition can lead to an excessive or bursty TC response to excitation due to increased availability of spike-generating and -sustaining currents (Jahnsen and Llinas 1984a,b). We propose that therapeutic DBS reduces this oscillatory activity in GPi and TC cells, thereby improving the ability of TC cells to relay information.

Model TC cells

The model used for the TC cells is a slightly modified version of that used in our earlier study (Rubin and Terman 2004), which is itself

FIG. 1. Hypothesized mechanism for deep brain stimulation (DBS) effectiveness. In each of the 3 cases shown, the target thalamocortical (TC) cell receives inhibitory inputs from the internal segment of the globus pallidus (GPi), which affects its relay of an excitatory drive. In the normal case, the inhibition is irregular and relatively weak due to low correlation levels, and the TC cell successfully relays its inputs. In the parkinsonian case, inhibition is more bursty and stronger due to enhanced correlations. During each inhibitory burst, the TC cell fails to respond to its drive (i.e., misses), while its response is excessive (i.e., bad) between bursts. In the case of supraclinical or therapeutic DBS, inhibition is strong but quite regular. Despite the strength of the inhibition, successful TC relay is restored.


a simplification of an earlier formulation (Sohal and Huguenard 2002; Sohal et al. 2000). In this model, the current-balance and ionic activation equations take the form

Cm v′ = −IL − INa − IK − IT − IGi→Th + IE + Iext
h′ = (h∞(v) − h)/τh(v)
r′ = (r∞(v) − r)/τr(v)     (1)

In the preceding equations, the terms IL = gL[v − EL], INa = gNa m∞³(v) h [v − ENa], and IK = gK[0.75(1 − h)]⁴[v − EK] are leak, sodium, and potassium spiking currents, respectively, with square brackets denoting multiplication. Note that we use a standard reduction in our expression for the potassium current, which decreases the dimensionality of the model by one variable (Rinzel 1985). The current IT = gT p∞²(v) r [v − ET] is a low-threshold calcium current. For these intrinsic currents, the forms of the functions and the values of the parameters used appear in Table 1. Note that reversal potentials are given in mV, conductances in mS/cm2, and time constants in ms. Further, we have scaled the parameters such that the capacitance is Cm = 1 µF/cm2. Finally, the resting potential, spike threshold, and responsiveness of the model TC cell, in the absence of inputs, are robust to changes of ionic conductances in the model. Durations of rebound bursts, after release from hyperpolarizing input, may jump abruptly by tens of milliseconds as gT is varied, however, when an additional spike is appended to the burst. As is typical for conductance-based models, the model is less robust to changes in the threshold and slope constants within its nonlinear terms; however, its robustness is comparable to other models of this type.
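As a concrete illustration of how these intrinsic terms combine, here is a minimal Python sketch of the currents in Eq. 1, using the Table 1 parameter values; this is not the authors' code, and the restored reversal-potential signs follow the model's stated conventions.

    import numpy as np

    # Table 1 parameters (conductances in mS/cm^2, reversal potentials in mV).
    gL, EL = 0.05, -70.0
    gNa, ENa = 3.0, 50.0
    gK, EK = 5.0, -90.0
    gT, ET = 5.0, 0.0

    m_inf = lambda v: 1.0 / (1.0 + np.exp(-(v + 37.0) / 7.0))
    h_inf = lambda v: 1.0 / (1.0 + np.exp((v + 41.0) / 4.0))
    p_inf = lambda v: 1.0 / (1.0 + np.exp(-(v + 60.0) / 6.2))

    def I_L(v):     return gL * (v - EL)
    def I_Na(v, h): return gNa * m_inf(v) ** 3 * h * (v - ENa)
    def I_K(v, h):  return gK * (0.75 * (1.0 - h)) ** 4 * (v - EK)   # Rinzel reduction of n
    def I_T(v, r):  return gT * p_inf(v) ** 2 * r * (v - ET)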

Additional terms in Eq. 1 refer to inputs to the TC cell model. The equations and parameter values relevant to these terms are summarized in Table 2 with the same units used as in Table 1. Iext corresponds to a constant background input, chosen at Iext = 0.44 nA/cm2 to yield a firing rate of roughly 12 Hz in the absence of other inputs and held fixed at this level throughout all simulations. The value chosen places the model TC cell near transition from silent to spontaneously oscillatory in the absence of synaptic inputs. Similar results were obtained whether the model TC cell was intrinsically silent or oscillatory. By choosing Iext near the transition point, we achieved wide variations in TC cell behaviors when we introduced variability into the set of model TC cell parameters as discussed in the following text. IGi→Th denotes the inhibitory input current from the GPi and will also be discussed in the following text. IE represents simulated excitatory synaptic signals to the TC cell. We assume that these are sufficiently strong to induce a spike (in the absence of inhibition) and therefore may represent synchronized inputs from multiple presynaptic cells. In the model, IE takes the form gE s[v − vE] where gE = 0.05 mS/cm2, so that maximal input is super-threshold, where vE = 0 mV, and where

s′ = α(1 − s)exc(t) − βs     (2)

In Eq. 2, α = 0.8 ms−1 and β = 0.25 ms−1. Because we do not have an explicit representation of a presynaptic neuron in the model, we use the function exc(t) to control whether the excitatory input is on or off. Specifically, exc(t) = 1 during each excitatory input, whereas exc(t) = 0 between excitatory inputs. We used two general forms of time course for the binary signal exc(t), namely periodic and stochastic, as done in past work (Rubin and Terman 2004). In the periodic case

exc(t) = H[sin(2πt/p)] (1 − H[sin(2π(t − d)/p)])

where the period p = 50 ms and duration d = 5 ms, and where H(x) is the Heaviside step function, such that H(x) = 0 if x < 0 and H(x) = 1 if x > 0. That is, exc(t) = 1 from time 0 up to time d, from time p up to time p + d, from time 2p up to time 2p + d, and so on. A baseline input frequency of 20 Hz is consistent with the high-pass filtering of corticothalamic inputs observed in vivo (Castro-Alamancos and Calcagnotto 2001); at this input rate, the model TC cells rarely recover and fire spontaneous spikes between inputs, which simplifies our analysis. In the stochastic case, input times are selected from a Poisson distribution, with an enforced pause of 20 ms between inputs to avoid excessive firing, with the same input duration and amplitude as in the periodic case and with a mean input frequency of 20 Hz. In simulations done with stochastic inputs, results shown represent averages over five simulations, each with a different random input pattern. The use of a stochastic excitatory input provides one measure of the robustness of our results to noise. In some simulations, a small-amplitude white-noise term is also included in the voltage equation.
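A minimal Python sketch, not drawn from the published code, of the periodic exc(t) and the gating variable s of Eq. 2 integrated with a forward Euler step; the step size dt and total time T are illustrative assumptions.

    import numpy as np

    alpha, beta = 0.8, 0.25      # rise and decay rates of Eq. 2 (ms^-1)
    p, d = 50.0, 5.0             # input period and duration (ms)

    def exc(t):
        # exc(t) = H[sin(2*pi*t/p)] * (1 - H[sin(2*pi*(t - d)/p)])
        H = lambda x: 1.0 if x > 0 else 0.0
        return H(np.sin(2 * np.pi * t / p)) * (1.0 - H(np.sin(2 * np.pi * (t - d) / p)))

    dt, T = 0.05, 500.0          # Euler step and total simulated time (ms)
    times = np.arange(0.0, T, dt)
    s = np.zeros_like(times)
    for i in range(1, len(times)):
        ds = alpha * (1.0 - s[i - 1]) * exc(times[i - 1]) - beta * s[i - 1]
        s[i] = s[i - 1] + dt * ds
    # The excitatory current at membrane potential v is then IE = gE * s * (v - vE).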

The choice of gE was motivated by the conjecture that strong inputs would represent important signals and that differences in TC relay of strong inputs would have the most significant impact on downstream processing. At the same time, it is unlikely that even strong inputs would be perfectly synchronized. The values of the rate parameters α and β and of the duration d were selected based on corticothalamic excitatory inputs recorded in vivo (Castro-Alamancos and Calcagnotto 2001), assuming that IE represents a set of temporally proximal, but imperfectly aligned, cortical inputs to a TC cell. Our qualitative results are robust to variations in these parameters.

In one set of simulations, we feed the same input currents IGi→Th and IE into all members of a heterogeneous population of model TC

TABLE 2. Inputs to the TC cell

Background current: Iext = 0.44
Excitatory signal: IE = gE s(v − vE)
  s′ = α(1 − s)exc(t) − βs
  exc(t) = H[sin(2πt/p)](1 − H[sin(2π(t − d)/p)])
  gE = 0.05, vE = 0, 0.5, 0.22, p = 50, d = 5
GPi synaptic input: IGi→Th = gsyn Σ sj (v − Esyn)
  s′j = −β sj between spikes
  sj → 1 after a spike
  gsyn = 0.066, Esyn = −85, β = 0.04
Poisson processes:
  Burst duration: minimum 10 ms, mean 25 ms
  Burst rate: rb ∈ [0.002, 0.02]
  Minimum interburst interval: 10 ms
  Spike rate in bursts: mean 200 Hz
  No minimum interspike interval

TABLE 1. TC cell model functions and parameters

Current | Activation | Inactivation | Parameters
IL | — | — | gL = 0.05, EL = −70
INa | m∞(v) = 1/{1 + exp[−(v + 37)/7]} | h∞(v) = 1/{1 + exp[(v + 41)/4]}; τh(v) = 1/[a1(v) + b1(v)], a1(v) = 0.128 exp[−(v + 46)/18], b1(v) = 4/{1 + exp[−(v + 23)/5]} | gNa = 3, ENa = 50
IK | — | — | gK = 5, EK = −90
IT | p∞(v) = 1/{1 + exp[−(v + 60)/6.2]} | r∞(v) = 1/{1 + exp[(v + 84)/4]}; τr(v) = 0.4{28 + exp[−(v + 25)/10.5]} | gT = 5, ET = 0

TC, thalamocortical.


cells. In the absence of experimental data on the variability of particular conductances within TC cells, we chose to form the heterogeneous population by selecting gNa, gL, and gT from normal distributions with means given by the values in Table 1 and with SDs given by 20% of these values, which were sufficiently large to yield a wide variation in intrinsic TC spike frequencies in the absence of inputs, without major changes in most other spike-related characteristics, as discussed in the text following Eq. 1.

Experimentally obtained GPi data

Single-unit extracellular recordings of neurophysiologically identified GPi neurons were acquired with glass-coated platinum-iridium microelectrodes in three rhesus macaques (Macaca mulatta). One animal was a normal nonparkinsonian control, and two animals were rendered parkinsonian with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) via a single injection through the internal carotid artery (Hashimoto et al. 2003). The parkinsonian animals developed a stable disease state characterized by contralateral rigidity and bradykinesia and had a chronic DBS electrode implanted in the STN region (Hashimoto et al. 2003). The chronic stimulating electrode was connected to a programmable pulse generator (Itrel II, Medtronic) implanted subcutaneously in the monkey's back. The stimulating lead was a scaled-down version of the chronic stimulation electrode used in humans (Model 3387, Medtronic). The cylindrical lead consisted of four metal contacts, each with a diameter of 0.75 mm, height of 0.50 mm, and separation between contacts of 0.50 mm. The most effective pair of electrode contacts in the STN region was chosen for bipolar stimulation in each animal after evaluation of the clinical effects of the stimulation (Hashimoto et al. 2003). In both the normal and parkinsonian monkeys, spontaneous neuronal activity (with the animal at rest and the head fixed) of electrophysiologically identified GPi neurons was recorded. In the parkinsonian monkeys, GPi activity was also recorded during DBS of the STN region. Stimulation parameters were selected to address two conditions in each animal: stimulation parameters that produced therapeutic benefit and stimulation parameters subthreshold for a therapeutic effect. The therapeutic effectiveness of DBS was assessed with two quantitative measures of bradykinesia as well as a subjective evaluation of rigidity provided by a trained rater. In each animal, therapeutic stimulation settings were determined, and then sub-therapeutic settings were obtained by reducing stimulus amplitude until therapeutic benefit was no longer detected (Hashimoto et al. 2003). Specifically, DBS was applied at a frequency of 136 Hz with therapeutic benefit obtained at 3.3 or 1.8 V and a pulse width of 90 or 210 µs, depending on the animal, and subthreshold stimulation at 2 or 1.4 V, again depending on the animal. To analyze neural activity during stimulation, a template of the stimulus artifact was constructed by averaging across all peristimulus segments. The stimulus artifact template was then subtracted from the individual traces, and neuronal spikes were detected (Hashimoto et al. 2002, 2003).

Collections of several cells from each of the three animals were used in the analysis. The cells were selected from a database of recordings to be representative of the population in the normal, parkinsonian, sub-therapeutic DBS, and therapeutic DBS cases. Three general characteristics were used to select the cells. First, the experimental recording had good to excellent isolation of the single unit. Second, the average firing rate of the unit closely corresponded to the average population firing rates for GPi cells in the four respective cases (Hashimoto et al. 2003; Wichmann et al. 2002). Finally, the coefficient of variation of the firing rate was used to identify cells with firing patterns representative of the four respective cases. The particular firing characteristics of the cells used are summarized in Table 3 with relevant values from the literature provided for comparison.
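The selection statistics named here, mean firing rate and coefficient of variation, can be computed from a spike-time list as in the brief Python sketch below; this is a generic illustration using the common interspike-interval convention for the CV, not the original analysis code.

    import numpy as np

    def rate_and_cv(spike_times_ms):
        """Mean firing rate (Hz) and interspike-interval coefficient of variation."""
        t = np.sort(np.asarray(spike_times_ms, dtype=float))
        isi = np.diff(t)                      # interspike intervals (ms); needs >= 2 spikes
        rate_hz = 1000.0 / isi.mean()
        cv = isi.std() / isi.mean()
        return rate_hz, cv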

Inhibitory inputs to TC cells, derived from GPi data

In most simulations, we used experimentally recorded data, as discussed in the preceding text, to represent the GPi spike times. For systematic exploration of the effects of particular features of the inhibitory input, however, we used computationally generated GPi spike times. In both cases, the synaptic inhibition from the GPi to a single model TC cell in our simulations took the form

IGi→Th = gsyn [Σj sj][v − Esyn]     (3)

where the summation is over the synaptic activation variables sj of the presynaptic GPi cells, and where the inhibitory synaptic reversal potential Esyn = −85 mV (Lavin and Grace 1994) and synaptic conductance gsyn = 0.066 mS/cm2. At each spike time of the corresponding GPi cell, the variable sj was reset to 1, after which it decayed via the equation

s′j = −βinh sj     (4)

with βinh = 0.04 ms−1. We used a relatively large synaptic conductance and a synaptic decay rate that is somewhat slower than that typically found for GABAA-mediated synaptic transmission to make our single input train more representative of multiple, imperfectly synchronized synaptic inputs; this approximation will be improved in future work as multi-unit GPi data are collected experimentally.
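A minimal Python sketch, under illustrative assumptions about the sampling grid, of how a recorded GPi spike train can be converted into the synaptic variable of Eqs. 3 and 4 (reset to 1 at each spike, exponential decay in between); the function name is hypothetical.

    import numpy as np

    beta_inh = 0.04              # decay rate of Eq. 4 (ms^-1)
    g_syn, E_syn = 0.066, -85.0  # synaptic conductance and reversal potential

    def gpi_gate(spike_times_ms, dt=0.05, T=5000.0):
        """Piecewise-exponential synaptic variable s_j(t) driven by GPi spike times."""
        times = np.arange(0.0, T, dt)
        s = np.zeros_like(times)
        spikes = sorted(spike_times_ms)
        k = 0
        for i in range(1, len(times)):
            s[i] = s[i - 1] * np.exp(-beta_inh * dt)       # decay between spikes
            while k < len(spikes) and spikes[k] <= times[i]:
                s[i] = 1.0                                  # reset to 1 at a spike
                k += 1
        return times, s

    # Inhibitory current at membrane potential v: I = g_syn * s * (v - E_syn).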

Experimental GPi data were recorded from parkinsonian monkeys before, during, and after the application of DBS (Hashimoto et al. 2003). When we used non-DBS and DBS recordings from the same cell, we only used non-DBS recordings from the period before the application of DBS, not from the period after the cessation of DBS, to avoid any residual effects of DBS on GPi neuronal activity. Moreover, we selected data segments by counting back in time from the end of the DBS period, always stopping 2 s away from the start of DBS to minimize the possibility of our results being affected by transients associated with DBS onset.

Error index: a measure of TC relay fidelity

The computations in this paper were performed using customized codes simulated in XPPAUT (Ermentrout 2002; see www.pitt.edu/phase) and Matlab (The MathWorks, Natick, MA).

TABLE 3. Firing characteristics of GPi cells

Condition | Firing Frequencies, Hz | Respective Coefficients of Variation | Literature
Normal | 51, 67, 80 | 0.75, 0.89, 0.68 | 40–70 Hz in Macaca mulatta (Wichmann et al. 1999)
Parkinsonian | 55, 55, 59, 60, 66, 70, 78, 80 | 1.09, 1.54, 1.73, 1.24, 0.88, 0.64, 0.55, 1.21 | 63.2 ± 17.2 Hz and 70.4 ± 27.6 Hz in two subgroups (Hashimoto et al. 2003)
Sub-therapeutic DBS | 55, 81, 93, 106 | 1.05, 0.86, 1.10, 1.22 | No significant change from parkinsonian case seen (Hashimoto et al. 2003)
Therapeutic DBS | 54, 83, 99, 156 | 1.39, 0.95, 0.78, 0.62 | 81.7 ± 37.0 Hz and 112.0 ± 36.8 Hz in two subgroups (Hashimoto et al. 2003)

GPi, globus pallidus; DBS, deep brain stimulation.


We compute an error index to measure the fidelity with which the TC cells respond to excitatory inputs, similar to the error index described previously (Rubin and Terman 2004). Note that we use relay fidelity to refer to the faithfulness of relay, such that a TC cell that generates a spike train that is very similar to its input train has achieved a high degree of relay fidelity. We are not using fidelity to refer to the generation of similar responses to multiple presentations of the same stimulus, which is a form of reliability considered in some other studies. In brief, the error index that we use consists of the total number of errors divided by the total number of excitatory inputs. Errors can take the form of bad responses or misses. Specifically, for each excitatory stimulus, we record a miss if no TC spike occurs within a designated detection time window after the input arrives. If more than one TC spike occurs during this window, then we record a bad response. Finally, if exactly one TC spike occurs during the window, then we record a bad response if there are one or more additional TC spikes before the next input arrives (see Fig. 1). This algorithm counts at most one error per input; for example, if a TC cell fires multiple spikes after a single excitatory input, then this is just counted as a single bad response. In summary, the error index is given by

error index = (b + m)/n     (5)

where b denotes the number of excitatory inputs leading to bad responses, m the number of excitatory inputs leading to misses, and n the total number of excitatory inputs. We use a detection window of 10 ms to allow for delays from threshold crossing to action potential generation. Thus an error index of 0 results if one TC spike occurs within 10 ms of each excitatory input and no other TC spikes occur until the next input, corresponding to optimal relay fidelity. With a shorter detection window, some formerly successful responses would be classified as misses. However, we did not observe any bias toward shorter or longer response latencies in any particular inhibition regime, and indeed, we obtained qualitatively similar results in simulations with detection windows of 6 and 12 ms.

In theory, our error index could be susceptible to “false positives,” in which single spike TC responses occur close in time to excitatory inputs, but not caused by the excitatory inputs. Thus as mentioned earlier, we use excitatory input rates that are sufficiently high such that in normal conditions, TC cells rarely recover and fire spontaneous spikes between inputs. Finally, note that our error index gives a direct and straightforward measure of relay success that is well suited for our computational experiments, in which the simplicity of our simulated excitatory inputs and of the relay process does not warrant analysis with more standard, yet more complex and indirect, information theoretic measures.
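A minimal Python sketch of the error-index bookkeeping described above (10-ms detection window, at most one error per input); the function signature is an illustrative assumption rather than the authors' implementation.

    import numpy as np

    def error_index(input_times, tc_spike_times, window=10.0):
        """(bad + missed) / total inputs, per Eq. 5, counting at most one error per input."""
        inputs = np.sort(np.asarray(input_times, float))
        spikes = np.sort(np.asarray(tc_spike_times, float))
        bad = missed = 0
        for k, t_in in enumerate(inputs):
            t_next = inputs[k + 1] if k + 1 < len(inputs) else np.inf
            in_window = spikes[(spikes >= t_in) & (spikes < t_in + window)]
            before_next = spikes[(spikes >= t_in) & (spikes < t_next)]
            if len(in_window) == 0:
                missed += 1                  # no TC spike in the detection window: miss
            elif len(in_window) > 1 or len(before_next) > 1:
                bad += 1                     # extra TC spikes: bad response
        return (bad + missed) / len(inputs)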

Burstiness and correlation of inhibitory GPi signals

Much of our analysis concerns ways in which the error index depends on the burstiness and correlation of the inhibitory GPi signals sj. To quantify burstiness, we first perform a simple detection algorithm for high-frequency spiking episodes (HFE). In this approach, we detect all spikes that are preceded by a silent period of at least 12 ms. Each such spike is considered to be the start of an HFE if the next spike follows it within 8 ms. Each subsequent spike is counted as part of the HFE if and only if it occurs within 8 ms of its predecessor. The duration of the HFE is the time from the first spike in the HFE to the last spike in the HFE; see Fig. 2D. More involved statistical methods exist to compensate for chance epochs of high-frequency spikes that fit within a given set of HFE criteria of the type given here (Legendy and Salcman 1985); however, because all such HFE generate similar inhibitory inputs to TC cells in our simulations, there is no reason to try to classify them for the purposes of our study. From the HFE, we compute the elevated spiking time (EST), which is simply the fraction of the simulation time during which HFE occur. Hence, when the EST is zero, the GPi signal consists of low-frequency isolated spikes, a moderate EST corresponds to a highly bursty signal, and a signal with a higher EST is dominated by HFE, corresponding to relatively tonic high-frequency firing.
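A minimal Python sketch of the HFE detection rule and the EST measure as described above; thresholds follow the text, while function names and the interval representation are illustrative choices.

    import numpy as np

    def detect_hfes(spike_times_ms, quiet=12.0, gap=8.0):
        """Return (start, end) times of high-frequency episodes (HFEs).
        A spike preceded by at least `quiet` ms of silence starts an HFE if the next
        spike follows within `gap` ms; later spikes extend the HFE while each follows
        its predecessor within `gap` ms."""
        t = np.sort(np.asarray(spike_times_ms, float))
        hfes, i = [], 0
        while i < len(t) - 1:
            silent = (i == 0) or (t[i] - t[i - 1] >= quiet)
            if silent and (t[i + 1] - t[i] <= gap):
                j = i + 1
                while j + 1 < len(t) and t[j + 1] - t[j] <= gap:
                    j += 1
                hfes.append((t[i], t[j]))
                i = j + 1
            else:
                i += 1
        return hfes

    def elevated_spiking_time(hfes, T):
        """EST: fraction of the total simulation time T spent inside HFEs."""
        return sum(end - start for start, end in hfes) / T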

FIG. 2. Examples of inputs from GPi cells to TC cells. A–C: examples of the high-frequency burst portions from computationally generated GPi signals. A: a low burst rate rb and no overlaps (shared wij) were used to generate these signals, and correspondingly, there are relatively few bursts, leading to a mean elevated spiking time (EST) of 0.14 across the 2 cells. Moreover, the amount of time during which the traces simultaneously exhibit high-frequency firing is small, yielding a small correlation time of 0.082. B: a moderate burst rate rb and 2 overlaps were used to generate these signals, and correspondingly, each GPi trace shows high-frequency oscillations for about half of the total simulation time, with a mean EST of 0.61. Although the times at which these occur are somewhat correlated, due to the overlaps and chance, the fraction of the total simulation time during which the traces simultaneously exhibit high-frequency firing is close to 1/2, with a correlation time of 0.45. C: with a high rb and 2 overlaps, each trace exhibits high-frequency oscillations for most of the simulation time, yielding an EST of 0.85, and the fraction of time during which the traces simultaneously show high-frequency activity is much closer to 1, yielding a correlation time of 0.73. D: illustration of the algorithm for detection of coincident high-frequency episodes (HFE), applied to experimental data. The times at which HFE occur are read off of GPi spike trains (top 2 panels; HFE times are indicated with thick black segments in all panels). Next, HFE times are compared and times when both cells are engaged in HFE are captured (bottom panel).


The aspect of the correlation between pairs of GPi signals that is most relevant for our study is the temporal relationship of the HFE across the two signals. To obtain a single number that represents this relationship over a simulation of duration T ms, we simply sum the durations of all epochs during which both GPi cells are engaged in high-frequency spiking simultaneously and divide by T, yielding a number between 0 and 1 (Fig. 2D).
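A short Python sketch of this correlation measure, assuming HFEs are represented as (start, end) intervals as in the previous sketch; the quadratic scan over interval pairs is an illustrative simplification.

    def correlation_time(hfes_a, hfes_b, T):
        """Fraction of [0, T] during which both cells are inside an HFE."""
        overlap = 0.0
        for sa, ea in hfes_a:
            for sb, eb in hfes_b:
                lo, hi = max(sa, sb), min(ea, eb)
                if hi > lo:
                    overlap += hi - lo
        return overlap / T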

Event-triggered averaging, sorted by TC cell responses

An additional computational procedure that we performed on GPi data was event-triggered averaging. In this procedure, we classified excitatory inputs into those that were immediately followed by a missed, a bad, or a successful (i.e., neither missed nor bad) TC response. For each excitatory input that led to a missed response, we extracted a 25-ms segment of the GPi input signal to the TC cell, extending from 20 ms before the start of the excitatory input to 5 ms after its start. The entire time course of each signal was normalized by subtracting off the signal's initial amplitude. We summed these normalized, “miss-triggered” GPi signals and divided by the number of missed responses to generate a miss-triggered average GPi signal. Next we repeated this procedure for bad responses and successful responses to generate a bad-triggered average GPi signal and a success-triggered average GPi signal, respectively. In this averaging process, we combined inhibitory signals leading to the same type of response from all four inhibitory input regimes (nonparkinsonian, parkinsonian without DBS, parkinsonian with sub-therapeutic DBS, and parkinsonian with therapeutic DBS), after verifying that similar signals emerged in all cases. In total, 40 blocks of GPi data, each of 5-s duration, were used. These yielded 280 bad responses, 667 missed responses, and 2,223 successful responses, all of which were included in the averages computed.
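A minimal Python sketch of this response-triggered averaging (segments from 20 ms before to 5 ms after each input, baseline-subtracted); the array layout and argument names are illustrative assumptions.

    import numpy as np

    def triggered_average(gpi_signal, dt, input_times, labels, which):
        """Average 25-ms GPi segments (-20 ms to +5 ms around each input) over all
        inputs whose response label matches `which` ('miss', 'bad', or 'success'),
        after subtracting each segment's initial amplitude."""
        pre, post = int(20.0 / dt), int(5.0 / dt)
        segments = []
        for t_in, label in zip(input_times, labels):
            if label != which:
                continue
            i = int(round(t_in / dt))
            if i - pre < 0 or i + post > len(gpi_signal):
                continue                              # skip segments running off the record
            seg = np.array(gpi_signal[i - pre:i + post], float)
            segments.append(seg - seg[0])             # normalize by the initial amplitude
        return np.mean(segments, axis=0) if segments else None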

Plots of average GPi signals do not include error bars. We chose to omit them because the error bars for averages of GPi signals could be large, despite a very high degree of qualitative similarity, such as when each signal showed an abrupt increase at some time within a given time window, but the precise increase times were rather diverse. A similar issue arises in averaging the action potential responses of a neuron over multiple stimulus presentations or in multi-unit recordings, in averaging over action potentials generated by different cells in response to the same stimulus (e.g., Kapfer et al. 2007). Following the procedure used by Kapfer et al. (2007), instead of plotting error bars, we complement plots of average signals with data from a sample of individual signals that contributed to the averages, selected completely at random.

Jittered inputs

Note that the experimental GPi data used in this study consist of single-unit recordings acquired with a single electrode. Therefore it was not possible to use this data directly to explore how correlations among multiple GPi inputs to TC cells contribute to the TC cell relay fidelity. Because we did not have this option, in some simulations, we used the single-unit GPi recordings to generate multiple GPi signals to each TC cell. To do this, we first formed N identical copies of a single GPi spike train. We indexed the spike times within this train as t1, t2, . . ., tp. Next, we introduced jitter by selecting, for each copy i = 1, . . ., N and each spike j = 1, . . ., p, a value from a normal distribution whose amplitude is given in RESULTS, and adding that value to the corresponding spike time tj to form the new spike trains. After some experimentation, we found that the qualitative trends induced by this jittering process are already apparent with N = 2. Given this observation, we restrict our results to the case of N = 2, and we also turn to simulated GPi inputs to explore more thoroughly the effects of different activity patterns and different levels of correlations among inhibitory signals.
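A minimal Python sketch of the jittering step; the parameter sigma stands in for the jitter amplitude reported in RESULTS, and the random seed and function name are illustrative.

    import numpy as np

    def jittered_copies(spike_times_ms, n_copies=2, sigma=1.0, seed=0):
        """Make N jittered copies of one GPi spike train by adding independent,
        normally distributed offsets (amplitude sigma, in ms) to every spike."""
        rng = np.random.default_rng(seed)
        t = np.asarray(spike_times_ms, float)
        return [np.sort(t + rng.normal(0.0, sigma, size=t.size)) for _ in range(n_copies)]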

Computational GPi inputs and their burstiness and correlation

By using purely computational GPi input signals, we were able to explore systematically how changes in input ESTs and the degree of correlation between inputs affect TC relay. For simulated GPi inputs, each signal sj, j = 1, 2, in Eq. 2 was formed using a computational procedure, rather than using experimental data, based on a combination of five independent point processes, wij, i = 1, . . ., 5; see Fig. 7. Each point process wij was produced by a set of four Poisson processes. One Poisson process (p1) was used to generate isolated spike times. A set of three additional Poisson processes were used to generate bursts of high-frequency activity that were superimposed on the isolated spikes. Specifically, a primary Poisson process (p2) selected HFE onset times with rate rb, while within bursting HFEs, a secondary process (p3) produced spike times, at high frequencies. Finally, HFE durations were selected randomly from a third, independent Poisson process (p4), with a minimum duration of 10 ms and a mean duration of 25 ms, for all rb. For each GPi cell in the computational case, the EST was computed as the sum of the durations of all HFEs for the point processes used to form the signal sj for that cell. This approach is computationally simpler than basing the EST on particular spike times and interspike intervals within each HFE, as was done in the experimental case, although it yields EST values that are systematically larger than those obtained in the experimental case.
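As a rough illustration, one point process wij of this type can be generated as follows; the rate values, the exponential-with-floor rule used here for HFE durations, and the function names are assumptions made for this sketch and may differ in detail from the scheme used in the study.

import numpy as np

def point_process_with_bursts(T, r_isolated, r_b, r_in_burst,
                              min_burst=10.0, mean_burst=25.0, rng=None):
    # One point process w_ij on [0, T] ms: isolated spikes (p1), HFE onsets at rate r_b (p2),
    # high-frequency spikes within each HFE (p3), and random HFE durations (p4).
    rng = np.random.default_rng() if rng is None else rng

    def poisson_times(rate, duration):
        # Homogeneous Poisson process: Poisson-distributed count, uniformly placed events.
        n = rng.poisson(rate * duration)
        return np.sort(rng.uniform(0.0, duration, size=n))

    spikes = list(poisson_times(r_isolated, T))            # p1: isolated spike times
    for onset in poisson_times(r_b, T):                    # p2: HFE onset times
        dur = max(min_burst, rng.exponential(mean_burst))  # p4: HFE duration (10-ms floor)
        spikes.extend(onset + poisson_times(r_in_burst, dur))  # p3: spikes within the HFE
    return np.sort(np.array([t for t in spikes if t <= T]))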

The five point processes wij were used to generate a single continuous time input signal sj(t) (see Figs. 2 and 3). Specifically, at each spike time within any of the wij, the variable sj(t) was reset to 1, after which it decayed continuously via Eq. 4. This approach, of generating a continuous time signal sj(t) from a collection of point processes wij, allows for parametric control of the degree of burstiness and the spike rate of each wij, and hence of each sj(t) (Tateno and Robinson 2006). The reason that we used multiple signals wij for each sj(t) is that this allowed us to control the correlation across the sj by using some of the same signals wij for different j (Galan et al. 2006; see Fig. 7). We refer to the number of signals wij shared by two GPi cells as the number of overlaps between them.
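A sketch of this filtering step is given below. Eq. 4 is not reproduced here; for illustration we assume a simple exponential decay with time constant tau between resets, which is an assumption of this sketch rather than the model's actual decay equation.

import numpy as np

def spikes_to_signal(point_processes, T, dt=0.1, tau=15.0):
    # Build one continuous inhibitory signal s_j(t) from the point processes w_ij:
    # s_j resets to 1 at every spike of any w_ij and decays between spikes.
    t = np.arange(0.0, T, dt)
    s = np.zeros_like(t)
    all_spikes = np.sort(np.concatenate([np.asarray(w) for w in point_processes]))
    k, value = 0, 0.0
    for i, ti in enumerate(t):
        while k < len(all_spikes) and all_spikes[k] <= ti:
            value = 1.0          # reset to 1 at each spike time
            k += 1
        s[i] = value
        value *= np.exp(-dt / tau)  # assumed exponential decay (stand-in for Eq. 4)
    return t, s

# Sharing ("overlapping") some w_ij between the sets used for s_1 and s_2 raises
# the correlation between the two GPi signals.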

To form the total synaptic input conductance to the TC cell as a function of time, the signals s1(t), s2(t) were summed, as indicated in Eq. 3, and multiplied by gsyn = 0.04 μS/cm2. This maximal synaptic conductance value is smaller than was used in the experimental case to compensate for the replacement of a single experimental GPi signal with a pair of computational ones.

R E S U L T S

With experimentally obtained GPi inputs, clinically effective DBS improves TC relay fidelity

We generated GPi inputs to our model TC cell using GPi spike trains obtained from experimental recordings from a normal monkey as well as from two parkinsonian monkeys in the absence of DBS, during sub-therapeutic DBS, and during therapeutic DBS (Hashimoto et al. 2003), as described in METHODS. In each simulation, a single GPi train was used, and hence the sum Σj sj in Eq. 3 became simply s1. Figure 3 (top traces in each panel) shows typical examples of the experimentally recorded GPi spike times and the resulting GPi signal s1 from each regime. The pattern of GPi activity recorded in parkinsonian conditions in the absence of DBS led to a GPi signal (Fig. 3B, top trace) that is much more phasic, featuring jumps between high and low states, than the relatively constant signal that appeared when therapeutic DBS was present (Fig. 3D, top trace) or in nonparkinsonian conditions (Fig. 3A, top trace). In each case, an excitatory input train was delivered (Fig. 3, bottom traces) and the effectiveness of the TC cell at relaying this train was assessed.

If perfect relay fidelity were achieved, the TC cell would exhibit one voltage spike for each input pulse, possibly with a short lag due to the delay between threshold crossing and actual spiking. In the absence of DBS and with subclinical DBS, however, the TC cell failed to respond to many of the inputs and generated bursts of multiple spikes to other inputs (Fig. 3, B and C, middle traces). These results contrast strongly with the normal and therapeutic DBS cases, which show a near-perfect relay performance (Fig. 3, A and D, middle traces).

We calculated the error index based on the computational TC cell's relay performance for each of the four cases, namely control, PD (no DBS), sub-therapeutic DBS, and therapeutic DBS, both for periodic excitation and for stochastically timed excitation. Results are shown in Fig. 3, E (periodic) and F (stochastic), where each data point represents 5 s of simulation time, with nonoverlapping 5-s GPi data segments used, and is plotted as a function of the EST of the GPi input signal, computed as described in METHODS. It is important to note that for the GPi recordings involved, all available data were used; that is, we did not select out particular simulation periods based on the resulting error indices. The values of the error index show that TC cell relay success depends strongly on which form of inhibitory input the cell receives. Indeed, in both the periodic and the stochastic excitation cases, the mean performances across the four regimes were statistically significantly different (ANOVA, P < 0.0001), and a posteriori pairwise comparisons yielded significant differences across all pairs of regimes in both cases as well (Tukey's honestly significant difference, P < 0.01 for all pairs), except that no significant difference was found between the therapeutic DBS and normal cases either with periodic excitation or with stochastic excitation. Similar results were obtained with variations in the rise and decay times of the excitatory input signals and in the detection window used to define successful TC responses, as well as with the introduction of small noise as shown in Eq. 1. Once rise times dropped by 20%, the statistical significance of the differences in error index values between some cases, particularly sub-therapeutic DBS/normal, began to degrade; however, the qualitative distinctions between these values remained.

FIG. 3. TC relay fidelity improves with clinically effective DBS. A–D: the central trace in each plot shows voltage vs. time for the model TC cell. The voltage scale on each plot applies to this trace. Offset above each such trace, experimentally recorded GPi spike times (discrete events) are shown along with the inhibitory signal s1 that these spike times are used to generate (continuous curve, above the spike times, shows s1, with amplitude scaled 100-fold for visibility). Offset below each TC voltage trace, simulated excitatory input signals are shown (scaled by a factor of 3 for visibility). Note that the same excitatory input signals were used for all examples shown here and that TC spikes may lag excitatory input times by a few milliseconds, corresponding to delays from threshold crossing to spike generation. A: control (nonparkinsonian); EST = 0.05. B: parkinsonian without DBS; EST = 0.15. C: parkinsonian with sub-therapeutic DBS; EST = 0.27. D: parkinsonian with therapeutic DBS; EST = 0.55. E and F: error index against EST calculated from simulations of 5-s blocks of data from all 4 cases. In these plots, results for the different cases are color coded (purple: normal, 2 blocks from each of 3 cells; blue: parkinsonian without DBS, 3 blocks from each of 3 cells and 4 blocks from 1 cell; green: parkinsonian with sub-therapeutic DBS, 6 blocks from 1 cell and 2 blocks from 1 cell; red: parkinsonian with therapeutic DBS, 6 blocks from 1 cell and 5 blocks from another cell). Across the 3 parkinsonian cases, each symbol corresponds to the use of data from a particular GPi cell. For example, results indicated by a blue diamond and a red diamond were obtained using data from the same GPi cell, recorded in the absence of DBS and with therapeutic DBS, respectively. E: results from 20-Hz periodic excitatory inputs. F: results from excitatory inputs generated by a Poisson process with a minimum time interval of 20 ms imposed between inputs.

Differences in GPi signals precede different TC cell responses

As described in METHODS, the TC cell response to each excitatory input was classified as a miss if the TC cell failed to spike within a prescribed time window following the input, a bad response if the TC cell generated multiple spikes in response to the input, or a successful response. Misses and bad responses raised the error index, while successful responses did not. All three types of responses were found, in differing proportions, in the four scenarios of normal and of parkinsonian with no DBS, with sub-therapeutic DBS, and with therapeutic DBS. To analyze further the way in which the inhibitory signal to the TC cell contributed to its responses, we performed the averaging procedure described in METHODS on the same GPi input signals used to compute the error index scores (Fig. 3, E and F). We observed important differences across the resulting miss-, bad-, and success-triggered GPi signals (Fig. 4; n = 667 miss, n = 280 bad, n = 2,223 success). GPi inputs that preceded TC cell misses showed a substantial rise in strength over the 25-ms time interval considered. In the face of such a rise in inhibition, the TC cell would require additional deinactivation of its spike-generating currents, namely INa and IT in the TC model (1), relative to their resting levels, to respond to an incoming excitatory stimulus (Rubin and Josic 2007; Rubin and Terman 2004). This deinactivation occurs relatively slowly, however, and thus would typically require more than the 25 ms available here.
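For reference, a minimal sketch of this response classification, and of an error index consistent with the description here (misses and bad responses raise it, successes do not), is given below; the 10-ms detection window is an illustrative value, and the exact definition used in the study is Eq. 5 in METHODS.

def classify_responses(input_times, spike_times, window=10.0):
    # Label the TC response to each excitatory input: 'miss' (no spike in the window),
    # 'bad' (more than one spike), or 'success' (exactly one spike).
    labels = []
    for t0 in input_times:
        n = sum(1 for s in spike_times if t0 <= s < t0 + window)
        labels.append('miss' if n == 0 else 'bad' if n > 1 else 'success')
    return labels

def error_index(labels):
    # Fraction of excitatory inputs followed by a miss or a bad response.
    return sum(lab != 'success' for lab in labels) / len(labels)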

Conversely, GPi inputs that preceded bad TC cell responses showed a substantial decline in strength over the 25-ms interval considered. Recall that what we classify as bad responses consist of multiple spikes fired in response to single excitatory inputs because such responses do not reflect the content of the input signals. In the presence of a strong inhibitory input, deinactivation of a TC cell's spike-generating currents will occur. The resulting enhanced availability of these currents will allow for successful responses in the presence of sustained inhibition. When followed by a relatively rapid drop in inhibition, however, as seen in the bad-averaged signal in Fig. 4, this additional deinactivation will lead to an excessive response to excitatory inputs (Rubin and Terman 2004) until it can be negated by a subsequent slow inactivation of the currents involved.

Finally, GPi inputs that preceded successful TC cell responses were relatively constant and therefore avoided the generation of current imbalances. Interestingly, the roughly constant averaged inhibition level in this case was relatively high (data not shown). This is consistent with the notion that DBS of the subthalamic nucleus promotes GPi activity (Hashimoto et al. 2003). However, the level of an approximately constant inhibitory signal has relatively little impact on TC cell responsiveness to excitatory inputs, after an initial transient consisting of a few such inputs. This invariance arises because the inactivation that occurs during each TC spike and the deinactivation that occurs between TC spikes tend to balance out over the course of the transient, such that the deinactivation compensates for the inactivation and allows for reliable TC responses, as long as the excitatory input frequency is not too high (Rubin and Terman 2004).

DBS leads to dispersion in TC cell failure times

The functional relevance of relay failures in TC cells will depend on how these failures are distributed across the TC population. In particular, if one TC cell bursts or fails to respond to an input but other TC cells in the population respond to this input appropriately, then the single aberrant response is unlikely to have a significant downstream effect. On the other hand, if multiple TC cells respond inappropriately to the same input, then this would be more likely to impact downstream activity.

FIG. 4. Average GPi signals preceding different types of TC cell responses to excitatory inputs are qualitatively different. A: the 3 traces shown were formed by averaging over 25-ms segments of normalized GPi signals s1, spanning the arrival times of excitatory inputs to a TC cell. The signals were aligned such that the excitatory input arrival times occurred at 20 ms, as indicated by the vertical dashed line in the figure. Signals were averaged separately for excitatory inputs that produced TC cell misses (n = 667), bad responses (n = 280), or successful responses (n = 2,223). B–D: the values at 0 ms, 20 ms (i.e., excitatory input arrival time), and 25 ms for a randomly selected sample of 10 normalized miss-triggered (B), bad-triggered (C), or success-triggered (D) signals, from the sets of signals used to generate the averages shown in A.


To test the degree of temporal coincidence of TC response errors, for each case, we used a representative 3-s segment of experimental GPi recording to generate an inhibitory signal that was input to each member of a population of 40 model TC cells. All TC cells also received an identical excitatory input train, consisting of 59 pulses delivered at a frequency of 20 Hz. To enhance the realism of this computational experiment, we introduced significant heterogeneity into the TC population as described in METHODS. The parkinsonian and sub-therapeutic cases in these simulations are characterized by many trials in which very few TC cells achieve successful relay (Fig. 5). In contrast, in the normal and therapeutic DBS cases, there are almost no such trials (Fig. 5). More generally, the frequency distributions for numbers of TC cells achieving successful relay vary quite noticeably across the different regimes, with a substantial shift in weight from trials in which most TC cells exhibit successful relay to trials in which few TC cells relay effectively and back again as GPi recording conditions switch from normal to parkinsonian without therapeutic DBS to parkinsonian with therapeutic DBS. In particular, there were statistically significant differences in the frequencies with which different numbers of TC cells responded successfully between the therapeutic DBS scenario and the other PD recording conditions (Kolmogorov-Smirnov test, P < 0.0001 for therapeutic DBS/PD as well as for normal/PD, P < 0.01 for therapeutic DBS/sub-therapeutic DBS), with a statistically insignificant difference between response frequencies in the normal and therapeutic DBS cases (P = 0.65).

Figure 5, A2–D2, summarizes this data in four histograms, one for each case. In each histogram, results are binned according to the frequency with which different numbers of TC cells responded to excitatory inputs. For example, of the 59 excitatory inputs, there were 25 inputs to which zero to eight TC cells responded successfully in the parkinsonian case without DBS (Fig. 5B2). Inspection of these plots reinforces the observation that there are many more instances of coincident TC response failures, across a large subset of the TC cell population, in the parkinsonian case in the absence of DBS than with either form of DBS, while the response failures in the presence of DBS tend to be more temporally dispersed. Moreover, this trend is a gradual one, with sub-therapeutic DBS representing an intermediate case between PD and therapeutic DBS, while the temporal dispersion of response failures in the case of therapeutic DBS resembles that of the normal case. Similar results were obtained when noise was introduced into the TC model, in addition to heterogeneity (results not shown).

FIG. 5. TC cell failures coincide without DBS and are dispersed with DBS. A1, B1, C1, and D1: numbers of TC cells, from a heterogeneous population of 40 cells, responding successfully to each excitatory input in a train of 59 inputs (numbered 2–60, with input 1 discarded due to spurious transients), delivered at 20 Hz. For consistency, the same periodic excitatory input train was used in all cases (although we checked to ensure that qualitatively similar results held for Poisson inputs), while the GPi data used to generate the inhibition was taken either from a nonparkinsonian recording (A1), a parkinsonian recording without DBS (B1), a parkinsonian recording with sub-therapeutic DBS (C1), or a parkinsonian recording with therapeutic DBS (D1). In all cases, a successful response was defined as a response without a miss or an extra spike, as discussed in METHODS. A2, B2, C2, and D2: for each scenario, TC responses are collected in a histogram. To form each histogram, excitatory inputs were binned by the number of TC cells responding successfully to them. Each histogram thus shows the number of trials in which various numbers of TC cells responded successfully.

Burstiness and correlation of GPi inputs both affect TC cell relay fidelity

EXPERIMENTAL CASE. Experimental results have shown an increase in bursting activity, as well as an increase in correlations across GPi neurons, in parkinsonian conditions, relative to normal states (Bergman et al. 1994; Brown et al. 2001; Magnin et al. 2000; Nini et al. 1995; Raz et al. 2000). However, the effect of such changes on TC relay capabilities has not been established. Because our experimental GPi data consisted of single-cell recordings, we could not use this data to assess the effect of increased correlations between GPi neurons directly. In our simulations up to this point, however, we had set gsyn for IGi→Th sufficiently high so that a single GPi input train could significantly impact TC firing. Based on this, we reasoned that the single GPi input train could be thought of as a collection of more than one, perfectly synchronized GPi signals. Correspondingly, we generated two copies of each GPi input train and divided the amplitude of the corresponding signal for each copy in half, and then we proceeded to introduce independent, normally distributed jitter into the input timing in each copy, as described in METHODS. We then subjected the TC cell to the jittered pair of inhibitory signals and considered how TC relay of periodic excitatory inputs varied with the amplitude of this jitter. We repeated this experiment in parkinsonian and DBS conditions, averaging over 40 jittered signals generated from a single baseline 5-s GPi data set for each case (Fig. 6, A and B).

The introduction of jitter within the therapeutic DBS input train had little effect on the already good TC response fidelity (Fig. 6C), although a slight smoothing of the GPi input signal, and corresponding relay enhancement, did result. Jitter amplitude did have some effect on the proportion of time during which HFE occurred in the GPi signals, as measured by the EST, and on the correlation of the pair of GPi signals in the therapeutic DBS case. However, the EST in the presence of jitter remained high (0.35, relative to 0.23 in the PD case without jitter), indicating that GPi inputs remained in a regime with high rates of high-frequency spiking (Fig. 6D).

In contrast, the inclusion of jitter resulted in smoothing of the GPi input signal and, as jitter amplitude was increased, eventually yielded significant improvement in TC response fidelity in the absence of DBS (Fig. 6C). It is important to note that the introduction and gradual increase in amplitude of jitter decreased the correlation between the GPi inputs to levels near zero, but it only diminished the EST in these signals by about one third, as shown in Fig. 6, D and E, such that significant HFE remained. Indeed, the EST values for the GPi signals in the absence of DBS corresponded to bursty inhibitory time courses, featuring significant epochs with and without high-frequency spiking, for all levels of jitter. Therefore the fact that the error index dropped with increased jitter in the PD case, as can be seen in Fig. 6C, shows that input correlations likely play a role in the compromise of TC cell relay in the absence of DBS. At the same time, comparison of the PD and therapeutic DBS cases (Fig. 6, C–E) shows that the error index for the PD case remains substantially above that for the DBS case, even as jitter becomes relatively large. This comparison demonstrates that the phasic or bursty nature of GPi inputs in PD, indicated here by moderate EST (Fig. 6D), also contributes significantly to the loss of TC cell relay fidelity. In summary, based on these findings, we predict that both significant correlations in GPi activity and phasic burstiness in GPi activity contribute to the compromise of TC relay fidelity in parkinsonian conditions.

FIG. 6. Introducing jitter across multiple GPi signals reduces but does not eliminate the distinction between parkinsonian and DBS relay performance. Note that DBS here refers to therapeutic deep brain stimulation. A and B: the top 4 panels show GPi input signals (top traces), TC cell voltage time courses (middle traces) and excitatory inputs (bottom traces). The top 2 panels (A, 1 and 2) correspond to the DBS case, with jitter amplitude 0 on the left and 0.05 on the right, while the bottom 2 (B, 1 and 2) correspond to the parkinsonian case, with jitter amplitude 0 on the left and 0.05 on the right. C: error index as a function of the level of jitter amplitude for DBS and parkinsonian simulations, averaged over 40 instantiations of jitter applied to a single GPi data set for each case. D: EST vs. jitter amplitude for DBS and PD. While EST drops with increasing jitter for both DBS and parkinsonian cases, the EST values for DBS stay well above baseline parkinsonian levels and remain at a level corresponding to significant periods of high-frequency firing, while the EST values for the parkinsonian case remain bounded away from zero. E: correlation vs. jitter amplitude for DBS and parkinsonian cases. Note that in the parkinsonian case, the fraction of time spent with the GPi cells simultaneously exhibiting HFE drops almost completely to 0 as jitter is increased.

COMPUTATIONAL CASE. To further explore the effect on TC responses induced by changes in the rate at which HFE occur and in input correlation, corresponding to the proportion of time featuring simultaneous high-frequency spiking of GPi cells, we performed simulations with computationally generated GPi input trains, for which we could control these input characteristics directly, as described in METHODS (Fig. 7). In brief, we generated two computational GPi signals, each of which depended on five stochastic spike trains, and in each spike train, HFE occurred with a rate rb. We refer to the number of spike trains that were common to both GPi signals as the number of overlaps in the simulation. For a fixed number of overlaps, we could achieve a range of correlations by varying rb. However, with fewer overlaps, a larger rb would be required to achieve a fixed correlation level. Hence allowing different numbers of overlaps allowed us to consider more than just a one-dimensional curve in the two-dimensional space corresponding to the EST of, and the correlation between, two inhibitory input signals.

We performed 3-s simulations with a range of rb values and different numbers of inhibitory input overlaps. For each simulation, we counted the number of TC misses and bad spikes (i.e., bursts or spikes not aligned with excitatory inputs; see METHODS) and used the results to compute the error index, according to Eq. 5, resulting from application of a 20 Hz excitatory input train. The range of error index values produced in our purely computational simulations was similar to that obtained in our simulations incorporating experimental data (compare Figs. 3 and 8), which supports the idea that our computationally generated GPi signals represent a reasonable generalization of those generated from experimental recordings. For each fixed number of overlaps, the relation between the error index and the correlation between the inhibitory inputs (achieved by varying rb) seen in our simulations is nonmonotonic: starting from small inhibitory input correlations, increases in correlations are associated with more relay errors, while starting from large correlations, further increases reduce errors in relay (Fig. 8A). A very similar trend also arises if error index is plotted against the EST of the inhibitory signals (see following text). Note also that for a fixed moderate or large value of correlation, the error index decreases as the number of overlaps decreases. For a given correlation level to occur with fewer overlaps, HFE must be present in a higher proportion of the overall inhibitory input signal; that is, the EST must be higher. Thus the cases with fewer overlaps are closer to the case of high-frequency tonic inhibition that was observed to improve relay fidelity in our other simulations (e.g., Figs. 3 and 6, therapeutic DBS case).

The nonmonotonic dependence of TC relay performance, measured by the error index, on correlation and EST can also be illustrated by plotting error index against both correlation and EST simultaneously (Fig. 8B). Doing so confirms that error index values peak for moderate inhibitory input EST. As noted in the preceding text, as the EST increases beyond moderate levels, the proportion of time during which high-frequency spikes are present in the inhibitory input trains increases, such that input trains approach a high-frequency, tonic spiking state (see Fig. 2) and input currents become relatively constant. In this regime, error index values decrease significantly, particularly when there are no overlaps (blue circles for large EST), which is consistent with Fig. 8A. Further, higher error rates are seen when correlations are higher, at each fixed EST, consistent with the hypothesis that synchronization of bursts of inhibition enhances their capacity to compromise relay fidelity.

Finally, we decomposed the error index into the fraction of excitatory inputs for which the TC cell fails to respond (missed spikes; see Fig. 8C) and the fraction of excitatory inputs to which the TC cell does respond but does so excessively (bad spikes; see Fig. 8D). The number of missed spikes rises significantly from low to moderate inhibitory input EST and then drops again at high EST. This number depends much more weakly on correlation, for fixed EST, than on EST itself.

Unlike missed spikes, the number of bad spikes depends strongly both on correlation and on inhibitory input EST, with the highest bad spike rate occurring for relatively high correlation and moderate EST (corresponding to high burstiness). For each fixed EST, increased input correlations yield a noticeably higher rate of bad spikes. This trend makes sense because bad spikes tend to arise via a rebound effect upon the relatively abrupt withdrawal of inhibition (Rubin and Terman 2004). Such an abrupt withdrawal is more likely to occur with higher input correlations (also see Fig. 4), whereas lower input correlations lead to more smeared out input arrival times and correspondingly less abrupt changes in inhibitory currents. Similarly, for a fixed correlation level, higher EST yields much lower bad spike rates, likely corresponding to the fact that with higher EST, the TC cell is subject to significant inhibition from at least one of its GPi inputs more of the time, making rebound less likely.

FIG. 7. Schematic representation of the numerical generation of GPi spike times. A: each GPi cell receives and filters a combination of 5 independent random point processes, wij. An individual point process may belong to the input set of more than 1 GPi cell; in this example, there are 2 such overlaps, or shared wij, with w41 = w42 and w51 = w52. Varying the number of overlaps allows for control of the correlation across the inhibitory GPi inputs to the TC cell, s1 and s2 (see Eq. 3), which may affect the TC cell's responses to incoming excitatory signals. B: each wij is generated by a set of 4 Poisson processes that determine the signal's spike times and degree of burstiness or EST. Specifically, 1 process (p1) selects the times of isolated spikes, a 2nd (p2) selects the burst onset times or, equivalently, the times between successive bursts, a 3rd (p3) selects spike times within bursts, and a 4th (p4) selects burst durations.

Taken together, the results from our computational model (Fig. 8) all support three main ideas. First, TC cell relay fidelity is compromised by inhibitory inputs that display alternations between the presence and absence of HFE with a significant correlation, or alignment of HFE, across inputs. This effect occurs through a combination of increased missed spikes and increased bad, or excessive, responses. Second, the presence of a rather tonic, high-frequency inhibitory input train, corresponding to high EST and correlation in our measures, leads to a relatively constant inhibitory current that reduces both missed and bad responses and thereby restores TC cell relay fidelity. Third, both the prevalence of the HFE and the level of the correlations in the inhibitory input structure contribute to this effect, yet the contributions that these features make are distinct.

D I S C U S S I O N

The fundamental goal of this study was to quantify how different patterns of GPi inhibition, generated from experimental recordings of normal and parkinsonian monkeys with and without DBS, affect TC relay fidelity. To this end, we subjected a Hodgkin-Huxley-type model TC cell to stereotyped excitatory signals and evaluated its ability to relay that excitatory input while under the influence of experimentally derived inhibitory pallidal modulation. We also explored a broader parameter space with computationally generated inhibitory trains in which the prevalence of high-frequency spiking episodes and the correlation structure were varied systematically. Our results show that GPi firing patterns produced in parkinsonian conditions without DBS or with sub-therapeutic DBS and, more generally, rhythmic or bursty inhibitory signals with correlations in burst timing across cells, tend to compromise the fidelity of TC cell responses to excitatory signals, relative to GPi firing patterns arising in normal conditions or in parkinsonian conditions with therapeutic DBS. More generally, improvement in TC relay fidelity was achieved by either smearing out the arrival times of correlated, bursty inhibitory signals or by converting inhibitory inputs from bursty to tonic and high-frequency. Moreover, across a model TC cell population, response failures tended to coincide temporally in parkinsonian conditions despite heterogeneity in the intrinsic characteristics of cells in the population, whereas under DBS, these failures, when they occurred, were temporally dispersed.

FIG. 8. The error index rises and then falls again with increasing inhibitory input correlation and EST. A: error index vs. correlation, demonstrating the dependence of error index and correlation on the number of overlapping signals wij (coded by symbols and color) in the GPi input sets. Results in this and all other panels are based on simulation epochs of 3 s, with 20-Hz periodic excitation applied; similar results were obtained with Poisson input trains. B: error index vs. EST and correlation. Different symbols correspond to different numbers of overlaps (circles: 0 overlaps; triangles: 2 overlaps; squares: 4 overlaps; diamonds: 5 overlaps). The error index values are color coded such that warm colors, which occur here for moderate EST/correlation levels, correspond to high error rates and cool colors, visible here for low and high EST/correlation levels, correspond to low error rates. Note that for moderate EST, GPi firing is bursty, whereas for high EST, it is high-frequency and more tonic. C and D: the error index is decomposed into missed spikes (C) and bad spikes (D), and the dependence of each is plotted against EST and correlation. In these plots, the color bars represent the total number of occurrences observed within each 3-s simulation.

Multiple forms of experimental observations suggest that at least a subset of the excitatory inputs to the pallidal receiving areas of the thalamus arise from cortical areas (Guillery and Sherman 2002a–c; Haber 2003). Inputs to thalamic relay cells have been classified as drivers and modulators, the former of which act on ionotropic receptors and directly induce firing and the latter of which are detectable primarily through their indirect influence on TC responses to driving signals, which may arise through action on metabotropic receptors (Sherman and Guillery 1998). Evidence has been amassed that, at least in certain thalamic areas, the excitatory drivers of thalamic relay cells represent copies of motor control signals sent from the cortex. This has led to the idea that a primary function of thalamocortical relay in the motor thalamus is to help coordinate cortical motor processing by sharing information on both motor instructions and sensory observations (Guillery and Sherman 2002b). Inhibitory inputs, on the other hand, have been posited to act as modulatory signals to TC cells (Smith and Sherman 2002). Our hypothesis about the mechanism through which parkinsonian conditions and DBS impact motor performance is consistent with this viewpoint. Specifically, our computational analysis demonstrates that differing inhibitory basal ganglia output patterns, as arise in differing nonparkinsonian and parkinsonian conditions, lead to significant differences in the ability of TC cells to relay information transmitted to these cells from other brain regions. Interestingly, we have found similar TC relay in nonparkinsonian conditions as in the parkinsonian case with therapeutic DBS. This finding suggests a way in which high-frequency stimulation of STN could restore some measure of "normal" function to the basal ganglia-thalamocortical circuit despite its profound impact on GPi activity patterns. In fact, our error index scores for the therapeutic DBS case are even lower than those based on nonparkinsonian data. We are not suggesting, however, that TC relay in isolation could be a direct measure of expected motor performance but rather that the impact of GPi firing on TC relay offers one of what are likely many mechanisms through which the effects of DBS occur. Moreover, functions that have been hypothesized to be performed by temporally precise GPi firing in normal conditions, such as termination of motor behaviors (Mink 1996; Mink and Thach 1993; Nambu et al. 2002), would presumably be disrupted by DBS, and the impacts of this disruption, as well as relevant compensation mechanisms, remain to be characterized.

The modulatory impact of GPi inhibition on TC relay in our model is mediated by the inactivation/deinactivation of spike-promoting currents, namely a sodium current (INa) and a low-threshold calcium current (IT) (see also Rubin and Terman 2004). In normal conditions, with a relatively constant inhibition from the basal ganglia to TC cells, the TC cells act in tonic mode to relay excitatory inputs, with little IT participation (Rubin and Terman 2004). In parkinsonian conditions, however, bursty inhibition leads to two effects that compromise relay, both of which are evident in Fig. 4. First, relatively abrupt rises in inhibition lead to failed relay, when excitatory inputs arrive before INa and IT can deinactivate sufficiently to overcome the inhibition. Second, subsequent deinactivation of INa and IT followed by relatively abrupt release from inhibition leads to activity bursts that do not represent excitatory input content. Finally, in therapeutic DBS conditions, although the level of inhibition to TC cells is generally higher than normal, the lack of inhibitory rhythmicity leaves IT relatively constant and therefore eliminates most rebound bursts. Moreover, the added inhibition maintains INa and IT at partially deinactivated levels, such that the added availability of these currents helps counter the direct tendency of synaptic inhibition to shunt spikes, which could otherwise lead to relay failure (Rubin and Terman 2004).

Within the literature, substantial evidence has been presented that DBS of the subthalamic nucleus (STN) suppresses or reduces somatic activity (Beurrier et al. 2001; Filali et al. 2004; Magarinos-Ascone et al. 2002; Meissner et al. 2005; Tai et al. 2003; Welter et al. 2004). Often this has been interpreted to mean that the efficacy of DBS stems from such suppression, through a removal of excessive inhibition from the targets of basal ganglia outputs (Benabid et al. 2001; Benazzouz et al. 2000; Obeso et al. 2000; Olanow and Brin 2001; Olanow et al. 2000). While this hypothesis is consistent with classical, firing-rate-based representations of information flow through the basal ganglia (Albin et al. 1989; Wichmann and DeLong 1996), it is at odds with a variety of studies showing that DBS activates areas downstream from its target site (Anderson et al. 2003; Hashimoto et al. 2003; Hershey et al. 2003; Jech et al. 2001; McIntyre et al. 2004; Miocinovic et al. 2006; Paul et al. 2000; Windels et al. 2000, 2003). From a rate-based perspective, the idea that both parkinsonian and DBS conditions lead to increased thalamic inhibition represents a paradox. This paradox may be resolved, however, by considering that DBS changes the pattern, along with the firing rate, of inhibitory inputs to thalamus (Foffani and Priori 2006; Foffani et al. 2003; Garcia et al. 2005; Meissner et al. 2005; Montgomery and Baker 2000; Rubin and Terman 2004; Terman et al. 2002; Vitek 2002). There have been some previous computational efforts to explore the details of how these varying firing patterns emerge and depend on a variety of neuronal and stimulus-related parameters (Grill et al. 2004; McIntyre et al. 2004). Building on one previous study (Rubin and Terman 2004), the work presented in this paper fills in important details of how specific changes in activity patterns induced downstream from the STN-DBS site can lead to changes in information processing through the basal ganglia-thalamocortical loop (Leblois et al. 2006; Rubchinsky et al. 2003), which would likely impact motor behavior. Interestingly, local field potential recordings from the STN of Parkinson's disease patients have shown that movement-related 300-Hz oscillations are restored by levodopa administration and contribute to related motor improvement (Foffani et al. 2003). These findings have led to the idea that high-frequency STN DBS could produce clinical benefits not only by disrupting pathological oscillations but also by driving this rhythm, at twice stimulation frequency, and thereby supporting motor processing (Foffani and Priori 2006). Our results tie in nicely with these ideas, offering one suggestion of how high-frequency oscillations in STN output could be conducive to normal information flow downstream in the network from the site of stimulation.

The incorporation of experimentally recorded GPi firing patterns into our model represents a significant advance in the computational exploration of the mechanism underlying the efficacy of DBS. As this work now stands, it represents a demonstration that in at least some subset of cells, the GPi firing pattern under parkinsonian conditions could significantly compromise TC cell relay fidelity, whereas the change in GPi firing pattern induced by therapeutic DBS could restore relay fidelity. While alterations in activity undoubtedly vary across different cells even within the same setting, the existence of changes of this type in even a subset of cells could be sufficient to affect downstream processing. One important limitation of our study, however, was the lack of simultaneous multi-unit recordings from GPi. While we were able to use computational techniques to generate simulated multi-unit inputs (Fig. 6) and to consider the impact of the experimental data on a multi-cell target population (Fig. 5), future work involving simultaneously recorded data will be performed to allow for a more direct and in-depth consideration of the activity patterns across the GPi network and the thalamic responses that these patterns induce.

Another limitation of our study was the use of a relatively simple TC cell model. We felt that it was appropriate to perform our analysis on a model that, while based on experimental data (Rubin and Terman 2004; Sohal and Huguenard 2002; Sohal et al. 2000), did not introduce undue complexity. The promising results of this study lay the groundwork for future efforts to further evaluate TC relay fidelity during DBS in more detailed, multi-compartmental TC cell models (Destexhe et al. 1998; Emri et al. 2000). In addition, the concepts considered in this work should be further explored in network models that account for the interactions of TC cells and the GPi with other brain areas, such as the thalamic reticular network (Destexhe et al. 1996; Golomb et al. 1994). In this vein, we have not assigned a specific source to the excitatory signals to the TC cell in our model, and we therefore have not considered any source-specific patterns that might be present in these signals but rather have used periodic and stochastic excitatory inputs consistent with past work (Rubin and Terman 2004). A periodic excitatory signal is nonbiological but provides the cleanest test of the effect of GPi inhibitory patterns on relay. For our stochastic excitatory inputs, we felt that, in the absence of source-specific information, it was reasonable to select the most widely used and generic form of stochastic neuronal spike train, namely a Poisson spike train. Our results do not strongly depend on input period or Poisson input rate as long as the input frequency is sufficiently high that there is little intrinsic (i.e., not input-driven) TC firing. Our results do weaken if individual excitatory input durations are made to be shorter than a few milliseconds. Thus in a situation where there is an extremely tight synchronization of excitatory inputs to a TC cell, relay fidelity differences between scenarios might be suppressed. We use a simple error index to quantify TC cell relay fidelity and a straightforward calculation of elevated spiking time to quantify the burstiness of the inhibitory signals from GPi cells. While it is possible that a more sophisticated measure would completely separate all model outputs in a single dimension, it is rather remarkable that the two simple measures used here distinguish the normal, parkinsonian, sub-therapeutic DBS, and therapeutic DBS cases so well (Fig. 3).

As noted in the preceding text, it is highly likely that cortical areas participate in driving the TC cells targeted by basal ganglia outputs. It is possible that there could be some relationship between the cortical input to these TC cells and the cortical input that enters the basal ganglia through the striatum or the subthalamic nucleus, which could then be reflected in an interdependence of the inhibitory and excitatory signals that the relevant TC cells receive. On the other hand, such a relationship might be diluted by the multi-synaptic nature of the cortico-basal-ganglia-thalamic pathway, and if DBS were applied, its effect would likely be diminished by the strong DBS signal to the STN. Given the design of our study, we did not have access to the excitatory inputs to TC cells that were present when the GPi signals were recorded experimentally. Thus in our simulations with experimental GPi signals, we could not consider the effects of any correlations between cortical signals to TC cells and cortical signals propagating through the basal ganglia, nor did we find sufficiently precise experimental characterization of such correlations to justify including them in our purely computational experiments. While it is outside the scope of this work, it would be interesting for future efforts to explore how motor signals emerging from the basal ganglia and inputs from other sources interact to shape thalamic firing patterns and how these interactions are modulated by parkinsonian rhythms. Finally, we have not analyzed the downstream responses to changes in relay across a TC cell population nor their relevance for motor performance. Future experimental work to flesh out the details of the functionally relevant inputs to, and output targets of, the thalamic cells receiving inhibition from the GPi would prove useful in pursuing such extensions of this work.

A C K N O W L E D G M E N T S

The authors thank T. Hashimoto and J. Zhang for acquiring the experimental data used in this study, G. Russo and W. Xu for processing the experimental data, and P. Hahn for assistance in selecting representative GPi recordings.

G R A N T S

The research was partially supported by the National Science Foundation under agreement 0112050 and grants DMS0414023 and DMS0716936 to J. E. Rubin and DMS0514356 to D. Terman and by the National Institute of Neurological Disorders and Stroke Grants NS-37019 to J. L. Vitek and NS-047388 to C. C. McIntyre.

R E F E R E N C E S

Albin R, Young A, Penney J. The functional anatomy of basal ganglia disorders. Trends Neurosci 12: 366–375, 1989.

Anderson M, Postpuna N, Ruffo M. Effects of high-frequency stimulation in the internal globus pallidus on the activity of thalamic neurons in the awake monkey. J Neurophysiol 89: 1150–1160, 2003.

Benabid AL, Deuschl G, Lang A, Lyons K, Rezai A. Deep brain stimulation for Parkinson's disease. Mov Disord 21, Suppl 14: S168–S170, 2006.

Benabid A, Koudsie A, Benazzouz A, Piallat B, Krack P, Limousin-Dowsey P, Lebas J, Pollak P. Deep brain stimulation for Parkinson's disease. Adv Neurol 86: 405–412, 2001.

Benazzouz A, Gao D, Ni Z, Piallat B, Bouali-Benazzouz R, Benabid A. Effect of high-frequency stimulation of the subthalamic nucleus on the neuronal activities of the substantia nigra pars reticulata and the ventrolateral nucleus of the thalamus. Neuroscience 99: 289–295, 2000.

Bergman H, Wichmann T, Karmon B, DeLong M. The primate subthalamic nucleus. II. Neuronal activity in the MPTP model of parkinsonism. J Neurophysiol 72: 507–520, 1994.

Beurrier C, Bioulac B, Audin J, Hammond C. High-frequency stimulation produces a transient blockade of voltage-gated currents in subthalamic neurons. J Neurophysiol 85: 1351–1356, 2001.

Brown P, Oliviero A, Mazzone P, Insola A, Tonali P, Lazzaro VD. Dopamine dependency of oscillations between subthalamic nucleus and pallidum in Parkinson's disease. J Neurosci 21: 1033–1038, 2001.

Castro-Alamancos M, Calcagnotto M. High-pass filtering of corticothalamic activity by neuromodulators released in the thalamus during arousal: in vitro and in vivo. J Neurophysiol 85: 1489–1497, 2001.

Destexhe A, Contreras D, Steriade M, Sejnowski T, Huguenard J. In vivo, in vitro, and computational analysis of dendritic calcium currents in thalamic reticular neurons. J Neurosci 16: 169–185, 1996.

Destexhe A, Neubig M, Ulrich D, Huguenard J. Dendritic low-threshold calcium currents in thalamic relay cells. J Neurosci 18: 3574–3588, 1998.

Elder C, Hashimoto T, Zhang J, Vitek J. Chronic implantation of deep brain stimulation leads in animal models of neurological disorders. J Neurosci Methods 142: 11–16, 2005.

Emri Z, Antal K, Toth T, Cope D, Crunelli V. Backpropagation of the delta oscillation and the retinal excitatory postsynaptic potential in a multi-compartment model of thalamocortical neurons. Neuroscience 98: 111–127, 2000.

Ermentrout B. Simulating, Analyzing, and Animating Dynamical Systems. Philadelphia, PA: SIAM, 2002.

Filali M, Hutchison W, Palter V, Lozano A, Dostrovsky J. Stimulation-induced inhibition of neuronal firing in human subthalamic nucleus. Exp Brain Res 156: 274–281, 2004.

Foffani G, Priori A. Deep brain stimulation in Parkinson's disease can mimic the 300 Hz subthalamic rhythm. Brain 129: e59, 2006.

Foffani G, Priori A, Egidi M, Rampini P, Tamma F, Caputo E, Moxon K, Cerutti S, Barbieri S. 300-Hz subthalamic oscillations in Parkinson's disease. Brain 126: 2153–2163, 2003.


Galan R, Fourcaud-Trocme N, Ermentrout G, Urban N. Correlation-induced synchronization of oscillations in olfactory bulb neurons. J Neurosci 26: 3646–3655, 2006.

Garcia L, D'Alessandro G, Bioulac B, Hammond C. High-frequency stimulation in Parkinson's disease: more or less? Trends Neurosci 28: 209–216, 2005.

Golomb D, Wang X-J, Rinzel J. Synchronization properties of spindle oscillations in a thalamic reticular nucleus model. J Neurophysiol 72: 1109–1126, 1994.

Grill W, Snyder A, Miocinovic S. Deep brain stimulation creates an informational lesion of the stimulated nucleus. Neuroreport 15: 1137–1140, 2004.

Guillery R, Sherman SM. The role of thalamus in the flow of information to the cortex. Philos Trans R Soc Lond B Biol Sci 357: 1695–1708, 2002a.

Guillery R, Sherman SM. The thalamus as a monitor of motor outputs. Philos Trans R Soc Lond B Biol Sci 357: 1809–1821, 2002b.

Guillery R, Sherman SM. Thalamic relay functions and their role in corticocortical communication: generalizations from the visual system. Neuron 33: 163–175, 2002c.

Haber S. The primate basal ganglia: parallel and integrative networks. J Chem Neuroanat 26: 317–330, 2003.

Hashimoto T, Elder C, Okun M, Patrick S, Vitek J. Stimulation of the subthalamic nucleus changes the firing pattern of pallidal neurons. J Neurosci 23: 1916–1923, 2003.

Hashimoto T, Elder C, Vitek J. A template subtraction method for stimulus artifact removal in high frequency deep brain stimulation. J Neurosci Methods 113: 181–186, 2002.

Hershey T, Revilla F, Wernle A, McGee-Minnich L, Antenor J, Videen T, Dowling J, Mink J, Perlmutter J. Cortical and subcortical blood flow effects of subthalamic nucleus stimulation in PD. Neurology 61: 816–821, 2003.

Hurtado J, Gray C, Tamas L, Sigvardt K. Dynamics of tremor-related oscillations in the human globus pallidus: a single case study. Proc Natl Acad Sci USA 96: 1674–1679, 1999.

Hurtado J, Rubchinsky L, Sigvardt K, Wheelock V, Pappas C. Temporal evolution of oscillations and synchrony in GPi/muscle pairs in Parkinson's disease. J Neurophysiol 93: 1569–1584, 2005.

Jahnsen H, Llinas R. Electrophysiological properties of guinea pig thalamic neurons: an in vitro study. J Physiol 349: 205–226, 1984a.

Jahnsen H, Llinas R. Ionic basis for the electro-responsiveness and oscillatory properties of guinea pig thalamic neurons in vitro. J Physiol 349: 227–247, 1984b.

Jech R, Urgosik D, Tintera J, Nebuzelsky A, Krakensy J, Liscak R, Roth J, Ruzicka E. Functional magnetic resonance imaging during deep brain stimulation: a pilot study in four patients with Parkinson's disease. Mov Disord 16: 1126–1132, 2001.

Kapfer C, Glickfield L, Atallah B, Scanziani M. Supralinear increase of recurrent inhibition during sparse activity in the somatosensory cortex. Nat Neurosci 10: 743–753, 2007.

Lavin A, Grace A. Modulation of dorsal thalamic cell activity by the ventral pallidum: its role in the regulation of thalamocortical activity by the basal ganglia. Synapse 18: 104–127, 1994.

Leblois A, Boraud T, Meissner W, Bergman H, Hansel D. Competition between feedback loops underlies normal and pathological dynamics in the basal ganglia. J Neurosci 26: 3567–3583, 2006.

Legendy C, Salcman M. Bursts and recurrences of bursts in the spike trains of spontaneously active striate cortex neurons. J Neurophysiol 53: 926–939, 1985.

Levy R, Hutchison W, Lozano A, Dostrovsky J. High-frequency synchronization of neuronal activity in the subthalamic nucleus of parkinsonian patients with limb tremor. J Neurosci 20: 7766–7775, 2003.

Magarinos-Ascone C, Pazo J, Macadar O, Buno W. High-frequency stimulation of subthalamic nucleus silences subthalamic neurons: a possible cellular mechanism of Parkinson's disease. Neuroscience 115: 1109–1117, 2002.

Magnin M, Morel A, Jeanmonod D. Single-unit analysis of the pallidum, thalamus, and subthalamic nucleus in parkinsonian patients. Neuroscience 96: 549–564, 2000.

McIntyre C, Grill W, Sherman D, Thakor N. Cellular effects of deep brain stimulation: model-based analysis of activation and inhibition. J Neurophysiol 91: 1457–1469, 2004.

Meissner W, Leblois A, Hansel D, Bioulac B, Gross C, Benazzouz A, Boraud T. Subthalamic high frequency stimulation resets subthalamic firing and reduces abnormal oscillations. Brain 128: 2372–2382, 2005.

Mink J. The basal ganglia: focused selection and inhibition of competing motor programs. Prog Neurobiol 50: 381–425, 1996.

Mink J, Thach W. Basal ganglia intrinsic circuits and their role in behavior. Curr Opin Neurobiol 3: 950–957, 1993.

Miocinovic S, Parent M, Butson C, Hahn P, Russo G, Vitek J, McIntyre C. Computational analysis of subthalamic nucleus and lenticular fasciculus activation during therapeutic deep brain stimulation. J Neurophysiol 96: 1569–1580, 2006.

Montgomery E Jr, Baker K. Mechanism of deep brain stimulation and future technical developments. Neurol Res 22: 259–266, 2000.

Nambu A, Tokuno H, Takada M. Functional significance of the cortico-subthalamo-pallidal "hyperdirect" pathway. Neurosci Res 43: 111–117, 2002.

Nini A, Feingold A, Slovin H, Bergman H. Neurons in the globus pallidus do not show correlated activity in the normal monkey, but phase-locked oscillations appear in the MPTP model of parkinsonism. J Neurophysiol 74: 1800–1805, 1995.

Obeso J, Rodriguez-Oroz M, Rodriguez M, Macias R, Alvarez L, Guridi J, Vitek J, DeLong M. Pathophysiologic basis of surgery for Parkinson's disease. Neurology 55, Suppl 6: S7–S12, 2000.

Olanow W, Brin M. Surgery for Parkinson's disease: a physician's perspective. Adv Neurol 86: 421–433, 2001.

Olanow W, Brin M, Obeso J. The role of deep brain stimulation as a surgical treatment for Parkinson's disease. Neurology 55, Suppl 6: S60–S66, 2000.

Paul G, Reum T, Meissner W, Marburger A, Sohr R, Morgenstern R, Kupsch A. High frequency stimulation of the subthalamic nucleus influences striatal dopaminergic metabolism in naive rats. Neuroreport 11: 441–444, 2000.

Raz A, Vaadia E, Bergman H. Firing patterns and correlations of spontaneous discharge of pallidal neurons in the normal and tremulous 1-methyl-4-phenyl-1,2,3,6 tetrahydropyridine vervet model of parkinsonism. J Neurosci 20: 8559–8571, 2000.

Rinzel J. Bursting oscillations in an excitable membrane model. In: Ordinary and Partial Differential Equations, edited by Sleeman B, Jarvis R. New York: Springer-Verlag, 1985, p. 304–316.

Rubchinsky L, Kopell N, Sigvardt K. Modeling facilitation and inhibition of competing motor programs in basal ganglia subthalamic nucleus-pallidal circuits. Proc Natl Acad Sci USA 100: 14427–14432, 2003.

Rubin J, Josic K. The firing of an excitable neuron in the presence of stochastic trains of strong inputs. Neural Comp 19: 1251–1294, 2007.

Rubin JE, Terman D. High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model. J Comput Neurosci 16: 211–235, 2004.

Sherman SM, Guillery R. On the actions that one nerve cell can have on another: distinguishing "drivers" from "modulators." Proc Natl Acad Sci USA 95: 7121–7126, 1998.

Smith G, Sherman SM. Detectability of excitatory versus inhibitory drive in an integrate-and-fire-or-burst thalamocortical relay neuron model. J Neurosci 22: 10242–10250, 2002.

Sohal V, Huguenard J. Reciprocal inhibition controls the oscillatory state in thalamic networks. Neurocomp 44: 653–659, 2002.

Sohal V, Huntsman M, Huguenard J. Reciprocal inhibitory connections regulate the spatiotemporal properties of intrathalamic oscillations. J Neurosci 20: 1735–1745, 2000.

Steriade M, Contreras D, Amzica F. The thalamocortical dialogue during wake, sleep, and paroxysmal oscillations. In: Thalamus, edited by Steriade M, Jones E, McCormick D. Amsterdam: Elsevier, 1997, p. 213–294.

Tai CH, Boraud T, Bezard E, Bioulac B, Gross C, Benazzouz A. Electrophysiological and metabolic evidence that high-frequency stimulation of the subthalamic nucleus bridles neuronal activity in the subthalamic nucleus and the substantia nigra reticulata. FASEB 17: 1820–1830, 2003.

Tateno T, Robinson H. Rate coding and spike-time variability in cortical neurons with two types of threshold dynamics. J Neurophysiol 95: 2650–2663, 2006.

Terman D, Rubin J, Yew A, Wilson C. Activity patterns in a model for the subthalamopallidal network of the basal ganglia. J Neurosci 22: 2963–2976, 2002.

Vitek J. Mechanisms of deep brain stimulation: excitation or inhibition. Mov Disord 17, Suppl 3: S69–S72, 2002.

Welter M-L, Houeto J-L, Bonnet A-M, Bejjani P-B, Mesnage V, Dormont D, Navarro S, Cornu P, Agid Y, Pidoux B. Effects of high-frequency stimulation on subthalamic neuronal activity in parkinsonian patients. Arch Neurol 61: 89–96, 2004.

Wichmann T, Bergman H, Starr P, Subramanian T, Watts R, DeLong M. Comparison of MPTP-induced changes in spontaneous neuronal discharge in the internal pallidal segment and in the substantia nigra pars reticulata in primates. Exp Brain Res 125: 397–409, 1999.

Wichmann T, DeLong MR. Functional and pathophysiological models of thebasal ganglia. Curr Opin Neurobiol 6: 751–758, 1996.

Windels F, Bruet N, Poupard A, Feuerstein C, Bertrand A, Savasta M.Influence of the frequency parameter on extracellular glutamate and -aminobu-

tyric acid in substantia nigra and globus pallidus during electrical stimulation ofsubthalamic nucleus in rats. J Neurosci Res 72: 259–267, 2003.

Windels F, Bruet N, Poupard A, Urbain N, Chouvet G, Feuerstein C,Savasta M. Effects of high frequency stimulation of subthalamic nucleus onextracellular glutamate and GABA in substantia nigra and globus pallidus inthe normal rat. Eur J Neurosci 12: 4141–4146, 2000.

1492 Y. GUO, J. E. RUBIN, C. C. McINTYRE, J. L. VITEK, AND D. TERMAN

J Neurophysiol • VOL 99 • MARCH 2008 • www.jn.org

on April 7, 2008

jn.physiology.orgD

ownloaded from

Page 93: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

SIAM J. Applied Dynamical Systems, Vol. 4, No. 2, pp. 217–248. © 2005 Society for Industrial and Applied Mathematics

Existence and Stability of Standing Pulses in Neural Networks: I. Existence∗

Yixin Guo† and Carson C. Chow‡

Abstract. We consider the existence of standing pulse solutions of a neural network integro-differential equation. These pulses are bistable with the zero state and may be an analogue for short term memory in the brain. The network consists of a single layer of neurons synaptically connected by lateral inhibition. Our work extends the classic Amari result by considering a nonsaturating gain function. We consider a specific connectivity function where the existence conditions for single pulses can be reduced to the solution of an algebraic system. In addition to the two localized pulse solutions found by Amari, we find that three or more pulses can coexist. We also show the existence of nonconvex “dimpled” pulses and double pulses. We map out the pulse shapes and maximum firing rates for different connection weights and gain functions.

Key words. integro-differential equations, integral equations, standing pulses, neural networks, existence

AMS subject classifications. 34A36, 37N25, 45G10, 92B20

DOI. 10.1137/040609471

1. Introduction. The temporary storage of information in the brain for short periods of time is called working memory [6]. It is known that the firing activity of certain neurons in the cortex is correlated with working memory states, but it is not known what neural mechanisms are responsible for maintaining the persistent neural activity [27, 30, 50, 73]. Experiments find that a specific set of neurons become activated by a memory cue. They fire at a rate above their background levels while the memory is being held and then return to baseline levels after the memory is extinguished. When the neurons are active their firing rates are low compared to their maximal possible rates. Cortical neurons are generally not intrinsically bistable and do not fire unless given an input that is above a threshold [14, 49, 68]. It has been suggested that recurrent excitatory inputs in a network could be responsible for maintaining the neural activity observed during memories [27, 30, 33, 49, 67, 68, 72, 73, 76, 77, 36, 46]. The persistent activity is bistable with the background state. To match experimental data, a memory network must have the ability to maintain persistent activity in a selected subset of the neurons while keeping the firing rates low compared to their possible maximum.

Mathematically, this question has been probed by examining the existence and stability of localized persistent stationary solutions of neural network equations [3, 17, 21, 23, 24, 33, 59, 64, 65, 76, 77]. These localized states have been dubbed “bump attractors” [45, 43, 46, 36, 73, 76]. In a one dimensional network they have also been called standing pulses [23, 59]. While these simple networks do not capture all of the biophysical features of cortical circuits, they do capture the qualitative behavior of working memory.

∗Received by the editors June 3, 2004; accepted for publication (in revised form) by D. Terman September 21, 2004; published electronically April 14, 2005. This work was supported by the National Institute of Mental Health, the A. P. Sloan Foundation, and the National Science Foundation under agreement 0112050. http://www.siam.org/journals/siads/4-2/60947.html
†Department of Mathematics, The Ohio State University, Columbus, OH 43210 ([email protected]).
‡Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260 ([email protected]).

The coarse-grained averaged activity of a neural network can be described by [3, 23, 33, 76, 77]

    τ ∂u(x, t)/∂t = −u(x, t) + ∫_Ω w(x − y) f[u(y, t)] dy,    (1.1)

where u(x, t) is the synaptic input to neurons located at position x ∈ (−∞, ∞) at time t ≥ 0, and it represents the level of excitation or amount of input to a neural element. The connection function w(x) determines the connections between neurons. The nonnegative and monotonically nondecreasing gain function f[u] denotes the firing rate at x at time t. We can set the synaptic decay time τ to unity without loss of generality.

In his classic work, Amari [3] considered (1.1) with a “Mexican hat” connection function (i.e., excitation locally and inhibition distally). While this is not biologically realistic for a single layer of neurons, it has been argued that networks of combined excitatory and inhibitory neurons with biophysically plausible connections can effectively mimic Mexican hat connectivity under certain conditions [23, 39, 59, 74]. Neurophysiological recordings indicate that the strength of excitatory connections between neurons generally decreases with spatial distance [15, 16, 19, 26, 71]. Recordings in inhibitory neurons involved in working memory task experiments demonstrate that the range of effective inhibition between excitatory neurons is broader than the excitation [16, 26]. This does not necessarily imply that the inhibitory connections have a broader range. It implies only that the net effect of excitatory neurons projecting onto local inhibitory neurons, which project back onto excitatory neurons, has a broader effect. Hence, in a cortical network, the combined effect of excitatory and inhibitory connections on the excitatory neurons can be approximated locally by a Mexican hat.

In his paper [3], Amari also made the assumption that f[u] is the Heaviside function. This approximation made (1.1) analytically tractable, and he was able to find a host of solutions, one of them being localized stable pulses that are bistable with zero activity. Kishimoto and Amari [41] later showed these solutions also existed for a smooth sigmoidal gain function that saturated quickly.

Later work considered two populations [58, 59], various connectivity functions [17, 45, 65], and two dimensions [38, 46]. However, all used either the Heaviside gain function or a saturating sigmoidal gain function, implying that neurons start to fire when their inputs exceed threshold and saturate to their maximum rate quickly. Yet in the brain persistently active neurons fire at rates far below their saturated maximum [12, 13, 26, 72, 73]. How a network can maintain persistent activity at low firing rates is not fully understood [10, 13, 33, 45, 47, 48, 63, 72, 73].

The problem of persistent activity at low firing rates cannot be addressed with a quickly saturating gain function. To circumvent this limitation, we use a nonsaturating piecewise-linear gain function with a jump (see Figure 2) having the form

    g[u] = α(u − uT) + β  if u > uT,
    g[u] = 0              if u ≤ uT.    (1.2)

When the gain α is zero, (1.2) becomes the Heaviside function scaled by β. We note that others have considered piecewise-linear gain functions but without the jump [7, 37, 69]. In these cases, persistent activity is not possible unless the threshold is set to zero and the gain to unity, where a multistable “line attractor” is possible [69].

In this paper we show the existence of isolated convex standing pulse solutions (single pulses) of (1.1). We consider a single one dimensional layer of neurons. Although this configuration is a major simplification, it has been shown that such networks exhibit features present in more realistic architectures. We investigate how the pulse solutions change when parameters of the gain function and the connection function change. We demonstrate the coexistence of two single-pulse solutions as seen by Amari [3] and give conditions where more than two pulse solutions can coexist. We also show the existence of nonconvex “dimple-pulse” solutions and double-pulse solutions. We derive the stability criteria for stable pulses in an accompanying paper [35].

2. Neural network equations. We study a neural network (1.1) with lateral inhibition or Mexican hat type connection function w(x) for which excitatory connections dominate for proximal neurons and inhibitory connections dominate for distal neurons. In general, w(x) satisfies the following six properties:
1. w(x) is symmetric; i.e., w(−x) = w(x).
2. w(x) > 0 on an interval (−x0, x0), and w(−x0) = w(x0) = 0.
3. w(x) is decreasing on (0, xm].
4. w(x) < 0 on (−∞, −x0) ∪ (x0, ∞).
5. w(x) is continuous on R, and w(x) is integrable on R.
6. w(x) has a unique minimum xm on R+ such that xm > x0, and w(x) is strictly increasing on (xm, ∞).
For concreteness, we consider the connection function given by

    w(x) = A e^{−a|x|} − e^{−|x|},    (2.1)

where a > 1 and A > 1 guarantee that w(x) obeys properties 1–6. An example of (2.1) is shown in Figure 1. This connection function is of the lateral inhibition or Mexican hat class. Perhaps, given the cusp at zero, it should be called a “wizard hat” function.

For connection function (2.1), x0 = ln A/(a − 1) and xm = ln(aA)/(a − 1). The area of w(x) above and below the x-axis represents the net excitation and inhibition in the network, respectively. The total area of (2.1) is 2(A/a − 1). The amount of excitation and inhibition depends on the ratio of A to a. If A > a, i.e., 2(A/a − 1) > 0, excitation dominates in the network, and if 2(A/a − 1) < 0, inhibition dominates. In the balanced case, A = a; i.e., 2(A/a − 1) = 0.
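These quantities are easy to check numerically. The following minimal sketch is not part of the original paper; it assumes NumPy and SciPy are available and simply evaluates the connection function (2.1), its zero crossing x0, its minimum location xm, and the net area 2(A/a − 1) for the parameter values used in Figure 1.

    import numpy as np
    from scipy.integrate import quad

    A, a = 2.8, 2.6                      # connection parameters from Figure 1

    def w(x):
        # lateral-inhibition ("wizard hat") connection function (2.1)
        return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))

    x0 = np.log(A) / (a - 1)             # zero crossing of w(x)
    xm = np.log(a * A) / (a - 1)         # location of the minimum of w(x) on R+
    total_area, _ = quad(w, -np.inf, np.inf)

    # total_area should agree with the closed form 2(A/a - 1)
    print(x0, xm, total_area, 2 * (A / a - 1))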

The gain function (1.2) can be written as

    f[u] = [α(u − uT) + β] Θ(u − uT),    (2.2)

where Θ(u − uT) is the Heaviside function such that

    Θ(u − uT) = 1 if u > uT, and 0 otherwise.    (2.3)

The gain function (2.2) does not saturate with a positive slope α. Without loss of generality, we set β = 1. The gain function (2.2) turns into the Heaviside function when α = 0 (see Figure 2).

Figure 1. Connection function with A = 2.8, B = 1, a = 2.6, b = 1.

Figure 2. Piecewise-linear gain function.

A stationary solution of (1.1) satisfies the equilibrium equation

    u(x) = ∫_{−∞}^{∞} w(x − y) f[u(y)] dy.    (2.4)

An example of a working memory state can be seen by considering constant solutions of (2.4). For u(x) = u0, the integral equation becomes

    u0 = f[u0] ∫_{−∞}^{∞} w(y) dy.    (2.5)

Using (2.1), the constant solution satisfies

    u0 = w0 f[u0],    (2.6)

where w0 = 2(A/a − 1). From (2.6), we immediately see that u0 = 0 is a solution. In fact, zero is a solution of (2.4) for any positive threshold uT and any values of parameters a, A, and α.

Inserting gain function (2.2) into (2.6) gives

    u0 = w0(α(u0 − uT) + 1),   u0 > uT.    (2.7)

The existence of constant solutions can be deduced graphically (see Figure 3). Nontrivial constant solutions (u0 > 0) require w0 > 0, which means that A/a > 1. Thus only for net excitatory connections are nontrivial constant solutions possible. A simple stability calculation shows that α < 1 is necessary for stability. Condition (2.7) shows that for uT < 0 and α < 1, there is a single stable solution. If uT > 0 and α > 0, there can be three solutions (see Figure 3). Two of the solutions, u0 = 0 and u0 > uT, are stable. The third solution at u0 = uT is unstable. For this parameter set, the network exhibits working memory–like behavior. The network is stable in the background state u0 = 0. A transient input from a memory cue can switch the network into the stable u0 > uT state which represents the memory. This is a state of persistent activity that is sustained by positive feedback. The state can be switched off to zero by another transient input when it is no longer needed. The next section will examine spatially localized pulses that have the same memory property.
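As a concrete illustration of this bistability (a sketch, not taken from the paper; the parameter values below are assumptions chosen so that A > a, uT > 0, and 0 < α < 1), the nonzero constant solution of (2.7) can be written in closed form because (2.7) is linear in u0.

    # locate the constant solutions of (2.6) for the piecewise-linear gain (2.2), beta = 1
    A, a, alpha, uT = 2.8, 2.6, 0.5, 0.05      # illustrative values (assumed)
    w0 = 2 * (A / a - 1)                       # integral of w(x) over R

    # u0 = 0 is always a solution; for u0 > uT, (2.7) gives u0*(1 - alpha*w0) = w0*(1 - alpha*uT)
    u_high = w0 * (1 - alpha * uT) / (1 - alpha * w0)
    if u_high > uT:
        # three constant solutions: 0 (stable), uT (unstable), u_high (stable for alpha < 1)
        print("bistable constant solutions: 0 and", u_high)
    else:
        print("only the rest state u0 = 0 exists")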

Figure 3. Bistability of constant solutions. The solid circles are the two stable constant solutions, and the open circle is an unstable solution. w0 is the integral of w(x) on its domain.

3. Single-pulse solutions. We prove the existence and determine the properties of localized stationary persistent states which we call single pulses. We consider single-pulse solutions of (2.4) that satisfy the following definition.

Definition 3.1. Single-pulse solution:

    u(x) > uT if x ∈ (−xT, xT), xT > 0;   u(x) = uT if x = −xT or x = xT;   u(x) < uT otherwise,

such that (u, u′, u′′, u′′′) → (0, 0, 0, 0) exponentially fast as x → ±∞ and u, u′ ∈ L1(R). u and u′ are bounded and continuous on R. u′′, u′′′, and u′′′′ are continuous everywhere except at x = ±xT and bounded everywhere on R. u(x) is symmetric with u′′(0) < 0; u(0) is the maximum between −xT and xT (Figure 4).

We note that there also exist pulses where u′′(0) > 0, which implies u(0) is no longer the maximum of the pulse. We call this solution a dimple pulse. The theorem below gives a range for which there is no single-pulse solution.

Theorem 3.2. For fixed a, A, and β = 1, there is no single-pulse solution if both α < a/(2A) and uT > 2A/a are true.

Proof. Substituting the exponential connection function (2.1) and gain function (2.2) into the integral equation (2.4) gives

    u(x) = ∫_{−∞}^{∞} (A e^{−a|x−y|} − e^{−|x−y|}) [α(u(y) − uT) + 1] Θ(u − uT) dy.    (3.1)

Suppose there is a single-pulse solution as defined above when both α < a/(2A) and uT > 2A/a are satisfied. For a single pulse to exist,

    u(0) = ∫_{−∞}^{∞} (A e^{−a|y|} − e^{−|y|}) [α(u(y) − uT) + 1] Θ(u − uT) dy
         = ∫_{−xT}^{xT} (A e^{−a|y|} − e^{−|y|}) [α(u(y) − uT) + 1] dy
         ≤ ∫_{−xT}^{xT} A e^{−a|y|} [α(u(y) − uT) + 1] dy,

where u(x) ≥ uT is continuous on I := [−xT, xT]. A e^{−a|y|} is integrable on I and A e^{−a|y|} ≥ 0 for all y ∈ I. By the mean value theorem for integrals, ∃ c0 ∈ I such that

    u(0) ≤ (α u(c0) − α uT + 1) ∫_{−xT}^{xT} A e^{−a|y|} dy ≤ αP u(c0) + (1 − αuT)P,    (3.2)

where P = ∫_{−∞}^{∞} A e^{−a|y|} dy = 2A/a. If αP < 1 and (1 − αuT) ≤ 0 are both true, then u(0) < u(c0), c0 ∈ I. However, this cannot be true because u(0) is the maximum of u(x) on R. From αP < 1, we get α < a/(2A). From (1 − αuT) ≤ 0, uT ≥ 1/α > 2A/a. Therefore, there is no single pulse when α < a/(2A) and uT > 2A/a are both true. In other words, if the gain is too low or the threshold too high, there cannot be a single pulse.
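For the parameter values used throughout the paper, the excluded region of Theorem 3.2 is easy to evaluate; the few lines below are a quick illustrative check, not a statement taken from the original text.

    # bounds of the nonexistence region in Theorem 3.2 for a = 2.6, A = 2.8
    a, A = 2.6, 2.8
    print(a / (2 * A))     # approx 0.464: no single pulse if alpha is below this ...
    print(2 * A / a)       # approx 2.154: ... and uT is simultaneously above this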

3.1. Strategy to construct a single-pulse solution. The general approach to studying integral equation (2.4) is to derive an associated differential equation whose solutions are also solutions of the integral equation. We derive the differential equation by using the Fourier transform

    F[g(x)] = ∫_{−∞}^{∞} g(x) e^{isx} dx,

where g ∈ L1(R) and s ∈ R, with the inverse Fourier transform

    g(x) = (1/2π) ∫_{−∞}^{∞} F[g(x)] e^{−isx} ds.

For our conditions on u(x) and w(x), an application of the Fourier transform to (2.4) is well defined and turns the convolution into a pointwise product

    F[u] = F[w] F[f[u]].    (3.3)

Computing F[w] in (3.3) gives

    F[u] = [(2aA + 2aAs² − 2a² − 2s²)/(a² + a²s² + s² + s⁴)] F[f].    (3.4)

Multiplying both sides of (3.4) by the denominator of the right-hand side and using the linear property of the Fourier transform with the identities F[u′′] = −s²F[u] and F[u′′′′] = s⁴F[u] gives

    F[u′′′′ − (a² + 1)u′′ + a²u] = F[2(aA − a²)f] + 2(aA − 1)F[s²f].    (3.5)

By the definitions of u(x) and f[u], F[u′′′′ − (a² + 1)u′′ + a²u] and F[2(aA − a²)f] are in L1(R).

Integrating F[s²f] by parts yields

    F[s²f] = ∫_{−∞}^{∞} s² e^{isx} f[u(x)] dx
           = ∫_{−xT}^{xT} s² e^{isx} f[u(x)] dx
           = f[u(xT)](−is e^{isxT} + is e^{−isxT}) + f′[u(xT⁻)] u′(xT)(e^{isxT} + e^{−isxT}) − ∫_{−xT}^{xT} e^{isx} (d²f[u(x)]/dx²) dx.

Note that f[u(x)] = 0 outside of (−xT, xT) and F[s²f] ∈ L1(R).

Applying the inverse Fourier transform to (3.5) gives a fourth order ODE

    u′′′′ − (a² + 1)u′′ + a²u = 2(aA − a²)f[u(x)] + 2(aA − 1){ f[u(xT)]M′(x) + f′[u(xT⁻)]u′(xT)M(x) − d²f[u(x)]/dx² },    (3.6)

where

    M′(x) = δ′(x − xT) + δ′(x + xT)   and   M(x) = δ(x − xT) + δ(x + xT).

Here δ(x) and δ′(x) are defined as

    δ(x) = (1/2π) ∫_{−∞}^{∞} e^{isx} ds,   ∫_{−∞}^{∞} f(x)δ′(x) dx = −f′(0).

linear differential equations:

u′′′′ − (a2 + 1)u′′ + a2u = 2a(A− a)f(u) − 2(aA− 1)d2f [u]

dx2if u > uT (region I),(3.7)

u′′′′ − (a2 + 1)u′′ + a2u = 0 if u < uT (region II and III).(3.8)

x

u

u

III I II

T

−x xT T

Figure 4. Single-pulse solution.

We label the solution of (3.6) on regions I, II, and III by uI(x), uII(x), and uIII(x), respec-tively. The solutions uI(x), uII(x), and uIII(x) must be connected together at −xT and xT

to get a continuous and smooth u(x) on R. uI(x) and uII(x) are connected at xT with fivematching conditions:

uI(xT ) = uT ,(3.9)

uII(xT ) = uT ,(3.10)

u′I(xT ) = u′II(xT ),(3.11)

u′′I (xT ) = u′′II(xT ) − 2(aA− 1)f(u(xT )),(3.12)

u′′′I (xT ) = u′′′II(xT ) − 2(aA− 1)f ′(u(xT ))u′(xT ).(3.13)

Page 101: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

EXISTENCE OF PULSES IN NEURAL NETWORKS 225

Conditions (3.9)–(3.11) are given by the continuity of u(x) and u′(x). Equation (3.13) isobtained by integrating (3.6) over a small neighborhood of xT . Equation (3.12) is obtainedby integrating (3.6) twice—first with respect to x and then over a small neighborhood of xT .u′′(x) and u′′′(x) are discontinuous at xT ; i.e., there are jumps in u′′(xT ) and u′′′(xT ). Sinceu(x) is symmetric, similar matching conditions apply to uI(x) and uIII(x) at −xT .

In region II, the solution for a single pulse that satisfies the boundary conditions is

uII(x) = Ee−ax + Fe−x, E, F ∈ R.(3.14)

By symmetry, the solution in region III is

uIII(x) = Eeax + Fex, E, F ∈ R.(3.15)

In region I, substituting f[u(x)] = α(u − uT) + 1 and d²f[u(x)]/dx² = αu′′(x) into (3.7) gives

    u′′′′ − (a² + 1 − 2α(aA − 1))u′′ + (a² − 2aα(A − a))u = 2a(A − a)(1 − αuT).    (3.16)

The eigenvalues of (3.16) are ω1, −ω1, ω2, −ω2, where

    ω1² = R + S,    (3.17)
    ω2² = R − S    (3.18)

with

    R = (a² + 1 − 2α(aA − 1))/2,    (3.19)
    S = √∆/2,    (3.20)

and

    ∆ = (a² + 1 − 2α(aA − 1))² − 4(a² − 2aα(A − a)).    (3.21)

Imposing symmetry and u′(0) = 0, the general solution of ODE (3.16) can be written in the form

    uI(x) = C(e^{ω1 x} + e^{−ω1 x}) + D(e^{ω2 x} + e^{−ω2 x}) + U0,    (3.22)

where

    U0 = 2(A − a)(β − αuT)/(a − 2α(A − a))

for x ∈ (−xT, xT), xT ∈ R, C, D ∈ C, and uI(x) ∈ R.

The single-pulse solutions of (3.6) are found by matching uI, uII, and uIII across xT and −xT using the matching conditions (3.9)–(3.13). We investigate the existence and shape of single-pulse solutions as we change the gain and connection function. For simplicity, we call xT the width of a pulse although it is actually the half width. The height of a single pulse is u(0). The firing rate of the pulse is given by f[u].


3.2. Solutions for the Amari case (α = 0). Amari found conditions for which single-pulse solutions exist for (2.4) with general Mexican hat connectivity and the Heaviside gain function [3]. Here, we revisit the Amari case for the exponential connection function (2.1). When α = 0 and β = 1, the gain function (2.2) becomes the Heaviside function Θ(u) and the term 2(aA − 1)d²f[u]/dx² does not appear in ODE (3.7). The eigenvalues (3.17) and (3.18) become simple and the solutions (3.22) and (3.14) are

    uI(x) = C(e^{ax} + e^{−ax}) + D(e^{x} + e^{−x}) + U0,    (3.23)
    uII(x) = E e^{−ax} + F e^{−x},    (3.24)

respectively. Applying conditions (3.9)–(3.13) to (3.23) and (3.24) yields the system

    E e^{−axT} + F e^{−xT} = uT,    (3.25)
    C(e^{axT} + e^{−axT}) + D(e^{xT} + e^{−xT}) + 2(A − a)β/a = uT,    (3.26)
    aC(e^{axT} − e^{−axT}) + D(e^{xT} − e^{−xT}) = −aE e^{−axT} − F e^{−xT},    (3.27)
    a²C(e^{axT} + e^{−axT}) + D(e^{xT} + e^{−xT}) = a²E e^{−axT} + F e^{−xT} − 2(aA − 1)β,    (3.28)
    a³C(e^{axT} − e^{−axT}) + D(e^{xT} − e^{−xT}) = −a³E e^{−axT} − F e^{−xT}.    (3.29)

The system (3.25)–(3.29) is linear in the coefficients C, D, E, and F, which can be solved in terms of xT:

    C = −(A/a) e^{−axT},   D = e^{−xT},   E = (A/a)(e^{axT} − e^{−axT}),   F = −(e^{xT} − e^{−xT}).

From these coefficients we arrive at the following proposition for single-pulse solutions.

Proposition 3.3. There are two pulse solutions when (A/a − 1) < uT ≤ ∫_0^{(ln A)/(a−1)} w(x) dx for A > a, and when 0 ≤ uT ≤ ∫_0^{(ln A)/(a−1)} w(x) dx for A < a.

Proof. Substituting E and F into (3.25) (or C and D into (3.26)) gives an existence condition for a single pulse, Φ(xT) = uT, where

    Φ(x) = (A/a)(1 − e^{−2ax}) − (1 − e^{−2x}).    (3.30)

We term Φ(x) the “existence function.” Two examples are shown in Figures 6 and 7 where the curve Φ(x) crosses uT twice, implying that there are two single-pulse solutions (an example is shown in Figure 5).

Since

    lim_{x→∞} Φ(x) = A/a − 1  { < 0 if A < a (Figure 6);   ≥ 0 if A ≥ a (Figure 7) },

the lower bound of uT that supports two pulses is 0 if A < a, and the lower bound of uT that guarantees two pulses is A/a − 1 when A > a. The upper bound on threshold uT that supports two pulse solutions is the maximum of Φ(x). Solving

    dΦ/dx = 2A e^{−2ax} − 2e^{−2x} = 0

gives x = ln A/(2(a − 1)). Thus Φ reaches its maximum value

    Φ(x) = (A/a)(1 − e^{−a ln A/(a−1)}) − (1 − e^{−ln A/(a−1)}) = ∫_0^{ln A/(a−1)} w(x) dx,    (3.31)

proving the proposition.

Figure 5. Large single pulse l and small single pulse s for A = 2.8, a = 2.6, α = 0, uT = 0.3. (Left) Single pulse l: x^l_T = 0.68633, height = u(0) = 0.79991. (Right) Single pulse s: x^s_T = 0.12985, height = u(0) = 0.37358.

Figure 6. Existence function Φ(x) when A < a with α = 0, A = 2.6, a = 3. lim_{x→∞} Φ(x) = A/a − 1 = −0.1333. Φ(x) gives the range of thresholds uT that supports two single-pulse solutions. Example: At uT = 0.2, Φ(x) shows that we have a single-pulse solution l with width x^l_T; the second single-pulse solution s is narrower and has width x^s_T.

Figure 7. Existence function Φ(x) when A > a. α = 0, A = 2.8, a = 2.6, lim_{x→∞} Φ(x) = A/a − 1 = 0.07692. Example: At uT = 0.3, Φ(x) shows that there is a wide single-pulse solution l with width x^l_T = 0.68633 and a narrower single-pulse solution s with width x^s_T = 0.12985. P is the transition point where single pulse l changes into a dimple pulse d. At the transition, u^P_T = 0.15672 and x^P_T = 1.24379.

Figure 8. Dimple pulse d for A = 2.8, a = 2.6, α = 0, with width x^d_T = 1.8832.
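The Amari-case construction can be reproduced in a few lines. The sketch below is illustrative only (it assumes NumPy and SciPy and is not the authors' code): it solves Φ(xT) = uT for the two widths of Figure 5 and rebuilds the pulse profile from the coefficients C, D, E, F given above.

    import numpy as np
    from scipy.optimize import brentq

    A, a, uT = 2.8, 2.6, 0.3                       # parameters of Figure 5

    def Phi(x):
        # existence function (3.30)
        return (A / a) * (1 - np.exp(-2 * a * x)) - (1 - np.exp(-2 * x))

    # Phi increases to its maximum at x = ln(A)/(2(a-1)) and then decays to A/a - 1,
    # so the two roots of Phi(x) = uT can be bracketed on either side of the maximum.
    x_peak = np.log(A) / (2 * (a - 1))
    xT_small = brentq(lambda x: Phi(x) - uT, 1e-6, x_peak)
    xT_large = brentq(lambda x: Phi(x) - uT, x_peak, 50.0)

    def pulse(x, xT):
        # piece together u_I on (-xT, xT) and u_II/u_III outside, using the Amari-case coefficients
        C, D = -(A / a) * np.exp(-a * xT), np.exp(-xT)
        E, F = (A / a) * (np.exp(a * xT) - np.exp(-a * xT)), -(np.exp(xT) - np.exp(-xT))
        inner = C * (np.exp(a * x) + np.exp(-a * x)) + D * (np.exp(x) + np.exp(-x)) + 2 * (A - a) / a
        outer = E * np.exp(-a * np.abs(x)) + F * np.exp(-np.abs(x))
        return np.where(np.abs(x) < xT, inner, outer)

    print(xT_small, xT_large)                      # approx 0.1299 and 0.6863, as in Figure 5
    print(pulse(np.array([0.0]), xT_large))        # pulse height, approx 0.7999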

Proposition 3.3 does not distinguish between convex single pulses and dimple pulses, which are in the family of single-pulse solutions. An example of a dimple pulse, which usually exists for small threshold, is shown in Figure 8. In Figure 7, as uT is lowered, P is the transition point where u′′ = 0 and the convex single pulse l transforms into the dimple pulse d. The small single pulse s always remains a single pulse. The transition point P is identified by following u′′(0) as a function of uT using the continuation program AUTO. In the accompanying paper we compute the stability of these solutions. In agreement with Amari [3] we find that the large pulse is stable and the small pulse is unstable. Additionally, we find that dimple-pulse solutions can also be stable.

3.3. Solutions for the general case. For the general case of α ≠ 0, the complex eigenvalues ω1, −ω1, ω2, −ω2 given by (3.17) and (3.18) will change form for different parameter values. The transition points for the eigenvalues are given by the relative signs of the functions R (3.19), S (3.20), and ∆ (3.21). There are three cases: both eigenvalues ω1,2 are real, both are complex, or both are imaginary.

We consider the transitions when α is changed for fixed a and A. We find that there are five critical points where the eigenvalue structure changes. At α = α0 ≡ a/(2(A − a)), R − S = 0, with R > 0 and ∆ > 0. The solutions of the quadratic equation ∆(α) = 0 give α1 and α3. At α2, R = 0. At α4 = a/(2(A − a)), R + S = 0 with R < 0 and ∆ > 0. We arrange αi (i = 0, 1, 2, 3, 4) in increasing order. ω1 and ω2 are complex conjugates for both α ∈ (α1, α2) and α ∈ (α2, α3). In our analysis, we consider only the case where α > 0 (i.e., the firing rate is increasing with input). The case of α = 0 with the general connection weight function was fully treated in [3] and is reevaluated in section 3.2. Tables 1, 2, and 3 enumerate all the possible forms of ω1 and ω2.

Table 1. Eigenvalue chart when A > a.
  E1 (α ∈ (α4, ∞)):   ∆ > 0, R < 0 < S, |R| < S;   ω1 real, ω2 imaginary.
  E2 (α ∈ (−∞, α1)):  ∆ > 0, 0 < S < R;            ω1, ω2 real.
  E3 (α ∈ (α3, α4)):  ∆ > 0, R < 0, S < |R|;       ω1, ω2 imaginary.
  E4 (α = α1, α3):    ∆ = 0;                       ω1 = ω2.
  E5 (α ∈ (α1, α3)):  ∆ < 0;                       ω1 = ω2*, ω2 = ω1*, complex.
  E6 (α = α4):        ∆ > 0, R < 0, |R| = S;       ω1 = √(2R), ω2 = 0.

Table 2. Eigenvalue chart when A < a.
  E1 (α ∈ (−∞, α0)):  ∆ > 0, 0 < R < S;            ω1 real, ω2 imaginary.
  E2 (α ∈ (α0, α1)):  ∆ > 0, 0 < S < R;            ω1, ω2 real.
  E3 (α ∈ (α3, ∞)):   ∆ > 0, R < 0, S < |R|;       ω1, ω2 imaginary.
  E4 (α = α1, α3):    ∆ = 0;                       ω1 = ω2.
  E5 (α ∈ (α1, α3)):  ∆ < 0;                       ω1, ω2 complex conjugates.
  E6 (α = α0):        ∆ > 0, 0 < R = S;            ω1 = √(2R), ω2 = 0.

Table 3. Eigenvalue chart when A = a.
  E2 (α ∈ (−∞, α1)):  ∆ > 0, 0 < S < R;            ω1, ω2 real.
  E3 (α ∈ (α3, ∞)):   ∆ > 0, R < 0, S < |R|;       ω1, ω2 imaginary.
  E4 (α = α1, α3):    ∆ = 0;                       ω1 = ω2.
  E5 (α ∈ (α1, α3)):  ∆ < 0;                       ω1 = ω2*, ω2 = ω1*, complex.
  Columns E1 and E6 are empty (∅), since α0 and α4 do not exist when A = a.

Although both α0 and α4 have the same expression, they do not coexist. When A < a, α0 = a/(2(A − a)) < 0, and when A > a, α4 = a/(2(A − a)) > 0. For all values of ω1 and ω2, uII(x) and uIII(x) always have the form (3.14) and (3.15), respectively.


3.4. Solutions for real eigenvalues.

3.4.1. Construction of single-pulse solutions. For α ∈ (0, α1), both ∆ and R are positive, so ω1 and ω2 are both real. Hence uI(x) and uII(x) have the following form:

    uI(x) = C(e^{ω1 x} + e^{−ω1 x}) + D(e^{ω2 x} + e^{−ω2 x}) + 2(A − a)(β − αuT)/(a − 2α(A − a)),    (3.32)
    uII(x) = E e^{−ax} + F e^{−x}.    (3.33)

When eigenvalues ω1 and ω2 are real, C and D must also be real for real uI(x). Applying the matching conditions (3.9)–(3.13) to (3.32) and (3.33) yields

    E e^{−a xT} + F e^{−xT} = uT,    (3.34)
    C(e^{ω1 xT} + e^{−ω1 xT}) + D(e^{ω2 xT} + e^{−ω2 xT}) + U0 = uT,    (3.35)
    ω1 C(e^{ω1 xT} − e^{−ω1 xT}) + ω2 D(e^{ω2 xT} − e^{−ω2 xT}) = −aE e^{−a xT} − F e^{−xT},    (3.36)
    ω1² C(e^{ω1 xT} + e^{−ω1 xT}) + ω2² D(e^{ω2 xT} + e^{−ω2 xT}) = a²E e^{−a xT} + F e^{−xT} − 2(aA − 1)β,    (3.37)
    ω1³ C(e^{ω1 xT} − e^{−ω1 xT}) + ω2³ D(e^{ω2 xT} − e^{−ω2 xT}) = (−a³ + 2aα(aA − 1))E e^{−a xT} + (−1 + 2α(aA − 1))F e^{−xT}.    (3.38)

System (3.34)–(3.38) can be solved for the five unknowns C, D, E, F, and xT using Mathematica [78], giving an explicit formula for uI(x) and uII(x). The single pulse is then given by uI(x) on (−xT, xT), uII(x) on (xT, ∞), and uIII(x) on (−∞, −xT). For the parameter set (a, A, α, β, uT) = (2.6, 2.8, 0.15, 1, 0.400273), the solution is (C, D, E, F, xT) = (−0.8532, 1.16865, 2.94108, −0.89571, 0.41902). Figure 9 shows a graph of this single pulse. The height uI(0) of the pulse is 0.77892. Its width is x^l_T = 0.41902. There also exists a second smaller and narrower single-pulse solution to (3.34)–(3.38) for the same set of parameters (see Figure 10). The height and the width of this pulse are uI(0) = 0.6123 and x^s_T = 0.2582, respectively.
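The same solution can be recovered with a standard nonlinear solver instead of Mathematica. The following sketch is an illustration under the stated parameter values (it assumes SciPy and is not the authors' code): it sets up the residuals of (3.34)–(3.38) and seeds the solver with the values reported above.

    import numpy as np
    from scipy.optimize import fsolve

    a, A, alpha, beta, uT = 2.6, 2.8, 0.15, 1.0, 0.400273

    R = (a**2 + 1 - 2 * alpha * (a * A - 1)) / 2
    S = np.sqrt((a**2 + 1 - 2 * alpha * (a * A - 1))**2 - 4 * (a**2 - 2 * a * alpha * (A - a))) / 2
    w1, w2 = np.sqrt(R + S), np.sqrt(R - S)        # both real for alpha in (0, alpha1)
    U0 = 2 * (A - a) * (beta - alpha * uT) / (a - 2 * alpha * (A - a))

    def matching(v):
        C, D, E, F, xT = v
        cp1, cm1 = np.exp(w1 * xT) + np.exp(-w1 * xT), np.exp(w1 * xT) - np.exp(-w1 * xT)
        cp2, cm2 = np.exp(w2 * xT) + np.exp(-w2 * xT), np.exp(w2 * xT) - np.exp(-w2 * xT)
        Ee, Fe = E * np.exp(-a * xT), F * np.exp(-xT)
        return [Ee + Fe - uT,                                                    # (3.34)
                C * cp1 + D * cp2 + U0 - uT,                                     # (3.35)
                w1 * C * cm1 + w2 * D * cm2 + a * Ee + Fe,                       # (3.36)
                w1**2 * C * cp1 + w2**2 * D * cp2 - a**2 * Ee - Fe + 2 * (a * A - 1) * beta,   # (3.37)
                w1**3 * C * cm1 + w2**3 * D * cm2
                  - (-a**3 + 2 * a * alpha * (a * A - 1)) * Ee
                  - (-1 + 2 * alpha * (a * A - 1)) * Fe]                         # (3.38)

    guess = [-0.85, 1.17, 2.94, -0.90, 0.42]       # near the large pulse l reported in the text
    print(fsolve(matching, guess))                 # approx (-0.8532, 1.1687, 2.9411, -0.8957, 0.4190)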

When ∆ = 0 and R > 0 (α = α1), there are repeating real eigenvalues ω1, ω1, −ω1, −ω1, where ω1 = √R. By the symmetry of uI(x), C1 = D1 and C2 = D2, so uI(x) from (3.22) can be written as

    uI(x) = C(e^{ω1 x} + e^{−ω1 x}) + Dx(e^{ω1 x} − e^{−ω1 x}) + U0.    (3.39)

Applying matching conditions (3.9)–(3.13) gives a system similar to (3.34)–(3.38), whose solutions can be found numerically.

Figure 9. Construction of large single pulse l. A = 2.8, a = 2.6, α = 0.15, uT = 0.3. x^l_T = 0.41092, height = u(0) = 0.77892.

Figure 10. Large single pulse l and small single pulse s. A = 2.8, a = 2.6, α = 0.15, uT = 0.3. (Left) Single pulse l: x^l_T = 0.41092, height = u(0) = 0.77892. (Right) Single pulse s: x^s_T = 0.2582, height = u(0) = 0.6123.

3.4.2. Finding solutions for real eigenvalues using the existence function. In order to perform numerical continuation on the single-pulse solutions, it is more convenient to utilize the existence function Φ(x) introduced by Amari [3] and calculated in section 3.2. We compute it for the general case by first eliminating the threshold uT from system (3.34)–(3.38) to get

an equivalent four-equation system

    C(e^{ω1 xT} + e^{−ω1 xT}) + D(e^{ω2 xT} + e^{−ω2 xT}) + U0 = [a/(a − 2α(A − a))](E e^{−a xT} + F e^{−xT}),    (3.40)
    ω1 C(e^{ω1 xT} − e^{−ω1 xT}) + ω2 D(e^{ω2 xT} − e^{−ω2 xT}) = −aE e^{−a xT} − F e^{−xT},    (3.41)
    ω1² C(e^{ω1 xT} + e^{−ω1 xT}) + ω2² D(e^{ω2 xT} + e^{−ω2 xT}) = a²E e^{−a xT} + F e^{−xT} − 2(aA − 1)β,    (3.42)
    ω1³ C(e^{ω1 xT} − e^{−ω1 xT}) + ω2³ D(e^{ω2 xT} − e^{−ω2 xT}) = (−a³ + 2aα(aA − 1))E e^{−a xT} + (−1 + 2α(aA − 1))F e^{−xT}.    (3.43)

Equations (3.40)–(3.43) form a linear system in C, D, E, and F. To obtain an existence function Φ(x), we construct coefficient vectors

    m1 = (e^{ω1 xT} + e^{−ω1 xT}, ω1(e^{ω1 xT} − e^{−ω1 xT}), ω1²(e^{ω1 xT} + e^{−ω1 xT}), ω1³(e^{ω1 xT} − e^{−ω1 xT}))ᵀ,
    m2 = (e^{ω2 xT} + e^{−ω2 xT}, ω2(e^{ω2 xT} − e^{−ω2 xT}), ω2²(e^{ω2 xT} + e^{−ω2 xT}), ω2³(e^{ω2 xT} − e^{−ω2 xT}))ᵀ,
    m3 = (a/(a − 2α(A − a)), a, −a², a³ − 2aα(aA − 1))ᵀ,
    m4 = (a/(a − 2α(A − a)), 1, −1, 1 − 2α(aA − 1))ᵀ,
    m0 = ((A − a)β/(a − 2α(A − a)), 0, −2(aA − 1)β, 0)ᵀ.

Let DET_{xT}(α) = |m1 m2 m3 m4|, where |·| is the determinant. For parameters (a, A, α, β, uT) = (2.6, 2.8, 0.15, 1, 0.400273), the solution (C, D, E, F, xT) = (−0.8532, 1.16865, 2.94108, −0.89571, 0.41902) with DET_{xT}(α) = −243.2415568475 is given by Mathematica. We use this solution as an initial guess to continue system (3.34)–(3.38) using AUTO while following DET_{xT}(α) as α decreases to 0 and then increases to α1. The recorded value of DET_{xT} shows that DET_{xT}(α) ≠ 0 for α < α1. Therefore, we can always solve the linear system (3.40)–(3.43) by Cramer's rule. The solutions for E and F given by

    E = |m1 m2 m0 m4| / (|m1 m2 m3 m4| e^{−a xT}),
    F = |m1 m2 m3 m0| / (|m1 m2 m3 m4| e^{−xT})

are then substituted back into uII(x) = E(x)e^{−ax} + F(x)e^{−x} to obtain the existence function

    Φ(x) = E e^{−ax} + F e^{−x} = |m1 m2 m0 (m3 − m4)| / |m1 m2 m3 m4|.    (3.44)

Figure 11 shows Φ(x) (3.44). Φ(x) approaches a limit as x → ∞, but this limit is different from the limit of Φ(x) in the Amari case (α = 0).

Single pulses of width xT are given by solutions of Φ(xT) = uT. Since all four eigenvalues are real, there are no oscillations in Φ(x) and so there are at most two pulse solutions. One is a small single pulse, and the other is either a large single pulse or a dimple pulse depending on the threshold uT.
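Formula (3.44) can be evaluated directly by assembling the column vectors and taking 4×4 determinants. The sketch below (assuming NumPy) transcribes the definitions of m0 through m4 exactly as printed above; the sign conventions have not been cross-checked against the authors' Mathematica/AUTO computation, so it should be read as an illustration of the Cramer's-rule construction rather than a verified reproduction of Figure 11.

    import numpy as np

    a, A, alpha, beta = 2.6, 2.8, 0.15, 1.0

    R = (a**2 + 1 - 2 * alpha * (a * A - 1)) / 2
    S = np.sqrt((a**2 + 1 - 2 * alpha * (a * A - 1))**2 - 4 * (a**2 - 2 * a * alpha * (A - a))) / 2
    w1, w2 = np.sqrt(R + S), np.sqrt(R - S)

    def Phi(x):
        def m(w):
            # columns m1 (w = w1) and m2 (w = w2) evaluated at xT = x
            return np.array([np.exp(w * x) + np.exp(-w * x),
                             w * (np.exp(w * x) - np.exp(-w * x)),
                             w**2 * (np.exp(w * x) + np.exp(-w * x)),
                             w**3 * (np.exp(w * x) - np.exp(-w * x))])
        m1, m2 = m(w1), m(w2)
        m3 = np.array([a / (a - 2 * alpha * (A - a)), a, -a**2, a**3 - 2 * a * alpha * (a * A - 1)])
        m4 = np.array([a / (a - 2 * alpha * (A - a)), 1.0, -1.0, 1.0 - 2 * alpha * (a * A - 1)])
        m0 = np.array([(A - a) * beta / (a - 2 * alpha * (A - a)), 0.0, -2 * (a * A - 1) * beta, 0.0])
        num = np.linalg.det(np.column_stack([m1, m2, m0, m3 - m4]))
        den = np.linalg.det(np.column_stack([m1, m2, m3, m4]))
        return num / den

    # pulse widths are the solutions of Phi(xT) = uT
    for x in (0.25, 0.41, 1.0, 2.0):
        print(x, Phi(x))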

Figure 11. Existence function Φ(x) for α = 0.15, A = 2.8, a = 2.6. At uT = 0.400273, Φ(x) has a single pulse l which has width x^l_T = 0.4109; the second single pulse s is narrower with width x^s_T = 0.2582. At P, threshold u^P_T = 0.1489, and u′′(0) of the pulse at P is 0.

Figure 12. Example of P-pulse with a = 2.6, A = 2.8, α = 0.15, u^P_T = 0.14838. The width of this pulse is x^P_T = 1.27978 and u′′(0) = 0.

3.4.3. Transition point P between single pulses and dimple pulses. The existence function Φ(x) gives a range of thresholds uT for which there exist two pulse solutions, a large pulse l (or dimple pulse d) and a small pulse s, or only one small single-pulse solution. The x-value of the intersection of uT and Φ(x) is the width of a pulse. In Figure 11, x^s_T is the width of s, and x^l_T is the width of l. At P, the curvature at the peak of the pulse solution is zero (i.e., u′′(0) = 0) as seen in Figure 12. For this set of parameters, dimple pulses appear if uT is between u^P_T = 0.1489 and lim_{x→∞} Φ(x) (see Figure 11). Figure 13 shows the continuation plot of u′′(0); u′′(0) crosses zero at uT = u^P_T.

3.4.4. Loss of existence for unbalanced synaptic connectivity. An examination of the shape of Φ(x) shows why single pulses do not exist when excitation and inhibition are too much out of balance. When excitation dominates, A/a > 1. For fixed a, α, and threshold uT, as A becomes larger, the existence function Φ(x) moves up and lim_{x→∞} Φ(x) becomes larger. The width of the large pulse l (or dimple pulse) increases (Figure 14). When Φ(x) is tangent to uT for large x (black curve in Figure 14), the width becomes ∞ and the pulse no longer exists. The pulse l or d can be regained by increasing uT. However, for large enough A/a, Φ(x) will become monotonic (see Figure 15) and only s exists.

When A/a < 1, inhibition dominates excitation in the network. For fixed a and α, Φ(x) diminishes as A is decreased (ratio A/a becomes smaller). Eventually, the ratio is small enough to make Φ(x) negative (see Figure 16), and for uT > 0, single pulses no longer exist. Inputs to the neurons in the network never exceed threshold so the neurons cannot fire.

Figure 13. Plot of u′′(0) when α = 0.15, A = 2.8, a = 2.6. P is the transition point between single pulse l and dimple pulse d. When threshold u^P_T = 0.1489, u′′(0) = 0.

Figure 14. Φ(x) with excitation dominating inhibition. a = 2.6, α = 0.05, uT = 0.400273, and there are different values of A: A = 3 (cyan); A = 3.5 (blue); A = 3.62 (black); A = 3.7 (red); A = 4.5 (green). When Φ(x) is tangential to threshold uT for large x, the width of the large pulse is ∞.

3.5. Solutions for complex eigenvalues. As seen in Tables 1, 2, and 3, the eigenvalues ω1 and ω2 form a complex conjugate pair for α1 < α < α3. Thus as long as α1 < α3, complex eigenvalues can be found for arbitrary a and A. Suppose ω1 = ω2* = p + iq. Then p = (R² + S²)^{1/4} cos θ and q = (R² + S²)^{1/4} sin θ, where θ = (1/2) arctan(√|∆|/(2R)) for α ∈ (α1, α2) or θ = π/2 + (1/2) arctan(√|∆|/(2R)) for α ∈ (α2, α3). When α > α3, ∆ > 0, R < 0, and S = √∆/2 < |R|. The real parts of ω1 and ω2 are both zero and ω1 = iq1, ω2 = iq2, where q1 = √|R + S| and q2 = √|R − S|.

Figure 15. Φ(x) with excitation dominating inhibition: A = 29.6, a = 2.6, α = 0.03. There exists only pulse s; pulse l or d do not exist.

Figure 16. Φ(x) with inhibition dominating excitation. a = 2.6, α = 0.05, uT = 0.400273, and there are different values of A: A = 2.5 (cyan); A = 2 (blue); A = 1.6 (red); A = 1.05 (black). There is neither pulse s nor pulse l for positive uT when A/a is small enough (black).

3.5.1. Construction of a single pulse with complex eigenvalues. To ensure that uI(x) is real, C and D must be complex. Imposing symmetry gives C = D*. Setting C = CR + iCI implies D = CR − iCI. Substituting C, D, ω1, and ω2 into (3.22) for uI(x) gives

    uI(x) = 4CR cos(qx) cosh(px) − 4CI sin(qx) sinh(px) + U0.

For simplicity, we relabel with C = 4CR and D = −4CI. When ω1 and ω2 are both imaginary, we have

    uI(x) = C cos(q1 x) + D cos(q2 x) + U0.

Applying the matching conditions (3.9)–(3.13) results in five algebraic equations with unknowns C, D, E, F, and xT, which can be solved numerically to obtain the explicit form of uI(x). The plots of pulses l and s are shown in Figure 17.

When α = α3, R < 0, implying ω1 = ω2 = √R = i√(−R). Let ω = √(−R) ∈ R. Then

    uI(x) = C1 cos ωx + C2 sin ωx + D1 x cos ωx + D2 x sin ωx + U0.

Since uI(x) = uI(−x), then C2 = D1 = 0, leaving

    uI(x) = C cos ωx + D x sin ωx + U0.

Figure 17. Two single pulses with A = 2.8, a = 2.6, α = 0.6178, uT = 0.3. (Left) Single pulse l with x^l_T = 0.58384 and u(0) = 1.0901. (Right) Single pulse s with x^s_T = 0.21317 and u(0) = 0.5744.

Figure 18. Existence function Φ(x). α = 0.6178, A = 2.8, a = 2.6. At uT = 0.400273, Φ(x) shows that there is a single pulse l which is wider and has width x^l_T = 0.58385; the second single pulse s is narrower and has width x^s_T = 0.21317. As we increase uT to the maximum of Φ(x), pulses s and l annihilate in a saddle-node bifurcation. At P, u^P_T = 0.0767, x^P_T = 1.454, and u′′(0) = 0. At both P1 and P2, uT = 0.063, u′′(0) > 0, and the widths are 1.6 and 1.9, respectively. See Figure 19.

3.5.2. Finding solutions for complex eigenvalues using the existence function. The existence function Φ(x) with complex ω1 and ω2 (Figures 18 and 20) can be obtained using methods similar to those in section 3.4.2. The main difference from the real case is that Φ(x) for complex eigenvalues can oscillate as seen in Figure 18. After the first local minimum between P1 and P2, Φ(x) will approach a constant with decaying oscillations for increasing x. Additionally, if the threshold is between the first local minimum and the next local maximum, there exist more than two pulse solutions. Figure 19 shows an example where there are a small single pulse and two dimple pulses. There are never more than two coexisting pulses for real ω1 and ω2, which includes the Amari case, because the existence function Φ(x) does not oscillate. It is important to note that satisfying the existence condition Φ(x) = uT is a necessary but not a sufficient condition for a pulse solution. It is possible to satisfy the matching conditions and not be a pulse. Thus although in principle an infinite number of pulses could exist, in our experience we find that most of the larger x solutions are not pulses. As will be shown in the accompanying paper [35], pulse s is unstable and pulse l is stable. If there are three pulses, the largest third pulse, which can be either a single pulse or a dimple pulse, is unstable.

Oscillations in Φ(x) also exist when the eigenvalues are purely imaginary. As before, more than two pulses, including dimple pulses, can coexist (see Figure 20) depending on the threshold uT. Figure 21 shows a special case of a dimple pulse where the dimple minimum reaches the threshold. If the minimum drops below the threshold, the dimple pulse breaks into two disjointed single pulses or a double pulse. This double pulse is not a valid solution because it violates the assumptions of the equations from which the solution was derived. However, double pulses can exist, and we show this using a separate formalism in section 5.

Figure 19. Dimple pulses with parameters A = 2.8, a = 2.6, α = 0.6178, uT = 0.063. (Left) Dimple pulse at P1 with x^d_T = 1.6. (Right) Dimple pulse at P2 with x^d_T = 1.9.

Figure 20. Existence function Φ(x) with imaginary ω1 = ω2 for A = 2.8, a = 2.6, α = α3, u^P_T = 0.0967003. The empty circles are small single pulses. The solid circles are large single pulses. The triangle is a dimple pulse. Point P is where the dimple pulse (Figure 21) breaks into a double pulse. The crosses are not valid solutions.

Figure 21. The transition from a dimple pulse to a double pulse at P with u^P_T = 0.0967003, and u(0) = 0.0967003.

3.5.3. Blow-up for large α. For large enough α, the large pulse l blows up at a critical value α0 and does not exist for α ≥ α0. The blow-up occurs in the regime where both ω1 and ω2 are imaginary. Thus the height of pulse l is

    u(0) = C + D + 2(A − a)(β − αuT)/(a − 2α(A − a)),    (3.45)

which can be expressed as

    u(0) = |(m1 − m2) m0 m3 m4| / |m1 m2 m3 m4|,    (3.46)

where the coefficient factors m1, m2, m3, m4 for C, D, E, F, and m0 are defined in section 3.4.2. The blow-up occurs because the denominator of (3.46) goes to zero at α = α0 while the numerator remains finite, sending the height u(0) of the large pulse to infinity.

We can also see the loss of l in the existence function:

    Φ(x) = E e^{−ax} + F e^{−x} = |m1 m2 m0 (m3 − m4)| / |m1 m2 m3 m4|.

Figure 22 shows that when α ≥ α0, there is always the small single pulse s but no large single pulse l. A third solution (the third intersection of uT and Φ(x)) could also exist, but we have not examined this solution. As α increases, the height of l becomes very large, but the width of the large pulse remains finite. This can be observed both from the existence function Φ(x) and the continuation plot (Figure 24) in section 4.

Figure 22. Existence function Φ for imaginary ω1,2 with A = 2.8, a = 2.6, uT = 0.400273. (Left) α = 1.4. There is a single pulse l, and x^l_T = 0.8491539857774331, height = u(0) = 146.2227855915919, which is big because α = 1.4 is close to α0 where DET = 0. (Right) α = 1.41 > α0. Single pulse l no longer exists. The vertical line in both pictures is where Φ(x) blows up.

4. Continuation in parameter space. One of our original goals was to understand how the shape of stationary pulses and their corresponding firing rates change as the parameters of synaptic connectivity and gain are changed. Here we give a global picture in the parameter space of uT, a, A, and α. A difficulty in this undertaking is that as the parameters are altered, the eigenvalue structure will make abrupt transitions. Hence, one must keep track of the eigenvalues and switch the form of the solutions when appropriate to construct a global picture.

As we saw before, the small and large pulses arise out of a saddle-node bifurcation [22, 25]. This gives a minimal condition for when pulses can exist. In Figures 23 and 24, we show the large pulse l and the small pulse s arising from a saddle-node bifurcation as α is increased for fixed a and A. We have set the threshold to

    u⁰_T = ∫_0^{ln A/(a−1)} w(x) dx

so that the saddle-node is exactly at α = 0. Note that at the saddle-node bifurcation, the pulse arises with nonzero height and width.

We can now track the location of the saddle-node and the maximum firing rate of the pulse at the saddle-node in parameter space. We reduce the four dimensional parameter space by projecting to the space (α, a/A, uT). The saddle-node location can be found by setting the threshold uT to the value of the first local maximum of Φ(x). This gives an upper bound for allowable thresholds of the gain function to support a pulse solution. As long as uT is below this upper bound and positive, a single pulse can exist.

We first set A = 1.5 and vary the ratio a/A and α to identify the saddle-node threshold uT. We then calculate the maximum firing rate fmax of the single-pulse solution at this threshold, which creates a two dimensional surface in the space of (a/A, α, uT). We increment A in steps of 1 and create a set of surfaces. The surface plots of uT and fmax versus a/A and α are shown in Figures 25 and 26. Single-pulse solutions exist below a given surface (with uT > 0). Depending on the parameters, solutions could include one single pulse s, a coexistence of single pulses s and l (or a dimple pulse d but in a smaller global range), or coexistence of more than two pulses.

Figure 23. Width of single pulse l (upper branch) and s (lower branch) for a = 2.6, A = 2.8, and uT = 0.400273. For α ∈ [α∗, α0), there are two single pulses. For α ∈ [α0, ∞), there is only one single-pulse solution. At α = 0 there is a saddle-node bifurcation where the large single pulse l and the small single pulse s arise. At α0, the large single pulse l blows up.

Figure 24. Height of single pulse l (upper branch) and s (lower branch) for the same conditions as in Figure 23.

When excitation dominates inhibition (i.e., a/A is small) the single-pulse solution can blow up as mentioned in section 3.5.3. We note that the crucial parameter for maintaining low firing rates is for inhibition to dominate excitation (i.e., a/A to be large). Even for the balanced case of a = A, for gain slope α beyond unity, the firing rate rises dramatically. This is in correspondence with observations of numerical simulations [72, 73].

Figure 25. Surface plot of saddle-node point in parameter space of (a/A, α, uT). The separate leaves correspond to values of A ranging from 1.5 to 10.5.

Figure 26. Maximum firing rate of the single pulse at the saddle-node for the same conditions as in Figure 25.

5. Construction of double-pulse solutions. The neural network equation (1.1) can also support double-pulse or even multiple-pulse solutions [34, 44]. Double pulses are solutions that have two disjoint open and finite intervals for which the synaptic input u(x) is above threshold.

Definition 5.1. Double-pulse solution: A solution u(x) of (2.4) is called a double pulse or a 2-pulse if there are x1 > 0 and x2 > 0 such that

    u(x) > uT if x ∈ (x1, x2) ∪ (−x2, −x1);   u(x) = uT if x = −x2, −x1, x1, x2;   u(x) < uT otherwise,

with (u, u′, u′′, u′′′) → (0, 0, 0, 0) exponentially fast as x → ±∞. u and u′ are bounded and continuous on R. u′′, u′′′, and u′′′′ are continuous everywhere for x ∈ R except x = ±x1,2 and bounded everywhere on R. u(x) is symmetric with u′′(0) > 0; u(0) is the minimum between −x1 and x1 (Figure 27).

The approach to finding and constructing double-pulse solutions is similar to that for single pulses. The connection function is (2.1) and the gain function is (2.2). Laing and Troy [44] found that double pulses can exist for the Amari case (α = 0). However, for the exponential connection function (2.1), the double pulses are unstable. Coombes, Lord, and Owen [17] found that double and higher number multiple-pulse solutions could exist in a network with a saturating sigmoidal gain function. As in the single-pulse case, a fourth order ODE on x ∈ (−∞, ∞) for double pulses can be derived:

    u′′′′ − (a² + 1)u′′ + a²u = 2a(A − a)f[u(x)] + 2(aA − 1){f[u(x2)]∆′2(x) − f[u(x1)]∆′1(x)}
        − 2(aA − 1){f′[u(x1)]u′(x1)∆1(x) + f′[u(x2)]u′(x2)∆2(x)} − 2(aA − 1) d²f[u(x)]/dx²,    (5.1)

where

    ∆1(x) = δ(x − x1) + δ(x + x1),
    ∆2(x) = δ(x − x2) + δ(x + x2),
    ∆′1(x) = δ′(x − x1) − δ′(x + x1),
    ∆′2(x) = δ′(x − x2) − δ′(x + x2).

Double pulses can be constructed using ODE (5.1) and the following set of matching conditions (5.2)–(5.11) at both x1 and x2:

    uI(x1) = uT,    (5.2)
    uII(x1) = uT,    (5.3)
    uII(x2) = uT,    (5.4)
    uIII(x2) = uT,    (5.5)
    u′I(x1) = u′II(x1),    (5.6)
    u′′I(x1) = u′′II(x1) + 2(aA − 1)f(u(x1)),    (5.7)
    u′′′I(x1) = u′′′II(x1) + 2(aA − 1)f′(u(x1))u′(x1),    (5.8)
    u′II(x2) = u′III(x2),    (5.9)
    u′′II(x2) = u′′III(x2) − 2(aA − 1)f(u(x2)),    (5.10)
    u′′′II(x2) = u′′′III(x2) − 2(aA − 1)f′(u(x2))u′(x2),    (5.11)

where x1 and x2 are the x-values where the double pulse crosses the threshold uT. uI(x) is the solution of ODE (5.1) on the middle region I = (−x1, x1), uII(x) is the solution of (5.1) on region II = (x1, x2), and uIII is the solution of (5.1) on region III = (x2, ∞) (see examples in Figures 27 and 28). The explicit forms for uI, uII, and uIII are given in (5.12)–(5.14):

    uI = C(e^{ax} + e^{−ax}) + D(e^{x} + e^{−x}),    (5.12)
    uII = E1 e^{ω1 x} + E2 e^{−ω1 x} + F1 e^{ω2 x} + F2 e^{−ω2 x} + 2(A − a)(β − αuT)/(a − 2α(A − a)),    (5.13)
    uIII = G e^{−ax} + H e^{−x}.    (5.14)

Here ω1 and ω2 are the eigenvalues of the linear ODE reduced from (5.1) on region II. All the constants x1, x2, C, D, E1, E2, F1, F2, G, and H can be found using the matching conditions (5.2)–(5.11).

For the general α > 0 case, we have found two coexisting double pulses as shown inFigure 27. We have not investigated how the double pulses vary in the global regime ofconnection weights and the gain. The coexistence of more than two double pulses for fixed a,A, and α (> 0) remains an open problem as well.

In the Amari case (α = 0), for the same set of values of a, A, uT we find that there are atmost two coexisting double-pulse solutions. The existence conditions are

    f1(x1, x2) = u(x1) = ∫_{−x2}^{−x1} w(x1 − y) dy + ∫_{x1}^{x2} w(x1 − y) dy = uT,    (5.15)

    f2(x1, x2) = u(x2) = ∫_{−x2}^{−x1} w(x2 − y) dy + ∫_{x1}^{x2} w(x2 − y) dy = uT.    (5.16)

Both x3 = f1(x1, x2) and x3 = f2(x1, x2) form two-dimensional surfaces in the three-dimensional space (x1, x2, x3). The two surfaces intersect in a convex-up space curve. The intersections of this space curve with the plane x3 = uT give the widths of candidate double-pulse solutions. Since the surfaces do not oscillate in the Amari case, there are at most two intersection points. These two points give two double-pulse solutions. An example is shown in


Figure 27. Two coexisting double pulses. A = 2.8, a = 2.6, α = 0.98, uT = 0.26. For the large double pulse (blue), x1 = 0.19266, x2 = 1.38376. For the small double pulse (red), x1 = 0.50582, x2 = 0.752788.

Figure 28. Double pulse for the Amari case in which α = 0. A = 2.8, a = 2.6, α = 0, uT = 0.26. For the large double pulse (blue), x1 = 0.279525, x2 = 1.20521. For the small double pulse (black), x1 = 0.49626, x2 = 0.766206.

Figure 28. In particular, we have found two coexisting double pulses when a = A and α = 0. In the accompanying paper [35] we confirm the Laing and Troy finding [44] that the double pulses in the Amari case are unstable.
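As a concrete illustration of conditions (5.15)–(5.16), the short Python sketch below (our illustration, not part of the original analysis) evaluates the two threshold conditions in closed form for the exponential connection function w(x) = A e^{−a|x|} − e^{−|x|} and solves them numerically for candidate widths (x1, x2); with the Figure 28 parameters it recovers widths close to the quoted values. The helper name G for the antiderivative of w is ours, not the paper's.

    import numpy as np
    from scipy.optimize import fsolve

    A, a, uT = 2.8, 2.6, 0.26          # Amari-case parameters of Figure 28 (alpha = 0)

    def G(z):
        """Antiderivative G(z) = int_0^z w(s) ds for w(s) = A e^{-a s} - e^{-s}, z >= 0."""
        return (A / a) * (1.0 - np.exp(-a * z)) - (1.0 - np.exp(-z))

    def threshold_conditions(p):
        """f1(x1,x2) - uT and f2(x1,x2) - uT from (5.15)-(5.16), written via G."""
        x1, x2 = p
        u_x1 = G(x1 + x2) - G(2 * x1) + G(x2 - x1)   # u(x1)
        u_x2 = G(2 * x2) - G(x1 + x2) + G(x2 - x1)   # u(x2)
        return [u_x1 - uT, u_x2 - uT]

    for guess in [(0.3, 1.2), (0.5, 0.8)]:           # near the large and small double pulses
        x1, x2 = fsolve(threshold_conditions, guess)
        print(f"x1 = {x1:.6f}, x2 = {x2:.6f}")

The two initial guesses steer the solver toward the large and small double pulses, respectively; the reduction to G uses only the evenness of w and a change of variables in (5.15)–(5.16).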

6. Discussion. In this paper, we consider a population neural network model of the form (1.1) with a nonsaturating gain function of the form (1.2). We show the existence of stationary solutions that satisfy the equilibrium equation (2.4) by explicitly constructing single-pulse solutions for a specific synaptic connection function (2.1). The strategy was to convert the integral equation (2.4) into a fourth order ODE. This may seem like a roundabout approach given that a single-pulse solution u(x) satisfies the Fredholm integral equation (6.1) of the


second kind

    u(x) = μh(x) + α ∫_{−xT}^{xT} w(x − y) u(y) dy,    μ = 1 − αuT,    (6.1)

where h(x) = ∫_{−xT}^{xT} w(x − y) dy and xT is the pulse width. This can be solved with a Neumann

series. Additionally, if a, A, α, uT, and xT are fixed and satisfy certain conditions, it is not difficult to show the existence of a unique solution u(x) of (6.1) by a fixed point theorem. However, it is difficult to show with this approach that the solution is a single pulse. As we are interested in examining how parameters affect the precise shape of the single-pulse solutions, we need the explicit solution of (6.1). While it may be possible to obtain a closed-form expression by summing the Neumann series exactly, we feel that, given the discontinuities at the boundaries ±xT, it is simpler to map (6.1) to an ODE and solve that.
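For readers who prefer the integral-equation route, the following Python sketch (ours, not the paper's method) approximates the solution of (6.1) on a grid by fixed-point iteration, i.e., by summing the Neumann series numerically with the trapezoid rule. The parameter values and half-width are illustrative; the iteration contracts only when α times the kernel norm is below one, which should be checked for any particular parameter set.

    import numpy as np

    A, a, alpha, uT = 2.8, 2.4, 0.22, 0.400273   # illustrative parameters
    xT = 0.683035                                 # candidate pulse half-width

    def w(x):
        return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))

    n = 801
    x = np.linspace(-xT, xT, n)
    dx = x[1] - x[0]
    K = w(x[:, None] - x[None, :])                # kernel matrix w(x - y)
    h = np.trapz(K, dx=dx, axis=1)                # h(x) = int w(x - y) dy
    mu = 1.0 - alpha * uT

    u = np.zeros(n)                               # Neumann iteration: u <- mu*h + alpha*K u
    for _ in range(200):
        u_new = mu * h + alpha * np.trapz(K * u[None, :], dx=dx, axis=1)
        if np.max(np.abs(u_new - u)) < 1e-12:
            break
        u = u_new

    print("u(0) =", u[n // 2], "  u(xT) =", u[-1], " (compare with uT =", uT, ")")

Checking how close u(±xT) comes to uT is one way to test whether the assumed half-width xT is consistent with a genuine pulse.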

In the ODE approach, a proof of the existence of a single pulse of (2.4) becomes a proof of the existence of a homoclinic orbit of the ODE. Since the ODE has discontinuities across the threshold points, the ODE on the real line is reduced to three different linear ODEs on three regions separated by the threshold points. The matching conditions for the solutions of the ODEs across the threshold points must satisfy a system of five equations. From this system, we are able to construct different single-pulse solutions.

The eigenvalue structure of the linear ODEs is important for determining how many pulses exist. For real ω1 and ω2, there are at most two pulses: the small single pulse and the large one. Amari's case (α = 0) belongs to this regime. A large convex single pulse can transform into a dimple pulse depending on the threshold value (Figure 12). If the eigenvalues are complex, there can be a small single pulse and two large pulses with different widths. Depending on the threshold, these two large pulses can be dimple pulses (Figure 19). There also exists a transition point where a dimple pulse breaks into a double pulse.

There are three ways that the large pulse can disappear. First, for fixed gain α and threshold uT, if the excitation is too strong, i.e., the ratio A/a is large, the width of the large pulse grows and the pulse eventually ceases to exist. Second, with fixed excitation, i.e., fixed a and A, if the gain α is too large, the large pulse increases in height and blows up at a finite value of α. Third, with too little excitation or gain, the stable large pulse coalesces with the unstable small pulse and vanishes in a saddle-node bifurcation.

The pulses are a proposed mechanism for the persistent neuronal activity observed during working memory. Therefore, it is crucial to assess their stability. In the accompanying paper, we show that the large pulse is stable and the small pulse is unstable. We also show that dimple pulses, like large pulses, can be stable. We show that single pulses can exist for a wide variety of gain and connection functions. However, for single pulses to exist with low firing rates, we require that the gain not be too large and that inhibition dominate excitation. This suggests that the cortex could be dominated by inhibition.

Acknowledgments. We would like to thank G. Bard Ermentrout, Jonathan Rubin, Bjorn Sandstede, and William Troy for illuminating discussions.


REFERENCES

[1] C. D. Aliprantis, Problems in Real Analysis: A Workbook with Solutions, Academic Press, New York, 1999.
[2] C. D. Aliprantis and O. Burkinshaw, Principles of Real Analysis, Academic Press, New York, 1998.
[3] S. Amari, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybernet., 27 (1977), pp. 77–87.
[4] M. A. Arbib, ed., The Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, MA, 1995.
[5] K. E. Atkinson, Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press, Cambridge, UK, 1997.
[6] A. Baddeley, Working Memory, Oxford University Press, Oxford, UK, 1986.
[7] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky, Theory of orientation tuning in visual cortex, Proc. Natl. Acad. Sci. USA, 92 (1995), pp. 3844–3848.
[8] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory, Springer-Verlag, New York, 1999.
[9] W. E. Boyce and R. C. DiPrima, Introduction to Differential Equations, John Wiley and Sons, New York, 1970.
[10] M. Camperi and X.-J. Wang, A model of visuospatial working memory in prefrontal cortex: Recurrent network and cellular bistability, J. Comp. Neurosci., 5 (1998), pp. 383–405.
[11] A. R. Champneys and J. P. McKenna, On solitary waves of a piece-wise linear suspended beam model, Nonlinearity, 10 (1997), pp. 1763–1782.
[12] C. L. Cobly, J.-R. Duhamel, and M. E. Goldberg, Oculocentric spatial representation in parietal cortex, Cerebral Cortex, 5 (1995), pp. 470–481.
[13] A. Compte, C. Constantinidis, J. Tegnr, S. Raghavachari, M. V. Chafee, P. S. Goldman-Rakic, and X.-J. Wang, Temporally irregular mnemonic persistent activity in prefrontal neurons of monkeys during a delayed response task, J. Neurophys., 9 (2003), pp. 3441–3454.
[14] B. W. Connors, Intrinsic neuronal physiology and the functions, dysfunctions and development of neocortex, Progress in Brain Research, 102 (1994), pp. 195–203.
[15] C. Constantinidis, M. N. Franowicz, and P. S. Goldman-Rakic, Coding specificity in cortical microcircuits: A multiple-electrode analysis of primate prefrontal cortex, J. Neurosci., 21 (2001), pp. 3646–4655.
[16] C. Constantinidis and P. S. Goldman-Rakic, Correlated discharges among putative pyramidal neurons and interneurons in the primate prefrontal cortex, J. Neurophys., 88 (2002), pp. 3487–3497.
[17] S. Coombes, G. J. Lord, and M. R. Owen, Waves and bumps in neuronal networks with axo-dendritic synaptic interactions, Phys. D, 178 (2002), pp. 219–241.
[18] L. M. Delves and J. L. Mohamed, Computational Methods for Integral Equations, Cambridge University Press, Cambridge, UK, 1988.
[19] J. Deuchars, D. C. West, and A. M. Thomson, Relationships between morphology and physiology of pyramid-pyramid single axon connections in rate neocortex in vitro, J. Physiology, 478 (1994), pp. 423–435.
[20] D. G. Duffy, Green's Functions with Applications, Chapman and Hall/CRC, Boca Raton, FL, 2001.
[21] S. A. Ellias and S. Grossberg, Pattern formation, contrast control, and oscillations in the short-term memory of shunting on-center off-surround networks, Biol. Cybernet., 20 (1975), pp. 69–98.
[22] G. B. Ermentrout, XPPAUT, Simulation Software Tool.
[23] G. B. Ermentrout, Reduction of conductance-based models with slow synapses to neural nets, J. Math. Biol., 6 (1994), pp. 679–695.
[24] G. B. Ermentrout, Neural networks as spatio-temporal pattern-forming systems, Rep. Progr. Phys., 61 (1998), pp. 353–430.
[25] G. B. Ermentrout, Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students, Software Environ. Tools 14, SIAM, Philadelphia, 2002.
[26] S. Funahashi, G. J. Bruce, and P. R. Goldman-Rakic, Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex, J. Neurophys., 61 (1989), pp. 331–349.


[27] J. Fuster and G. Alexander, Neuron activity related to short-term memory, Science, 173 (1971), pp. 652–654.
[28] J. M. Fuster, Prefrontal Cortex: Anatomy, Physiology, and Neuropsychology of the Frontal Lobe, Lippincott-Raven, Philadelphia, 1997.
[29] F. Garvan, The Maple Book, Chapman and Hall, London, 2001.
[30] P. S. Goldman-Rakic, Cellular basis of working memory, Neuron, 14 (1995), pp. 477–485.
[31] C. D. Green, Integral Equation Methods, Nelson, London, 1969.
[32] D. H. Griffel, Applied Functional Analysis, Ellis Horwood, Chichester, UK, 1985.
[33] S. Grossberg and D. Levine, Some developmental and attentional biases in the contrast enhancement and short-term memory of recurrent neural networks, J. Theoret. Biol., 53 (1975), pp. 341–380.
[34] Y. Guo, Existence and Stability of Standing Pulses in Neural Networks, Ph.D. thesis, University of Pittsburgh, Pittsburgh, PA, 2003.
[35] Y. Guo and C. C. Chow, Existence and Stability of Standing Pulses in Neural Networks: II. Stability, SIAM J. Appl. Dyn. Syst., 4 (2005), pp. 249–281.
[36] B. S. Gutkin, C. R. Laing, C. L. Colby, C. C. Chow, and G. B. Ermentrout, Turning on and off with excitation: The role of spike-timing asynchrony and synchrony in sustained neural activity, J. Comp. Neurosci., 11 (2001), pp. 121–134.
[37] D. Hansel and H. Sompolinsky, Modeling feature selectivity in local cortical circuits, in Methods in Neuronal Modeling: From Synapse to Networks, 2nd ed., C. Koch and I. Segev, eds., MIT Press, Cambridge, MA, 1998, Chap. 13.
[38] E. Haskell and P. C. Bressloff, On the formation of persistent states in neuronal networks models of feature selectivity, J. Integ. Neurosci., 2 (2003), pp. 103–123.
[39] M. Kang, K. Shelley, and H. Sompolinsky, Mexican hats and pinwheels in visual cortex, Proc. Natl. Acad. Sci. USA, 100 (2003), pp. 2848–2853.
[40] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, New York, 1995.
[41] K. Kishimoto and S. Amari, Existence and stability of local excitations in homogeneous neural fields, J. Math. Biol., 7 (1979), pp. 303–318.
[42] Y. A. Kuznetsov, Elements of Applied Bifurcation Theory, Springer-Verlag, New York, 1998.
[43] C. R. Laing and C. C. Chow, Stationary bumps in networks of spiking neurons, Neural Comp., 13 (2001), pp. 1473–1493.
[44] C. R. Laing and W. C. Troy, Two-bump solutions of Amari type models of working memory, Phys. D, 178 (2003), pp. 190–218.
[45] C. R. Laing, W. C. Troy, B. Gutkin, and G. B. Ermentrout, Multiple bumps in a neuronal model of working memory, SIAM J. Appl. Math., 63 (2002), pp. 62–97.
[46] C. R. Laing and W. C. Troy, PDE methods for nonlocal models, SIAM J. Appl. Dyn. Syst., 2 (2003), pp. 487–516.
[47] P. E. Latham, B. G. Richmond, P. G. Nelson, and S. Nirenberg, Intrinsic dynamics in neuronal networks. I. Theory, J. Neurophysiol., 83 (2000), pp. 808–827.
[48] P. E. Latham, B. G. Richmond, S. Nirenberg, and P. G. Nelson, Intrinsic dynamics in neuronal networks. II. Experiment, J. Neurophysiol., 83 (2000), pp. 828–835.
[49] D. A. McCormick, Y.-S. Shu, A. Hasenstaub, M. Sanchez-Vives, M. Badoual, and T. Bal, Cellular and network mechanisms of rhythmic, recurrent activity in the cerebral cortex, Cerebral Cortex, 13 (2003), pp. 1219–1231.
[50] E. K. Miller, C. A. Erickson, and R. Desimone, Neural mechanisms of visual working memory in prefrontal cortex of the macaque, J. Neurosci., 16 (1996), pp. 5154–5167.
[51] N. Morrison, Introduction to Fourier Analysis, Wiley-Interscience, New York, 1994.
[52] J. D. Murray, Mathematical Biology, Springer-Verlag, New York, 2002.
[53] J. G. Nicholls, From Neuron to Brain: A Cellular Molecular Approach to the Function of the Nervous System, Sinauer Associates, Sunderland, MA, 1992.
[54] Y. Nishiura and M. Mimura, Layer oscillations in reaction-diffusion systems, SIAM J. Appl. Math., 49 (1989), pp. 481–514.
[55] E. Part-Enander, A. Sjoberg, B. Melin, and P. Isaaksson, The MATLAB Handbook, Addison–Wesley, Reading, MA, 1998.


[56] L. A. Peletier and W. C. Troy, Spatial Patterns: Higher Order Models in Physics and Mechanics, Birkhauser Boston, Boston, 2001.
[57] D. E. Pelinovsky and V. G. Yakhno, Generation of collective-activity structures in a homogeneous neuron-like medium. I. Bifurcation analysis of static structures, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 6 (1996), pp. 81–87, 89–100.
[58] D. J. Pinto and G. B. Ermentrout, Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses, SIAM J. Appl. Math., 62 (2001), pp. 206–225.
[59] D. J. Pinto and G. B. Ermentrout, Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses, SIAM J. Appl. Math., 62 (2001), pp. 226–243.
[60] A. D. Polianin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, FL, 1998.
[61] D. L. Powers, Boundary Value Problems, Harcourt Academic Press, New York, 1999.
[62] M. Rahman, Complex Variables and Transform Calculus, Computational Mechanics Publications, Boston, 1997.
[63] A. Renart, N. Brunel, and X.-J. Wang, Mean-field theory of recurrent cortical networks: From irregularly spiking neurons to working memory, in Computational Neuroscience: A Comprehensive Approach, J. Feng, ed., CRC Press, Boca Raton, FL, 2003.
[64] J. E. Rubin, D. Terman, and C. C. Chow, Localized bumps of activity sustained by inhibition in a two-layer thalamic network, J. Comp. Neurosci., 10 (2001), pp. 313–331.
[65] J. E. Rubin and W. C. Troy, Sustained spatial patterns of activity in neuronal populations without recurrent excitation, SIAM J. Appl. Math., 64 (2004), pp. 1609–1635.
[66] W. Rudin, Principles of Mathematical Analysis, McGraw–Hill, New York, 1976.
[67] E. Salinas and L. F. Abbott, A model of multiplicative neural responses in parietal cortex, Proc. Natl. Acad. Sci. USA, 93 (1996), pp. 11956–11961.
[68] M. Sanchez-Vives and D. McCormick, Cellular and network mechanisms of rhythmic, recurrent activity in the cerebral cortex, Nature Neuroscience, 3 (2000), pp. 1027–1034.
[69] S. H. Seung, How the brain keeps the eyes still, Proc. Natl. Acad. Sci. USA, 93 (1996), pp. 13339–13344.
[70] S. H. Strogatz, Nonlinear Dynamics and Chaos, Perseus Books, New York, 1994.
[71] A. M. Thomson and J. Deuchars, Temporal and spatial properties of local circuits in neocortex, Trends in Neurosci., 17 (1994), pp. 119–126.
[72] X.-J. Wang, Synaptic basis of cortical persistent activity: The importance of NMDA receptors to working memory, J. Neurosci., 19 (1999), pp. 9587–9603.
[73] X.-J. Wang, Synaptic reverberation underlying mnemonic persistent activity, Trends in Neurosci., 24 (2001), pp. 455–463.
[74] X.-J. Wang, C. Tegner, J. Constantinidis, and P. S. Goldman-Rakic, Division of labor among distinct subtypes of inhibitory neurons in a cortical microcircuit of working memory, Proc. Natl. Acad. Sci. USA, 101 (2004), pp. 1368–1373.
[75] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, New York, 1990.
[76] H. R. Wilson and J. D. Cowan, Excitatory and inhibitory interactions in localized populations of model neurons, Biophys. J., 12 (1973), pp. 1–24.
[77] H. R. Wilson and J. D. Cowan, A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue, Kybernetic, 13 (1973), pp. 55–80.
[78] S. Wolfram, The Mathematica Book, 4th ed., Cambridge University Press, Cambridge, UK, 1999.
[79] E. Zeidler, Nonlinear Functional Analysis and Its Applications I: Fixed-Point Theorems, Springer-Verlag, New York, 1986.


SIAM J. Applied Dynamical Systems, Vol. 4, No. 2, pp. 249–281. © 2005 Society for Industrial and Applied Mathematics.

Existence and Stability of Standing Pulses in Neural Networks: II. Stability∗

Yixin Guo† and Carson C. Chow‡

Abstract. We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected by lateral inhibition with a nonsaturating nonlinear gain function. When two standing single-pulse solutions coexist, the small pulse is unstable, and the large pulse is stable. The large single pulse is bistable with the "all-off" state. This bistable localized activity may have strong implications for the mechanism underlying working memory. We show that dimple pulses have similar stability properties to large pulses but that double pulses are unstable.

Key words. integro-differential equations, integral equations, standing pulses, neural networks, stability

AMS subject classifications. 34A36, 37N25, 45G10, 92B20

DOI. 10.1137/040609483

1. Introduction. In the accompanying paper [27], we considered stationary localized self-sustaining solutions of an integro-differential neural network or neural field equation. The pulses are bistable with an inactive neural state and could be the underlying mechanism of persistent neuronal activity responsible for working memory. However, in order to serve as a memory, these states must be stable to perturbations. Here we compute the linear stability of stationary pulse states.

The neural field equation has the form

    ∂u(x, t)/∂t + u(x, t) = ∫_{−∞}^{∞} w(x − y) f[u(y, t)] dy    (1.1)

with a nonsaturating gain function

    f[u] = [α(u(y, t) − uT) + 1] Θ(u − uT),    (1.2)

where Θ(·) is the Heaviside function, and the "wizard hat" connection function is

    w(x) = A e^{−a|x|} − e^{−|x|}.    (1.3)
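The gain and connection functions enter every computation in this paper; as a small illustration (ours, not code from the paper), they can be written directly as:

    import numpy as np

    def gain(u, alpha, uT):
        """Nonsaturating gain f[u] = [alpha*(u - uT) + 1]*Theta(u - uT) from (1.2)."""
        return (alpha * (u - uT) + 1.0) * (u > uT)

    def w(x, A, a):
        """'Wizard hat' lateral-inhibition connection function (1.3)."""
        return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))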

In [27], we considered stationary solutions u0(x), where u0(x) > uT on an interval −xT < x < xT, u(xT, t) = uT, and u(x, t) = u0(x) satisfies the stationary integral equation

    u0(x) = ∫_{−xT}^{xT} w(x − y) [α(u0(y) − uT) + 1] dy.    (1.4)

∗Received by the editors June 3, 2004; accepted for publication (in revised form) by D. Terman September 21, 2004; published electronically April 14, 2005. This work was supported by the National Institute of Mental Health, the A. P. Sloan Foundation, and the National Science Foundation under agreement 0112050.

http://www.siam.org/journals/siads/4-2/60948.html
†Department of Mathematics, The Ohio State University, Columbus, OH 43210 ([email protected]).
‡Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260 ([email protected]).


Figure 1. Single-pulse solution.

We have shown the existence of pulse solutions of (1.4) in the form of single pulses, dimple pulses, and double pulses [26, 27]. Examples can be seen in Figures 1, 17, and 14. We constructed the pulses by converting the integral equation (1.4) into piecewise-linear ODEs and then matching their solutions at the threshold points xT [36, 37]. When the excitation A and gain α are small, there are no pulse solutions. If either is increased, there is a saddle-node bifurcation where two coexisting single pulses, a small one and a large one, arise. As the gain or excitation increases, more than two pulses can coexist. For certain parameters, the large pulse can become a dimple pulse, and a dimple pulse can become a double pulse [26, 27].

In this paper, we derive an eigenvalue equation to analyze the stability of the pulse solutions. While our eigenvalue equation is valid for any continuous and integrable connection function w(x), we explicitly compute the eigenvalues for (1.3). For the cases that we tested, we find that the small pulse is unstable and the large pulse is stable. If there is a third (larger) pulse, then it is unstable. The stability properties of dimple pulses are the same as those of the corresponding large pulses. Double pulses are unstable.

2. Eigenvalue equation for stability. We consider small perturbations around a stationary pulse solution by substituting u(x, t) = u0(x) + εv(x, t) into (1.1), where ε > 0 is small. Since the pulse solutions are localized in space, we must assume that the perturbation to the pulse will lead to time-dependent changes to the boundaries of the stationary pulse (i.e., where u0(xT) = uT). Thus the boundaries −xT and xT become time-dependent functions

    x1(t) = −xT + ε∆1(t),    (2.1)
    x2(t) = xT + ε∆2(t),     (2.2)

where ε∆1(t) and ε∆2(t) are the changes of the boundaries −xT and xT produced by the small perturbations. Inserting u(x, t) = u0(x) + εv(x, t) into (1.1) and eliminating the stationary solution with (1.4) give

    v_t(x, t) + v(x, t) = α ∫_{−xT}^{xT} w(x − y) v(y, t) dy + I1 + I2,    (2.3)


where

    I1 = ∫_{−(xT + ε∆1(t))}^{−xT} w(x − y) [α(u0(y) + εv(y, t) − uT) + 1] dy,    (2.4)

    I2 = ∫_{xT}^{xT + ε∆2(t)} w(x − y) [α(u0(y) + εv(y, t) − uT) + 1] dy.    (2.5)

Expanding the integrals I1 and I2 to order ε yields the linearized dynamics for the perturbations v(x, t):

    v_t(x, t) + v(x, t) = α ∫_{−xT}^{xT} w(x − y) v(y, t) dy − w(x + xT)∆1 + w(x − xT)∆2.    (2.6)

The time dependence of ∆1 and ∆2 is found by using the fact that u(x, t) is equal to the threshold uT at the boundaries of the pulse. Inserting (2.1) and (2.2) into the boundary condition u(x1(t), t) = uT and expanding to first order in ε lead to

    ∆1(t) = −v(−xT, t)/c,    (2.7)

where

    c = du0(x)/dx |_{x = −xT} > 0.    (2.8)

Similarly,

    ∆2(t) = v(xT, t)/c.    (2.9)

Consider time variations of v(x, t) that obey

    v(x, t) = v(x) e^{λt},    (2.10)

where v(x) is a bounded and continuous function that decays to 0 exponentially as x → ±∞. Substituting (2.10) together with (2.7) and (2.9) into (2.6) gives

    (1 + λ)v(x) = w(x − xT) v(xT)/c + w(x + xT) v(−xT)/c + α ∫_{−xT}^{xT} w(x − y) v(y) dy,    (2.11)

where λ is an eigenvalue with corresponding eigenfunction v(x). Equation (2.11) is an eigenvalue problem that governs the stability of small perturbations to pulse solutions of the neural field equation (1.1). If the real parts of all the eigenvalues are negative, the stationary pulse solution u0(x) is stable. If the real part of one of the eigenvalues is positive, u0(x) is unstable.

We define an operator L: C[−xT , xT ] → C[−xT , xT ]:

    Lv(x) = w(x − xT) v(xT)/c + w(x + xT) v(−xT)/c + α ∫_{−xT}^{xT} w(x − y) v(y) dy.    (2.12)


Then the eigenvalue equation (2.11) becomes

    (1 + λ)v(x) = L(v(x)) on C[−xT, xT].    (2.13)

We show in the appendix (Theorem A.7) that L is a compact operator. We also show the following properties of the eigenvalue equation (2.11):

1. Eigenvalues λ are always real (Theorem A.4).
2. Eigenvalues λ are bounded by λb ≡ 2k0/c + 2αk1xT − 1, where k0 is the maximum of |w(x)| on [0, 2xT] and |w(x − y)| ≤ k1 for all (x, y) ∈ J × J, J = [−xT, xT] (Theorem A.5).
3. Zero is always an eigenvalue (Theorem A.6).
4. λ = −1 is the only possible accumulation point of the eigenvalues (Theorem A.8). Thus, the only possible essential spectrum of the operator L is located at λ = −1, implying that the discrete spectrum of L (i.e., the eigenvalues of (2.11)) captures all of the stability properties.

We use these properties to compute the discrete eigenvalues and thereby determine the stability of the pulse solutions.
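The eigenvalue problem (2.11) can also be probed numerically. The Python sketch below is our illustration (the paper instead reduces (2.11) to ODEs in section 4): it discretizes the compact operator L of (2.12) on a grid with the trapezoid rule and returns the eigenvalues λ of (2.13) as eigenvalues of the matrix minus one. The connection parameters, gain α, half-width xT, and slope c at the left boundary are inputs; the values shown at the bottom are placeholders, and c must be taken from the particular pulse being tested.

    import numpy as np

    def pulse_eigenvalues(A, a, alpha, xT, c, n=601):
        """Approximate the discrete spectrum of (2.11) by a Nystrom discretization of L."""
        x = np.linspace(-xT, xT, n)
        dx = x[1] - x[0]
        w = lambda z: A * np.exp(-a * np.abs(z)) - np.exp(-np.abs(z))

        # trapezoid weights for the integral term alpha * int w(x - y) v(y) dy
        wts = np.full(n, dx)
        wts[0] = wts[-1] = dx / 2.0
        Lmat = alpha * w(x[:, None] - x[None, :]) * wts[None, :]

        # boundary terms w(x - xT) v(xT)/c and w(x + xT) v(-xT)/c act on the endpoint values
        Lmat[:, -1] += w(x - xT) / c
        Lmat[:, 0] += w(x + xT) / c

        lam = np.linalg.eigvals(Lmat) - 1.0       # (1 + lambda) v = L v
        return np.sort(lam.real)[::-1]            # eigenvalues are real in theory (Theorem A.4)

    # placeholder values; c should be u0'(-xT) of the pulse under test
    print(pulse_eigenvalues(A=2.8, a=2.4, alpha=0.22, xT=0.683035, c=1.9)[:5])

Such a discretization is only a cross-check; it approximates the discrete spectrum and clusters spurious values near λ = −1, consistent with property 4 above.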

3. Linear stability analysis of the Amari case (α = 0). Amari [3] computed the stability of pulse solutions to (1.1) for α = 0. He obtained stability by computing the dynamics of the pulse boundary points. He found that the small pulse is always unstable and the large pulse is always stable. Pinto and Ermentrout [46] later confirmed Amari's results by deriving an eigenvalue problem for small perturbations.

We consider a stationary pulse solution of (1.1) with width xT. Applying the eigenvalue equation (2.11) to the Amari case yields

    (1 + λ)v(x) = w(x − xT) v(xT)/c + w(x + xT) v(−xT)/c ≡ T1(v(x)),    (3.1)

where T1 is a compact operator on C[−xT, xT] (see Theorem A.7). The spectrum of a compact operator is a countable set with no accumulation point different from zero. Therefore, the only possible location of the essential spectrum for T1 is at λ = −1. This implies that instability of a pulse is indicated by the existence of a positive discrete eigenvalue.

The eigenvalue λ can be obtained by setting x = −xT and x = xT in (3.1) to give a two-dimensional system

    (1 + λ − w(0)/c) v(xT) − (w(2xT)/c) v(−xT) = 0,    (3.2)
    −(w(2xT)/c) v(xT) + (1 + λ − w(0)/c) v(−xT) = 0.    (3.3)

This is identical to the eigenvalue equation of [46]. Setting the determinant of the system (3.2)–(3.3) to zero gives the eigenvalues

    λ = (w(0) ± w(2xT))/c − 1,    (3.4)

which agrees with [46].


The stationary solution of the Amari problem satisfies

    u(x) = ∫_{−xT}^{xT} w(x − y) dy = ∫_{x−xT}^{x+xT} w(y) dy.    (3.5)

Differentiating u(x) yields u′(x) = w(x + xT ) − w(x− xT ), implying

    u′(−xT) = w(0) − w(2xT) = c.    (3.6)

Inserting this into (3.4) gives the eigenvalues

    λ = (w(0) + w(2xT))/c − 1   and   λ = 0.    (3.7)

The zero eigenvalue was expected from translational symmetry. Since w(0) > w(2xT), the sign of c alone determines the stability of the pulse. Recall that the small and large pulses arise from a saddle-node bifurcation [3, 9, 26, 27]. At the saddle-node bifurcation, both eigenvalues are zero. Thus, setting λ = 0 in (3.7) shows that the width of the pulse satisfies w(2xT) = 0 [3]. For our connection function, w(x) has only one zero x0 on (0, ∞) (see [26, 27]), where x0 = ln A/(a − 1). Thus xT = x0/2 at the saddle-node. For the large pulse, xT > x0/2, implying w(2xT) < 0 and c > 0. Conversely, c < 0 for the small pulse. Thus the large pulse is stable and the small pulse is unstable.

Consider the example a = 2.4, A = 2.8, uT = 0.400273, α = 0. There exist two single pulses, the large pulse l and the small pulse s [26, 27]. For the pulse l, x^l_T = 0.607255 gives the nonzero eigenvalue λ = −0.165986 < 0, indicating that it is stable. For the small pulse s, x^s_T = 0.21325 gives λ = 0.488339 > 0, indicating that it is unstable.
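The Amari-case criterion is straightforward to check numerically. The Python sketch below (ours, for illustration) finds the two pulse half-widths for the example above by solving the threshold condition u(xT) = uT, and then classifies each pulse by the sign of w(2xT), which by (3.6) and (3.7) fixes the sign of the nonzero eigenvalue.

    import numpy as np
    from scipy.optimize import brentq

    A, a, uT = 2.8, 2.4, 0.400273

    def w(x):
        return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))

    def u_at_edge(xT):
        """u(xT) = int_0^{2 xT} w(z) dz for the Amari stationary pulse."""
        return (A / a) * (1 - np.exp(-2 * a * xT)) - (1 - np.exp(-2 * xT))

    x0 = np.log(A) / (a - 1)                       # single zero of w on (0, inf)
    small_xT = brentq(lambda s: u_at_edge(s) - uT, 1e-6, x0 / 2)
    large_xT = brentq(lambda s: u_at_edge(s) - uT, x0 / 2, 5.0)

    for name, xT in [("small", small_xT), ("large", large_xT)]:
        stable = w(2 * xT) < 0                     # nonzero eigenvalue has the sign of w(2 xT)
        print(f"{name} pulse: xT = {xT:.6f}, w(2xT) = {w(2*xT):+.4f}, stable = {stable}")

The root brackets exploit the fact that u(xT) increases up to xT = x0/2 and decreases afterward, so exactly one small and one large half-width exist for this threshold.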

4. Computing the eigenvalues. For the case of α > 0, we must compute the eigenvalues of (2.11) with the integral operator. Our strategy is to reduce the integral equation to a piecewise-linear ODE on three separate regions. The discrete spectrum can then be obtained from the zeros of the determinant of a linear system based on the matching conditions between the regions. This approach is similar to the Evans function method [10, 17, 18, 19, 20, 30, 50, 61].

4.1. ODE form of the eigenvalue problem. We transform (2.11) (with the connection function defined by (1.3)) into three piecewise-linear ODEs on (−∞, −xT), (−xT, xT), and (xT, ∞). The ODEs then obey a set of matching conditions at x = xT and x = −xT.

On the domain x ∈ (−xT , xT ), we can write (2.11) in the form

(1 + λ)v(x) = T1(x) + I1 − I2 + I3 − I4,(4.1)

where

    I1(x) = α ∫_{−xT}^{x} A e^{−a(x−y)} v(y) dy,    I2(x) = α ∫_{−xT}^{x} e^{−(x−y)} v(y) dy,

    I3(x) = α ∫_{x}^{xT} A e^{a(x−y)} v(y) dy,    I4(x) = α ∫_{x}^{xT} e^{(x−y)} v(y) dy,


and

    T1(x) = w(x − xT) v(xT)/c + w(x + xT) v(−xT)/c.    (4.2)

Differentiating (4.1) repeatedly gives

    (1 + λ)v′(x) = T′1(x) − aI1 + I2 + aI3 − I4,    (4.3)
    (1 + λ)v′′(x) = T′′1(x) + a²I1 − I2 + a²I3 − I4 + 2α(1 − aA)v(x),    (4.4)
    (1 + λ)v′′′(x) = T′′′1(x) − a³I1 + I2 + a³I3 − I4 + 2α(1 − aA)v′(x),    (4.5)
    (1 + λ)v′′′′(x) = T′′′′1(x) + a⁴I1 − I2 + a⁴I3 − I4 + 2α(1 − a³A)v(x) + 2α(1 − aA)v′′(x),    (4.6)

where we have used

I ′1 = −aI1 + αAv(x), I ′2 = −I2 + αv(x),

I ′3 = aI3 − αAv(x), I ′4 = I4 − αv(x).

Taking (4.4) − a²(4.1) and rearranging give

    I2 + I4 = [ (λ + 1)v′′ + (2αaA − 2α − a²λ − a²)v + a²T1 − T′′1 ] / (a² − 1).    (4.7)

Substituting (4.7) back into (4.1) leads to

    I1 + I3 = [ (λ + 1)v′′ + (2αaA − 2α − λ − 1)v + T1 − T′′1 ] / (a² − 1).    (4.8)

Substituting both (4.7) and (4.8) into (4.6) results in a fourth order ODE for v on the domain x ∈ (−xT, xT):

    [(1 + λ)/α] v′′′′ = [ (1 + λ)(a² + 1)/α + 2(1 − aA) ] v′′ + a[ 2(A − a) − (λ + 1)a/α ] v
                        + T′′′′1(x) − (1 + a²)T′′1(x) + a²T1(x).    (4.9)

Using T′′′′1(x) − (1 + a²)T′′1(x) + a²T1(x) = 0 (obtained by differentiating T1(x)) and simplifying lead to

    (1 + λ)v′′′′ − Bv′′ + Cv = 0,    x ∈ (−xT, xT),    (4.10)

where B = (1 + λ)(a² + 1) + 2α(1 − aA) and C = (λ + 1)a² − 2αa(A − a). On the domain x ∈ (xT, ∞), (2.11) can be written as

(1 + λ)v = T1 + J1 − J2,(4.11)

where

    J1 = αA ∫_{−xT}^{xT} e^{−a(x−y)} v(y) dy,    J2 = α ∫_{−xT}^{xT} e^{−(x−y)} v(y) dy,


and T1 is defined by (4.2) on the domain (xT, ∞). Differentiating (4.11) and using J′1 = −aJ1 and J′2 = −J2 give

    (1 + λ)v′(x) = T′1 − aJ1 + J2,    (4.12)
    (1 + λ)v′′(x) = T′′1 + a²J1 − J2.    (4.13)

Taking a(4.11) + (a + 1)(4.12) + (4.13) and using T′′1 + (1 + a)T′1 + aT1 = 0 lead to

v′′ + (a + 1)v′ + av = 0, x ∈ (xT ,∞).(4.14)

Similarly, the ODE on (−∞,−xT ) is given by

v′′ − (a + 1)v′ + av = 0, x ∈ (−∞,−xT ).(4.15)

In summary, the eigenvalue problem (2.11) reduces to three ODEs:

(ODE I) v′′ − (a + 1)v′ + av = 0, x ∈ (−∞,−xT ),

(ODE II) (1 + λ)v′′′′ −Bv′′ + Cv = 0, x ∈ (−xT , xT ),

(ODE III) v′′ + (a + 1)v′ + av = 0, x ∈ (xT ,∞),

where B = (1 + λ)(a2 + 1) + 2α(1 − aA) and C = (λ + 1)a2 − 2αa(A− a).
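Since ODE II is linear with constant coefficients, its behavior is governed by the roots of the characteristic polynomial (1 + λ)ω⁴ − Bω² + C = 0 analyzed in section 4.4. The short Python sketch below (our illustration) computes B, C, the discriminant ∆, and the four characteristic roots for given parameters and a trial eigenvalue λ, which is convenient for checking which case of the later characteristic-value tables applies.

    import numpy as np

    def characteristic_roots(a, A, alpha, lam):
        """Roots of (1 + lam) w^4 - B w^2 + C = 0 for ODE II."""
        B = (1 + lam) * (a**2 + 1) + 2 * alpha * (1 - a * A)
        C = (1 + lam) * a**2 - 2 * alpha * a * (A - a)
        disc = B**2 - 4 * (1 + lam) * C                    # Delta of (4.33)
        w_sq = (B + np.array([1, -1]) * np.sqrt(complex(disc))) / (2 * (1 + lam))
        roots = np.concatenate([np.sqrt(w_sq), -np.sqrt(w_sq)])
        return B, C, disc, roots

    B, C, disc, roots = characteristic_roots(a=2.4, A=2.8, alpha=0.22, lam=0.0)
    print("B =", B, " C =", C, " Delta =", disc)
    print("roots:", np.round(roots, 6))

For these example parameters and λ = 0, ∆ is negative and the roots come out complex, in line with the regime described in section 4.4 for eigenvalues between λl and λr.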

4.2. Matching conditions. The solutions of ODEs I, II, and III and their first threederivatives must satisfy a set of matching conditions across the boundary points −xT and xT .We derive these conditions from the original eigenvalue equation (2.11) which we write as

c(1 + λ)v(x) = w(x− xT )v(xT ) + w(x + xT )v(−xT ) + cαW (x),(4.16)

where W (x) =∫ xT

−xTw(x−y)v(y)dy, x ∈ (−∞,∞). From (4.16), we see that v(x) is continuous

on (−∞,∞). However, w(x) has a cusp at x = 0 which will lead to discontinuities in thederivatives of v(x) across the boundary points −xT and xT .

We first probe the discontinuities of W (x) and its derivatives. W (x) is continuous on(−∞,∞). By a change of variables, W (x) =

∫ x+xT

x−xTw(z)v(x− z)dz, from which we obtain

W ′(x) = w(x + xT )v(−xT ) − w(x− xT )v(xT ) +

∫ x+xT

x−xT

w(z)v′(x− z)dz,

indicating that W ′(x) is also continuous on (−∞,∞). However, W ′(x) is not smooth at−xT and xT . Differentiating W ′(x) for x = −xT , xT gives

W ′′(x) = w′(x + xT )v(−xT ) − w′(x− xT )v(xT ) + w(x + xT )v′(−x+T )

− w(x− xT )v′(x−T ) −∫ x−xT

x+xT

w(z)v′′(x− z)dz,

where v′(−x+T ) = limx→−x+

Tv′(x) for x > −xT (right limit) and v′(x−T ) = limx→x−

Tv′(x) for

x < xT (left limit).


Using the convention

[·] |x=xT = ·|x=x+T− ·|x=x−

T, [·] |x=−xT = ·|x=−x+

T− ·|x=−x−

T

to represent the jump at the boundaries, we find that[W ′′(xT )

]= W ′′(x)|x=x+

T−W ′′(x)|x=x−

T= −

[w′(0)

]v(xT ),[

W ′′(−xT )]

= W ′′(x)|x=−x+T−W ′′(x)|x=−x−

T=

[w′(0)

]v(−xT ).

We differentiate W ′′(x) for x = −xT , xT and find[W ′′′(xT )

]= −

[w′′(0)

]v(xT ) −

[w′(0)

]v′(x−T ),[

W ′′′(−xT )]

=[w′′(0)

]v(−xT ) +

[w′(0)

]v′(−x+

T ).

To find the matching conditions for the derivatives of v(x), we differentiate (4.16) withrespect to x for x = −xT , xT and obtain

c(1 + λ)v′(x) = w′(x− xT )v(xT ) + w′(x + xT )v(−xT ) + cαW ′(x).

v′(x) is discontinuous at the boundaries because of the discontinuity of w′(x) at x = 0.Therefore,

[v′(xT )

]=

1

c(1 + λ)

[w′(0)

]v(xT ),

[v′(−xT )

]=

1

c(1 + λ)

[w′(0)

]v(−xT ).

Differentiating (4.16) twice yields

c(1 + λ)v′′(x) = w′′(x− xT )v(xT ) + w′′(x + xT )v(−xT ) + cαW ′′(x), x = −xT , xT .

There are discontinuities of v′′(x) at −xT and xT that come from W ′′(−xT ) and W ′′(xT ). Notethat w′′(0−) = w′′(0+). The jump conditions of v′′(x) at −xT and xT are[

v′′(xT )]

1 + λ

[W ′′(xT )

]= − α

1 + λ

[w′(0)

]v(xT ),[

v′′(−xT )]

1 + λ

[W ′′(−xT )

]=

α

1 + λ

[w′(0)

]v(−xT ).

By differentiating a third time we find the jump conditions for v′′′(x) at −xT and xT :

[v′′′(xT )

]=

1

c(1 + λ)

[w′′′(0)

]v(xT ) +

α

1 + λ

[W ′′′(xT )

]=

1

c(1 + λ)

[w′′′(0)

]v(xT ) − α

1 + λ

[w′(0)

]v′(x−T ),

[v′′′(−xT )

]=

1

c(1 + λ)

[w′′′(0)

]v(−xT ) +

α

1 + λ

[W ′′′(xT )

]=

1

c(1 + λ)

[w′′′(0)

]v(−xT ) +

α

1 + λ

[w′(0)

]v′(−x+

T ).


Using the connection function w(x) defined in (1.3), we have[w′(0)

]= w′(0+) − w′(0−) = 2(1 − aA),[

w′′(0)]

= w′′(0+) − w′′(0−) = 0,[w′′′(0)

]= w′′′(0+) − w′′′(0−) = 2(1 − a3A).

These results lead directly to the following theorem.

Theorem 4.1. The continuous eigenfunction v(x) on (−∞,∞) in (2.11) has the followingjumps in its first, second, and third order derivatives at the boundary −xT and xT :

[v(xT )] = 0,(4.17) [v′(xT )

]=

2α(1 − aA)

1 + λv(xT ),(4.18)

[v′′(xT )

]=

2(aA− 1)

c(1 + λ)v(xT ),(4.19)

[v′′′(xT )

]=

2(1 − a3A)

c(1 + λ)v(xT ) +

2α(aA− 1)

1 + λv′(x−T ),(4.20)

[v(−xT )] = 0,(4.21) [v′(−xT )

]=

2α(1 − aA)

1 + λv(−xT ),(4.22)

[v′′(−xT )

]=

−2(aA− 1)

c(1 + λ)v(−xT ),(4.23)

[v′′′(−xT )

]=

2(1 − a3A)

c(1 + λ)v(−xT ) − 2α(aA− 1)

1 + λv′(−x+

T ).(4.24)

4.3. Eigenfunction symmetries. We define v1(x), v2(x), and v3(x) as the solutions ofODEs I, II, and III, respectively (see Figure 2). The three ODEs are all linear with constantcoefficients. The continuous and bounded eigenfunction v(x) of (2.11) is defined as

v(x) =

⎧⎪⎨⎪⎩v1(x), x ∈ (−∞,−xT ],

v2(x), x ∈ [−xT , xT ],

v3(x), x ∈ [xT ,∞),

and v1(x) matches v2(x) at −xT and v2(x) matches v3(x) at xT .

0−xT xT

xODE I ODE II ODE III

v1(x) v2(x) v3(x)

Figure 2. Valid ODEs on different sections and their solutions.


Lemma 4.2. The eigenfunction v(x) is either even or odd.Proof. By symmetry of ODE II, if v2(x) is a solution, then v2(−x) is also a solution.

Hence, both the even function v2(x)+v2(−x)2 and the odd function v2(x)−v2(−x)

2 are solutions ofODE II.

Let

T2(x) = α

∫ xT

−xT

w(x− y)v2(y)dy.

If v2(x) is an even function, then since w(x) is even, T2(x) is also even.By the continuity of v(x) on R, v(xT ) and v(−xT ) can be replaced by v2(x

−T ) and v2(−x+

T ),respectively. Thus the eigenvalue problem (2.11) is

(1 + λ)v(x) = w(x− xT )v2(xT )

c+ w(x + xT )

v2(−xT )

c+ T2(x).(4.25)

Given that v2(x), w(x), and T2(x) are all even functions, from (4.25) we see that v(x) is alsoeven. Similarly, we can show that v(x) is odd when v2(x) is odd.

Lemma 4.3. The matching conditions at −xT are identical to those at xT when v(x) is anodd or an even function.

Proof. This is shown with a direct calculation of the matching conditions of v′(x), v′′(x),and v′′′(x) at both −xT and xT .

If v(x) is even, i.e., v(−xT ) = v(xT ) and v′(−x+T ) = −v′(x−T ), then defining the jump of v

at x as [v(x)] = v(x+) − v(x−), the following equalities are derived:

[v(−xT )] = − [v(xT )] ,(4.26) [v′(−xT )

]=

[v′(xT )

],(4.27) [

v′′(−xT )]

= −[v′′(xT )

],(4.28) [

v′′′(−xT )]

=[v′′′(xT )

].(4.29)

Given the equalities (4.26)–(4.29), a direct calculation shows that the matching conditions(4.21)–(4.24) at −xT are equivalent to the matching conditions (4.17)–(4.20) at xT .

When v(x) is odd, using the same approach, we can also justify that the matching condi-tions at −xT and xT are the same.

4.4. ODE solutions. ODEs I, II, and III are linear with constant coefficients and canbe readily solved in terms of the parameters A, a, α, and uT . The eigenvalue λ is specifiedwhen the solutions of the three ODEs are matched across the boundaries at x = −xT andx = xT . Solutions of ODE I are related to ODE III by a reflection x → −x. By Lemma 4.3,the matching conditions at −xT are the same as those at xT . Thus matching solutions v2(x)of ODE II with solutions v3(x) of ODE III across xT are sufficient to specify the eigenvaluesof (2.11). The solution of ODE III is

v3(x) = c5e−ax + c6e

−x,

where c5 and c6 are constants. Notice that v3(x) exponentially decays to zero as x → ∞, inaccordance with the assumed properties of v(x).


The solutions of ODE II will depend nontrivially on the parameters A, a, and α. Thecharacteristic equation of ODE II is

(1 + λ)ω4 −Bω2 + C = 0,

where

B = (1 + λ)(a2 + 1) + 2α(1 − aA)(4.30)

and

C = (1 + λ)a2 − 2αa(A− a).(4.31)

The characteristic values are

ω2 =B ±

√∆

2(1 + λ),(4.32)

where

∆ = B2 − 4(1 + λ)C(4.33)

= (a2 − 1)2λ2 + 2(a2 − 1)(a2 − 1 − 2aAα− 2α)λ

− (a2 − 1)(1 − a2 + 4α + 4aAα) + 4α2(1 − aA)2.

Let λB be the zero of B. If ∆ is negative, (4.32) shows that ODE II will have complexcharacteristic values. If ∆ is positive, combinations of B and ∆ yield either real or imaginaryvalues. For fixed A, a, and α, ∆ is a parabola with a left zero λl and a right zero λr. ByLemmas A.9 and A.10 in the appendix, either λl ≤ λB ≤ λr and does not intersect with eitherbranch of

√∆ or λB ≤ λl and intersects with the left branch of

√∆. Tables 1 and 2 describe

all the possible structures of the characteristic values ±ω1 and ±ω2. There are three possibleforms of solution v2(x): (1) both ω1 and ω2 are real; (2) both ω1 and ω2 are complex; and(3) ω1 is real and ω2 is imaginary.

Table 1Characteristic value chart when λl < λB < λr.

1 2 3 4 5

−1 < λ < λl λ = λl λl < λ < λr λ = λr λ > λr

B < 0 B < 0 B > 0 or B < 0 B > 0 B > 0

∆ > 0, ∆ = 0 ∆ < 0 ∆ = 0 ∆ > 0

|B| <√

ω1 real ω1,2 imaginary ω1,2 complex ω1,2 real ω1,2 realω2 imaginary ω1 = ω∗

2 ω1 = ω∗2 ω1 = ω2

We denote the even solutions of ODE II as ve2(x) and the odd solutions as vo

2(x). Whenλ ≥ λr or λI ≤ λ ≤ λl, both ω1 and ω2 are real. Thus

ve2(x) = c3µ1(x) + c4

µ1(x) − µ2(x)

ω1 − ω2,(4.34)

Page 136: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

260 YIXIN GUO AND CARSON C. CHOW

Table 2Characteristic value chart when λB < λl < λr.

1 2 3 4 5 6

−1 < λ < λI λI ≤ λ < λl λ = λl λl < λ < λr λ = λr λ > λr

B < 0 or B > 0 B > 0 B < 0 B > 0 B > 0B > 0

∆ > 0, ∆ > 0, ∆ = 0 ∆ < 0 ∆ = 0 ∆ > 0

|B| <√

∆ |B| >√

ω1 real ω1,2 real ω1,2 real ω1,2 complex ω1,2 real ω1,2 realω2 imaginary ω1 = ω2 ω1 = ω∗

2 ω1 = ω2

where µ1(x) = eω1x + e−ω1x and µ2(x) = eω2x + e−ω2x. We use (4.34) because it is more con-venient to resolve the degenerate case of ω1 = ω2. As λ → λ−

r , µ1 → µ2, and ε = ω1 −ω2 → 0,(4.34) becomes

ve2(x) = c3(e

ω1x + e−ω1x) + c4(eω1x + e−ω1x) − (eω1xe−εx + e−ω1xeεx)

ε.

Replacing eεx by 1 + εx and e−εx by 1 − εx and taking the limit as ε → 0 yield

ve2(x) = c3(e

ω1x + e−ω1x) + c4x(eω1x − e−ω1x)

= 2c3 cosh px + 2c4x sinh px.(4.35)

Equation (4.34) approaches (4.35) as λ → λ−r . It matches the solution ve

2(x) as λ → λ+r , which

is given in (4.37).

Similarly, vo2(x) can be written as

vo2(x) = c3(e

ω1x − e−ω1x) + c4(eω1x − e−ω1x) − (eω2x − e−ω2x)

ω1 − ω2.

When λl < λ < λr, ω1 and ω2 are complex. Let ω1 = p + iq, ω2 = p− iq. When v2(x) iseven, write ve

2(x) as

ve2(x) = 2c3 cos qx cosh px + 2c4

sin qx

qsinh px.(4.36)

As λ → λ+l or λ−

r , q → 0,

ve2(x) → 2c3 cosh px + 2c4x sinh px.(4.37)

vo2(x) can be written as

vo2(x) = 2c3 cos qx sinh px− 2c4

sin qx

qcosh px,

where p =

√√B2+|∆|

2(1+λ) cos θ, p =

√√B2+|∆|

2(1+λ) sin θ, and θ = 12 arctan

√|∆|B .

Page 137: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

STABILITY OF PULSES IN NEURAL NETWORKS 261

When −1 < λ < λI , ω1 is real and w2 is imaginary. Let ω2 = iq, where q =√√

∆−B2(1+λ) .

Then

ve2(x) = c3(e

ω1x + e−ω1x) + 2c4 cos(qx),(4.38)

vo2(x) = c3(e

ω1x − e−ω1x) + 2c4sin(qx)

q.(4.39)

5. Stability criteria. By Theorem 4.1, v1(x) and v2(x) must match at −xT , and v2(x)and v3(x) must match at xT . By Lemma 4.3, the matching conditions at −xT are same as thematching conditions at xT for v(x) even or odd. Therefore, it suffices to apply the matchingcondition to v2(x) and v3(x) at xT for the even and odd cases separately. This reduces thedimensionality of the resulting eigenvalue problem by a factor of two. In general, the matchingconditions can be written as

T1 :

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

[v(xT )] = v3(x+T ) − v2(x

−T ) = 0,

[v′(xT )] = v′3(x+T ) − v′2(x

−T ) =

2α(1 − aA)

1 + λv(xT ),

[v′′(xT )] = v′′3(x+T ) − v′′2(x−T ) =

2(aA− 1)

c(1 + λ)v(xT ),

[v′′′(xT )] = v′′′3 (x+T ) − v′′′2 (x−T ) =

2(1 − a3A)

c(1 + λ)v(xT ) +

2α(aA− 1)

1 + λv′(x−T ),

where v(xT ) = v3(x+T ) and v′(x−T ) = v′2(x

−T ).

A given stationary pulse solution u0(x) will be specified by a set of parameters a, A, α, xT ,and uT . The eigenvalues λ that determine stability of pulse solutions are given by system T1.To compute these eigenvalues, we require the appropriate form of the eigenfunctions v2(x)and v3(x). We do so by finding characteristic values (4.32) corresponding to the parametersspecifying the given stationary pulse solution. We expedite this process by calculating theconstants B (4.30) and C (4.31) and then using Tables 1 and 2 to deduce the characteristicvalue types. We then substitute the appropriate form for v2(x) and v3(x) into T1, wherecoefficients c3 and c4 in v2(xT ) and c5 and c6 in v3(xT ) are unknown. We replace v(xT ) byv3(x

+T ) and v′(x−T ) by v′2(x

−T ). This results in a 4 × 4 homogeneous linear system with four

unknown free parameters c3, c4, c5, c6. We must do this for both even and odd eigenfunctionsresulting in two separate linear systems that must be solved.

The coefficient matrix of this system must be singular for a nontrivial solution (c3, c4,c5, c6). Hence, the determinant D(λ) of the coefficient matrix must be zero. Thus, the solutionof D(λ) = 0 is an eigenvalue and it determines the stability of the stationary solution. If thereexists a λ such that 0 < λ < λb and D(λ) = 0, then the standing pulse is unstable. If thereis no positive λ such that 0 < λ < λb and D(λ) = 0, the standing pulse is stable. Ourdeterminant D(λ) for stability is similar to the Evans function [17, 18, 19, 20].

5.1. Stability of the small and large pulse. Two single-pulse solutions were shown to existin the accompanying paper [26] for parameters a = 2.4, A = 2.8, α = 0.22, uT = 0.400273,and β = 1. The large pulse has a higher amplitude and larger width and is denoted by ul(x).


The small pulse is slightly above threshold and much narrower than ul(x) and is denoted byus(x). The explicit forms are given by

ul(x) =

0.665 cos(0.31x) cosh(1.49x) − 3.78 sin(0.31x) sinh(1.49x) + 0.33, x ∈ [−xT , xT ],

6.237e−2.4|x| − 1.604e−|x| otherwise,

where xT = 0.683035, and

us(x) =

0.22 cos(0.31x) cosh(1.49x) − 8.03 sin(0.31x) sinh(1.49x) + 0.33, x ∈ [−xT , xT ],

1.203e−2.4|x| − 0.416e−|x| otherwise,

where xT = 0.202447.We first calculate the upper bound for the eigenvalue λb, which is different for the large

pulse and small pulse because λb depends on xT . Let λlb be the upper bound for the large

pulse and λsb be the upper bound for the small pulse. For the parameter set a = 2.4, A = 2.8,

α = 0.22, uT = 0.400273, the upper bounds are λlb = 1.25917 and λs

b = 1.66628.For the above set of parameters, v3(x) always has the following form:

v3(x) = c5e−ax + c6e

−x.

The form of v2(x) depends on ω1 and ω2. For this specific set of parameters, the left andright solutions of ∆ (4.33) are λl = −0.627692 and λr = 0.192861. When 0 ≤ λ ≤ λr, bothω1 and ω2 are complex, implying

v2(x) =

⎧⎪⎪⎨⎪⎪⎩ve2(x) = 2c3 cos qx cosh px + 2c4

sin qx

qsinh px, v2(x) is even,

vo2(x) = 2c3 cos qx sinh px− 2c4

sin qx

qcosh px, v2(x) is odd,

where p, q are real and c3, c4 are unknown.Substituting ve

2(x) (vo2(x)) and v3(x) into system T1 results in an unwieldy 4 × 4 linear

system in c3, c4, c5, and c6. We use Mathematica [59] to calculate the determinant of thecoefficient matrix as a function of λ.

When 0.192861 = λr ≤ λ ≤ λlb = 1.25917, ω1,2 is real, and v2(x) has the form

v2(x) =

⎧⎪⎪⎨⎪⎪⎩c3(e

ω1x + e−ω1x) + c4(eω1x + e−ω1x) − (eω2x + e−ω2x)

ω1 − ω2, v2(x) is even,

c3(eω1x − e−ω1x) − c4

(eω1x − e−ω1x) − (eω2x + e−ω2x)

ω1 − ω2, v2(x) is odd.

Figure 3 gives a plot of D(λ) on the domain [0, λb], combining the regimes where ω1,2 is realand complex. We see that there is no positive λ that satisfies D(λ) = 0. Figure 4 shows D(λ)for odd v(x). We see that D(λ) = 0 only when λ = 0, which is consistent with Theorem A.6.The lack of a positive zero of D(λ) indicates that the large pulse is stable.

For the same set of parameters, a = 2.4, A = 2.8, α = 0.22, uT = 0.400273, the upperbound of the small pulse is λs

b = 1.66628. Repeating the same procedure as for the largepulse, we plot D(λ) for both ve

2(x) and vo2(x) (Figures 5 and 6). The existence of a positive

eigenvalue λ = λ∗ satisfying D(λ∗) = 0 in Figure 5 implies the instability of the small singlepulse. The plot of D(λ) corresponding to vo(x) in Figure 6 identifies the zero eigenvalue.

Page 139: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

STABILITY OF PULSES IN NEURAL NETWORKS 263

λr λb

b0 1

λ0

50

D(λ)

Figure 3. Plot of D(λ) for large single pulse ul(x) when v2(x) is even. a = 2.4, A = 2.8, α = 0.22,uT = 0.400273, xT = 0.683035, λr = 0.192861, λl

b = 1.25917. There is no positive λ such that D(λ) = 0,λ ≤ λl

b.

λr λb

b0 1

λ0

50

D(λ)

Figure 4. Plot of D(λ) for large single pulse ul(x) when v2(x) is odd. a = 2.4, A = 2.8, α = 0.22,uT = 0.400273, xT = 0.683035, λr = 0.192861, λl

b = 1.25917. There is no positive λ such that D(λ) = 0,λ ≤ λl

b. When v2(x) is odd, D(λ) does identify the zero eigenvalue.

5.2. Stability and instability for different gain α. For both the large single pulses andthe small single pulses, D(λ) is monotonically increasing (see Figures 7 and 8). However, D(0)for small pulses is negative. As λ increases, D(λ) crosses the λ-axis and becomes positive.Therefore, D(λ) has a positive zero. For the large pulse, D(0) is positive and D(λ) has nopositive zero. We follow D(0) for a range of α ∈ (0.22, 0.59) in Figure 9 and find that D(0)is always negative for small pulses and positive for large pulses. Hence, the large pulses arestable and the small pulses are unstable in this range.

5.3. Stability of the dimple pulse ud(x) and the instability of the third pulse. Whenthere are only two single pulses, the large pulse could be a dimple pulse instead of a singlepulse. This dimple pulse has the same stability properties as a large pulse. The parameterset a = 2.4, A = 2.8, α = 0.22, uT = 0.18, and xT = 2.048246 corresponds to a dimple pulse.

Page 140: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

264 YIXIN GUO AND CARSON C. CHOW

λr λb

sλ∗1

λ

−80

−40

0

40

D(λ)

Figure 5. Plot of D(λ) for small single pulse us(x) when v2(x) is even. a = 2.4, A = 2.8, α = 0.22,uT = 0.400273, xT = 0.683035, λr = 0.192861, λs

b = 1.66628, λ∗ = 0.603705. There is one positive λ = λ∗

such that D(λ∗) = 0, λ∗ ≤ λsb.

λr λb

s0 1

λ0

50

D(λ)

Figure 6. Plot of D(λ) for small single pulse us(x) when v2(x) is odd. a = 2.4, A = 2.8, α = 0.22,uT = 0.400273, xT = 0.683035, λr = 0.192861, λs

b = 1.66628. There is no positive λ such that D(λ) = 0,λ ≤ λs

b. When v2(x) is odd, D(λ) = 0 at λ = 0 identifies the zero eigenvalue.

Carrying out the stability calculation yields D(λ) shown in Figures 10 and 11. We see thatthere is no zero crossing and thus the dimple pulse is stable. This is true for all dimple pulseswe tested in this category.

As shown in [26] and [27], for certain parameter regimes, there can be more than twocoexisting pulses. When there are three pulses, the third pulse can be either a single pulse ora dimple pulse. For example, when A = 2.8, a = 2.2, α = 0.8, uT = 0.2, the third pulse is thesingle pulse

u(x) =

1.28 cos(0.47x) cosh(1.2x) + 1.27 sin(0.47x) sinh(1.2x) + 0.8129, x ∈ [−xT , xT ],

198.78e2|x| − 15.15e−|x| otherwise,

where xT = 2.20629. D(λ) shown in Figure 12 indicates that this pulse is unstable. When

Page 141: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

STABILITY OF PULSES IN NEURAL NETWORKS 265

0.5 20 1

λ0

50

D(λ)

α=0.22

α=0.35α=0.45

α=0.59

Figure 7. Plots of D(λ) for large single pulses with different gain α. a = 2.4, A = 2.8, α = 0.22,uT = 0.400273.

21

λ

−80

−40

0

40

D(λ)

α=0.22

α=0.59α=0.35

α=0.45

Figure 8. Plots of D(λ) for small single pulses with different gain α. a = 2.4, A = 2.8, α = 0.22,uT = 0.400273.

a = 2.6, A = 2.8, α = 0.6178, uT = 0.063, the third pulse is the dimple pulse

u(x) =

0.35 cos(1.112x) cosh(1.112x) + 0.24 sin(1.112x) sinh(1.112x) + 0.163, x ∈ [−xT , xT ],

232.89e2.6|x| − 9.31e−|x| otherwise,

where xT = 1.98232. As seen in Figure 13, D(λ) crosses zero for a positive λ, indicating thatit is unstable. In all the cases that we have examined, we find that the third pulse is unstable.

6. Double pulse and its stability. For certain parameter regimes, there can be double-pulse solutions which have two disjoint open and finite intervals for which the synaptic inputu(x) is above threshold [26, 27, 35]. An example is shown in Figure 14. We consider symmetric

Page 142: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

266 YIXIN GUO AND CARSON C. CHOW

0.22 0.59α

−80

−60

−40

−20

0

20

D(0)

Figure 9. Plots of D(0) for both large single pulses (blue branch) and small single pulse (red branch) withα ∈ (0.22, 0.59). a = 2.4, A = 2.8, uT = 0.400273.

λr λb0 1

λ0.846137

25

50

75

D(λ)

Figure 10. Plot of D(λ) for dimple pulse when v2(x) is even. a = 2.4, A = 2.8, α = 0.22, xT = 2.048246,λr = 0.192861, λd

b = 2.48147. There is no positive λ such that D(λ) = 0.

double pulses that satisfy the equation

u(x) =

∫ x1

−x2

w(x− y)f [u(y)]dy +

∫ x2

x1

w(x− y)f [u(y)]dy,(6.1)

where x1,2 > 0. Thus u > uT for x ∈ (x1, x2) ∪ (−x2,−x1), u = uT for x = −x2,−x1, x1, x2,and u < uT outside of these regions and approaches zero as x → ∞. We show their existencein [26] and [27].

Linearizing the dynamical neural field equation (1.1) around a stationary double-pulse

Page 143: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

STABILITY OF PULSES IN NEURAL NETWORKS 267

λr λb0 1

λ

25

50

75

D(λ)

Figure 11. Plot of D(λ) for dimple pulse when v2(x) is odd. a = 2.4, A = 2.8, α = 0.22, uT = 0.18,xT = 2.048246, λr = 0.192861, λs

b = 2.48147. There is no positive λ such that D(λ) = 0, λ ≤ λdb . When v2(x)

is odd, D(λ) does identify the zero eigenvalue because D(λ) = 0 at λ = 0. This is consistent with Theorem A.6.

0.5 1λ∗

λ

−0.04

D(λ)

Figure 12. Plot of D(λ) for the third pulse (a single pulse) when v2(x) is even. a = 2.2, A = 2.8, α = 0.8,uT = 0.2, xT = 2.0629, c = 2.75017, D(0) = −0.0153. There is a positive λ such that D(λ) = 0.

solution u(x) gives eigenvalue equation

(1 + λ)v(x) = w(x− x1)v(x1)

c1+ w(x + x1)

v(−x1)

c1+ w(x− x2)

v(x2)

c2(6.2)

+ w(x + x2)v(−x2)

c2+ α

(∫ −x1

−x2

w(x− y)v(y)dy +

∫ x2

x1

w(x− y)v(y)dy

).

The eigenvalues λ of (6.2) possess the same properties as those of the eigenvalue equation forthe single-pulse solutions.

For simplicity, we consider the Amari case in which α = 0. The solution of (6.2) for α > 0

Page 144: Research Portfolio Yixin Guo Department of Mathematics Drexel Universityyixin/Yixin_research.pdf · 2012-09-19 · Research Portfolio . Yixin Guo . Department of Mathematics . Drexel

268 YIXIN GUO AND CARSON C. CHOW

0.2λ∗λ

−0.2

1.8

D(λ)

Figure 13. Plot of D(λ) for the third pulse (a dimple pulse) when v2(x) is even. a = 2.6, A = 2.8,α = 0.6178, uT = 0.063, xT = 1.98232, c = 2.21523, D(0) = −0.094. There is a positive λ such that D(λ) = 0.

−x1

−x2

x1

x2

x

−0.3

0.8

u

uT

Figure 14. Double pulse for Amari case in which α = 0. A = 2.8, a = 2.6, α = 0, uT = 0.26, x1 = 0.279525,x2 = 1.20521.

would involve a long calculation. For α = 0, the eigenvalue equation (6.2) becomes

(1 + λ)v(x) = w(x− x1)v(x1)

c1+ w(x + x1)

v(−x1)

c1(6.3)

+ w(x− x2)v(x2)

c2+ w(x + x2)

v(−x2)

c2,

where c1 = u′(x1) and c2 = u′(−x2). Then u′(−x1) = −c1 and u′(x2) = −c2. Using anapproach similar to Theorem A.4 in the appendix, we can show that λ is real. By takingthe derivative of (6.1), we can also show that zero is an eigenvalue of system (6.3), and thecorresponding eigenfunction is u′(x).


Figure 15. Plot of the polynomial d(λ) for the small double pulse shown in Figure 14.

Setting x = x1, x = −x1, x = x2, and x = −x2 in (6.3) gives a four-dimensional system

$$\begin{pmatrix}
\dfrac{w(0)}{c_1} - 1 - \lambda & \dfrac{w(2x_1)}{c_1} & \dfrac{w(x_1-x_2)}{c_2} & \dfrac{w(x_1+x_2)}{c_2} \\[1.2ex]
\dfrac{w(2x_1)}{c_1} & \dfrac{w(0)}{c_1} - 1 - \lambda & \dfrac{w(x_1+x_2)}{c_2} & \dfrac{w(x_1-x_2)}{c_2} \\[1.2ex]
\dfrac{w(x_1-x_2)}{c_1} & \dfrac{w(x_1+x_2)}{c_1} & \dfrac{w(0)}{c_2} - 1 - \lambda & \dfrac{w(2x_2)}{c_2} \\[1.2ex]
\dfrac{w(x_1+x_2)}{c_1} & \dfrac{w(x_1-x_2)}{c_1} & \dfrac{w(2x_2)}{c_2} & \dfrac{w(0)}{c_2} - 1 - \lambda
\end{pmatrix}
\begin{pmatrix} v(x_1) \\ v(-x_1) \\ v(x_2) \\ v(-x_2) \end{pmatrix} = 0. \tag{6.4}$$

The determinant D(λ) of the coefficient matrix in system (6.4) is a fourth-order polynomial. Since zero is an eigenvalue, D(λ) = λ d(λ), where d(λ) is a third-order polynomial. Consequently, the stability of the stationary solution u(x) is determined by the roots of the third-order polynomial d(λ), which can be found numerically. We computed d(λ) for the two double pulses shown in Figure 14. Figure 15 shows a plot of the third-order polynomial d(λ) for the small double pulse. It has three positive zeros, indicating instability. The plot of d(λ) for the large double pulse, shown in Figure 16, has two positive zeros. Therefore, both the small and the large double pulses are unstable. We have not found any stable double pulses for any parameter sets that we tested. However, we have not fully investigated the parameter space of A, a, and uT.
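Since D(λ) is a quartic in λ, its roots are straightforward to locate numerically. The sketch below is ours, not the authors': it assembles the matrix in (6.4) for the small double pulse of Figure 14, computes c1 = u′(x1) and c2 = u′(−x2) from the derivative of (6.1) with α = 0, and reads off the roots by interpolating the quartic through five samples.

```python
import numpy as np

# Sketch (our code): roots of the quartic D(lambda) from (6.4) for the small
# double pulse of Figure 14 (A = 2.8, a = 2.6, alpha = 0, uT = 0.26).
A, a = 2.8, 2.6
x1, x2 = 0.279525, 1.20521

def w(x):
    return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))

# Slopes of the stationary pulse at its threshold crossings, from differentiating (6.1):
c1 = w(0) - w(2 * x1) - w(x1 - x2) + w(x1 + x2)   # c1 = u'(x1)
c2 = w(0) - w(x1 - x2) + w(x1 + x2) - w(2 * x2)   # c2 = u'(-x2)

def D(lam):
    """Determinant of the coefficient matrix in (6.4)."""
    M = np.array([
        [w(0)/c1 - 1 - lam, w(2*x1)/c1,        w(x1 - x2)/c2,     w(x1 + x2)/c2],
        [w(2*x1)/c1,        w(0)/c1 - 1 - lam, w(x1 + x2)/c2,     w(x1 - x2)/c2],
        [w(x1 - x2)/c1,     w(x1 + x2)/c1,     w(0)/c2 - 1 - lam, w(2*x2)/c2],
        [w(x1 + x2)/c1,     w(x1 - x2)/c1,     w(2*x2)/c2,        w(0)/c2 - 1 - lam],
    ])
    return np.linalg.det(M)

# Five samples determine the quartic exactly; one root sits at lambda = 0,
# and the remaining three are the roots of the cubic d(lambda).
lams = np.linspace(-1.0, 3.0, 5)
coeffs = np.polyfit(lams, [D(s) for s in lams], 4)
print("roots of D(lambda):", np.sort(np.roots(coeffs).real))
```

If the matrix in (6.4) is assembled as above, three of the four roots should come out positive, consistent with Figure 15.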

7. Discussion. Our results show that although many types of pulse solutions are possible, only the family of large pulses and associated dimple pulses are stable. For the situation of three coexisting pulses, the third and largest pulse is always unstable. It is possible that more than three pulses can coexist, although we did not investigate situations beyond three. The double pulses we examined were not stable, in accordance with previous work [35].

The caveat is that we were only able to examine specific examples individually or over limited parameter ranges. Although we have an analytical expression for the eigenvalues, the


Figure 16. Plot of the polynomial d(λ) for the large double pulse shown in Figure 14.

length of these expressions makes them difficult to analyze. As a result, we were unable to make as strong a claim as Amari, who showed that large pulses are always stable and small pulses are always unstable [3]. It may be possible to find some patterns in the expressions to make more general deductions. From our parameter explorations, we were unable to find stable pulse solutions other than the large pulse and the associated dimple pulse.

In this paper, we derived an eigenvalue problem for the linear stability of standing pulses. Then we used an equivalent ODE formulation of the eigenvalue problem to develop the Evans function for the neural field equation (1.1). Alternatively, one can derive the Evans function using the integral form of the neural field equation instead of using ODEs. This approach might yield a more general stability criterion, which would compensate for the limitation of our ODE approach. Evans functions for models with nonlocal terms have been constructed for traveling wave solutions and periodic solitary waves [10, 30, 50, 61]. To the best of our knowledge, this approach has not been applied to standing pulses of the neural field equation.

We wish to note that numerical simulations on discretized lattices can give misleading results regarding the stability and existence of pulse solutions of the associated continuum neural field equation. We conducted some numerical experiments using a discretization of the neural field equation (1.1), and to our surprise we were able to easily find examples of stable dimple and double pulses even though the continuum analogue shows that these solutions either do not exist or cannot be stable. The resolution of this paradox is that a discrete lattice may stabilize solutions that are marginally stable in the continuum case.

Consider the Amari neural network equation consisting of N neurons,

$$\frac{du_i}{dt} = -u_i + \Delta x \sum_{j=0}^{N} w\bigl(\Delta x\,(i-j)\bigr)\,\Theta[u_j - u_T], \tag{7.1}$$

where w(i − j) is given by (1.3), Θ(·) is the Heaviside function, and ∆x gives the discretization mesh size. For an initial condition for which uj > uT on a contiguous set of points i, . . . , k and k − i is less than the expected width of the large pulse in the analogous continuum neural field equation, the numerical solution converges toward the expected large-pulse solution.


However, if the initial set of points is larger than the width of the large pulse (we have not fully investigated how much larger it needs to be), then there is a possibility that the simulation will converge toward an entirely different state.

For example, a numerical simulation with the parameter set N = 200, ∆x = 0.1, A = 1.8, a = 1.6, and uT = 0.124, with an initial condition ui = 1 for i ∈ 50 . . . 150, converges to the stable dimple-pulse state shown in Figure 17. Different initial domains lead to different attracting states whose width is close to the initial domain width. For a large enough initial domain, the dimple pulse breaks into a stable double pulse. Increasing the initial domain further can lead to stable multiple pulses with an increasingly higher number of bumps.

Figure 17. Result of a numerical simulation of (7.1) for parameters N = 200, ∆x = 0.1, A = 1.8, a = 1.6, and uT = 0.124. The arbitrary discretization length scale is chosen so that x = 0.1 i.
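The lattice experiment described above is easy to reproduce. The following forward-Euler sketch is our own (the paper does not give its simulation code); the decay term follows (7.1), while the time step and integration time are our choices.

```python
import numpy as np

# Sketch (our code): forward Euler on (7.1) with N = 200, dx = 0.1,
# A = 1.8, a = 1.6, uT = 0.124, and u_i = 1 for i in 50..150 initially.
N, dx, dt = 200, 0.1, 0.05
A, a, uT = 1.8, 1.6, 0.124

x = dx * (np.arange(N + 1) - N // 2)   # lattice centered at x = 0 (our choice, for plotting)
d = np.abs(x[:, None] - x[None, :])    # |dx*(i - j)|
W = A * np.exp(-a * d) - np.exp(-d)    # connection matrix w(dx*(i - j))

u = np.zeros(N + 1)
u[50:151] = 1.0                        # super-threshold initial block, i = 50..150

for _ in range(20000):                 # integrate long enough to reach a steady state
    u += dt * (-u + dx * (W @ (u > uT).astype(float)))

print("final pulse width:", dx * np.count_nonzero(u > uT))
```

With these values the lattice should settle onto a stable localized state of roughly the initial width, as in Figure 17; shrinking ∆x toward the estimate given later in the text should make such states disappear.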

We can show that these states do not exist in the analogous continuum neural field equation. Consider a stationary pulse solution of (1.1) for α = 0. A pulse of width xT satisfies

$$u(x) = \phi(x, x_T), \tag{7.2}$$

where

$$\phi(x, x_T) = \int_{-x_T}^{x_T} \left( A e^{-a|x-y|} - e^{-|x-y|} \right) dy. \tag{7.3}$$

The pulse can exist if it satisfies the existence condition

$$u_T = \phi(x_T, x_T), \tag{7.4}$$

from which the width xT can be obtained. A plot of the existence condition is shown in Figure 18.
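For reference, the integral in (7.3) evaluated at x = xT has a simple closed form (this step is not written out in the text but follows directly from (7.3)):

$$\phi(x_T, x_T) = \int_0^{2x_T}\left(Ae^{-as} - e^{-s}\right)ds = \frac{A}{a}\left(1 - e^{-2a x_T}\right) - \left(1 - e^{-2x_T}\right) \;\longrightarrow\; \frac{A}{a} - 1 \quad \text{as } x_T \to \infty.$$

For A = 1.8 and a = 1.6 this limit equals 0.125 and is approached from above, so for uT = 0.124 the wide-pulse branch of (7.4) stays roughly 0.001 above threshold; this is the margin used in the mesh-size estimate below.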

It is immediately apparent that the large pulse does not exist. The existence function approaches u = uT from above for large enough xT. While it is very close to uT, it never crosses it. However, for the analogous discretized equation (7.1), the discrete mesh can break the symmetry of this nearly marginal mode and result in a family of stable pulse solutions of arbitrary widths larger than a given width.


Figure 18. Existence condition for pulse solutions of the neural field equation (7.2) for parameters A = 1.8, a = 1.6, and uT = 0.124.

This effect can be intuitively understood by examining Figure 17. The neurons immediately adjacent to the edge of the pulse are significantly below threshold and thus have no effect on the rest of the network. A perturbation on the order of the distance they are below threshold would be necessary to cause these neurons to fire and influence the network. In the continuum equation, the neurons on the boundary of the pulse are precisely at threshold. Arbitrarily small perturbations can push the field above threshold and influence the other neurons. A stable pulse must withstand these edge perturbations. Discretization eliminates these destabilizing edge perturbation effects.

We can make a simple estimate of how fine the discretization mesh must be in order for these discrete effects to disappear. The distance by which the neuron adjacent to the pulse lies below threshold is approximately ∂xφ(x = xT, xT) dx ∼ (A − 1) dx. For the parameter set of our simulation, the continuum existence condition shows that φ(xT, xT) − uT > 0.001. Thus, to eliminate the discreteness effect, we require the adjacent neuron to be above threshold, i.e., (A − 1) dx < 0.001, as it would be in the continuum case. This leads to an estimate of dx < 0.00125. Hence, for a domain of dimension x > 20, a network size of N > 16,000 is necessary to eliminate the discreteness effect.
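The numbers in this estimate are easy to check; the sketch below is our restatement of the arithmetic, using the large-xT approximations quoted in the text.

```python
# Our restatement of the mesh-size estimate (same approximations as the text).
A, a, uT, domain = 1.8, 1.6, 0.124, 20.0

gap = A / a - 1 - uT        # large-xT gap between the existence function and uT  (~0.001)
slope = A - 1               # |d phi/dx| at the pulse edge in the large-xT limit
dx_max = gap / slope        # adjacent lattice point must lie within 'gap' of threshold
print(f"gap = {gap:.4f}, dx < {dx_max:.5f}, N > {domain / dx_max:.0f}")
```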

Biological neural networks are inherently discrete. Thus this discreteness effect may be exploited by the brain to stabilize localized excitations. Our numerical simulation is an example of a discretized line attractor [55] where the width of the pulse is determined by the initial condition. Although the discrete network may have a richer structure, this does not imply that the study of continuum neural field equations is not necessary. Field equations lend themselves more readily to analysis, and many insights into the structure and properties of neural networks have been gained by studying them. We suggest that studies combining neural field equations, discrete neural network equations, and biophysically based spiking neurons may be a fruitful way to uncover the dynamics of these systems [28, 34, 51].

Appendix. Properties of the eigenvalue problem. We prove some properties of the eigenvalue problem (2.11) with the connection function given by (1.3). First consider the functions


$$\phi_1(x) = \frac{1}{2a}\int_{-\infty}^{\infty} e^{-a|x-y|}\,(F_u + \Theta_u)\,v(y)\,dy,$$

$$\phi_2(x) = \frac{1}{2}\int_{-\infty}^{\infty} e^{-|x-y|}\,(F_u + \Theta_u)\,v(y)\,dy,$$

where F(u) = α(u − uT), Θ(u) is the Heaviside function, and the subscript denotes partial differentiation.

Lemma A.1. The eigenfunction v(x) satisfies

(1 + λ)v = 2(aAφ1 − φ2).

Proof.

$$\begin{aligned}
(1+\lambda)v &= w(x-x_T)\frac{v(x_T)}{c} + w(x+x_T)\frac{v(-x_T)}{c} + \alpha\int_{-x_T}^{x_T} w(x-y)\,v(y)\,dy \\
&= \int_{-\infty}^{\infty} w(x-y)\,\frac{\delta(y-x_T)+\delta(y+x_T)}{c}\,v(y)\,dy + \int_{-\infty}^{\infty} w(x-y)\,F_u\,v(y)\,dy \\
&= \int_{-\infty}^{\infty} w(x-y)\,\Theta_u\,v(y)\,dy + \int_{-\infty}^{\infty} w(x-y)\,F_u\,v(y)\,dy \\
&= A\int_{-\infty}^{\infty} e^{-a|x-y|}(F_u+\Theta_u)\,v(y)\,dy - \int_{-\infty}^{\infty} e^{-|x-y|}(F_u+\Theta_u)\,v(y)\,dy \\
&= 2(aA\phi_1 - \phi_2).
\end{aligned}$$

Lemma A.2. Functions φ1 and φ2 satisfy

$$-\phi_1'' + a^2\phi_1 = (F_u + \Theta_u)v, \tag{A.1}$$

$$-\phi_2'' + \phi_2 = (F_u + \Theta_u)v. \tag{A.2}$$

Proof. The second derivative of φ1(x) is

$$\phi_1'' = \frac{a}{2}\left[\int_{-\infty}^{x} e^{-a(x-y)}(F_u+\Theta_u)v\,dy + \int_{x}^{\infty} e^{a(x-y)}(F_u+\Theta_u)v\,dy\right] - (F_u+\Theta_u)v. \tag{A.3}$$

Subtracting (A.3) from a²φ1(x) yields

$$-\phi_1'' + a^2\phi_1 = (F_u + \Theta_u)v. \tag{A.4}$$

The identity −φ2″ + φ2 = (Fu + Θu)v can be obtained in the same fashion.

Lemma A.3. limx→±∞ φ1,2 = 0 and limx→±∞ φ′1,2 = 0, provided that v(x) is bounded on (−∞, ∞) and decays exponentially to zero as x → ±∞.

Proof. When x ≫ xT,

$$\phi_1(x) = \frac{1}{2a}\left[\alpha e^{-ax}\int_{-x_T}^{x_T} e^{ay}v(y)\,dy + e^{-a(x-x_T)}\frac{v(x_T)}{c} + e^{-a(x+x_T)}\frac{v(-x_T)}{c}\right].
$$


Hence, limx→∞ φ1 = 0 provided that v(x) is bounded on [−xT, xT]. When x ≪ −xT < 0, as x → −∞,

$$\phi_1(x) = \frac{1}{2a}\left[\alpha e^{ax}\int_{-x_T}^{x_T} e^{-ay}v(y)\,dy + e^{a(x-x_T)}\frac{v(x_T)}{c} + e^{a(x+x_T)}\frac{v(-x_T)}{c}\right] \to 0,$$

and

$$\phi_1' = \frac{1}{2}\left[-\int_{-\infty}^{x} e^{-a(x-y)}(F_u+\Theta_u)v\,dy + \int_{x}^{\infty} e^{a(x-y)}(F_u+\Theta_u)v\,dy\right].$$

As x → ∞,

$$\lim_{x\to\infty}\phi_1' = \lim_{x\to\infty}\,-\frac{1}{2}\int_{-\infty}^{x} e^{-a(x-y)}(F_u+\Theta_u)v\,dy
= \lim_{x\to\infty}\,-\frac{e^{-ax}}{2}\left[\alpha\int_{-x_T}^{x_T} e^{ay}v(y)\,dy + e^{ax_T}\frac{v(x_T)}{c} + e^{-ax_T}\frac{v(-x_T)}{c}\right] = 0.$$

As x → −∞,

$$\lim_{x\to-\infty}\phi_1' = \lim_{x\to-\infty}\frac{1}{2}\int_{x}^{\infty} e^{a(x-y)}(F_u+\Theta_u)v\,dy
= \lim_{x\to-\infty}\frac{e^{ax}}{2}\left[\alpha\int_{-x_T}^{x_T} e^{-ay}v(y)\,dy + e^{-ax_T}\frac{v(x_T)}{c} + e^{ax_T}\frac{v(-x_T)}{c}\right] = 0.$$

Similarly, one can prove that limx→±∞ φ2 = 0 and limx→±∞ φ′2 = 0. Therefore, limx→±∞ φ1,2 = 0 and limx→±∞ φ′1,2 = 0.

Theorem A.4. The eigenvalue λ in (2.11) is always real.

Proof. Using the results of Lemma A.2, forming aA φ̄1 · (A.1) − φ̄2 · (A.2) gives

$$aA\,\bar{\phi}_1\left(-\phi_1'' + a^2\phi_1\right) - \bar{\phi}_2\left(-\phi_2'' + \phi_2\right) = (F_u + \Theta_u)\,v\left(aA\,\bar{\phi}_1 - \bar{\phi}_2\right), \tag{A.5}$$

where φ̄1,2 are the complex conjugates of φ1,2. Integration by parts gives

$$\int_{-\infty}^{\infty} \bar{\phi}_1\phi_1''\,dx = \bar{\phi}_1\phi_1'\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} \bar{\phi}_1'\phi_1'\,dx = -\int_{-\infty}^{\infty} \left|\phi_1'\right|^2 dx,$$

and similarly ∫ φ̄2 φ2″ dx = −∫ |φ2′|² dx. From Lemma A.1,

$$\tfrac{1}{2}(1+\lambda)v = aA\phi_1 - \phi_2, \qquad \tfrac{1}{2}(1+\bar{\lambda})\bar{v} = aA\bar{\phi}_1 - \bar{\phi}_2.$$

Integrating both sides of (A.5) gives

$$aA\left(\int_{-\infty}^{\infty}\left|\phi_1'\right|^2 dx + a^2\int_{-\infty}^{\infty}\left|\phi_1\right|^2 dx\right) - \left(\int_{-\infty}^{\infty}\left|\phi_2'\right|^2 dx + \int_{-\infty}^{\infty}\left|\phi_2\right|^2 dx\right) = \frac{1}{2}(1+\lambda)\int_{-\infty}^{\infty}\left|v\right|^2 (F_u+\Theta_u)\,dx. \tag{A.6}$$


Using

$$\int_{-\infty}^{\infty}\left|v\right|^2\Theta_u\,dx = \frac{1}{c}\int_{-\infty}^{\infty}\left|v\right|^2\left(\delta(x-x_T)+\delta(x+x_T)\right)dx = \frac{1}{c}\left(\left|v(x_T)\right|^2 + \left|v(-x_T)\right|^2\right)$$

in (A.6) and rearranging give

$$\frac{1}{2}(1+\lambda) = \frac{aA\left(\int_{-\infty}^{\infty}\left|\phi_1'\right|^2 dx + a^2\int_{-\infty}^{\infty}\left|\phi_1\right|^2 dx\right) - \left(\int_{-\infty}^{\infty}\left|\phi_2'\right|^2 dx + \int_{-\infty}^{\infty}\left|\phi_2\right|^2 dx\right)}{\int_{-\infty}^{\infty} F_u\left|v\right|^2 dx + \frac{1}{c}\left(\left|v(x_T)\right|^2 + \left|v(-x_T)\right|^2\right)}. \tag{A.7}$$

The right-hand side of (A.7) is real; therefore, λ is real.

Theorem A.5. The eigenvalue λ in (2.11) is bounded by λb ≡ 2k0/c + 2αk1xT − 1, where k0 is the maximum of |w(x)| on [0, 2xT] and |w(x − y)| ≤ k1 for all (x, y) ∈ J × J, where J = [−xT, xT].

Proof. We write the eigenvalue problem (2.11) as

$$(1+\lambda)v = Lv, \tag{A.8}$$

where the operator L is defined in (2.12). The function w(x − y) is continuous on the square J × J. We take the norm of both sides of (A.8),

$$(1+\lambda)\|v\| = \|Lv\|,$$

with the norm ‖·‖ = max_{x∈J} |·|. Thus

$$\begin{aligned}
\|Lv\| &= \left\| w(x-x_T)\frac{v(x_T)}{c} + w(x+x_T)\frac{v(-x_T)}{c} + \alpha\int_{-x_T}^{x_T} w(x-y)\,v(y)\,dy \right\| \\
&\le \max_{x\in J}\left| w(x-x_T)\frac{v(x_T)}{c} \right| + \max_{x\in J}\left| w(x+x_T)\frac{v(-x_T)}{c} \right| + \max_{x\in J}\left| \alpha\int_{-x_T}^{x_T} w(x-y)\,v(y)\,dy \right| \\
&\le |w(x-x_T)|\,\frac{\|v\|}{c} + |w(x+x_T)|\,\frac{\|v\|}{c} + \alpha\,\|v\|\int_{-x_T}^{x_T} \max_{x\in J}|w(x-y)|\,dy \\
&\le \frac{2k_0\|v(x)\|}{c} + 2\alpha k_1 x_T\,\|v(x)\|,
\end{aligned}$$

where

$$k_0 = \max_{x\in J}|w(x-x_T)| = \max_{x\in J}|w(x+x_T)|,$$

since w(x) is symmetric, and |w(x − y)| ≤ k1 for all (x, y) ∈ J × J. Therefore,

$$(1+\lambda)\|v(x)\| = \|Lv(x)\| \le \frac{2k_0\|v(x)\|}{c} + 2\alpha k_1 x_T\,\|v(x)\|,$$


leading to

$$\lambda \le \frac{2k_0}{c} + 2\alpha k_1 x_T - 1 \equiv \lambda_b.$$
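As a concrete illustration (ours, not the paper's), the bound λb can be evaluated for the single pulse of Figure 12; note that |x − y| ranges over [0, 2xT] on J × J, so k1 may be taken equal to k0.

```python
import numpy as np

# Evaluate lambda_b = 2*k0/c + 2*alpha*k1*xT - 1 for the Figure 12 parameters
# (a = 2.2, A = 2.8, alpha = 0.8, xT = 2.0629, c = 2.75017).
a, A, alpha = 2.2, 2.8, 0.8
xT, c = 2.0629, 2.75017

def w(x):
    return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))

s = np.linspace(0.0, 2 * xT, 20001)
k0 = np.max(np.abs(w(s)))            # max of |w| on [0, 2*xT]
k1 = k0                              # |w(x - y)| <= k0 for (x, y) in J x J
print("k0 =", round(k0, 4), " lambda_b =", round(2 * k0 / c + 2 * alpha * k1 * xT - 1, 4))
```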

Theorem A.6. λ = 0 is an eigenvalue.

Proof. Consider the equilibrium equation

$$u(x) = \int_{-\infty}^{\infty} w(x-y)\,f[u(y)]\,dy = \int_{-x_T}^{x_T} w(x-y)\,\bigl\{\alpha\left[u(y)-u_T\right] + 1\bigr\}\,dy, \tag{A.9}$$

where u(x) is a stationary standing-pulse solution. After the change of variables p = x − y, (A.9) becomes

$$u(x) = \int_{x-x_T}^{x+x_T} w(p)\,\bigl\{\alpha\left[u(x-p)-u_T\right] + 1\bigr\}\,dp. \tag{A.10}$$

Differentiating (A.10) with respect to x yields

$$u'(x) = w(x+x_T)\left[\alpha(u(-x_T)-u_T)+1\right] - w(x-x_T)\left[\alpha(u(x_T)-u_T)+1\right] + \alpha\int_{x-x_T}^{x+x_T} w(p)\,u'(x-p)\,dp. \tag{A.11}$$

Since u(−xT) = u(xT) = uT and u′(−xT) = c = −u′(xT),

$$\begin{aligned}
u'(x) &= w(x+x_T)\frac{u'(-x_T)}{c} - w(x-x_T)\frac{-u'(x_T)}{c} + \alpha\int_{x-x_T}^{x+x_T} w(p)\,u'(x-p)\,dp \\
&= w(x-x_T)\frac{u'(x_T)}{c} + w(x+x_T)\frac{u'(-x_T)}{c} + \alpha\int_{-x_T}^{x_T} w(x-y)\,u'(y)\,dy.
\end{aligned} \tag{A.12}$$

Equation (A.12) is the eigenvalue problem (2.11) with eigenvalue λ satisfying 1 + λ = 1, resulting in λ = 0. The corresponding eigenfunction is u′(x). Therefore, λ = 0 is an eigenvalue of (2.11) corresponding to the eigenfunction u′(x).

Theorem A.7. Consider the operator

$$L = T_1 + T_2, \tag{A.13}$$

where

$$T_1(v(x)) = w(x-x_T)\frac{v(x_T)}{c} + w(x+x_T)\frac{v(-x_T)}{c}, \qquad T_1: C[-x_T, x_T] \to C[-x_T, x_T],$$

$$T_2(v(x)) = \alpha\int_{-x_T}^{x_T} w(x-y)\,v(y)\,dy, \qquad T_2: C[-x_T, x_T] \to C[-x_T, x_T].$$

Both T1 and T2, and hence L, are compact operators.


Proof. It is obvious that both T1 and T2 are linear operators. The boundedness of T1 follows from

$$\begin{aligned}
\|T_1 v\| &= \max_{x\in J}\left| w(x-x_T)\frac{v(x_T)}{c} + w(x+x_T)\frac{v(-x_T)}{c} \right| \\
&\le |w(x-x_T)|\,\frac{\|v(x)\|}{c} + |w(x+x_T)|\,\frac{\|v(x)\|}{c} \le \frac{2k_0\|v\|}{c}.
\end{aligned}$$

Let (vn) be any bounded sequence in C[−xT, xT] with ‖vn‖ ≤ c0 for all n, and let y¹n = T1vn. Then ‖y¹n‖ ≤ ‖T1‖ ‖vn‖, so the sequence (y¹n) is bounded. Since w(x, t) = w(x − t) is continuous on J × J and J × J is compact, w is uniformly continuous on J × J. Hence, for any given ε1 > 0, there is a δ1 > 0 such that, for t = xT and all x1, x2 ∈ J satisfying |x1 − x2| < δ1,

$$|w(x_1 - x_T) - w(x_2 - x_T)| < \frac{c}{2c_0}\,\varepsilon_1.$$

Consequently, for x1, x2 as before and every n, one can obtain

$$\begin{aligned}
\left|y_n^1(x_1) - y_n^1(x_2)\right| &= \left|\left[w(x_1-x_T) - w(x_2-x_T)\right]\frac{v_n(x_T)}{c} + \left[w(x_1+x_T) - w(x_2+x_T)\right]\frac{v_n(-x_T)}{c}\right| \\
&< \left|w(x_1-x_T) - w(x_2-x_T)\right|\frac{c_0}{c} + \left|w(x_1+x_T) - w(x_2+x_T)\right|\frac{c_0}{c} \\
&< \frac{c}{2c_0}\,\varepsilon_1\,\frac{c_0}{c} + \frac{c}{2c_0}\,\varepsilon_1\,\frac{c_0}{c} = \varepsilon_1.
\end{aligned}$$

The boundedness of T2 follows from

$$\|T_2 v\| \le \|v\|\,\max_{x\in J}\int_{-x_T}^{x_T} |w(x-t)|\,dt.$$

Similarly, let y²n = T2vn. Then (y²n) is bounded. For any given ε2 > 0, there is a δ2 > 0 such that, for any t ∈ J and all x1, x2 ∈ J satisfying |x1 − x2| < δ2,

$$|w(x_1 - t) - w(x_2 - t)| < \frac{\varepsilon_2}{2x_T c_0},$$

so that

$$\left|y_n^2(x_1) - y_n^2(x_2)\right| = \left|\int_{-x_T}^{x_T}\left[w(x_1-t) - w(x_2-t)\right]v_n(t)\,dt\right| < 2x_T\,\frac{\varepsilon_2}{2x_T c_0}\,c_0 = \varepsilon_2.$$

This proves the equicontinuity of (y¹n) and (y²n). By Ascoli's theorem, both sequences have convergent subsequences. Since (vn) is an arbitrary bounded sequence and y¹n = T1vn, y²n = T2vn, the compactness of T1 and T2 follows from the criterion that an operator is compact if and only


if it maps every bounded sequence xn in X onto a sequence Txn in Y which has a convergent subsequence.

Theorem A.8. λ = −1 is the only possible accumulation point of the eigenvalues of L, and every spectral value λ ≠ −1 of L is an eigenvalue of L. Thus the only possible essential spectrum of the compact operator L is at λ = −1.

Proof. Letting γ = 1 + λ, the eigenvalue problem becomes

$$\gamma v(x) = Lv(x),$$

and the linear operator L is compact on the normed space C[−xT, xT]; γ is the eigenvalue of the operator L. The spectrum of a compact operator is a countable set with no accumulation point different from zero, and each nonzero member of the spectrum is an eigenvalue of the compact operator with finite multiplicity [32, 31]. Therefore, the only possible point of accumulation for the spectrum of the compact operator L is γ = 0, i.e., λ = −1, and every spectral value λ ≠ −1 of L is an eigenvalue of L. This suggests that the only possible essential spectrum is at λ = −1. All the spectral values λ such that λ > −1 are eigenvalues.

Lemma A.9. The zero of B, λB, obeys −1 < λB < λr. For the case a³ > A, λl < λB < λr, and for the case a³ < A, λB < λl < λr.

Proof. Set

$$B = (1+\lambda)(a^2+1) + 2\alpha(1-aA) = 0.$$

The zero of B is

$$\lambda_B = -\frac{a^2 + 1 + 2\alpha - 2aA\alpha}{a^2+1} = -1 + \frac{2\alpha(aA-1)}{a^2+1} > -1.$$

∆ is a quadratic function of λ and has two zeros. The left zero is

$$\lambda_l = \frac{1 - a^2 + 2aA\alpha + 2\alpha - 4\alpha\sqrt{aA}}{a^2-1}.$$

The right zero is

$$\lambda_r = \frac{1 - a^2 + 2aA\alpha + 2\alpha + 4\alpha\sqrt{aA}}{a^2-1}.$$

The difference between λr and λB is

$$\lambda_r - \lambda_B = \frac{4a\alpha(a+A) + 4\alpha\sqrt{aA}\,(a^2+1)}{a^4-1} > 0.$$

Therefore, −1 < λB < λr.

The difference between λB and λl is

$$\lambda_B - \lambda_l = \frac{4\alpha\left(\sqrt{aA}-1\right)\left(a^2-\sqrt{aA}\right)}{a^4-1}.$$

The sign of λB − λl depends on a² − √aA. If a² − √aA is positive, i.e., a³ > A, then λl < λB < λr. If a² − √aA is negative, i.e., a³ < A, then λB < λl < λr.

Lemma A.10. (i) For a³ > A and λl < λB < λr, B does not intersect the left branch or the right branch of √∆. (ii) For a³ < A and λB < λl < λr, B intersects only the left branch of √∆, once, at λI.


Proof. It is not difficult to see that B does not intersect the right branch of √∆ in both (i) and (ii): √∆ is linear in λ with slope a² − 1 for large λ, while the slope of B is a² + 1. Both a² − 1 and a² + 1 are positive and a² + 1 > a² − 1; thus B and the right branch of √∆ never meet. When λl < λB < λr, B < 0 for λ < λB and √∆ > 0 for λ < λl < λB; therefore, B and √∆ never intersect. In (ii), B intersects the left branch of √∆ at

$$\lambda_I = \frac{2A\alpha - 2a\alpha - a}{a}.$$
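The algebra in Lemma A.9 can be verified symbolically; the following short check is ours (using sympy) and simply confirms the stated expression for λr − λB.

```python
import sympy as sp

# Symbolic check (ours) of the formula for lambda_r - lambda_B in Lemma A.9.
a, A, alpha = sp.symbols('a A alpha', positive=True)
lam_B = -1 + 2 * alpha * (a * A - 1) / (a**2 + 1)
lam_r = (1 - a**2 + 2 * a * A * alpha + 2 * alpha + 4 * alpha * sp.sqrt(a * A)) / (a**2 - 1)
claimed = (4 * a * alpha * (a + A) + 4 * alpha * sp.sqrt(a * A) * (a**2 + 1)) / (a**4 - 1)
print(sp.simplify(lam_r - lam_B - claimed))   # expect 0
```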

Acknowledgments. We thank G. Bard Ermentrout, William Troy, Xinfu Chen, Jonathan Rubin, and Björn Sandstede for illuminating discussions.

REFERENCES

[1] C. D. Aliprantis, Problems in Real Analysis: A Workbook with Solutions, Academic Press, New York, 1999.
[2] C. D. Aliprantis and O. Burkinshaw, Principles of Real Analysis, Academic Press, New York, 1998.
[3] S. Amari, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybernet., 27 (1977), pp. 77–87.
[4] M. A. Arbib, ed., The Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, MA, 1995.
[5] K. E. Atkinson, Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press, Cambridge, UK, 1997.
[6] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory, Springer-Verlag, New York, 1999.
[7] W. E. Boyce and R. C. DiPrima, Introduction to Differential Equations, John Wiley and Sons, New York, 1970.
[8] A. R. Champneys and J. P. McKenna, On solitary waves of a piece-wise linear suspended beam model, Nonlinearity, 10 (1997), pp. 1763–1782.
[9] S. Coombes, G. J. Lord, and M. R. Owen, Waves and bumps in neuronal networks with axo-dendritic synaptic interactions, Phys. D, 178 (2002), pp. 219–241.
[10] S. Coombes and M. R. Owen, Evans functions for integral neural field equations with Heaviside firing rate function, SIAM J. Appl. Dyn. Syst., 3 (2004), pp. 574–600.
[11] L. M. Delves and J. L. Mohamed, Computational Methods for Integral Equations, Cambridge University Press, Cambridge, UK, 1988.
[12] D. G. Duffy, Green's Functions with Applications, Chapman and Hall/CRC, Boca Raton, FL, 2001.
[13] S. A. Ellias and S. Grossberg, Pattern formation, contrast control, and oscillations in the short-term memory of shunting on-center off-surround networks, Biol. Cybernet., 20 (1975), pp. 69–98.
[14] G. B. Ermentrout, XPPAUT, Simulation Software Tool.
[15] G. B. Ermentrout, Neural networks as spatio-temporal pattern-forming systems, Rep. Progr. Phys., 61 (1998), pp. 353–430.
[16] G. B. Ermentrout, Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students, Software Environ. Tools 14, SIAM, Philadelphia, 2002.
[17] J. W. Evans, Nerve axon equations, I: Linear approximations, Indiana Univ. Math. J., 21 (1972), pp. 877–955.
[18] J. W. Evans, Nerve axon equations, II: Stability at rest, Indiana Univ. Math. J., 22 (1972), pp. 75–90.
[19] J. W. Evans, Nerve axon equations, III: Stability of the nerve impulse, Indiana Univ. Math. J., 22 (1972), pp. 577–594.
[20] J. W. Evans, Nerve axon equations, IV: The stable and unstable impulse, Indiana Univ. Math. J., 24 (1975), pp. 1169–1190.
[21] G. B. Folland, Fourier Analysis and Its Applications, Wadsworth and Brooks/Cole Advanced Books and Software, Pacific Grove, CA, 1992.
[22] J. M. Fuster, Prefrontal Cortex: Anatomy, Physiology, and Neuropsychology of the Frontal Lobe, Lippincott-Raven, Philadelphia, 1997.
[23] F. Garvan, The Maple Book, Chapman and Hall, London, 2001.
[24] C. D. Green, Integral Equation Methods, Nelson, London, 1969.
[25] D. H. Griffel, Applied Functional Analysis, Ellis Horwood, Chichester, UK, 1985.
[26] Y. Guo, Existence and Stability of Standing Pulses in Neural Networks, Ph.D. thesis, University of Pittsburgh, Pittsburgh, PA, 2003.
[27] Y. Guo and C. C. Chow, Existence and Stability of Standing Pulses in Neural Networks: I. Existence, SIAM J. Appl. Dyn. Syst., 4 (2005), pp. 217–248.
[28] B. S. Gutkin, C. R. Laing, C. L. Colby, C. C. Chow, and G. B. Ermentrout, Turning on and off with excitation: The role of spike-timing asynchrony and synchrony in sustained neural activity, J. Comp. Neurosci., 11 (2001), pp. 121–134.
[29] E. Haskell and P. C. Bressloff, On the formation of persistent states in neuronal networks models of feature selectivity, J. Integ. Neurosci., 2 (2003), pp. 103–123.
[30] T. Kapitula, N. Kutz, and B. Sandstede, The Evans function for nonlocal equations, Indiana Univ. Math. J., to appear.
[31] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, New York, 1995.
[32] E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley and Sons, New York, 1978.
[33] Y. A. Kuznetsov, Elements of Applied Bifurcation Theory, Springer-Verlag, New York, 1998.
[34] C. R. Laing and C. C. Chow, Stationary bumps in networks of spiking neurons, Neural Comp., 13 (2001), pp. 1473–1493.
[35] C. R. Laing and W. C. Troy, Two-bump solutions of Amari type models of working memory, Phys. D, 178 (2003), pp. 190–218.
[36] C. R. Laing, W. C. Troy, B. Gutkin, and G. B. Ermentrout, Multiple bumps in a neuronal model of working memory, SIAM J. Appl. Math., 63 (2002), pp. 62–97.
[37] C. R. Laing and W. C. Troy, PDE methods for nonlocal models, SIAM J. Appl. Dyn. Syst., 2 (2003), pp. 487–516.
[38] E. K. Miller, C. A. Erickson, and R. Desimone, Neural mechanisms of visual working memory in prefrontal cortex of the macaque, J. Neurosci., 16 (1996), pp. 5154–5167.
[39] N. Morrison, Introduction to Fourier Analysis, Wiley-Interscience, New York, 1994.
[40] J. D. Murray, Mathematical Biology, Springer-Verlag, New York, 2002.
[41] J. G. Nicholls, From Neuron to Brain: A Cellular Molecular Approach to the Function of the Nervous System, Sinauer Associates, Sunderland, MA, 1992.
[42] Y. Nishiura and M. Mimura, Layer oscillations in reaction-diffusion systems, SIAM J. Appl. Math., 49 (1989), pp. 481–514.
[43] E. Part-Enander, A. Sjoberg, B. Melin, and P. Isaksson, The MATLAB Handbook, Addison–Wesley, Reading, MA, 1998.
[44] L. A. Peletier and W. C. Troy, Spatial Patterns: Higher Order Models in Physics and Mechanics, Birkhauser Boston, Boston, 2001.
[45] D. E. Pelinovsky and V. G. Yakhno, Generation of collective-activity structures in a homogeneous neuron-like medium. I. Bifurcation analysis of static structures, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 6 (1996), pp. 81–87, 89–100.
[46] D. J. Pinto and G. B. Ermentrout, Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses, SIAM J. Appl. Math., 62 (2001), pp. 226–243.
[47] A. D. Polianin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, FL, 1998.
[48] D. L. Powers, Boundary Value Problems, Harcourt Academic Press, New York, 1999.
[49] M. Rahman, Complex Variables and Transform Calculus, Computational Mechanics Publications, Boston, 1997.
[50] J. E. Rubin, A nonlocal eigenvalue problem for the stability of a traveling wave in a neuronal medium, Discrete Contin. Dyn. Syst. Ser. A, 4 (2004), pp. 925–940.
[51] J. E. Rubin, D. Terman, and C. C. Chow, Localized bumps of activity sustained by inhibition in a two-layer thalamic network, J. Comp. Neurosci., 10 (2001), pp. 313–331.
[52] J. E. Rubin and W. C. Troy, Sustained spatial patterns of activity in neuronal populations without recurrent excitation, SIAM J. Appl. Math., 64 (2004), pp. 1609–1635.
[53] W. Rudin, Principles of Mathematical Analysis, McGraw–Hill, New York, 1976.
[54] E. Salinas and L. F. Abbott, A model of multiplicative neural responses in parietal cortex, Proc. Natl. Acad. Sci. USA, 93 (1996), pp. 11956–11961.
[55] S. H. Seung, How the brain keeps the eyes still, Proc. Natl. Acad. Sci. USA, 93 (1996), pp. 13339–13344.
[56] S. H. Strogatz, Nonlinear Dynamics and Chaos, Perseus Books, New York, 1994.
[57] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, New York, 1990.
[58] H. R. Wilson and J. D. Cowan, A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue, Kybernetic, 13 (1973), pp. 55–80.
[59] S. Wolfram, The Mathematica Book, 4th ed., Cambridge University Press, Cambridge, UK, 1999.
[60] E. Zeidler, Nonlinear Functional Analysis and Its Applications I: Fixed-Point Theorems, Springer-Verlag, New York, 1986.
[61] L. Zhang, On stability of traveling wave solutions in synaptically coupled neuronal networks, Differential Integral Equations, 16 (2003), pp. 513–536.


POSTER PRESENTATION Open Access

Modulation of thalamocortical relay by basal ganglia in Parkinson's disease and dystonia
Yixin Guo1, Choongseok Park2, Min Rong1, Robert M Worth3, Leonid L Rubchinsky2,4*

From Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011

Two major neurological disorders – Parkinson's disease and dystonia – are believed to involve pathology in the activity of the basal ganglia, a subcortical brain structure whose output nucleus (the internal Globus Pallidus, GPi) projects to the thalamus and modulates thalamocortical relay. While these disorders may ultimately involve different network and cellular pathologies, some pathological physiology may be shared between them, because surgical treatment of both conditions includes surgical lesioning or electrical stimulation of GPi (pallidotomy and GPi DBS). This work compares the thalamocortical relay responses to inhibitory inputs from the internal segment of GPi in Parkinson's disease and in dystonia. Experimental data suggest that both conditions are

marked by stronger oscillatory activity. In dystonia this activity becomes pathologically strong in the theta and alpha bands [1,2], while in Parkinson's disease it is the beta-band activity [3]. The activity itself is patterned in time [4], complicating the computational study of its role. To compare the modulation of thalamocortical relay, we use experimental data recorded from the GPi of human subjects with Parkinson's disease or dystonia and study the difference in the quality of thalamocortical relay in these conditions, following the computational setup presented earlier in [5].

The results of the study of the "hybrid" system (computational model of a TC cell modulated by experimental data) reveal a substantial similarity in the properties of relay in Parkinson's disease and in dystonia. TC relay fidelity is substantially impaired due to the pathological pattern of GPi signals in both conditions. The results are robust with respect to variations of the model details and the types of incoming excitatory synaptic input.

The results suggest that even though the rhythmicities in Parkinson's disease and dystonia are confined to different frequency bands, their effect on the dynamics of downstream circuits is similar. Thus, given the differences in dystonic and parkinsonian symptoms, these results suggest the existence of mechanisms beyond pathological rhythmicity and thalamocortical relay in at least one of the conditions. On the other hand, overlap in some motor deficits of dystonia and Parkinson's disease may be attributed to the existence of similar pathological rhythmicities and the resulting deficiencies of thalamic relay.

Acknowledgements
This study was supported by NIH grant R01NS067200 (NSF/NIH CRCNS program) and Antelo Devereux Award from Drexel University.

Author details
1Department of Mathematics, Drexel University, Philadelphia, PA 19104, USA. 2Department of Mathematical Sciences and Center for Mathematical Biosciences, Indiana University Purdue University Indianapolis, Indianapolis, IN 46202, USA. 3Department of Neurological Surgery, Indiana University School of Medicine, Indianapolis, IN 46202, USA. 4Stark Neurosciences Research Institute, Indiana University School of Medicine, Indianapolis, IN 46202, USA.

Published: 18 July 2011

References
1. Hammond C, Bergman H, Brown P: Pathological synchronization in Parkinson's disease: networks, models and treatments. Trends Neurosci 2007, 30:357-364.
2. Silberstein P, Kühn AA, Kupsch A, Trottenberg T, Krauss JK, Wöhrle JC, Mazzone P, Insola A, Di Lazzaro V, Oliviero A, Aziz T, Brown P: Patterning of globus pallidus local field potentials differs between Parkinson's disease and dystonia. Brain 2003, 126:2597-2608.
3. Starr PA, Rau GM, Davis V, Marks WJ Jr, Ostrem JL, Simmons D, Lindsey N, Turner RS: Spontaneous pallidal neuronal activity in human dystonia: comparison with Parkinson's disease and normal macaque. J Neurophysiol 2005, 93:3165-3176.
4. Park C, Worth RM, Rubchinsky LL: Fine temporal structure of beta oscillations synchronization in subthalamic nucleus in Parkinson's disease. J Neurophysiol 2010, 103:2707-2716.


5. Guo Y, Rubin JE, McIntyre CC, Vitek JL, Terman D: Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model. J Neurophysiol 2008, 99:1477-1492.

doi:10.1186/1471-2202-12-S1-P275
Cite this article as: Guo et al.: Modulation of thalamocortical relay by basal ganglia in Parkinson's disease and dystonia. BMC Neuroscience 2011, 12(Suppl 1):P275.


POSTER PRESENTATION Open Access

Entrainment of a thalamocortical neuron to periodic sensorimotor signals
Dennis Guang Yang*, Yixin Guo

From Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011

In this work we study the dynamics of a 3-dimensional conductance-based model of a single thalamocortical (TC) neuron in response to sensorimotor signals. In particular, we focus on the entrainment of the system to a periodic excitatory signal that alternates between ‘on’ and ‘off’ states lasting for times T1 and T2, respectively. By exploiting invariant sets of the system and their associated invariant fiber bundles that foliate the phase space, we reduce the dynamics to the composition of two 2-dimensional maps, with the two components of one of the maps being simply a uniform shift and a uniform decay. With this reduction in computational complexity, we are able to analyze the model's response to the excitatory signal while varying T1 and T2 systematically. We find that for fixed T2 but different T1 there exist, and in some cases coexist, entrained periodic oscillations with different numbers of spikes (see Figure 1 for the case with T2 = 60 milliseconds). For relatively large T2 (above 55 milliseconds) it is also possible that the model responds to the excitatory signal with delayed

* Correspondence: [email protected]
Department of Mathematics, Drexel University, Philadelphia, PA 19104, USA
Full list of author information is available at the end of the article

Figure 1. The top panel shows the T1 intervals (with T2 = 60 milliseconds) for the existence of different families of entrained periodic oscillations (green for sub-threshold oscillations, red for 1-spike oscillations, dark blue for 2-spike oscillations, and light blue for 2-spike oscillations with a delayed second spike). Panels A, B, C, and D illustrate the periodic excitatory signals with T2 = 60 milliseconds and T1 = 1.014, 5.399, 8.545, and 16.15 milliseconds, respectively. Panels A1, A2, B1, B2, C1, C2, and D1 show the time series plots of the different types of periodic oscillations entrained to the excitatory signals with T1 values indicated in the top panel.


spikes. Furthermore, we find that the size of the T1 intervals that allow coexistence of different types of entrained oscillations becomes larger as T2 increases.

Published: 18 July 2011

Reference
1. Guo Y, Rubin JE, McIntyre CC, Vitek JL, Terman D: Thalamocortical relay fidelity varies across subthalamic nucleus deep brain stimulation protocols in a data-driven computational model. J Neurophysiol 2008, 99:1477-1492.

doi:10.1186/1471-2202-12-S1-P135
Cite this article as: Yang and Guo: Entrainment of a thalamocortical neuron to periodic sensorimotor signals. BMC Neuroscience 2011, 12(Suppl 1):P135.


