
Self-synchronization of networks with a strong kernel of integrate and fire excitatory neurons

ELEONORA CATSIGERAS
Instituto de Matematica
Universidad de la Republica
Av. Herrera y Reissig 565, Montevideo

Abstract: We study the global dynamics of networks of pulse-coupled neurons that are modeled as integrate and fire oscillators. We focus on excitatory networks with a strong kernel. We prove the synchronization of the whole network from any initial state, and find an upper bound on the transients until full synchronization is achieved. The methodology of research is by exact mathematical definitions and statements and by deductive proofs, from standard arguments of the mathematical theory of abstract dynamical systems. We include examples of applications to diverse fields, and also a brief review of other mathematical methods of research on general networks of dynamically interacting units.

Key–Words: Pulse-coupled networks, synchronization, neural networks, integrate and fire oscillators.

1 Introduction

The large-scale dynamics of systems that change in time and are composed of many interacting units emerges from the free dynamics of each unit and from the rules of interaction among them. Such dynamical systems are called, in a general and abstract context, networks. Applications of the mathematical analysis of networks are abundant in diverse fields of Science and Technology (see Section 4).

In particular, some classes of networks come from mathematical dynamical models of the living nervous system, and are called neuronal networks. Among these networks we focus on those composed of integrate and fire neurons (e.g. [4, 7]), which are assumed to be pulse-coupled (e.g. [19]).

Pulse-coupling means that the interactions are instantaneous: they are not produced continuously in time, but at a certain discrete sequence of instants separated by regular or irregular time intervals. The autonomous system decides by itself the instants at which the interactions among the units of the network are produced. In other words, the network self-organizes. In Section 2, we pose the exact statements of the mathematical model under study.

The emergent dynamics of pulse-coupled networks is usually modeled by impulsive differential or integro-differential equations (e.g. [18, 38]). For instance, the spikes of the neurons during the synaptic activity in the living nervous system are mathematically studied through the solutions of impulsive differential equations (e.g. [6]).

The global dynamics of impulsive differential or integro-differential equations is a source of many mathematically open questions, whose answers are mostly unknown, except in particular cases or in low dimensions (e.g. [46, 51]). In general, dynamical systems of coupled units (even if they are not pulse-coupled or modeled by impulsive equations) pose new open problems to Mathematics, which are particularly difficult to solve if the interaction parameters belong to an intermediate range, neither too strong nor too weak (e.g. [49]).

The research in this paper focuses on the mathematical problem of synchronization of pulse-coupled networks of integrate and fire neurons without delay.

The synchronization phenomenon of several identical (or at least similar) dynamical units appears in Physics: for instance, the global synchronization of networks with large complexity was mostly studied for mutually coupled identical oscillators (e.g. [33] and references therein). Those results were applied to study the behavior of Light Controlled Oscillators (LCO) in [35, 36]. LCO systems are used to study diverse biological systems, as for instance the emergent synchronized dynamics of populations of the south-eastern fireflies: a large number of insects flash altogether as the result of the mutual interactions produced by the light impulsive signals among them. Such an experimental result with electronically simulated fireflies was obtained in [36], while [37] proved mathematically the synchronization for two electronic fireflies, which are governed by linear differential equations during the time-intervals between consecutive firings.

WSEAS TRANSACTIONS on MATHEMATICS, Eleonora Catsigeras
E-ISSN: 2224-2880, Issue 7, Volume 12, July 2013

Other results of synchronization for arbitrarily large networks of pulse-coupled units are already known and rigorously proved: for instance, [32] proved the synchronization of large completely connected networks of identical units, with constant positive interactions, and assuming that the evolution between pulses is linear in time. [7] proved it for large completely connected networks of identical units, with non-constant positive interactions and nonlinear dependence on time. [4] proved it for completely connected networks, non-constant positive interactions, and arbitrary dependence on time, provided that a certain stability property holds.

In Theorem 8 of this paper, we generalize the previous results cited above: we prove the synchronization of an arbitrarily large number of dynamical units, governed by impulsive differential equations of any type, provided that the state variable of each neuron is increasing in time (during the time-intervals between consecutive interactive impulses). We assume arbitrary, positive or null interactions, and a network graph that is not necessarily complete, but has a strong kernel with strictly positive weights (Definition 6).

The synchronization of neuronal networks is relevant in the nervous system, not because the whole system synchronizes (it certainly does not), but because some specialized subnetworks synchronize. These latter groups of cells allow a living individual to acquire stable biological rhythms that coexist with other non-synchronized regions of the brain. The existence of stable biological rhythms is essential for life. For instance, the heart pacemaker neurons work in synchrony [27]. Stable partial synchronization is also necessary for the regulation of the information that is generated or processed by the nervous system. This information is not properly chaotic. Namely, it does not necessarily increase with a positive rate forever in the future, but acquires a form of self-controlled structured information [47].

Periodic limit cycles that are not necessarily synchronized orbits also appear in mathematical models of biological neuronal networks that are not all excitatory [8, 7]. Nevertheless, since the periods (and also the finite number of periodic orbits) may be arbitrarily large (e.g. [8]), the observed behaviour during a finite interval of time seems to be irregular. This phenomenon is called virtual chaos in [8], or stable chaos in [34]. The same argument shows that even when the network finally synchronizes, it may appear to exhibit virtual chaos during the transitory times, if these transients last too long.

As said above, in Theorem 8 we prove synchronization of the network. Besides, by Formula (8) we bound from above the transient duration until full synchronization is achieved. Along the proofs of these results we obtain other intermediate results, such as the formation, during the transients, of different patterns of clustered cells that mutually synchronize and do not include all the cells of the network (Lemmas 11 and 12). The proofs assume that the network's dynamics is autonomous and deterministic, the impulses are excitatory (i.e. positive or null), no delay exists, and the cells are integrate and fire oscillators. Nevertheless, if those hypotheses on the model do not hold, synchronization, or at least phase locking, can still be proved. In fact, [30] proved the synchronization of coupled oscillators that are stochastically modeled. Synchronization was also proved in some pulse-coupled networks with delayed interactions [48]. Synchronization between two chaotic dynamical units (the cells are not oscillators) was proved when the interactions are continuous in time, governed by a system of differential equations of fractional order [45]. Finally, in [43] some mutually coupled chaotic circuits were proved to exhibit the so-called inverse lag synchronization, which is, roughly speaking, phase locking at opposite phases.

2 The mathematical model

Along this paper, each neuron i of the network is modeled as an integrate and fire oscillator. This means that its instantaneous state is described by a real variable x_i which has two phases: an integration phase, which is the integral flow that solves an ordinary differential equation, and a firing phase, which is governed by a spiking rule according to which the state of the neuron resets and sends instantaneous actions to the other neurons of the network.

2.1 The integration phase

During the integration phase the variable x_i (if the neuron i were hypothetically isolated from the network) is the solution of an autonomous differential equation with initial condition x_i(0), of the following type:

dx_i/dt = f(x_i)   if L ≤ x_i < U,   L ≤ x_i(0) < U,   (1)

where t ∈ R is time, U > 0 and L < 0 are constants, and f : [L, U] → R^+ is a positive Lipschitz-continuous function. As the domain of f is the compact interval [L, U] and f is continuous, the positive values of f are bounded away from zero. Namely, there exists a constant a > 0 such that

f(x) ≥ a > 0   ∀ x ∈ [L, U].   (2)


Therefore, from the differential equation (1) we obtain that x_i(t) is strictly increasing in time t, and its derivative with respect to t is larger than the positive constant a. In other words, the velocity at which the variable x_i(t) increases is larger than a > 0. Since the initial condition x_i(0) is lower than U, we deduce that there exists a finite time T_i > 0 (which depends on the initial state x_i(0) of the neuron i) such that x_i reaches the upper bound U. Precisely,

x_i(T_i^-) := lim_{t → T_i^-} x_i(t) = U.   (3)

Definition 1 The constant upper bound U > 0 of the state x_i of each neuron is called the threshold level. The lowest constant value L < 0 that x_i can hypothetically take is called the low bound. The model states that the differential equation (1) holds while the state x_i has not arrived at its threshold level and is not smaller than the low bound.
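The integration phase above can be exercised numerically. The following is a minimal sketch, not from the paper: we pick the illustrative choice f(x) = 2 + x with L = -1, U = 1 (positive and Lipschitz on [L, U], so (2) holds with a = 1) and Euler-integrate (1) from x_i(0) = 0 until the state reaches the threshold, obtaining the spiking time T_i of Equality (3).

```python
# Sketch of the integration phase (1) for a single isolated neuron.
# The choice f(x) = 2 + x, L = -1, U = 1 is illustrative, not from the
# paper; any positive Lipschitz-continuous f on [L, U] works.

def spiking_time(f, x0, U, dt=1e-5):
    """Euler-integrate dx/dt = f(x) from x(0) = x0 until x reaches U;
    return the (approximate) spiking time T_i."""
    x, t = x0, 0.0
    while x < U:
        x += f(x) * dt
        t += dt
    return t

U = 1.0
f = lambda x: 2.0 + x           # positive on [L, U] = [-1, 1]: f >= a = 1 > 0
T_i = spiking_time(f, 0.0, U)
# The exact solution of dx/dt = 2 + x, x(0) = 0 is x(t) = 2(e^t - 1),
# so T_i = ln(3/2) ~ 0.405, which the Euler estimate reproduces.
print(round(T_i, 3))
```

Because f is bounded below by a > 0, the loop always terminates, mirroring the argument that x_i reaches U in finite time.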

2.2 The instantaneous firing phase

Definition 2 At any instant T > 0 at which the state x_i arrives at (or is larger than) the threshold level U - in particular, at the first instant T_i > 0 satisfying Equality (3) - we say that the neuron i spikes or fires. Such an instant T is called a spiking instant.

A firing of a neuron, by hypothesis, produces two consequences:

◦ First, at each instant T > 0 for which x_i(T^-) = U, the state x_i of the neuron resets. By changing the origin of the real axis of the variable x_i, if necessary, it is not restrictive to assume that the reset value is zero. Explicitly:

∀ T > 0, if lim_{t → T^-} x_i(t) = U, then x_i(T) = 0.   (4)

In particular, for the first instant T_i > 0 at which the neuron i fires, we have x_i(T_i) = 0.

◦ Second, at each spiking instant T > 0, the neuron i sends an instantaneous action signal A_{i,j} to the other neurons j ≠ i of the network. This model assumes that A_{i,j} is a real number that depends only on i and j, but not on time.

Definition 3 A neuron i is excitatory if A_{i,j} ≥ 0 for all j ≠ i, and it is inhibitory if A_{i,j} ≤ 0 for all j ≠ i. We say that a network N is excitatory if all its neurons are excitatory. Along this paper we only consider excitatory networks. Note that not all the interactions must be strictly positive.

2.3 The interactions

If T > 0 is an instant at which only one neuron j ≠ i spikes, then the state x_i - of the neuron i that receives the action signal A_{j,i} ≥ 0 from the neuron j - suffers a discontinuity jump which is defined by the following rule:

x_i(T) = x_i(T^-) + A_{j,i}   if x_i(T^-) + A_{j,i} < U,
x_i(T) = 0   otherwise.

If many neurons j_1, ..., j_N (all different from i) spike simultaneously at an instant T > 0, then the state x_i of the neuron i suffers a discontinuity jump defined by:

x_i(T) = x_i(T^-) + Σ_{l=1}^{N} A_{j_l,i}   if x_i(T^-) + Σ_{l=1}^{N} A_{j_l,i} < U,   (5)

x_i(T) = 0   otherwise.   (6)
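Rules (5)-(6) translate directly into code. The helper below is an illustrative sketch (its name and the example values are ours, not the paper's):

```python
# Discontinuity-jump rules (5)-(6): at an instant T when the neurons
# j_1, ..., j_N spike simultaneously, a neuron i jumps by the sum of
# the received actions A_{j_l,i}, and resets to 0 if the jump reaches
# the threshold U (the neuron then fires at T as well).

def receive_spikes(x_minus, incoming_actions, U):
    """Return x_i(T) from x_i(T^-) and the received actions."""
    jumped = x_minus + sum(incoming_actions)
    return jumped if jumped < U else 0.0   # reset rule when threshold reached

U = 1.0
x_after = receive_spikes(0.3, [0.2, 0.1], U)   # 0.6 < U: jump without reset
x_reset = receive_spikes(0.8, [0.2, 0.1], U)   # 1.1 >= U: neuron fires, resets
```

The second call illustrates how an incoming pulse can itself trigger a spike, which is the mechanism behind the cascades used later in the proofs of Lemmas 9 and 11.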

2.4 Strong excitatory kernel

Definition 4 The network's graph is defined such that its vertices are the neurons, and its edges are directed and weighted with the value A_{i,j}, for each nonzero action signal A_{i,j} ≠ 0 from the neuron (or vertex) i to the neuron (or vertex) j ≠ i. Note that the network's graph is not necessarily symmetric, i.e. A_{i,j} may differ from A_{j,i}. Also, only one of the two directed edges between i and j may exist (in such a case the action in the opposite direction is zero).

Definition 5 A kernel K of an excitatory network N is, if it exists, a complete subgraph such that A_{i,j} > 0 for all i ∈ K and for all j ∈ N, j ≠ i. In Figure 1 we draw a simple example of a network of six neurons with a kernel of three neurons.

Figure 1: The network N = {1, 2, 3, 4, 5, 6} has a kernel K = {1, 2, 3}.

The neurons that are not in the kernel may have null or positive interactions among them, and from them to the neurons in the kernel.


Definition 6 We say that a kernel K of an excitatory network N is strong if the number k of neurons in K is at least 3 and if the minimum excitatory action from a neuron in K to the other neurons of N is strong enough to satisfy the following inequality:

min{A_{i,j} : i ∈ K, j ∈ N, i ≠ j} ≥ (U − L)/√k,   (7)

where U > 0 is the threshold level and L < 0 the low bound of the states of the neurons (cf. Definition 1).

In the above definition, the interactions A_{i,j}, for i ∈ K, j ∈ N, i ≠ j, need not be strong in an absolute sense. In fact, the number k of neurons in the kernel K may be large enough that Inequality (7) holds for a small value of min{A_{i,j} : i ∈ K, j ∈ N, i ≠ j}. In other words, for arbitrarily small positive interactions A_{i,j}, the kernel K is strong (i.e. it satisfies Inequality (7)) if it has a sufficiently large number of neurons. On the contrary, if the kernel has a small number of neurons (say, for instance, k = 3), it will be strong only if the minimum positive interaction A_{i,j} is large enough to satisfy Inequality (7).
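Condition (7) is straightforward to check computationally. The helper below is an illustrative sketch (its name and the example network are ours, not the paper's):

```python
import math

# Check of Definition 6: a kernel K of an excitatory network is strong
# when it has k >= 3 neurons and every action from a kernel neuron to
# any other neuron satisfies A_ij >= (U - L)/sqrt(k).

def is_strong_kernel(A, kernel, U, L):
    """A: dict (i, j) -> action weight; kernel: set of neuron ids."""
    k = len(kernel)
    if k < 3:
        return False
    neurons = {i for i, _ in A} | {j for _, j in A}
    min_action = min(A.get((i, j), 0.0)
                     for i in kernel for j in neurons if j != i)
    return min_action >= (U - L) / math.sqrt(k)

# Example: a 3-neuron kernel inside a 4-neuron network, U = 1, L = -1.
# (U - L)/sqrt(3) ~ 1.155, so uniform kernel actions of 1.2 suffice.
A = {(i, j): 1.2 for i in {1, 2, 3} for j in {1, 2, 3, 4} if i != j}
print(is_strong_kernel(A, {1, 2, 3}, U=1.0, L=-1.0))
```

Consistently with the remark above, the same check would succeed for a kernel of k = 100 neurons with uniform actions of only 0.2, since (U − L)/√100 = 0.2 for U = 1, L = −1.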

3 Synchronization

3.1 Statement of the main result

Let N be a network composed of m integrate and fire neurons. We call the m-dimensional vector

x(0) = (x_1(0), ..., x_i(0), ..., x_m(0))

the initial state of the network. We denote by

x(t) = (x_1(t), ..., x_i(t), ..., x_m(t))

the state of the network at instant t, and call {x(t)}_{t≥0} the orbit with initial state x(0).

Definition 7 We say that the orbit {x(t)}_{t≥0} is synchronized, or that the network with initial state x(0) is synchronized, if

x_i(t) = x_j(t)   ∀ t ≥ 0,   ∀ i ≠ j.

We say that the orbit - or the network - with initial state x(0) synchronizes after a transitory time T_0, if T_0 ≥ 0 is the minimal non-negative real number such that

x_i(t) = x_j(t)   ∀ t ≥ T_0,   ∀ i ≠ j.

Theorem 8 Let N be an excitatory network with a strong kernel K. Then, from any initial state there exists a transitory time T_0 ≥ 0 (which, in general, depends on the initial state) such that the network synchronizes for all t ≥ T_0. Besides,

T_0 ≤ (U − L)/min{f(x) : L ≤ x ≤ U},   (8)

where U > 0 is the threshold level, L < 0 is the low bound of the state x_i of each neuron i ∈ N, and f is the real function in the second member of the differential equation (1).
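The statement of Theorem 8 and the bound (8) can be exercised numerically. The event-driven sketch below uses illustrative parameters of our own choosing (m = 6 neurons, constant f(x) = a = 0.5, a kernel of 3 neurons with actions 1.2 > (U − L)/√3, so Definition 6 holds): it runs the model of Section 2 from a random initial state and checks that full synchronization occurs no later than (U − L)/min f = 4.

```python
import random

# Illustrative check of Theorem 8 (parameters are ours, not the paper's):
# m linear integrate-and-fire neurons with dx/dt = f(x) = a > 0 and a
# strong kernel; we measure the first instant T0 of a full-network spike.
U, L, a = 1.0, -1.0, 0.5            # threshold, low bound, f(x) = a
m, kernel = 6, {0, 1, 2}            # kernel of k = 3 neurons
A = [[0.0] * m for _ in range(m)]   # action signals A[i][j]
for i in range(m):
    for j in range(m):
        if i != j:
            # 1.2 >= (U - L)/sqrt(3) ~ 1.155, so K is strong (Def. 6);
            # non-kernel neurons send weak excitatory actions.
            A[i][j] = 1.2 if i in kernel else 0.05

random.seed(1)
x = [random.uniform(L, U) for _ in range(m)]   # arbitrary initial state
t = 0.0
while True:
    dt = min((U - xi) / a for xi in x)   # time until the next threshold hit
    t += dt
    x = [xi + a * dt for xi in x]        # integration phase (1): dx/dt = a
    spikers = set()
    new = {i for i in range(m) if x[i] >= U - 1e-12}
    while new:                           # firing cascade, rules (4)-(6)
        for i in new:
            x[i] = 0.0                   # reset rule (4)
        spikers |= new
        nxt = set()
        for i in range(m):
            if i in spikers:
                continue
            x[i] += sum(A[j][i] for j in new)   # jump rule (5)
            if x[i] >= U - 1e-12:
                nxt.add(i)               # the jump reached U: fires too (6)
        new = nxt
    if len(spikers) == m:                # all m neurons spiked together
        break

T0 = t
assert T0 <= (U - L) / a                 # the bound (8): T0 <= 4
print("synchronized at t =", round(T0, 3))
```

In this run the first kernel spike already triggers a cascade that fires the whole network, so T_0 is well below the worst-case bound (8); the bound itself is the time an isolated neuron would need to climb from L to U.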

We start the proof of Theorem 8 with Lemma 9 in the following subsection, and end it in Subsection 3.3.

3.2 Lemmas

Assume that N is an excitatory network with a strong kernel K, as in the hypothesis of Theorem 8. To simplify the notation, in the sequel we agree that i ≠ j whenever we write i, j to denote two neurons of the network N.

Let N be the minimum positive natural number such that

N ≥ (U − L)/min{A_{j,i} : j ∈ K, i ∈ N}.   (9)
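The number N of Inequality (9) is simply a ceiling. A one-line illustrative helper (the name and example values are ours):

```python
import math

# N of Inequality (9): the minimum positive natural number with
# N >= (U - L) / min{A_{j,i} : j in K, i in N}.
def minimal_N(U, L, min_kernel_action):
    return max(1, math.ceil((U - L) / min_kernel_action))

N = minimal_N(1.0, -1.0, 1.2)   # (U - L)/min A = 2/1.2 = 1.66..., so N = 2
```

With U = 1, L = −1 and minimum kernel action 1.2, one gets N = 2: by Lemma 9 below, two simultaneous kernel spikes then force the whole network to spike.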

Lemma 9 In the hypothesis of Theorem 8, if at least N neurons of the kernel K spike simultaneously at some instant T_0 > 0, then the neurons of the whole network spike altogether at instant T_0.

Proof: Arguing by contradiction, let us assume that there exists a neuron i ∈ N that does not spike at instant T_0. We have

x_i(T_0) < U.   (10)

Since L < 0 is the low bound of the state x_i of any neuron, we know

L ≤ x_i(T_0^-) < U,

where x_i(T_0^-) denotes the limit as t → T_0^- of the state x_i(t), which is governed by the differential equation (1). By hypothesis, there exist at least N neurons j_1, j_2, ..., j_N of the kernel K spiking at instant T_0. Thus, these neurons j_l send actions A_{j_l,i} to the neuron i. Since the whole network is excitatory, we have A_{j,i} ≥ 0 for all j ≠ i. In particular, for all j ∈ J_{T_0} (where J_{T_0} denotes the subset of all the neurons that spike at instant T_0), we have A_{j,i} ≥ 0. Applying Equality (5), we obtain

x_i(T_0) = x_i(T_0^-) + Σ_{j ∈ J_{T_0}} A_{j,i} ≥ x_i(T_0^-) + Σ_{l=1}^{N} A_{j_l,i}.

From Definition 5, A_{j_l,i} > 0 for all j_l ∈ K, ∀ i ∈ N. Besides, x_i(T_0^-) ≥ L. Thus, we obtain

x_i(T_0) ≥ L + N · min{A_{j,i} : j ∈ K, i ∈ N}.

Taking into account the definition of the natural number N as the minimum N ≥ 1 that satisfies Inequality (9), we deduce

x_i(T_0) ≥ L + [(U − L)/min{A_{j,i} : j ∈ K, i ∈ N}] · min{A_{j,i} : j ∈ K, i ∈ N} = L + (U − L) = U.

Therefore, we deduce that x_i(T_0) ≥ U, which contradicts Inequality (10) and ends the proof of Lemma 9. □

Lemma 10 There exists a strictly increasing sequence {t_n}_{n≥1} of instants t_n > 0 such that:

(i) For all n ≥ 1, at least one neuron of the kernel K spikes at instant t_n.

(ii) No neuron of the kernel K spikes during the time intervals (t_n, t_{n+1}) for all n ≥ 1, nor during [0, t_1).

Proof: Such a sequence {t_n}_{n≥1} exists due to Equality (3). In fact, denote t_0 = 0. Since any neuron i (and in particular any neuron of the kernel) has a minimal spiking instant T_i > 0, then, from any instant t_n ≥ 0, there always exists a first next instant t_{n+1} > t_n at which at least one neuron of the kernel spikes. Since t_{n+1} can be chosen minimal with such a property, no neuron of the kernel spikes during the time intervals [0, t_1) and (t_n, t_{n+1}). □

Lemma 11 In the hypothesis of Theorem 8, for each initial state there exists a minimal instant T_0 > 0 such that at least N neurons of the kernel K spike simultaneously at T_0. Moreover, T_0 = t_{n_0} for some 1 ≤ n_0 ≤ N, where N ≥ 1 is the minimum positive natural number that satisfies Inequality (9) and {t_n}_{n≥1} is the strictly increasing sequence of Lemma 10.

Proof: To prove Lemma 11, it is enough to prove the following assertion:

Assertion (A) to be proved: There exists a minimal positive natural number n_0 ≥ 1 such that at instant t_{n_0} at least N neurons of the kernel K spike simultaneously.

In fact, if we prove Assertion (A), then we deduce Lemma 11 by defining T_0 := t_{n_0}.

Consider the instant t_1. By Lemma 10, which asserts the existence of the sequence {t_n}_{n≥1}, there exists at least one neuron, say i_1 ∈ K, that spikes at instant t_1. Thus, if N = 1, Assertion (A) holds trivially taking n_0 = 1. If N ≥ 2, then either there exist at least N neurons in K that spike at instant t_1 (jointly with i_1), or at most N − 1 neurons do so. In the first case, Assertion (A) holds for n_0 = 1, and there is nothing more to prove. In the second case, denote by I_{t_1} the subset of neurons in N that spike at instant t_1, and by K ∩ I_{t_1} the subset of neurons in the kernel K that spike at instant t_1. We are assuming that

#(K ∩ I_{t_1}) ≤ N − 1,   (11)

where # denotes "the number of elements of". By hypothesis K is strong, so it satisfies Definition 6. Thus, k = #K ≥ 3 satisfies Inequality (7). We have:

√k > (U − L)/min{A_{i,j} : i ∈ K, j ∈ N}.   (12)

Thus, recalling that N ≥ 1 is the minimum natural number that satisfies Inequality (9), we deduce

N + 1 < (U − L)/min{A_{i,j} : i ∈ K, j ∈ N}.

Joining with Inequality (12), we obtain

√k > N + 1,   k > (N + 1)².   (13)

If we denote by #(K \ I_{t_1}) the number of cells in the kernel K that do not spike at instant t_1, we have

#(K \ I_{t_1}) = #(K) − #(K ∩ I_{t_1}) = k − #(K ∩ I_{t_1}) ≥ (N + 1)² − #(K ∩ I_{t_1}).   (14)

Joining Inequalities (11) and (14) we obtain:

#(K \ I_{t_1}) ≥ (N + 1)² − (N − 1) > N² − (N − 1).   (15)

Since at least the neuron i_1 spikes at time t_1, and the network is excitatory, we deduce - from Equality (5) applied to any neuron j ∈ K \ I_{t_1} - the following property:

U > x_j(t_1) = x_j(t_1^-) + Σ_{i ∈ I_{t_1}} A_{i,j} ≥ L + A_{i_1,j} ≥ L + min{A_{i,j} : i ∈ K, j ∈ N}.

Applying Inequality (9), we obtain:

U > x_j(t_1) ≥ L + (U − L)/N   ∀ j ∈ K \ I_{t_1}.   (16)

Now, we will generalize Inequalities (15) and (16) to other instants t_h with h ≥ 1, as follows:

Assertion (B): If at the instants t_1, ..., t_l, ..., t_h (for some h ≥ 1) at most N − 1 cells of the kernel K spike, then (denoting by I_{t_l} the subset of all the cells that spike at instant t_l) the following inequalities hold:

#(K \ ∪_{l=1}^{h} I_{t_l}) > N² − h(N − 1).   (17)

U > x_j(t_h) ≥ L + (U − L)h/N   ∀ j ∈ K \ ∪_{l=1}^{h} I_{t_l}.   (18)

Let us prove Assertion (B) by induction on the natural number h ≥ 1. In (15) and (16), we have proved (B) for h = 1. Assume that (17) and (18) hold for some h ≥ 1, and let us prove them for h + 1. Since at most N − 1 cells of the kernel K spike at the instant t_l for all 1 ≤ l ≤ h + 1, then in particular for l = h + 1 we obtain:

#(K ∩ I_{t_{h+1}}) ≤ N − 1.

Therefore, using Inequality (17) for h, we obtain:

#(K \ ∪_{l=1}^{h+1} I_{t_l}) = #(K \ ∪_{l=1}^{h} I_{t_l}) − #(K ∩ I_{t_{h+1}}) > N² − h(N − 1) − #(K ∩ I_{t_{h+1}}) ≥ N² − h(N − 1) − (N − 1) = N² − (h + 1)(N − 1).

Thus, we have proved Inequality (17) for h + 1. Now, let us prove Inequality (18) for h + 1, assuming that it holds for h. Let us fix any neuron

j ∈ K \ ∪_{l=1}^{h+1} I_{t_l} ⊂ K \ ∪_{l=1}^{h} I_{t_l}.

Since the neuron j has not spiked at the instants t_1, ..., t_h, t_{h+1}, its variable x_j(t) has increased with t from the instant t_1 up to the instant t_{h+1}, and is smaller than the threshold level U. Therefore, applying Inequality (18) for h, we get:

x_j(t_{h+1}^-) ≥ x_j(t_h) ≥ L + (U − L)h/N.   (19)

By Lemma 10, at instant t_{h+1} at least one cell, say i_{h+1} ∈ K, spikes. Then, applying the action rule (5), we obtain:

U > x_j(t_{h+1}) = x_j(t_{h+1}^-) + Σ_{i ∈ I_{t_{h+1}}} A_{i,j} ≥ x_j(t_{h+1}^-) + A_{i_{h+1},j}.

Joining with Inequality (19), we deduce:

U > x_j(t_{h+1}) ≥ L + (U − L)h/N + A_{i_{h+1},j} ≥ L + (U − L)h/N + min{A_{i,j} : i ∈ K, j ∈ N}.

Substituting the last term in the above inequality by the expression obtained from Inequality (9), we get

U > x_j(t_{h+1}) ≥ L + (U − L)h/N + (U − L)/N = L + (U − L)(h + 1)/N.

This is Inequality (18) for h + 1, ending the proof of Assertion (B).

Now, let us show that Assertion (B) implies Assertion (A). Arguing by contradiction, let us suppose that for all 1 ≤ h ≤ N, at most N − 1 cells spike at instant t_h. Then, we can apply Assertion (B) with h = N. From Inequality (17) we obtain:

#(K \ ∪_{l=1}^{N} I_{t_l}) > N² − N(N − 1) = N ≥ 1.

So, there exist more than N neurons that have not spiked at the instants t_1, t_2, ..., t_N (including instant t_N). From Inequality (18) with h = N, we have:

U > x_j(t_N) ≥ L + (U − L)N/N = L + (U − L) = U   ∀ j ∈ K \ ∪_{l=1}^{N} I_{t_l}.

We conclude that more than N neurons, which we call j, do not spike at instant t_N, but nevertheless the values x_j(t_N) of their respective state variables at instant t_N are larger than or equal to the threshold level U. This contradicts Definition 2 of the spiking instants. We have proved Assertion (A), which implies Lemma 11, as wanted. □


Lemma 12 In the hypothesis of Theorem 8, let N ≥ 1 be the minimal positive natural number that satisfies Inequality (9). If T_0 > 0 is the first instant such that at least N neurons of the kernel K spike simultaneously, then there exists at least one neuron i_0 ∈ K that does not spike during the time interval [0, T_0) (excluding its right endpoint T_0).

Proof: Let {t_n}_{n≥1} be the increasing sequence of instants of Lemma 10. Applying Lemma 11,

T_0 = t_{n_0}   for some 1 ≤ n_0 ≤ N.

If n_0 = 1, then by condition (ii) of Lemma 10, no spike occurs during the time interval [0, t_1) = [0, t_{n_0}) = [0, T_0). So, in this case Lemma 12 holds trivially. If n_0 ≥ 2, then we take into account that T_0 = t_{n_0} is the first instant at which at least N neurons of K spike. In other words, at any of the instants t_1, ..., t_{n_0−1} at most N − 1 neurons of the kernel K spike, and besides, from condition (ii) of Lemma 10, no neuron spikes at the intermediate times t ∈ (t_n, t_{n+1}). We conclude that:

Assertion (C): There exist at most

(N − 1)(n_0 − 1) < N · n_0 ≤ N²

neurons of the kernel K that spike during the time interval [0, t_{n_0}) = [0, T_0).

From the definition of N as the minimum positive natural number that satisfies Inequality (9), we deduce

(U − L)/min{A_{i,j} : i ∈ K, j ∈ N} ≥ N + 1.   (20)

By hypothesis, Inequality (7) holds. Combining Inequalities (7) and (20), we obtain:

√k ≥ N + 1 > N,   k > N²,

where k is the number of neurons of the kernel K. Then, from Assertion (C), we deduce that fewer than k neurons of K spike during the time interval [0, T_0). But k = #K. We conclude that there exists at least one neuron of the kernel K that does not spike during such an interval, ending the proof of Lemma 12. □

3.3 End of the Proof of Theorem 8

Proof of Theorem 8: Consider the minimal positive natural number N ≥ 1 that satisfies Inequality (9). From Lemma 11, for any initial state of the network there exists a first instant T_0 ≥ 0 (which may depend on the initial state) such that at least N neurons of the kernel K spike simultaneously at T_0. Applying now Lemma 9, all the neurons of the network spike simultaneously at T_0. Thus, at instant T_0, from the reset rule (4) we have

x_i(T_0) = 0   ∀ i ∈ N.

By a translation of the origin of the time axis to T_0, the new initial state of each neuron i is zero. By hypothesis, all the cells are identical: i.e. they satisfy (during their integration periods) the same differential equation (1). Since at instant T_0 all of them have the same state equal to zero, and from T_0 they are all governed by the same differential equation, we deduce that all of them arrive together at the threshold level U, at an instant T_1 > T_0. Thus, all the neurons of the network spike together (again) at time T_1 > T_0; and besides,

x_i(t) = x_j(t)   ∀ T_0 ≤ t < T_1,   ∀ i ≠ j.

Applying again the reset rule (4) at instant T_1, we obtain

x_i(T_1) = 0   ∀ i ∈ N.

Arguing by induction, if all the cells spike simultaneously at an instant T_n, then from instant T_n and until they arrive again at the threshold level U at instant T_{n+1} > T_n, we have:

x_i(t) = x_j(t)   ∀ T_n ≤ t < T_{n+1},   ∀ i ≠ j.   (21)

We assert that

lim_{n → +∞} T_n = +∞.   (22)

In fact, T_{n+1} − T_n is the constant time that any neuron i (whose state at instant T_n is x_i(T_n) = 0) takes to arrive at the threshold level U > 0, being governed uniquely by the differential equation (1). Since the instantaneous velocity at which x_i(t) increases with t is dx_i/dt, we obtain:

T_{n+1} − T_n ≥ [x_i(T_{n+1}^-) − x_i(T_n)] / max{dx_i(t)/dt : T_n ≤ t ≤ T_{n+1}} = (U − 0)/max{f(x) : 0 ≤ x ≤ U} = b > 0,

where b is a constant. Therefore T_{n+1} ≥ T_n + b for all n ≥ 0, which implies T_n ≥ nb → +∞ when n → +∞. This proves Equality (22), as wanted.


Since Equality (21) holds for all n ≥ 0, and T_n → +∞, we deduce that

x_i(t) = x_j(t)   ∀ t ≥ T_0,   ∀ i ≠ j.

So, by Definition 7, the network synchronizes for all t ≥ T_0, proving the first part of Theorem 8.

Now, to end the proof of Theorem 8, it only remains to prove Inequality (8), which bounds from above the transitory time T_0. As proved in the first part above, the network synchronizes (at the latest) at the first instant T_0 ≥ 0 at which at least N neurons of the kernel K spike simultaneously. Applying Lemma 12, there exists a neuron i_0 ∈ K that does not spike during the time interval [0, T_0). But, from Lemma 9, all the neurons of the network N spike at instant T_0. Then, T_0 equals the waiting time until the neuron i_0 spikes for the first time. Therefore, during the time interval [0, T_0) the state x_{i_0}(t) of the neuron i_0 is strictly increasing in time, governed by the differential equation (1) plus the positive discontinuity jumps that are produced on the state of i_0 by the actions that come from other neurons j ≠ i_0. Thus, the time T_0 is not larger than the time T_0* that the variable x_{i_0} would take to arrive from the lowest possible level L < 0 at instant 0 to the highest possible level U > 0, if it were only governed by the differential equation (1):

T_0 ≤ T_0*.

So, let us compute T_0* as if x_{i_0} were only governed by (1). Applying the mean value theorem of the derivative, there exists τ ∈ [0, T_0*] such that

(dx_{i_0}/dt)|_{t=τ} = (U − L)/(T_0* − 0),

from which

T_0* = (U − L)/(dx_{i_0}/dt)|_{t=τ} = (U − L)/f(x_{i_0}(τ)) ≤ (U − L)/min{f(x) : L ≤ x ≤ U}.

We deduce that

T_0 ≤ T_0* ≤ (U − L)/min{f(x) : L ≤ x ≤ U},

which is Inequality (8), as wanted, ending the proof of Theorem 8. □

4 Applications and other methods of research

Diverse mathematical tools are used to analyze the emergent global dynamics of networks of interacting units. For instance, the Linear Matrix Inequality (LMI) approach is used to study the stability and Lyapunov exponents in applications to Control Engineering, Communications, Manufacturing and Management [20, 23], and in particular to prove synchronization of artificial neuronal networks [28].

Classical arguments in the theory of differential equations and abstract dynamical systems are used for networks with low dimensions, mostly those that are continuously coupled in time. For instance, phase locking (instead of full synchronization) is proved in the emergent dynamics of low-dimensional networks of interacting predator-prey communities [42, 13]. The dynamics of populations is also modeled by impulsive differential equations, as a pulse-coupled network, in some examples of vaccination and biological control (see for instance [24] and references therein). On small networks, equilibrium states, periodic orbits and synchronized periodic orbits are searched for in the emergent dynamics of a network composed of two interacting populations in a social system [41].

Another mathematical approach to study the emergent behaviour of finite networks of interacting units is provided by Game Theory. Finite games are also studied as networks (e.g. [31]). In contrast to the approach given by the Theory of Dynamical Systems, Game Theory usually models the network taking into account the existence of different strategies according to which the cells interact dynamically (e.g. [16]). The excitatory interaction among the cells in a neuronal network works similarly to the so-called imitative strategy in the theory of games. Combining arguments of Dynamics and Game Theory, the so-called evolutive games, under an imitative strategy hypothesis, are applied to study a social network of firms and workers [1]. The synchronization of coupled dynamical units appears also in the research on economic cycles [44]. Classical arguments of Dynamics, under the hypotheses of Game Theory, are applied to study a nonlinear network composed of two interacting competitors with delay times in the insurance market [50].

WSEAS TRANSACTIONS on MATHEMATICS Eleonora Catsigeras

E-ISSN: 2224-2880 793 Issue 7, Volume 12, July 2013

Artificial Neural Networks (ANNs; see for instance [10] and the references therein) are a methodology widely used in applications, mainly to simulate and predict the behaviour of physical or social systems of large complexity, and for the engineering design of artificial intelligence and data processing. Research with ANNs usually applies a combination of different mathematical tools, from Dynamics, Probability Theory and Statistics to Numerical Analysis, Optimization and computational algorithms. The purpose of an ANN is, in general, to process the information it receives from the neurons in the so-called input layer and to provide results through the neurons of the so-called output layer, by means of a black box composed of the neurons of one or more hidden layers. The system is controlled by diverse feed-back or feed-forward couplings among the layers. A sequential learning process is produced by the ANN through the accumulation of the previous dynamical experience that the information data provide. The main characteristic of an ANN that allows it to work as a learning machine, and makes it useful in applications, is its self-organizing capability. This ability is governed by the so-called self-organizing maps (SOM), which are functional representations. Among the self-organizing features, some subsets of neurons in the network may self-synchronize, or at least periodically self-synchronize.
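One training step of a self-organizing map, the SOM technique referred to above, can be sketched as follows: each input vector selects its best-matching unit on the grid, and that unit and its grid neighbours are pulled toward the input. The grid size, learning rate, and neighbourhood radius below are assumptions chosen for illustration.

```python
# Minimal sketch of one SOM training step: find the best-matching unit
# (BMU) for an input vector, then move the BMU and its grid neighbours
# toward the input, weighted by a Gaussian of grid distance to the BMU.
import math

def som_step(weights, x, lr=0.5, radius=1.0):
    """weights: dict mapping grid position (i, j) -> weight vector (list)."""
    # 1. Best-matching unit: grid node whose weight vector is closest to x.
    bmu = min(weights, key=lambda pos: sum((w - xi) ** 2
              for w, xi in zip(weights[pos], x)))
    # 2. Update every node toward x, with influence decaying in grid distance.
    for pos, w in weights.items():
        grid_dist2 = (pos[0] - bmu[0]) ** 2 + (pos[1] - bmu[1]) ** 2
        influence = math.exp(-grid_dist2 / (2 * radius ** 2))
        weights[pos] = [wi + lr * influence * (xi - wi)
                        for wi, xi in zip(w, x)]
    return bmu

# Tiny 2x2 map with 2-dimensional weight vectors.
som = {(0, 0): [0.0, 0.0], (0, 1): [0.0, 1.0],
       (1, 0): [1.0, 0.0], (1, 1): [1.0, 1.0]}
bmu = som_step(som, [0.9, 0.1])
print("best-matching unit:", bmu)   # closest initial weight is (1, 0)
```

Iterating this step over a data set, while shrinking the learning rate and radius, is what lets the map self-organize into a topology-preserving representation of the inputs.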

For instance, the self-synchronization of groups of an ANN is applied to model a network of the medieval society [5]. Also, the SOM technique is applied in the research on the market of equity funds [9]. Different architectures of ANNs and SOM are applied in default prediction for surety bonding in Construction Engineering [2], in meteorological predictions of the wind speed [14], in the optimization of the compression ratio and other criteria for image processing [21, 3], in the study of dataset clustering in Gastroenterology [11], to simulate and detect earnings management in accounting and financial applications [17], etc. A particularly flexible application of ANNs is the Field-Programmable Gate Array (FPGA): the design of integrated-circuit-based ANNs such that the architecture of the network, and the self-organizing mappings that allow the network to work as a learning machine, are programmable.

5 Final comments

In Neuroscience, the formation of groups of neurons into synchronized clusters is based on the mutual synaptic connections that are modeled as a bio-electrical circuit [15]. In general, in electrical circuits synchronization provides security and stability of the system, and in the case of data processing systems, it provides sufficient reliability [40].

Nevertheless, a synchronized orbit (and also any other periodic limit orbit) is usually not obtained for free. If it attracts all or most orbits (as we have proved in Theorem 8), synchronization implies a sacrifice in the theoretical optimum amount of information that the system processes or memorizes relative to the number of neurons used for the task. In fact, according to the abstract mathematical definition of entropy in Ergodic Theory (e.g. [22]), if the global attractor is periodic, the system has zero entropy. In spite of this mathematical property, null entropy only means that the limit rate of increase of information is zero as time goes to infinity. But the system may still be able to process or memorize a positive finite amount of information while its transients last. In fact, by Formula (8) we find a bound from above of the transitory time T0. The bound works for any initial state, but the transitory time, and the patterns of clustered neurons that are formed during the transitory time intervals, depend on the initial state of the network (Lemmas 11 and 12).
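The zero-entropy observation can be made explicit by a standard computation (not one of the paper's numbered formulas): if the global attractor is a single periodic orbit, then for every small $\varepsilon$ the number $N(n,\varepsilon)$ of $(n,\varepsilon)$-separated orbit segments is bounded by a constant $C(\varepsilon)$ independent of $n$, so

```latex
h_{\mathrm{top}} \;=\; \lim_{\varepsilon\to 0}\;\limsup_{n\to\infty}\;
\frac{1}{n}\,\log N(n,\varepsilon)
\;\le\; \lim_{\varepsilon\to 0}\;\limsup_{n\to\infty}\;
\frac{\log C(\varepsilon)}{n} \;=\; 0.
```

By the variational principle, the measure-theoretic entropy of every invariant measure is bounded above by $h_{\mathrm{top}}$, hence it also vanishes.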

As a consequence, if the network N under observation is part of a larger network, and if this larger network sends signals to N, then it modifies the instantaneous state of one or more of the cells in N. Thus, N will generically lose its synchronization as a consequence of the signals it receives from its exterior. After the loss of synchrony, and while the transitory time T0 lasts, the network N will show a specific pattern in the sequence {t_n}_{n≥1} of spiking instants and in the clusters (or subsets) of spiking neurons (Lemmas 10 and 12). This pattern depends on the stimulus that N received from the external macro-network. Since the system is assumed to be deterministic, each external stimulus provokes the reproduction (in N) of its transitory pattern. The transitory time interval T0 until the network recovers its synchronization, and the pattern it shows, depend on the received stimulus, but may last much longer than the stimulus itself. In fact, the (optimum) upper bound of the transitory time in Inequality (8) may be very large for certain choices of the parameters of the network N.

Finally, we notice that the synchronization result would be false in general if the model of each neuron (as an integrate and fire oscillator) were not one-dimensional. Many neurons in the nervous system are modeled by multi-dimensional differential equations (see for instance [39]), and not all the asymptotic dynamics of a model of a biological neuron in a multi-dimensional setting are globally stable [12]. In fact, the one-dimensional phase-amplitude description of some neurons has been recently proved to be inadequate, in some cases, for the analysis of rhythms and oscillations [25]. If the limit set of the dynamics of a neuron i has (for instance) more than one limit cycle, then a perturbation of the state of i, for example by the action of other neurons in the network, may move the state from the basin of attraction of one limit cycle to the basin of a different limit cycle. The resulting dynamics may be chaotic, rather than periodic [26].
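The one-dimensional pulse-coupled dynamics discussed throughout the paper can be sketched numerically. The following is a minimal illustration in the spirit of the Mirollo-Strogatz / Bottani setting ([32, 4]), not the paper's exact model: leaky integrate-and-fire states drift concavely toward a threshold; a firing neuron resets to 0 and instantaneously increments every other state, and neurons pushed over the threshold fire in the same avalanche. All parameter values are assumptions.

```python
# Minimal sketch (not the paper's exact model) of excitatory pulse-coupled
# leaky integrate-and-fire oscillators: dv/dt = current - leak*v between
# spikes; on firing, v resets to 0 and every other state jumps by eps.

def fire_and_reset(states, threshold=1.0, eps=0.2):
    """Process all spikes at this instant, including avalanche effects."""
    fired = set()
    new = {i for i, v in enumerate(states) if v >= threshold}
    while new:                       # spikes may trigger further spikes
        fired |= new
        pulse = eps * len(new)       # each newly fired neuron adds eps
        for i in range(len(states)):
            if i not in fired:
                states[i] += pulse
        new = {i for i, v in enumerate(states)
               if i not in fired and v >= threshold}
    for i in fired:
        states[i] = 0.0              # simultaneous reset: absorbed neurons
    return fired                     # stay synchronized forever after

def simulate(states, t_end=20.0, dt=0.001, current=2.0, leak=1.0):
    """Euler integration of the free (concave) drift plus spike handling."""
    for _ in range(int(t_end / dt)):
        for i in range(len(states)):
            states[i] += dt * (current - leak * states[i])
        fire_and_reset(states)
    return states

v = simulate([0.0, 0.5])
print("final states:", v)   # the two oscillators end up synchronized
```

The concavity of the free trajectory is essential here: a pulse of fixed size eps advances the phase more when the state is close to the threshold, which progressively shrinks phase differences until one neuron's spike absorbs the other into a joint avalanche; with purely linear drift the phase gap would be preserved instead.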

Acknowledgements: The author thanks the partial support by Agencia Nacional de Investigacion e Innovacion (ANII) and Comision Sectorial de Investigacion Cientifica (CSIC) of the Universidad de la Republica, both institutions in Uruguay.


References:

[1] E. Accinelli, S. London, and E. Sanchez Carrera, A Model of Imitative Behavior in the Population of Firms and Workers, Quaderni del Dipartimento di Economia Politica 554, 2009, University of Siena.

[2] A. Awad and A. Fayek, Adaptive learning of contractor default prediction model for surety bonding, Journal of Construction Engineering and Management 139, to appear, 2013, doi: 10.1061/(ASCE)CO.1943-7862.0000639

[3] P. T. Bharathi and P. Subashini, Optimization of image processing techniques using Neural Networks: A Review, WSEAS Trans. Information Sci. and Appl. 8, 2011, pp. 300–328.

[4] S. Bottani, Synchronization of integrate and fire oscillators with global coupling, Physical Review E 54, 1996, pp. 2334–2350, doi: 10.1103/PhysRevE.54.2334

[5] R. Boulet, B. Jouve, F. Rossi, N. Villa, Batch kernel SOM and related Laplacian methods for social network analysis, Neurocomputing 71, 2008, pp. 1257–1273, doi: 10.1016/j.neucom.2007.12.026

[6] A. J. Catlla, D. G. Schaeffer, T. P. Witelski, E. E. Monson, A. L. Lin, On spiking models for synaptic activity and impulsive differential equations, SIAM Review 50, 2008, pp. 553–569.

[7] E. Catsigeras, P. Guiraud, Integrate and Fire Neural Networks, Piecewise Contractive Maps and Limit Cycles, Journ. Math. Biology 66, to appear, 2013, doi: 10.1007/s00285-012-0560-7

[8] B. Cessac, A discrete time neural network model with spiking neurons. Rigorous results on the spontaneous dynamics, Journ. Math. Biology 56, 2008, pp. 311–345.

[9] J. H. Chen, C. F. Huang and A. P. Chen, Applying Self-Organizing Mapping Neural Network for Discovery Market Behavior of Equity Fund, WSEAS Trans. on Information Sci. and Appl. 7, 2010, pp. 231–240

[10] M. Cottrell, M. Olteanu, F. Rossi, J. Rynkiewicz, N. Villa-Vialaneix, Neural Networks for Complex Data, Künstliche Intelligenz 26, 2012, pp. 373–380

[11] A. Dahabiah, J. Puentes, and B. Solaiman, Gastroenterology Dataset Clustering Using Possibilistic Kohonen Maps, WSEAS Trans. on Information Sci. and Appl. 7, 2010, pp. 508–521

[12] G. B. Ermentrout, D. H. Terman, Mathematical Foundations of Neuroscience, Springer, 2010

[13] J. Feng, L. Zhu, H. Wang, Stability of Ecosystem induced by mutual interference between predators, Procedia Environmental Sciences 2, 2010, pp. 42–48

[14] P. M. Fonte, G. Xufre Silva and J. C. Quadrado, Wind Speed Prediction using Artificial Neural Networks. In the book: Proceedings of the 6th WSEAS Int. Conf. on Neural Networks, WSEAS Press, Lisbon, 2005, pp. 134–139

[15] A. Fukasawa and Y. Takizawa, Activity of a Neuron and Formulation of a Neural Group based on Mutual Injection in keeping with System Synchronization. In the book: Latest Trends in Circuits, Automatic Control and Signal Processing, E. Catsigeras (editor), WSEAS Press, Barcelona, 2012, pp. 53–58

[16] R. Golman, Why learning doesn't add up: equilibrium selection with a composition of learning rules, Int. Journ. Game Theory 40, 2011, pp. 719–733, doi: 10.1007/s00182-010-0265-3

[17] H. Hoglund, Detecting Earnings Management Using Neural Networks, Doctoral Thesis, Hanken School of Economics, Economics and Society Series 121, Edita Prima Ltd., Helsinki, 2010, https://helda.helsinki.fi/handle/10227/742

[18] E. M. Izhikevich, Weakly Pulse-Coupled Oscillators, FM Interactions, Synchronization, and Oscillatory Associative Memory, IEEE Trans. on Neural Networks 10, 1999, pp. 508–526.

[19] E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, MIT Press, 2007

[20] F. Jiang, J. Shen, X. Li, The LMI method for stationary oscillation of interval neural networks with three neuron activations under impulsive effects, Nonlinear Analysis: Real World Applications 14, 2013, pp. 1404–1416, doi: 10.1016/j.nonrwa.2012.10.004

[21] A. Khashman, K. Dimililer, Image Compression using Neural Networks and Haar Wavelet, WSEAS Transactions on Signal Processing 4, 2008, pp. 330–339

[22] J. L. F. King, Entropy in ergodic theory. In the book: Mathematics of Complexity and Dynamical Systems, R. A. Meyers (editor), pp. 205–223, Springer, New York, 2012, doi: 10.1007/978-1-4614-1806-1

[23] N. Krivulin, Evaluation of the Lyapunov Exponent for Stochastic Dynamical Systems with Event Synchronization. In the book: Recent Researches in Circuits, Systems, Multimedia and Automatic Control, V. Niola, Z. Bojkovic and M. I. Garcia-Planas (editors), pp. 152–157, WSEAS Press, Rovaniemi, 2012

[24] C. Li, Dynamics of Stage-structured Population Models with Harvesting Pulses, WSEAS Transactions on Math. 11, 2012, pp. 74–82


[25] K. K. Lin, K. C. A. Wedgwood, S. Coombes and L.-S. Young, Limitations of perturbative techniques in the analysis of rhythms and oscillations, Journ. of Math. Biology 66, 2013, pp. 139–161

[26] K. K. Lin, L.-S. Young, Shear-induced chaos, Nonlinearity 21, 2008, pp. 899–922.

[27] M. J. Lopez, A. Consegliere, J. Lorenzo and L. Garcia, Computer Simulation and Method for Heart Rhythm Control Based on ECG Signal Reference Tracking, WSEAS Trans. on Systems 9, 2010, pp. 263–272

[28] J. G. Lu, G. Chen, Global asymptotical synchronization of chaotic neural networks by output feedback impulsive control: An LMI approach, Chaos, Solitons and Fractals 41, 2009, pp. 2293–2300

[29] L. P. Maguire, T. M. McGinnity, B. Glackin, A. Ghani, A. Belatreche and J. Harkin, Challenges for large-scale implementations of spiking neural networks on FPGAs, Neurocomputing 71, 2007, pp. 13–29

[30] G. S. Medvedev, Synchronization of coupled stochastic limit cycle oscillators, Physics Letters A 374, 2010, pp. 1712–1720.

[31] I. Milchtaich, Representation of finite games as network congestion games, Int. Journ. Game Theory 43, to appear, 2013, doi: 10.1007/s00182-012-0363-5

[32] R. E. Mirollo and S. H. Strogatz, Synchronization of pulse-coupled biological oscillators, SIAM J. Appl. Math. 50, 1990, pp. 1645–1662.

[33] L. M. Pecora, Synchronization of oscillators in complex networks, Journal of Physics 70, 2008, pp. 1175–1198

[34] A. Politi and A. Torcini, Stable chaos. In the book: Nonlinear Dynamics and Chaos: Advances and Perspectives - Understanding Complex Systems, M. Thiel, J. Kurths, M. C. Romano, G. Karolyi and A. Moura (editors), Springer, 2010

[35] G. M. Ramirez Avila, J. L. Guisset, J. L. Deneubourg, Synchronization in light-controlled oscillators, Physica D 182, 2003, pp. 254–273

[36] N. Rubido, C. Cabeza, A. C. Marti, and G. M. Ramirez Avila, Experimental results on synchronization times and stable states in locally coupled light-controlled oscillators, Phil. Trans. Royal Soc. A 367, 2009, pp. 3267–3280, doi: 10.1098/rsta.2009.0085

[37] N. Rubido, C. Cabeza, S. Kahan, G. M. Ramirez Avila, and A. C. Marti, Synchronization regions of two pulse-coupled electronic piecewise linear oscillators, Europ. Phys. Journ. D 62, 2011, pp. 51–56, doi: 10.1140/epjd/e2010-00215-4

[38] G. T. Stamov, I. Stamova, Almost periodic solutions for impulsive neural networks with delay, Applied Math. Modelling 31, 2007, pp. 1263–1270

[39] M. Stork, Bursting Oscillations of Neurons and Synchronization. In the book: Latest Trends in Circuits, Automatic Control and Signal Processing, E. Catsigeras (editor), WSEAS Press, Barcelona, 2012, pp. 81–86

[40] Y. Takizawa and A. Fukasawa, Formulation of a Neural System and Modeling of Topographical Mapping in Brain. In the book: Latest Trends in Circuits, Automatic Control and Signal Processing, E. Catsigeras (editor), WSEAS Press, Barcelona, 2012, pp. 59–64

[41] F. Tramontana, L. Gardini, P. Ferri, The dynamics of the NAIRU model with two switching regimes, Journ. of Economic Dynamics and Control 34, 2010, pp. 681–695

[42] D. A. Vasseur, J. Fox, Phase-locking and environmental fluctuations generate synchrony in a predator–prey community, Nature 460, 2009, pp. 1007–1010, doi: 10.1038/nature08208

[43] C. K. Volos, Inverse Lag Synchronization in Mutually Coupled Nonlinear Circuits. In the book: Latest Trends on Communications, N. Mastorakis, V. Mladenov, Z. Bojkovic (editors), N. Bardis (assoc. editor), WSEAS Press, Corfu, 2010, pp. 31–36

[44] C. K. Volos, I. M. Kyprianidis and I. N. Stouboulos, Synchronization Phenomena in Coupled Nonlinear Systems Applied in Economic Cycles, WSEAS Trans. on Systems 11, 2012, pp. 681–690

[45] H. Wang, J. Ma, Chaos Control and Synchronization of a Fractional-order Autonomous System, WSEAS Trans. on Mathematics 11, 2012, pp. 700–711

[46] X. Wang, D. Liu, C. Li, Global solutions for second order impulsive integro-differential equations in Banach spaces, WSEAS Trans. on Mathematics 11, 2012, pp. 103–113

[47] J. Williams, Regulated Turbulence in Information. In the book: Convergence - Interdisciplinary Communications, W. Østreng (editor), pp. 162–171, Centre for Advanced Study, Oslo, 2005

[48] X. Yang, C. Huang, Q. Zhu, Synchronization of switched neural networks with mixed delays via impulsive control, Chaos, Solitons and Fractals 44, 2011, pp. 817–826.

[49] L.-S. Young, Open problems: Chaotic phenomena in three settings: large, noisy and out of equilibrium, Nonlinearity 21, 2008, pp. 245–252.


[50] J. Zhang and J. Ma, Research on Delayed Complexity Based on Nonlinear Price Game of Insurance Market, WSEAS Trans. on Mathematics 10, 2011, pp. 368–376

[51] L. Zhang, J. Yin, J. Liu, The Solutions of Initial Value Problems for Nonlinear Fourth-Order Impulsive Integro-Differential Equations in Banach Spaces, WSEAS Trans. on Mathematics 11, 2012, pp. 204–221
