Mathematical methods in biology
Master of Mathematics, second year
Benoît Perthame
Sorbonne Université,
UMR 7598, Laboratoire J.-L. Lions, BC187
4, place Jussieu, F-75252 Paris cedex 5
October 26, 2018
Contents
1 Lotka-Volterra equations 1
1.1 Movement and growth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Nonnegativity principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Operator monotonicity for competitive or cooperative systems . . . . . . . . . . . . . 3
1.4 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Reaction kinetics and entropy 7
2.1 Reaction rate equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 The law of mass action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Elementary examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Small numbers of molecules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3 Slow-fast dynamics and enzymatic reactions 13
3.1 Slow-fast dynamics (monostable) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Slow-fast dynamics (general) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Enzymatic reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Belousov-Zhabotinskii reaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4 Some material for parabolic equations 23
4.1 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2 The spectral decomposition of Laplace operators (Dirichlet) . . . . . . . . . . . . . . . . . 24
4.3 The spectral decomposition of Laplace operators (Neumann) . . . . . . . . . . . . . . . . 26
4.4 The Poincaré and Poincaré-Wirtinger inequalities . . . . . . . . . . . . . . . . . . . . . . . 26
4.5 Rectangles: explicit solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.6 Brownian motion and the heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5 Relaxation, perturbation and entropy methods 31
5.1 Asymptotic stability by perturbation methods (Dirichlet) . . . . . . . . . . . . . . . . . . 31
5.2 Asymptotic stability by perturbation methods (Neumann) . . . . . . . . . . . . . . . . 33
5.3 Entropy and relaxation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.4 Entropy: chemostat and SI system of epidemiology . . . . . . . . . . . . . . . . . . . . . . 36
5.5 The Lotka-Volterra prey-predator system with diffusion (Problem) . . . . . . . . . . . . . 37
5.6 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6 Blow-up and extinction of solutions 41
6.1 Semilinear equations; the method of the eigenfunction . . . . . . . . . . . . . . . . . . . . 42
6.2 Semilinear equations; the energy method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.3 Keller-Segel system; the method of moments . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.4 Non-extinction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7 Linear instability, Turing instability and pattern formation 51
7.1 Turing instability in linear reaction-diffusion systems . . . . . . . . . . . . . . . . . . . . . 52
7.2 Spots on the body and stripes on the tail . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.3 The simplest nonlinear example: the non-local Fisher/KPP equation . . . . . . . . . . . . 56
7.4 Phase transition: what is NOT Turing instability . . . . . . . . . . . . . . . . . . . . . . . 58
7.5 Gallery of parabolic systems giving Turing patterns . . . . . . . . . . . . . . . . . . . . . . 59
7.5.1 A cell polarity system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.5.2 The CIMA reaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.5.3 The diffusive Fisher/KPP system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.5.4 The Brusselator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.5.5 The Gray-Scott system (2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.5.6 Schnakenberg system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.5.7 The FitzHugh-Nagumo system with diffusion . . . . . . . . . . . . . . . . . . . . . 68
7.5.8 The Gierer-Meinhardt system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
7.6 Models from ecology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
7.6.1 Competing species and Turing instability . . . . . . . . . . . . . . . . . . . . . . . 69
7.6.2 Prey-predator system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
7.6.3 Prey-predator system with Turing instability (problem) . . . . . . . . . . . . . . . 70
7.7 Keller-Segel with growth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
8 Transport and advection equations 73
8.1 Transport equation; method of characteristics . . . . . . . . . . . . . . . . . . . . . . . 74
8.2 Advection and volumes transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.3 Advection (Dirac masses) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
8.4 Examples and related equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
8.4.1 Long time behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
8.4.2 Nonlinear advection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
8.4.3 The renewal equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
8.4.4 The Fokker-Planck equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.4.5 Other stochastic aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Chapter 1
Lotka-Volterra equations
As first examples, we present two general classes of equations that are used in biology: Lotka-Volterra
systems and chemical reactions. These are reaction-diffusion equations or, in a mathematical classification, semilinear equations. Our goal is to explain what mathematical properties follow from the set-up
of the model: nonnegativity properties, monotonicity and entropy inequalities. These are very general examples; for more material and more biologically oriented textbooks, the reader can also consult, for instance, [33, 32, 38, 22, 9, 1].
1.1 Movement and growth
The first class of equations is used in the area of population biology and ecological interactions, and is characterized by birth and death. For $1 \leq i \leq I$, we denote by $n_i(t,x)$ the population densities of $I$ interacting species at the location $x \in \mathbb{R}^d$ ($d = 2$ for example). We assume that these species move randomly according to Brownian motion, with a bias along a velocity $U_i(t,x) \in \mathbb{R}^d$, and that they have growth rates $R_i(t,x)$ (meaning birth minus death). Then, the system describing the dynamics of these population densities is
$$\frac{\partial}{\partial t} n_i \;-\; \underbrace{D_i \Delta n_i}_{\text{random motion}} \;+\; \underbrace{\operatorname{div}(U_i\, n_i)}_{\text{oriented drift}} \;=\; \underbrace{n_i\, R_i}_{\text{growth and death}}, \qquad t \geq 0,\; x \in \mathbb{R}^d,\; i = 1, \dots, I. \tag{1.1}$$
In the simplest cases, the diffusion coefficients Di > 0 are constants depending on the species (they
represent active motion of individuals) and the bulk velocity Ui vanishes. Here we insist on the birth and
death rates Ri; they depend strongly on the interactions between species and we express this fact as a
nonlinearity
$$R_i(t,x) = R_i\big(n_1(t,x), n_2(t,x), \dots, n_I(t,x)\big). \tag{1.2}$$
A standard family of such nonlinearities results in quadratic interactions and is written
$$R_i(n_1, n_2, \dots, n_I) = r_i + \sum_{j=1}^{I} c_{ij}\, n_j,$$
with ri the intrinsic growth rate of the species i (it can be positive or negative) and cij the interaction
effect of species j on species i. The coefficients cij are usually neither symmetric nor nonnegative. One
can distinguish
• $c_{ij} < 0$, $c_{ji} > 0$: species $i$ is a prey for $j$, species $j$ is a predator for $i$. When several species eat no other food, these are called trophic chains/interactions,
• $c_{ij} > 0$, $c_{ji} > 0$: a mutualistic interaction (both species help the other and benefit from it), to be distinguished from symbiosis (an association of two species whose survival depends on each other),
• $c_{ij} < 0$, $c_{ji} < 0$: a direct competition (both species compete, for example, for the same food),
• $c_{ii} < 0$ is usually assumed, to represent intra-specific competition.
The quadratic aspect is related to the binary encounters necessary for the interaction to occur. Better models include saturation effects: for instance, the effect of too numerous predators is to decrease their individual efficiency. This leads ecologists to use instead (with now $c_{ii} > 0$)
$$R_i(n_1, n_2, \dots, n_I) = r_i - c_{ii}\, n_i + \sum_{j=1,\, j \neq i}^{I} \frac{c_{ij}\, n_j}{1 + n_j};$$
the terminology 'Holling type II' is used in the case where $j$ refers to a prey and $i$ to a predator. Ecological networks are described by such systems: diversity refers to the number of species $I$ itself, and connectance refers to the proportion of interacting species, i.e. those pairs such that $c_{ij} \neq 0$. Connectance is low for a trophic chain and higher for a food web (species use several foods).
The original prey-predator system of Lotka and Volterra has two species, $I = 2$ ($i = 1$ the prey, $i = 2$ the predator). The prey (small fish) can feed on abundant zooplankton, and thus $r_1 > 0$, while the predators (sharks) die out without small fish to eat ($r_2 < 0$). The sharks eat small fish in proportion to the number of sharks ($c_{12} < 0$), while the shark population grows proportionally to the number of small fish they can consume ($c_{21} > 0$). Therefore, we find the rule
$$r_1 > 0, \quad r_2 < 0, \quad c_{11} = c_{22} = 0, \quad c_{12} < 0, \quad c_{21} > 0.$$
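Dropping diffusion, this sign rule gives the classic Lotka-Volterra ODE system, which can be simulated directly. A minimal sketch (the parameter values, initial data and Runge-Kutta stepping are illustrative choices, not from the text) that also monitors the well-known first integral of the classic system:

```python
import math

# Classic Lotka-Volterra ODE (diffusion dropped); illustrative parameters
# following the sign rule: r1 > 0, r2 < 0, c12 < 0, c21 > 0, c11 = c22 = 0.
r1, r2, c12, c21 = 1.0, -1.0, -1.0, 1.0

def rhs(n1, n2):
    """Right hand sides n_i R_i of the two equations."""
    return n1 * (r1 + c12 * n2), n2 * (r2 + c21 * n1)

def rk4_step(n1, n2, dt):
    """One classical fourth-order Runge-Kutta step."""
    a1, b1 = rhs(n1, n2)
    a2, b2 = rhs(n1 + 0.5 * dt * a1, n2 + 0.5 * dt * b1)
    a3, b3 = rhs(n1 + 0.5 * dt * a2, n2 + 0.5 * dt * b2)
    a4, b4 = rhs(n1 + dt * a3, n2 + dt * b3)
    return (n1 + dt * (a1 + 2 * a2 + 2 * a3 + a4) / 6,
            n2 + dt * (b1 + 2 * b2 + 2 * b3 + b4) / 6)

def invariant(n1, n2):
    """First integral of the classic system, constant along trajectories."""
    return c21 * n1 + r2 * math.log(n1) - c12 * n2 - r1 * math.log(n2)

n1, n2, dt = 1.5, 1.0, 0.01
V0 = invariant(n1, n2)
for _ in range(1000):            # integrate up to t = 10
    n1, n2 = rk4_step(n1, n2, dt)
drift = abs(invariant(n1, n2) - V0)
```

The near-conservation of the invariant reflects the periodic orbits of the classic system; solutions oscillate around the positive steady state instead of converging to it.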
1.2 Nonnegativity principle
In all generality, solutions of the Lotka-Volterra system (1.1) satisfy very few qualitative properties: there are neither conservation laws (because the systems contain birth and death), nor entropy properties, a concept which does not seem to be relevant for ecological systems. As we shall see, the quadratic aspect may lead to blow-up (solutions exist only for a finite time, see Chapter 6). Let us only mention here that the model is consistent with the property that population densities remain nonnegative.
Lemma 1.1 (Nonnegativity principle) Assume that the initial data $n_i^0$ are nonnegative functions in $L^2(\mathbb{R}^d)$, that $U_i \equiv 0$, and that there is a locally bounded function $\Gamma(t)$ such that $|R_i(t,x)| \leq \Gamma(t)$. Then the weak solutions in $C\big(\mathbb{R}_+; L^2(\mathbb{R}^d)\big)$ of the Lotka-Volterra system (1.1) satisfy $n_i(t,x) \geq 0$.
The definition and usual properties of weak solutions are given in Chapter 4, but we do not require
that theoretical background to show the formal manipulations leading to this result.
Proof. (Formal) Here we use the method of Stampacchia. Set $p_i = -n_i$; then we have
$$\frac{\partial}{\partial t} p_i - D_i \Delta p_i = p_i R_i.$$
Multiply by $(p_i)_+ := \max(0, p_i)$ and integrate by parts. We find
$$\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^d}\big(p_i(t,x)\big)_+^2\,dx + D_i \int_{\mathbb{R}^d}\big|\nabla (p_i)_+\big|^2 = \int_{\mathbb{R}^d}\big(p_i(t,x)\big)_+^2\, R_i \tag{1.3}$$
and thus
$$\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^d}\big(p_i(t,x)\big)_+^2\,dx \leq \Gamma(t)\int_{\mathbb{R}^d}\big(p_i(t,x)\big)_+^2.$$
Therefore, we have
$$\int_{\mathbb{R}^d}\big(p_i(t,x)\big)_+^2\,dx \leq e^{2\int_0^t \Gamma(s)\,ds}\int_{\mathbb{R}^d}\big(p_i^0(x)\big)_+^2\,dx.$$
But the assumption $n_i^0 \geq 0$ implies that $\int_{\mathbb{R}^d}\big(p_i^0(x)\big)_+^2\,dx = 0$, and thus, for all times, $\int_{\mathbb{R}^d}\big(p_i(t)\big)_+^2\,dx = 0$, which means $n_i(t,x) \geq 0$.
(Rigorous) The difficulty here stems from the 'linear' definition of weak solutions of (1.1), which does not allow for nonlinear manipulations such as multiplication by $(p_i)_+$, because this is not a smooth function. Section ?? explains how to resolve this difficulty.
Exercise. Perform the same formal proof with the drift terms Ui. Which assumption is needed on Ui?
Exercise. In the formal context of Lemma 1.1, show that
$$\frac{d}{dt}\int_{\mathbb{R}^d} n_i(t,x)\,dx = \int_{\mathbb{R}^d} R_i(t,x)\,n_i(t,x)\,dx, \qquad \int_{\mathbb{R}^d} n_i(t,x)^2\,dx \leq \int_{\mathbb{R}^d}\big(n_i^0(x)\big)^2\,dx\; e^{2\int_0^t \Gamma(s)\,ds}.$$
1.3 Operator monotonicity for competitive or cooperative systems
The nonnegativity property is general but rather weak. The comparison and monotonicity principles are much stronger properties, but they require either competitive or cooperative systems.
For simplicity, we consider only two cooperative species, $I = 2$, in equations (1.1)–(1.2), and we make the cooperative assumption
$$\frac{\partial}{\partial n_2} R_1(n_1, n_2) \geq 0, \qquad \frac{\partial}{\partial n_1} R_2(n_1, n_2) \geq 0. \tag{1.4}$$
Lemma 1.2 (Monotonicity principle for cooperative systems) Consider a cooperative system, that is, (1.4), and two solutions $(n_1, n_2)$, $(m_1, m_2)$ such that, for some constants, $0 \leq n_i \leq \Gamma_1$, $0 \leq m_i \leq \Gamma_1$, $|R_i| \leq \Gamma_2$ and $\big|\frac{\partial}{\partial n_i} R_j\big| \leq \Gamma_3$. Then the Lotka-Volterra system (1.1)–(1.2) preserves the order for all times:
$$n_i^0 \leq m_i^0, \ i = 1, 2 \implies n_i(t) \leq m_i(t), \quad \forall t \geq 0, \ i = 1, 2.$$
Proof. Subtracting the two equations for $n_1$ and $m_1$, we find successively, by adding and subtracting convenient terms,
$$\frac{\partial}{\partial t}(n_1 - m_1) - D_1 \Delta(n_1 - m_1) = (n_1 - m_1)\,R_1(n_1, n_2) + m_1\big(R_1(n_1, n_2) - R_1(m_1, m_2)\big),$$
$$\frac{\partial}{\partial t}\frac{(n_1 - m_1)_+^2}{2} - D_1 (n_1 - m_1)_+ \Delta(n_1 - m_1) = (n_1 - m_1)_+^2\, R_1(n_1, n_2) + m_1 (n_1 - m_1)_+\big(R_1(n_1, n_2) - R_1(m_1, n_2)\big) + m_1 (n_1 - m_1)_+\big(R_1(m_1, n_2) - R_1(m_1, m_2)\big).$$
After integration in $x$, the first and second terms of the right hand side are controlled by the bounds on $R$:
$$\frac{d}{dt}\int_{\mathbb{R}^d}\frac{(n_1 - m_1)_+^2}{2} + D_1\int_{\mathbb{R}^d}\big|\nabla(n_1 - m_1)_+\big|^2 \leq (\Gamma_2 + \Gamma_3)\int_{\mathbb{R}^d}(n_1 - m_1)_+^2 + \int_{\mathbb{R}^d} m_1 (n_1 - m_1)_+\big(R_1(m_1, n_2) - R_1(m_1, m_2)\big).$$
We now use $0 \leq m_1 \leq \Gamma_1$, $\frac{\partial}{\partial n_2} R_1(n_1, n_2) \geq 0$ and the Lipschitz constant of $R_1(n_1, n_2)$ to conclude
$$\frac{d}{dt}\int_{\mathbb{R}^d}\frac{(n_1 - m_1)_+^2}{2} \leq (\Gamma_2 + \Gamma_3)\int_{\mathbb{R}^d}(n_1 - m_1)_+^2 + \Gamma_1\Gamma_3\int_{\mathbb{R}^d}(n_1 - m_1)_+(n_2 - m_2)_+.$$
The same argument applied to $n_2 - m_2$ leads to the similar inequality
$$\frac{d}{dt}\int_{\mathbb{R}^d}\frac{(n_2 - m_2)_+^2}{2} \leq (\Gamma_2 + \Gamma_3)\int_{\mathbb{R}^d}(n_2 - m_2)_+^2 + \Gamma_1\Gamma_3\int_{\mathbb{R}^d}(n_1 - m_1)_+(n_2 - m_2)_+,$$
and adding these two inequalities, with $u(t) = \int_{\mathbb{R}^d}\big[(n_1 - m_1)_+^2 + (n_2 - m_2)_+^2\big]$ and using $2ab \leq a^2 + b^2$, we obtain, for some constant $\Gamma$, the inequality
$$\frac{du(t)}{dt} \leq \Gamma\, u(t).$$
Since $u(0) = 0$ by our initial order assumption, we conclude that $u(t) = 0$ for all $t \geq 0$, which is the conclusion of the lemma.
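The monotonicity principle is easy to observe numerically on a spatially homogeneous cooperative system (diffusion dropped). The saturating rates below are an illustrative choice satisfying (1.4), not taken from the text:

```python
import math

# Two-species cooperative ODE system n_i' = n_i R_i with bounded, illustrative
# rates chosen so that (1.4) holds: dR1/dn2 >= 0 and dR2/dn1 >= 0.
def R1(n1, n2):
    return 1.0 - n1 + 0.5 * math.tanh(n2)

def R2(n1, n2):
    return 1.0 - n2 + 0.5 * math.tanh(n1)

def euler_step(n1, n2, dt):
    return n1 + dt * n1 * R1(n1, n2), n2 + dt * n2 * R2(n1, n2)

dt = 0.01
n = (0.2, 0.3)   # smaller initial data
m = (0.5, 0.6)   # larger initial data
order_preserved = True
for _ in range(1000):            # integrate up to t = 10
    n = euler_step(n[0], n[1], dt)
    m = euler_step(m[0], m[1], dt)
    # the order n_i <= m_i should persist, as Lemma 1.2 predicts
    order_preserved = order_preserved and n[0] <= m[0] and n[1] <= m[1]
```

For small enough time steps, the explicit Euler map is itself monotone for such a system, so the discrete trajectories inherit the ordering of the continuous ones.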
Exercise. Show that the result of Lemma 1.2 holds true for the system
$$\frac{\partial}{\partial t} n_i - D_i \Delta n_i = R_i\big(n_1(t,x), n_2(t,x), \dots, n_I(t,x)\big).$$
Exercise. For a cooperative system, assume that the initial data $(n_1^0, n_2^0)$ is a smooth subsolution of the steady state equation. Write the equation for $\frac{\partial n_i(t)}{\partial t}$ and prove that
$$\frac{\partial n_i(t)}{\partial t} \geq 0.$$
Exercise. Write the proof that order is preserved for a general k × k cooperative system.
Exercise. Consider the case of $2 \times 2$ competitive systems, that is,
$$\frac{\partial}{\partial n_2} R_1(n_1, n_2) \leq 0, \qquad \frac{\partial}{\partial n_1} R_2(n_1, n_2) \leq 0. \tag{1.5}$$
Show that the order $0 \leq n_1 \leq m_1$, $0 \leq m_2 \leq n_2$ is preserved. There is no such general statement for larger systems.
1.4 Challenges
Variability. There is always a large variability in living populations. For that reason, solutions of models with fixed parameters, for movement or growth, usually fit experiments or observations poorly. The models are nevertheless useful for explaining these observations qualitatively, but not for predicting individual behavior, a major limitation when dealing with medical applications, for example. A usual point of view is that these parameters are distributed; one can use a statistical representation (see the software Monolix at http://software.monolix.org/) in pharmacology. One can also use so-called structured population models, see [32, 34, 38]. When modeling Darwinian evolution (see adaptive evolution in Section ??), the parameters are part of the model solution, and mutations are seen as changes in the model coefficients selected by adaptation to the environment.
Small numbers. In physics and chemistry, $10^{23}$ is the normal order of magnitude for a number of molecules. The populations studied with Lotka-Volterra equations are much smaller, and $10^{6}$ is already a large number. This means that exponentially decaying tails very quickly become meaningless, because they represent a number of individuals of less than one. The interpretation of such tails should be questioned carefully, and several types of corrections can be included; demographic stochasticity is used in Monte-Carlo simulations, which corresponds to a survival threshold in PDEs¹.
¹Gauduchon, M. and Perthame, B. Survival thresholds and mortality rates in adaptive dynamics: conciliating deterministic and stochastic simulations. Mathematical Medicine and Biology 27 (2010), no. 3, 195–210.
Chapter 2
Reaction kinetics and entropy
When large numbers of molecules are involved in a chemical reaction, the kinetics are well described
by reaction rate equations. These are nonlinear equations describing the population densities (concentrations) of the reacting molecules, and the specific form of these nonlinearities is usually prescribed by
the law of mass action. This section deals with this aspect of reaction kinetics. The derivation of these
equations is postponed to Section ?? where we introduce the chemical master equation, which better
describes a small number of reacting molecules, an important topic in cell biology.
2.1 Reaction rate equations
The general form of the equations leads to a particular structure for the right hand side of semi-linear
parabolic equations. They are written
$$\frac{\partial}{\partial t} n_i \;-\; \underbrace{D_i \Delta n_i}_{\text{molecular diffusion}} \;+\; \underbrace{n_i\, L_i = G_i}_{\text{reaction}}, \qquad t \geq 0,\; x \in \mathbb{R}^d,\; i = 1, 2, \dots, I. \tag{2.1}$$
The quantities $n_i \geq 0$ are molecular concentrations; the loss terms $L_i \geq 0$ depend on all the molecules $n_j$ with which the molecule $i$ can react, and the gain terms $G_i \geq 0$, which also depend on the $n_j$'s, denote the rates of production of $n_i$ from the other reacting molecules. The molecular diffusion rate $D_i$ of these molecules can be computed from the molecular size according to the Einstein rule.
Summing over the chemical reactions at play, indexed by $p$, the nonlinearities $L_i$ and $G_i$ take the form
$$n_i L_i = \sum_{\text{reactions } p} a_i^p\, k_p \prod_{j=1}^{I} n_j^{a_j^p}, \qquad G_i = \sum_{\text{reactions } p} b_i^p\, k_p \prod_{j=1}^{I} n_j^{a_j^p}. \tag{2.2}$$
The powers $a_j^p, b_j^p \in \mathbb{N}$ represent the number of molecules $j$ necessary for the reaction $p$; this is the law of mass action.
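The structure (2.2) and the atom-conservation identity (2.3) below are straightforward to encode and test. The sketch below (hypothetical rate constants) assembles $n_iL_i$ and $G_i$ for the dissociation pair $O_2 \rightleftharpoons O + O$ treated later in this chapter, and checks that the oxygen-atom balance vanishes:

```python
# Mass-action rates (2.2) for a small network, plus the atom-conservation
# check (2.3). Reactions encoded as (a, b, k): O2 -> O + O and O + O -> O2.
reactions = [((1, 0), (0, 2), 2.0),   # forward, illustrative rate k+
             ((0, 2), (1, 0), 0.5)]   # backward, illustrative rate k-
N = (2, 1)  # oxygen atoms per molecule: O2 carries 2, O carries 1

def rates(n):
    """Return (loss, gain) with loss[i] = n_i L_i and gain[i] = G_i."""
    I = len(n)
    loss, gain = [0.0] * I, [0.0] * I
    for a, b, k in reactions:
        flux = k
        for j in range(I):
            flux *= n[j] ** a[j]      # law of mass action
        for i in range(I):
            loss[i] += a[i] * flux
            gain[i] += b[i] * flux
    return loss, gain

n = (0.7, 1.3)                        # arbitrary concentrations
loss, gain = rates(n)
# atom balance (2.3): sum_i N_ki (n_i L_i - G_i) must vanish identically
balance = sum(N[i] * (loss[i] - gain[i]) for i in range(len(n)))
```

Because the balance holds for every choice of concentrations, it is an identity on the coefficients, not a property of particular solutions.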
Nonnegativity property. We have factored out the term $n_i$ in front of $L_i$ to ensure that this term vanishes at $n_i = 0$. The loss term $L_i$ is not singular here, because the corresponding product contains the factor $n_i^{a_i^p}$ with $a_i^p \geq 1$ when the reactant $i$ really reacts (otherwise the corresponding term in $L_i$ vanishes). For that reason, as for the Lotka-Volterra systems, the nonnegativity property holds true.
Lemma 2.1 A weak solution of (2.1) with $G_i \geq 0$ and nonnegative initial data satisfies $n_i(t,x) \geq 0$ for all $i = 1, 2, \dots, I$.
Proof. Adapt the proof of Lemma 1.1.
Even when we do not know a priori that $G_i \geq 0$, this lemma is useful, because one may argue as follows. Solutions are first built using the positive part of $G_i$; then the lemma tells us that the $n_i$'s are nonnegative. From formula (2.2), the nonnegativity of the $n_j$'s implies that $G_i$ is nonnegative, and thus that we have built the solution of the correct problem.
Conservation of atoms. The second main property of these models comes from the conservation of atoms, which asserts that some quantities should be constant in time. For each atom $k = 1, \dots, K$, one defines the number $N_{ki}$ of atoms $k$ in the molecule $i$. All reactions should preserve the number of atoms; that is, for all $k \in \{1, \dots, K\}$, the coefficients $a_j^p$, $b_j^p$ and the rates $k_p$ should be such that
$$\sum_{i=1}^{I} N_{ki}\,[n_i L_i - G_i] = 0, \qquad \forall (n_i) \in \mathbb{R}^I. \tag{2.3}$$
This implies that
$$\frac{\partial}{\partial t}\sum_{i=1}^{I} N_{ki}\, n_i = \Delta \sum_{i=1}^{I} N_{ki}\, D_i\, n_i, \tag{2.4}$$
$$\frac{d}{dt}\int_{\mathbb{R}^d}\sum_{i=1}^{I} N_{ki}\, n_i(t,x)\,dx = 0, \qquad 1 \leq k \leq K,$$
and thus, a priori, the weighted $L^1$ bound holds:
$$\int_{\mathbb{R}^d}\sum_{i=1}^{I} N_{ki}\, n_i(t,x)\,dx = \int_{\mathbb{R}^d}\sum_{i=1}^{I} N_{ki}\, n_i^0(x)\,dx, \qquad 1 \leq k \leq K. \tag{2.5}$$
Except when all the diffusion rates $D_i$ are equal (in which case the principal part of (2.4) is the heat equation), it is not easy to extract from equation (2.4) more than the conservation law (2.5). Very few general tools are available, such as M. Pierre's duality estimate; see the survey¹.

There is a third property, the entropy dissipation, that we shall discuss later.
2.2 The law of mass action
Irreversible reaction. To begin with, consider molecular species $S_i$ undergoing the single irreversible reaction
$$a_1 S_1 + \dots + a_I S_I \;\xrightarrow{\;k_+\;}\; b_1 S_1 + \dots + b_I S_I,$$
¹Pierre, M. Global existence in reaction-diffusion systems with control of mass: a survey. Milan J. Math. 78 (2) (2010), 417–455.
where $a_i$ and $b_i \in \mathbb{N}$, with $(a_i)_{i=1,\dots,I} \neq (b_i)_{i=1,\dots,I}$, denote the numbers of molecules involved as reactants and products. Then, using the law of mass action, the equation for the population number densities $n_i$ of the species $i$ is written
$$\frac{\partial}{\partial t} n_i - D_i \Delta n_i + \underbrace{a_i k_+ \prod_{j=1}^{I} n_j^{a_j}}_{\text{loss of molecules by reaction}} = \underbrace{b_i k_+ \prod_{j=1}^{I} n_j^{a_j}}_{\text{gain of reaction product}}, \qquad i = 1, 2, \dots, I.$$
From the conservation of atoms, it is impossible that $b_i \geq a_i$ for all $i$ (resp. $b_i \leq a_i$ for all $i$). Therefore, at least one reactant ($a_i > b_i$) should disappear and at least one product ($b_i > a_i$) should be produced.
Reversible reaction. More interesting is the case when several reactions occur; they are then represented by a sum of such terms, one for each chemical reaction. For example, still with $(a_i) \neq (b_i) \in \mathbb{N}^I$, the reversible reaction
$$a_1 S_1 + \dots + a_I S_I \;\underset{k_-}{\overset{k_+}{\rightleftharpoons}}\; b_1 S_1 + \dots + b_I S_I$$
leads to the reaction rate equations
$$\frac{\partial}{\partial t} n_i - D_i \Delta n_i + \overbrace{a_i k_+ \prod_{j=1}^{I} n_j^{a_j}}^{\text{loss of forward reacting molecules}} + \overbrace{b_i k_- \prod_{j=1}^{I} n_j^{b_j}}^{\text{loss of backward reacting molecules}} = \underbrace{b_i k_+ \prod_{j=1}^{I} n_j^{a_j}}_{\text{gain of forward reaction product}} + \underbrace{a_i k_- \prod_{j=1}^{I} n_j^{b_j}}_{\text{gain of backward reaction product}}.$$
A more convenient form is
$$\frac{\partial}{\partial t} n_i - D_i \Delta n_i = (b_i - a_i)\Big[k_+ \prod_{j=1}^{I} n_j^{a_j} - k_- \prod_{j=1}^{I} n_j^{b_j}\Big], \qquad i = 1, 2, \dots, I. \tag{2.6}$$
With this form, one can check the fundamental entropy property for reversible reactions. Define
$$S(t,x) := \sum_{i=1}^{I} n_i\big[\ln(n_i) + \sigma_i - 1\big], \qquad \text{with} \quad \sum_{i=1}^{I} \sigma_i (a_i - b_i) = \ln k_+ - \ln k_-. \tag{2.7}$$
There are different possible choices for the constants $\sigma_i$, which can be more or less convenient depending on the case. In all cases, one can check the following.
Lemma 2.2 (Entropy property for reversible reactions) The entropy dissipation equality holds:
$$\frac{d}{dt}\int S(t,x)\,dx = -\sum_{i=1}^{I} D_i \int \frac{|\nabla n_i|^2}{n_i}\,dx - D(t) \leq 0,$$
$$D(t) := \int \Big[\ln\Big(k_+ \prod_{j=1}^{I} n_j^{a_j}\Big) - \ln\Big(k_- \prod_{j=1}^{I} n_j^{b_j}\Big)\Big]\Big[k_+ \prod_{j=1}^{I} n_j^{a_j} - k_- \prod_{j=1}^{I} n_j^{b_j}\Big]\,dx \geq 0.$$
The entropy dissipation property is very important because it dictates the long term behavior of the reaction. Mathematically, it is also useful to prove a priori estimates for the quantity $\frac{|\nabla n_i|^2}{n_i}$, but also integrability properties of powers of $n_i$, which are necessary to define solutions of the reaction-diffusion equation. This has been used recently in a series of papers by L. Desvillettes and K. Fellner² for estimates and relaxation properties, and by T. Goudon and A. Vasseur³ for regularity properties.
2.3 Elementary examples
Isomerization. The reversible isomerization reaction is the simplest chemical reaction: the atoms within the molecule are not changed, only their spatial arrangement. The reaction is represented as
$$S_1 \;\underset{k_-}{\overset{k_+}{\rightleftharpoons}}\; S_2$$
and corresponds, in (2.6), to $a_1 = 1$, $b_1 = 0$, $a_2 = 0$, $b_2 = 1$, which leads to
$$\begin{cases} \dfrac{\partial}{\partial t} n_1 - D_1 \Delta n_1 + k_+ n_1 = k_- n_2, \\[4pt] \dfrac{\partial}{\partial t} n_2 - D_2 \Delta n_2 + k_- n_2 = k_+ n_1. \end{cases} \tag{2.8}$$
The conserved quantity is simply
$$\frac{d}{dt}\int [n_1(t,x) + n_2(t,x)]\,dx = 0, \qquad \int [n_1(t,x) + n_2(t,x)]\,dx = \int [n_1^0(x) + n_2^0(x)]\,dx.$$
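Ignoring diffusion, (2.8) becomes a linear ODE system that conserves $n_1 + n_2$ and relaxes to the chemical equilibrium $k_+ n_1 = k_- n_2$. A minimal sketch with illustrative constants and explicit Euler time stepping:

```python
# Isomerization ODE (diffusion dropped): n1' = -k+ n1 + k- n2,
#                                        n2' =  k+ n1 - k- n2.
kp, km = 2.0, 1.0                # illustrative rate constants
n1, n2, dt = 1.0, 0.0, 0.001
total0 = n1 + n2                 # conserved total mass
for _ in range(10000):           # integrate up to t = 10
    flux = kp * n1 - km * n2     # net forward reaction rate
    n1, n2 = n1 - dt * flux, n2 + dt * flux
# at equilibrium the detailed balance k+ n1 = k- n2 holds, so
# n1 -> km/(kp + km) * total0, here 1/3
```

Note that the symmetric update conserves $n_1 + n_2$ exactly (up to rounding), step by step, mirroring the continuous conservation law.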
The formula (2.7) for the entropy (with $a_1 = b_2 = 1$, $b_1 = a_2 = 0$) gives
$$S(t,x) = n_1\big[\ln(k_+ n_1) - 1\big] + n_2\big[\ln(k_- n_2) - 1\big], \tag{2.9}$$
and one can check the entropy dissipation relation
$$\frac{d}{dt}\int_{\mathbb{R}^d} S(t,x)\,dx = -\int_{\mathbb{R}^d}\Big[D_1 \frac{|\nabla n_1|^2}{n_1} + D_2 \frac{|\nabla n_2|^2}{n_2}\Big]dx - \int_{\mathbb{R}^d}\big[\ln(k_+ n_1) - \ln(k_- n_2)\big]\big[k_+ n_1 - k_- n_2\big]\,dx \leq 0.$$
Dioxygen dissociation. The standard dissociation reaction of dioxygen into monoxygen is usually associated with hyperbolic models for fluid flows rather than with diffusion. This is because it is a very energetic reaction, occurring at very high temperature (for atmosphere re-entry vehicles, for example) and with reaction rates depending critically on this temperature, see [17]. But for our purpose here, we forget these limitations and consider the dissociation of $n_1 = [O_2]$ into $n_2 = [O]$ with rate $k_+ > 0$, and conversely its recombination with rate $k_- > 0$:
$$O_2 \;\underset{k_-}{\overset{k_+}{\rightleftharpoons}}\; O + O.$$
²L. Desvillettes and K. Fellner, Entropy methods for reaction-diffusion equations: slowly growing a-priori bounds. Rev. Mat. Iberoamericana 24 (2) (2008), 407–431.
³T. Goudon and A. Vasseur, Regularity analysis for systems of reaction-diffusion equations. Ann. Sci. Éc. Norm. Supér. 43 (1) (2010), 117–141.
This leads to $a_1 = 1$, $b_1 = 0$, $a_2 = 0$, $b_2 = 2$, and thus we obtain
$$\begin{cases} \dfrac{\partial}{\partial t} n_1 - D_1 \Delta n_1 + k_+ n_1 = k_- (n_2)^2, \\[4pt] \dfrac{\partial}{\partial t} n_2 - D_2 \Delta n_2 + 2 k_- (n_2)^2 = 2 k_+ n_1, \end{cases} \tag{2.10}$$
with initial data $n_1^0 \geq 0$, $n_2^0 \geq 0$. According to the law of mass action, the term $(n_2)^2$ arises because the encounter of two atoms of monoxygen is required for the reaction.
We derive the conservation law (the number of atoms is constant) by combining the two equations:
$$\frac{\partial}{\partial t}[2 n_1 + n_2] - \Delta[2 D_1 n_1 + D_2 n_2] = 0,$$
which implies that, for all $t \geq 0$,
$$\int_{\mathbb{R}^d}\big[2 n_1(t,x) + n_2(t,x)\big]\,dx = M := \int_{\mathbb{R}^d}\big[2 n_1^0(x) + n_2^0(x)\big]\,dx. \tag{2.11}$$
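Without diffusion, the combination $2n_1 + n_2$ is conserved exactly by any consistent time discretization of the reaction terms, since the two right hand sides cancel in that combination. A sketch with illustrative constants:

```python
# Dioxygen dissociation ODE (diffusion dropped), from (2.10):
#   n1' = -k+ n1 + k- n2^2,   n2' = 2 k+ n1 - 2 k- n2^2.
kp, km = 1.0, 1.0                    # illustrative rate constants
n1, n2, dt = 1.0, 0.5, 0.001
M = 2 * n1 + n2                      # atom count, mirroring (2.11)
for _ in range(10000):               # integrate up to t = 10
    flux = kp * n1 - km * n2 * n2    # net dissociation rate
    n1, n2 = n1 - dt * flux, n2 + 2 * dt * flux
# at large times the reaction equilibrates: k+ n1 = k- (n2)^2
```

The update changes $n_1$ by $-\,dt\,\mathrm{flux}$ and $n_2$ by $+2\,dt\,\mathrm{flux}$, so $2n_1 + n_2$ is untouched at every step, up to floating-point rounding.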
For the simple case of the reaction (2.10), the formula (2.7) for the entropy (with $a_1 = 1$, $b_2 = 2$, $b_1 = a_2 = 0$) gives
$$S(t,x) = n_1\big[\ln(k_+ n_1) - 1\big] + n_2\big[\ln(k_-^{1/2}\, n_2) - 1\big]. \tag{2.12}$$
One can readily check the following.

Lemma 2.3 (Entropy inequality)
$$\frac{d}{dt}\int_{\mathbb{R}^d} S(t,x)\,dx = -\int_{\mathbb{R}^d}\Big[D_1\frac{|\nabla n_1|^2}{n_1} + D_2\frac{|\nabla n_2|^2}{n_2}\Big]dx - \int_{\mathbb{R}^d}\big[\ln\big(k_-(n_2)^2\big) - \ln(k_+ n_1)\big]\big[k_-(n_2)^2 - k_+ n_1\big]\,dx \leq 0.$$
Exercise. Deduce from (2.11) and $n_1 \geq 0$, $n_2 \geq 0$ the a priori bound
$$2 k_- \int_0^T \int_{\mathbb{R}^d} (n_2)^2\,dx\,dt \leq M\,(1 + 2 k_+ T).$$
Hint. In (2.10), integrate the equation for $n_1$.
Hemoglobin oxidation. Hemoglobin Hb can bind to dioxygen O2 to form HbO2 according to the reaction
$$Hb + O_2 \;\underset{k_-}{\overset{k_+}{\rightleftharpoons}}\; HbO_2.$$
This reaction is important in brain imaging, because the magnetic properties of Hb and HbO2 are different; the consumption of oxygen by neurons generates deoxyhemoglobin, which can be detected by MRI (magnetic resonance imaging) and thus, indirectly, indicates the location of neural activity.

The resulting system of PDEs, for $n_1 = [Hb]$, $n_2 = [O_2]$ and $n_3 = [HbO_2]$, is
$$\begin{cases} \dfrac{\partial}{\partial t} n_1 - D_1 \Delta n_1 + k_+ n_1 n_2 = k_- n_3, \\[4pt] \dfrac{\partial}{\partial t} n_2 - D_2 \Delta n_2 + k_+ n_1 n_2 = k_- n_3, \\[4pt] \dfrac{\partial}{\partial t} n_3 - D_3 \Delta n_3 + k_- n_3 = k_+ n_1 n_2. \end{cases}$$
From this system, we can derive two conservation laws, for the total numbers of molecules Hb and O2:
$$\frac{\partial}{\partial t}[n_1 + n_3] - \Delta[D_1 n_1 + D_3 n_3] = 0, \qquad \frac{\partial}{\partial t}[n_2 + n_3] - \Delta[D_2 n_2 + D_3 n_3] = 0.$$
These imply an $L^1$ control:
$$M := \int [n_1(t) + n_2(t) + 2 n_3(t)] = \int [n_1^0 + n_2^0 + 2 n_3^0].$$
As in the case of dioxygen dissociation, integrating the third equation, one concludes the quadratic estimate
$$k_+ \int_0^T \int_{\mathbb{R}^d} n_1 n_2 \,dx\,dt \leq M\,(1 + k_- T).$$
From (2.7) (with $a_1 = a_2 = 1$, $b_3 = 1$ and $a_3 = b_1 = b_2 = 0$), this system also comes with an entropy:
$$S(t,x) = n_1\big[\ln(k_+^{1/2} n_1) - 1\big] + n_2\big[\ln(k_+^{1/2} n_2) - 1\big] + n_3\big[\ln(k_- n_3) - 1\big].$$
Mathematical references and studies on this system can be found in B. Andreianov and H. Labani4.
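The two conservation laws can again be checked on the spatially homogeneous version of the system; the rate constants and initial data below are illustrative:

```python
# Hemoglobin binding ODE (diffusion dropped):
#   n1' = -k+ n1 n2 + k- n3   (Hb)
#   n2' = -k+ n1 n2 + k- n3   (O2)
#   n3' =  k+ n1 n2 - k- n3   (HbO2)
# Both n1 + n3 and n2 + n3 are conserved in time.
kp, km = 1.0, 0.2                    # illustrative rate constants
n1, n2, n3, dt = 1.0, 0.8, 0.0, 0.001
c1, c2 = n1 + n3, n2 + n3            # the two conserved combinations
for _ in range(20000):               # integrate up to t = 20
    flux = kp * n1 * n2 - km * n3    # net binding rate
    n1, n2, n3 = n1 - dt * flux, n2 - dt * flux, n3 + dt * flux
# at equilibrium the reaction balances: k+ n1 n2 = k- n3
```

A single scalar flux drives all three species, so the two linear invariants are preserved by construction at every time step.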
Exercise. Another simple and generic example is the reversible reaction $A + B \underset{k_-}{\overset{k_+}{\rightleftharpoons}} C + D$, which leads to
$$\begin{cases} \dfrac{\partial}{\partial t} n_1 - D_1 \Delta n_1 + k_+ n_1 n_2 = k_- n_3 n_4, \\[4pt] \dfrac{\partial}{\partial t} n_2 - D_2 \Delta n_2 + k_+ n_1 n_2 = k_- n_3 n_4, \\[4pt] \dfrac{\partial}{\partial t} n_3 - D_3 \Delta n_3 + k_- n_3 n_4 = k_+ n_1 n_2, \\[4pt] \dfrac{\partial}{\partial t} n_4 - D_4 \Delta n_4 + k_- n_3 n_4 = k_+ n_1 n_2, \end{cases} \tag{2.13}$$
with the $n_i \geq 0$ and the single atom conservation law
$$\int_{\mathbb{R}^d}\big[n_1(t,x) + n_2(t,x) + n_3(t,x) + n_4(t,x)\big]dx = M := \int_{\mathbb{R}^d}\big[n_1^0(x) + n_2^0(x) + n_3^0(x) + n_4^0(x)\big]dx.$$
Choose the constants $k_i$ so that $S = \sum_{i=1}^{4} n_i \ln(k_i n_i)$ is a convex entropy, and write the entropy inequality for the chemical reaction (2.13).
2.4 Small numbers of molecules
It may happen in biological systems, e.g. for intracellular bio-molecular interactions, that the number of molecules is not large, so that the reaction rate equations are not applicable. The alternative is to use the so-called chemical master equations, which describe the dynamics of individual molecules.
These models are close to the jump processes described in Chapter ?? and briefly presented in Section ??. They allow for a number of asymptotic limits (from discrete to continuous, from continuous jumps to drift-diffusion). For these reasons, there is a large literature on this subject, on both the theoretical and numerical aspects⁵,⁶,⁷.

⁴B. Andreianov and H. Labani, Preconditioning operators and $L^\infty$ attractor for a class of reaction-diffusion systems. Comm. Pure Appl. Anal. 11 (6) (2012), 2179–2199.
⁵A. De Masi and E. Presutti, Mathematical methods for hydrodynamic limits. Springer-Verlag, Berlin, 1991.
⁶D. T. Gillespie, Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 81 (25) (1977), 2340–2361.
⁷J. L. Doob, Markoff chains: denumerable case. Trans. Amer. Math. Soc. 58 (3) (1945), 455–473.
Chapter 3
Slow-fast dynamics and enzymatic reactions
In many situations, one encounters very different time scales for different reactions and, more generally, for different phenomena. When one time scale is very short compared to the others, one wishes to consider that the fast phenomenon is at equilibrium. This chapter gives circumstances in which this is legitimate. A typical example is the case of enzymatic reactions, where a low level of enzymes regulates a reaction. Other examples arise in the spiking dynamics of neurons.
Throughout this chapter, we ignore diffusion and work with ordinary differential equations. For a parameter $\varepsilon > 0$, we study the system of two equations with a fast and a slow time scale:
$$\begin{cases} \varepsilon \dfrac{du_\varepsilon(t)}{dt} = f\big(u_\varepsilon(t), v_\varepsilon(t)\big), & u_\varepsilon(t=0) = u^0, \\[4pt] \dfrac{dv_\varepsilon(t)}{dt} = g\big(u_\varepsilon(t), v_\varepsilon(t)\big), & v_\varepsilon(t=0) = v^0. \end{cases} \tag{3.1}$$
Formally, the limiting system should be written as
$$\begin{cases} 0 = f\big(\bar u(t), \bar v(t)\big), \\[4pt] \dfrac{d\bar v(t)}{dt} = g\big(\bar u(t), \bar v(t)\big), & \bar v(t=0) = v^0. \end{cases} \tag{3.2}$$
There are, however, several objections to such a general asymptotic result. Firstly, to pass to the limit in nonlinear expressions such as $f\big(u_\varepsilon(t), v_\varepsilon(t)\big)$, one needs compactness, and for $u_\varepsilon(t)$ the time derivative is not under control. Secondly, the loss of an initial condition for $\bar u(t)$ should be explained. Thirdly, the relation $0 = f\big(\bar u(t), \bar v(t)\big)$ does not necessarily define $\bar u(t)$ as a function of $\bar v(t)$, so as to insert it in the differential equation for $\bar v(t)$. This last observation leads us to distinguish two cases:
• $\frac{\partial}{\partial u} f(u,v) < 0$: then we can hope to find a smooth mapping $F(v)$ such that $f(F(v), v) = 0$. We call this the 'monostable' case.
• Otherwise, some organizing principle should be found to decide which branch of solutions of $f(u,v) = 0$ is selected; we present some general results in Section 3.2.
The sign of $\frac{\partial}{\partial u} f(u,v)$ also appears when one studies the stability of a steady state. The linearized matrix is
$$\begin{pmatrix} \frac{1}{\varepsilon}\frac{\partial f}{\partial u}(u,v) & \frac{1}{\varepsilon}\frac{\partial f}{\partial v}(u,v) \\[4pt] \frac{\partial g}{\partial u}(u,v) & \frac{\partial g}{\partial v}(u,v) \end{pmatrix} = \frac{1}{\varepsilon}\begin{pmatrix} \frac{\partial f}{\partial u}(u,v) & \frac{\partial f}{\partial v}(u,v) \\[4pt] 0 & 0 \end{pmatrix} + O(1).$$
Again, stability is related to the property $\frac{\partial}{\partial u} f(u,v) < 0$.
3.1 Slow-fast dynamics (monostable)
We begin with the simplest situation, where the fast scale can be considered at equilibrium, meaning that we can invert the relation
$$0 = f(u,v) \iff u = F(v),$$
with $F$ a smooth function from $\mathbb{R}$ to $\mathbb{R}$. Then we end up with the single equation
$$\frac{d\bar v(t)}{dt} = g\big(F(\bar v(t)), \bar v(t)\big), \qquad \bar v(t=0) = v^0.$$
In particular, the initial data for $\bar u$ is lost, a phenomenon called an initial layer.
We are going to assume that
$$f, g \ \text{ are Lipschitz continuous}, \tag{3.3}$$
$$\frac{\partial f}{\partial u}(u,v) \leq -\alpha < 0 \quad \text{for some constant } \alpha > 0. \tag{3.4}$$
The second assumption implies that the equation for $u_\varepsilon$ is monostable, as one can see in the following exercise.

Exercise. With the assumption (3.4), consider the differential equation with $v$ fixed:
$$\frac{du(t)}{dt} = f\big(u(t), v\big), \qquad u(t=0) = u^0.$$
1. Prove that $\big|\frac{du(t)}{dt}\big| \leq |f(u^0, v)|\, e^{-\alpha t}$.
2. Show that there is a limit $u(t) \to U$ as $t \to \infty$.
Hint. Write the equation satisfied by $z(t) = \frac{du(t)}{dt}$.
Theorem 3.1 We assume (3.3)–(3.4) and that, for some $T > 0$, there is a constant $M$ such that $|u_\varepsilon(t)| + |v_\varepsilon(t)| \leq M$. Then,
(i) $v_\varepsilon(t)$ is Lipschitz continuous, uniformly in $\varepsilon$, for $t \in [0, T]$;
(ii) $u_\varepsilon(t)$ is Lipschitz continuous, uniformly in $\varepsilon$, for $t \in [\tau, T]$, for all $\tau \in\, ]0, T]$;
(iii) there are $\bar u(t), \bar v(t) \in C([0, T])$ such that, as $\varepsilon \to 0$, $u_\varepsilon(t) \to \bar u(t)$ uniformly on $[\tau, T]$ and $v_\varepsilon(t) \to \bar v(t)$ uniformly on $[0, T]$, and (3.2) holds.
Proof. The statement (i) is obvious, since $g\big(u_\varepsilon(t), v_\varepsilon(t)\big)$ is bounded. For statement (ii), we define
$$z_\varepsilon := \frac{du_\varepsilon(t)}{dt}.$$
Differentiating the equation for $u_\varepsilon$ and using the chain rule, we compute
$$\varepsilon\frac{dz_\varepsilon(t)}{dt} = z_\varepsilon\, \frac{\partial f}{\partial u}(u_\varepsilon, v_\varepsilon) + \frac{\partial f}{\partial v}(u_\varepsilon, v_\varepsilon)\, \frac{dv_\varepsilon(t)}{dt},$$
$$\frac{\varepsilon}{2}\frac{dz_\varepsilon(t)^2}{dt} = z_\varepsilon^2\, \frac{\partial f}{\partial u}(u_\varepsilon, v_\varepsilon) + z_\varepsilon\, \frac{\partial f}{\partial v}(u_\varepsilon, v_\varepsilon)\, g(u_\varepsilon, v_\varepsilon).$$
Thanks to assumptions (3.3) and (3.4), and using Young's inequality, we find
$$\varepsilon\frac{dz_\varepsilon(t)^2}{dt} \leq -2\alpha z_\varepsilon^2 + \alpha z_\varepsilon^2 + \frac{1}{\alpha}\Big[\frac{\partial f}{\partial v}(u_\varepsilon, v_\varepsilon)\, g(u_\varepsilon, v_\varepsilon)\Big]^2 \leq -\alpha z_\varepsilon^2 + K,$$
and thus
$$z_\varepsilon(t)^2 \leq z_0^2\, e^{-\alpha t/\varepsilon} + \frac{K}{\alpha} = \frac{f(u^0, v^0)^2}{\varepsilon^2}\, e^{-\alpha t/\varepsilon} + \frac{K}{\alpha}.$$
This proves (ii).
For (iii) , we use the Ascoli-Arzela theorem and conclude that for a subsequence, we have uniform
convergence. Then we may pass to the limit in the integral form. The full family converges because of
uniqueness of solutions of (3.2).
Exercise. If the initial data is well-prepared, meaning that f(u0, v0) = 0, prove that uε(t) is uniformly Lipschitzian on [0, ∞).
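As a sanity check on this limit, here is a minimal numerical sketch, not part of the notes, with the hypothetical choice f(u, v) = v − u (so (3.4) holds with α = 1 and F(v) = v) and g(u, v) = −u; the reduced equation is then dv̄/dt = −v̄, so v̄(t) = v0 e−t and uε(t) → F(v̄(t)) after the initial layer.

```python
import math

def simulate(eps, u0=5.0, v0=1.0, T=1.0, dt=1e-4):
    """Explicit Euler integration of eps*u' = v - u, v' = -u."""
    u, v, t = u0, v0, 0.0
    while t < T:
        u, v = u + dt / eps * (v - u), v + dt * (-u)
        t += dt
    return u, v

for eps in (1e-1, 1e-2, 1e-3):
    u, v = simulate(eps)
    vbar = math.exp(-1.0)  # reduced solution at t = 1 (v0 = 1)
    print(f"eps={eps:7.0e}  |u - F(vbar)| = {abs(u - vbar):.3e}")
```

The error at t = 1 shrinks proportionally to ε, while the initial data u0 = 5 is forgotten after a layer of duration O(ε).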
Exercise. Consider the system, with Lipschitz continuous f and g,
ε duε(t)/dt = f(uε(t), vε(t)), uε(t = 0) = u0 ∈ (0, 1),
dvε(t)/dt = g(uε(t), vε(t)), vε(t = 0) = v0,
assuming that f(0, v) = f(1, v) = 0 and f(u, v) ≥ 0 for u ∈ (0, 1).
1. Show that 0 < uε(t) < 1 for all times.
2. Show that ∫0∞ |duε(t)/dt| dt ≤ 1 and conclude that, after extracting a subsequence, uε(t) → ū(t) as ε → 0, almost everywhere and in all Lp(0, ∞), 1 ≤ p < ∞.
Lemma 3.2 For all T > 0, there is a constant M(T) such that
|uε(t)| + |vε(t)| ≤ M(T), 0 ≤ t ≤ T.
Proof. We can write
ε duε(t)/dt = f(0, vε(t)) + ∫0uε(t) ∂f(σ, vε(t))/∂u dσ,
(ε/2) d uε(t)²/dt ≤ uε(t) f(0, vε(t)) − α uε(t)² ≤ (α/2) uε(t)² + C[1 + vε(t)²] − α uε(t)²
because f(·, ·) is Lipschitz continuous. In the same way, we find
(1/2) d vε(t)²/dt ≤ C |vε(t)| [1 + |uε(t)| + |vε(t)|] ≤ (α/2) uε(t)² + C[1 + vε(t)²].
Adding up these two inequalities yields
(1/2) d/dt [vε(t)² + ε uε(t)²] ≤ C[1 + vε(t)²] ≤ C[1 + vε(t)² + ε uε(t)²]
and the Gronwall inequality gives that vε(t)² + ε uε(t)² is locally bounded.
To bound uε(t)² itself, we may now write the inequality for uε(t)² as
(ε/2) d uε(t)²/dt ≤ −(α/2) uε(t)² + C[1 + M(T)]
which proves the result because max((u0)², 2C[1 + M(T)]/α) is a supersolution with a larger initial data.
3.2 Slow-fast dynamics (general)
The assumption (3.4) is very strong and one can extend the result to more general systems, but with weaker conclusions. We keep the system
ε duε(t)/dt = f(uε(t), vε(t)), uε(t = 0) = u0,
dvε(t)/dt = g(uε(t), vε(t)), vε(t = 0) = v0,  (3.5)
and assume that
f, g are Lipschitz continuous and C1. (3.6)
Theorem 3.3 With the assumption (3.6) and assuming that uε(t), vε(t) are bounded on [0, T ], we have
∫0T |f(uε(t), vε(t))|² dt ≤ C(T, K) ε.
If, for all v, f(u, v) does not vanish on an interval in u, then after extraction of a subsequence,
uε(n) → ū(t) a.e., vε(n) → v̄(t) uniformly on [0, T ],
and the relations hold
dv̄(t)/dt = g(ū(t), v̄(t)), f(ū(t), v̄(t)) = 0.
Proof. 1. We define
F(u, v) = ∫0u f(σ, v) dσ
and write
∫0T |f(uε(t), vε(t))|² dt = ε ∫0T f(uε(t), vε(t)) duε(t)/dt dt.
Since
dF(uε(t), vε(t))/dt = f(uε(t), vε(t)) duε(t)/dt + ∂F(uε, vε)/∂v dvε(t)/dt,
we conclude that
∫0T |f(uε(t), vε(t))|² dt = ε ∫0T [dF(uε(t), vε(t))/dt − ∂F(uε, vε)/∂v g(uε, vε)] dt ≤ ε [F(uε(T), vε(T)) − F(u0, v0) + TK],
with K a bound for |∂F/∂v g| on the range of (uε, vε).
Because uε(t), vε(t) are locally uniformly bounded, we conclude.
2. We define
G(u, v) = ∫0u f(σ, v)² dσ.
We have
d G(uε(t), vε(t))/dt = f(uε(t), vε(t))² duε(t)/dt + 2 g(uε(t), vε(t)) ∫0uε(t) f(σ, vε(t)) ∂vf(σ, vε(t)) dσ
= f(uε(t), vε(t))³/ε + 2 g(uε(t), vε(t)) ∫0uε(t) f(σ, vε(t)) ∂vf(σ, vε(t)) dσ
and thus G(uε(t), vε(t)) is locally BV. As a consequence, for a subsequence, there is Ḡ(t) such that
G(uε(n)(t), vε(n)(t)) → Ḡ(t) a.e. as n → ∞.
But because vε is uniformly Lipschitzian, by the Ascoli theorem we also have vε(n)(t) → v̄(t), and thus
G(uε(n)(t), v̄(t)) → Ḡ(t) a.e. as n → ∞.
3. Because G(u, v) is strictly increasing in u from our assumption, the above convergence implies that uε(n)(t) converges a.e. to some ū(t) ∈ L∞(0, T ). The end follows immediately.
Exercise. (Morris-Lecar system) Consider the system
ε dgε(t)/dt = G(vε(t)) − gε(t),
dvε(t)/dt = g(vε(t)) + gε(t)(V − vε(t)).
Identify the limiting system and show convergence.
Exercise. (Bistable case) Consider the case when, for a smooth function a : R → (0, 1), we have
f(0, v) = f(1, v) = f(a(v), v) = 0,
f(u, v) < 0 for 0 < u < a(v), f(u, v) > 0 for a(v) < u < 1.
1. Prove that ∫0T |f(uε(t), vε(t))| dt ≤ C(T) ε.
2. Show that uε is uniformly of bounded total variation on [0, T ].
3. Show that, for a subsequence, as k → ∞, vεk(t) → v̄(t) (uniformly), and uεk(t) → ū(t) (in Lp(0, T ), with 1 ≤ p < ∞).
3.3 Enzymatic reactions
of the complex is a reversible reaction, but the conversion to P +E is an irreversible reaction, leading to
the representation
S + E ⇌ C = SE, with forward rate k+ and backward rate k−, and C −kp→ P + E.
Still using the law of mass action, this leads to the equations (we do not consider the space variable and molecular diffusion; this is not the point here)
d nS/dt = k− nC − k+ nS nE,
d nE/dt = (k− + kp) nC − k+ nS nE,
d nC/dt = k+ nS nE − (k− + kp) nC,
d nP/dt = kp nC.
This reaction comes with the initial data n0S > 0, n0E > 0 and n0C = n0P = 0. One can easily verify that the substrate is entirely converted into the product:
lim t→∞ nP(t) = n0S, lim t→∞ nS(t) = 0, lim t→∞ nE(t) = n0E, lim t→∞ nC(t) = 0,
a conclusion that is incorrect if n0E = 0. It is a fundamental observation in enzymatic reactions that a very small amount of enzyme is sufficient to convert the substrate into the product.
Two conservation laws hold true in these equations and this helps us to understand the above limit,
nE(t) + nC(t) = n0E + n0C = n0E,
nS(t) + nC(t) + nP(t) = n0S + n0C + n0P = n0S.  (3.9)
Because the properties of the dynamic do not depend on nP, the system is equivalent to a 2×2 system
d nS/dt = k− nC − k+ nS (n0E − nC),
d nC/dt = k+ nS (n0E − nC) − (k− + kp) nC.  (3.10)
Briggs and Haldane (1925) arrived at the conclusion that this system can be further simplified with the quasi-static approximation on nC. That means that nC(t) can be computed algebraically from nS(t), thus leading to a single ODE for nS(t):
d nS/dt = k− nC − k+ nS (n0E − nC),
0 = k+ nS (n0E − nC) − (k− + kp) nC,  (3.11)
still with the initial data nS(t = 0) = n0S (but no initial condition on nC). We can of course write this in a more condensed way. Since nC = k+ nS n0E / (k− + kp + k+ nS), after adding the two equations we find the typical enzymatic reaction kinetic equation
d nS/dt = −n0E kp k+ nS / (k− + kp + k+ nS).  (3.12)
In other words, the law of mass action does not apply in this approximation and one finds the so-called Michaelis-Menten law for irreversible reactions, a Hill function rather than a polynomial.
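This reduction is easy to observe numerically. The sketch below, not part of the notes, integrates the scaled full system and the scaled reduced equation side by side (these are the systems (3.13) written in the slow time s = n0E t, with the hypothetical rate values k+ = k− = kp = 1) and records the supremum of the difference, which should be O(ε) with ε = n0E.

```python
# k1p, km, kp stand for k_+, k_-, k_p; the values below are assumptions.
k1p, km, kp = 1.0, 1.0, 1.0

def sup_error(eps, s_end=5.0):
    """Sup over [0, s_end] of |uS_eps - uS| (full vs reduced system)."""
    ds = eps / 50.0
    uS_eps, uC_eps = 1.0, 0.0   # scaled initial data: n_S^0 = 1, n_C^0 = 0
    uS_red = 1.0
    err, s = 0.0, 0.0
    while s < s_end:
        fC = k1p * uS_eps * (1 - uC_eps) - (km + kp) * uC_eps
        uS_eps += ds * (km * uC_eps - k1p * uS_eps * (1 - uC_eps))
        uC_eps += ds / eps * fC
        # reduced Michaelis-Menten equation in the slow time
        uS_red += ds * (-kp * k1p * uS_red / (km + kp + k1p * uS_red))
        s += ds
        err = max(err, abs(uS_eps - uS_red))
    return err

for eps in (0.1, 0.05, 0.01):
    print(f"eps={eps:5.2f}  sup|uS_eps - uS| = {sup_error(eps):.4f}")
```

The printed errors shrink roughly linearly in ε, in agreement with the proposition below.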
Although not obvious at first glance, this result makes rigorous sense and we have the
Proposition 3.4 (Validity of Michaelis-Menten law) Assume n0E is sufficiently small. The solutions nS of (3.10) and ñS of (3.12) satisfy
sup t≥0 |nS(t) − ñS(t)| ≤ C(n0S) n0E
for some constant C(n0S) independent of n0E.
We recall the initial data nC(t = 0) = 0 and nS(t = 0) = ñS(t = 0) = n0S > 0.
Proof. We reduce the system to a slow-fast dynamics (see [13] for an extended presentation of the subject). To simplify the notations, we define the small parameter, the slow time and the new unknowns as
ε := n0E, s = εt, uεS(s) = nS(t) ≥ 0, uεC(s) = nC(t)/n0E ≥ 0.
Then the two systems are respectively written as
duεS/ds = k− uεC − k+ uεS (1 − uεC), ε duεC/ds = k+ uεS (1 − uεC) − (k− + kp) uεC,
duS/ds = k− uC − k+ uS (1 − uC), 0 = k+ uS (1 − uC) − (k− + kp) uC.  (3.13)
With these notations, the result in Proposition 3.4 is equivalent to
sup s≥0 |uεS(s) − uS(s)| ≤ Cε.
The situation is the favorable case of Section 3.1, with the addition that the convergence is uniform globally and not only locally.
The details of the proof are left to the reader because it is a general conclusion from the theory of slow-fast dynamics. We shall make use of the two consequences of (3.13)
d/ds [uεS + ε uεC] = −kp uεC, duS/ds = −kp uC.
We have from (3.9) and uεC(t = 0) = 0,
(i) 0 ≤ uεC ≤ 1, 0 ≤ uC ≤ 1,
(ii) 0 ≤ uεS ≤ n0S, 0 ≤ uS ≤ n0S,
(iii) 0 ≤ uεC ≤ uM < 1, 0 ≤ uC ≤ uM < 1 with uM := k+ n0S/(k+ n0S + k− + kp) independent of ε,
(iv) |duεC(s)/ds| ≤ K for some K(n0S),
(v) from step (iv), introduce the bounded quantity rε(s) := −duεC(s)/ds + k+ uεC(s)², and
Rε(s) = uεS(s) + ε uεC(s) − uS(s).
Compute that
uεC(s) = [ε rε(s) − k+ uεC Rε + k+ uεS] / (k+ uS + k− + kp),
uεC − uC = [ε rε + k+ (1 − uεC) Rε − ε k+ uεC] / (k+ uS + k− + kp).
Conclude that
dRε(s)/ds + kp k+ (1 − uεC)/(k+ uS + k− + kp) Rε(s) = ε kp (−rε + k+ uεC)/(k+ uS + k− + kp).
(vi) Using steps (iii) and (v), conclude that |Rε(s)| ≤ Cε for a constant C.
Hints. (ii) uεS + ε uεC decreases.
(iii) Write ε duεC/ds ≤ k+ n0S (1 − uεC) − (k− + kp) uεC.
(iv) Write the equation for z(s) = duεC(s)/ds as ε dz/ds + λ(s) z(s) = µ(s) with λ ≥ k− + kp and µ bounded.
(vi) Write (1/2) dRε(s)²/ds + A Rε(s)² ≤ ε B |Rε(s)|, with A > 0, B > 0 two constants.
3.4 Belousov-Zhabotinskii reaction
Figure 3.2: Periodic regime of the system of ODEs (3.14) that mimics the Belousov-Zhabotinskii reaction. The parameters are kA = kB = 1 and kC = 2 (left), kC = 20 (right). The solution nB is the top (red) curve, nA is in blue and nC in green. The abscissa is time.
For all complex chemical reactions, the detailed description of all elementary reactions is not realistic. Then, as for the enzymatic reaction, one simplifies the system by assuming that some reactions are much faster than others, or that some components are in higher concentrations than others. These manipulations may violate mass conservation, and the entropy inequality may be lost; this is a condition to obtain pattern formation as in the example of the CIMA reaction in Section 7.5.2.
The famous Belousov-Zhabotinskii reaction is known as the first historical example to produce periodic patterns. Discovered in 1951 by Belousov, it remained unpublished because no respectable chemist at that time could accept this idea. Belousov and Zhabotinskii received the Lenin Prize in 1980, a decade after Belousov's death and after the discovery in the USA of other periodic reactions, simpler to reproduce.
We borrow a simple example from A. Turner2, this avoids entering the details of a real chemical reaction
(refer to [32] for a complete treatment). The simple example consists of three reactants denoted as A,
B, C, and three irreversible reactions (because of this feature, entropy dissipation and relaxation do not
hold).
A + B −kA→ 2A, B + C −kB→ 2B, C + A −kC→ 2C.
Therefore, following the general formalism in (2.6), the system is written as
d nA/dt = nA (kA nB − kC nC),
d nB/dt = nB (kB nC − kA nA),
d nC/dt = nC (kC nA − kB nB).  (3.14)
Notice that
nA(t) > 0, nB(t) > 0, nC(t) > 0, nA(t) + nB(t) + nC(t) = n0A + n0B + n0C.
Numerical simulations are presented in Figure 3.2. For kC = 20, which is very large compared to kA and kB, one can estimate that the equations imply kC nA nC = O(1) and thus nA nC ≈ 0. This is observed in the simulation on the right.
2A. Turner, A Simple Model of the Belousov-Zhabotinsky Reaction from First Principles, 2009,
http://www.aac.bartlett.ucl.ac.uk/processing/samples/bzr.pdf
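The positivity and conservation properties stated above can be observed on a direct simulation; the short sketch below, not part of the notes, uses the parameters of the left panel of Figure 3.2 (kA = kB = 1, kC = 2) and hypothetical initial data.

```python
# Rates as in Figure 3.2 (left); initial data below are assumptions.
kA, kB, kC = 1.0, 1.0, 2.0

def step(nA, nB, nC, dt):
    """One explicit Euler step of system (3.14)."""
    dA = nA * (kA * nB - kC * nC)
    dB = nB * (kB * nC - kA * nA)
    dC = nC * (kC * nA - kB * nB)
    return nA + dt * dA, nB + dt * dB, nC + dt * dC

nA, nB, nC = 1.0, 0.5, 0.5
total0 = nA + nB + nC
dt, T = 1e-4, 20.0
for _ in range(int(T / dt)):
    nA, nB, nC = step(nA, nB, nC, dt)

print(nA, nB, nC, nA + nB + nC)  # the total mass stays equal to total0
```

Note that the Euler increments dA + dB + dC cancel exactly, so the scheme conserves nA + nB + nC up to rounding errors, and the multiplicative form of each right-hand side preserves positivity for small dt.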
Chapter 4
Some material for parabolic equations
4.1 Boundary conditions
When working in a domain Ω (a connected open set with sufficiently smooth boundary), we encounter
two elementary types of boundary conditions. The reaction-diffusion systems (1.1) or (2.1) are completed
either by the Dirichlet or the Neumann boundary conditions.
• Dirichlet boundary condition on ∂Ω.
ni = 0 on ∂Ω.
This boundary condition means that individuals or molecules, when they reach the boundary are defini-
tively removed from the population Ω, which therefore diminishes. This interpretation stems from the
Brownian motion underlying the diffusion equation (see below). But we can see that indeed, if we consider
the conservative chemical reactions (2.1) with (2.3), then
d/dt ∫Ω ∑i=1..I ni(t, x) dx = ∑i=1..I Di ∫∂Ω ∂ni/∂ν,
with ν the outward normal to the boundary. But with ni(t, x) ≥ 0 in Ω and ni(t, x) = 0 on ∂Ω, we have ∂ni/∂ν ≤ 0, therefore the total mass diminishes, for all t ≥ 0,
d/dt ∫Ω ∑i=1..I ni(t, x) dx ≤ 0, ∫Ω ∑i=1..I ni(t, x) dx ≤ ∫Ω ∑i=1..I n0i(x) dx.
• Neumann boundary condition on ∂Ω.
∂ni/∂ν = 0 on ∂Ω,
still with ν the outward normal to the boundary. This means that individuals or molecules are reflected
when they hit the boundary. In the computation above for (2.1) with (2.3), the normal derivative vanishes
and we find directly mass conservation
d/dt ∫Ω ∑i=1..I ni(t, x) dx = 0, ∫Ω ∑i=1..I ni(t, x) dx = ∫Ω ∑i=1..I n0i(x) dx.
• Mixed or Robin boundary condition on ∂Ω.
α ni + ∂ni/∂ν = 0 on ∂Ω, α ≥ 0.
When reaching the boundary, there is either reflection or removal with a probability related to α (α = 0 no removal, α = ∞ no reflection). The mass balance reads
d/dt ∫Ω ∑i=1..I ni(t, x) dx = ∑i=1..I Di ∫∂Ω ∂ni/∂ν = −α ∑i=1..I Di ∫∂Ω ni < 0.
• Full space. There is a large difference between the case of the full space Rd and the case of a bounded domain. This can be seen from the results of spectral analysis in Section 4.2, which do not hold in the same form in Rd. The countable spectral basis wk(x) is replaced by e−ix·ξ, which leads to the Fourier transform.
4.2 The spectral decomposition of Laplace operators (Dirichlet)
The spectral decomposition of the Laplace operator is going to play an important role in subsequent
chapters, therefore we introduce some material now. We begin with the Dirichlet boundary condition
−∆u = f in Ω, u = 0 on ∂Ω.  (4.1)
Theorem 4.1 (Dirichlet) Consider a bounded connected open set Ω; then there is a spectral basis (λk, wk)k≥1 for (4.1), that is,
(i) λk is a non-decreasing sequence with 0 < λ1 < λ2 ≤ λ3 ≤ ... ≤ λk ≤ ... and λk → ∞ as k → ∞,
(ii) (λk, wk) are eigenelements, i.e., for all k ≥ 1 we have −∆wk = λk wk in Ω, wk = 0 on ∂Ω,
(iii) (wk)k≥1 is an orthonormal basis of L²(Ω),
(iv) we have w1(x) > 0 in Ω and the first eigenvalue λ1 is simple; for k ≥ 2, the eigenfunction wk changes sign and the eigenvalue can be multiple.
Remark 1. The hypothesis that Ω is connected is simply used to guarantee that the first eigenvalue
is simple and the corresponding eigenfunction is positive in Ω. Otherwise, we have several additional
nonnegative eigenfunctions with a first eigenfunction in one component and 0 in the others.
Remark 2. The sequence wk is also orthogonal in H¹0(Ω) (for Dirichlet boundary conditions) and in H¹(Ω) for Neumann boundary conditions (see below). Indeed, if wk is orthogonal to wj in L²(Ω), then from the Laplace equation for wk and the Stokes formula,
∫Ω ∇wj · ∇wk = λk ∫Ω wj wk = 0.
Therefore, orthogonality in L² implies orthogonality in H¹0 or H¹.
Remark 3. Note that for k ≥ 2, the eigenfunction wk changes sign because ∫Ω w1 wk = 0 and w1 has a sign.
Remark 4. Notice the formula
∑i=1∞ wi(x) wi(x′) = δ(x − x′).
Indeed, this means that for all test functions ϕ ∈ C(Ω̄) ∩ L²(Ω),
∑i=1∞ wi(x) ∫Ω wi(x′) ϕ(x′) dx′ = ∫Ω ϕ(x′) δ(x − x′) dx′ = ϕ(x),
which is the decomposition of ϕ on the orthonormal basis (wi)i≥1.
Proof of Theorem 4.1. We only sketch the proof. For additional matters see [15] Ch. 5, [6] p. 96. The result is based on two ingredients. (i) The spectral decomposition of self-adjoint compact linear mappings on Hilbert spaces is a general theory that extends the case of symmetric matrices. (ii) The simplicity of the first eigenvalue with a positive eigenfunction is also a consequence of the Krein-Rutman theorem (the infinite dimensional version of the Perron-Frobenius theorem).
First step. First eigenelements. In the Hilbert space H = L²(Ω), we consider the linear subspace V = H¹0(Ω). Then we define the minimum on V
λ1 = min { ∫Ω |∇u|² dx : u ∈ V, ∫Ω |u|² = 1 }.  (4.2)
This minimum is achieved because a minimizing sequence (un) will converge strongly in H and weakly in V to w1 ∈ V with ∫Ω |w1|² = 1, by the Rellich compactness theorem (see [10, 6]). Therefore, ∫Ω |∇w1|² dx ≤ lim inf n→∞ ∫Ω |∇un|² dx = λ1. This implies equality and that λ1 > 0. The variational principle associated to this minimization problem says that
−∆w1 = λ1 w1,
which implies that w1 is smooth in Ω (by elliptic regularity).
Second step. Positivity. Because in V, |∇|u||² = |∇u|² a.e., the construction above tells us that |w1| is also a first eigenfunction and we may assume that w1 is nonnegative. By the strong maximum principle applied to the Laplace equation, we obtain that w1 is positive inside Ω (because it is connected). This also proves that all the eigenfunctions associated with λ1 have a sign in Ω because, in a connected open set, w1 cannot satisfy the three properties (i) be smooth, (ii) change sign and (iii) have |w1| positive.
Third step. Simplicity. Finally, we can deduce the simplicity of this eigenvalue because if there were two independent eigenfunctions, a linear combination would allow us to build one which changes sign (by orthogonality to w1 for example), and this is impossible by the above positivity argument.
Fourth step. Other eigenelements. We may iterate the construction. Denote by Ek the finite dimensional subspace generated by the first k eigenspaces. We work on the closed subspace E⊥k of H, and we may define
λk+1 = min { ∫Ω |∇u|² dx : u ∈ E⊥k ∩ V, ∫Ω |u|² = 1 }.
This minimum is achieved for the same reason as before. The variational form gives that the minimizers are solutions of the (k+1)-th eigenproblem. They can form a multidimensional space, but always a finite dimensional one; otherwise we would have an infinite dimensional subspace of L²(Ω) whose unit ball is compact thanks to the Rellich compactness theorem, since ∫Ω |∇u|² dx ≤ λk+1 on this ball.
Also λk → ∞ as k → ∞ because one can easily build (with oscillations or sharp gradients) functions satisfying ∫Ω |un|² = 1 and ∫Ω |∇un|² dx ≥ n.
4.3 The spectral decomposition of Laplace operators (Neumann)
The same method applies to the Neumann boundary condition
−∆u = f in Ω, ∂u/∂ν = 0 on ∂Ω.  (4.3)
We state without proof the
Theorem 4.2 (Neumann) Consider a C¹ bounded connected open set Ω; then there is a spectral basis (λk, wk)k≥1 for (4.3), i.e.,
(i) λk is a non-decreasing sequence with 0 = λ1 < λ2 ≤ λ3 ≤ ... ≤ λk ≤ ... and λk → ∞ as k → ∞,
(ii) (λk, wk) are eigenelements, i.e., for all k ≥ 1 we have −∆wk = λk wk in Ω, ∂wk/∂ν = 0 on ∂Ω,
(iii) (wk)k≥1 is an orthonormal basis of L²(Ω),
(iv) w1(x) = 1/|Ω|^(1/2) > 0, and for k ≥ 2, the eigenfunction wk changes sign.
4.4 The Poincaré and Poincaré-Wirtinger inequalities
As an outcome of the proof in Section 4.2, we recover the Poincaré inequality for Ω a bounded domain:
λD1 ∫Ω |u|² ≤ ∫Ω |∇u|² dx, ∀u ∈ H¹0(Ω),  (4.4)
where λD1 denotes the first eigenvalue of the Laplace-Dirichlet equation (4.1). This inequality is just another statement equivalent to (4.2).
The Poincaré-Wirtinger inequality states that
λN2 ∫Ω |u − ⟨u⟩|² ≤ ∫Ω |∇u|² dx, ∀u ∈ H¹(Ω),  (4.5)
where λN2 is the first positive eigenvalue of the Laplace-Neumann equation (4.3) and
⟨u⟩ = (1/|Ω|) ∫Ω u(x) dx
is the L² projection on the first eigenfunction with Neumann boundary conditions (a constant). Therefore this statement is nothing but the characterization of the second eigenvalue given in the proof in Section 4.2.
4.5 Rectangles: explicit solutions
In one dimension, one can compute explicitly the spectral basis because the solutions of −u′′ = λu are all known. On Ω = (0, 1) we have
wk = ak sin(kπx), λk = (kπ)², (Dirichlet)
wk = bk cos((k − 1)πx), λk = ((k − 1)π)², (Neumann)
where ak and bk are normalization constants that ensure ∫01 |wk|² = 1.
In two dimensions, on a rectangle (0, L1) × (0, L2), we see that the family is better described by two indices k ≥ 1 and l ≥ 1, and we have
wkl = akl sin(kπx1/L1) sin(lπx2/L2), λkl = ((k/L1)² + (l/L2)²) π², (Dirichlet)
wkl = bkl cos((k − 1)πx1/L1) cos((l − 1)πx2/L2), λkl = (((k − 1)/L1)² + ((l − 1)/L2)²) π². (Neumann)
These examples indicate that
• The first positive eigenvalue is of the order of 1/max(L1, L2)² = 1/diam(Ω)². For a large domain (even with a large aspect ratio) we can expect that the first eigenvalues are close to zero and that the eigenvalues are close to each other.
• Except the first one, eigenvalues can be multiple (take L1/L2 ∈ N).
• Large eigenvalues are associated with highly oscillating eigenfunctions.
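These explicit values can be checked numerically; the sketch below, not part of the notes, discretizes −u′′ on (0, 1) by standard finite differences with Dirichlet conditions and compares the lowest eigenvalues of the resulting matrix with λk = (kπ)².

```python
import numpy as np

N = 200                      # number of interior grid points (assumed)
h = 1.0 / (N + 1)
# Tridiagonal matrix of -u'' with homogeneous Dirichlet conditions
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
lam = np.linalg.eigvalsh(A)  # eigenvalues in increasing order

for k in (1, 2, 3):
    print(f"lambda_{k}: discrete = {lam[k-1]:.3f}, exact = {(k*np.pi)**2:.3f}")
```

The discrete eigenvalues are (4/h²) sin²(kπh/2), which converge to (kπ)² as h → 0, so the agreement improves as N grows.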
Figure 4.1: Two sample paths of one-dimensional Brownian motion according to the approximation (4.6). Abscissae tk, ordinates Xk.
Figure 4.2: Two sample paths of two-dimensional Brownian motion simulated according to (4.6).
4.6 Brownian motion and the heat equation
The relationship between Brownian motion and the heat equation explains, in a simple framework, why a random walk of individuals leads to the term −D∆n for the population density n(t, x). It is rather difficult to construct Brownian motion rigorously. However, it is easy to give an intuitive approximation, which is sufficient to build the probability law of Brownian trajectories.
To do so, we follow the Euler numerical method for an ODE with a time step ∆t that uses discrete times tk = k∆t.
In a probability space with a measure denoted dP(ω), the initial position X0 ∈ Rd being given with a probability law n0(x), we define iteratively a discrete trajectory Xk(ω) ∈ Rd as follows. We set
Xk+1(ω) = Xk(ω) + √∆t Yk(ω)  (4.6)
where Yk(ω) denotes a d-dimensional random variable independent of Xk (one speaks of independent increments) and with the normal law N(y). We recall that 'independent' means
E f(Xk, Yk) := ∫ f(Xk(ω), Yk(ω)) dP(ω) = ∫Rd×Rd f(x, y) dnk(x) N(y) dy  (4.7)
with
• N(y) = (1/(2π)^(d/2)) e^(−|y|²/2), the normal law,
• dnk(x) the law of the process Xk, defined by E(Φ(Xk(ω))) = ∫ Φ(x) dnk(x).
Two simulations are presented. Figure 4.1 depicts, in the one dimensional case (tk, Xk), two iterates
Xk (for two different ω). Figure 4.2 shows two iterates Xk in two dimensions.
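The scheme (4.6) is easy to implement; the following sketch, not part of the notes, simulates many independent one-dimensional paths started at X0 = 0 and checks that the empirical variance at time t is close to t, consistent with the limiting heat equation ∂n/∂t = (1/2)∆n derived below (the step size and path count are assumptions).

```python
import random

random.seed(0)
dt, n_steps, n_paths = 0.01, 100, 20000
X = [0.0] * n_paths
for _ in range(n_steps):
    # one Euler step (4.6) for each path, with Y ~ N(0, 1)
    X = [x + dt**0.5 * random.gauss(0.0, 1.0) for x in X]

t = n_steps * dt             # final time t = 1
mean = sum(X) / n_paths
var = sum(x * x for x in X) / n_paths - mean**2
print(f"mean = {mean:.3f} (exact 0), variance = {var:.3f} (exact {t})")
```

The variance t corresponds to the fundamental solution of the heat equation with diffusion coefficient 1/2.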
Our purpose is to compute the law dnk(x) in the limit ∆t → 0. To do so, we first use a C³ function u, with u, Du, D²u and D³u bounded. We use the Taylor expansion of u to compute
u(Xk+1) = u(Xk) + √∆t Du(Xk)·Yk + (∆t/2) D²u(Xk)·(Yk, Yk) + O(∆t |Yk|²)^(3/2),
and then
E u(Xk+1) = E u(Xk) + (∆t/2) E D²u(Xk)·(Yk, Yk) + O(∆t)^(3/2).  (4.8)
Indeed, because N(·) is radially symmetric, the formula (4.7) used with f(x, y) = Du(x)·y yields
E Du(Xk)·Yk = E Du(Xk)·∫Rd y N(y) dy = 0.
Because ∫Rd yi yj N(y) dy = δij, a further use of the formulas (4.8), (4.7) and the definition of the probability law mentioned earlier gives
∫Rd u(x) dnk+1(x) = ∫Rd u(x) dnk(x) + (∆t/2) ∫Rd ∆u(x) dnk(x) + O(∆t)^(3/2).
After dividing by ∆t and integrating by parts the term ∆u, we may rewrite this as
∫Rd u [(dnk+1 − dnk)/∆t − (1/2) ∆dnk] = O(∆t)^(1/2).
This holds true for any smooth function u, which means that, in the weak sense,
(dnk+1 − dnk)/∆t − (1/2) ∆dnk = O(∆t)^(1/2).
In the limit, in the sense of distributions as ∆t → 0, we obtain a probability law with a density n(t, x) that satisfies
∂n(t, x)/∂t − (1/2) ∆n(t, x) = 0, n(0, x) = n0(x).
In particular, even though n0 is a probability measure, it follows from the regularizing effects of the heat equation that n(t, x) is a (smooth) function and not only a measure as soon as t > 0.
The equation for n(t, x) (the heat equation here) is generically called the Kolmogorov equation for the
limiting process (Brownian motion).
Exercise. Prove that the density of the probability law nk satisfies the integral equation
nk+1(x) − nk(x) = ∫ [nk(x + √∆t y) − nk(x)] N(y) dy.  (4.9)
Derive the heat equation for the limit n(t, x), as ∆t → 0, if it exists. Show that this derivation uses only two y-moments of N and not the full normal law.
See also the scattering equation in Chapter ??.
Chapter 5
Relaxation, perturbation and entropy methods
Solutions of nonlinear parabolic equations and systems presented in Chapter 1 can exhibit various and sometimes complex behaviors, a phenomenon usually called pattern formation. In which circumstances can such complex behavior happen? A first answer is given in this chapter by indicating some conditions for relaxation to trivial steady states; then nothing interesting can happen! We present relaxation results by perturbation methods (small nonlinearity) and entropy methods. Indeed, before we can understand how patterns occur in parabolic systems, a necessary step is to understand why patterns should not appear in principle! Solutions of parabolic equations undergo regularization effects that lead them to constant or simple states. Several asymptotic results can be stated in this direction and we present some of them in this chapter. Sections 5.1 and 5.2 have been very much influenced by the book [21].
5.1 Asymptotic stability by perturbation methods (Dirichlet)
The simplest possible long term behavior for a semilinear parabolic equation is simply the relaxation towards a stable steady state (that we choose to be 0 here). This is possible when the following two features are combined:
• the nonlinear part is a small perturbation of a main (linear differential) operator,
• this main linear operator has a positive dominant eigenvalue.
Of course this simple relaxation behavior is rather boring and appears as the opposite of pattern formation when Turing instability occurs; this will be described later, see Chapter 7.
To illustrate this, we consider, in a bounded domain Ω, the semi-linear heat equation with a Dirichlet boundary condition
∂ui(t, x)/∂t − Di ∆ui(t, x) = Fi(t, x; u1, ..., uI), 1 ≤ i ≤ I, x ∈ Ω,
ui(t, x) = 0, x ∈ ∂Ω,
ui(t = 0, x) = u0i(x) ∈ L²(Ω).  (5.1)
We assume that F (t, x; 0) = 0, so that u ≡ 0 is a steady state solution. Is it a stable and attractive state?Is the number of individuals leaving the domain through ∂Ω compensated by the birth term?
We shall use here a technical result. The Laplace operator (with Dirichlet boundary condition) has a first eigenvalue λ1 > 0, associated with a positive eigenfunction w1(x), which is unique up to multiplication by a constant,
−∆w1 = λ1 w1, w1 ∈ H¹0(Ω).  (5.2)
This eigenvalue is characterized as being the best constant in the Poincaré inequality (see Section 4.2 or the book [10])
λ1 ∫Ω |v(x)|² ≤ ∫Ω |∇v|², ∀v ∈ H¹0(Ω),
with equality only for v = µw1, µ ∈ R.
Theorem 5.1 (Asymptotic stability) Assume mini Di = D > 0 and that there is a (small) constant L > 0 such that, ∀u ∈ RI, t ≥ 0, x ∈ Ω,
|F(t, x; u)| ≤ L|u|, or more generally, F(t, x; u) · u ≤ L|u|²,  (5.3)
δ := Dλ1 − L > 0;  (5.4)
then ui(t, x) vanishes exponentially as t → ∞, namely,
∫Ω |u(t, x)|² ≤ e−2δt ∫Ω |u0(x)|².  (5.5)
Proof. We multiply the parabolic equation (5.1) by ui and integrate by parts
(1/2) d/dt ∫Ω ui(t)² + Di ∫Ω |∇ui(t)|² = ∫Ω ui(t) Fi(t, x; u(t)),
and using the characterization (5.2) of λ1, we conclude
(1/2) d/dt ∫Ω ∑i=1..I ui(t)² + Dλ1 ∫Ω ∑i=1..I ui(t)² ≤ L ∫Ω ∑i=1..I ui(t)².
The result follows from the Gronwall lemma.
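To see the mechanism at work, here is a minimal numerical sketch, not part of the notes, in one space dimension with an explicit finite-difference scheme; the setting is an assumption: Ω = (0, 1), D = 1, λ1 = π², and the nonlinearity F(u) = L sin(u), which satisfies (5.3) with constant L = 2 < π², so δ = π² − L > 0.

```python
import math

N = 100
h = 1.0 / (N + 1)
dt = 0.4 * h * h            # explicit-scheme stability: dt < h^2/2
L = 2.0
delta = math.pi ** 2 - L
T = 0.2

u = [math.sin(math.pi * (i + 1) * h) for i in range(N)]  # initial data

def l2sq(v):
    """Approximation of the squared L2(0,1) norm."""
    return h * sum(x * x for x in v)

e0 = l2sq(u)
steps = int(T / dt)
for _ in range(steps):
    u = [u[i]
         + dt * (((u[i - 1] if i > 0 else 0.0) - 2 * u[i]
                  + (u[i + 1] if i < N - 1 else 0.0)) / (h * h)
                 + L * math.sin(u[i]))
         for i in range(N)]

bound = e0 * math.exp(-2 * delta * steps * dt)
print(l2sq(u), bound)       # the decay respects the estimate (5.5)
```

Since sin(u) ≤ u for u ≥ 0, the computed norm in fact decays slightly faster than the theoretical bound e−2δt ∫|u0|².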
Exercise. Consider a smooth bounded domain Ω ⊂ Rd, a real number µ > 0 and two smooth and Lipschitz continuous functions R(u, v), Q(u, v) such that R(0, 0) = Q(0, 0) = 0. Let (u(t, x), v(t, x)) be solutions of the system
∂u/∂t − ∆u = R(u(t, x), v(t, x)), t ≥ 0, x ∈ Ω,
u(t, x) = 0 on ∂Ω,
∂v/∂t + µv = Q(u(t, x), v(t, x)).
1. Recall the Poincaré inequality for u.
2. Assume |R(u, v)| ≤ L(|u| + |v|) and |Q(u, v)| ≤ L(|u| + |v|); give a size condition on L such that for all initial data the solution (u, v) converges exponentially to (0, 0) as t → ∞.
Solution. A simple answer is min(λ1, µ) − 2L =: δ > 0. A more elaborate condition is based on the positive real number a such that λ1 − µ = a − a−1 and reads λ1 − L − a−1 = µ − L − a =: δ > 0.
5.2 Asymptotic stability by perturbation methods (Neumann)
The next simplest possible long term behavior for a parabolic equation is relaxation towards a homogeneous (i.e., independent of x) solution, which is not constant in time. This is possible when two features are combined:
• the nonlinear part is a small perturbation of a main (differential) operator,
• this main operator has a non-empty kernel (0 is the first eigenvalue).
Consider again, in a bounded domain Ω with outward unit normal ν, the semi-linear parabolic equation with a Neumann boundary condition
∂ui(t, x)/∂t − Di ∆ui(t, x) = Fi(t; u1, ..., uI), 1 ≤ i ≤ I, x ∈ Ω,
∂ui(t, x)/∂ν = 0, x ∈ ∂Ω,
ui(t = 0, x) = u0i(x) ∈ L²(Ω).  (5.6)
The Laplace operator (with a Neumann boundary condition) has λ1 = 0 as first eigenvalue, associated with the constant eigenfunction w1(x) = 1/√|Ω|. We shall use its second eigenvalue λ2, characterized by the Poincaré-Wirtinger inequality (see [10] and Section 4.2)
λ2 ∫Ω |v(x) − ⟨v⟩|² ≤ ∫Ω |∇v|², ∀v ∈ H¹(Ω),  (5.7)
with the average of v(x) defined as
⟨v⟩ = (1/|Ω|) ∫Ω v.
Note that this is also the L2 projection on the eigenspace spanned by w1.
Theorem 5.2 (Relaxation to a homogeneous solution) Assume mini Di = D > 0 and
(F(u) − F(v)) · (u − v) ≤ L|u − v|², ∀u, v ∈ RI,  (5.8)
δ := Dλ2 − L > 0;  (5.9)
then ui(t, x) tends to become homogeneous with an exponential rate, namely,
∫Ω |u(t, x) − ⟨u(t)⟩|² ≤ e−2δt ∫Ω |u0(x) − ⟨u0⟩|².  (5.10)
Proof. Integrating equation (5.6) in x, we find
d⟨ui⟩/dt = ⟨Fi(t; u)⟩,
therefore,
d[ui − ⟨ui⟩]/dt − Di ∆[ui − ⟨ui⟩] = Fi(t; u) − ⟨Fi(t; u)⟩.
Thus, using assumption (5.8), we find
(1/2) d/dt ∫Ω |u − ⟨u⟩|² + D ∫Ω |∇(u − ⟨u⟩)|² = ∫Ω (F(t; u) − ⟨F(t; u)⟩) · (u − ⟨u⟩)
= ∫Ω F(t; u) · (u − ⟨u⟩)
= ∫Ω (F(t; u) − F(t; ⟨u⟩)) · (u − ⟨u⟩)
≤ L ∫Ω |u − ⟨u⟩|².
Therefore, with the notation (5.9),
d/dt ∫Ω |u − ⟨u⟩|² ≤ −2δ ∫Ω |u − ⟨u⟩|².
The result (5.10) follows directly.
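A sketch analogous to the Dirichlet case, not part of the notes, illustrates this theorem in one dimension; the setting is an assumption: Ω = (0, 1) with Neumann conditions, D = 1, λ2 = π², and the scalar nonlinearity F(u) = Lu with L = 2, which satisfies (5.8). The oscillation of u around its average decays even though u itself grows.

```python
import math

N = 100
h = 1.0 / N                 # cell-centered grid x_i = (i + 1/2) h
dt = 0.4 * h * h
L = 2.0
delta = math.pi ** 2 - L
T = 0.15

u = [2.0 + math.cos(math.pi * (i + 0.5) * h) for i in range(N)]

def osc(v):
    """Squared L2 distance of v to its average."""
    m = sum(v) / len(v)
    return h * sum((x - m) ** 2 for x in v)

o0 = osc(u)
steps = int(T / dt)
for _ in range(steps):
    # Neumann condition implemented by reflecting the end cells
    u = [u[i]
         + dt * (((u[i - 1] if i > 0 else u[0]) - 2 * u[i]
                  + (u[i + 1] if i < N - 1 else u[-1])) / (h * h)
                 + L * u[i])
         for i in range(N)]

bound = o0 * math.exp(-2 * delta * steps * dt)
print(osc(u), bound)        # oscillation decays as in (5.10); <u> grows
```

The average ⟨u⟩ evolves as d⟨u⟩/dt = L⟨u⟩ and grows, while the distance to the average obeys the estimate (5.10).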
Exercise. Explain why we cannot allow a dependence on x in the nonlinearity Fi(t; u).
Exercise. Let v ∈ H²(Ω) satisfy ∂v/∂ν = 0 on ∂Ω (Neumann condition).
1. Prove, using the Poincaré-Wirtinger inequality (5.7), that
λ2 ∫Ω |∇v|² ≤ ∫Ω |∆v|².  (5.11)
2. In the context of Theorem 5.2, assume that ∑i,j=1..I DjFi(t; u) ξi ξj ≤ L|ξ|², ∀ξ ∈ RI. Using the above inequality, prove that
∫Ω ∑i |∇ui(t, x)|² ≤ e−2δt ∫Ω ∑i |∇u0i(x)|².  (5.12)
3. Deduce a variant of Theorem 5.2.
Hints. 1. Integrate by parts the expression ∫Ω |∇v|². 2. Use the equation for d(∂ui/∂xl)/dt.
5.3 Entropy and relaxation
We have seen in Lemma 2.3 that reaction kinetic equations as in (2.10) are endowed with an entropy
(2.12). This originates from the microscopic N -particle stochastic systems from which reaction kinetics
are derived (see Section ??).
This entropy inequality is also very useful because it can be used to show relaxation to the steady state, independently of the size of the constants k1, k2. To do that we consider a Neumann boundary condition in a bounded domain Ω:
∂n1/∂t − D1 ∆n1 + k1 n1 = k2 (n2)², t ≥ 0, x ∈ Ω,
∂n2/∂t − D2 ∆n2 + 2k2 (n2)² = 2k1 n1,
∂n1/∂ν = ∂n2/∂ν = 0 on ∂Ω.  (5.13)
Theorem 5.3 The solutions of (5.13), with n0i ≥ 0, n0i ∈ L1(Ω), n0i ln(n0i) ∈ L1(Ω), satisfy
ni(t, x) → Ni as t → ∞,
with Ni the constants defined uniquely by
2N1 + N2 = (1/|Ω|) ∫Ω [2n01(x) + n02(x)] dx, k2 (N2)² = k1 N1.
Proof. The entropy S(t, x) = n1 [ln(k1 n1) − 1] + n2 [ln(k21/2 n2) − 1] satisfies, following Lemma 2.3,
d/dt ∫Ω S(t, x) dx = −∫Ω [D1 |∇n1|²/n1 + D2 |∇n2|²/n2] dx − ∫Ω [ln(k2 n2²) − ln(k1 n1)] [k2 (n2)² − k1 n1] dx.
And, because S is bounded from below, it also gives a bound on the entropy dissipation
∫0∞ ∫Ω [D1 |∇n1|²/n1 + D2 |∇n2|²/n2] dx dt + ∫0∞ ∫Ω [ln(k2 n2²) − ln(k1 n1)] [k2 (n2)² − k1 n1] dx dt ≤ C(n01, n02).  (5.14)
This is again a better integrability estimate in x than the L1 log estimate (derived from mass conservation) because of the quadratic term (n2)².
From a qualitative point of view, this says that the chemical reaction should lead the system to a spatially homogeneous equilibrium state. Indeed, formally at least, the integral (5.14) can be bounded only if
∇n1 ≈ 0, ∇n2 ≈ 0 and k2 (n2)² ≈ k1 n1 as t → ∞.
The first conclusion says that the dynamic becomes homogeneous in x (but may depend on t). The second conclusion, combined with the mass conservation relation (2.11), shows that there is a unique possible asymptotic homogeneous state, because the constant state satisfies k2 (N2)² = k1 N1 = (k1/2)(M/|Ω| − N2), with M the conserved total mass, and this equation has a unique positive root.
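The uniqueness of this root can be checked by hand or numerically. Eliminating N1 from 2N1 + N2 = M/|Ω| and k2 N2² = k1 N1 gives 2k2 N2² + k1 N2 − k1 M/|Ω| = 0, an increasing function of N2 ≥ 0 with a single sign change; the sketch below, not in the notes and with assumed values k1 = k2 = 1 and M/|Ω| = 3, finds the positive root by bisection.

```python
# Assumed values; mass stands for M/|Omega|.
k1, k2, mass = 1.0, 1.0, 3.0

def phi(N2):
    """Increasing on [0, inf): negative at 0, positive for large N2."""
    return 2 * k2 * N2**2 + k1 * N2 - k1 * mass

lo, hi = 0.0, mass
for _ in range(60):          # bisection to machine precision
    mid = 0.5 * (lo + hi)
    if phi(mid) > 0:
        hi = mid
    else:
        lo = mid
N2 = 0.5 * (lo + hi)
N1 = (mass - N2) / 2
print(N1, N2)                # here the root is N1 = N2 = 1
```

With these values the quadratic is 2N2² + N2 − 3 = 0, whose positive root is N2 = 1, and indeed k2 N2² = k1 N1.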
Exercise. Consider (2.10) with a Neumann boundary condition and set
S(t, x) = (1/k1) Σ1(k1 n1(t, x)) + (1/(2 k21/2)) Σ2(k21/2 n2(t, x)).
1. We assume that Σ1, Σ2 satisfy Σ′1(u) = Σ′2(u1/2). Show that Σ1 is convex if, and only if, Σ2 is convex.
2. Under the conditions of question 1, show that the equation (2.10) dissipates entropy.
3. Adapt Stampacchia's method to prove L∞ bounds on n1, n2 (and find the appropriate quantity for the maximum principle). What are the intuitive Lp bounds?
Exercise. Consider (2.10) with Dirichlet boundary conditions and $n_i^0 \ge 0$.
1. Using the inequality $\frac{\partial n_j}{\partial \nu} \le 0$ on $\partial\Omega$, which holds because $n_j \ge 0$, show that $M(t)$ decreases, where
$$ M(t) = \int_\Omega \big[ 2 n_1(t,x) + n_2(t,x) \big] dx. $$
2. Consider the entropies of the previous exercise with the additional condition Σ′i(0) = 0. Show that
the equation (2.10) dissipates entropy.
3. Show that solutions of (2.10) with Dirichlet boundary conditions tend to 0 as $t \to \infty$.
5.4 Entropy: chemostat and SI system of epidemiology
Examples of entropy also appear in biological models. We treat here an example that arises as a first
modeling stage in two applications: 1. ecology and the model of the chemostat ($u$ represents a nutrient, $v$ a population that consumes the nutrient), 2. epidemiology with the celebrated Susceptible-Infected model (SI system). In both cases the model is
$$
\begin{cases}
\frac{d}{dt} u = B - \mu_u u - r\, u v,\\
\frac{d}{dt} v = r\, u v - \mu_v v,
\end{cases}
\tag{5.15}
$$
with $B > 0$, $r > 0$, $\mu_u > 0$ and $\mu_v > 0$ parameters.
In the chemostat, $B$ represents the renewal of nutrients $u$, $\mu_u$ the removal or degradation of nutrients, $r$ the consumption rate by the population, and $\mu_v$ the mortality and removal rate of the population.
In epidemiology, $B$ represents the newborns, $\mu_u$ the mortality rate, $r$ the encounter rate between susceptible and infected individuals (these encounters are responsible for new infections), and $\mu_v$ the mortality rate of the infected (plus the recovery rate in the SIR model).
There is always a trivial (healthy) steady state
$$ \bar u_0 = B/\mu_u, \qquad \bar v_0 = 0, $$
and a positive steady state
$$ \bar u = \mu_v / r, \qquad \bar v = \frac{rB - \mu_u \mu_v}{r \mu_v} \quad \text{if } rB > \mu_u \mu_v. \tag{5.16} $$
Depending on the sign of $\bar v$, we may have two different Lyapunov functionals (entropies).

Lemma 5.4 Assume $rB > \mu_u \mu_v$ and define
$$ \bar S(u,v) = -\bar u \ln(u) - \bar v \ln(v) + u + v; $$
then we have
$$ \frac{d}{dt} \bar S = - \frac{1}{u} \Big( \sqrt{\bar u B} - u \sqrt{\bar v r + \mu_u} \Big)^2. \tag{5.17} $$
Lemma 5.5 We assume $rB \le \mu_u \mu_v$ and define
$$ S(u,v) = -\bar u_0 \ln(u) + u + v; $$
then we have
$$ \frac{d}{dt} S = - \frac{v}{\mu_u} \big( \mu_v \mu_u - rB \big) - \frac{1}{\mu_u u} \big( B - \mu_u u \big)^2. \tag{5.18} $$
We leave the proofs of these lemmas to the reader and go directly to the conclusion.

Proposition 5.6 Solutions of the system (5.15) behave as follows.
If $Br > \mu_u \mu_v$, then the entropy $\bar S$ is convex and all solutions with $v^0 > 0$ converge, as $t \to \infty$, to the positive steady state.
If $Br \le \mu_u \mu_v$, then solutions become extinct (they converge to the trivial steady state as $t \to \infty$).

The proof is standard and left to the reader. The steps are: (i) $u(t)$ is bounded; (ii) $S(t)$ decreases; in the case $\bar v < 0$ the limit can be $-\infty$, which means that $v(t)$ vanishes and the extinction result follows, otherwise $S$ stays bounded and converges to a finite value; (iii) we conclude thanks to the right hand side of (5.17).
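The convergence statement of Proposition 5.6 can be tested numerically. The sketch below (illustrative parameter values with $rB > \mu_u \mu_v$, a standard RK4 scheme; not part of the text) integrates (5.15) and checks that $\bar S$ decreases along the trajectory and that $(u,v)$ converges to $(\bar u, \bar v)$.

```python
import math

# RK4 sketch of Proposition 5.6 in the case rB > mu_u * mu_v
# (illustrative parameter values, not from the text).
B, r, mu_u, mu_v = 2.0, 1.0, 1.0, 1.0        # rB = 2 > mu_u * mu_v = 1
ubar, vbar = mu_v / r, (r * B - mu_u * mu_v) / (r * mu_v)

def rhs(u, v):
    return B - mu_u * u - r * u * v, r * u * v - mu_v * v

def Sbar(u, v):
    # the Lyapunov functional of Lemma 5.4
    return -ubar * math.log(u) - vbar * math.log(v) + u + v

def rk4_step(u, v, dt):
    k1a, k1b = rhs(u, v)
    k2a, k2b = rhs(u + dt / 2 * k1a, v + dt / 2 * k1b)
    k3a, k3b = rhs(u + dt / 2 * k2a, v + dt / 2 * k2b)
    k4a, k4b = rhs(u + dt * k3a, v + dt * k3b)
    return (u + dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a),
            v + dt / 6 * (k1b + 2 * k2b + 2 * k3b + k4b))

u, v, dt = 2.0, 0.5, 0.01
S_values = [Sbar(u, v)]
for _ in range(3000):                        # integrate up to t = 30
    u, v = rk4_step(u, v, dt)
    S_values.append(Sbar(u, v))
# S_values is non-increasing and (u, v) ends near (ubar, vbar) = (1, 1)
```

With these values the steady state is $(\bar u, \bar v) = (1, 1)$ and $\bar S$ decreases monotonically towards its minimal value $\bar S(\bar u, \bar v)$.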
5.5 The Lotka-Volterra prey-predator system with diffusion (Problem)
In the case of the Lotka-Volterra prey-predator system we can show relaxation towards a homogeneous
solution. The coefficients of the model need not be small as is required in Theorem 5.2. This is because
the model comes with a physically relevant quantity (such as entropy), which gives a global control.
Exercise. Consider the prey-predator Lotka-Volterra system without diffusion,
$$
\begin{cases}
\frac{d}{dt} n_1 = n_1 \big[\, r_1 - a n_2 \big],\\
\frac{d}{dt} n_2 = n_2 \big[ -r_2 + b n_1 \big],
\end{cases}
$$
where $r_1$, $r_2$, $a$ and $b$ are positive constants and the initial data $n_i^0$ are positive.
1. Show that there are local solutions and that they remain positive.
2. Show that the entropy (Lyapunov functional)
$$ E(t) = -r_1 \ln n_2 + a n_2 - r_2 \ln n_1 + b n_1 $$
is constant. Show that $E$ is bounded from below and that $E \to \infty$ as $n_1 + n_2 \to \infty$. Conclude that solutions are global.
3. What is the unique steady state solution?
4. Show, using question 2, that the solutions are periodic (trajectories are closed).
Solution. 1. By local Lipschitz regularity of the right hand side, the Cauchy-Lipschitz theorem asserts that there is a local solution, i.e., one defined on a maximal interval $[0, T^*)$. We can write
$$ n_1(t) = n_1^0 \, e^{\int_0^t [r_1 - a n_2(s)]\,ds} > 0, $$
and similarly for $n_2$.
2. Set $\varphi_i = \ln n_i$ and write
$$ \frac{\partial}{\partial t} \varphi_1 = r_1 - a n_2 = r_1 - a e^{\varphi_2}, \qquad \frac{\partial}{\partial t} \varphi_2 = -r_2 + b n_1 = -r_2 + b e^{\varphi_1}. $$
This is a Hamiltonian system, and the Hamiltonian is constant along trajectories:
$$ H(t) = -r_1 \varphi_2(t) + a e^{\varphi_2(t)} - r_2 \varphi_1(t) + b e^{\varphi_1(t)} = H(0). $$
This proves that the ϕi(t) remain bounded from above and below, and thus that solutions are global.
3. n̄1 = r2/b, n̄2 = r1/a.
4. In each of the four quadrants around the steady state $(\bar n_1, \bar n_2)$, we can write $n_2$ as a function of $n_1$ (or vice versa) and conclude from that.
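The conservation of $H$ (question 2) and the closed orbits (question 4) can be illustrated numerically. The sketch below uses arbitrary coefficients $r_1 = r_2 = a = b = 1$ and the initial point $(2, 1)$ (illustrative choices, not from the text) with a standard RK4 integrator.

```python
import math

# Numerical sketch for questions 2 and 4: RK4 integration of the
# prey-predator system with illustrative coefficients (not from the text).
r1, r2, a, b = 1.0, 1.0, 1.0, 1.0

def rhs(n1, n2):
    return n1 * (r1 - a * n2), n2 * (-r2 + b * n1)

def E(n1, n2):
    # the conserved entropy / Hamiltonian of question 2
    return -r1 * math.log(n2) + a * n2 - r2 * math.log(n1) + b * n1

def rk4_step(n1, n2, dt):
    k1a, k1b = rhs(n1, n2)
    k2a, k2b = rhs(n1 + dt / 2 * k1a, n2 + dt / 2 * k1b)
    k3a, k3b = rhs(n1 + dt / 2 * k2a, n2 + dt / 2 * k2b)
    k4a, k4b = rhs(n1 + dt * k3a, n2 + dt * k3b)
    return (n1 + dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a),
            n2 + dt / 6 * (k1b + 2 * k2b + 2 * k3b + k4b))

n1, n2, dt = 2.0, 1.0, 0.001
E0 = E(n1, n2)
traj = [(n1, n2)]
for _ in range(20000):                      # integrate up to t = 20
    n1, n2 = rk4_step(n1, n2, dt)
    traj.append((n1, n2))

# E is conserved up to integrator error, and the orbit is closed: the
# trajectory passes near its starting point (2, 1) again after a period
drift = max(abs(E(p, q) - E0) for p, q in traj)
comeback = min(math.hypot(p - 2.0, q - 1.0) for p, q in traj[3000:])
```

With these values the trajectory circles the steady state $(\bar n_1, \bar n_2) = (1,1)$; `drift` stays tiny and `comeback` is small, consistent with closed orbits.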
Exercise. Let $\Omega$ be a smooth bounded domain. Consider smooth positive solutions of the Lotka-Volterra equation with diffusion and a Neumann boundary condition,
$$
\begin{cases}
\frac{\partial}{\partial t} n_1 - d_1 \Delta n_1 = n_1 \big[\, r_1 - a n_2 \big],\\
\frac{\partial}{\partial t} n_2 - d_2 \Delta n_2 = n_2 \big[ -r_2 + b n_1 \big],\\
\frac{\partial n_i}{\partial \nu} = 0 \ \text{on } \partial\Omega, \quad i = 1, 2,
\end{cases}
$$
where $d_1$, $d_2$, $r_1$, $r_2$, $a$ and $b$ are positive constants and the initial data $n_i^0$ are positive.
1. Consider the quantity $m(t) = \int_\Omega \big[ b\, n_1(t,x) + a\, n_2(t,x) \big] dx$. Show that $m(t) \le m(0)\, e^{rt}$ and find the value of $r$.
2. Show that the convex entropy
$$ E(t) = \int_\Omega \big[ -r_1 \ln n_2 + a n_2 - r_2 \ln n_1 + b n_1 \big] dx $$
a) is bounded from below, b) is decreasing. Conclude that $m(t)$ is bounded.
3. What finite integral do we obtain from the entropy dissipation?
4. Assume that the quantities $\nabla \ln n_i(t,x)$ converge as $t \to \infty$.
a. What are their limits?
b. What can you conclude about the behavior of $n_i(t,x)$ as $t \to \infty$?
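A discrete counterpart of questions 1 and 2 can be run in one space dimension. The sketch below (illustrative coefficients and data, not from the text) checks the exponential bound of question 1 with growth rate $r = r_1$, the decay of $E$, and the homogenization of the solution.

```python
import numpy as np

# Finite-difference sketch of questions 1-2 on Omega = (0, 1) with
# Neumann boundaries (illustrative coefficients and data, not from the
# text).  For this system the growth rate of question 1 is r = r1.
d1, d2, r1, r2, a, b = 0.1, 0.2, 1.0, 1.0, 1.0, 1.0
J = 50
dx = 1.0 / J
dt = 2e-4                        # below the stability threshold dx^2 / (2 d2)
x = (np.arange(J) + 0.5) * dx

def lap(u):
    # discrete Laplacian with homogeneous Neumann (reflected) boundaries
    return (np.r_[u[0], u[:-1]] - 2 * u + np.r_[u[1:], u[-1]]) / dx**2

n1 = 1.0 + 0.5 * np.cos(np.pi * x)
n2 = 1.0 + 0.5 * np.cos(2 * np.pi * x)

def mass(n1, n2):                # m(t) of question 1
    return dx * (b * n1 + a * n2).sum()

def entropy(n1, n2):             # E(t) of question 2
    return dx * (-r1 * np.log(n2) + a * n2 - r2 * np.log(n1) + b * n1).sum()

m0, E0 = mass(n1, n2), entropy(n1, n2)
growth_ok, t = True, 0.0
for _ in range(50000):           # integrate up to T = 10
    f1 = d1 * lap(n1) + n1 * (r1 - a * n2)
    f2 = d2 * lap(n2) + n2 * (-r2 + b * n1)
    n1, n2 = n1 + dt * f1, n2 + dt * f2
    t += dt
    growth_ok = growth_ok and mass(n1, n2) <= m0 * np.exp(r1 * t) * (1 + 1e-8)
# growth_ok stays True, the entropy has decreased, and the solution has
# become nearly homogeneous in x
```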
5.6 Problem
The goal of this problem is to show that, for rank-1 nonlinearities, the long term behaviour is determined by simple states in $x$, without a size condition.
Let $\Omega$ be a smooth bounded domain. Consider a smooth positive solution of the Lotka-Volterra system with diffusion and Neumann boundary conditions,
$$
\begin{cases}
\frac{\partial}{\partial t} n_i - D_i \Delta n_i + a_i(x) n_i = n_i\, r_i(t), & t \ge 0,\ x \in \Omega, \quad i = 1, 2, \ldots, I,\\
\frac{\partial n_i}{\partial \nu} = 0 \ \text{on } \partial\Omega, \quad i = 1, 2, \ldots, I,\\
n_i(t=0, x) = n_i^0(x),
\end{cases}
$$
with the nonlinearity defined, for some given positive and smooth functions $\big( \psi_i(x) \big)_{i=1,\ldots,I}$, by
$$ r_i(t) = R_i \Big( \int_\Omega \psi_1(x)\, n_1(t,x)\,dx, \ldots, \int_\Omega \psi_I(x)\, n_I(t,x)\,dx \Big). $$
1. We also define the first eigenfunctions $N_i(x) > 0$ for the eigenvalues $\lambda_i$, given by
$$
\begin{cases}
-D_i \Delta N_i + a_i(x) N_i = \lambda_i N_i, & x \in \Omega,\\
\frac{\partial N_i}{\partial \nu} = 0 \ \text{on } \partial\Omega,
\end{cases}
\qquad \int_\Omega N_i(x)^2\,dx = 1, \quad i = 1, 2, \ldots, I.
$$
Explain why the pair $(N_i, \lambda_i)$ exists and give the corresponding Poincaré-Wirtinger inequality.
2. We consider $\tilde n_i$ the solution of
$$
\begin{cases}
\frac{\partial}{\partial t} \tilde n_i - D_i \Delta \tilde n_i + a_i(x) \tilde n_i = \lambda_i \tilde n_i, & t \ge 0,\ x \in \Omega, \quad i = 1, 2, \ldots, I,\\
\frac{\partial \tilde n_i}{\partial \nu} = 0 \ \text{on } \partial\Omega, \quad i = 1, 2, \ldots, I,\\
\tilde n_i(t=0, x) = n_i^0(x).
\end{cases}
$$
Prove that $\int_\Omega \tilde n_i(t,x)\, N_i(x)\,dx = \int_\Omega \tilde n_i^0(x)\, N_i(x)\,dx$. Find the constant $\alpha_i$ such that, as $t \to \infty$,
$$ \big\| \tilde n_i(t,x) - \alpha_i N_i(x) \big\|_{L^2(\Omega)} \le \big\| \tilde n_i^0(x) - \alpha_i N_i(x) \big\|_{L^2(\Omega)}\, e^{-\mu_i t}, $$
and identify $\mu_i$.
3. We write the solution of (5.6) as $n_i(t,x) = \rho_i(t)\, \tilde n_i(t,x)$. Identify the evolution equation giving $\frac{d}{dt} \rho_i(t)$ in terms of the $\rho_j$ and $\tilde n_j$.
4. Assume that for some $M > 0$ we have $R_i(Y_1, \ldots, Y_I) < \lambda_i$ whenever $Y_i > M$, $i = 1, \ldots, I$. Show that the $\rho_i(t)$ are bounded and that for some $\mu > 0$ we have
$$ \big\| n_i(t,x) - \rho_i(t)\, \alpha_i N_i(x) \big\|_{L^2(\Omega)} \le C^0 e^{-\mu t}. $$
5. In dimension 1, assume that $R(+\infty) < \lambda$, $R(-\infty) > \lambda$ and $R' < 0$. Show that the long term dynamic is given by that of the equation
$$ \dot\varrho(t) = \varrho(t) \Big[ R \Big( \varrho(t)\, \alpha \int_\Omega \psi(x)\, N(x)\,dx \Big) - \lambda \Big], $$
and that $n(t,x)$ converges generically to a steady state $\bar\rho\, N(x)$ as $t \to \infty$.
[Hint: $\dot\rho_i(t) = \rho_i(t)\, (r_i - \lambda_i)$.]
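In one space dimension, the objects of questions 1 and 2 can be computed by plain linear algebra: discretize $-D u'' + a(x) u$ with Neumann conditions as a symmetric matrix, take its smallest eigenpair for $(\lambda, N)$, and verify the conservation of $\int \tilde n N\,dx$ and the decay at the spectral-gap rate. All concrete choices below ($D$, $a(x)$, the data) are illustrative, not from the text.

```python
import numpy as np

# Linear-algebra sketch of questions 1-2 on Omega = (0, 1) (illustrative
# choices of D, a(x) and data).  The operator -D u'' + a(x) u with
# Neumann conditions becomes a symmetric matrix; its smallest eigenpair
# plays the role of (lambda, N).
D, J = 1.0, 200
dx = 1.0 / J
x = (np.arange(J) + 0.5) * dx
a = 2.0 + np.sin(2 * np.pi * x)              # smooth positive potential a(x)

A = np.diag(2 * D / dx**2 + a)
A[0, 0] -= D / dx**2
A[-1, -1] -= D / dx**2                       # reflected (Neumann) boundaries
off = np.full(J - 1, D / dx**2)
A -= np.diag(off, 1) + np.diag(off, -1)

lam, V = np.linalg.eigh(A)                   # eigenvalues in increasing order
N = V[:, 0] / np.sqrt(dx)                    # normalized so that int N^2 dx = 1
if N.sum() < 0:
    N = -N                                   # principal eigenfunction, positive
mu = lam[1] - lam[0]                         # spectral gap = decay rate mu

n0 = 1.0 + 0.5 * np.cos(np.pi * x)           # positive initial data

def ntilde(t):
    # solution of d/dt n = -A n + lam[0] n via the eigen-decomposition
    c = V.T @ n0
    return V @ (np.exp(-(lam - lam[0]) * t) * c)

alpha = dx * (n0 * N).sum()                  # alpha = int n0 N dx
t = 1.0
cons = dx * (ntilde(t) * N).sum() - alpha    # conserved quantity, ~ 0
err = np.sqrt(dx) * np.linalg.norm(ntilde(t) - alpha * N)
err0 = np.sqrt(dx) * np.linalg.norm(n0 - alpha * N)
# err <= err0 * exp(-mu * t): convergence to alpha * N at the gap rate
```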
Chapter 6
Blow-up and extinction of solutions
We know from Chapter 5 that for ‘small’ nonlinearities, the solutions of parabolic systems relax to an
elementary state. When the nonlinearity is too large, it is possible that solutions of nonlinear parabolic
equations do not exist for all times or simply vanish, two scenarios that are the first signs of visible
nonlinear effects.
The mechanisms can be seen in simple ordinary differential equations. Blow-up means that the solution
becomes pointwise larger and larger and eventually becomes infinite in finite time. To see this, consider
the equation
$$ \dot z(t) = z(t)^q, \qquad z(0) > 0, \quad q > 1. $$
Its solution $z(t) > 0$ is given by
$$ z(t)^{q-1} = \frac{z(0)^{q-1}}{1 - (q-1)\, t\, z(0)^{q-1}}, $$
and thus it tends to infinity at $t = T^* := \frac{1}{(q-1)\, z(0)^{q-1}}$. For $q = 2$, this is a typical illustration of the alternative arising in the Cauchy-Lipschitz theorem: solutions either tend to infinity in finite time (case $z(0) > 0$), or they are globally defined (case $z(0) \le 0$).
The mechanism of extinction is illustrated by the equation with the opposite sign,
$$ \dot z(t) = -z(t)^q, \qquad z(0) > 0, \quad q > 1. $$
Its solution
$$ z(t)^{q-1} = \frac{z(0)^{q-1}}{1 + (q-1)\, t\, z(0)^{q-1}} \tag{6.1} $$
vanishes as $t \to \infty$.

The purpose of this chapter is to study in which respect these phenomena can occur, or not, when diffusion is added.
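Before turning to diffusion, the two model ODEs above are easy to confirm numerically. The sketch below (illustrative values $q = 2$, $z(0) = 3$) evaluates the closed-form solutions and lets a crude explicit Euler scheme run into the blow-up.

```python
# Sanity check of the two model ODEs (illustrative values q = 2,
# z(0) = 3): the blow-up time T* for z' = z^q and the algebraic
# extinction formula (6.1) for z' = -z^q.
q, z0 = 2.0, 3.0
Tstar = 1.0 / ((q - 1.0) * z0 ** (q - 1.0))   # = 1/3 for these values

def z_blowup(t):
    # closed-form solution of z' = z^q, valid for t < T*
    return (z0 ** (q - 1.0) / (1.0 - (q - 1.0) * t * z0 ** (q - 1.0))) ** (1.0 / (q - 1.0))

def z_extinct(t):
    # closed-form solution (6.1) of z' = -z^q; vanishes as t -> infinity
    return (z0 ** (q - 1.0) / (1.0 + (q - 1.0) * t * z0 ** (q - 1.0))) ** (1.0 / (q - 1.0))

# a crude explicit Euler run confirms the finite blow-up time: z reaches
# 10^6 at a time t slightly above T* (Euler underestimates the growth)
z, t, dt = z0, 0.0, 1e-5
while z < 1e6 and t < 1.0:
    z += dt * z ** q
    t += dt
```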
We begin with the semilinear parabolic equation with Dirichlet boundary conditions,
$$ \frac{\partial}{\partial t} u - \Delta u = u^2. $$
We present several methods for proving blow-up in finite time. The first two methods are for semilinear parabolic equations; the third method is illustrated by the Keller-Segel system for chemotaxis, which is more interesting because it blows up in all $L^p$ norms, for all $p > 1$, while the $L^1$ norm is conserved (it represents the total number of cells in a system where cell multiplication is ignored).
The last section is devoted to a counter-intuitive result with respect to extinction.
6.1 Semilinear equations; the method of the eigenfunction
To study the case of nonlinear parabolic equations, we consider the model
$$
\begin{cases}
\frac{\partial}{\partial t} u - \Delta u = u^2, & t \ge 0,\ x \in \Omega,\\
u(t,x) = 0, & x \in \partial\Omega,\\
u(t=0, x) = u^0(x) \ge 0.
\end{cases}
\tag{6.2}
$$
Here we treat the case when $\Omega$ is a bounded domain. To define a distributional solution is not completely obvious, because the right hand side $u^2$ should be well defined; that is why we require that $u$ belong to $L^2$. It then remains to solve the heat equation with a right hand side in $L^1$. That is why we call a solution of (6.2) a function satisfying, for some $T > 0$,
$$ u \in L^2\big( (0,T) \times \Omega \big), \qquad u \in C\big( [0,T);\, L^1(\Omega) \big). \tag{6.3} $$
The question then is to know whether the effects of diffusion are able to overcome the effects of the quadratic nonlinearity, so that (6.2) could have global solutions. The answer is given by the next theorems.

Theorem 6.1 Assume that $\Omega$ is a bounded domain, $u^0 \ge 0$, and $u^0$ is sufficiently large (in a weighted $L^1$ space introduced below); then there is a time $T^*$ for which the solution of (6.2) satisfies
$$ \| u \|_{L^2((0,T)\times\Omega)} \xrightarrow[T \to T^*]{} \infty. $$
Of course, this result means that $u(t)$ also blows up in all $L^p$ norms, $2 \le p \le \infty$, because we work in a bounded domain. It is also possible to prove that the blow-up time is the same for all these norms [35].
Proof of Theorem 6.1. We are going to derive a contradiction from the hypothesis that a function $u \in L^2\big( (0,T) \times \Omega \big)$ can be a solution of (6.2) when $T$ exceeds an explicit value computed later.
First, we notice that $u(t,x) \ge 0$ because $u^0 \ge 0$.
Because we are working in a bounded domain, the smallest eigenvalue of the operator $-\Delta$ exists and is associated with a positive eigenfunction (see Section 4.2):
$$
\begin{cases}
-\Delta w_1 = \lambda_1 w_1, \quad w_1 > 0 \ \text{in } \Omega,\\
w_1(x) = 0, \quad x \in \partial\Omega,
\end{cases}
\qquad \int_\Omega (w_1)^2 = 1. \tag{6.4}
$$
For a solution, we can multiply equation (6.2) by w1 and integrate by parts. We arrive at
$$ \frac{d}{dt} \int_\Omega u(t,x)\, w_1(x)\, dx = $$