
EVOLUTION EQUATIONS AND CONTROL THEORY, Volume 3, Number 1, March 2014, pp. 1–14. doi:10.3934/eect.2014.3.1

EXISTENCE AND ASYMPTOTIC BEHAVIOUR FOR SOLUTIONS OF DYNAMICAL EQUILIBRIUM SYSTEMS

Zaki Chbani and Hassan Riahi

Laboratory LIBMA Mathematics, Faculty of Sciences Semlalia, Cadi Ayyad University

40000 Marrakech, Morocco

(Communicated by Hedy Attouch)

Abstract. In this paper, we give an existence result for the following dynamical equilibrium problem: 〈du/dt, v − u(t)〉 + F(u(t), v) ≥ 0 ∀v ∈ K and for a.e. t ≥ 0, where K is a closed convex set in a Hilbert space and F : K × K → R is a monotone bifunction. We introduce a class of demipositive bifunctions and use it to study the asymptotic behaviour of the solution u(t) when t → ∞. We obtain weak convergence of u(t) to some solution x ∈ K of the equilibrium problem F(x, y) ≥ 0 for every y ∈ K. Our applications deal with the asymptotic behaviour of dynamical convex minimization and of the dynamical system associated to saddle convex-concave bifunctions. We then present a new neural model for solving a convex programming problem.

1. Introduction. Throughout, H is a real Hilbert space with inner product 〈·, ·〉 and induced norm ‖ · ‖. Let K ⊂ H be a nonempty closed convex set. For a bifunction F : K × K → R that verifies F(x, x) = 0 for all x ∈ K, the formulation of an equilibrium problem (EP) is:

find u ∈ K such that F(u, v) ≥ 0 for all v ∈ K. (1)

The set of solutions of (EP) is denoted by S_F. Equilibrium problems are among the most interesting and intensively studied classes of problems; they include fundamental nonlinear mathematical problems like optimization, variational and hemivariational inequalities, and complementarity problems. Many problems of practical interest in optimization, economics, and engineering involve equilibrium in their description. Here, we list some of their special cases.

Convex optimization: Let ϕ : K → R. It is requested to look for

(M) u ∈ K such that ϕ(u) ≤ ϕ(v) ∀v ∈ K.

If we set F(u, v) = ϕ(v) − ϕ(u), the problem (M) can be viewed as a particular case of (EP). We can also translate (M) into a variational form by taking F(x, y) = ϕ′(x; y − x), where ϕ′(x; d) := lim_{t→0⁺} (1/t)(ϕ(x + td) − ϕ(x)) is the directional derivative of ϕ at the point x in the direction d. Let us note that ϕ′(x; d) = sup_{ξ ∈ ∂ϕ(x)} 〈ξ, d〉, where ∂ϕ is the convex subdifferential of ϕ, and therefore (EP) reduces to the variational formulation of (M). Using the first-order optimality condition, this also means 0 ∈ ∂ϕ̄(x), where ϕ̄(u) = ϕ(u) if u ∈ K and ϕ̄(u) = +∞ otherwise. When ϕ is convex and Gâteaux-differentiable on K with Gâteaux-gradient ∇ϕ(x) at x ∈ K, then we set F(u, v) := 〈∇ϕ(u), v − u〉 in order to treat (M) in the form of (EP).

Saddle points: Let L : K1 × K2 → R. The point (u1, u2) ∈ K1 × K2 is called a saddle point of L iff

(SP) L(u1, v2) ≤ L(v1, u2) ∀(v1, v2) ∈ K1 × K2.

Set K := K1 × K2 and F((u1, u2), (v1, v2)) = L(v1, u2) − L(u1, v2). Then (EP) includes the problem (SP).

Variational inequalities: Let A : K ⇉ H be a set-valued mapping with Ax nonempty, convex and weakly compact for all x ∈ K. It is requested to find

(VI) x ∈ K, ξ ∈ Ax such that 〈ξ, y − x〉 ≥ 0 ∀y ∈ K.

Set F(x, y) := max_{ξ ∈ Ax} 〈ξ, y − x〉. Then x solves (VI) if, and only if, x ∈ S_F.

Fixed points of set-valued mappings: Let T : K ⇉ K be a set-valued mapping with T(x) nonempty, convex and weakly compact for all x ∈ K. Consider the fixed point problem:

(FP) find x ∈ K such that x ∈ T(x).

Set F(x, y) := max_{ξ ∈ T(x)} 〈x − ξ, y − x〉. Then x ∈ S_F if, and only if, x is a solution to (FP), i.e. x ∈ Fix(T).

2010 Mathematics Subject Classification. Primary: 37N40, 46N10, 49J40, 90C33.
Key words and phrases. Monotone bifunction, Cauchy problem, demipositive bifunction, asymptotic behaviour, convex minimisation, saddle point problem.

For more details on this issue, one can read the pioneering works of Mosco [25], Blum-Oettli [6], and also Yuan [32], Chadli-Chbani-Riahi [15, 16] and the bibliography therein.

Our goal in this paper is to approximate the solutions of equilibrium problems by introducing the following evolution equilibrium problem (EEP): find u ∈ C^0([0,+∞); H) such that du/dt ∈ L^1_loc([0,+∞); H), and

〈du/dt, v − u(t)〉 + F(u(t), v) ≥ 0, ∀v ∈ K, for a.e. t ≥ 0, (1)
u(t) ∈ K, t ≥ 0, (2)
u(0) = u_0. (3)

This problem presents a new format that contains the variational formulation of the Cauchy problem:

du/dt + A(u(t)) ∋ 0,

where A : H → 2^H is a monotone operator. In fact, (EEP) provides weak solutions of the Cauchy problem, see for example Brezis [7, 8, 9] and Browder [12]. For the Cauchy problem itself, see Brezis [10], Browder [13], Haraux [22], Attouch-Damlamian [2], Barbu [5], Showalter [30] and Zeidler [33, 34].

It is clear that if we consider in (EEP) the implicit discretization scheme du/dt = (u_{k+1} − u_k)/λ_{k+1}, we fall back on the proximal algorithm adapted to equilibrium problems by Moudafi [27]:

λ_{k+1} F(u_{k+1}, v) + 〈u_{k+1} − u_k, v − u_{k+1}〉 ≥ 0, ∀v ∈ K.
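As an illustration (not taken from the paper), the scheme can be run for the minimization bifunction F(u, v) = ϕ(v) − ϕ(u) with ϕ(x) = ½(x − a)², for which the resolvent step u_{k+1} = J^F_{λ}(u_k) is exactly prox_{λϕ}(u_k) and has the closed form (u_k + λa)/(1 + λ). The point a and the step λ below are arbitrary illustrative choices.

```python
# Illustrative run (a sketch, not the paper's method) of the proximal scheme
# for F(u, v) = phi(v) - phi(u) with phi(x) = 0.5*(x - a)**2 on K = R.
# The resolvent step u_{k+1} = J^F_lambda(u_k) is prox_{lambda*phi}(u_k),
# which here has the closed form (u + lambda*a) / (1 + lambda).
a = 1.5                              # unique minimizer of phi, so S_F = {a}
lam = 0.5

u = 7.0
for _ in range(200):
    u = (u + lam * a) / (1.0 + lam)  # implicit (backward) step of (EEP)

print(abs(u - a))                    # distance to the equilibrium point
```

With a constant step the error contracts by the factor 1/(1 + λ) at every iteration, so the iterates converge linearly to the unique equilibrium point a.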

The paper is organized as follows. In Section 2 we present a general existence result for equilibrium problems (Lemma 2.3). The concept of resolvent associated to these problems, see [17], is recalled with some of its properties. We also recall some lemmas that will be needed in the paper. Section 3 deals with the dynamical approach associated to monotone bifunctions. We present an existence result for the dynamical problem (EEP).

We introduce in Section 4 a class of demipositive bifunctions that we use to study the asymptotic behaviour of the solution of this problem. This concept generalizes the one introduced by Bruck [14] for monotone operators. In Section 5 we treat the asymptotic behaviour of solutions for the evolution equilibrium problem (EEP). Section 6 is devoted to applications; our first one deals with the asymptotic behaviour of the classical steepest descent differential inclusion:

du/dt + ∂ϕ(u(t)) ∋ 0.

This problem was studied by Brezis [9, 10, 11], Bruck [14] and Baillon [3]. Afterwards, we treat the same behaviour for the dynamical system associated to saddle convex-concave bifunctions. We obtain weak convergence of the trajectory under condition (A). This hypothesis is verified if we assume that the bifunction is strictly convex (resp. strictly concave) with respect to the first variable (resp. the second one). As stated in Remark 2, this condition differs from that given by Rockafellar in [29]. It should be noted that, in general, one cannot expect more than ergodic convergence of the trajectory, see Baillon-Brezis [4].

In 1985 and 1986, Hopfield and Tank [23] proposed a neural network for solving linear programming problems. Their seminal work has inspired many researchers to investigate alternative neural networks for solving linear and nonlinear programming problems; see [19], [31] and [24] for a historical and bibliographical study of these problems. We consider the following convex programming problem (CP):

min f(x) subject to x ≥ 0 and g(x) ≤ 0,

where f is a real-valued convex function and g(x) = (g1(x), · · · , gm(x)) is an m-dimensional vector-valued function, both defined for non-negative vectors x = (x1, · · · , xn), and the gi are convex. The Tank and Hopfield model can be described in compact form as:

ẋ = C^{−1}{ −∇f(x) − ∇g(x) g⁺(x) − (1/s) R^{−1} x },

where C is an n × n diagonal matrix due to the self-capacitance of each neuron, s is the penalty parameter, and R is an n × n diagonal matrix with

r_ii = 1 / ( 1/p_i + Σ_{j=1}^m d_ji ),

p_i being the self-conductance of each neuron. Our neural model for solving (CP) is described by:

(NM)
〈ẋ(t) + ∇f(x(t)) + Σ_{i=1}^m ln(1 + λ_i(t)) ∇g_i(x(t)), z − x(t)〉 ≥ 0, ∀z ∈ R^n_+,
〈λ̇(t), β − λ(t)〉 − Σ_{i=1}^m (1/(1 + λ_i(t))) g_i(x(t)) (β_i − λ_i(t)) ≥ 0, ∀β ∈ R^m_+,
(x(0), λ(0)) = (x_0, λ_0).

This model sheds new light on such problems and generalizes those existing in the literature. Our result on the convergence of the solution of the neural model is based on the study of the asymptotic behaviour of the solution of an associated dynamical equilibrium problem. Unlike the results established in the above-cited papers, where strong convexity is required of the cost function f, we only need that f, or at least one of the gi, is strictly convex.

2. Preliminaries. In the following, we assume that K is a nonempty closed convexsubset of H, and for F : K ×K → R we consider the following usual conditions:


(H1) F(x, x) = 0 for each x ∈ K;
(H2) F is monotone, i.e., F(x, y) + F(y, x) ≤ 0 for each x, y ∈ K;
(H3) lim_{t↓0} F(tz + (1 − t)x, y) ≤ F(x, y) for any x, y, z ∈ K;
(H4) for each x ∈ K, y ↦ F(x, y) is convex and lower semicontinuous.

We begin with the following comparison result:

Lemma 2.1 (Minty's Lemma). [6] Let F : K × K → R and consider the problems (EP): find x ∈ K such that F(x, y) ≥ 0 ∀y ∈ K, and (DEP): find x ∈ K such that F(y, x) ≤ 0 ∀y ∈ K.

(i) If F satisfies (H2), then each solution of (EP) is a solution of (DEP).
(ii) Conversely, if F satisfies (H1), (H3) and (H4), then each solution of (DEP) is a solution of (EP).

If all conditions (H1)−(H4) are satisfied, the solution sets of (EP) and (DEP) coincide. The next lemma introduces the notion of resolvent associated to bifunctions. This concept is crucial in the proof of existence for (EEP).

Lemma 2.2. [17] Suppose that F : K × K → R satisfies (H1), (H2), (H4). Then the following are equivalent:

(i) F is maximal: (x, u) ∈ K × H and F(y, x) ≤ 〈u, x − y〉 ∀y ∈ K imply that F(x, y) + 〈u, x − y〉 ≥ 0 ∀y ∈ K;
(ii) for each x ∈ H and λ > 0, there exists a unique z = J^F_λ(x), called the resolvent of F at x, such that

λF(z, y) + 〈y − z, z − x〉 ≥ 0, ∀y ∈ K. (2)

Let us mention, see [16, Lemma 2.1], that maximality of F is assured underconditions (H1) and (H3).

The proof of Lemma 2.2 is based upon the following generalized KKM-Fan lemma.

Lemma 2.3 (see [16]). Let E be a topological vector space and K a closed convex subset of E. Consider ϕ, ψ : K × K → R such that

(A1) ϕ is monotone (i.e. ϕ(v, w) + ϕ(w, v) ≤ 0 for each v, w ∈ K) and, for every fixed w ∈ K, ϕ(·, w) is upper hemicontinuous;
(A2) ϕ(v, v) = ψ(v, v) = 0 for all v ∈ K;
(A3) for every fixed v ∈ K, ϕ(v, ·) and ψ(v, ·) are convex;
(A4) for every fixed v ∈ K, ϕ(v, ·) is lower semicontinuous;
(A5) for every fixed w ∈ K, ψ(·, w) is upper semicontinuous;
(A6) there exist B ⊂ K convex compact and v0 ∈ B such that ϕ(w, v0) + ψ(w, v0) < 0 for each w ∈ K \ B.

Then, there exists v ∈ K such that ϕ(v, w) + ψ(v, w) ≥ 0 for all w ∈ K.

Lemma 2.4. [18] Under conditions (H1)−(H4), we have

(i) J^F_λ is firmly nonexpansive, i.e., for all x, y ∈ H,

‖J^F_λ(x) − J^F_λ(y)‖² ≤ 〈J^F_λ(x) − J^F_λ(y), x − y〉;

(ii) x = J^F_λ(x) ⇔ F(x, y) ≥ 0, ∀y ∈ K.

The associated Yosida approximate is defined as follows [26, 16]:

A^F_λ(x) = (1/λ)(x − J^F_λ x). (3)


Following [1] (see also [21]), the authors associate to each bifunction F the operator A^F defined by:

A^F(x) = {z ∈ H : F(x, y) + 〈z, x − y〉 ≥ 0 ∀y ∈ K} if x ∈ K, and A^F(x) = ∅ otherwise.

Note that the values of A^F are convex and closed in H, and A^F is monotone if, and only if, F is monotone, whenever dom(A^F) = K.

In [21], the maximality of the bifunction F is taken as maximality of A^F for inclusion in the set of monotone operators. Under conditions (H1)−(H4), the two cited maximality notions coincide, and so do the associated Yosida approximates:

y = A^F_λ(x) = (1/λ)(I − J^F_λ)(x) = (λI + (A^F)^{−1})^{−1}(x),

which is equivalent to

F(x − λy, u) + 〈y, (x − λy) − u〉 ≥ 0 ∀u ∈ K. (4)

Since the values of A^F are convex and closed in H, this allows us to define, for each x ∈ K, the minimal section operator A° by:

A°x = Proj_{A^F(x)} 0 ∈ A^F(x).

Next, let us introduce and prove a lemma that plays a key role in the analysis ofexistence of solutions of (EEP).

Lemma 2.5. If F satisfies (H1)−(H4), then

(i) for each x ∈ K, ‖A^F_λ x‖ ≤ ‖A°x‖;
(ii) for each x, y ∈ K, λ‖A^F_λ x − A^F_λ y‖² ≤ 〈A^F_λ x − A^F_λ y, x − y〉.

Proof. For (i), using J_λ x and A°x ∈ A^F(x), we have that

F(J_λ x, x) − 〈A^F_λ x, x − J_λ x〉 ≥ 0

and

F(x, J_λ x) + 〈A°x, x − J_λ x〉 ≥ 0.

By summing these two inequalities and using the monotonicity of F, we obtain 〈A°x − A^F_λ x, x − J_λ x〉 ≥ 0, so that λ〈A°x − A^F_λ x, A^F_λ x〉 ≥ 0, i.e. ‖A^F_λ x‖² ≤ 〈A°x, A^F_λ x〉, and thus ‖A^F_λ x‖ ≤ ‖A°x‖.

For (ii), similarly we have, for each x, y ∈ H,

F(J_λ x, J_λ y) − 〈A^F_λ x, J_λ y − J_λ x〉 ≥ 0

and

F(J_λ y, J_λ x) − 〈A^F_λ y, J_λ x − J_λ y〉 ≥ 0.

By adding again and using the monotonicity of F, we obtain 〈A^F_λ y − A^F_λ x, J_λ y − J_λ x〉 ≥ 0. The equality J_λ x = x − λA^F_λ x gives us

〈A^F_λ y − A^F_λ x, y − x〉 + λ〈A^F_λ y − A^F_λ x, A^F_λ x − A^F_λ y〉 ≥ 0,

which leads to λ‖A^F_λ x − A^F_λ y‖² ≤ 〈A^F_λ x − A^F_λ y, x − y〉.
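Both parts of Lemma 2.5 can be observed numerically on a one-dimensional example (an assumption for illustration: ϕ(x) = |x| on K = R, so that the resolvent J_λ is soft-thresholding and A°x is the minimal-norm subgradient of |·|); part (ii) is checked in the form λ‖A_λ x − A_λ y‖² ≤ 〈A_λ x − A_λ y, x − y〉.

```python
# Numerical sanity check (a sketch; phi(x) = |x| on K = R is an assumed example)
# of Lemma 2.5 for the bifunction F(x, y) = phi(y) - phi(x).
def J(x, lam):                       # resolvent = soft-threshold (prox of lam*|.|)
    return (1.0 if x >= 0 else -1.0) * max(abs(x) - lam, 0.0)

def A(x, lam):                       # Yosida approximate (1/lam)*(x - J(x))
    return (x - J(x, lam)) / lam

def A0(x):                           # minimal-norm subgradient of |.|
    return 0.0 if x == 0 else (1.0 if x > 0 else -1.0)

lam = 0.7
for x in (-3.0, -0.3, 0.0, 0.2, 5.0):
    assert abs(A(x, lam)) <= abs(A0(x)) + 1e-12               # part (i)
for x, y in ((-3.0, 1.0), (0.1, 0.4), (2.0, -2.0)):
    d = A(x, lam) - A(y, lam)
    assert lam * d * d <= d * (x - y) + 1e-12                  # part (ii)
print("Lemma 2.5 checks passed")
```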

To prove our result of weak asymptotic convergence, we will need the famousLemma of Opial.

Lemma 2.6 (Opial's weak convergence lemma). [28] Let {x_k} be a sequence and B a nonempty set in a Hilbert space that satisfy

1. for every y ∈ B, the real sequence (‖x_k − y‖) converges, and
2. each weak cluster point of {x_k} belongs to B.

Then the whole sequence {x_k} weakly converges to some point x ∈ B.


In the sequel, Γ0(H) will denote the set of all proper lower-semicontinuous convex functions ϕ : H → R ∪ {+∞}. For ϕ in Γ0(H), its domain is dom ϕ = {x ∈ H : ϕ(x) < +∞}, and the subdifferential of ϕ at a point x ∈ dom ϕ is the set ∂ϕ(x) = {z ∈ H : ϕ(y) − ϕ(x) ≥ 〈z, y − x〉 ∀y ∈ H}. For a nonempty subset K of H, its indicator function is defined as δ_K(x) = 0 if x ∈ K and δ_K(x) = +∞ otherwise.

3. Existence result. We consider the following differential equation: for λ > 0,

du/dt (t) + (1/λ)(u(t) − J(u(t))) = 0, u(0) = u_0 ∈ K, (5)

in which J is Lipschitz. The following lemma is a direct consequence of the Cauchy-Lipschitz-Picard theorem, see [9, Theorem I.4].

Lemma 3.1. Assume K is a nonempty closed convex set in H and J : K → K is Lipschitz: ‖J(x) − J(y)‖ ≤ ‖x − y‖ for all x, y ∈ K. Then, for each u_0 ∈ K and λ > 0, there exists a unique absolutely continuous function u_λ : [0,∞) → H which is a solution of (5) with u(t) ∈ K for t > 0 and u(0) = u_0, and it is characterized by

u_λ(t) = e^{−t/λ} u_0 + (1/λ) ∫_0^t e^{(s−t)/λ} J(u(s)) ds.
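For a concrete (assumed) choice J(x) = x/2, which is 1-Lipschitz, equation (5) becomes the linear ODE u′ = −u/(2λ), and the representation formula of Lemma 3.1 collapses to u(t) = u_0 e^{−t/(2λ)}; a forward-Euler integration reproduces it:

```python
import math

# Concrete (assumed) example for Lemma 3.1: J(x) = x/2 is 1-Lipschitz, so
# equation (5) reads u' = -(1/lam)(u - u/2) = -u/(2*lam), and the
# representation formula gives the closed form u(t) = u0 * exp(-t/(2*lam)).
lam, u0, T, n = 1.0, 2.0, 3.0, 20000
dt = T / n
u = u0
for _ in range(n):
    u += dt * (-(u - 0.5 * u) / lam)     # forward-Euler step of (5)

closed_form = u0 * math.exp(-T / (2.0 * lam))
print(abs(u - closed_form))              # O(dt) discretization error
```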

Theorem 3.2. Under conditions (H1)−(H4), for each u_0 ∈ K, (EEP) admits a unique solution.

Proof. By Lemma 2.4, the resolvent J^F_λ is Lipschitz, so Lemma 3.1 ensures that equation (5) for J^F_λ admits a unique solution. For each λ > 0, let u_λ be the solution of

u̇_λ(t) + A^F_λ u_λ(t) = 0, t ≥ 0, with u_λ(0) = u_0 ∈ K. (6)

If h > 0, then u_λ(t + h) is a solution of (6), and by using (ii) of Lemma 2.5 we have

d/dt ‖u_λ(t + h) − u_λ(t)‖² = −2〈A^F_λ u_λ(t + h) − A^F_λ u_λ(t), u_λ(t + h) − u_λ(t)〉 ≤ 0,

so ‖u_λ(t + h) − u_λ(t)‖ ≤ ‖u_λ(h) − u_λ(0)‖. Dividing by h and letting h → 0, we get ‖u̇_λ(t)‖ ≤ ‖u̇_λ(0)‖, which leads to

‖A^F_λ u_λ(t)‖ = ‖u̇_λ(t)‖ ≤ ‖u̇_λ(0)‖ = ‖A^F_λ u_λ(0)‖ = ‖A^F_λ u_0‖ ≤ ‖A°u_0‖. (7)

We shall now prove that {u_λ} is a Cauchy family in C([0, T], H). For λ, β > 0, we have

d/dt ‖u_λ(t) − u_β(t)‖² = −2〈A^F_λ u_λ(t) − A^F_β u_β(t), u_λ(t) − u_β(t)〉;

by writing u_λ = λA^F_λ u_λ + J_λ u_λ, and likewise for u_β, we obtain

〈A^F_λ u_λ − A^F_β u_β, u_λ − u_β〉 = 〈A^F_λ u_λ − A^F_β u_β, λA^F_λ u_λ − βA^F_β u_β〉 + 〈A^F_λ u_λ − A^F_β u_β, J_λ u_λ − J_β u_β〉. (8)

In the same way as in the proof of (ii) of Lemma 2.5, one can show that

〈A^F_λ u_λ − A^F_β u_β, J_λ u_λ − J_β u_β〉 ≥ 0.

Then the right-hand side of equation (8) is bounded below by

λ‖A^F_λ u_λ‖² + β‖A^F_β u_β‖² − (λ + β)〈A^F_λ u_λ, A^F_β u_β〉
≥ λ‖A^F_λ u_λ‖² + β‖A^F_β u_β‖² − λ(‖A^F_λ u_λ‖² + (1/4)‖A^F_β u_β‖²) − β(‖A^F_β u_β‖² + (1/4)‖A^F_λ u_λ‖²)
≥ −((λ + β)/4)‖A°u_0‖².


We deduce that

d/dt ‖u_λ(t) − u_β(t)‖² ≤ ((λ + β)/2)‖A°u_0‖²

and hence

‖u_λ(t) − u_β(t)‖ ≤ ‖A°u_0‖ ((λ + β)t/2)^{1/2}.

Then the family {u_λ} is uniformly Cauchy on each [0, T], hence uniformly convergent to some u ∈ C([0, T], H) with

‖u_λ(t) − u(t)‖ ≤ (λT/2)^{1/2} ‖A°u_0‖ for every 0 ≤ t ≤ T and λ > 0.
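The O(√λ) gap between the regularized and limit trajectories can be observed on a one-dimensional example (assumed data for illustration: ϕ = |·| and u_0 = 2, so ‖A°u_0‖ = 1), where both flows have closed forms:

```python
import math

# Illustration of the uniform estimate: for phi = |.| on R with u0 = 2, the
# Yosida-regularized flow u' = -A^F_lambda(u) decreases at unit speed while
# |u| > lam, then decays as u' = -u/lam; the limit trajectory slides to 0 at
# t = u0 and stays there.  The uniform gap stays below sqrt(lam*T/2).
U0, T = 2.0, 3.0

def exact(t):                        # limit trajectory
    return max(U0 - t, 0.0)

def regularized(t, lam):             # closed form of the regularized flow
    t_hit = U0 - lam                 # time at which u reaches the level lam
    return U0 - t if t <= t_hit else lam * math.exp(-(t - t_hit) / lam)

for lam in (0.5, 0.1, 0.01):
    gap = max(abs(regularized(k * T / 1000, lam) - exact(k * T / 1000))
              for k in range(1001))
    assert gap <= math.sqrt(lam * T / 2.0)
    print(lam, gap)
```

The observed gap is in fact of order λ here, comfortably inside the √λ bound.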

From the estimate ‖J_λ u_λ(t) − u_λ(t)‖ = λ‖A^F_λ u_λ(t)‖ ≤ λ‖A°u_0‖, we deduce that J_λ u_λ converges in C([0, T], H) to u. The relation (7) implies that du_λ/dt is bounded in the Hilbert space L²(0, T; H), so there is a subsequence which weakly converges. Passing to the limit in

u_λ(t) = u_0 + ∫_0^t u̇_λ(s) ds,

we find that the limit is u, and by uniqueness of weak limits we conclude that w-lim_{λ→0} du_λ/dt (t) = du/dt (t) in H. Injecting equation (6) into (2), we obtain

F(J_λ(u_λ(t)), v) + 〈du_λ/dt (t), v − J_λ(u_λ(t))〉 ≥ 0, ∀v ∈ K. (9)

Passing to the limit in (9), and using that F is monotone and F(v, ·) is lower semicontinuous, we obtain

F(v, u(t)) ≤ lim inf_{λ→0} F(v, J_λ(u_λ(t))) ≤ 〈du/dt (t), v − u(t)〉.

Using Lemma 2.1, we finally get, for each t > 0,

F(u(t), v) + 〈du/dt (t), v − u(t)〉 ≥ 0, ∀v ∈ K. (10)

4. Demipositive bifunctions. Using Opial's Lemma 2.6, in order to prove weak convergence of {u(t)} when t → ∞, one of the two main conditions is that each weak cluster point lies in S_F. The key tool to ensure this condition is the concept of demipositivity, first developed in Bruck [14] for monotone operators. We give a natural extension of this notion to monotone bifunctions.

Definition 4.1. A monotone bifunction F : K × K → R is called demipositive if there exists z ∈ S_F such that, for every sequence {u_n} ⊂ K converging weakly to y,

F(u_n, z) → 0 implies that y ∈ S_F.

The following proposition gives examples of bifunctions with the demipositivity property.

Proposition 1. Each of the following conditions is sufficient for a bifunction F to be demipositive:

1. F verifies (H1)−(H4) and int S_F ≠ ∅.
2. F verifies (H1)−(H4), S_F ≠ ∅, and F is 3-monotone, i.e. F(x, y) + F(y, z) + F(z, x) ≤ 0 for each x, y, z ∈ K.
3. F is α-strongly monotone for some α > 0, i.e. F(x, y) + F(y, x) ≤ −α‖x − y‖² for each x, y ∈ K.
4. F verifies (H1)−(H4) and F is µ-cocoercive for some µ > 0, i.e. for each x ∈ K, F(x, ·) is differentiable and

F(x, y) + F(y, x) ≤ −µ‖∇₂F(x, ·)(x) − ∇₂F(y, ·)(y)‖². (11)

Proof. Take z ∈ S_F and u_n ⇀ u such that lim_{n→∞} F(u_n, z) = 0.

Let us first prove 2. We have, for every y ∈ K, F(u_n, z) + F(z, y) + F(y, u_n) ≤ 0; since z ∈ S_F, we deduce that F(y, u_n) ≤ −F(u_n, z). Passing to the limit as n → ∞, and thanks to the lower semicontinuity of F(y, ·), we obtain F(y, u) ≤ lim inf_{n→∞} F(y, u_n) ≤ 0. We conclude by Minty's Lemma 2.1.

3. We have F(u_n, z) + F(z, u_n) ≤ −α‖u_n − z‖², and again since z ∈ S_F we obtain α‖u_n − z‖² ≤ −F(u_n, z). Passing to the superior limit, we deduce that u_n → z, and then u = z.

4. Since z ∈ S_F, we have ∇₂F(z, ·)(z) = 0, so by setting v_n = ∇₂F(u_n, ·)(u_n), (11) gives:

F(u_n, z) + F(z, u_n) ≤ −µ‖v_n‖².

Using F(z, u_n) ≥ 0 and lim_{n→∞} F(u_n, z) = 0, we deduce that lim_{n→∞} ‖v_n‖ = 0. Now F(u_n, y) + 〈v_n, u_n − y〉 ≥ 0 for all y ∈ K, which implies, by the monotonicity of F, 〈v_n, u_n − y〉 ≥ F(y, u_n). Going to the limit as n → ∞, we obtain

F(y, u) ≤ lim inf_{n→∞} F(y, u_n) ≤ lim_{n→∞} 〈v_n, u_n − y〉 = 0,

which ensures the result by Lemma 2.1.

1. Now take z_0 ∈ int S_F (let ϱ > 0 be such that B(z_0, ϱ) ⊂ S_F) and u_n ⇀ u such that lim_{n→∞} F(u_n, z_0) = 0. Since z_0 ∈ int K = int dom(F + δ_K) = int dom ∂(F + δ_K) (it is a classical result that, for ϕ ∈ Γ0(H), one has int dom ϕ ⊂ int dom ∂ϕ, see [5, Proposition 1.7] for example), one can find v_n ∈ ∂(F(u_n, ·) + δ_K)(z_0), which means that for each y ∈ K,

F(u_n, y) − F(u_n, z_0) ≥ 〈v_n, y − z_0〉. (12)

Without loss of generality, we suppose that v_n ≠ 0 for all n ∈ N, and by considering y_0 = z_0 + ϱ v_n/‖v_n‖ ∈ S_F in (12), we have

F(u_n, y_0) − F(u_n, z_0) ≥ ϱ‖v_n‖. (13)

Using F(u_n, y_0) ≤ 0 (since y_0 ∈ S_F) and lim_{n→∞} F(u_n, z_0) = 0, we obtain from (13) that lim_{n→∞} ‖v_n‖ = 0. Coming back to (12), thanks to the monotonicity of F we rewrite it as

F(y, u_n) ≤ −F(u_n, z_0) − 〈v_n, y − z_0〉.

At the limit, we deduce F(y, u) ≤ lim inf F(y, u_n) ≤ 0, and we conclude by Minty's Lemma.
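Condition 2 of Proposition 1 is easy to see in the convex-minimization case: for F(x, y) = ϕ(y) − ϕ(x) the 3-monotonicity sum telescopes to zero. A minimal check (the particular convex ϕ below is an arbitrary illustrative choice):

```python
# Telescoping check for condition 2 of Proposition 1: with
# F(x, y) = phi(y) - phi(x), the sum F(x,y) + F(y,z) + F(z,x) is identically
# zero, so F is 3-monotone (phi is an arbitrary convex choice for illustration).
def phi(x):
    return abs(x) + x**4

def F(x, y):
    return phi(y) - phi(x)

for x, y, z in ((0.0, 1.0, -2.0), (3.0, -1.5, 0.5)):
    assert abs(F(x, y) + F(y, z) + F(z, x)) < 1e-12
print("F is 3-monotone")
```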

5. Asymptotic behaviour of solutions.

Theorem 5.1. Suppose F satisfies (H1)−(H4) and is demipositive. Then u(t), the solution of (EEP), weakly converges, as t → +∞, to some y ∈ S_F.

Proof. Let z ∈ S_F be the element used in Definition 4.1 of demipositivity, and set h_z(t) = (1/2)‖u(t) − z‖². Taking advantage of the monotonicity of F in (10), we obtain

ḣ_z(t) = 〈(d/dt)u(t), u(t) − z〉 ≤ F(u(t), z) ≤ −F(z, u(t)) ≤ 0, (14)


where the last inequality results from z ∈ S_F. Then h_z(t) is decreasing and bounded below, so it converges. To apply Opial's Lemma 2.6, it remains to show that every weak cluster point of u(t) belongs to S_F. Let t_n → ∞ and suppose u(t_n) ⇀ y. The fact that u is absolutely continuous implies that h_z is absolutely continuous. Then we have

∫_0^∞ |ḣ_z(t)| dt = −∫_0^∞ ḣ_z(t) dt = (1/2)‖u(0) − z‖² − lim_{t→∞} (1/2)‖u(t) − z‖² < ∞.

Hence ḣ_z ∈ L¹, and there is thus a subsequence (t_{n_k}) of (t_n) such that ḣ_z(t_{n_k}) → 0 as k → ∞. Passing to the limit in (14),

−ḣ_z(t_{n_k}) ≥ −F(u(t_{n_k}), z) ≥ 0,

we then obtain lim_{k→∞} F(u(t_{n_k}), z) = 0, so by u(t_{n_k}) ⇀ y and F demipositive, we conclude that y ∈ S_F.

6. Applications.

6.1. Constrained convex optimisation. Let ϕ be an element of Γ0(H) and K_0 a nonempty closed convex subset of H. Let us define on K × K the bifunction F(x, y) = ϕ̄(y) − ϕ̄(x), where K = dom ϕ̄ = K_0 ∩ dom ϕ and ϕ̄ = ϕ + δ_{K_0}. It is clear that F verifies (H1)−(H4) and that equation (10) becomes

−u̇(t) ∈ ∂ϕ̄(u(t)).

Noting that F is 3-monotone, hence demipositive by Proposition 1, we obtain the following theorem of Bruck, see [14, Theorem 4], as a corollary of Theorem 5.1.

Corollary 1. Suppose that Argmin ϕ̄ ≠ ∅, and consider u a solution of

u̇(t) + ∂ϕ̄(u(t)) ∋ 0.

Then, as t → +∞, u(t) weakly converges to a point in Argmin ϕ̄.
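Corollary 1 can be observed numerically in finite dimension (a sketch under an illustrative assumption: the convex function below is not strictly convex, so Argmin is a whole line, yet the gradient flow still settles at one of its points):

```python
# Finite-dimensional sketch of Corollary 1: explicit-Euler gradient flow
# u' = -grad(phi(u)) for phi(x1, x2) = 0.5*(x1 + x2)**2, a convex function
# that is NOT strictly convex: Argmin phi is the whole line x1 + x2 = 0.
x1, x2 = 3.0, 1.0
dt = 0.01
for _ in range(5000):
    s = x1 + x2                      # both partial derivatives equal x1 + x2
    x1, x2 = x1 - dt * s, x2 - dt * s

print(x1, x2)                        # settles at a point of Argmin phi
```

Note that x1 − x2 is conserved by the flow, so the limit is the projection of the initial point onto the line of minimizers, here (1, −1).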

6.2. Saddle point. Let H1, H2 be two real Hilbert spaces, let U (respectively V) be a nonempty closed convex subset of H1 (respectively H2), and let L : U × V → R be a closed convex-concave real bifunction, i.e. for each (u, v) ∈ U × V the real functions L(·, v) and −L(u, ·) are convex and lower semicontinuous. Consider the saddle-point problem:

(SP) find (ū, v̄) ∈ U × V such that L(ū, v) ≤ L(ū, v̄) ≤ L(u, v̄) for each (u, v) ∈ U × V.

We recall, see [20], that (ū, v̄) is a saddle point if and only if

max_{v∈V} inf_{u∈U} L(u, v) = min_{u∈U} sup_{v∈V} L(u, v), (15)

and this number is then equal to L(ū, v̄).

By setting H = H1 × H2, K = U × V and F_L : K × K → R defined by F_L((u, v), (u′, v′)) := L(u′, v) − L(u, v′) for each (u, v), (u′, v′) ∈ K, we see that problems (SP) and (1) are equivalent. In this case the variational inequality in (EEP) becomes:

L(x, v(t)) − L(u(t), y) + 〈u̇(t), x − u(t)〉 + 〈v̇(t), y − v(t)〉 ≥ 0 ∀(x, y) ∈ U × V. (16)

Taking in (16), respectively, y = v and x = u, we obtain, respectively,

−u̇ ∈ ∂₁(L(·, v) + δ_U)(u) and −v̇ ∈ ∂₂(−L(u, ·) + δ_V)(v). (17)

Conversely, from these two inclusions one can easily go back to (16); hence (16) and (17) are equivalent problems, in the sense that their solution sets are equal.
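The monotonicity of F_L used throughout is immediate: F_L is skew, F_L(w, w′) + F_L(w′, w) = 0 for every w, w′, whatever L is. A quick randomized check (the particular L below, that of Example 6.1, is only an illustration):

```python
import random

# F_L((u, v), (u', v')) = L(u', v) - L(u, v') satisfies
# F_L(w, w') + F_L(w', w) = 0 identically, hence F_L is monotone.
def L(u, v):                         # the bifunction of Example 6.1 (illustration)
    return (2.0 + u) * v

def FL(w, wp):
    (u, v), (up, vp) = w, wp
    return L(up, v) - L(u, vp)

random.seed(0)
for _ in range(100):
    w = (random.uniform(-1, 1), random.uniform(-1, 1))
    wp = (random.uniform(-1, 1), random.uniform(-1, 1))
    assert abs(FL(w, wp) + FL(wp, w)) < 1e-12
print("F_L is skew, hence monotone")
```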

In the next proposition, we shall give conditions which ensure that F_L is demipositive.

Proposition 2. Let L : U × V → R be a closed convex-concave real bifunction that verifies the condition (A):

L has a saddle point (ū, v̄) and, if (u, v) ∈ U × V satisfies L(ū, v) = L(ū, v̄) = L(u, v̄), then (u, v) is a saddle point.

Then the associated bifunction F_L is demipositive.

Proof. Let us consider a sequence (u_n, v_n) that weakly converges to a bipoint (u_0, v_0) and such that lim_n F_L((u_n, v_n), (ū, v̄)) = 0. We have

0 = lim_n −F_L((u_n, v_n), (ū, v̄)) ≥ lim inf_n (−L(ū, v_n)) + lim inf_n L(u_n, v̄) ≥ −L(ū, v_0) + L(u_0, v̄) = −F_L((u_0, v_0), (ū, v̄)),

where we have used that L(·, v̄) and −L(ū, ·) are convex and lower semicontinuous. But we already have F_L((ū, v̄), (u_0, v_0)) ≥ 0, and by the monotonicity of F_L we deduce that F_L((u_0, v_0), (ū, v̄)) = 0, so L(ū, v_0) = L(u_0, v̄). Thanks to (15), we get L(ū, v_0) = L(ū, v̄) = L(u_0, v̄). Using (A), (u_0, v_0) is a saddle point, and then F_L is demipositive.

Remark 1.

(i) The condition (A) is satisfied if we suppose the following condition (A′): L has a saddle point (ū, v̄) and, if (u, v) ∈ U × V is such that L((1 − λ)ū + λu, v̄) = L(ū, v̄) = L(ū, (1 − µ)v̄ + µv) holds for every 0 ≤ λ ≤ 1 and 0 ≤ µ ≤ 1, then u = ū and v = v̄.

(ii) Condition (A′) is satisfied in turn if the saddle point (ū, v̄) is such that the convex function L(·, v̄) is "strictly convex at ū" (not affine along any line segment containing ū), while the concave function L(ū, ·) is "strictly concave at v̄".

Remark 2. Condition (A′) is similar but not comparable to that introduced by Rockafellar in [29], where he assumes condition (A″):

L has a saddle point (ū, v̄) and, if (u, v) ∈ U × V is such that

L((1 − λ)ū + λu, (1 − µ)v̄ + µv) = (1 − λ)(1 − µ)L(ū, v̄) + λ(1 − µ)L(u, v̄) + (1 − λ)µL(ū, v) + λµL(u, v)

holds for every 0 ≤ λ ≤ 1 and 0 ≤ µ ≤ 1, then u = ū and v = v̄.

Here is an example where conditions (A) and (A′) are verified, while (A″) is not.

Example 6.1. Take the closed convex-concave real bifunction L : [−1, 1] × [−1, 1] → R defined by L(u, v) = (2 + u)v.

Consider the saddle point (ū, v̄) = (−1, 1); then L(ū, v̄) = 1. Suppose that (u, v) ∈ [−1, 1]² satisfies L(ū, v) = L(ū, v̄) = L(u, v̄), which means that v = 1 = 2 + u, i.e. u = −1 and v = 1, and then condition (A) is fulfilled. Condition (A′) is also easily verifiable.

Consider now the bipoint (u, v) = (0, 1) (which is not a saddle point); we have L((1 − λ)ū + λu, (1 − µ)v̄ + µv) = 1 + λ. A straightforward calculation also gives (1 − λ)(1 − µ)L(ū, v̄) + (1 − λ)µL(ū, v) + λ(1 − µ)L(u, v̄) + λµL(u, v) = 1 + λ, and condition (A″) fails.

As a consequence of Theorem 5.1, we obtain a result concerning the asymptotic behaviour of the dynamical system associated to closed convex-concave real bifunctions.

Corollary 2. Under the same assumptions as in Proposition 2, let us consider the two absolutely continuous functions u and v which are solutions of the dynamical system:

(DS)
u̇(t) ∈ −∂₁[L(·, v(t)) + δ_U](u(t)),
v̇(t) ∈ −∂₂[−L(u(t), ·) + δ_V](v(t)),
(u(0), v(0)) = (u_0, v_0).

Then (u(t), v(t)) weakly converges, as t → ∞, to a saddle point of L.

Returning to Example 6.1, consider the associated dynamical system (DS):

u̇(t) + v(t) ∈ −N_{[−1,1]}(u(t)),
v̇(t) − u(t) − 2 ∈ −N_{[−1,1]}(v(t)),
(u(0), v(0)) = (u_0, v_0).

Then there exist λ(t) ≥ 0, µ(t) ≥ 0 such that

ẇ(t) = A(t)w(t) + b, where w = (u, v)ᵀ,

A(t) = ( −λ(t)  −1  )
       (   1   −µ(t) )

and b = (0, 2)ᵀ. The solution of this problem can be written as

w(t) = R(t)w(0) + R(t) ∫_0^t R^{−1}(s) b ds,

where R ∈ M₂(R) is the resolvent associated to the Cauchy problem ∂R/∂t (t) = A(t)R(t) with R(0) = I₂ (I₂ being the 2 × 2 identity matrix). Corollary 2 tells us that such a solution must converge at infinity.
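A projected-Euler discretization (a sketch; the step size and starting point are arbitrary illustrative choices) of this system indeed settles at the saddle point (−1, 1):

```python
# Projected Euler for the dynamical system of Example 6.1 on [-1, 1]^2:
#   u' = -v  (projected onto [-1, 1]),   v' = 2 + u  (projected onto [-1, 1]).
def clip(z):                         # projection onto [-1, 1]
    return min(1.0, max(-1.0, z))

u, v = 0.0, 0.0
dt = 0.001
for _ in range(20000):
    u, v = clip(u - dt * v), clip(v + dt * (2.0 + u))

print(u, v)                          # reaches the saddle point (-1, 1)
```

Here the constraints are active at the limit, so the trajectory even reaches the saddle point in finite time rather than merely asymptotically.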

Example 6.2. Here is another example where condition (A) is not satisfied and the conclusion of Corollary 2 fails even though the saddle point is unique.

Consider L(u, v) = uv on R²; then the unique saddle point is z = (0, 0) and the associated bifunction is F_L((u, v), (u′, v′)) = L(u′, v) − L(u, v′) = u′v − uv′.

By taking u_n = (1, 1), we have F_L(u_n, z) = 0, but (1, 1) does not belong to S_{F_L}, which means that F_L is not demipositive.

The dynamical system in Corollary 2 becomes: u̇(t) = −v(t), v̇(t) = u(t), (u(0), v(0)) = (u_0, v_0).

Its solution is u(t) = u_0 cos t − v_0 sin t and v(t) = u_0 sin t + v_0 cos t; then u²(t) + v²(t) = u_0² + v_0², so the trajectories describe concentric circles and cannot converge.
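The non-convergence is immediate from the explicit solution: the squared radius u²(t) + v²(t) is conserved along the flow, as a direct evaluation shows.

```python
import math

# Conservation check for Example 6.2: along u(t) = u0*cos(t) - v0*sin(t),
# v(t) = u0*sin(t) + v0*cos(t), the quantity u^2 + v^2 equals u0^2 + v0^2
# for every t, so the trajectory stays on a circle and cannot converge.
u0, v0 = 1.0, 1.0

def traj(t):
    return (u0 * math.cos(t) - v0 * math.sin(t),
            u0 * math.sin(t) + v0 * math.cos(t))

radii = [u * u + v * v for u, v in map(traj, (0.0, 1.0, 10.0, 100.0))]
print(radii)                          # every entry equals u0^2 + v0^2 = 2
```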


6.3. Neural model for convex programming. We consider the convex programming problem (CP):

min f(x) subject to x ≥ 0 and g(x) ≤ 0,

where f is a real-valued convex function and g(x) = (g1(x), · · · , gm(x)) is an m-dimensional vector-valued function, both defined for non-negative vectors x = (x1, · · · , xn). As indicated in the introduction, our neural model for solving (CP) is described by:

(NM)
〈ẋ(t) + ∇f(x(t)) + Σ_{i=1}^m ln(1 + λ_i(t)) ∇g_i(x(t)), z − x(t)〉 ≥ 0, ∀z ∈ R^n_+,
〈λ̇(t), β − λ(t)〉 − Σ_{i=1}^m (1/(1 + λ_i(t))) g_i(x(t)) (β_i − λ_i(t)) ≥ 0, ∀β ∈ R^m_+,
(x(0), λ(0)) = (x_0, λ_0).

Theorem 6.3. Suppose that in (CP) the function f, or one of the constraints g_i, is strictly convex, and that the functions f and g are differentiable. Then the solution (x(t), λ(t)) of the neural problem (NM) is such that x(t) converges, as t → ∞, to a solution of (CP).

Proof. We associate to the nonlinear convex programming problem (CP) the following modified Lagrangian L, defined by:

L(x, λ) = f(x) + Σ_{i=1}^m ln(1 + λ_i) g_i(x),

for x ∈ R^n_+ and λ = (λ_i)_i ∈ R^m_+. One can easily see that if (x̄, λ̄) is a saddle point, then x̄ is an optimal vector for (CP). The nonlinear dynamical system (DS) becomes:

ẋ(t) + ∇f(x(t)) + Σ_{i=1}^m ln(1 + λ_i(t)) ∇g_i(x(t)) ∈ −N_{R^n_+}(x(t)),
λ̇(t) − ( (1/(1 + λ_i(t))) g_i(x(t)) )_i ∈ −N_{R^m_+}(λ(t)),
(x(0), λ(0)) = (x_0, λ_0),

and, equivalently,

〈ẋ(t) + ∇f(x(t)) + Σ_{i=1}^m ln(1 + λ_i(t)) ∇g_i(x(t)), z − x(t)〉 ≥ 0, ∀z ∈ R^n_+,
〈λ̇(t), β − λ(t)〉 − Σ_{i=1}^m (1/(1 + λ_i(t))) g_i(x(t)) (β_i − λ_i(t)) ≥ 0, ∀β ∈ R^m_+,
(x(0), λ(0)) = (x_0, λ_0),

which is nothing other than our neural model (NM). Since we have supposed that f, or one of the constraints g_i, is strictly convex, it follows that the Lagrangian L(·, λ) is strictly convex, and by definition L(x, ·) is strictly concave with respect to the multipliers. Then, by Remark 1 (ii), the bifunction F_L is demipositive. Then, by Corollary 2, the solution (x(t), λ(t)) of (NM) is such that x(t) converges, as t → ∞, to a solution of (CP).
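A toy instance (an illustrative assumption, not taken from the paper: n = m = 1, f(x) = x², g(x) = 1 − x, so (CP) is min x² over x ≥ 1 with optimum x* = 1, and ln(1 + λ*) = f′(x*) = 2 at equilibrium) shows the neural model driving x(t) to the optimum:

```python
import math

# Toy instance of (NM): n = m = 1, f(x) = x**2, g(x) = 1 - x.  Projected
# Euler on   x' = -2x + ln(1 + lam),   lam' = (1 - x)/(1 + lam),
# which is the one-dimensional form of the system in the proof above.
x, lam = 0.0, 0.0
dt = 0.005
for _ in range(200000):                       # total time T = 1000
    dx = -2.0 * x + math.log(1.0 + lam)       # -grad f - ln(1+lam)*grad g
    dlam = (1.0 - x) / (1.0 + lam)            # g(x)/(1 + lam)
    x = max(0.0, x + dt * dx)                 # projection onto R_+
    lam = max(0.0, lam + dt * dlam)

print(x, lam)                                 # x -> 1 and ln(1+lam) -> 2
```

The multiplier equilibrates at λ* = e² − 1 ≈ 6.389; convergence of the multiplier is slow because the attraction rate scales like 1/(1 + λ*)².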


Received February 2013; revised January 2014.

E-mail address: [email protected]

E-mail address: [email protected]

