
Physics Letters A 307 (2003) 118–128

www.elsevier.com/locate/pla

A neural network for monotone variational inequalities with linear constraints

Xing-Bao Gao a, Li-Zhi Liao b,∗

a College of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi 710062, PR China
b Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong, PR China

Received 25 June 2002; received in revised form 18 November 2002; accepted 22 November 2002

Communicated by A.P. Fordy

Abstract

Variational inequality is a uniform approach for many important optimization and equilibrium problems. Based on the necessary and sufficient conditions for the solution, this Letter presents a neural network model for solving linearly constrained variational inequalities. Several sufficient conditions are provided to ensure the asymptotic stability of the proposed network. There is no need to estimate the Lipschitz constant, and no extra parameter is introduced. Since the sufficient conditions provided in this Letter can be easily checked in practice, these new results have both theoretical and application value. The validity and transient behavior of the proposed neural network are demonstrated by some numerical examples.
© 2002 Elsevier Science B.V. All rights reserved.

Keywords: Convergence; Stability; Variational inequality; Neural network

MSC: 62M45; 82C32; 92B20

1. Introduction

A variational inequality problem is to find a vector u∗ ∈ U such that

(1) (VI(F, U))   $(u - u^*)^T F(u^*) \ge 0, \quad \forall u \in U,$

where U is a nonempty closed convex subset of R^n, and F is a continuous mapping from R^n into itself. Problem (1) includes nonlinear complementarity problems (U = R^n_+), systems of nonlinear equations (U = R^n), and many optimization problems such as linear and quadratic programming, nonlinear programming, extended linear and quadratic programming (see [8,15]), minimax problems, and so on. Thus it has many important applications (see [7,9]).
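To make this last point concrete (an illustrative aside added here, not part of the original text), recall the standard fact that a differentiable convex program over a closed convex set U is a special case of (1) with F = ∇g:

$$u^* \in \arg\min_{u \in U} g(u) \quad\Longleftrightarrow\quad (u - u^*)^T \nabla g(u^*) \ \ge\ 0, \quad \forall u \in U,$$

i.e., u∗ solves VI(∇g, U); likewise, the nonlinear complementarity problem u ≥ 0, F(u) ≥ 0, u^T F(u) = 0 is exactly VI(F, R^n_+).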

The research was supported in part by grants FRG/99-00/II-23 and FRG/00-01/II-63 of Hong Kong Baptist University.
∗ Corresponding author.

E-mail address: [email protected] (L.-Z. Liao).

0375-9601/02/$ – see front matter © 2002 Elsevier Science B.V. All rights reserved.
doi:10.1016/S0375-9601(02)01673-0


In many engineering and scientific applications, such as traffic equilibrium and network economics problems, U often has the following structure:

(2)   $U = \{u \in R^n \mid Du = b,\ u \in \Omega\},$

where D ∈ R^{m×n}, rank(D) = m, m ≤ n, b ∈ R^m, and Ω is a simple closed convex subset of R^n; for example, Ω is a box {u ∈ R^n | c ≤ u ≤ d} or a ball {u ∈ R^n | ‖u‖ ≤ r}.

By attaching a Lagrange multiplier v ∈ R^m to the linear constraint Du = b, we obtain an equivalent form of problem (1): find z∗ ∈ C such that

(3)   $(z - z^*)^T f(z^*) \ge 0, \quad \forall z \in C,$

where

(4)   $z = \begin{pmatrix} u \\ v \end{pmatrix}, \qquad f(z) = \begin{pmatrix} F(u) - D^T v \\ Du - b \end{pmatrix}, \qquad C = \Omega \times R^m.$
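One direction of this equivalence can be checked directly from (3), (4) (a short verification added here for the reader; it is not spelled out in the Letter):

$$\begin{aligned}
&\text{taking } z=(u^*,v),\ v\in R^m \text{ arbitrary, in (3):}\quad (v-v^*)^T(Du^*-b)\ \ge\ 0\ \ \forall v\ \Longrightarrow\ Du^*=b;\\
&\text{taking } z=(u,v^*),\ u\in U\ (\text{so } Du=b=Du^*):\quad (u-u^*)^T\bigl[F(u^*)-D^Tv^*\bigr]\ \ge\ 0,\\
&\text{and } (u-u^*)^TD^Tv^* = (Du-Du^*)^Tv^* = 0,\ \text{hence } (u-u^*)^TF(u^*)\ \ge\ 0\ \ \forall u\in U,\ \text{which is (1).}
\end{aligned}$$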

For solving problem (1) and problem (3), (4), there are many iterative methods (see [5,9,10]). However, these algorithms are not suitable for real-time on-line implementation on a computer. Therefore, it is of great practical interest to develop neural network models which can provide real-time on-line solutions.

Recently, neural networks for optimization problems have achieved many significant results (see [1,2,6,11,13,18,19]). Among them, Kennedy and Chua [11] proposed a neural network which employs both the gradient method and the penalty function method for solving nonlinear problems. Their energy function can be viewed as an "inexact" penalty function, and thus the true optimizer can only be obtained when the penalty parameter is infinite. Xia and Wang [18] proposed a neural network for problem (1); in particular, problem (3), (4) can also be solved by this model. However, their model needs an estimate of the Lipschitz constant, and its structure is quite complex. It is well known that this constant is hard to estimate in practice. To overcome this shortfall, this Letter proposes a neural network model for solving problem (3), (4) by means of the necessary and sufficient conditions for its solution. There is no need to estimate the Lipschitz constant in our model, so our model's structure is much simpler than the existing ones. Several sufficient conditions are provided to ensure the stability and convergence of the proposed neural network. Since the proposed network model can be used to solve a broad class of optimization problems, it has great application potential. The effectiveness of the proposed network is demonstrated by several numerical examples.

In our following discussions, we let ‖·‖ denote the Euclidean norm, I_n denote the identity matrix of order n, and (a)_b^c = min{c, max{a, b}}, where c ≥ b. ∇F(u) denotes the Jacobian matrix of the differentiable mapping F evaluated at u, and ∇g(u) denotes the gradient vector of the differentiable function g(u) evaluated at u. For any vector u ∈ R^n, u^T denotes its transpose. The projection operator P_Ω onto a closed convex set Ω ⊆ R^n is defined by

$P_\Omega(x) = \arg\min_{u \in \Omega} \|x - u\|.$
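For the simple sets Ω mentioned above (boxes and balls), this projection has a closed form. The following Python sketch illustrates it (an editorial illustration only; the function names and variable names are not from the Letter):

import numpy as np

def project_box(x, c, d):
    # P_Omega for the box {u | c <= u <= d}: componentwise clipping onto [c, d].
    return np.minimum(d, np.maximum(x, c))

def project_ball(x, r):
    # P_Omega for the ball {u | ||u|| <= r}: rescale x if it lies outside the ball.
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x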

A basic property of the projection mapping on a closed convex set is (see [12])

(5)   $[x - P_\Omega(x)]^T [P_\Omega(x) - y] \ge 0, \quad \forall x \in R^n,\ y \in \Omega.$

A mapping F is said to be monotone on Ω ⊆ R^n if, for each pair of points x, y ∈ Ω, there is

(6)   $(x - y)^T [F(x) - F(y)] \ge 0.$

F is said to be strictly monotone on Ω if strict inequality holds whenever x ≠ y.

Throughout the Letter, we let C∗ = {z = (u^T, v^T)^T ∈ C | z solves problem (3), (4)}. Now we make the following assumptions.

Assumption A1. C∗ ≠ ∅ and there exists a finite z∗ ∈ C∗.


Assumption A2. The mapping F is continuously differentiable on an open convex set including Ω.

Assumption A3. The mapping F is monotone on Ω.

Lemma 1 (Barbalat [16]). If a differentiable function f(t) has a finite limit as t → ∞, and df(t)/dt is uniformly continuous, then df(t)/dt → 0 as t → ∞.

The rest of the Letter is organized as follows. In Section 2, a neural network model for problem (3), (4) is proposed. The stability and convergence of the proposed network are analyzed in Section 3. The simulation results of the proposed neural network are reported in Section 4. Finally, some concluding remarks are given in Section 5.

2. A neural network model

First, we describe the necessary and sufficient conditions for the solution of problem (3), (4), which build the theoretical foundation for constructing the network.

Lemma 2. z∗ = ((u∗)^T, (v∗)^T)^T ∈ C∗ if and only if u∗ = P_Ω[u∗ − F(u∗) + D^T v∗] and Du∗ = b.

Proof. From (3), (4) and (5), it can be easily proved.

The result of Lemma 2 indicates that u∗ is the projection of some vector onto Ω. For simplicity, we denote z = (u^T, v^T)^T, z∗ = ((u∗)^T, (v∗)^T)^T, and ū = P_Ω[u − F(u) + D^T(v − Du + b)]. Then the above result suggests the following neural network model for (3), (4), which consists of the following merit function and dynamical system:

The merit function:

(7)   $V(z) = (u - \bar u)^T\bigl[F(u) - D^T(v - Du + b)\bigr] + \tfrac{1}{2}\bigl[\|Du - b\|^2 + \|z - z^*\|^2 - \|u - \bar u\|^2\bigr], \quad \forall z \in C,$

where z∗ ∈ C∗ and is finite.

The dynamical system:

(8)   $\dfrac{dz}{dt} = \dfrac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = -G(z) = -\begin{pmatrix} u - \bar u \\ Du - b \end{pmatrix}.$

It should be noted that the function defined in (7) was introduced by Xia and Wang [19] for globally projected dynamical systems with asymmetric connection weights when D = 0 and b = 0.

Obviously, when Ω is simple, for example, Ω = {u ∈ R^n | l_i ≤ u_i ≤ h_i for i = 1, 2, . . . , n} (−l_i or h_i could be +∞), the projection operator P_Ω(u) can be easily implemented by using some activation functions in [2]. Moreover, one can easily see that the circuit realizing neural network (8) consists of m + n integrators, n processors for F(u), 2mn amplifiers, some summers, and n processors or activation functions for P_Ω(·). Thus the complexity of the proposed neural network depends only on F(u) when Ω is simple. In contrast with the circuit of the model in [18], model (8) does not need to compute F(ū) and has no extra parameters. Thus our model (8) is much simpler.
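To give a concrete feel for how (8) could be simulated in software, the following Python sketch discretizes the system with a simple forward-Euler step for a box set Ω = {u | lo ≤ u ≤ hi} (an editorial illustration under those assumptions; the Letter analyzes the continuous-time system, and this particular discretization, its step size h, and all function names here are not from the original):

import numpy as np

def simulate_network(F, D, b, lo, hi, z0, h=1e-3, steps=200000, tol=1e-8):
    """Forward-Euler integration of dz/dt = -G(z) for the network (8),
    with Omega = {u | lo <= u <= hi}, so P_Omega is componentwise clipping."""
    m, n = D.shape
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(steps):
        u, v = z[:n], z[n:]
        # u_bar = P_Omega[u - F(u) + D^T (v - Du + b)]
        u_bar = np.clip(u - F(u) + D.T @ (v - D @ u + b), lo, hi)
        G = np.concatenate([u - u_bar, D @ u - b])
        if np.linalg.norm(G) < tol:   # G(z) = 0 characterizes solutions (Lemma 3)
            break
        z -= h * G                    # Euler step along -G(z)
    return z

A smaller step size h trades speed for fidelity to the continuous-time trajectory; the stopping test reflects Lemma 3, since the equilibria of (8) are exactly the solutions of (3), (4).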

For the dynamical system (8), we have the following result.

Lemma 3. z ∈ C∗ if and only if z is an equilibrium point of network (8).

Proof. From Lemma 2, the results are trivial.


3. Stability analysis

In this section, we will study some stability and convergence properties for (8). First, we explore some properties of V(z).

Lemma 4. Let V(z) be the function defined in (7). Then the following statements are true.

(i) If (A1) holds and z∗ ∈ C∗ is finite, then V(z) ≥ (1/2)‖z − z∗‖² for all z ∈ C.

(ii) If (A1) and (A2) hold and z∗ ∈ C∗ is finite, then for all z ∈ C, we have

(9)   $\nabla V(z)^T G(z) \ge (u - \bar u)^T \nabla F(u)(u - \bar u) + (u - u^*)^T\bigl[F(u) - F(u^*)\bigr].$

Proof. (i) By setting y = ū ∈ Ω and x = u − [F(u) − D^T(v − Du + b)] in (5), we have

(10)   $(u - \bar u)^T\bigl[F(u) - D^T(v - Du + b)\bigr] \ge \|u - \bar u\|^2.$

This implies the result.

(ii) According to [4],

$$\nabla V(z) = \begin{pmatrix} u - u^* + F(u) - D^T(v - Du + b) \\ v - v^* \end{pmatrix} + \begin{pmatrix} \nabla F(u) + D^T D - I_n & D^T \\ -D & 0 \end{pmatrix} \begin{pmatrix} u - \bar u \\ Du - b \end{pmatrix}.$$

Thus

$$\begin{aligned}
\nabla V(z)^T G(z) &= \bigl[u - u^* + F(u) - D^T(v - Du + b)\bigr]^T (u - \bar u) - \|u - \bar u\|^2 \\
&\quad + (u - \bar u)^T\bigl[\nabla F(u) + D^T D\bigr](u - \bar u) + (v - v^*)^T (Du - b) \\
&= (u - \bar u)^T \nabla F(u)(u - \bar u) + (v - v^*)^T (Du - b) + \bigl\|D(u - \bar u)\bigr\|^2 \\
&\quad + (u - u^*)^T\bigl[F(u) - D^T(v - Du + b)\bigr] + \bigl[u - \bar u - F(u) + D^T(v - Du + b)\bigr]^T(\bar u - u^*) \\
&\ge (u - \bar u)^T \nabla F(u)(u - \bar u) + \bigl\|D(u - \bar u)\bigr\|^2 + \|Du - b\|^2 \\
&\quad + (u - u^*)^T\bigl[F(u) - F(u^*)\bigr] + (u - u^*)^T\bigl[F(u^*) - D^T v^*\bigr], \qquad (11)
\end{aligned}$$

where the last step follows from

$\bigl[u - F(u) + D^T(v - Du + b) - \bar u\bigr]^T(\bar u - u^*) \ge 0,$

obtained by setting x = u − [F(u) − D^T(v − Du + b)] and y = u∗ ∈ Ω in (5). Therefore (3) and (11) imply (9).

The following result guarantees the existence and uniqueness of the solution z(t) of neural network (8).

Theorem 1. If (A1)–(A3) hold, then for any z0 ∈ C, there exists a unique continuous solution z(t) ∈ C of (8) with z(0) = z0 for all t ≥ 0.

Proof. From (5), we know that the projection mapping P_Ω(·) is nonexpansive. Since F(u) is continuously differentiable, it is easy to prove that F(u) and G(z) are locally Lipschitz continuous.

From (8) and Lemma 3.7 in [17], we have

$$\begin{aligned}
\frac{d}{dt}\bigl\|z(t) - P_C(z(t))\bigr\|^2
&= \frac{d}{dt}\bigl\|u(t) - P_\Omega(u(t))\bigr\|^2
= 2\bigl(u(t) - P_\Omega(u(t))\bigr)^T \frac{du(t)}{dt} \\
&= -2\bigl(u(t) - P_\Omega(u(t))\bigr)^T\bigl[u(t) - \bar u(t)\bigr]
\le -2\bigl\|u(t) - P_\Omega(u(t))\bigr\|^2,
\end{aligned}$$


where the last step comes from

$\bigl[u(t) - P_\Omega(u(t))\bigr]^T\bigl[P_\Omega(u(t)) - \bar u(t)\bigr] \ge 0,$

derived from (5). Thus

$0 \le \bigl\|z(t) - P_C(z(t))\bigr\|^2 \le \bigl\|z_0 - P_C(z_0)\bigr\|^2 = 0.$

This indicates that z(t) = P_C(z(t)) ∈ C.

From (A2), (A3) and [14], we know that ∇F(u) is positive semi-definite for all u ∈ Ω. Thus by (6) and Lemma 4(ii), we have

$\dfrac{d}{dt}V(z(t)) = -\nabla V(z(t))^T G(z(t)) \le 0.$

This implies that V(z(t)) is monotonically nonincreasing on [0, ∞). From Lemma 4(i), we can easily see that z(t) is bounded for all t ≥ 0. From the Picard–Lindelöf theorem, we know that there exists a unique solution z(t) of (8) for any initial z0 ∈ C on [0, +∞).

The results of Lemma 3 and Theorem 1 indicate that neural network model (8) is well defined. From the proof of Theorem 1, we see that if z0 ∉ C, then the solution z(t) of (8) will eventually move into C and stay in C from then on. Now we prove the convergence result.

Theorem 2. Assume that (A1)–(A3) hold. Let E = {z ∈ C | G(z)^T ∇V(z) = 0}, and let z(t) be the solution of (8) with z(0) = z0 ∈ C. Then dist(z(t), E) tends to zero as t → +∞, where dist(z, E) = min_{w∈E} ‖z − w‖.

Proof. From Theorem 1, z(t) ∈ C is the unique solution of (8) with z(0) = z0 ∈ C for all t ≥ 0. From Lemma 4(i), the function V(z) is bounded below. From the proof of Theorem 1, V(z(t)) is non-negative and monotonically nonincreasing for all t ≥ 0. Then V(z(t)) has a finite limit as t → ∞. Furthermore, by (8) and the boundedness of z(t) for t ≥ 0, z(t) is uniformly continuous. So G(z(t)) and ∇V(z(t)) are also uniformly continuous by their continuity. Thus lim_{t→+∞} dV(z(t))/dt = 0 by Lemma 1. This result and the boundedness of z(t) for t ≥ 0 imply that E ≠ ∅, and dist(z(t), E) tends to zero as t → +∞.

Obviously C∗ ⊆ E, but in general E ⊄ C∗, i.e., neural network (8) might not converge to C∗. Thus, in the following, several sufficient conditions are provided to ensure that the solution of neural network (8) converges to a solution of problem (3), (4).

Theorem 3. Suppose that (A1)–(A3) hold. If, in addition, one of the following conditions is satisfied:

(i) let

$\nabla F(u) = \begin{pmatrix} F_{11}(u) & -F_{12}(u) \\ F_{12}(u)^T & F_{22}(u) \end{pmatrix},$

where F₁₂(u) ∈ R^{n₁×n₂}, n = n₁ + n₂, F₁₁(u) ∈ R^{n₁×n₁} is positive definite (but not necessarily symmetric), and F₂₂(u) ∈ R^{n₂×n₂} is symmetric and positive semi-definite for all u ∈ U; or F₁₁(u) is symmetric and positive semi-definite, and F₂₂(u) is positive definite (but not necessarily symmetric) for all u ∈ U;

(ii) F is strictly monotone on U;

(iii) let z∗ (finite) ∈ C∗; for every u ∈ U, there exists a k(u, u∗) > 0 such that (u − u∗)^T F(u) = 0 implies F(u) = k(u, u∗)F(u∗);

(iv) ∇F(u) is symmetric on U;

(v) ∀u ∈ Ω, if (u − u∗)^T F(u) = 0 and Du = b, then u = ū, where z∗ (finite) ∈ C∗;

(vi) ∀u ∈ Ω, if (u − ū)^T ∇F(u)(u − ū) = 0, (u − u∗)^T F(u) = 0 and Du = b, then u = ū, where z∗ (finite) ∈ C∗;

then for any z0 ∈ C, the solution z(t) of (8) with z(0) = z0 will converge to a solution of problem (3), (4).

Proof. First, we prove E = C∗. From Theorem 2, we only need to prove E ⊆ C∗. If z ∈ E, then dV(z)/dt = 0, and by (11) and (3), we have

(12)   $(u - \bar u)^T \nabla F(u)(u - \bar u) = 0, \quad (u - u^*)^T\bigl[F(u) - F(u^*)\bigr] = 0, \quad (u - u^*)^T\bigl[F(u^*) - D^T v^*\bigr] = 0, \quad Du = D\bar u = b,$

where u ∈ Ω and z∗ (finite) ∈ C∗. Thus u and ū = P_Ω[u − F(u) + D^T v] belong to U, and

(13)   $(u - u^*)^T F(u) = (u - u^*)^T F(u^*) = 0.$

(i) Suppose that F₁₁(u) is positive definite, and F₂₂(u) is symmetric and positive semi-definite for all u ∈ U. Let u = (x^T, y^T)^T, ū = (x̄^T, ȳ^T)^T and u∗ = ((x∗)^T, (y∗)^T)^T, where x, x̄ and x∗ ∈ R^{n₁}, y, ȳ and y∗ ∈ R^{n₂}. Then x = x̄ from the first equation in (12), and

$$\begin{aligned}
0 = (u - u^*)^T\bigl[F(u) - F(u^*)\bigr]
&= \int_0^1 (u - u^*)^T \nabla F\bigl(u^* + s(u - u^*)\bigr)(u - u^*)\,ds \\
&= \int_0^1 (x - x^*)^T F_{11}\bigl(u^* + s(u - u^*)\bigr)(x - x^*)\,ds
 + \int_0^1 (y - y^*)^T F_{22}\bigl(u^* + s(u - u^*)\bigr)(y - y^*)\,ds.
\end{aligned}$$

From the assumptions on F₁₁(u) and F₂₂(u) on U, it follows that

$x = x^* \quad\text{and}\quad F_{22}\bigl(u^* + s(u - u^*)\bigr)(y - y^*) = 0 \quad\text{for all } s \in [0, 1].$

Thus

(14)   $(u - \bar u)^T\bigl[F(u) - F(u^*)\bigr] = \displaystyle\int_0^1 (u - \bar u)^T \nabla F\bigl(u^* + s(u - u^*)\bigr)(u - u^*)\,ds = \int_0^1 (y - \bar y)^T F_{22}\bigl(u^* + s(u - u^*)\bigr)(y - y^*)\,ds = 0.$

Since ū ∈ U, by (3), (4) and (13), we have

(15)   $(u - \bar u)^T F(u^*) = (u^* - \bar u)^T F(u^*) = -(\bar u - u^*)^T\bigl[F(u^*) - D^T v^*\bigr] \le 0.$

From (10), (14) and (15), it follows that

$\|u - \bar u\|^2 \le (u - \bar u)^T\bigl[F(u) - D^T v\bigr] = (u - \bar u)^T\bigl[F(u) - F(u^*)\bigr] + (u - \bar u)^T F(u^*) \le 0.$

Therefore u = ū. From Lemma 3, this result and Du = b imply z ∈ C∗. Thus E ⊆ C∗. Similarly, we can prove E ⊆ C∗ when F₁₁(u) is symmetric and positive semi-definite and F₂₂(u) is positive definite for all u ∈ U.

(ii) Suppose that F is strictly monotone on U; then u = u∗ by the second equation in (12) and u ∈ U. But from (5), we have

$\|\bar u - u\|^2 = \|\bar u - u^*\|^2 \le (v - v^*)^T D(\bar u - u^*) = 0.$


That is, u = u∗ = ū. Thus E ⊆ C∗.

(iii) From the assumption and (13), we know that F(u) = k(u, u∗)F(u∗) with k(u, u∗) > 0. Thus by (10) and (15), we have

(16)   $\|u - \bar u\|^2 \le (u - \bar u)^T F(u) = k(u, u^*)(u - \bar u)^T F(u^*) \le 0.$

That is, u = ū. So E ⊆ C∗ from Lemma 3.

(iv) Since ∇F(u) is symmetric and F is monotone on U, there exists a differentiable convex function g(u) such that ∇g(u) = F(u) for all u ∈ U, and g(u∗) = min_{y∈U} g(y). From the convexity of g(u) and (13), it follows that

$g(u^*) \ge g(u) + (u^* - u)^T F(u) = g(u).$

Thus g(u) = g(u∗). This implies that u ∈ U is an optimal solution of min_{y∈U} g(y). Hence (y − u)^T F(u) ≥ 0 for all y ∈ U. From the left inequality of (16), we have u = ū. So E ⊆ C∗.

The results for (v) and (vi) are trivial.

Now we prove the convergence of z(t) to a solution of problem (3), (4). From Theorem 1, {z(t) | t ≥ 0} is bounded. Thus there exist a limit point ẑ and a sequence {t_n} with 0 < t₁ < t₂ < · · · < t_n < t_{n+1} < · · · and lim_{n→+∞} t_n = +∞ such that

(17)   $\lim_{n \to +\infty} z(t_n) = \hat z.$

From the proof of Theorem 2, dV(z(t))/dt is continuous and lim_{t→+∞} dV(z(t))/dt = 0. Thus ẑ ∈ E. So ẑ ∈ C∗ by C∗ = E. Similar to the definition of V(z) in (7), we define

$\hat V(z) = (u - \bar u)^T\bigl[F(u) - D^T(v - Du + b)\bigr] + \tfrac{1}{2}\bigl[\|Du - b\|^2 + \|z - \hat z\|^2 - \|u - \bar u\|^2\bigr], \quad \forall z \in C.$

Then, similar to the proofs of Lemma 4(i) and (ii), we can conclude that V̂(z) ≥ (1/2)‖z − ẑ‖² for z ∈ C and that V̂(z(t)) is monotonically nonincreasing on [0, +∞). From the continuity of the function V̂(z), it follows that ∀ε > 0, ∃δ > 0 such that

(18)   $\hat V(z) < \dfrac{\varepsilon^2}{2}, \quad \text{if } \|z - \hat z\| \le \delta.$

From (17), (18) and the monotonicity of V̂(z(t)), there exists a t_N such that

$\|z(t) - \hat z\|^2 \le 2\hat V(z(t)) \le 2\hat V(z(t_N)) < \varepsilon^2, \quad \text{when } t \ge t_N.$

That is, lim_{t→+∞} z(t) = ẑ.

Remark. Neural network (8) with D = 0 and b = 0 is just the globally projected dynamical system proposed by Friesz et al. [3] with λ = α = 1. Thus the results obtained above can also be applied to their globally projected dynamical system.

4. Illustrative examples

In this section, four examples are provided to illustrate both the theoretical results achieved in Section 3 and thesimulation performance of the dynamical system.


Fig. 1. Transient behavior of network (8) in Example 1. (a) The first initial point. (b) The second initial point.

Example 1. Consider a nonlinear variational inequality problem VI(F, U) where the constraint set U and the mapping F are defined by

$U = \Bigl\{x \in R^5 \ \Bigm|\ \textstyle\sum_{i=1}^{5} x_i = 10,\ x_i \ge 0,\ i = 1, 2, \ldots, 5\Bigr\}$

and

$F(x) = Mx + \rho N(x) + q,$

where M is a 5 × 5 asymmetric positive definite matrix given below and N_i(x) = arctan(x_i − 2), i = 1, 2, . . . , 5. The parameter ρ is used to vary the degree of asymmetry and nonlinearity. The data used in this example are given as follows: q = (5.308, 0.008, −0.938, 1.024, −1.312)^T and

$M = \begin{pmatrix}
0.726 & -0.949 & 0.266 & -1.193 & -0.504 \\
1.645 & 0.678 & 0.333 & -0.217 & -1.443 \\
-1.016 & -0.225 & 0.769 & 0.934 & 1.007 \\
1.063 & 0.567 & -1.144 & 0.550 & -0.548 \\
-0.259 & 1.453 & -1.073 & 0.509 & 1.026
\end{pmatrix}.$

Thus D = (1, 1, 1, 1, 1), b = 10, and Ω = {x ∈ R^5 | x ≥ 0}. For any ρ ≥ 0, its exact solution is x∗ = (2, 2, 2, 2, 2)^T. Since ∇F(u) is positive definite, this problem satisfies Theorem 3(ii).

Fig. 1(a) and (b) depict the trajectories of neural network (8) with the initial values (3.5, 0.5, 2.2, 1, 2.5, 1.5)^T, ρ = 1, and (6, 4.5, 3, 1.5, 0, −1.5)^T, ρ = 0.01, converging to its solution x∗ and y∗ = 2, respectively.
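As a quick sanity check of the reported solution (an editorial Python verification, not part of the original), one can confirm the two conditions of Lemma 2 at x∗ = (2, 2, 2, 2, 2)^T with the multiplier y∗ = 2 reported above:

import numpy as np

M = np.array([[ 0.726, -0.949,  0.266, -1.193, -0.504],
              [ 1.645,  0.678,  0.333, -0.217, -1.443],
              [-1.016, -0.225,  0.769,  0.934,  1.007],
              [ 1.063,  0.567, -1.144,  0.550, -0.548],
              [-0.259,  1.453, -1.073,  0.509,  1.026]])
q = np.array([5.308, 0.008, -0.938, 1.024, -1.312])
D = np.ones((1, 5)); b = np.array([10.0])

def F(x, rho=1.0):
    return M @ x + rho * np.arctan(x - 2.0) + q

x_star, v_star = 2.0 * np.ones(5), np.array([2.0])
# Lemma 2: x* = P_Omega[x* - F(x*) + D^T v*] (here Omega is the nonnegative orthant)
# and D x* = b.
proj = np.maximum(x_star - F(x_star) + D.T @ v_star, 0.0)
print(np.allclose(proj, x_star), np.allclose(D @ x_star, b))   # expected: True True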

Example 2. Consider the following quadratic programming problem:

$$\begin{aligned}
\min\ & x_1^2 + x_2^2 + x_1 x_2 - 5x_1 - 5x_2 \\
\text{s.t.}\ & 2x_1 + x_2 + x_5 = 3, \\
& -x_1 + 3x_3 + 2x_4 + x_6 = 2, \\
& 2x_2 + x_3 + x_5 = 3, \\
& x \in \Omega,
\end{aligned}$$


Fig. 2. Transient behavior of (8) in Example 2. (a) The first initial point. (b) The second initial point.

where Ω = {x ∈ R^6 | 0 ≤ x_i ≤ 2 for i = 1, 2, 3, 4, 0 ≤ x₅ ≤ 1, 0 ≤ x₆ ≤ 3}. Its optimal solution is x∗ = (0.75, 1.5, 0, β, 0, 2.75 − 2β)^T, where 0 ≤ β ≤ 1.375.

Since the problem is a convex quadratic problem, Theorem 3(iv) is satisfied.

Fig. 2(a) and (b) depict the trajectories of neural network (8) with initial points (2, 1, 1.5, 0.4, 0.8, 2.5, 0, −0.5, −1)^T and (0, 2, 1, 0.6, 0.2, 3, −0.5, −0.3, −1)^T converging to x∗ = (0.75, 1.5, 0, 0.911, 0, 0.928)^T and (0.75, 1.5, 0, 0.839628, 0, 1.070744)^T, with y∗ = (−1, 0, −0.125)^T, respectively.
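For completeness (an editorial note; the Letter does not write these out), the VI data (2)–(4) for this example can be read off from the problem as

$$F(x) = \begin{pmatrix} 2x_1 + x_2 - 5 \\ x_1 + 2x_2 - 5 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \qquad
D = \begin{pmatrix} 2 & 1 & 0 & 0 & 1 & 0 \\ -1 & 0 & 3 & 2 & 0 & 1 \\ 0 & 2 & 1 & 0 & 1 & 0 \end{pmatrix}, \qquad
b = \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix},$$

so that F = ∇(objective), Ω is the box given above, and network (8) applies directly.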

Example 3. Consider the following nonlinear programming problem:

$$\begin{aligned}
\min\ & f(x) = 0.4x_1 + x_1^2 + x_2^2 - x_1 x_2 + 0.5x_3^2 + 0.5x_4^2 + x_1^3/30 \\
\text{s.t.}\ & x_1 + 0.5x_2 - x_3 = 0.4, \\
& 0.5x_1 + x_2 - x_4 = 0.5, \\
& x \ge 0.
\end{aligned}$$

The optimal solution is x∗ = (0.2, 0.4, 0, 0)^T and the optimal value is f(x∗) = 0.20026667. Obviously, the objective function is convex on Ω = {x ∈ R^4 | x ≥ 0}, and Theorem 3(iv) is satisfied.

Fig. 3(a) and (b) depict the trajectories of neural network (8) with initial points (0.6, 0.4, 0.2, −0.2, −0.1, 0.3)^T and (0.1, −0.1, −0.2, 0.2, 0, 0.4)^T converging to x∗ and y∗ = (52/375, 199/375)^T, respectively.
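Similarly (again an editorial note, not written out in the Letter), the corresponding VI data here are

$$F(x) = \nabla f(x) = \begin{pmatrix} 0.4 + 2x_1 - x_2 + x_1^2/10 \\ 2x_2 - x_1 \\ x_3 \\ x_4 \end{pmatrix}, \qquad
D = \begin{pmatrix} 1 & 0.5 & -1 & 0 \\ 0.5 & 1 & 0 & -1 \end{pmatrix}, \qquad
b = \begin{pmatrix} 0.4 \\ 0.5 \end{pmatrix},$$

with Ω = {x ∈ R^4 | x ≥ 0}.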

Example 4. Consider the following linear complementarity problem:

(19)   $x \ge 0, \quad Nx + q \ge 0, \quad x^T(Nx + q) = 0,$

where N ∈ R^{n×n} and q ∈ R^n. Obviously, this problem can be solved by (8) with D = 0 and b = 0. Take an example with

$N = \begin{pmatrix}
1 & -2 & 1 & -1 \\
0 & 1 & -1 & 2 \\
-1 & 1 & 2 & -3 \\
1 & -2 & -1 & 2
\end{pmatrix}
\qquad\text{and}\qquad
q = \begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}.$


Fig. 3. Transient behavior of network (8) in Example 3. (a) The first initial point. (b) The second initial point.


Fig. 4. Transient behavior of network (8) in Example 4. (a) The first initial point. (b) The second initial point.

Its solution is x∗ = (0, 0, 1, 1)^T. One can verify that this problem satisfies Theorem 3(vi), but Theorem 3(v) is not satisfied.

Fig. 4(a) and (b) depict the trajectories of neural network (8) with the different initial points (0.7, 0.3, 0.8, 1.2)^T and (2, 1, 0.6, 0.2)^T converging to the exact solution x∗.
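A short numerical check of the stated solution (again an editorial addition, not in the original):

import numpy as np

N = np.array([[ 1, -2,  1, -1],
              [ 0,  1, -1,  2],
              [-1,  1,  2, -3],
              [ 1, -2, -1,  2]], dtype=float)
q = np.array([1., -1., 1., -1.])
x = np.array([0., 0., 1., 1.])
w = N @ x + q
# Conditions (19): x >= 0, Nx + q >= 0, and x^T (Nx + q) = 0.
print(np.all(x >= 0), np.all(w >= 0), np.isclose(x @ w, 0.0))   # expected: True True True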

5. Concluding remarks

In this Letter, we present a neural network model for solving monotone linearly constrained variational inequality problems. Six sufficient conditions are provided to ensure the asymptotic stability of the proposed neural network. Compared with the existing neural networks, the main advantage of the proposed network is that it does not need to estimate the Lipschitz constant or compute F(ū). Thus the new model is more suitable for hardware implementation. Since the proposed network can be used directly to solve linear and convex quadratic programming problems, nonlinear programming problems, complementarity problems, extended linear and quadratic problems, etc., it has great application potential. Our simulation results have demonstrated the convergence behavior and the characteristics of the proposed network.

Acknowledgements

The authors would like to thank the two anonymous reviewers for their comments and suggestions on an earlier version of this Letter.

References

[1] A. Bouzerdoum, T.R. Pattison, IEEE Trans. Neural Networks 4 (2) (1993) 293.
[2] A. Cichocki, R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, 1993.
[3] T.L. Friesz, D.H. Bernstein, N.J. Mehta, R.L. Tobin, S. Ganjlizadeh, Oper. Res. 42 (1994) 1120.
[4] M. Fukushima, Math. Prog. 53 (1992) 99.
[5] D.R. Han, H.K. Lo, J. Optim. Theor. Appl. 112 (3) (2002) 549.
[6] Q.M. Han, L.-Z. Liao, H.D. Qi, L.Q. Qi, J. Global Optim. 19 (2001) 363.
[7] P.T. Harker, J.S. Pang, Math. Prog. B 48 (1990) 161.
[8] B.S. He, Sci. China Ser. A 39 (1996) 395.
[9] B.S. He, L.-Z. Liao, J. Optim. Theor. Appl. 112 (2002) 111.
[10] B.S. He, J. Zhou, Appl. Math. Lett. 13 (2000) 123.
[11] M.P. Kennedy, L.O. Chua, IEEE Trans. Circuits Systems 35 (6) (1988) 554.
[12] D. Kinderlehrer, G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, 1980.
[13] L.-Z. Liao, H.D. Qi, L.Q. Qi, J. Comput. Appl. Math. 131 (1–2) (2001) 343.
[14] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[15] R.T. Rockafellar, SIAM J. Control Optim. 25 (3) (1987) 781.
[16] J.-J.E. Slotine, W. Li, Applied Nonlinear Control, Prentice–Hall, Englewood Cliffs, NJ, 1991.
[17] T.E. Smith, T.L. Friesz, D.H. Bernstein, Z.G. Suo, in: M.C. Ferris, J.S. Pang (Eds.), Complementarity and Variational Problems: State of the Art, SIAM, Philadelphia, 1997, p. 405.
[18] Y.S. Xia, J. Wang, IEEE Trans. Neural Networks 9 (6) (1998) 1311.
[19] Y.S. Xia, J. Wang, IEEE Trans. Automatic Control 46 (4) (2001) 635.

