
JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION doi:10.3934/jimo.2010.6.241
Volume 6, Number 1, February 2010, pp. 241–257

PENALTY-BASED SAA METHOD OF STOCHASTIC NONLINEAR COMPLEMENTARITY PROBLEMS

Ming-Zheng Wang a,b and M. Montaz Ali b

a School of Management, Dalian University of Technology,
Dalian 116024, Liaoning Province, China

b School of Computational and Applied Mathematics, University of the Witwatersrand,
Wits-2050, Johannesburg, South Africa

(Communicated by Xiaojun Chen)

Abstract. We consider a class of stochastic nonlinear complementarity problems. We first formulate the stochastic complementarity problem as a stochastic programming model. Based on this reformulation, we propose a penalty-based sample average approximation (in short, SAA) method for the stochastic complementarity problem and prove its convergence. Finally, we report some numerical test results to show the efficiency of our method.

1. Introduction. Equilibrium is a central concept in numerous disciplines including management science, operations research, economics and engineering. As one of the most important and powerful methodologies for the study of equilibrium problems, the deterministic complementarity problem, which is to find x ∈ ℜⁿ such that

x ≥ 0, F(x) ≥ 0, x^T F(x) = 0, (1)

has been investigated extensively in the literature; see Cottle et al. [5], Facchinei and Pang [6], Zhang [13] and the references therein.

However, in many practical problems, some elements of (1) may involve random factors or uncertainties, for instance the random demand from retailers in a supply chain. Thus, many practical problems can be modeled as the following stochastic complementarity problem: find x such that

x ≥ 0, F(x, ω) ≥ 0, x^T F(x, ω) = 0, for a. e. ω ∈ Ω, (2)

where Ω is the underlying sample space. The stochastic complementarity problem has been receiving much attention in the recent literature. Problem (2) may not have a solution in general, and so far there have been several ways to deal with it. Gurkan et al. [8] used an expectation to give

2000 Mathematics Subject Classification. 90C33, 90C30, 90C15.
Key words and phrases. Stochastic nonlinear complementarity problems, stochastic programming, sample average approximation, penalty method, convergence.
The first author is partially supported by the National High Technology Research and Development Program of China (2008AA04Z107), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, NRFC (10771026) and the Claude Leon Foundation in South Africa.


a single stochastic variational inequality containing the stochastic nonlinear complementarity problem, that is, to find a point x ∈ S such that

(y − x)^T E[F(x, ω)] ≥ 0 for all y ∈ S.

Chen and Fukushima [2] considered the following stochastic linear complementarity problem (SLCP):

x ≥ 0, M(ω)x + q(ω) ≥ 0, x^T (M(ω)x + q(ω)) = 0, a. e. ω ∈ Ω,

where for each ω, M(ω) ∈ ℜⁿˣⁿ and q(ω) ∈ ℜⁿ. The authors formulated the SLCP as a problem of minimizing an expected residual defined by an NCP function, which is referred to as the ERM method. Then, they employed a quasi-Monte Carlo method and gave some convergence results under suitable assumptions on the associated matrices. Further research can be found in Chen et al. [3] and Fang et al. [7]. Zhang and Chen [20] generalized the ERM method of [2] to nonlinear complementarity problems. Lin et al. [10] studied new restricted NCP functions and error bounds for stochastic nonlinear complementarity problems. Lin and Fukushima [11] proposed stochastic mathematical programs with equilibrium constraints as reformulations of stochastic nonlinear complementarity problems, and gave an algorithm to solve them in the case of discrete random variables; they did not, however, deal with the continuous random variable case. Lin [9] proposed another new stochastic mathematical program with equilibrium constraints reformulation for stochastic nonlinear complementarity problems and gave a penalty-based Monte Carlo algorithm for it.

A number of numerical methods have been proposed for nonlinear complementarity problems, but few can be applied directly to solve stochastic nonlinear complementarity problems because the expectation E[F(x, ω)] is not observable or its computation is very complex. Under these circumstances, new methods are needed. Until now, there have been only a few algorithms for solving stochastic nonlinear complementarity problems. Lin [9] proposed a penalty-based Monte Carlo algorithm to solve stochastic mathematical programs with equilibrium constraints reformulations of stochastic nonlinear complementarity problems. Wang and Ali [18] proposed an SAA method to solve the stochastic programming formulations of stochastic nonlinear complementarity problems.

In this paper, we investigate the stochastic nonlinear complementarity problem (2) from a new point of view. We first propose the stochastic programming reformulation for stochastic nonlinear complementarity problems. We then present a sample-average approximation-based penalty method for solving them based on this reformulation.

The SAA method and its variants, known under various names such as “stochastic counterpart method”, “sample-path method” and “simulated likelihood method”, have been discussed in the stochastic programming and statistics literature (see e.g., Bastin et al. [1], Gurkan et al. [8], Robinson [14], Ruszcynski and Shapiro [15], Shapiro [17]). In particular, Robinson [14] and Gurkan et al. [8] proposed the sample path optimization (SPO) method and showed that a sequence of solutions to the SPO problem converges to its true counterpart under moderate conditions. The SAA method is not new in the field of statistics. Shapiro first introduced this method to solve stochastic mathematical programs with equilibrium constraints [17]. Later, Meng and Xu [12], and Xu and Meng [19], further investigated the SAA method for stochastic mathematical programs with (nonsmooth)


equality constraints. These observations motivate us to solve the proposed stochastic complementarity problem by the SAA method.

The rest of this paper is organized as follows. Section 2 gives a new reformulation of the stochastic complementarity problem and its properties. In Section 3, we construct a penalty-based sample average approximation method and prove its convergence. In Section 4, numerical results are presented. Conclusions are drawn in Section 5.

2. Reformulation. In this section, we present the stochastic programming reformulation of problem (2) and its existence conditions. We know that the complementarity problem (2) can be represented equivalently as the following stochastic equation system with constraints

x ≥ 0, F(x, ω) ≥ 0, E[(∑_{i=1}^n x_i F_i(x, ω))²] = 0, for a. e. ω ∈ Ω. (3)

From calculus, it is easy to show that the point x∗ is a solution of problem (2) if and only if x∗ is a solution of problem (3); indeed, since the integrand is nonnegative, the expectation in (3) vanishes if and only if ∑_{i=1}^n x_i F_i(x, ω) = 0 for a. e. ω ∈ Ω. In order to solve problem (3), we reformulate it as the following stochastic programming problem:

min E[(∑_{i=1}^n x_i F_i(x, ω))²]
s. t. F(x, ω) ≥ 0, x ≥ 0, for a. e. ω ∈ Ω. (4)

Equivalently,

min E[(∑_{i=1}^n x_i F_i(x, ω))²]
s. t. E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0,
      x ≥ 0. (5)
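The expectations appearing in (4)-(5) are rarely available in closed form, but once iid draws of ω are at hand (cf. assumption (A1) below) they can be estimated by sample averages. A minimal Python sketch, assuming F is supplied as a function of (x, ω); the helper names are ours, not the paper's:

```python
import numpy as np

def saa_objective(x, F, samples):
    # Sample-average estimate of E[(sum_i x_i F_i(x, w))^2], the objective of (5).
    return np.mean([np.dot(x, F(x, w)) ** 2 for w in samples])

def saa_feasibility_gap(x, F, samples):
    # Sample-average estimate of E[sum_i min{F_i(x, w), 0}^2], the constraint
    # function of (5); it vanishes exactly when F(x, w) >= 0 on all samples.
    return np.mean([np.sum(np.minimum(F(x, w), 0.0) ** 2) for w in samples])
```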

Problem (5) is a simpler optimization problem than problem (4). The corresponding Lagrangian function of problem (5) is the following:

L(x, λ, µ) := E[(∑_{i=1}^n x_i F_i(x, ω))²] + µE[∑_{i=1}^n (min{F_i(x, ω), 0})²] − λ^T x,

where µ ∈ ℜ¹ and λ ∈ ℜⁿ.
A basic difficulty in solving the stochastic optimization problem (5) is that the multidimensional integral cannot be computed with high accuracy for dimension d, say, greater than 5. In practice, many stochastic optimization problems cannot be solved well because either the distributions of some random vectors are unknown or the involved expectations are unobservable. Our aim in this paper is to propose an effective algorithm for solving stochastic complementarity problems under the assumption that the distributions or expectations involved can only be accessed through sampled data. To this end, the following assumptions will be made throughout this paper.

Assumptions:
(A1): It is possible to generate independently and identically distributed (in short, iid) samples ω1, ω2, · · · of realizations of the random vector ω ∈ Ω.
(A2): For some point x ∈ ℜ+, the functions E[F_i²(x, ·)], i = 1, 2, · · · , n, are finite.


(A3): There exists an F-measurable function κ : Ω → ℜ+ such that E[κ²(ω)] is finite and
‖F(x, ω) − F(y, ω)‖ ≤ κ(ω)‖x − y‖
for all x, y ∈ ℜ+ and for every ω ∈ Ω.

Define the feasible set of problem (5) by S = {x | E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0, x ≥ 0}, and denote the relative interior of a set B by ri B.

Theorem 2.1. Assume that (A2)–(A3) hold. Then x∗ solves the complementarity problem (2) if and only if x∗ globally solves problem (5) and E[(∑_{i=1}^n x∗_i F_i(x∗, ω))²] = 0.

Proof. =⇒ (Necessity) Suppose that x∗ is a solution of the complementarity problem (2); we show that x∗ is also an optimal solution of (5). Since x∗ is a solution of the complementarity problem (2), one has that
x∗ ≥ 0, F(x∗, ω) ≥ 0, (x∗)^T F(x∗, ω) = 0 for a. e. ω ∈ Ω.

So one has further that
x∗ ≥ 0, min{F_i(x∗, ω), 0} = 0, ∑_{i=1}^n x∗_i F_i(x∗, ω) = 0, for a. e. ω ∈ Ω.

Hence, one has that
x∗ ≥ 0, E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0, E[(∑_{i=1}^n x∗_i F_i(x∗, ω))²] = 0.

We know from the above that x∗ is also a feasible point of the optimization problem (5). Since E[(∑_{i=1}^n x_i F_i(x, ω))²] ≥ 0 for arbitrary x ∈ ℜ+, this shows that x∗ is an optimal solution of problem (5).

⇐= (Sufficiency) Suppose that x is an optimal solution of problem (5) and E[(∑_{i=1}^n x_i F_i(x, ω))²] = 0. We will show that x is also a solution of the complementarity problem (2). Since x is an optimal solution of problem (5), x is in particular a feasible solution of problem (5), that is, one has that
x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0.

Since x satisfies E[(∑_{i=1}^n x_i F_i(x, ω))²] = 0, one has that
x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0, E[(∑_{i=1}^n x_i F_i(x, ω))²] = 0.

Equivalently, one has that
x ≥ 0, ∑_{i=1}^n (min{F_i(x, ω), 0})² = 0, (∑_{i=1}^n x_i F_i(x, ω))² = 0, for a. e. ω ∈ Ω.

One has further that
x ≥ 0, min{F_i(x, ω), 0} = 0, ∑_{i=1}^n x_i F_i(x, ω) = 0, for a. e. ω ∈ Ω,


that is, one has that
x ≥ 0, F(x, ω) ≥ 0, x^T F(x, ω) = 0, for a. e. ω ∈ Ω.
This shows that x is a solution of the complementarity problem (2).

3. The SAA method and its convergence.

3.1. The existence of solutions. With the help of a penalty technique, we obtain the following approximation of problem (5):

min_x E[(∑_{i=1}^n x_i F_i(x, ω))²] + ρE[∑_{i=1}^n (min{F_i(x, ω), 0})²]
s. t. x ≥ 0, (6)

where ρ > 0 is a penalty parameter.
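The penalized objective of (6) admits the same sample-average treatment as before; a small illustrative sketch (again with our own naming, not the authors' code):

```python
import numpy as np

def penalized_objective(x, F, samples, rho):
    # Sample-average estimate of the objective of (6):
    # E[(x^T F(x, w))^2] + rho * E[sum_i min{F_i(x, w), 0}^2].
    total = 0.0
    for w in samples:
        Fx = F(x, w)
        total += np.dot(x, Fx) ** 2 + rho * np.sum(np.minimum(Fx, 0.0) ** 2)
    return total / len(samples)
```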

Definition 3.1. A point x∗ is said to be stationary for the optimization problem (5) if there exist Lagrangian multiplier vectors λ∗ and µ∗ such that
∇_x L(x∗, µ∗, λ∗) = 0, (7)
0 ≤ λ∗ ⊥ x∗ ≥ 0, (8)
E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0. (9)

Theorem 3.2. Assume that assumptions (A2)–(A3) hold and that
ri S ∩ ri dom(E[(∑_{i=1}^n x_i F_i(x, ω))²]) ≠ ∅.
Assume that the function F(·, ω) is continuously differentiable in x for every ω ∈ Ω. If there exist Lagrangian multipliers λ∗ and µ∗ such that

λ∗ = 2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) F(x∗, ω)] + 2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) ∇F(x∗, ω)x∗] + 2µ∗E[∇F(x∗, ω) min{F(x∗, ω), 0}], (10)
0 ≤ λ∗ ⊥ x∗ ≥ 0, (11)
E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0, (12)

then x∗ is a stationary point of problem (5).

Proof. It is enough to prove that formula (7) holds. Under the assumption that ri S ∩ ri dom(E[(∑_{i=1}^n x_i F_i(x, ω))²]) ≠ ∅, we know that problem (5) has at least one stationary point, following the proof of Proposition 32 in [15]. By the assumption on the function F(x, ω), the function F(x, ω) is Lipschitz in x for every ω ∈ Ω, that is, there exists a nonnegative F-measurable function K(ω) such that ‖F(x, ω) − F(y, ω)‖ ≤ K(ω)‖x − y‖ for any x and y and for every ω ∈ Ω. Of course, one has that |F_i(x, ω) − F_i(y, ω)| ≤ K(ω)‖x − y‖, i = 1, 2, · · · , n, for any x and y and for every ω ∈ Ω. Since

min{F_i(y, ω), 0} − min{F_i(x, ω), 0} =
  0                        if F_i(y, ω) ≥ 0, F_i(x, ω) ≥ 0,
  −F_i(x, ω)               if F_i(y, ω) ≥ 0, F_i(x, ω) < 0,
  F_i(y, ω)                if F_i(y, ω) < 0, F_i(x, ω) ≥ 0,
  F_i(y, ω) − F_i(x, ω)    if F_i(y, ω) < 0, F_i(x, ω) < 0,

it follows that
|min{F_i(y, ω), 0} − min{F_i(x, ω), 0}| ≤ |F_i(y, ω) − F_i(x, ω)| ≤ K(ω)‖y − x‖.
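The elementary bound just used, |min{a, 0} − min{b, 0}| ≤ |a − b|, is easy to spot-check numerically; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=100_000)
b = rng.normal(size=100_000)
lhs = np.abs(np.minimum(a, 0.0) - np.minimum(b, 0.0))
assert np.all(lhs <= np.abs(a - b) + 1e-12)  # the bound holds pairwise
```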

This shows that the function min{F_i(x, ω), 0} is also globally Lipschitz continuous with respect to x. So we know that min{F_i(x, ω), 0} is Lipschitz continuous on the whole compact neighborhood N(x∗, δ) of the point x∗. From Proposition 2.3.13 in [4], we know that the function (min{F_i(x, ω), 0})² is also Lipschitz continuous on the whole compact neighborhood N(x∗, δ). According to assumption (A2), we know that there exists a positive scalar b(x∗) such that ‖E[F(x∗, ω)]‖ ≤ b(x∗). Since the function F(·, ω) is continuously differentiable in x for every ω ∈ Ω, one has that

(x∗_i + (∆x)_i)F_i(x∗ + ∆x, ω) − x∗_i F_i(x∗, ω)
= (x∗_i + (∆x)_i)[F_i(x∗, ω) + ∇^T F_i(x∗, ω)∆x + o(‖∆x‖)] − x∗_i F_i(x∗, ω)
= x∗_i [∇^T F_i(x∗, ω)∆x] + F_i(x∗, ω)(∆x)_i + (∆x)_i [∇^T F_i(x∗, ω)∆x] + o(‖∆x‖).

One has further that

|(x∗_i + (∆x)_i)F_i(x∗ + ∆x, ω) − x∗_i F_i(x∗, ω)|
= |x∗_i [∇^T F_i(x∗, ω)∆x] + F_i(x∗, ω)(∆x)_i + (∆x)_i [∇^T F_i(x∗, ω)∆x] + o(‖∆x‖)|
≤ (|x∗_i| · ‖∇^T F_i(x∗, ω)‖ + |F_i(x∗, ω)| + ‖∇^T F_i(x∗, ω)‖ · o(‖∆x‖)/‖∆x‖)‖∆x‖
≤ (|x∗_i| · K(ω) + b(x∗) + K(ω) · o(‖∆x‖)/‖∆x‖)‖∆x‖.

This shows that the function x_i F_i(x, ω) is Lipschitz with respect to x on the whole compact neighborhood N(x∗, δ) of x∗. Then the function ∑_{i=1}^n x_i F_i(x, ω) is also Lipschitz with respect to x on N(x∗, δ), and from Proposition 2.3.13 in [4], the function (∑_{i=1}^n x_i F_i(x, ω))² is Lipschitz continuous with respect to x on the whole neighborhood N(x∗, δ) of x∗. From Proposition 5.1 in Shapiro [16], we know that

∇(E[(∑_{i=1}^n x_i F_i(x, ω))²] + µ∗E[∑_{i=1}^n (min{F_i(x, ω), 0})²])|_{x=x∗}
= 2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) F(x∗, ω)] + 2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) ∇F(x∗, ω)x∗] + 2µ∗E[∇F(x∗, ω) min{F(x∗, ω), 0}]. (13)


The derivative ∇L(x∗, λ∗, µ∗) = 0 if and only if
∇(E[(∑_{i=1}^n x_i F_i(x, ω))²] + µ∗E[∑_{i=1}^n (min{F_i(x, ω), 0})²] − λ∗^T x)|_{x=x∗} = 0.

By using (13), one has further that
0 = 2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) F(x∗, ω)] + 2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) ∇F(x∗, ω)x∗] + 2µ∗E[∇F(x∗, ω) min{F(x∗, ω), 0}] − λ∗,

that is,
2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) F(x∗, ω)] + 2E[(∑_{i=1}^n x∗_i F_i(x∗, ω)) ∇F(x∗, ω)x∗] + 2µ∗E[∇F(x∗, ω) min{F(x∗, ω), 0}] = λ∗.
This completes the proof of the theorem.

Theorem 3.3. Assume that (A2)–(A3) hold and ri S ∩ ri dom(E[(∑_{i=1}^n x_i F_i(x, ω))²]) ≠ ∅. If x∗ is a global minimizer of problem (6) for any ρ > 0 large enough, then x∗ is a global minimizer of problem (5).

Proof. Assume that x∗ is a global minimizer of problem (6) for any ρ > 0 large enough; we will show that x∗ is also a global minimizer of problem (5).
Under the assumption that ri S ∩ ri dom(E[(∑_{i=1}^n x_i F_i(x, ω))²]) ≠ ∅, problem (5) has at least one stationary point, following the proof of Proposition 32 in [15]. According to the assumptions of this theorem, one has that
x∗ ≥ 0, E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0. (14)

Suppose, by contradiction, that there exists an i₀ ∈ {1, 2, · · · , n} such that
E[(min{F_{i₀}(x∗, ω), 0})²] > 0.
Then one has that
E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] > 0.

Since the set {x | x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0} ≠ ∅, we know that the optimal value of problem (5) is finite. According to the assumption, one has that

E[(∑_{i=1}^n x∗_i F_i(x∗, ω))²] + ρE[∑_{i=1}^n (min{F_i(x∗, ω), 0})²]
= min_g {E[(∑_{i=1}^n x_i F_i(x, ω))²] + ρE[∑_{i=1}^n (min{F_i(x, ω), 0})²] | x ≥ 0}
≤ min_g {E[(∑_{i=1}^n x_i F_i(x, ω))²] + ρE[∑_{i=1}^n (min{F_i(x, ω), 0})²] | x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0}
≤ min_g {E[(∑_{i=1}^n x_i F_i(x, ω))²] | x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0}, (15)

where min_g represents the global minimum. Letting ρ → +∞ yields a contradiction, since the left-hand side then tends to +∞ (because E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] > 0) while the right-hand side is finite. This proves that (14) holds.

Then we know that x∗ is a feasible point of problem (5), and one has that
E[(∑_{i=1}^n x∗_i F_i(x∗, ω))²] ≥ min_g {E[(∑_{i=1}^n x_i F_i(x, ω))²] | x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0}. (16)

Since x∗ is a global minimizer of problem (6), by (15) one has that
E[(∑_{i=1}^n x∗_i F_i(x∗, ω))²]
≤ E[(∑_{i=1}^n x∗_i F_i(x∗, ω))²] + ρE[∑_{i=1}^n (min{F_i(x∗, ω), 0})²]
≤ min_g {E[(∑_{i=1}^n x_i F_i(x, ω))²] | x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0}. (17)

According to (16) and (17), one has that
E[(∑_{i=1}^n x∗_i F_i(x∗, ω))²] = min_g {E[(∑_{i=1}^n x_i F_i(x, ω))²] | x ≥ 0, E[∑_{i=1}^n (min{F_i(x, ω), 0})²] = 0},
that is, x∗ is a global minimizer of problem (5). The proof is completed.

Theorem 3.4. Assume that assumptions (A2)–(A3) hold and that the function F(·, ω) is continuously differentiable in x for a. e. ω ∈ Ω. If x∗ is a stationary point of problem (5) with Lagrangian multipliers µ∗ and λ∗, then x∗ is also a stationary point of problem (6) for any ρ ≥ |µ∗|. Conversely, assume that ri S ∩ ri dom(E[(∑_{i=1}^n x_i F_i(x, ω))²]) ≠ ∅; if x∗ is a stationary point of problem (6) and E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0, then x∗ is a stationary point of problem (5).

Proof.
(i): Assume that x∗ is a stationary point of problem (5). We will show that x∗ must also be a stationary point of problem (6) for any ρ > 0 large enough.
Note that problem (6) can be equivalently reformulated as the following constrained optimization problem:

min_{(x,z)} E[(∑_{i=1}^n x_i F_i(x, ω))²] + ρz
s. t. z ≥ E[∑_{i=1}^n (min{F_i(x, ω), 0})²],
      z ≥ −E[∑_{i=1}^n (min{F_i(x, ω), 0})²],
      x ≥ 0, (18)

where ρ > 0 is a penalty parameter. Define the Lagrangian function of problem (18) by

L(x, z, l, t, ν) = E[(∑_{i=1}^n x_i F_i(x, ω))²] + ρz + l(E[∑_{i=1}^n (min{F_i(x, ω), 0})²] − z) − t(z + E[∑_{i=1}^n (min{F_i(x, ω), 0})²]) − ν^T x.

If x∗ is a stationary point of problem (6), there exists a point z∗ such that (x∗, z∗) is a stationary point of problem (18). According to Definition 3.1, we know that there exist Lagrangian multipliers l∗, t∗ and ν∗ such that

∇_{(x,z)} L(x, z, l∗, t∗, ν∗)|_{(x,z)=(x∗,z∗)} = 0,
0 ≤ l∗ ⊥ (E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] − z∗) ≥ 0,
0 ≤ t∗ ⊥ (z∗ + E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²]) ≥ 0,
0 ≤ ν∗ ⊥ x∗ ≥ 0,

that is,
2E[(∑_{j=1}^n x∗_j F_j(x∗, ω)) F(x∗, ω)] + 2E[(∑_{j=1}^n x∗_j F_j(x∗, ω)) ∇F(x∗, ω)x∗]
+ 2(l∗ − t∗)E[∇F(x∗, ω) min{F(x∗, ω), 0}] − ν∗ = 0, (19)
ρ − l∗ − t∗ = 0, (20)
0 ≤ l∗ ⊥ (E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] − z∗) ≥ 0, (21)
0 ≤ t∗ ⊥ (z∗ + E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²]) ≥ 0, (22)
0 ≤ ν∗ ⊥ x∗ ≥ 0. (23)

For a given ρ ≥ |µ∗|, we take λ∗ := ν∗, l∗ = (ρ + µ∗)/2, t∗ = (ρ − µ∗)/2 and z∗ = E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0. It follows that


0 ≤ λ∗ ⊥ x∗ ≥ 0,
E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0,
λ∗ = 2E[(∑_{j=1}^n x∗_j F_j(x∗, ω)) F(x∗, ω)] + 2E[(∑_{j=1}^n x∗_j F_j(x∗, ω)) ∇F(x∗, ω)x∗] + 2(l∗ − t∗)E[∇F(x∗, ω) min{F(x∗, ω), 0}].

This implies that (10)-(12) hold, and shows that x∗ is also a stationary point of problem (6).

(ii): Assume that x∗ is a stationary point of problem (6) and that
E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0.
We will show that x∗ must also be a stationary point of problem (5). Under the assumption that ri S ∩ ri dom(E[(∑_{i=1}^n x_i F_i(x, ω))²]) ≠ ∅, problem (5) has at least one stationary point, following the proof of Proposition 32 in [15]. Since x∗ is a stationary point of problem (6), according to Definition 3.1 and the proof of item (i), we know that there exist multipliers l∗, t∗, ν∗ such that (19)-(23) hold.

Let µ∗ = l∗ − t∗ in (19) and λ∗ := ν∗ in (23). It follows that
2E[(∑_{j=1}^n x∗_j F_j(x∗, ω)) F(x∗, ω)] + 2E[(∑_{j=1}^n x∗_j F_j(x∗, ω)) ∇F(x∗, ω)x∗] + 2µ∗E[∇F(x∗, ω) min{F(x∗, ω), 0}] = λ∗,
0 ≤ λ∗ ⊥ x∗ ≥ 0.
This implies that (10)-(12) hold. This shows that x∗ is also a stationary point of problem (5).

The proof of the theorem is completed.

3.2. Numerical scheme for sample average approximation. In this subsection, we present a sample-based approximation of the optimization problem (6) and the related convergence theory. Since our scheme is based on samples, we present a penalty-based sample average approximation method that asymptotically guarantees convergence. We begin with the sample-based approximation of (6).

We use the sample average approximation method for solving problem (6) and obtain the following problem:

min_x (1/N) ∑_{j=1}^N [(∑_{i=1}^n x_i F_i(x, ω_j))² + ρ ∑_{i=1}^n (min{F_i(x, ω_j), 0})²]
s. t. x ≥ 0, (24)


where ρ > 0 is a penalty parameter. Define the Lagrangian function LN(x, u; ρ) of problem (24) as follows:

LN(x, u; ρ) = (1/N) ∑_{j=1}^N [(∑_{i=1}^n x_i F_i(x, ω_j))² + ρ ∑_{i=1}^n (min{F_i(x, ω_j), 0})²] − u^T x.

We will call this method for solving (5) the penalty-based sample average approximation method, denoted by PBSAAM. In the following, we give the PBSAAM algorithm for solving problem (5).

Algorithm 1: The PBSAAM Algorithm
Step 1: Choose ρk > 0 and Nk > 0. Set k := 0.
Step 2: Use an optimization solver to minimize (24), each function evaluation being based on the iid samples ω1, ω2, · · · , ωNk. Record the minimizer xk(Nk) and the optimal value f(xk(Nk)) and go to Step 3.
Step 3: If the termination criterion is satisfied, stop; otherwise, choose Nk+1 ≥ Nk and ρk+1 ≥ ρk, set k := k + 1, and return to Step 2.
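The experiments in Section 4 implement this loop in Matlab with fmincon; what follows is only an illustrative Python sketch of the same scheme, with scipy.optimize.minimize and bound constraints standing in for fmincon. The sampler argument, all names, and the fixed iteration budget replacing the unspecified termination criterion of Step 3 are our assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def pbsaam(F, n, sampler, N0=100, rho0=50.0, iters=4):
    # Penalty-based SAA loop: grow the sample size N_k and penalty rho_k,
    # warm-starting each subproblem (24) at the previous minimizer.
    x, N, rho = np.zeros(n), N0, rho0
    for _ in range(iters):
        samples = [sampler() for _ in range(N)]

        def obj(x, samples=samples, rho=rho):
            # SAA objective of (24): mean over samples of
            # (x^T F)^2 + rho * sum_i min{F_i, 0}^2.
            total = 0.0
            for w in samples:
                Fx = F(x, w)
                total += np.dot(x, Fx) ** 2 + rho * np.sum(np.minimum(Fx, 0.0) ** 2)
            return total / len(samples)

        res = minimize(obj, x, bounds=[(0.0, None)] * n)  # enforces x >= 0
        x = res.x
        # Update rule used in the experiments of Section 4.
        N, rho = min(10 * N, 10**5), min(10 * rho, 10**5)
    return x
```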

Since the optimal multiplier set of problem (24) is bounded, without loss of generality we assume that there exists a bounded set C that contains all optimal multipliers.

Theorem 3.5. Assume that (A1) and (A3) hold and that the function F(x, ω) is continuously differentiable in x ∈ ℜⁿ for every ω ∈ Ω. Assume that ri S ∩ ri dom(E[(∑_{i=1}^n x_i F_i(x, ω))²]) ≠ ∅. Suppose that the PBSAAM Algorithm generates a sequence {xk(Nk)} of stationary points of problem (24) with ρ = ρk, and that the sequence {ρk} increasingly converges to some ρ large enough. Suppose that x∗ is an accumulation point of the sequence {xk(Nk)} with probability one and that E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0. Assume that (A2) holds at x∗ and that the function F(x, ω) is dominated by an integrable function on some compact neighborhood of x∗. Then x∗ is a stationary point of problem (5).

Proof. Since xk(Nk) is a stationary point of problem (24) with ρ = ρk, there exists, from (19)-(23), a multiplier vector uk(Nk) such that

uk(Nk) = (1/Nk) ∑_{j=1}^{Nk} [2(∑_{i=1}^n xk_i(Nk) F_i(xk(Nk), ω_j)) F(xk(Nk), ω_j)
+ 2(∑_{i=1}^n xk_i(Nk) F_i(xk(Nk), ω_j)) ∇F(xk(Nk), ω_j) xk(Nk)
+ 2(lk(Nk) − tk(Nk)) ∇F(xk(Nk), ω_j) min{F(xk(Nk), ω_j), 0}], (25)

0 ≤ uk(Nk) ⊥ xk(Nk) ≥ 0. (26)

According to the assumption that the sequence {ρk} increasingly converges to ρ, the sequence {lk(Nk) + tk(Nk)} is bounded (by (20), lk(Nk) + tk(Nk) = ρk). Since lk(Nk) ≥ 0 and tk(Nk) ≥ 0, it is easy to see that the sequence {lk(Nk) − tk(Nk)} is also bounded. Without loss of generality, we assume that lim_{k→∞} [lk(Nk) − tk(Nk)] = l∗ − t∗.


Choose a sequence {γm} of positive numbers converging to zero. Define Vm = {x ≥ 0 | ‖x − x∗‖ ≤ γm} and

δ_i^m(ω) = sup_{x∈Vm} |F_i(x, ω) − F_i(x∗, ω)|,
∆_i^m(ω) = sup_{x∈Vm} |∇F_i(x, ω) − ∇F_i(x∗, ω)|.

According to assumptions (A1)-(A3), we know that
lim_{m→∞} E[δ_i^m(ω)] = E[lim_{m→∞} δ_i^m(ω)] = 0.

According to Proposition 7 in Ruszcynski and Shapiro [15], we know that (1/N) ∑_{j=1}^N δ_i^m(ω_j) converges to E[δ_i^m(ω)] w.p. 1 uniformly in some compact neighborhood of x∗.

According to the assumptions of this theorem, we know that the functions ∇F_i(x, ω) are continuous in x and uniformly bounded in every compact neighborhood of x∗ for every ω ∈ Ω. One has that
lim_{m→∞} E[∆_i^m(ω)] = E[lim_{m→∞} ∆_i^m(ω)] = 0.

According to Proposition 7 in Ruszcynski and Shapiro [15], we know that (1/N) ∑_{j=1}^N ∆_i^m(ω_j) converges to E[∆_i^m(ω)] w.p. 1 uniformly in some compact neighborhood of x∗.

Define
A(xk(Nk), ω_j) = ∑_{i=1}^n xk_i(Nk) F_i(xk(Nk), ω_j), A(x∗, ω_j) = ∑_{i=1}^n x∗_i F_i(x∗, ω_j).

Similarly to the above, using these results and Proposition 7 in Ruszcynski and Shapiro [15], we can further show that the three functions
(1/N) ∑_{j=1}^N A(x, ω_j)F(x, ω_j), (1/N) ∑_{j=1}^N A(x, ω_j)∇F(x, ω_j)x and (1/N) ∑_{j=1}^N (l − t)∇F(x, ω_j) min{F(x, ω_j), 0}
converge to the three functions E[A(x∗, ω)F(x∗, ω)], E[A(x∗, ω)∇F(x∗, ω)x∗] and E[(l∗ − t∗)∇F(x∗, ω) min{F(x∗, ω), 0}], respectively, w.p. 1 uniformly in some compact neighborhood of x∗. According to (25), one has further that the sequence {uk(Nk)} converges to u∗ uniformly.

Taking the limit k → ∞ on both sides of formula (25), one has that

lim_{k→∞} uk(Nk) = E[2A(x∗, ω)F(x∗, ω) + 2A(x∗, ω)∇F(x∗, ω)x∗ + 2(l∗ − t∗)∇F(x∗, ω) min{F(x∗, ω), 0}]. (27)

Letting u∗ = lim_{k→∞} uk(Nk), one has that

u∗ = E[2A(x∗, ω)F(x∗, ω) + 2A(x∗, ω)∇F(x∗, ω)x∗ + 2(l∗ − t∗)∇F(x∗, ω) min{F(x∗, ω), 0}].


One has further that
E[2A(x∗, ω)F(x∗, ω) + 2A(x∗, ω)∇F(x∗, ω)x∗ + 2(l∗ − t∗)∇F(x∗, ω) min{F(x∗, ω), 0}] − u∗ = 0.
Using (27) and taking the limit on both sides of formula (26) as k → ∞, one has that
0 ≤ u∗ ⊥ x∗ ≥ 0. (28)

According to the definition of LN(x, u; ρ), one has that

∇LN(x, u; ρ) = (1/Nk) ∑_{j=1}^{Nk} [2(∑_{i=1}^n xk_i(Nk) F_i(xk(Nk), ω_j)) F(xk(Nk), ω_j)
+ 2(∑_{i=1}^n xk_i(Nk) F_i(xk(Nk), ω_j)) ∇F(xk(Nk), ω_j) xk(Nk)
+ 2(lk(Nk) − tk(Nk)) ∇F(xk(Nk), ω_j) min{F(xk(Nk), ω_j), 0}] − u.

Using the above results and Proposition 7 in Ruszcynski and Shapiro [15], we know that the function ∇LNk(x, u; ρ) converges to the function ∇L(x∗, u∗; ρ) w.p. 1 uniformly in some compact neighborhood of (x∗, u∗) for every fixed ρ.

Since ρ = lim_{k→∞} ρk and the functions ∇LNk(x, u; ρ) and ∇L(x∗, u∗; ρ) are linear with respect to ρ, the function ∇LNk(x, u; ρk) converges to the function ∇L(x∗, u∗; ρk) for k large enough in some compact neighborhood of (x∗, u∗). In addition, one has that E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0. Further, according to Proposition 19 in Ruszcynski and Shapiro [15], x∗ is a stationary point of problem (5). Thus, the proof of this theorem is completed.

Remark 1. Under the assumptions of Theorem 3.5, {xk(Nk)} is the sequence of global minimizers of problem (6) with ρ = ρk. Therefore, if the condition
E[∑_{i=1}^n (min{F_i(x∗, ω), 0})²] = 0
is deleted, the conclusion that x∗ is also a global optimizer of problem (5) still holds.

4. Numerical results. In this section, we present numerical results obtained by the PBSAAM method. We have tested our method on three examples.

In our experiments, we set the initial values of Nk and ρk as N1 = 100 and ρ1 = 50, respectively. Then, we employed the random number generator unifrnd in Matlab 7.5 to generate independently and identically distributed random samples ω1, ω2, · · · , ωNk from Ω. Because problems (24) can be solved by sophisticated deterministic optimization methods, for example Newton methods and trust region methods, we solved problems (24) with N = Nk and ρ = ρk by the solver fmincon in Matlab 7.5 to obtain the approximate minimizer xk(Nk). The initial point was x0 = (0, · · · , 0)^T. The obtained solution xk(Nk) was used as the starting point in the next iteration. In addition, the parameters were updated by Nk+1 := min{10Nk, 10⁵} and ρk+1 := min{10ρk, 10⁵}. Obj denotes the value of the objective function of problem (24) at xk(Nk). We define the safety, or empirical reliability, of problem (2) as the probability for the solution xk(Nk) to be feasible for the constraints {x | F_i(x, ω) ≥ 0, i = 1, 2, · · · , n}, denoted by reliab(xk(Nk)), where

reliab(xk(Nk)) = ∏_{i=1}^n P{ω ∈ Ω | F_i(xk(Nk), ω) ≥ 0}.
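Given a candidate solution, reliab(·) can be estimated by Monte Carlo; a hedged sketch (our own helper, not the authors' code), assuming ω is drawn uniformly from [0, 1] as in the examples below:

```python
import numpy as np

def estimate_reliab(x, F, n_samples=100_000, seed=0):
    # Monte Carlo estimate of reliab(x) = prod_i P{w : F_i(x, w) >= 0}.
    rng = np.random.default_rng(seed)
    ws = rng.uniform(0.0, 1.0, size=n_samples)
    feasible = np.array([F(x, w) >= 0.0 for w in ws])  # shape (n_samples, n)
    return float(np.prod(feasible.mean(axis=0)))
```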


The numerical results shown in Tables 1-3 reveal that our proposed method was able to successfully solve the problems considered.

Example 1 (Lin [9]). Consider the stochastic complementarity problem (2) in which ω is uniformly distributed on Ω = [0, 1] and F : ℜ³ × Ω → ℜ³ is given by

F(x, ω) = (x1 − ωx2 + 3 − 2ω, −ωx1 + 2x2 + ωx3 − 2 − ω, ωx2 + 3x3 − 3 − ω)^T.

This problem has a unique solution x∗ = (0, 1, 1)^T for each ω ∈ Ω. The optimal value of the approximation problem (24) with Nk and ρk corresponding to this example is (approximately) zero, as shown in Table 1.

Table 1: The computational results for Example 1

Nk    ρk       xk(Nk)                     Obj          reliab(xk(Nk))
10²   5 × 10   (0.0002, 1.0000, 1.0000)   1.0685e-006  100%
10³   5 × 10²  (0.0001, 1.0000, 1.0000)   6.1115e-008  100%
10⁴   5 × 10³  (0.0001, 1.0000, 1.0000)   5.7217e-008  100%
10⁵   5 × 10⁴  (0.0000, 1.0000, 1.0000)   9.2872e-028  100%
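For concreteness, Example 1's mapping can be written in a few lines and passed to a PBSAAM-style solver such as the Python sketch following Algorithm 1; the snippet below (ours, for illustration only) verifies the reported solution x∗ = (0, 1, 1)^T at a few sample points:

```python
import numpy as np

def F_ex1(x, w):
    # The mapping F of Example 1.
    return np.array([
        x[0] - w * x[1] + 3.0 - 2.0 * w,
        -w * x[0] + 2.0 * x[1] + w * x[2] - 2.0 - w,
        w * x[1] + 3.0 * x[2] - 3.0 - w,
    ])

x_star = np.array([0.0, 1.0, 1.0])
for w in (0.0, 0.3, 0.7, 1.0):
    Fx = F_ex1(x_star, w)
    # Feasibility and complementarity hold up to floating-point error.
    assert np.all(Fx >= -1e-12) and abs(np.dot(x_star, Fx)) < 1e-12
```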

Example 2 (Lin [9]). Consider the stochastic complementarity problem (2) in which ω is uniformly distributed on Ω = [0, 1] and F : ℜ² × Ω → ℜ² is given by

F(x, ω) = (x1 + ωx2 − 2 + ω, ωx1 + 2x2 + 1 + ω)^T.

This problem has no common solution for all ω ∈ Ω, as has been proved by Lin [9]. Note that (4) corresponding to this example can be rewritten as

min E[(x1(x1 + ωx2 − 2 + ω) + x2(ωx1 + 2x2 + 1 + ω))²]
s. t. x1 + ωx2 − 2 + ω ≥ 0,
      ωx1 + 2x2 + 1 + ω ≥ 0,
      x ≥ 0, for a. e. ω ∈ [0, 1]. (29)

In fact, the constraint ωx1 + 2x2 + 1 + ω > 0 holds for each ω ∈ [0, 1] and x ≥ 0, so one might delete this constraint from the above minimization problem. However, we did not delete this redundant constraint when we coded our Matlab program, in order to show that our method can deal with optimization problems with redundant constraints. The numerical results of the approximation problem (24) with Nk and ρk corresponding to this example are shown in Table 2.

Table 2: The computational results for Example 2

Nk    ρk       xk(Nk)             Obj     reliab(xk(Nk))
10²   5 × 10   (1.7370, 0.0000)   0.5498  73.70%
10³   5 × 10²  (1.8975, 0.0000)   0.9669  89.75%
10⁴   5 × 10³  (1.9646, 0.0000)   1.1974  96.46%
10⁵   5 × 10⁴  (1.9907, 0.0000)   1.2879  99.07%


Example 3. Consider the stochastic nonlinear complementarity problem (2) in which ω is uniformly distributed on Ω = [0, 1] and F : ℜ³ × Ω → ℜ³ is given by

F(x, ω) = (x1² − ωx2 + 3 − 2ω, −ωx1 + 2x2² + ωx3 − 2 − ω, ωx2 + 3x3² − 3 − ω)^T.

This example is constructed by us. This problem has a solution x∗ = (0, 1, 1)^T for each ω ∈ Ω. The numerical results of the approximation problem (24) with Nk and ρk corresponding to this example are shown in Table 3.

Table 3: The computational results for Example 3

Nk    ρk       xk(Nk)                     Obj          reliab(xk(Nk))
10²   5 × 10   (0.0000, 0.9994, 1.0013)   2.2868e-005  100%
10³   5 × 10²  (0.0004, 1.0002, 1.0000)   8.3262e-007  100%
10⁴   5 × 10³  (0.0000, 1.0000, 1.0000)   1.2324e-008  100%
10⁵   5 × 10⁴  (0.0000, 1.0000, 1.0000)   9.3156e-028  100%

Tables 1-3 present the solutions xk(Nk) of problems (24) for Nk = 10², 10³, 10⁴, 10⁵ and ρk = 50, 500, 5000, 50000, respectively, along with the corresponding objective function values, Obj, and the empirical reliability, reliab(xk(Nk)).

Examples 1 and 2 appeared as test examples for the method proposed for stochastic nonlinear complementarity problems in Lin [9]. Our proposed method solves these examples well, but the solutions for Example 2 presented by us are different from those of Lin [9]. We present the results of Lin [9] for Example 2 in Table 4.

Table 4: The computational results for Example 2 presented by Lin [9]

Nk    ρk    xk(Nk)             Obj     reliab(xk(Nk))
10²   10²   (1.1153, 0.0000)   0.7905  11.53%
10³   10³   (1.0329, 0.0000)   0.4175  3.29%
10⁴   10⁴   (1.0027, 0.0000)   0.3242  0.27%
10⁵   10⁵   (1.0007, 0.0000)   0.3301  0.07%

Under the definition of reliability reliab(x), the reliabilities of the solutions for Example 2 presented by Lin [9] are close to zero, see Table 4. Under the same definition, the reliabilities of the solutions for Example 2 obtained by us are greater than 0.89, except for the case N1 = 100, ρ1 = 50. Hence, we greatly improve the feasibility of the constraints {x | x ≥ 0, F(x, ω) ≥ 0} at the solutions compared with those presented by Lin [9], but we obtained higher optimal objective function values than those of Lin [9].

From the above analysis of Examples 1-3, our preliminary numerical results indicate that the proposed stochastic programming formulation and the PBSAAM method yield a reasonable solution of the stochastic nonlinear complementarity problem (2). In particular, our method has desirable properties regarding the reliability of the random demand requirement constraint: P{ω ∈ Ω | F(x, ω) ≥ 0} is greater than 0.89 in all cases except the case N1 = 100, ρ1 = 50 in Example 2.


From the numerical results presented by Chen and Fukushima [2] and Zhang and Chen [20], we know that the solution of the ERM formulation has higher reliability and delivered rate than that of the EV (Expected Value) formulation. We now compare our results with those presented in [2, 20]. Table 5 presents the numerical results of the ERM method on Example 2 as given by Chen and Fukushima [2] and Zhang and Chen [20].

Table 5: Results of the ERM method on Example 2

Nk    xk(Nk)             Obj     reliab(xk(Nk))
10²   (1.4353, 0.0171)   0.0700  44.58%
10³   (1.4291, 0.0137)   0.0833  43.68%
10⁴   (1.5269, 0.0000)   0.0823  52.69%
10⁵   (1.4965, 0.0000)   0.0826  49.65%

Comparing Table 2 with Table 5, we can see that the solution we obtained has higher reliability than the one obtained by Chen and Fukushima [2] and Zhang and Chen [20], but our optimal objective value is also higher than theirs.

5. Conclusion. In this paper, we proposed a new reformulation for stochastic nonlinear complementarity problems. We presented a penalty-based sample-average approximation method for solving them based on this reformulation. The numerical results revealed that the proposed PBSAAM method is efficient. The solutions obtained by our method have high safety and can serve as a scientific basis for managers making decisions.

Acknowledgments. The authors appreciate the comments made by the associate editor Professor Xiaojun Chen and two referees of this paper.

REFERENCES

[1] F. Bastin, C. Cirillo and P. L. Toint, Convergence theory for nonconvex stochastic programming with an application to mixed logit, Mathematical Programming, 108 (2006), 207–234.
[2] X. Chen and M. Fukushima, Expected residual minimization method for stochastic linear complementarity problems, Mathematics of Operations Research, 30 (2005), 1022–1038.
[3] X. Chen, C. Zhang and M. Fukushima, Robust solution of stochastic matrix linear complementarity problems, Mathematical Programming, 116 (2009), 51–80.
[4] F. H. Clarke, “Optimization and Nonsmooth Analysis,” SIAM, 1990.
[5] R. W. Cottle, J.-S. Pang and R. E. Stone, “The Linear Complementarity Problem,” Academic Press, San Diego, CA, 1992.
[6] F. Facchinei and J.-S. Pang, “Finite-dimensional Variational Inequalities and Complementarity Problems,” Springer, New York, 2003.
[7] H. Fang, X. Chen and M. Fukushima, Stochastic R0 matrix linear complementarity problems, SIAM Journal on Optimization, 18 (2007), 482–506.
[8] G. Gurkan, A. Y. Ozge and S. M. Robinson, Sample-path solutions of stochastic variational inequalities, Mathematical Programming, 84 (1999), 313–333.
[9] G. H. Lin, Monte Carlo sampling and penalty method for stochastic nonlinear complementarity problems, Mathematics of Computation, 78 (2009), 1671–1686.
[10] G. H. Lin, X. Chen and M. Fukushima, New restricted NCP functions and their applications to stochastic NCP and stochastic MPEC, Optimization, 56 (2007), 641–653.
[11] G. H. Lin and M. Fukushima, New reformulations for stochastic nonlinear complementarity problems, Optimization Methods and Software, 21 (2006), 551–564.
[12] F. W. Meng and H. Xu, A regularized sample average approximation method for stochastic mathematical programs with nonsmooth equality constraints, SIAM Journal on Optimization, 17 (2006), 891–919.
[13] L. P. Zhang, A nonlinear complementarity model for supply chain network equilibrium, Journal of Industrial and Management Optimization, 3 (2007), 727–737.
[14] S. M. Robinson, Analysis of sample-path optimization, Mathematics of Operations Research, 21 (1996), 513–528.
[15] A. Ruszcynski and A. Shapiro, eds., “Stochastic Programming,” Handbooks in OR&MS, Vol. 10, North-Holland Publishing Company, Amsterdam, 2003.
[16] A. Shapiro, Statistical inference of stochastic optimization problems, in “Probabilistic Constrained Optimization: Theory and Applications,” Kluwer Academic Publishers, 2000, 91–116.
[17] A. Shapiro, Stochastic mathematical programs with equilibrium constraints, Preprint, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA, 2004.
[18] M. Z. Wang and M. M. Ali, Stochastic nonlinear complementarity problems: Stochastic programming reformulation and penalty-based approximation method, Journal of Optimization Theory and Applications, 2009, DOI 10.1007/s10957-009-9606-4.
[19] H. Xu and F. Meng, Convergence analysis of sample average approximation methods for a class of stochastic mathematical programs with equality constraints, Mathematics of Operations Research, 32 (2007), 648–668.
[20] C. Zhang and X. Chen, Stochastic nonlinear complementarity problem and application to traffic equilibrium under uncertainty, Journal of Optimization Theory and Applications, 137 (2008), 277–295.

Received October 2008; 1st revision April 2009; final revision October 2009.

E-mail address: [email protected]

E-mail address: [email protected]

