
J Glob Optim (2013) 55:901–920. DOI 10.1007/s10898-012-9984-2

First and second-order optimality conditions using approximations for vector equilibrium problems with constraints

Phan Quoc Khanh · Le Thanh Tung

Received: 21 April 2011 / Accepted: 27 August 2012 / Published online: 28 September 2012
© Springer Science+Business Media New York 2012

Abstract We consider various kinds of solutions to nonsmooth vector equilibrium problems with functional constraints. By using first and second-order approximations as generalized derivatives, we establish both necessary and sufficient optimality conditions. Our first-order conditions are shown to be applicable in many cases where existing ones cannot be used. The second-order conditions are new.

Keywords Equilibrium problems · Ky Fan inequality · Optimality conditions · First and second-order approximations · Weak solutions · Firm solutions · Henig-proper solutions · Strong Henig-proper solutions · Benson-proper solutions

Mathematics Subject Classification (2010) 90C46 · 91B50

1 Introduction

Optimality conditions for nonsmooth optimization have become one of the most important topics in the study of optimization-related problems. Various notions of generalized derivatives have been developed to suit the wide range of practical situations; see, e.g., the excellent books [2,5,24,25,27]. One can observe that the Clarke derivative, the contingent (Aubin) derivative, and the Mordukhovich coderivative and limiting subdifferential dominate in use. However, in particular problems, other generalized derivatives sometimes have advantages. Therefore, new notions are still being introduced to meet the diversity of needs in practice.

P. Q. Khanh (B)
Department of Mathematics, International University of Hochiminh City, Linh Trung, Thu Duc, Hochiminh City, Vietnam
e-mail: [email protected]

L. T. Tung
Department of Mathematics, College of Sciences, Cantho University, Cantho, Vietnam
e-mail: [email protected]


The mainstream of models dealt with in the literature so far consists of various kinds of minimization problems. They are natural generalizations of classical mathematical programming problems and occur most frequently. However, other problem settings have become more and more attractive to researchers, especially general ones which include minimization problems and other optimization-related models. During the last several decades, variational inequalities, equilibrium problems and variational inclusion or relation problems have been intensively investigated. The efforts, however, have been focused on solution existence, stability and sensitivity, well-posedness, properties of solution sets and numerical methods. Very few contributions to optimality conditions for these problems can be found in the literature. We think that the model of equilibrium problems plays a central role, since it extends variational inequalities to a completely nonlinear case. Variational inclusion and relation problems are more general but do not differ very much from equilibrium problems. Hence, we consider vector cases of this model in the present paper. The terminology “equilibrium problems” was proposed (in 1994) in [4]. Since then, this terminology has been widely used in the literature. In fact, however, this problem setting was already known earlier as the Ky Fan inequality, following the important work [6] of 1972. Furthermore, the Ky Fan result on the existence of solutions to this inequality problem is known to be equivalent to many deep existence results in nonlinear analysis, such as the fixed point theorems of Brouwer (1910), Schauder (1930) and Kakutani (1941), the Knaster–Kuratowski–Mazurkiewicz theorem, known also as the KKM theorem or the Three Polish lemma (1926), and the Ekeland variational principle (1974). However, since the name “equilibrium problems” is now familiar to researchers working on optimization-related problems, we choose this terminology. For equilibrium problems, we are aware only of [8,9,21,22,28] concerning optimality conditions. In [8], convex vector equilibrium problems with constraints were studied. Several kinds of epiderivatives of set-valued maps were employed to establish optimality conditions for multivalued equilibrium problems in [22]. Since these epiderivatives of set-valued maps always exist, the assumptions imposed there are relaxed. However, for sufficient conditions in nonconvex problems, radial epiderivatives, which possess larger graphs, have to be used instead of other epiderivatives. So, the gaps between necessary and sufficient conditions are relatively big. In [9], using a well-known scalarization technique together with strict derivatives and Mordukhovich coderivatives, necessary optimality conditions for weak and several kinds of proper solutions of vector (single-valued) equilibrium problems were established. When the problems are convex, these conditions become sufficient. The data of the problems in question were assumed to be strictly differentiable or locally Lipschitz there. In [21], optimality conditions in terms of the contingent variation defined in [7] were discussed. In [28], the authors considered optimality conditions for differentiable vector equilibrium problems. Weak solutions, Henig-proper and strong Henig-proper solutions were investigated.

Although some similarity can be recognized between minimization and equilibrium problems, detailed investigations of optimality conditions for equilibrium problems turn out to be different and interesting. All the above observations motivate the topic of the present paper: optimality conditions for nonsmooth equilibrium problems. Among the many generalized derivatives, we rely on the first and second-order approximations introduced in [1,13], since we have successfully applied them to minimization models in [16–20]. One of the most significant advantages of this kind of derivative is that even functions with infinite discontinuities may still have second-order approximations. Therefore, it can be used for highly nondifferentiable problems. Furthermore, this derivative notion includes many other derivative notions as special cases (see [16,17]).

The paper is structured as follows. In Sect. 2, we recall needed notions and some elementary facts. Section 3 is devoted to first-order optimality conditions. Here we establish necessary conditions for weak solutions, since they are also necessary for all other, stronger solutions. Sufficient conditions are demonstrated for several kinds of proper solutions. These kinds may be regarded as representatives for various notions of properness. For comprehensive comparisons of properness notions, the interested reader is referred to [10,11,14,15,23]. In Sect. 4, second-order approximations are employed in second-order optimality conditions for both cases where the involved maps are or are not first-order differentiable. To the best of our knowledge, there have been no second-order optimality conditions for equilibrium problems in the literature so far.

2 Preliminaries

Let X, Y be real normed spaces and let S ⊆ X and C ⊆ Y be pointed closed convex cones. intS, clS, bdS and coS denote the interior, closure, boundary and convex hull, respectively (shortly, resp), of S. BX stands for the closed unit ball in X. X∗ is the dual space of X. N is the set of natural numbers and R^k_+ is the nonnegative orthant of the k-dimensional space R^k. For x0 ∈ X, U(x0) denotes the collection of neighborhoods of x0. A nonempty convex subset B of a convex cone C is called a base of C if C = coneB and 0 ∉ clB. We often use the following cones, for u ∈ X,

coneS = {λa | λ ≥ 0, a ∈ S},   cone+S = {λa | λ > 0, a ∈ S},   S(u) = cone(S + u),
C∗ = {y∗ ∈ Y∗ | 〈y∗, c〉 ≥ 0, ∀c ∈ C}   (the polar cone),
C∗i = {y∗ ∈ Y∗ | 〈y∗, c〉 > 0, ∀c ∈ C \ {0}}   (the quasi-interior of C∗).

The recession cone of S is defined by

S∞ = { lim_{n→∞} tn an | an ∈ S, tn > 0 and lim_{n→∞} tn = 0 }.

Definition 2.1 Let x0, v ∈ X and S ⊆ X.

(i) The cone of weak feasible directions to S at x0 is

W f(S, x0) = {u ∈ X | ∃tn → 0+, ∀n, x0 + tn u ∈ S}.

(ii) The contingent cone (the second-order contingent set, resp) of S at x0 is

T(S, x0) = {v ∈ X | ∃tn → 0+, ∃vn → v, ∀n ∈ N, x0 + tn vn ∈ S}

(T²(S, x0, v) = {w ∈ X | ∃tn → 0+, ∃wn → w, ∀n ∈ N, x0 + tn v + (1/2)tn² wn ∈ S}, resp).

(iii) The asymptotic second-order tangent cone of S at (x0, v), see [26], is

T″(S, x0, v) = {w ∈ X | ∃(tn, rn) → (0+, 0+) with tn/rn → 0, ∃wn → w, ∀n ∈ N, x0 + tn v + (1/2)tn rn wn ∈ S}.

Lemma 2.1 ([16]) If C ⊆ Y is a closed convex cone with intC ≠ ∅, y0 ∈ −C, y ∈ −intC(y0) and (1/tn)(yn − y0) → y as tn → 0+, then yn ∈ −intC for all n large enough.


Lemma 2.2 ([12]) Assume that X is finite dimensional and x0 ∈ S ⊆ X. If xn ∈ S \ {x0} and xn → x0, then there exist u ∈ T(S, x0) \ {0} and a subsequence, denoted again by xn, such that

(i) (1/tn)(xn − x0) → u, where tn = ‖xn − x0‖;
(ii) either there exists w ∈ T²(S, x0, u) ∩ u⊥ such that (xn − x0 − tn u)/((1/2)tn²) → w, or there exist w ∈ T″(S, x0, u) ∩ u⊥ \ {0} and rn → 0+ such that tn/rn → 0+ and (xn − x0 − tn u)/((1/2)tn rn) → w, where u⊥ is the orthogonal complement of u in X.

A subset S of X is called star-shaped at x0 if, for all x ∈ S and all λ ∈ [0, 1], (1 − λ)x0 + λx ∈ S. A map f : X → Y is said to be C-star-shaped at x0 on S, where S is star-shaped at x0, if for all x ∈ S and all λ ∈ [0, 1], one has

(1 − λ) f(x0) + λ f(x) ∈ f((1 − λ)x0 + λx) + C.

If S is convex and this relation is fulfilled for all x0 ∈ S, then f is called C-convex. f is said to be pseudoconvex at x0 if

epi f ⊆ (x0, f(x0)) + T(epi f, (x0, f(x0))),

where epi f = {(x, y) ∈ X × Y | y ∈ f(x) + C} is the epigraph of f. A subset S of X is called (see [3]) arcwise-connected at x0 if, for each x ∈ S, there exists a continuous arc Lx0,x(t) defined on [0, 1] such that Lx0,x(0) = x0, Lx0,x(1) = x, and Lx0,x(λ) ∈ S for all λ ∈ (0, 1). f is said to be C-arcwise-connected at x0 on S, where S is arcwise-connected at x0, if for all x ∈ S and all λ ∈ [0, 1],

(1 − λ) f(x0) + λ f(x) ∈ f(Lx0,x(λ)) + C.

Let L(X, Y) stand for the space of continuous linear mappings from X into Y and let B(X, X, Y) denote the space of continuous bilinear mappings from X × X into Y. For A ⊆ L(X, Y) and x ∈ X (B ⊆ B(X, X, Y) and (x, z) ∈ X × X, resp), denote A(x) := {M(x) | M ∈ A} (B(x, z) := {N(x, z) | N ∈ B}, resp). o(t^k), for t > 0 and k ∈ N, denotes a moving point such that o(t^k)/t^k → 0 as t → 0+. Let fα and f be in L(X, Y). The net {fα} is said to pointwise converge to f, written fα →p f or f = p-lim fα, if lim fα(x) = f(x) for all x ∈ X. A similar definition is adopted for fα, f ∈ B(X, X, Y). Notice that, if Y = R, pointwise convergence is weak∗ convergence.

A subset A ⊆ L(X, Y) (B ⊆ B(X, X, Y), resp) is called asymptotically pointwise compact (shortly, asympt p-compact) if (see [16,17])

(a) each bounded net {fα} ⊆ A (⊆ B, resp) has a subnet {fβ} and f ∈ L(X, Y) (f ∈ B(X, X, Y), resp) such that f = p-lim fβ;
(b) for each net {fα} ⊆ A (⊆ B, resp) with lim ‖fα‖ = ∞, the net {fα/‖fα‖} has a subnet which pointwise converges to some f ∈ L(X, Y) \ {0} (f ∈ B(X, X, Y) \ {0}, resp).

If pointwise convergence is replaced by norm convergence, the term asympt compactness is used. If X and Y are finite dimensional, every subset is asympt p-compact and asympt compact. This is not true in infinite dimensions, and asympt p-compactness is weaker than asympt compactness.

For A ⊆ L(X, Y) and B ⊆ B(X, X, Y), we adopt the following notations:

p-clA = {f ∈ L(X, Y) | ∃{fα} ⊆ A, f = p-lim fα},
p-clB = {g ∈ B(X, X, Y) | ∃{gα} ⊆ B, g = p-lim gα},
p-A∞ = {f ∈ L(X, Y) | ∃{fα} ⊆ A, ∃tα → 0+, f = p-lim tα fα},
p-B∞ = {g ∈ B(X, X, Y) | ∃{gα} ⊆ B, ∃tα → 0+, g = p-lim tα gα}.


Observe that p-clA and p-clB are pointwise closures and p-A∞ and p-B∞ are pointwise recession cones. The generalized derivatives we apply in this paper are the following.

Definition 2.2 ([1,12])

(i) A set A f(x0) ⊆ L(X, Y) is called a first-order approximation of f : X → Y at x0 ∈ X if

f(x) − f(x0) ∈ A f(x0)(x − x0) + o(‖x − x0‖).

(ii) A set (A f(x0), B f(x0)) ⊆ L(X, Y) × B(X, X, Y) is called a second-order approximation of f : X → Y at x0 ∈ X if

(a) A f(x0) is a first-order approximation of f at x0;
(b) f(x) − f(x0) ∈ A f(x0)(x − x0) + B f(x0)(x − x0, x − x0) + o(‖x − x0‖²).
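
As a quick sanity check of Definition 2.2 (a minimal sketch added here; the function and candidate sets are chosen for illustration and are not taken from the paper), consider f(x) = |x| + x² on R with A f(0) = {−1, 1} and B f(0) = {1}: choosing P = sgn(x) and M = 1 leaves no remainder at all, so both inclusions hold.

```python
# Illustration (not from the paper): check Definition 2.2 numerically for
# f(x) = |x| + x**2 at x0 = 0 with A_f(0) = {-1, 1} and B_f(0) = {1}.
# For each x we need P in A_f(0) and M in B_f(0) with
# f(x) - f(0) = P*x + M*x**2 + o(x**2); here P = sign(x), M = 1 give a zero remainder.

def f(x):
    return abs(x) + x**2

A, B = [-1.0, 1.0], [1.0]

for x in [0.5, 0.1, -0.1, 1e-3, -1e-6]:
    # best remainder over the (finite) candidate sets
    rem = min(abs(f(x) - f(0.0) - P*x - M*x**2) for P in A for M in B)
    assert rem < 1e-12, (x, rem)   # analytically the remainder is exactly 0
print("A_f(0) and (A_f(0), B_f(0)) act as first and second-order approximations at 0")
```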

From now on, let X, Y and Z be normed spaces, S ⊆ X nonempty, F : X × X → Y, g : X → Z, and let C ⊆ Y, K ⊆ Z be pointed closed convex cones. Denote Ω = {x ∈ S | g(x) ∈ −K} (the feasible set). Define Fx0 : X → Y by Fx0(y) = F(x0, y) for y ∈ X and assume that Fx0(x0) = 0 (without loss of generality). The vector equilibrium problem (EP) with constraints under our consideration is described, depending on the kinds of solutions, as follows.

Definition 2.3 (i) If intC ≠ ∅, x0 ∈ Ω is said to be a local weak solution of problem (EP) (x0 ∈ WMin(F, g)) if ∃U ∈ U(x0), F(x0, U ∩ Ω) ∩ (−intC) = ∅.

(ii) Assume that C has a bounded base B and δ = inf{‖b‖ | b ∈ B}. For any 0 < ε < δ, denote Cε(B) = cone(B + εBY). x0 ∈ Ω is called a local strong Henig-proper solution of (EP) (x0 ∈ SHe(F, g)) if ∃U ∈ U(x0), ∃ε ∈ (0, δ), F(x0, U ∩ Ω) ∩ (−intCε(B)) = ∅.

(iii) x0 ∈ Ω is termed a local Henig-proper solution of (EP) (x0 ∈ He(F, g)) if there exist a neighborhood U of x0 and a pointed convex cone H ⊆ Y with C \ {0} ⊆ intH such that F(x0, U ∩ Ω) ∩ (−H \ {0}) = ∅.

(iv) x0 ∈ Ω is called a local Benson-proper solution of (EP) (x0 ∈ Ben(F, g)) if ∃U ∈ U(x0), clcone(F(x0, U ∩ Ω) + C) ∩ (−C) = {0}.

(v) For m ∈ N, x0 ∈ Ω is said to be a local firm (or strict/isolated) solution of order m of (EP) (x0 ∈ FMin(F, g, m)) if ∃U ∈ U(x0), ∃γ > 0, ∀x ∈ U ∩ Ω \ {x0}, (F(x0, x) + C) ∩ BY(0, γ‖x − x0‖^m) = ∅.

Of course, if U = X, the word “local” is omitted. For comparisons of these concepts and also other kinds of properness notions see, e.g., [10,11,14,23]. In this paper, we are concerned only with the above defined notions. Hence, we collect below some of their known relations. Assume that intC ≠ ∅. Then,

(i) SHe(F, g) ⊆ Ben(F, g) ⊆ WMin(F, g);
(ii) He(F, g) ⊆ WMin(F, g);
(iii) He(F, g) ⊆ SHe(F, g), provided that C has a bounded base.
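
The sketch below (added here; the instance is hypothetical and not from the paper) shows how the weak-solution condition in Definition 2.3(i) can be tested by sampling. With X = Z = R, Y = R², C = R²+, K = R+, S = R, F(x, y) = ((y − x)², y − x) and g(x) = x − 1, the point x0 = 0 is a weak solution because the first component of F(x0, ·) is never negative; sampling can only refute, not prove, the property, but here it agrees with that analytic argument.

```python
# Hypothetical instance (not from the paper) illustrating Definition 2.3(i).
# X = Z = R, Y = R^2, C = R^2_+, K = R_+, S = R, x0 = 0,
# F(x, y) = ((y - x)**2, y - x),  g(x) = x - 1,  so  Omega = {x : g(x) <= 0} = (-inf, 1].
import numpy as np

def F(x, y):
    return np.array([(y - x)**2, y - x])

def g(x):
    return x - 1.0

x0, radius = 0.0, 0.5                      # neighborhood U = [x0 - radius, x0 + radius]
ys = np.linspace(x0 - radius, x0 + radius, 2001)
feasible = ys[g(ys) <= 0.0]                # U ∩ Omega

# x0 is a local weak solution iff no feasible y in U gives F(x0, y) in -int C,
# i.e. iff F(x0, y) never has both components strictly negative.
violations = [y for y in feasible if np.all(F(x0, y) < 0.0)]
print("weak-solution condition holds on the sampled neighborhood:", len(violations) == 0)
```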

3 First-order optimality conditions

3.1 Necessary optimality conditions

From the relations among the kinds of solutions to (EP) under our consideration, mentioned at the end of the preceding section, it is clear that WMin(F, g) contains all the other solution sets (assuming that intC ≠ ∅). We establish necessary conditions only for weak solutions, since they hold true also for the others. We need the following fact about the cone of weak feasible directions.


Proposition 3.1 Assume that C and K have nonempty interior, x0 ∈ Ω and Ag(x0) is a first-order approximation of g at x0.

(i) For u ∉ W f(Ω, x0), there exists Q ∈ Ag(x0) such that Q(u) ∉ −int(K + g(x0)).

(ii) Let u ∈ W f(Ω, x0). Assume further that Fx0 has an asympt p-compact first-order approximation AFx0(x0). If x0 is a local weak solution of (EP), then there exists P ∈ p-clAFx0(x0) ∪ (p-AFx0(x0)∞ \ {0}) such that Pu ∉ −intC.

Proof (i) Suppose to the contrary that Ag(x0)(u) ⊆ −int(K + g(x0)). Then, for large n and some rn → 0+,

g(x0 + (1/n)u) ∈ g(x0) + (1/n)Ag(x0)(u) + (rn/n)BZ = (1 − 1/n)g(x0) + (1/n)(g(x0) + Ag(x0)(u) + rn BZ).

Since Ag(x0)(u) ⊆ −int(K + g(x0)), we have g(x0) + Ag(x0)(u) + rn BZ ⊆ −K for large n. Therefore, g(x0 + (1/n)u) ∈ −K, contradicting the assumption u ∉ W f(Ω, x0).

(ii) For the sequence tn → 0+ associated with u (in the definition of W f(Ω, x0)) and n large enough, we have

Fx0(x0 + tn u) − Fx0(x0) = Fx0(x0 + tn u) ∉ −intC.

Therefore, there is Pn ∈ AFx0(x0) such that, for such n,

tn Pn u + o(tn) ∉ −intC.   (1)

If {Pn} is norm-bounded, we can assume that Pn →p P ∈ p-clAFx0(x0). Dividing (1) by tn and passing to the limit, we obtain Pu ∉ −intC as required.

If {Pn} is norm-unbounded, then (for a subsequence) Pn/‖Pn‖ →p P ∈ p-AFx0(x0)∞ \ {0}. Dividing (1) by tn‖Pn‖ and letting n → ∞, we also obtain P(u) ∉ −intC. □

Theorem 3.2 Let C and K have nonempty interior and let AFx0(x0), Ag(x0) be asympt p-compact first-order approximations of Fx0 and g, resp, at x0. If x0 ∈ WMin(F, g), then, ∀u ∈ X, ∃P ∈ p-clAFx0(x0) ∪ (p-AFx0(x0)∞ \ {0}), ∃Q ∈ Ag(x0), ∃(c∗, d∗) ∈ C∗ × K∗ \ {(0, 0)},

〈c∗, P(u)〉 + 〈d∗, Q(u)〉 ≥ 0,   〈d∗, g(x0)〉 = 0.

Furthermore, for u with 0 ∈ int(Q(u) + g(x0) + K) for all Q ∈ Ag(x0), one has c∗ ≠ 0.

Proof If u ∈ W f(Ω, x0), by Proposition 3.1(ii), there is P ∈ p-clAFx0(x0) ∪ (p-AFx0(x0)∞ \ {0}) with P(u) ∉ −intC. If u ∉ W f(Ω, x0), by Proposition 3.1(i), there is Q ∈ Ag(x0) with Q(u) ∉ −int(K + g(x0)) ⊆ −intK(g(x0)). So, in both cases,

(P(u), Q(u)) ∉ −int[C × K(g(x0))].

According to the separation theorem, there is (c∗, d∗) ∈ C∗ × K∗ \ {(0, 0)} such that

〈c∗, P(u)〉 + 〈d∗, Q(u)〉 ≥ 0,   〈d∗, g(x0)〉 = 0.

Now, let u satisfy 0 ∈ int(Q(u) + g(x0) + K) for all Q ∈ Ag(x0) and suppose to the contrary that c∗ = 0. Then, the above separation collapses to

〈d∗, Q(u)〉 ≥ 0 and 〈d∗, g(x0)〉 = 0.

This implies that 〈d∗, Q(u) + g(x0) + d〉 ≥ 0 for all d ∈ K. Since d∗ ≠ 0, it follows that 0 ∉ int(Q(u) + g(x0) + K), which contradicts the assumption. □


Notice that, when applied to the special case of the corresponding vector minimization problem, Theorem 3.2 sharpens Theorem 3.1 of [18] by removing the assumed boundedness of Ag(x0) and by adding the case where c∗ ≠ 0. This removal is important, since a map with a bounded first-order approximation at a point must be continuous (even calm) there. In the following example, Theorem 3.2, with such an unbounded Ag(x0), can reject a candidate for a weak solution, while the counterpart in [28] cannot.

Example 3.1 Let X = Y = Z = R, x0 = 0, C = K = R+, g(x) = sgn(x)√|x| − 1 and

F(x, y) =
  1/y − 1/x   if x, y ≠ 0,
  1/y         if x = 0, y ≠ 0,
  1/x         if x ≠ 0, y = 0,
  0           if x = y = 0.

Let α, β > 0 be arbitrary and fixed. We can check that Ω = (−∞, 1], C∗ = K∗ = R+, Fx0(x0) = 0, and that AFx0(x0) = (α, +∞) and Ag(x0) = (β, +∞) are first-order approximations. Hence, clAFx0(x0) = [α, +∞) and AFx0(x0)∞ = [0, +∞). For u = −1 ∈ X, we see that, ∀P ∈ clAFx0(x0) ∪ (AFx0(x0)∞ \ {0}), ∀Q ∈ Ag(x0), ∀(c∗, d∗) ∈ C∗ × K∗ \ {(0, 0)} with 〈d∗, g(x0)〉 = 0 (which forces d∗ = 0 and hence c∗ > 0), one has

〈c∗, Pu〉 + 〈d∗, Qu〉 = c∗ · P · (−1) − 0 = −c∗ · P < 0.

According to Theorem 3.2, x0 is not a local weak solution of (EP). However, Fx0 and g are not differentiable at x0, so Theorem 3.2 of [28] does not work.

3.2 Sufficient optimality conditions

In this section we address sufficient conditions for each kind of solution under consideration except weak solutions, since any such condition is also valid for weak solutions. Under suitable assumptions we have the inclusion relations between these solution sets listed in Sect. 2; conditions for the strongest kind of solutions thus remain valid for the others, but may be too rough for weaker solutions. Furthermore, when studying each solution kind separately, we can impose suitably relaxed assumptions. We need the following fact about C-arcwise-connected maps.

Lemma 3.3 Let f : X → Y be C-arcwise-connected at x0 on V, where V ⊆ X is arcwise-connected at x0, and, for all x ∈ V, let the derivative L′x0,x(0+) exist (Lx0,x being the arc associated with the arcwise connectedness). Let A f(x0) be an asympt p-compact first-order approximation of f at x0 and let, ∀x ∈ V, ∀P ∈ p-A f(x0)∞ \ {0}, P(L′x0,x(0+)) ∉ −C. Then, ∀x ∈ V,

f(x) − f(x0) ∈ p-clA f(x0)(L′x0,x(0+)) + C.

Proof Since f is C-arcwise-connected at x0 on V, for t ∈ (0, 1] one has

f(x) − f(x0) ∈ (1/t)( f(Lx0,x(t)) − f(x0) ) + C.

As Lx0,x(t) → x0 when t → 0+ and Lx0,x(0) = x0, one gets

f(Lx0,x(t)) − f(Lx0,x(0)) ∈ A f(x0)(Lx0,x(t) − Lx0,x(0)) + o(‖Lx0,x(t) − Lx0,x(0)‖).

Then,

f(x) − f(x0) ∈ (1/t)[A f(x0)(Lx0,x(t) − Lx0,x(0)) + o(‖Lx0,x(t) − Lx0,x(0)‖)] + C.

Hence, taking tn → 0+, there is a sequence Pn ∈ A f(x0) such that

−f(x) + f(x0) + (1/tn)[Pn(Lx0,x(tn) − Lx0,x(0)) + o(‖Lx0,x(tn) − Lx0,x(0)‖)] ∈ −C.   (2)

If {Pn} is norm-unbounded, we can assume that Pn/‖Pn‖ →p P ∈ p-A f(x0)∞ \ {0}. Dividing (2) by ‖Pn‖ and letting n → ∞, we obtain P(L′x0,x(0+)) ∈ −C, a contradiction. So, {Pn} must be norm-bounded. Then, passing (2) to the limit gives

f(x) − f(x0) ∈ p-clA f(x0)(L′x0,x(0+)) + C. □

Remark 3.1 For the particular case where V is star-shaped at x0 and f is C-star-shaped at x0 on V or, for the more specific case where f is pseudoconvex at x0, Lemma 3.3 holds with L′x0,x(0+) replaced by x − x0.

We have the following sufficient condition for the Henig-properness.

Theorem 3.4 Let AFx0(x0) and Ag(x0) be asympt p-compact first-order approximations of Fx0 and g, resp, at x0 ∈ Ω. Assume that there exists a pointed cone H ⊆ Y with C \ {0} ⊆ intH. Then, x0 ∈ He(F, g) if either of the following conditions holds.

(i) (Fx0, g) is (C × K)-arcwise-connected at x0 on Ω; ∀x ∈ Ω, ∀(P, Q) ∈ p-A(Fx0,g)(x0)∞ \ {0}, (P, Q)(L′x0,x(0+)) ∉ −(C × K) and, ∃(c∗, d∗) ∈ H∗ × K∗ \ {(0, 0)}, ∀(y, z) ∈ p-clA(Fx0,g)(x0)(L′x0,x(0+)),

〈c∗, y〉 + 〈d∗, z〉 > 0 and 〈d∗, g(x0)〉 = 0.

(ii) (Fx0, g) is pseudoconvex at x0; ∀x ∈ Ω, ∀(P, Q) ∈ p-A(Fx0,g)(x0)∞ \ {0}, (P, Q)(x − x0) ∉ −(C × K) and, ∃(c∗, d∗) ∈ H∗ × K∗ \ {(0, 0)}, ∀(y, z) ∈ p-clA(Fx0,g)(x0)(x − x0),

〈c∗, y〉 + 〈d∗, z〉 > 0 and 〈d∗, g(x0)〉 = 0.

Proof Due to the similarity, we check only part (i). Suppose, to the contrary, that there exists xn ∈ Ω tending to x0 such that Fx0(xn) ∈ −H \ {0} for all n. Making use of Lemma 3.3 and noticing that Fx0(x0) = 0 and C ⊆ H, one has

(Fx0(xn), g(xn)) − (Fx0(x0), g(x0)) ∈ p-clA(Fx0,g)(x0)(L′x0,xn(0+)) + H × K.

Therefore, there exist (y′n, z′n) ∈ p-clA(Fx0,g)(x0)(L′x0,xn(0+)) and (hn, kn) ∈ H × K such that

(Fx0(xn), g(xn)) − (0, g(x0)) = (y′n, z′n) + (hn, kn).

Consequently, for all (c∗, d∗) ∈ H∗ × K∗ and all n,

〈c∗, y′n + hn〉 + 〈d∗, z′n + kn〉 = 〈c∗, Fx0(xn)〉 + 〈d∗, g(xn) − g(x0)〉 ≤ −〈d∗, g(x0)〉.

This completes the proof because of the contradiction that

〈c∗, y′n〉 + 〈d∗, z′n〉 ≤ −〈c∗, hn〉 − 〈d∗, kn〉 − 〈d∗, g(x0)〉 ≤ −〈d∗, g(x0)〉. □

Theorem 3.5 Let AFx0(x0) and Ag(x0) be asympt p-compact first-order approximations of Fx0 and g, resp, at x0 ∈ Ω. Let, for a base B of C, δ = inf{‖b‖ | b ∈ B} and ε ∈ (0, δ). Then, x0 ∈ SHe(F, g), provided either of the following conditions holds.

(i) (Fx0, g) is (C × K)-arcwise-connected at x0 on Ω; ∀x ∈ Ω, ∀(P, Q) ∈ p-A(Fx0,g)(x0)∞ \ {0}, (P, Q)(L′x0,x(0+)) ∉ −(C × K) and, ∃(c∗, d∗) ∈ Cε(B)∗ × K∗ \ {(0, 0)}, ∀(y, z) ∈ p-clA(Fx0,g)(x0)(L′x0,x(0+)),

〈c∗, y〉 + 〈d∗, z〉 ≥ 0 and 〈d∗, g(x0)〉 = 0.

(ii) (Fx0, g) is pseudoconvex at x0; ∀x ∈ Ω, ∀(P, Q) ∈ p-A(Fx0,g)(x0)∞ \ {0}, (P, Q)(x − x0) ∉ −(C × K) and, ∃(c∗, d∗) ∈ Cε(B)∗ × K∗ \ {(0, 0)}, ∀(y, z) ∈ p-clA(Fx0,g)(x0)(x − x0),

〈c∗, y〉 + 〈d∗, z〉 ≥ 0 and 〈d∗, g(x0)〉 = 0.

Proof Since (i) and (ii) are similar, we check only part (i). Suppose ad absurdum that there exists xn ∈ Ω tending to x0 such that Fx0(xn) ∈ −intCε(B). By Lemma 3.3,

(Fx0(xn), g(xn)) − (Fx0(x0), g(x0)) ∈ p-clA(Fx0,g)(x0)(L′x0,xn(0+)) + C × K.

Likewise as for Theorem 3.4, there exist (y′n, z′n) ∈ p-clA(Fx0,g)(x0)(L′x0,xn(0+)) and (cn, kn) ∈ Cε(B) × K such that (Fx0(xn), g(xn)) − (0, g(x0)) = (y′n, z′n) + (cn, kn). Therefore, for all (c∗, d∗) ∈ Cε(B)∗ × K∗ and all n,

〈c∗, y′n + cn〉 + 〈d∗, z′n + kn〉 = 〈c∗, Fx0(xn)〉 + 〈d∗, g(xn) − g(x0)〉 ≤ −〈d∗, g(x0)〉.

Hence, we obtain the contradiction that

〈c∗, y′n〉 + 〈d∗, z′n〉 ≤ −〈d∗, g(x0)〉. □

In the following case, Theorem 3.5 detects a strong Henig-proper solution, while Theorem 3.7 of [28] does not work.

Example 3.2 Let X = Z = R, Y = R², S = X, x0 = 0, C = R²+, K = R+, F(x, y) = (y − x, |y| − |x|) and g(x) = x² − 2x. Take B = {(y1, y2) ∈ R² | y1 + y2 = 1, y1 ≥ 0, y2 ≥ 0} as a base of C and ε ∈ (0, δ), where δ = inf{‖b‖ | b ∈ B} = 1/√2. Then, Cε(B) ⊆ {(y1, y2) ∈ R² | y1 + y2 > 0}. We can check that Cε(B)∗ = {(y1, y1) | y1 > 0}, K∗ = R+ and that A(Fx0,g)(x0) = {((1, ±1), −2)} is a first-order approximation of (Fx0, g) at x0. Hence, A(Fx0,g)(x0)∞ = {((0, 0), 0)} and clA(Fx0,g)(x0) = {((1, ±1), −2)}. Furthermore, Ω = [0, 2] is convex and (Fx0, g) is (C × K)-arcwise-connected at x0 on Ω. As A(Fx0,g)(x0)∞ \ {((0, 0), 0)} = ∅, the assumption (P, Q)(x − x0) ∉ −(C × K) is satisfied. The remaining part of Theorem 3.5 is satisfied with (c∗, d∗) = ((1, 1), 0). Therefore, x0 is a local strong Henig-proper solution. However, Fx0 is not Gateaux differentiable at x0, and so Theorem 3.7 of [28] is out of use.
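
The conclusion of Example 3.2 can also be cross-checked directly from Definition 2.3(ii) (a sketch added here, not from the paper): since the example notes intCε(B) ⊆ {(y1, y2) | y1 + y2 > 0}, it suffices that Fx0(y) = (y, |y|) has nonnegative component sum on Ω = [0, 2].

```python
# Direct numerical cross-check of Example 3.2 (added): x0 = 0 is a strong Henig-proper solution.
import numpy as np

def F_x0(y):                                 # F(0, y) = (y - 0, |y| - |0|)
    return np.array([y, abs(y)])

omega = np.linspace(0.0, 2.0, 2001)          # feasible set: g(x) = x**2 - 2*x <= 0 on [0, 2]
# -int C_eps(B) is contained in {(y1, y2) : y1 + y2 < 0}, so y1 + y2 >= 0 keeps F(x0, y) outside it.
assert all(F_x0(y).sum() >= 0.0 for y in omega)
print("F(x0, y) avoids -int C_eps(B) at every sampled feasible y")
```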

Theorem 3.6 If x0 ∈ Ω and AFx0(x0) and Ag(x0) are asympt p-compact first-order approximations of Fx0 and g, resp, at x0, then x0 ∈ Ben(F, g) whenever either of the following conditions holds.

(i) (Fx0, g) is (C × K)-arcwise-connected at x0 on Ω; ∀x ∈ Ω, ∀(P, Q) ∈ p-A(Fx0,g)(x0)∞ \ {0}, (P, Q)(L′x0,x(0+)) ∉ −(C × K) and, ∃(c∗, d∗) ∈ C∗i × K∗ \ {(0, 0)}, ∀(y, z) ∈ p-clA(Fx0,g)(x0)(L′x0,x(0+)),

〈c∗, y〉 + 〈d∗, z〉 ≥ 0 and 〈d∗, g(x0)〉 = 0.

(ii) (Fx0, g) is pseudoconvex at x0; ∀x ∈ Ω, ∀(P, Q) ∈ p-A(Fx0,g)(x0)∞ \ {0}, (P, Q)(x − x0) ∉ −(C × K) and, ∃(c∗, d∗) ∈ C∗i × K∗ \ {(0, 0)}, ∀(y, z) ∈ p-clA(Fx0,g)(x0)(x − x0),

〈c∗, y〉 + 〈d∗, z〉 ≥ 0 and 〈d∗, g(x0)〉 = 0.

Proof Since (ii) is similar to (i), its proof is omitted. For (i), arguing by contradiction, suppose there exists a nonzero point y ∈ clcone(Fx0(Ω) + C) ∩ (−C). Then, there exist sequences λn > 0, xn ∈ Ω and cn ∈ C such that

lim_{n→∞} λn(Fx0(xn) + cn) = y.   (3)

By Lemma 3.3, one has

(Fx0(xn), g(xn)) − (Fx0(x0), g(x0)) ∈ p-clA(Fx0,g)(x0)(L′x0,xn(0+)) + C × K.

So, there exist (y′n, z′n) ∈ p-clA(Fx0,g)(x0)(L′x0,xn(0+)) and (c′n, kn) ∈ C × K satisfying

(Fx0(xn), g(xn)) − (0, g(x0)) = (y′n, z′n) + (c′n, kn).

Hence, for all (c∗, d∗) ∈ C∗i × K∗ and all n,

〈c∗, y′n + c′n〉 + 〈d∗, z′n + kn〉 = 〈c∗, Fx0(xn)〉 + 〈d∗, g(xn) − g(x0)〉.

Furthermore, from (3) we have

lim_{n→∞} λn(〈c∗, Fx0(xn)〉 + 〈c∗, cn〉) = 〈c∗, y〉 < 0.

Therefore, for sufficiently large n, 〈c∗, Fx0(xn)〉 < 0. This implies that

〈c∗, y′n + c′n〉 + 〈d∗, z′n + kn〉 = 〈c∗, Fx0(xn)〉 + 〈d∗, g(xn) − g(x0)〉 < −〈d∗, g(x0)〉.

Consequently, we obtain the contradiction that

〈c∗, y′n〉 + 〈d∗, z′n〉 < −〈d∗, g(x0)〉. □

Finally, we take care of local firm solutions.

Theorem 3.7 Assume that X is finite dimensional, x0 ∈ Ω, and AFx0(x0) and Ag(x0) are asympt p-compact first-order approximations of Fx0 and g, resp, at x0. Suppose that, for all u ∈ T(Ω, x0) with norm one, all P ∈ p-clAFx0(x0) ∪ (p-AFx0(x0)∞ \ {0}) and all Q ∈ p-clAg(x0) ∪ (p-Ag(x0)∞ \ {0}), there exists (c∗, d∗) ∈ C∗ × K∗ \ {(0, 0)} such that

〈c∗, Pu〉 + 〈d∗, Qu〉 > 0,   〈d∗, g(x0)〉 = 0.

Then, x0 is a local firm solution of order 1 of (EP).

Proof Reasoning by contraposition, suppose the existence of xn ∈ (S ∩ BX(x0, 1/n)) \ {x0} and cn ∈ C such that g(xn) ∈ −K and Fx0(xn) + cn ∈ BY(0, (1/n)‖xn − x0‖). Because

Fx0(xn) = Fx0(xn) − Fx0(x0) ∈ AFx0(x0)(xn − x0) + o(‖xn − x0‖),

there is Pn ∈ AFx0(x0) such that, for n large enough,

Pn(xn − x0) + o(‖xn − x0‖) + cn ∈ BY(0, (1/n)‖xn − x0‖).   (4)

Since X is finite dimensional, one can assume that (xn − x0)/‖xn − x0‖ → u for some u ∈ T(Ω, x0) with norm one.

If {Pn} is norm-bounded, we can assume that Pn →p P ∈ p-clAFx0(x0). Dividing (4) by ‖xn − x0‖ and passing to the limit, one gets Pu ∈ −C.

If {Pn} is norm-unbounded, one can suppose that ‖Pn‖ → ∞ and Pn/‖Pn‖ →p P ∈ p-AFx0(x0)∞ \ {0}. Dividing (4) by ‖Pn‖·‖xn − x0‖ and passing to the limit give Pu ∈ −C. So, there exists P ∈ p-clAFx0(x0) ∪ (p-AFx0(x0)∞ \ {0}) such that Pu ∈ −C. We also have

g(xn) − g(x0) ∈ −K − g(x0) ⊆ −K(g(x0)).

Hence, there exists Qn ∈ Ag(x0) fulfilling

Qn(xn − x0) + o(‖xn − x0‖) ∈ −K(g(x0)).   (5)

Similarly as for {Pn}, from (5) it follows that there exists Q ∈ p-clAg(x0) ∪ (p-Ag(x0)∞ \ {0}) such that Qu ∈ −K(g(x0)). So, for each (c∗, d∗) ∈ C∗ × K∗ \ {(0, 0)} with 〈d∗, g(x0)〉 = 0, one has the contradiction 〈c∗, Pu〉 + 〈d∗, Qu〉 ≤ 0. □

We illustrate this theorem as follows.

Example 3.3 Let X = Y = Z = R², S = X, x0 = (0, 0), C = K = R²+,

F((x1, x2), (y1, y2)) = (∛y1 − ∛x1, sgn(y2)√|y2| − sgn(x2)√|x2|),   g(x1, x2) = (x1² − x2, −x1).

Then, Ω = {(x1, x2) ∈ R² | x2 ≥ x1², x1 ≥ 0}, T(Ω, x0) = {(x1, x2) ∈ R² | x1 ≥ 0, x2 ≥ 0} and, for any fixed α1 > 0, α2 > 0, Fx0 and g admit the first-order approximations

AFx0(x0) = {(β1 0; 0 β2) | βi > αi, i = 1, 2},   Ag(x0) = {(0 −1; −1 0)}.

Hence,

clAFx0(x0) = {(β1 0; 0 β2) | βi ≥ αi, i = 1, 2},
(AFx0(x0))∞ = {(γ1 0; 0 γ2) | γi ≥ 0, i = 1, 2},
clAg(x0) = Ag(x0),   (Ag(x0))∞ = {0}.

So, one sees that, ∀u ∈ R² with ‖u‖ = 1 and u ∈ T(Ω, x0), ∀P ∈ clAFx0(x0) ∪ ((AFx0(x0))∞ \ {0}), ∀Q ∈ clAg(x0) ∪ ((Ag(x0))∞ \ {0}), for (c∗, d∗) = ((1, 1), (0, 0)) ∈ C∗ × K∗ \ {(0, 0)}, the following conditions hold:

〈c∗, Pu〉 + 〈d∗, Qu〉 > 0,   〈d∗, g(x0)〉 = 0.

By virtue of Theorem 3.7, x0 is a local firm solution of order 1 of (EP). The results in [8,28] for convex or differentiable cases cannot be applied, since Fx0 is neither Gateaux differentiable at x0 nor C-convex on X. (To see the nonconvexity, compute Fx0((1/2)(0, 0) + (1/2)(1, 0)) = Fx0(1/2, 0) = (∛(1/2), 0) and (1/2)Fx0(0, 0) + (1/2)Fx0(1, 0) = (1/2, 0).)
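
The conclusions of Example 3.3 can be confirmed directly as well (a sketch added here, not from the paper): on feasible points near x0 the value Fx0(x) lies in C, so the distance from the origin to Fx0(x) + C equals ‖Fx0(x)‖, and this distance is at least ‖x − x0‖, giving firmness of order 1 with γ = 1; the final midpoint computation of the example shows the failure of C-convexity.

```python
# Direct check of Example 3.3 (added): firmness of order 1 (gamma = 1) near x0 = (0, 0),
# plus the non-C-convexity computation quoted at the end of the example.
import numpy as np

def F_x0(x1, x2):                              # F(x0, (x1, x2)) with x0 = (0, 0)
    return np.array([np.cbrt(x1), np.sign(x2) * np.sqrt(abs(x2))])

rng = np.random.default_rng(0)
for _ in range(10000):
    x1 = rng.uniform(0.0, 1.0)
    x2 = rng.uniform(x1**2, 1.0)               # feasible: x1 >= 0 and x2 >= x1**2
    v = F_x0(x1, x2)                           # here v lies in C = R^2_+,
    # so dist(0, F_x0(x) + C) = ||F_x0(x)||, and firmness of order 1 with gamma = 1 reads
    # ||F_x0(x)||**2 >= ||x - x0||**2:
    assert v @ v >= x1**2 + x2**2 - 1e-12

# Non-C-convexity: the average of the endpoint values does not dominate the midpoint value.
gap = 0.5 * (F_x0(0.0, 0.0) + F_x0(1.0, 0.0)) - F_x0(0.5, 0.0)
print("average minus midpoint value:", gap, "-> lies outside C:", not np.all(gap >= 0.0))
```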

4 Second-order optimality conditions

In this section, the following notations will be adopted for problem (EP). For d∗ ∈ K∗, set

G(d∗) = {x ∈ S | g(x) ∈ −K, 〈d∗, g(x)〉 = 0}.

If Fx0 and g admit Fréchet derivatives F′x0(x0) and g′(x0), then set

C∗0 × K∗0 = {(c∗, d∗) ∈ C∗ × K∗ \ {(0, 0)} | c∗ ◦ F′x0(x0) + d∗ ◦ g′(x0) = 0, 〈d∗, g(x0)〉 = 0}.

If Fx0 and g have first-order approximations AFx0(x0) and Ag(x0), resp, at x0 ∈ X, then, for (c∗, d∗) ∈ C∗ × K∗, we set

P(x0, c∗, d∗) = {v ∈ X | 〈c∗, Pv〉 + 〈d∗, Qv〉 = 0, ∀P ∈ AFx0(x0), ∀Q ∈ Ag(x0)}.

4.1 The first-order differentiable case

In this subsection, we assume that Fx0 and g are first-order Fréchet differentiable at x0 ∈ Ω.

Theorem 4.1 Let C be polyhedral, x0 ∈ Ω, d∗ ∈ K∗ with 〈d∗, g(x0)〉 = 0, and let (F′x0(x0), BFx0(x0)) and (g′(x0), Bg(x0)) be asympt p-compact second-order approximations of Fx0 and g, resp, at x0 with Bg(x0) being norm-bounded.

If x0 ∈ WMin(F, g), then, for any v ∈ T(G(d∗), x0), there exists c∗ ∈ B, where B is finite and cone(coB) = C∗, such that 〈c∗, F′x0(x0)v〉 + 〈d∗, g′(x0)v〉 ≥ 0. If, furthermore, c∗ ◦ F′x0(x0) + d∗ ◦ g′(x0) = 0, then either ∃M ∈ p-clBFx0(x0), ∃N ∈ p-clBg(x0), 〈c∗, M(v, v)〉 + 〈d∗, N(v, v)〉 ≥ 0, or ∃M ∈ p-BFx0(x0)∞ \ {0}, 〈c∗, M(v, v)〉 ≥ 0.

Proof Let v ∈ T(G(d∗), x0) be fixed. There exists (tn, vn) → (0+, v) such that xn := x0 + tn vn ∈ G(d∗) for all n. As x0 is a local weak solution and C is polyhedral (and then so is C∗), there exists c∗ ∈ B such that (using a subsequence) 〈c∗, Fx0(xn)〉 ≥ 0. Then,

〈c∗, Fx0(xn) − Fx0(x0)〉 + 〈d∗, g(xn) − g(x0)〉 ≥ 0.

Dividing this inequality by tn, one has in the limit

〈c∗, F′x0(x0)v〉 + 〈d∗, g′(x0)v〉 ≥ 0.

If c∗ ◦ F′x0(x0) + d∗ ◦ g′(x0) = 0, then (0, c∗ ◦ BFx0(x0) + d∗ ◦ Bg(x0)) is a second-order approximation of L(·, c∗, d∗) := 〈c∗, Fx0(·)〉 + 〈d∗, g(·)〉 at x0. Consequently, Mn ∈ BFx0(x0) and Nn ∈ Bg(x0) exist such that, for large n,

L(x0 + tn vn, c∗, d∗) − L(x0, c∗, d∗) = tn²〈c∗, Mn(vn, vn)〉 + tn²〈d∗, Nn(vn, vn)〉 + o(tn²).

On the other hand,

L(x0 + tn vn, c∗, d∗) − L(x0, c∗, d∗) = 〈c∗, Fx0(x0 + tn vn) − Fx0(x0)〉 ≥ 0.

Hence, for large n,

〈c∗, Mn(vn, vn)〉 + 〈d∗, Nn(vn, vn)〉 + o(tn²)/tn² ≥ 0.   (6)

We can assume that Nn →p N ∈ p-clBg(x0). If {Mn} is norm-bounded, then assume that Mn →p M ∈ p-clBFx0(x0). Letting n → ∞ in (6) yields

〈c∗, M(v, v)〉 + 〈d∗, N(v, v)〉 ≥ 0.

If {Mn} is norm-unbounded, assume that ‖Mn‖ → ∞ and Mn/‖Mn‖ →p M ∈ p-BFx0(x0)∞ \ {0}. Dividing (6) by ‖Mn‖ and passing to the limit, one gets 〈c∗, M(v, v)〉 ≥ 0. □


Example 4.1 Let X = Y = R², Z = R, C = R²+, B = {(1, 0), (0, 1)}, K = R+, x0 = (0, 0). Let

F((x1, x2), (y1, y2)) = (−(3/4)|y1|^{4/3} + (3/4)|x1|^{4/3}, −(2/3)|y1|^{3/2} + (2/3)|x1|^{3/2} + x2² − y2²),

g(x1, x2) = x1² − 2x2.

Let α1, α2 be fixed with αi < 0, i = 1, 2. We can check that Fx0(x0) = 0, F′x0(x0) = 0, g′(x0) = (0, −2),

BFx0(x0) = {(β1 0 β2 0; 0 0 0 −1) | βi < αi, i = 1, 2},   Bg(x0) = {(1 0; 0 0)}

(a bilinear map being written as the block array (B¹ | B²) of two 2 × 2 matrices with B(v, v) = (vᵀB¹v, vᵀB²v)). Hence,

clBFx0(x0) = {(β1 0 β2 0; 0 0 0 −1) | βi ≤ αi, i = 1, 2},
BFx0(x0)∞ = {(γ1 0 γ2 0; 0 0 0 0) | γi ≤ 0, i = 1, 2}.

Choose d∗ = 0 ∈ K∗ = R+ and v = (1, 1) ∈ T(G(d∗), x0) = R × R+. Let c∗ ∈ B. We have two cases. If c∗ = (1, 0), then c∗ ◦ F′x0(x0) + d∗ ◦ g′(x0) = 0 and

〈c∗, M(v, v)〉 + 〈d∗, N(v, v)〉 = β1 < 0

for all M ∈ clBFx0(x0) and N ∈ clBg(x0), and 〈c∗, M(v, v)〉 = γ1 < 0 for all M ∈ BFx0(x0)∞ \ {0}. If c∗ = (0, 1), then c∗ ◦ F′x0(x0) + d∗ ◦ g′(x0) = 0 and

〈c∗, M(v, v)〉 + 〈d∗, N(v, v)〉 = β2 − 1 < 0

for all M ∈ clBFx0(x0) and N ∈ clBg(x0), and 〈c∗, M(v, v)〉 = γ2 < 0 for all M ∈ BFx0(x0)∞ \ {0}.

Following Theorem 4.1, x0 is not a local weak solution of (EP).
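
Independently of the second-order approximations used above, the conclusion of Example 4.1 can be confirmed straight from Definition 2.3(i) (a sketch added here, not from the paper): the feasible points (t, t²) converge to x0 while F(x0, (t, t²)) has both components strictly negative.

```python
# Direct confirmation of Example 4.1 (added): x0 = (0, 0) is not a local weak solution.
import numpy as np

def F_x0(y1, y2):                 # F(x0, (y1, y2)) with x0 = (0, 0)
    return np.array([-0.75 * abs(y1)**(4.0 / 3.0),
                     -(2.0 / 3.0) * abs(y1)**1.5 - y2**2])

def g(y1, y2):
    return y1**2 - 2.0 * y2

for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    y = (t, t**2)                                     # feasible, since g(t, t**2) = -t**2 <= 0
    assert g(*y) <= 0.0 and np.all(F_x0(*y) < 0.0)    # F(x0, y) lies in -int C = -int R^2_+
print("every neighborhood of x0 contains feasible y with F(x0, y) in -int C")
```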

Theorem 4.2 Assume that X is finite dimensional, x0 ∈ Ω, and (F′x0(x0), BFx0(x0)), (g′(x0), Bg(x0)) are asympt p-compact second-order approximations of Fx0 and g, resp, at x0 with norm-bounded Bg(x0). Impose further the existence of (c∗, d∗) ∈ C∗0 × K∗0 such that, for all v ∈ T(Ω, x0) with ‖v‖ = 1 and 〈c∗, F′x0(x0)v〉 = 〈d∗, g′(x0)v〉 = 0,

(i) for all M ∈ p-clBFx0(x0) and N ∈ p-clBg(x0), one has 〈c∗, M(v, v)〉 + 〈d∗, N(v, v)〉 > 0;

(ii) for all M ∈ p-BFx0(x0)∞ \ {0}, one has 〈c∗, M(v, v)〉 > 0.

Then, x0 is a local firm solution of order 2 of (EP).

Proof Reasoning by contraposition, suppose the existence of xn ∈ (S ∩ BX(x0, 1/n)) \ {x0} and cn ∈ C such that g(xn) ∈ −K and, with tn = ‖xn − x0‖,

Fx0(xn) − Fx0(x0) + cn ∈ BY(0, (1/n)tn²).   (7)

Let v ∈ T(Ω, x0) with ‖v‖ = 1 be such that (1/tn)(xn − x0) → v. Dividing (7) by tn and taking the limit, we get F′x0(x0)v ∈ −C. Thus, 〈c∗, F′x0(x0)v〉 ≤ 0. By dividing the relation g(xn) − g(x0) ∈ −K(g(x0)) by tn and passing to the limit, one obtains g′(x0)v ∈ −K(g(x0)). Hence, 〈d∗, g′(x0)v〉 ≤ 0. By the definition of C∗0 × K∗0, we see that

〈c∗, F′x0(x0)v〉 = 〈d∗, g′(x0)v〉 = 0.

For the Lagrangian L(·, c∗, d∗) = 〈c∗, Fx0(·)〉 + 〈d∗, g(·)〉, we have, for some Mn ∈ BFx0(x0) and Nn ∈ Bg(x0),

L(xn, c∗, d∗) − L(x0, c∗, d∗) = 〈c∗, Mn(xn − x0, xn − x0)〉 + 〈d∗, Nn(xn − x0, xn − x0)〉 + o(tn²).   (8)

On the other hand, (7) implies that

L(xn, c∗, d∗) − L(x0, c∗, d∗) = 〈c∗, Fx0(xn) − Fx0(x0)〉 + 〈d∗, g(xn) − g(x0)〉 ≤ 〈c∗, dn〉

for some dn ∈ BY(0, (1/n)tn²). This and (8) together give

〈c∗, Mn(xn − x0, xn − x0)〉 + 〈d∗, Nn(xn − x0, xn − x0)〉 + o(tn²) ≤ 〈c∗, dn〉.

Likewise as for Theorem 4.1, we have either ∃M ∈ p-clBFx0(x0), ∃N ∈ p-clBg(x0) with 〈c∗, M(v, v)〉 + 〈d∗, N(v, v)〉 ≤ 0, or ∃M ∈ p-BFx0(x0)∞ \ {0} with 〈c∗, M(v, v)〉 ≤ 0. Both of them are impossible. □

Example 4.2 Let X = Y = Z = R, C = K = R+, x0 = 0, F(x, y) = |y|^{3/2} − |x|^{3/2} + y² − x², g(x) = −2x + x², and let α be positive and fixed. By direct calculations, one gets Fx0(x0) = 0, F′x0(x0) = 0, g′(x0) = −2, BFx0(x0) = (α + 1, +∞), Bg(x0) = {1}, Ω = [0, 2] and T(Ω, x0) = [0, ∞). So, clBFx0(x0) = [α + 1, ∞), BFx0(x0)∞ = [0, +∞) and C∗0 × K∗0 = {(c∗, 0) | c∗ ∈ R+ \ {0}}.

Choose (c∗, d∗) = (1, 0) ∈ C∗0 × K∗0. Then, for all v ∈ T(Ω, x0) with ‖v‖ = 1, i.e., v = 1, we see that, for all M = β ∈ clBFx0(x0) and N ∈ clBg(x0),

〈c∗, M(v, v)〉 + 〈d∗, N(v, v)〉 = β ≥ α + 1 > 0,

and, for each M = γ ∈ BFx0(x0)∞ \ {0}, 〈c∗, M(v, v)〉 = γ > 0. According to Theorem 4.2, x0 is a local firm solution of order 2 of (EP).
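
As with the previous examples, the conclusion of Example 4.2 can be cross-checked directly from Definition 2.3(v) with m = 2 (a sketch added here, not from the paper): since Y = R and C = R+, firmness of order 2 with γ = 1 amounts to Fx0(x) ≥ x² on Ω, and indeed Fx0(x) = x^{3/2} + x² ≥ x² there.

```python
# Direct check of Example 4.2 (added): x0 = 0 is a local firm solution of order 2 (gamma = 1).
import numpy as np

def F_x0(x):                                   # F(0, x) = |x|**1.5 + x**2
    return np.abs(x)**1.5 + x**2

omega = np.linspace(0.0, 2.0, 2001)[1:]        # feasible x != x0, since g(x) = -2*x + x**2 <= 0 on [0, 2]
gamma = 1.0
# With Y = R and C = R_+, (F_x0(x) + C) ∩ B(0, gamma*|x - x0|**2) = ∅ iff F_x0(x) >= gamma*x**2.
assert np.all(F_x0(omega) >= gamma * omega**2)
print("firm of order 2 with gamma =", gamma, "at every sampled feasible point")
```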

4.2 The nondifferentiable case

We have the following necessary condition for local weak solutions in the general nondifferentiable case.

Theorem 4.3 Let C be polyhedral, d∗ ∈ K∗ with 〈d∗, g(x0)〉 = 0, and let (AFx0(x0), BFx0(x0)) and (Ag(x0), Bg(x0)) be asympt p-compact second-order approximations of Fx0 and g, resp, at x0 with AFx0(x0), Ag(x0), Bg(x0) being norm-bounded.

If x0 is a local weak solution of (EP), then, for any v ∈ T(G(d∗), x0),

(i) for all w ∈ T²(G(d∗), x0, v), there exist c∗ ∈ B, where B is finite and cone(coB) = C∗, P ∈ p-clAFx0(x0) and Q ∈ p-clAg(x0) such that 〈c∗, Pv〉 + 〈d∗, Qv〉 ≥ 0. If, in addition, v ∈ P(x0, c∗, d∗), then either, for some P ∈ p-clAFx0(x0), Q ∈ p-clAg(x0), M ∈ p-clBFx0(x0) and N ∈ p-clBg(x0),

〈c∗, Pw〉 + 〈d∗, Qw〉 + 2〈c∗, M(v, v)〉 + 2〈d∗, N(v, v)〉 ≥ 0,

or, for some M ∈ p-BFx0(x0)∞ \ {0}, 〈c∗, M(v, v)〉 ≥ 0;

(ii) for all w ∈ T″(G(d∗), x0, v), there exist c∗ ∈ B, P ∈ p-clAFx0(x0) and Q ∈ p-clAg(x0) such that 〈c∗, Pv〉 + 〈d∗, Qv〉 ≥ 0. If, in addition, v ∈ P(x0, c∗, d∗), then either there exist P ∈ p-clAFx0(x0), Q ∈ p-clAg(x0) and M ∈ p-clBFx0(x0) such that

〈c∗, Pw〉 + 〈d∗, Qw〉 + 2〈c∗, M(v, v)〉 ≥ 0,

or there exists M ∈ p-BFx0(x0)∞ \ {0} with 〈c∗, M(v, v)〉 ≥ 0.


Proof (i) Let v ∈ T(G(d∗), x0) and w ∈ T²(G(d∗), x0, v) be arbitrary and fixed. There exist tn → 0+ and wn → w such that, for all n, xn := x0 + tn v + (1/2)tn² wn ∈ G(d∗). Similarly as for Theorem 4.1, we have c∗ ∈ B such that, for all n,

L(xn, c∗, d∗) − L(x0, c∗, d∗) = 〈c∗, Fx0(xn) − Fx0(x0)〉 ≥ 0.

On the other hand, there are P′n ∈ AFx0(x0) and Q′n ∈ Ag(x0) such that, for large n,

L(xn, c∗, d∗) − L(x0, c∗, d∗) = tn〈c∗, P′n(v + (1/2)tn wn)〉 + tn〈d∗, Q′n(v + (1/2)tn wn)〉 + o(tn).

Consequently,

〈c∗, P′n(v + (1/2)tn wn)〉 + 〈d∗, Q′n(v + (1/2)tn wn)〉 + o(tn)/tn ≥ 0.

Because of the assumed boundedness, there exist P′ ∈ p-clAFx0(x0) and Q′ ∈ p-clAg(x0) such that P′n →p P′ and Q′n →p Q′. Passing the above inequality to the limit, we obtain

〈c∗, P′v〉 + 〈d∗, Q′v〉 ≥ 0.

If v ∈ P(x0, c∗, d∗), we have Pn ∈ AFx0(x0), Qn ∈ Ag(x0), Mn ∈ BFx0(x0) and Nn ∈ Bg(x0) such that

〈c∗, Pn wn〉 + 〈d∗, Qn wn〉 + 2〈c∗, Mn(v + (1/2)tn wn, v + (1/2)tn wn)〉 + 2〈d∗, Nn(v + (1/2)tn wn, v + (1/2)tn wn)〉 + o(tn²)/((1/2)tn²) ≥ 0.   (9)

By the boundedness of AFx0(x0), Ag(x0) and Bg(x0), we can assume the existence of P ∈ p-clAFx0(x0), Q ∈ p-clAg(x0) and N ∈ p-clBg(x0) such that Pn →p P, Qn →p Q and Nn →p N.

If {Mn} is norm-bounded, it converges pointwise (along a subnet) to some M ∈ p-clBFx0(x0). Passing (9) to the limit yields

〈c∗, Pw〉 + 〈d∗, Qw〉 + 2〈c∗, M(v, v)〉 + 2〈d∗, N(v, v)〉 ≥ 0.

If {Mn} is norm-unbounded, we assume that ‖Mn‖ → ∞ and Mn/‖Mn‖ →p M ∈ p-BFx0(x0)∞ \ {0}. From (9) we get, after dividing by ‖Mn‖ and passing to the limit, 〈c∗, M(v, v)〉 ≥ 0.

(ii) For any v ∈ T(G(d∗), x0) and w ∈ T″(G(d∗), x0, v), we have (tn, rn) → (0+, 0+) and wn → w such that tn/rn → 0+ and, for all n, xn := x0 + tn v + (1/2)tn rn wn ∈ G(d∗).

As for part (i), there is c∗ ∈ B such that, for all n, L(xn, c∗, d∗) − L(x0, c∗, d∗) ≥ 0. Then, for some P′n ∈ AFx0(x0), Q′n ∈ Ag(x0) and large n, one has

〈c∗, P′n(v + (1/2)rn wn)〉 + 〈d∗, Q′n(v + (1/2)rn wn)〉 + o(tn)/tn ≥ 0.

Thanks to the assumed boundedness, we have P′n →p P′ ∈ p-clAFx0(x0) and Q′n →p Q′ ∈ p-clAg(x0). Passing the above inequality to the limit, one gets 〈c∗, P′v〉 + 〈d∗, Q′v〉 ≥ 0.


If v ∈ P(x0, c∗, d∗), we have Pn ∈ AFx0(x0), Qn ∈ Ag(x0), Mn ∈ BFx0(x0) and Nn ∈ Bg(x0) such that, for large n,

〈c∗, Pn wn〉 + 〈d∗, Qn wn〉 + 〈c∗, (2tn/rn)Mn(v + (1/2)rn wn, v + (1/2)rn wn)〉 + 〈d∗, (2tn/rn)Nn(v + (1/2)rn wn, v + (1/2)rn wn)〉 + 2o(tn²)/(tn rn) ≥ 0.   (10)

As Bg(x0) is bounded, (2tn/rn)Nn →p 0. Also by the boundedness, we can assume that Pn →p P and Qn →p Q for some P ∈ p-clAFx0(x0) and Q ∈ p-clAg(x0). There are three possibilities (using subsequences if necessary).

• If (2tn/rn)Mn →p 0, then (10) gives, in the limit,

〈c∗, Pw〉 + 〈d∗, Qw〉 ≥ 0.

• If ‖(2tn/rn)Mn‖ → γ > 0, then ‖Mn‖ → ∞ and one can assume that Mn/‖Mn‖ →p M ∈ p-BFx0(x0)∞ \ {0}. Hence, (10) implies, by passing to the limit,

〈c∗, Pw〉 + 〈d∗, Qw〉 + 2〈c∗, γM(v, v)〉 ≥ 0.

• If ‖(2tn/rn)Mn‖ → ∞, then ‖Mn‖ → ∞ and one can assume that Mn/‖Mn‖ →p M ∈ p-BFx0(x0)∞ \ {0}. Dividing (10) by ‖(2tn/rn)Mn‖ and passing to the limit, one has 〈c∗, M(v, v)〉 ≥ 0. □

In the example below, Theorem 4.3 is applied to reject a candidate.

Example 4.3 Let X = Y = R², Z = R, C = R²+, B = {c∗1 = (1, 0), c∗2 = (0, 1)}, K = {0}, x0 = (0, 0), F((x1, x2), (y1, y2)) = (x2 − y2, y1 + |y2| − x1 − |x2|), and g(x1, x2) = −x1³ + x2².

Then, the following approximations can be admitted:

AFx0(x0) = {(0 −1; 1 ±1)},   BFx0(x0) = {0},   Ag(x0) = {0},   Bg(x0) = {(0 0; 0 1)}.

Let d∗ = 0 ∈ K∗. Then, G(d∗) = {(x1, x2) ∈ R² | −x1³ + x2² = 0} and T(G(d∗), x0) = R+ × {0}.

Choosing v = (1, 0) ∈ T(G(d∗), x0), we have T²(G(d∗), x0, v) = ∅ and T″(G(d∗), x0, v) = R².

Now, let w = (0, 1) ∈ T″(G(d∗), x0, v). For c∗1 = (1, 0) ∈ B and all P ∈ clAFx0(x0), Q ∈ clAg(x0), one gets

〈c∗1, Pv〉 + 〈d∗, Qv〉 ≥ 0,   v ∈ P(x0, c∗1, d∗) = {(v1, v2) ∈ R² | v2 = 0}.

Furthermore, for all P ∈ clAFx0(x0), Q ∈ clAg(x0) and M ∈ BFx0(x0), one has

〈c∗1, Pw〉 + 〈d∗, Qw〉 + 2〈c∗1, M(v, v)〉 = −1 < 0.


For c∗2 = (0, 1) ∈ B and all P ∈ clAFx0(x0), Q ∈ clAg(x0), we have 〈c∗2, Pv〉 + 〈d∗, Qv〉 = 1 > 0, which means that v ∉ P(x0, c∗2, d∗). Taking Theorem 4.3 into account, one sees that x0 is not a local weak solution.

Let us turn now to a sufficient optimality condition.

Theorem 4.4 Assume that X is finite dimensional, x0 ∈ Ω, (c∗, d∗) ∈ C∗ × K∗ with 〈d∗, g(x0)〉 = 0, and (AFx0(x0), BFx0(x0)), (Ag(x0), Bg(x0)) are asympt p-compact second-order approximations of Fx0 and g, resp, at x0 with AFx0(x0), Ag(x0), Bg(x0) being norm-bounded.

Then, x0 is a local firm solution of order 2 of (EP) if the following assumptions are fulfilled.

(i) For all v ∈ T(g⁻¹(−K), x0), P ∈ AFx0(x0) and Q ∈ Ag(x0), one has

〈c∗, Pv〉 + 〈d∗, Qv〉 = 0.

(ii) For all v ∈ T(g⁻¹(−K), x0) with ‖v‖ = 1 such that Pv ∈ −C for some P ∈ p-clAFx0(x0) and Qv ∈ −K(g(x0)) for some Q ∈ p-clAg(x0), one has 〈c∗, M(v, v)〉 > 0 for all M ∈ p-BFx0(x0)∞ \ {0}, and

(ii1) ∀w ∈ T²(g⁻¹(−K), x0, v) ∩ v⊥, ∀P ∈ p-clAFx0(x0), ∀Q ∈ p-clAg(x0), ∀M ∈ p-clBFx0(x0), ∀N ∈ p-clBg(x0),

〈c∗, Pw〉 + 〈d∗, Qw〉 + 2〈c∗, M(v, v)〉 + 2〈d∗, N(v, v)〉 > 0;

(ii2) ∀w ∈ T″(g⁻¹(−K), x0, v) ∩ v⊥ \ {0}, ∀P ∈ p-clAFx0(x0), ∀Q ∈ p-clAg(x0), ∀M ∈ p-BFx0(x0)∞,

〈c∗, Pw〉 + 〈d∗, Qw〉 + 〈c∗, M(v, v)〉 > 0.

Proof Reasoning ad absurdum, suppose that xn ∈ (S ∩ BX(x0, 1/n)) \ {x0} and cn ∈ C exist such that g(xn) ∈ −K and

dn := Fx0(xn) + cn ∈ BY(0, (1/n)tn²),   (11)

where tn = ‖xn − x0‖. We can assume that (1/tn)(xn − x0) → v for some v ∈ T(g⁻¹(−K), x0) with norm one. For large n, there are P′n ∈ AFx0(x0) and Q′n ∈ Ag(x0) such that

Fx0(xn) = Fx0(xn) − Fx0(x0) = P′n(xn − x0) + o(tn),
g(xn) − g(x0) = Q′n(xn − x0) + o(tn) ∈ −K(g(x0)).   (12)

We can assume the existence of P′ ∈ p-clAFx0(x0) and Q′ ∈ p-clAg(x0) such that P′n →p P′ and Q′n →p Q′. Dividing (11) and (12) by tn and passing to the limit, one gets

P′v ∈ −C,   Q′v ∈ −K(g(x0)).

On the other hand,

L(xn, c∗, d∗) − L(x0, c∗, d∗) = 〈c∗, Fx0(xn) − Fx0(x0)〉 + 〈d∗, g(xn) − g(x0)〉 ≤ 〈c∗, dn − cn〉 ≤ 〈c∗, dn〉.   (13)

By Lemma 2.2, there are only two cases now.


Case 1. wn := (xn − x0 − tn v)/((1/2)tn²) → w ∈ T²(g⁻¹(−K), x0, v) ∩ v⊥. By virtue of (13), there are Pn ∈ AFx0(x0), Qn ∈ Ag(x0), Mn ∈ BFx0(x0) and Nn ∈ Bg(x0) such that, for large n,

〈c∗, Pn(tn v + (1/2)tn² wn)〉 + 〈d∗, Qn(tn v + (1/2)tn² wn)〉 + 〈c∗, Mn(tn v + (1/2)tn² wn, tn v + (1/2)tn² wn)〉 + 〈d∗, Nn(tn v + (1/2)tn² wn, tn v + (1/2)tn² wn)〉 + o(tn²) ≤ 〈c∗, dn〉.

Therefore, due to assumption (i),

〈c∗, Pn wn〉 + 〈d∗, Qn wn〉 + 2〈c∗, Mn(v + (1/2)tn wn, v + (1/2)tn wn)〉 + 2〈d∗, Nn(v + (1/2)tn wn, v + (1/2)tn wn)〉 + o(tn²)/((1/2)tn²) ≤ 〈c∗, dn〉/((1/2)tn²).   (14)

Assume that Pn →p P ∈ p-clAFx0(x0), Qn →p Q ∈ p-clAg(x0) and Nn →p N ∈ p-clBg(x0).

Now, if {Mn} is also norm-bounded and Mn →p M ∈ p-clBFx0(x0), then from (14) one has, in the limit, the following contradiction with assumption (ii1):

〈c∗, Pw〉 + 〈d∗, Qw〉 + 2〈c∗, M(v, v)〉 + 2〈d∗, N(v, v)〉 ≤ 0.

If {Mn} is norm-unbounded, then ‖Mn‖ → ∞ and Mn/‖Mn‖ →p M ∈ p-BFx0(x0)∞ \ {0}. Dividing (14) by ‖Mn‖ and passing to the limit, one has 〈c∗, M(v, v)〉 ≤ 0, contradicting (ii).

Case 2. wn := (xn − x0 − tn v)/((1/2)tn rn) → w ∈ T″(g⁻¹(−K), x0, v) ∩ v⊥ \ {0}, where rn → 0+ and tn/rn → 0+. By (i), similarly as for (14), one arrives at

〈c∗, Pn wn〉 + 〈d∗, Qn wn〉 + (2tn/rn)〈c∗, Mn(v + (1/2)rn wn, v + (1/2)rn wn)〉 + (2tn/rn)〈d∗, Nn(v + (1/2)rn wn, v + (1/2)rn wn)〉 + o(tn²)/((1/2)tn rn) ≤ 〈c∗, dn〉/((1/2)tn rn).   (15)

Likewise as before, Pn →p P ∈ p-clAFx0(x0) and Qn →p Q ∈ p-clAg(x0). Because Bg(x0) is bounded, one has (2tn/rn)Nn →p 0. Now, we have the following three subcases.

• (2tn/rn)Mn →p 0. By passing (15) to the limit, we obtain the following contradiction with (ii2):

〈c∗, Pw〉 + 〈d∗, Qw〉 ≤ 0.

• ‖(2tn/rn)Mn‖ → α > 0. Then, ‖Mn‖ → ∞ and one can assume that Mn/‖Mn‖ →p M ∈ p-BFx0(x0)∞ \ {0}. From (15), one gets, in the limit,


〈c∗, Pw〉 + 〈d∗, Qw〉 + 2〈c∗, αM(v, v)〉 ≤ 0.

Since p-BFx0(x0)∞ is a cone, this contradicts (ii2).

• ‖(2tn/rn)Mn‖ → ∞. Then, one also has ‖Mn‖ → ∞ and Mn/‖Mn‖ →p M ∈ p-BFx0(x0)∞ \ {0}. Dividing (15) by (2tn/rn)‖Mn‖ and passing to the limit, one gets the contradiction

〈c∗, M(v, v)〉 ≤ 0. □

Acknowledgments The authors would like to thank the Editor and the anonymous referees for their valuable remarks and suggestions, which have helped them to improve the paper. This work was supported by Vietnam National University–Hochiminh City.

References

1. Allali, K., Amahroq, T.: Second-order approximations and primal and dual necessary optimality conditions. Optimization 40, 229–246 (1997)
2. Aubin, J.-P., Frankowska, H.: Set-Valued Analysis. Birkhäuser, Boston (1990)
3. Avriel, M.: Nonlinear Programming: Analysis and Methods. Prentice-Hall, New Jersey (1976)
4. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–146 (1994)
5. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
6. Fan, K.: A minimax inequality and applications. In: Shisha, O. (ed.) Inequality III, pp. 103–113. Academic Press, New York (1972)
7. Frankowska, H., Quincampoix, M.: Hölder metric regularity of set-valued maps. Math. Prog. 132, 333–354 (2012)
8. Gong, X.H.: Optimality conditions for vector equilibrium problems. J. Math. Anal. Appl. 342, 1455–1466 (2008)
9. Gong, X.H.: Scalarization and optimality conditions for vector equilibrium problems. Nonlinear Anal. 73, 3598–3612 (2010)
10. Guerraggio, A., Molho, E., Zaffaroni, A.: On the notion of proper efficiency in vector optimization. J. Optim. Theory Appl. 82, 1–21 (1994)
11. Ha, T.X.D.: Optimality conditions for several types of efficient solutions of set-valued optimization problems. In: Pardalos, P., Rassias, Th.M., Khan, A.A. (eds.) Nonlinear Analysis and Variational Problems, pp. 305–324. Springer, Heidelberg (2010)
12. Jiménez, B., Novo, V.: Optimality conditions in differentiable vector optimization via second-order tangent sets. Appl. Math. Optim. 49, 123–134 (2004)
13. Jourani, A., Thibault, L.: Approximations and metric regularity in mathematical programming in Banach spaces. Math. Oper. Res. 18, 390–400 (1993)
14. Khanh, P.Q.: Optimality conditions via norm scalarization in vector optimization. SIAM J. Control Optim. 31, 646–658 (1993)
15. Khanh, P.Q., Luc, D.T., Tuan, N.D.: Local uniqueness of solutions for equilibrium problems. Adv. Ineq. Var. 15, 127–145 (2006)
16. Khanh, P.Q., Tuan, N.D.: First and second order optimality conditions using approximations for nonsmooth vector optimization in Banach spaces. J. Optim. Theory Appl. 130, 289–308 (2006)
17. Khanh, P.Q., Tuan, N.D.: First and second-order approximations as derivatives of mappings in optimality conditions for nonsmooth vector optimization. Appl. Math. Optim. 58, 147–166 (2008)
18. Khanh, P.Q., Tuan, N.D.: Optimality conditions using approximations for nonsmooth vector optimization problems under general inequality constraints. J. Convex Anal. 16, 169–186 (2009)
19. Khanh, P.Q., Tuan, N.D.: Corrigendum: Optimality conditions using approximations for nonsmooth vector optimization problems under general inequality constraints. J. Convex Anal. 18, 897–901 (2011)
20. Khanh, P.Q., Tung, L.T.: Local uniqueness of solutions to Ky Fan vector inequalities using approximations as derivatives. J. Optim. Theory Appl. doi:10.1007/s10957-012-0075-9 (online first)
21. Khanh, P.Q., Tung, N.M.: Optimality conditions and duality for set-valued nonsmooth vector equilibrium problems. Submitted for publication
22. Ma, B.C., Gong, X.H.: Optimality conditions for vector equilibrium problems in normed spaces. Optimization 60, 1441–1455 (2011)
23. Makarov, E.K., Rachkovski, N.N.: Unified representation of proper efficiency by means of dilating cones. J. Optim. Theory Appl. 101, 141–165 (1999)
24. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation. Vol. I—Basic Theory. Springer, Berlin (2006)
25. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation. Vol. II—Applications. Springer, Berlin (2006)
26. Penot, J.-P.: Second order conditions for optimization problems with constraints. SIAM J. Control Optim. 37, 303–318 (1999)
27. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (2009)
28. Wei, Z.F., Gong, X.H.: Kuhn-Tucker optimality conditions for vector equilibrium problems. J. Ineq. Appl. Article ID 842715 (2010)
29. Yang, X.Q., Zheng, X.Y.: Approximate solutions and optimality conditions of vector variational inequalities in Banach spaces. J. Global Optim. 40, 455–462 (2008)
