This Provisional PDF corresponds to the article as it appeared upon acceptance. Fully formatted PDF and full text (HTML) versions will be made available soon.

An Interior Approximal Method for Solving Pseudomonotone Equilibrium Problems

Journal of Inequalities and Applications 2013, 2013:156
doi:10.1186/1029-242X-2013-156

Pham N Anh ([email protected])

ISSN 1029-242X
Article type: Research
Submission date: 10 July 2012
Acceptance date: 10 March 2013
Publication date: 5 April 2013
Article URL: http://www.journalofinequalitiesandapplications.com/content/2013/1/156

This peer-reviewed article can be downloaded, printed and distributed freely for any purposes (see copyright notice below). For information about publishing your research in Journal of Inequalities and Applications go to http://www.journalofinequalitiesandapplications.com/authors/instructions/. For information about other SpringerOpen publications go to http://www.springeropen.com.

© 2013 Anh et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


An interior approximal method for solving pseudomonotone equilibrium problems

Pham N. Anh∗1, Pham M. Tuan2 and Le B. Long1

1 Department of Scientific Fundamentals, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam.
2 Academy of Military Science and Technology, Vietnam.

Email: Pham Ngoc Anh∗ - [email protected]

∗ Corresponding author

Abstract

In this paper, we present an interior approximal method for solving equilibrium problems with pseudomonotone bifunctions that need not be Lipschitz-type continuous on polyhedra. The method can be viewed as a combination of a special interior proximal function, which replaces the usual quadratic function, with Armijo-type linesearch techniques and cutting hyperplane methods. Convergence properties of the method are established; in particular, global convergence is proved under mild assumptions. Finally, we present some preliminary computational results for Cournot-Nash oligopolistic market equilibrium models.

Keywords: equilibrium problem; pseudomonotone; interior approximal function; cutting hyperplane method

AMS 2010 Mathematics subject classification: 65K10, 90C25.

1 Introduction

Let C be a nonempty closed convex subset of Rn and let f : C × C → R be a bifunction satisfying f(x, x) = 0 for all x ∈ C. We consider equilibrium problems in the sense of Blum and Oettli [1] (shortly EP(f, C)), which are to find x∗ ∈ C such that

f(x∗, y) ≥ 0 ∀y ∈ C.

Let Sol(f, C) denote the solution set of Problem EP(f, C). When

f(x, y) = 〈F(x), y − x〉 ∀x, y ∈ C,


where F : C → Rn, Problem EP(f, C) reduces to the variational inequality:

Find x∗ ∈ C such that 〈F(x∗), y − x∗〉 ≥ 0 for all y ∈ C.

In this article, for solving Problem EP(f, C), we assume that the bifunction f and the set C satisfy the following conditions:

A1. C = {x ∈ Rn : Ax ≤ b}, where A is a p × n matrix of maximal rank (rank A = n), b ∈ Rp, and int C = {x : Ax < b} is nonempty.

A2. For each x ∈ C, the function f(x, ·) is convex and subdifferentiable on C.

A3. f is pseudomonotone on C × C, i.e., for each x, y ∈ C, f(x, y) ≥ 0 implies f(y, x) ≤ 0.

A4. f is continuous on C × C.

A5. Sol(f, C) ≠ ∅.

Equilibrium problems appear in many practical settings arising, for instance, in physics, engineering, game theory, transportation, economics, and networks (see [2–5]). In recent years, both their theory and their applications have attracted many researchers (see [1, 6–14]).

Most methods for solving equilibrium problems are derived from fixed-point formulations of Problem EP(f, C): a point x∗ ∈ C is a solution of the problem if and only if x∗ solves

min{f(x∗, y) : y ∈ C}.

Namely, the sequence {xk} is generated by x0 ∈ C and

xk+1 ∈ argmin{f(xk, y) : y ∈ C}.

To make the point xk+1 convenient to compute, Mastroeni [15] proposed the auxiliary problem principle for solving Problem EP(f, C). This principle is based on the following fixed-point property: x∗ ∈ C is a solution of Problem EP(f, C) if and only if x∗ ∈ C is a solution of the problem

min{λf(x∗, y) + g(y) − 〈∇g(x∗), y〉 : y ∈ C}, (1.1)


where λ > 0 and g(·) is a strongly convex differentiable function on C. Under the assumptions that f is strongly monotone with constant β > 0 on C × C, i.e.,

f(x, y) + f(y, x) ≤ −β‖x − y‖2 ∀x, y ∈ C,

and that f is Lipschitz-type continuous with constants c1 > 0 and c2 > 0, i.e.,

f(x, y) + f(y, z) ≥ f(x, z) − c1‖x − y‖2 − c2‖y − z‖2 ∀x, y, z ∈ C,

the author showed that the sequence {xk} converges globally to a solution of Problem EP(f, C). However, the convergence depends on the three positive parameters c1, c2, and β, which in some cases are unknown or difficult to approximate.
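For intuition on (1.1): with the common choice g(y) = (1/2)‖y‖2 and f(x, y) = 〈F(x), y − x〉, the auxiliary problem reduces to a projection step, namely its solution is PrC(x − λF(x)). The following minimal sketch checks this numerically; the operator F and the box-shaped C are assumptions of the example, not taken from the paper:

```python
import numpy as np

# With g(y) = 0.5*||y||^2 and f(x, y) = <F(x), y - x>, the auxiliary problem
#   min{ lam*f(x, y) + g(y) - <grad g(x), y> : y in C }
# is solved by P_C(x - lam*F(x)).  Here C = [0, 1]^2 (a box) for simplicity.

def proj_box(z, lo=0.0, hi=1.0):
    return np.clip(z, lo, hi)

F = lambda x: np.array([2 * x[0] - 1, 4 * x[1]])   # an arbitrary smooth operator
x = np.array([0.9, 0.2])
lam = 0.1

# Solve the auxiliary problem by brute force on a fine grid over C.
grid = np.linspace(0.0, 1.0, 201)
Y1, Y2 = np.meshgrid(grid, grid, indexing="ij")
Fx = F(x)
obj = (lam * (Fx[0] * (Y1 - x[0]) + Fx[1] * (Y2 - x[1]))
       + 0.5 * (Y1**2 + Y2**2) - x[0] * Y1 - x[1] * Y2)
i, j = np.unravel_index(np.argmin(obj), obj.shape)
y_grid = np.array([grid[i], grid[j]])

y_proj = proj_box(x - lam * F(x))       # the closed-form projection step
assert np.allclose(y_grid, y_proj, atol=1e-2)
```

The brute-force minimizer of the auxiliary objective and the projection step coincide, as the fixed-point property suggests.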

Many algorithms for solving optimization problems and variational inequalities are projection algorithms, which employ projections onto the feasible set C, or onto some related set, in order to reach a solution iteratively. In particular, Korpelevich [16] proposed an algorithm for solving variational inequalities in which, at each iteration, two orthogonal projections onto C are computed to obtain the next iterate xk+1: given the current iterate xk, calculate

yk := PrC(xk − λF(xk))

and then

xk+1 := PrC(xk − λF(yk)),

where λ > 0. Recently, Tran et al. [17] extended these projection techniques to Problem EP(f, C) involving monotone equilibrium bifunctions, but the bifunction must satisfy a certain Lipschitz-type continuity condition. To avoid this requirement, the authors proposed linesearch procedures commonly used in variational inequalities to obtain projection-type algorithms for solving equilibrium problems.
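In code, one pass of this two-projection scheme looks as follows; the affine monotone operator F and the box C below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Sketch of Korpelevich's extragradient step (two projections per iteration),
# on a box C = [0, 2]^2 with an affine monotone operator F (illustrative choices).

def proj_C(z):                          # projection onto the box [0, 2]^2
    return np.clip(z, 0.0, 2.0)

A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # A + A^T positive definite => F monotone
b = np.array([-1.0, -2.0])
F = lambda x: A @ x + b

x = np.array([2.0, 0.0])
lam = 0.2                               # lam * ||A|| < 1 here
for _ in range(200):
    y = proj_C(x - lam * F(x))          # first projection (predictor)
    x = proj_C(x - lam * F(y))          # second projection (corrector)

# At a solution x*, <F(x*), y - x*> >= 0 for all y in C; the fixed-point
# residual of the projected step should therefore be (numerically) zero.
assert np.linalg.norm(x - proj_C(x - lam * F(x))) < 1e-6
```

For this data the iterates settle at x∗ = (0, 1), where F(x∗) = 0.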

It is well known that the interior approximal technique is a powerful tool for analyzing and solving optimization problems. This technique has been used extensively by many authors for solving variational inequalities and equilibrium problems on a polyhedral convex set (see [18–21]), where a Bregman-type interior approximal function d replaces the function g in (1.1):

d(x, y) = (1/2)‖x − y‖2 + μ ∑_{i=1}^{n} yi^2 ( (xi/yi) log(xi/yi) − xi/yi + 1 ) if xi > 0 ∀i = 1, · · · , n, and d(x, y) = +∞ otherwise, (1.2)

with μ ∈ (0, 1) and y ∈ Rn+ = {(x1, · · · , xn)^T ∈ Rn : xi > 0 ∀i = 1, · · · , n}. Then the interior proximal linesearch extragradient methods can be viewed as combining the function d with Armijo-type linesearch
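A direct transcription of (1.2); the value of μ below is an arbitrary admissible choice:

```python
import numpy as np

# The interior proximal function d of (1.2): a quadratic term plus a
# logarithmic barrier-type term; d(x, y) = +inf unless x > 0 componentwise.

def d(x, y, mu=0.5):
    x, y = np.asarray(x, float), np.asarray(y, float)
    if np.any(x <= 0):
        return np.inf
    t = x / y
    return 0.5 * np.sum((x - y) ** 2) + mu * np.sum(y**2 * (t * np.log(t) - t + 1))

y = np.array([1.0, 2.0])
assert d(y, y) == 0.0                       # d vanishes on the diagonal
assert d(np.array([1.5, 1.0]), y) > 0.0     # positive off the diagonal
assert d(np.array([-1.0, 1.0]), y) == np.inf
```

Note that d(y, y) = 0 and d(x, y) > 0 for x ≠ y, since t log t − t + 1 ≥ 0 for all t > 0.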

techniques. Convergence of the iterative sequence is established under the weaker assumption that f is pseudomonotone on C × C. However, at each iteration k, the Armijo-type linesearch step of the algorithm requires the computation of a subgradient of the bifunction, ∂f(xk, ·)(yk), which is not easy in some cases. Moreover, most current algorithms for solving Problem EP(f, C) are based on Lipschitz-type continuity assumptions or on the computation of subgradients of the bifunction f (see [21–25]).

The main purpose of this paper is to give an iterative algorithm for solving a pseudomonotone equilibrium problem without Lipschitz-type continuity of the bifunction and without the computation of subgradients. To summarize our approach: first, we use an interior proximal function d as in [22], which replaces the usual quadratic function in the auxiliary problems. Next, we construct an appropriate hyperplane and a convex set which contain the solution set and separate it from the current iterate, and we combine this construction with an Armijo-type linesearch technique. The next iterate is then obtained as the projection of the starting point onto the intersection of the feasible set with the convex set and the half-space containing the solution set.

The paper is organized as follows. In Section 2, we recall the auxiliary problem principle for Problem EP(f, C) and propose a new iterative algorithm. Section 3 is devoted to the proof of its global convergence and also shows the relation between the solution set of EP(f, C) and the cluster points of the iterative sequence generated by the algorithm. In Section 4, we apply our algorithm to the Cournot-Nash oligopolistic market equilibrium model and report numerical results.

2 Proposed algorithm

Let ai denote the rows of the matrix A, and set

li(x) = bi − 〈ai, x〉, l(x) = (l1(x), l2(x), · · · , lp(x))^T, D(x, y) = d(l(x), l(y)), (2.1)

where the function d is defined by (1.2). Then the gradient ∇1D(x, y) of D(·, y) at x, for every y ∈ C, is given by

∇1D(x, y) = −A^T ( l(x) − l(y) + μ Xy log(l(x)/l(y)) ),

where Xy = diag(l1(y), · · · , lp(y)) and log(l(x)/l(y)) = ( log(l1(x)/l1(y)), · · · , log(lp(x)/lp(y)) )^T.
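In code, l and ∇1D read as follows; the matrix A and vector b are small illustrative assumptions (p = 3, n = 2) satisfying A1, not the paper's test data:

```python
import numpy as np

# l(x) = b - A x and grad_1 D(x, y) = -A^T( l(x) - l(y) + mu * X_y * log(l(x)/l(y)) );
# elementwise, X_y * log(l(x)/l(y)) is just l(y) * log(l(x)/l(y)).

mu = 0.5
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])   # p = 3, n = 2, rank A = 2
b = np.array([1.0, 1.0, 0.5])

l = lambda x: b - A @ x

def grad1_D(x, y):
    lx, ly = l(x), l(y)                  # both must be positive (interior points)
    return -A.T @ (lx - ly + mu * ly * np.log(lx / ly))

x = np.array([0.2, 0.1])                 # l(x) = (0.8, 0.9, 0.8) > 0
y = np.array([0.1, 0.3])                 # l(y) = (0.9, 0.7, 0.9) > 0

assert np.allclose(grad1_D(y, y), 0.0)   # the gradient vanishes at x = y
```

The check grad1_D(y, y) = 0 mirrors the fact, used in Lemma 3.2 (i) below, that ∇1D(xk, xk) = 0.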


It is well known that x∗ is a solution of the regularized auxiliary problem

Find x∗ ∈ C such that f(x∗, y) + (1/c) D(y, x∗) ≥ 0 for all y ∈ C,

where c > 0 is a regularization parameter, if and only if x∗ is a solution of Problem EP(f, C) (see [3]).

Motivated by this, we first solve the following strongly convex problem with the interior proximal function D:

yk = argmin{f(xk, y) + βD(y, xk) : y ∈ C},

for some positive constant β. It is easy to see that with f(x, y) = 〈F(x), y − x〉, where F : C → Rn, and D(x, y) = (1/2)‖x − y‖2, computing yk becomes Step 1 of the extragradient method proposed in [16]. In Lemma 3.2 (i), we will show that if ‖yk − xk‖ = 0, then xk is a solution to Problem EP(f, C). Otherwise, a computationally inexpensive Armijo-type procedure is used to find a point zk such that the convex set Ck := {x ∈ Rn : f(zk, x) ≤ 0} and the half-space Hk := {x ∈ Rn : 〈x − xk, x0 − xk〉 ≤ 0} contain the solution set Sol(f, C) and strictly separate xk from it. Then we compute the next iterate xk+1 by projecting x0 onto the intersection of the feasible set C with Ck and the half-space Hk. The algorithm is described in more detail as follows.

Algorithm 2.1. Choose x0 ∈ C, 0 < σ < β/(2‖A−1‖2), and γ ∈ (0, 1).

Step 1. Evaluate

yk = argmin{f(xk, y) + (β/2) D(y, xk) : y ∈ C}, r(xk) = xk − yk. (2.2)

If r(xk) = 0, then stop. Otherwise, set zk = xk − γ^{mk} r(xk), where mk is the smallest nonnegative integer such that

f(xk − γ^{mk} r(xk), yk) ≤ −σ‖r(xk)‖2. (2.3)

Step 2. Evaluate xk+1 = PrC∩Ck∩Hk(x0), where

Ck = {x ∈ Rn : f(zk, x) ≤ 0}, Hk = {x ∈ Rn : 〈x − xk, x0 − xk〉 ≤ 0}.

Increase k by 1 and return to Step 1.
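The search for mk in Step 1 can be sketched by trying m = 0, 1, 2, · · · until (2.3) holds. The bifunction f(x, y) = 〈F(x), y − x〉 and the points xk, yk below are illustrative assumptions of the demo:

```python
import numpy as np

# Find the smallest nonnegative integer m with
#   f(x_k - gamma**m * r(x_k), y_k) <= -sigma * ||r(x_k)||**2     (cf. (2.3))

def find_mk(f, xk, yk, sigma, gamma, max_m=100):
    r = xk - yk                                   # r(x_k) = x_k - y_k
    rhs = -sigma * float(r @ r)
    for m in range(max_m):
        z = xk - gamma**m * r
        if f(z, yk) <= rhs:
            return m, z                           # z plays the role of z_k
    raise RuntimeError("linesearch did not terminate")

F = lambda x: x - np.array([1.0, 0.0])            # illustrative operator
f = lambda x, y: float(F(x) @ (y - x))            # f(x, y) = <F(x), y - x>

xk = np.array([2.0, 1.0])
yk = np.array([1.2, 0.4])
mk, zk = find_mk(f, xk, yk, sigma=0.1, gamma=0.5)
assert f(zk, yk) <= -0.1 * float((xk - yk) @ (xk - yk))
```

Lemma 3.1 below guarantees that this loop terminates whenever ‖r(xk)‖ > 0 and σ < β/(2‖A−1‖2).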

3 Convergence results

In the next lemma, we show the existence of the nonnegative integer mk in Algorithm 2.1.

Lemma 3.1. For γ ∈ (0, 1) and 0 < σ < β/(2‖A−1‖2), if ‖r(xk)‖ > 0, then there exists a smallest nonnegative integer mk satisfying (2.3).


Proof. Assume, on the contrary, that (2.3) is not satisfied for any nonnegative integer i, i.e.,

f(xk − γ^i r(xk), yk) + σ‖r(xk)‖2 > 0.

Letting i → ∞, from the continuity of f we have

f(xk, yk) + σ‖r(xk)‖2 ≥ 0. (3.1)

On the other hand, for each t > 0 we have 1 − 1/t ≤ log t. Taking t = li(yk)/li(xk) > 0 and multiplying by t, we obtain, for each i = 1, · · · , p,

li(yk)/li(xk) − 1 ≤ (li(yk)/li(xk)) log(li(yk)/li(xk)).

Moreover, it follows from rank A = n that

‖x − y‖ = ‖A−1A(x − y)‖ ≤ ‖A−1‖‖A(x − y)‖,

and hence

D(yk, xk) = (1/2)‖l(xk) − l(yk)‖2 + μ ∑_{i=1}^{p} li(xk)^2 ( (li(yk)/li(xk)) log(li(yk)/li(xk)) − li(yk)/li(xk) + 1 )
≥ (1/2)‖l(xk) − l(yk)‖2
= (1/2)‖A(xk − yk)‖2
≥ (1/(2‖A−1‖2))‖xk − yk‖2
= (1/(2‖A−1‖2))‖r(xk)‖2. (3.2)

Since yk is the solution to the strongly convex program (2.2), we have

f(xk, y) + βD(y, xk) ≥ f(xk, yk) + βD(yk, xk) ∀y ∈ C.

Substituting y = xk ∈ C and using f(xk, xk) = 0 and D(xk, xk) = 0, we get

f(xk, yk) + βD(yk, xk) ≤ 0. (3.3)

Combining (3.2) with (3.3), we obtain

f(xk, yk) + (β/(2‖A−1‖2))‖r(xk)‖2 ≤ 0. (3.4)


Then inequalities (3.1) and (3.4) imply that

−σ‖r(xk)‖2 ≤ f(xk, yk) ≤ −(β/(2‖A−1‖2))‖r(xk)‖2.

Hence, either r(xk) = 0 or σ ≥ β/(2‖A−1‖2). The first case contradicts ‖r(xk)‖ > 0, while the second one contradicts the fact that σ < β/(2‖A−1‖2). The proof is complete. □

Let us now discuss the global convergence of Algorithm 2.1.

Lemma 3.2. Let {xk} be the sequence generated by Algorithm 2.1 and let Sol(f, C) ≠ ∅. Then the following hold.

(i) If r(xk) = 0, then xk ∈ Sol(f, C).

(ii) xk ∉ Ck.

(iii) Sol(f, C) ⊆ C ∩ Ck ∩ Hk.

(iv) lim_{k→∞} ‖xk+1 − xk‖ = 0.

Proof. (i) Since yk solves problem (2.2), the optimality condition of convex programming gives

0 ∈ ∂f(xk, ·)(yk) + β∇1D(yk, xk) + NC(yk),

where NC denotes the normal cone of C. From yk ∈ int C, it follows that NC(yk) = {0}. Hence,

ξk + β∇1D(yk, xk) = 0,

where ξk ∈ ∂f(xk, ·)(yk). Since r(xk) = 0, i.e., yk = xk, this equality becomes

ξk + β∇1D(xk, xk) = 0.

Since

∇1D(x, y) = −A^T ( l(x) − l(y) + μ Xy log(l(x)/l(y)) ) ∀x, y ∈ int C,

we have ∇1D(xk, xk) = 0, and thus ξk = 0. Combining this with f(xk, xk) = 0 and the subgradient inequality, we obtain

f(xk, y) ≥ f(xk, yk) + 〈ξk, y − yk〉 = 0 ∀y ∈ C,

which means that xk ∈ Sol(f, C).

(ii) Since zk = xk − γ^{mk} r(xk) with ‖r(xk)‖ > 0, f(x, x) = 0 for every x ∈ C, and f(zk, ·) is convex on C, we have

0 = f(zk, zk) = f(zk, (1 − γ^{mk})xk + γ^{mk} yk)
≤ (1 − γ^{mk}) f(zk, xk) + γ^{mk} f(zk, yk)
≤ (1 − γ^{mk}) f(zk, xk) − γ^{mk} σ‖r(xk)‖2
< (1 − γ^{mk}) f(zk, xk).

Hence f(zk, xk) > 0, which means that xk ∉ Ck.

(iii) Let x∗ ∈ Sol(f, C). Since f is pseudomonotone on C and f(x∗, zk) ≥ 0, we have f(zk, x∗) ≤ 0, so x∗ ∈ Ck. To prove Sol(f, C) ⊆ Hk, we use induction. For k = 0 we have H0 = Rn, so the inclusion holds. Suppose that Sol(f, C) ⊆ Hm for some m ≥ 0. Then, from x∗ ∈ Sol(f, C) ⊆ C ∩ Cm ∩ Hm and xm+1 = PrC∩Cm∩Hm(x0), it follows that

〈x∗ − xm+1, x0 − xm+1〉 ≤ 0,

and hence x∗ ∈ Hm+1. This implies Sol(f, C) ⊆ Hm+1, and (iii) is proved.

(iv) Since xk is the projection of x0 onto C ∩ Ck−1 ∩ Hk−1 and Sol(f, C) ⊆ C ∩ Ck−1 ∩ Hk−1 by (iii), the definition of the projection gives

‖xk − x0‖ ≤ ‖x∗ − x0‖ ∀x∗ ∈ Sol(f, C),

so {xk} is bounded. On the other hand, by the definition of Hk we have

〈x − xk, x0 − xk〉 ≤ 0 ∀x ∈ Hk,

and hence

xk = PrHk(x0). (3.5)

From xk+1 ∈ Hk, it holds that xk+1 = PrHk(xk+1). Combining this with (3.5) and the firm nonexpansiveness of the projection mapping, we obtain

‖xk+1 − xk‖2 = ‖PrHk(xk+1) − PrHk(x0)‖2
≤ ‖xk+1 − x0‖2 − ‖PrHk(xk+1) − xk+1 + x0 − PrHk(x0)‖2
= ‖xk+1 − x0‖2 − ‖xk − x0‖2.

This implies that

‖xk+1 − x0‖2 ≥ ‖xk − x0‖2 + ‖xk+1 − xk‖2. (3.6)

Thus, the sequence {‖xk − x0‖} is bounded and nondecreasing, and hence lim_{k→∞} ‖xk − x0‖ exists. Consequently,

lim_{k→∞} ‖xk+1 − xk‖ = 0. □
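The identity xk = PrHk(x0) in (3.5) rests on the closed-form projection onto a half-space H = {x : 〈x − u, v〉 ≤ 0}, namely PrH(z) = z − (max(0, 〈z − u, v〉)/‖v‖2) v. A quick numerical check with illustrative data:

```python
import numpy as np

# Closed-form projection onto the half-space H = {x : <x - u, v> <= 0},
# which underlies x_k = Pr_{H_k}(x0) in (3.5) (with u = x_k, v = x0 - x_k).

def proj_halfspace(z, u, v):
    viol = max(0.0, float((z - u) @ v))
    return z - (viol / float(v @ v)) * v

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
z = np.array([3.0, 1.0])                  # violates <z - u, v> = 3 > 0
p = proj_halfspace(z, u, v)
assert (p - u) @ v <= 1e-12               # p lies in H
assert np.allclose((z - p) @ np.array([1.0, -1.0]), 0.0)  # z - p parallel to v
```

Points already in H are left unchanged, since the violation term is then zero.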

Theorem 3.3. Suppose that assumptions A1–A5 hold, ∂f(x, ·)(x) is upper semicontinuous on C, and the sequence {xk} is generated by Algorithm 2.1. Then {xk} converges globally to the solution x∗ of Problem EP(f, C) given by

x∗ = PrSol(f,C)(x0).

Proof. For each wk ∈ ∂f(zk, ·)(zk), set

H∗k = {y ∈ Rn : 〈wk, y − zk〉 ≤ 0}.

From wk ∈ ∂f(zk, ·)(zk) and f(x, x) = 0 for every x ∈ C, it follows that

〈wk, y − zk〉 ≤ f(zk, y) − f(zk, zk) = f(zk, y). (3.7)

Then Ck ⊆ H∗k for all k ≥ 0, and hence

‖xk − PrH∗k(xk)‖ ≤ ‖xk − PrCk(xk)‖. (3.8)

On the other hand, it follows from

PrH∗k(xk) = xk − (〈wk, xk − zk〉/‖wk‖2) wk

that

‖xk − PrH∗k(xk)‖ = |〈wk, xk − zk〉|/‖wk‖. (3.9)

Substituting y = yk into (3.7), we have

f(zk, yk) ≥ 〈wk, yk − zk〉.


Combining this with (2.3), we have

〈wk, zk − yk〉 ≥ σ‖r(xk)‖2. (3.10)

From zk = (1 − γ^{mk})xk + γ^{mk} yk, it follows that

xk − zk = (γ^{mk}/(1 − γ^{mk})) (zk − yk).

Using this and (3.10), we have

〈wk, xk − zk〉 = (γ^{mk}/(1 − γ^{mk})) 〈wk, zk − yk〉 ≥ γ^{mk} σ‖r(xk)‖2/(1 − γ^{mk}). (3.11)

From (3.9) and (3.11), it follows that

‖xk − PrH∗k(xk)‖ ≥ γ^{mk} σ‖r(xk)‖2/‖wk‖.

Then, since ∂f(x, ·)(x) is upper semicontinuous on C and {xk} is bounded, there exists M > 0 such that ‖wk‖ ≤ M, and hence

‖xk − PrH∗k(xk)‖ ≥ γ^{mk} σ‖r(xk)‖2/M.

Combining this with xk+1 ∈ Ck and (3.8), we have

‖xk+1 − xk‖ ≥ ‖xk − PrCk(xk)‖ ≥ γ^{mk} σ‖r(xk)‖2/M.

Then it follows from (iv) of Lemma 3.2 that

lim_{k→∞} γ^{mk} ‖r(xk)‖2 = 0.

Two cases remain to be considered.

Case 1. lim sup_{k→∞} γ^{mk} > 0.

In this case, it must hold that lim inf_{k→∞} ‖r(xk)‖ = 0. Since {xk} is bounded, there exists an accumulation point x̄ of {xk}; in other words, a subsequence {xki} converges, as i → ∞, to some x̄ with r(x̄) = 0. Then we see from Lemma 3.2 (i) that x̄ ∈ Sol(f, C).

Case 2. lim_{k→∞} γ^{mk} = 0.

Since {xk} is bounded, there is a subsequence {xkj} of {xk} converging to some x̄ as j → ∞. Then, from the continuity of f and

ykj = argmin{f(xkj, y) + (β/2) D(y, xkj) : y ∈ C},


there exists ȳ such that the sequence {ykj} converges to ȳ as j → ∞, where

ȳ = argmin{f(x̄, y) + (β/2) D(y, x̄) : y ∈ C}.

Since mk is the smallest nonnegative integer satisfying (2.3), mk − 1 does not satisfy it. Hence we have

f(xk − γ^{mk−1} r(xk), yk) > −σ‖r(xk)‖2,

and in particular

f(xkj − γ^{mkj−1} r(xkj), ykj) > −σ‖r(xkj)‖2. (3.12)

Passing to the limit in (3.12) as j → ∞ and using the continuity of f, we have

f(x̄, ȳ) ≥ −σ‖r(x̄)‖2, (3.13)

where r(x̄) = x̄ − ȳ. From Algorithm 2.1, we have

f(xkj − γ^{mkj} r(xkj), ykj) ≤ −σ‖r(xkj)‖2.

Since f is continuous, passing to the limit as j → ∞ we obtain

f(x̄, ȳ) ≤ −σ‖r(x̄)‖2.

Using this and (3.13), we have f(x̄, ȳ) = −σ‖r(x̄)‖2; combining this with the analogue of (3.4) at x̄ and the fact that σ < β/(2‖A−1‖2) yields r(x̄) = 0, and hence x̄ ∈ Sol(f, C). So all cluster points of {xk} belong to the solution set Sol(f, C).

Set x̄ = PrSol(f,C)(x0) and suppose that a subsequence {xkj} converges to x∗ ∈ Sol(f, C) as j → ∞. By (iii) of Lemma 3.2, we have

x̄ ∈ C ∩ Ckj−1 ∩ Hkj−1,

so

‖xkj − x0‖ ≤ ‖x̄ − x0‖.

Thus,

‖xkj − x̄‖2 = ‖xkj − x0 + x0 − x̄‖2
= ‖xkj − x0‖2 + ‖x0 − x̄‖2 + 2〈xkj − x0, x0 − x̄〉
≤ ‖x̄ − x0‖2 + ‖x0 − x̄‖2 + 2〈xkj − x0, x0 − x̄〉.


As j → ∞, we get xkj → x∗ and

‖x∗ − x̄‖2 ≤ 2‖x̄ − x0‖2 + 2〈x∗ − x0, x0 − x̄〉 = 2〈x∗ − x̄, x0 − x̄〉 ≤ 0.

The last inequality follows from the characterization of the projection x̄ = PrSol(f,C)(x0). So x∗ = x̄, and the sequence {xk} has the unique cluster point PrSol(f,C)(x0). □

Now we consider the relation between the solution existence of Problem EP(f, C) and the convergence of {xk} generated by Algorithm 2.1.

Lemma 3.4 (see [4]). Suppose that C is a compact convex subset of Rn and f is continuous on C. Then the solution set of Problem EP(f, C) is nonempty.

Theorem 3.5. Suppose that assumptions A1–A4 hold, ∂f(x, ·)(x) is upper semicontinuous on C, the sequence {xk} is generated by Algorithm 2.1, and Sol(f, C) = ∅. Then

lim_{k→∞} ‖xk − x0‖ = +∞.

Consequently, the solution set of Problem EP(f, C) is empty if and only if the sequence {xk} diverges to infinity.

Proof. First, we show that C ∩ Ck ∩ Hk ≠ ∅ for every k ≥ 0. On the contrary, suppose that there exists k0 ≥ 1 such that

C ∩ Ck0 ∩ Hk0 = ∅.

Then there exists a positive number M such that

{xk : 0 ≤ k ≤ k0} ⊆ B(x0, M),

where B(x0, M) = {x ∈ Rn : ‖x − x0‖ ≤ M}. By Lemma 3.4, the solution set of Problem EP(f, C̄) is nonempty, where C̄ = C ∩ B(x0, 2M). We now apply Algorithm 2.1 to Problem EP(f, C̄); to avoid confusion with the sequences {xk}, {Ck}, and {Hk}, we denote the three corresponding sequences by {x̄k}, {C̄k}, and {H̄k}. With x̄0 = x0, the following claims hold:

(a) The sequence {x̄k} has at least k0 + 1 elements.

(b) x̄k = xk, C̄k = Ck, and H̄k = Hk for every k = 0, 1, · · · , k0.

(c) x̄k0 is not a solution to Problem EP(f, C̄).

Using Sol(f, C̄) ≠ ∅ and (iii) of Lemma 3.2, we have C̄ ∩ C̄k0 ∩ H̄k0 ≠ ∅. Then we also have C ∩ Ck0 ∩ Hk0 ≠ ∅, which contradicts the supposition that C ∩ Ck0 ∩ Hk0 = ∅. So,

C ∩ Ck ∩ Hk ≠ ∅ ∀k ≥ 0.

This implies that inequality (3.6) also holds in this case, so the sequence {‖xk − x0‖} is still nondecreasing. We claim that

lim_{k→∞} ‖xk − x0‖ = ∞.

Suppose, for contradiction, that lim_{k→∞} ‖xk − x0‖ exists in [0, +∞). Then {xk} is bounded, and it follows from (3.6) that

lim_{k→∞} ‖xk+1 − xk‖ = 0.

A discussion similar to the proof of Theorem 3.3 then leads to the conclusion that the sequence {xk} converges to PrSol(f,C)(x0), which contradicts the emptiness of the solution set Sol(f, C). The theorem is proved. □

4 Applications to Cournot-Nash equilibrium model

Now we consider the following Cournot-Nash oligopolistic market equilibrium model (see [25–28]). There are n firms producing a common homogeneous commodity, and the price pi of firm i depends on the total quantity σx = ∑_{i=1}^{n} xi of the commodity. Let hi(xi) denote the cost of firm i when its production level is xi; the cost is assumed to depend only on the firm's own production level. The profit of firm i is given by

fi(x1, · · · , xn) = xi pi(σx) − hi(xi), i = 1, · · · , n.

There is a common strategy space C ⊆ Rn for all firms. Each firm seeks to maximize its own profit by choosing the corresponding production level under the presumption that the production levels of the other firms are parametric input. In this context, a Nash equilibrium is a production pattern in which no firm can increase its profit by changing its controlled variables; under this equilibrium concept, each firm determines its best response given the other firms' actions. Mathematically, a point x∗ = (x∗1, · · · , x∗n)^T ∈ C is said to be a Nash equilibrium point if, for every i = 1, · · · , n, x∗ solves

max{fi(y∗,i) : y∗,i = (x∗1, · · · , x∗i−1, yi, x∗i+1, · · · , x∗n)^T ∈ C}.


Set

φ(x, y) = −∑_{i=1}^{n} fi(x1, · · · , xi−1, yi, xi+1, · · · , xn) (4.1)

and

f(x, y) = φ(x, y) − φ(x, x). (4.2)

Then the problem of finding an equilibrium point of this model can be formulated as Problem EP(f, C). It follows from Lemma 3.2 (i) that xk is a solution of Problem EP(f, C) if and only if r(xk) = 0. Thus, xk is an ε-solution to Problem EP(f, C) if ‖r(xk)‖ ≤ ε. To illustrate our algorithm, we consider two academic numerical tests of the bifunction f in R5.
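The construction (4.1)-(4.2) can be sketched directly from the profit functions. The linear inverse demand p(s) = a − bs and the quadratic costs hi(t) = ci t^2 below are assumptions of this toy sketch, not the paper's test data:

```python
import numpy as np

# Building the equilibrium bifunction (4.1)-(4.2) from the firms' profits.
# Toy model (assumed): linear inverse demand p(s) = a - b*s and quadratic
# costs h_i(t) = c_i * t**2, with n = 3 firms.

a, b_ = 10.0, 1.0
c = np.array([1.0, 2.0, 3.0])

def profit(i, x):                        # f_i(x) = x_i p(sigma_x) - h_i(x_i)
    return x[i] * (a - b_ * x.sum()) - c[i] * x[i] ** 2

def phi(x, y):                           # (4.1)
    return -sum(profit(i, np.concatenate([x[:i], [y[i]], x[i + 1:]]))
                for i in range(len(x)))

f = lambda x, y: phi(x, y) - phi(x, x)   # (4.2)

x = np.array([1.0, 1.0, 1.0])
assert f(x, x) == 0.0                    # f vanishes on the diagonal
```

If some firm i can improve its profit by unilaterally moving to yi, then f(x, y) < 0 for the corresponding y, which is what the equilibrium condition f(x∗, y) ≥ 0 rules out.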

Example 4.1. We consider an application of the Cournot-Nash oligopolistic market equilibrium model taken from [17]. The equilibrium bifunction is defined by

f(x, y) = 〈M(x + y) + 〈x, d〉B(x + y) + q, y − x〉, (4.3)

where

M =
  1 2 3 4 5
  1 0 0 5 6
  0 0 9 2 3
  3 3 3 4 1
  3 4 5 1 2

B =
  1 2 3 4 1
  9 0 2 1 3
  3 4 5 2 4
  0 1 2 3 7
  0 1 2 3 4

q = (1, 3, 0, 1, 5)^T, d = (3, 2, 0, 4, 9)^T,

and

C = { x ∈ R5+ : 4 ≤ x1 + 2x2 + x3 + 3x5 ≤ 12, 7 ≤ ∑_{i=1}^{5} xi ≤ 15, 6 ≤ x2 + x3 + 2x4 ≤ 13, 3 ≤ x2 + x3 ≤ 5 }.
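For concreteness, (4.3) can be evaluated directly; the entries of M and B are as displayed above, while q = (1, 3, 0, 1, 5)^T and d = (3, 2, 0, 4, 9)^T are a best-effort reading of the garbled source layout (treat these two vectors as an assumption):

```python
import numpy as np

# Evaluating the bifunction (4.3) of Example 4.1,
#   f(x, y) = <M(x + y) + <x, d> B(x + y) + q, y - x>.

M = np.array([[1, 2, 3, 4, 5], [1, 0, 0, 5, 6], [0, 0, 9, 2, 3],
              [3, 3, 3, 4, 1], [3, 4, 5, 1, 2]], float)
B = np.array([[1, 2, 3, 4, 1], [9, 0, 2, 1, 3], [3, 4, 5, 2, 4],
              [0, 1, 2, 3, 7], [0, 1, 2, 3, 4]], float)
q = np.array([1, 3, 0, 1, 5], float)     # assumed reading of the source vector
d = np.array([3, 2, 0, 4, 9], float)     # assumed reading of the source vector

def f(x, y):
    s = x + y
    return float((M @ s + (x @ d) * (B @ s) + q) @ (y - x))

x = np.array([1, 3, 1, 1, 1], float)     # the starting point x0 from Table 1
assert f(x, x) == 0.0                    # f(x, x) = 0 on the diagonal
```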


In this case, the bifunction f is pseudomonotone on C, and the interior approximal function (2.1) is defined through

li(x) = xi, i = 1, · · · , 5,
l6(x) = 12 − x1 − 2x2 − x3 − 3x5,
l7(x) = x1 + 2x2 + x3 + 3x5 − 4,
l8(x) = x2 + x3 + 2x4 − 6,
l9(x) = 13 − x2 − x3 − 2x4,
l10(x) = x2 + x3 − 3,
l11(x) = 5 − x2 − x3.
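As a sanity check on assumption A1 for this example, the constraint system of C can be assembled as A x ≤ b; the rows below are read off from the description of C above (note that the bounds on ∑ xi, which do not appear among the listed li, are omitted here so that the rows match the li):

```python
import numpy as np

# C = {x : A x <= b}, rows read off from the constraints of Example 4.1
# (the Sum(x_i) bounds are omitted to match the listed l_i's).

A = np.array([
    [-1, 0, 0, 0, 0], [0, -1, 0, 0, 0], [0, 0, -1, 0, 0],   # x_i >= 0
    [0, 0, 0, -1, 0], [0, 0, 0, 0, -1],
    [1, 2, 1, 0, 3], [-1, -2, -1, 0, -3],                   # 4 <= x1+2x2+x3+3x5 <= 12
    [0, 1, 1, 2, 0], [0, -1, -1, -2, 0],                    # 6 <= x2+x3+2x4 <= 13
    [0, 1, 1, 0, 0], [0, -1, -1, 0, 0],                     # 3 <= x2+x3 <= 5
], float)
b = np.array([0, 0, 0, 0, 0, 12, -4, 13, -6, 5, -3], float)

assert np.linalg.matrix_rank(A) == 5     # rank A = n, as required by A1
x0 = np.array([1, 3, 1, 1, 1], float)    # starting point of Table 1
assert np.all(A @ x0 <= b + 1e-12)       # x0 is feasible for these rows
```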

It is easy to see that rank A = 5. Taking ‖A−1‖ = 1, β = 4, σ = 1.5, γ = 0.7, and μ = 0.55, we get the iterates shown in Table 1.

k     xk1     xk2     xk3     xk4     xk5
0     1       3       1       1       1
10    4.3842  0.0001  3.4633  1.7683  1.3842
20    4.4657  0.0013  3.1371  1.9315  1.4657
50    4.4901  0.0017  3.0404  1.9796  1.4898
100   4.4976  0.0082  3.0117  1.9938  1.4969
150   4.5001  0.0099  3.0031  1.9979  1.4990
180   4.5012  0.0107  3.0005  1.9989  1.4995
200   4.5106  0.0142  2.9716  2.0071  1.4965
250   4.5309  0.0412  2.9176  2.0206  1.4897
300   4.5370  0.0494  2.9013  2.0247  1.4877
350   4.5389  0.0519  2.8963  2.0259  1.4870
355   4.5395  0.0526  2.8948  2.0263  1.4868
358   4.5396  0.0528  2.8943  2.0264  1.4868
360   4.5397  0.0529  2.8942  2.0264  1.4868
361   4.5397  0.0529  2.8942  2.0265  1.4868

Table 1 - Example 4.1: Iterates of Algorithm 2.1 with tolerance ‖r(xk)‖ ≤ 0.0001.

The approximate solution obtained after 361 iterations is

x361 = (4.5397, 0.0529, 2.8942, 2.0265, 1.4868)^T.

Example 4.2. In the same setting as Example 4.1, we only change the bifunction, which now has the form

f(x, y) = 〈Px + Qy + q, y − x〉 + 〈d, arctan(x − y)〉,

where arctan(x − y) = (arctan(x1 − y1), · · · , arctan(x5 − y5))^T and the components of d are chosen randomly in (0, 10).


Then the bifunction f satisfies the convergence assumptions of Theorem 3.3 in this paper and of Theorem 3.1 in [21]. We choose the parameters in Algorithm 2.1 as ‖A−1‖ = 1, β = 5, σ = 1.2, γ = 0.5, μ = 0.2. In the algorithm proposed by Nguyen et al. [21] (shortly (IPLE)), the parameters are chosen as follows: θ = 0.5, τ = 0.7, α = 0.4, μ = 2, ck = 0.5 + 1/(k + 1) for all k ≥ 1. We compare Algorithm 2.1 with (IPLE). The iteration numbers and the computational times for 5 problems are given in Table 2.

            Algorithm 2.1               Algorithm (IPLE)
Problem     Iterations  CPU time (s)    Iterations  CPU time (s)
No. 1       547         16.5140         421         15.1388
No. 2       379         8.0249          372         14.4729
No. 3       1045        21.2740         1026        23.9581
No. 4       781         15.7751         1216        32.3385
No. 5       625         14.1925         297         9.4139

Table 2 - Example 4.2: The tolerance is ‖r(xk)‖ ≤ 0.19.

The computations were performed in Matlab R2008a on a desktop PC with an Intel(R) Core(TM) i5 CPU at 3.33 GHz and 4 GB of RAM.

5 Conclusion

This paper presented an iterative algorithm for solving pseudomonotone equilibrium problems without Lipschitz-type continuity of the bifunction. Combining the interior proximal extragradient method of [22] with Armijo-type linesearch and cutting hyperplane techniques, the global convergence properties of the algorithm are established under mild assumptions. Compared with current methods such as the interior proximal extragradient method, the dual extragradient algorithm in [14], the auxiliary problem principle in [15], the inexact subgradient method in [29], and other methods in [4], the fundamental difference here is that our algorithm does not require the computation of subgradients of a convex function. We showed that the cluster point of the sequence generated by our algorithm is the projection of the starting point onto the solution set of the equilibrium problem. Moreover, we also gave the relation between the existence of solutions of equilibrium problems and the convergence of the iterative sequence.

6 Authors' contributions

The main idea of this paper was proposed by Pham Ngoc Anh. Pham Ngoc Anh and Pham Minh Tuan prepared the manuscript initially and performed all the steps of the proofs in this research. All authors read and approved the final manuscript.


7 Competing interests

The authors declare that they have no competing interests.

8 Acknowledgements

We are very grateful to the anonymous referees for their helpful and constructive comments, which improved the paper.

The work was supported by the National Foundation for Science and Technology Development of Vietnam (NAFOSTED), code 101.02-2011.07.

References

1. Blum, E, Oettli, W: From optimization and variational inequality to equilibrium problems. Math. Stud. 63, 127-149 (1994)

2. Bigi, G, Castellani, M, Pappalardo, M: A new solution method for equilibrium problems. Optim. Methods Softw. 24, 895-911 (2009)

3. Daniele, P, Giannessi, F, Maugeri, A: Equilibrium Problems and Variational Models. Kluwer, Dordrecht (2003)

4. Konnov, IV: Combined Relaxation Methods for Variational Inequalities. Springer-Verlag, Berlin (2000)

5. Moudafi, A: Proximal point algorithm extended to equilibrium problems. J. Natural Geom. 15, 91-100 (1999)

6. Anh, PN: Strong convergence theorems for nonexpansive mappings and Ky Fan inequalities. J. Optim. Theory Appl. (2012). doi:10.1007/s10957-012-0005-x

7. Anh, PN: A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization (2012). doi:10.1080/02331934.2011.607497

8. Anh, PN, Kim, JK: Outer approximation algorithms for pseudomonotone equilibrium problems. Comput. Math. Appl. 61, 2588-2595 (2011)

9. Anh, PN, Muu, LD, Nguyen, VH, Strodiot, JJ: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J. Optim. Theory Appl. 124, 285-306 (2005)

10. Iusem, AN, Sosa, W: On the proximal point method for equilibrium problems in Hilbert spaces. Optimization 59, 1259-1274 (2010)

11. Zeng, LC, Yao, JC: Modified combined relaxation method for general monotone equilibrium problems in Hilbert spaces. J. Optim. Theory Appl. 131, 469-483 (2006)

12. Heusinger, A, Kanzow, C: Relaxation methods for generalized Nash equilibrium problems with inexact line search. J. Optim. Theory Appl. 143, 159-183 (2009)

13. Konnov, IV: Combined relaxation methods for monotone equilibrium problems. J. Optim. Theory Appl. 111, 327-340 (2001)

14. Quoc, TD, Anh, PN, Muu, LD: Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 52, 139-159 (2012)

15. Mastroeni, G: On auxiliary principle for equilibrium problems. In: Daniele, P, Giannessi, F, Maugeri, A (eds.) Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Applications, vol. 68, pp. 289-298. Kluwer Academic Publishers, Dordrecht (2003)

16. Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)

17. Tran, DQ, Dung, ML, Nguyen, VH: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749-776 (2008)

18. Anh, PN: A logarithmic quadratic regularization method for solving pseudomonotone equilibrium problems. Acta Math. Vietnam. 34, 183-200 (2009)

19. Bnouhachem, A: An LQP method for pseudomonotone variational inequalities. J. Glob. Optim. 36, 351-356 (2006)

20. Forsgren, A, Gill, PE, Wright, MH: Interior methods for nonlinear optimization. SIAM Rev. 44, 525-597 (2002)

21. Nguyen, TTV, Strodiot, JJ, Nguyen, VH: The interior proximal extragradient method for solving equilibrium problems. J. Glob. Optim. 44, 175-192 (2009)

22. Anh, PN: An LQP regularization method for equilibrium problems on polyhedral. Vietnam J. Math. 36, 209-228 (2008)

23. Auslender, A, Teboulle, M, Bentiba, S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 12, 31-40 (1999)

24. Auslender, A, Teboulle, M, Bentiba, S: Interior proximal and multiplier methods based on second order homogeneous kernels. Math. Oper. Res. 24, 646-668 (1999)

25. Bigi, G, Passacantando, M: Gap functions and penalization for solving equilibrium problems with nonlinear constraints. Comput. Optim. Appl. 53, 323-346 (2012)

26. Marcotte, P: Algorithms for the network oligopoly problem. J. Oper. Res. Soc. 38, 1051-1065 (1987)

27. Mordukhovich, BS, Outrata, JV, Cervinka, M: Equilibrium problems with complementarity constraints: case study with applications to oligopolistic markets. Optimization 56, 479-494 (2007)

28. Murphy, FH, Sherali, HD, Soyster, AL: A mathematical programming approach for determining oligopolistic market equilibrium. Math. Program. 24, 92-106 (1982)

29. Santos, P, Scheimberg, S: An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 30, 91-107 (2011)