31
MathematicalProgramming 78 (1997) 315-345 Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems Benjamin Jansen a,1, Kees Roos a Tam~ Terlaky a, Akiko Yoshise b.. a Faculty of Technical Mathematics and In lbrmatics, Delft University of Technology. P.O. Box 503 I, 2600 GA Delft, The Netherlands b Institute ofSocio EcomJmic Planning, Unioersity ofTsukuba, Tsukuba, Ibaraki 305, Japan Received 8 January 1996; revisedmanuscript received 23 September 1996 Abstract This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smooth- ness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision e) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/e) and a condition number. If the mapping is linear then the results in this paper coincide with the ones in Jansen et al., SIAM Journal on Optimization 7 (1997) 126-140. © 1997 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Keywords: Complementarity problems;Interior point methods; Primal-dual affine scaling methods; Smooth- ness conditions;Polynomial-time convergence;Global convergence 1. Introduction Nonlinear complementarity problems (NCP) form a fairly general class of mathemati- cal programming problems with a large number of applications. For instance, any * Corresponding author. Research supported in part by Grant-in-Aids for Encouragement of Young Scientists (06750066) from the Ministryof Education, Science and Culture, Japan. i Research supported by Dutch Organizationfor ScientificResearch (NWO), grant 611-304-028 0025-5610/97/5 t7.00 © 1997The MathematicalProgramming Society, Inc. Published by ElsevierScience B.V. PII S0025-5610(96)00076-7

Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems

Embed Size (px)

Citation preview

Mathematical Programming 78 (1997) 315-345

Polynomiality of primal-dual affine scaling algorithms for nonlinear

complementarity problems

Benjamin Jansen a,1, Kees Roos a Tam~ Terlaky a, Akiko Yoshise b..

a Faculty of Technical Mathematics and In lbrmatics, Delft University of Technology. P.O. Box 503 I, 2600 GA Delft, The Netherlands

b Institute ofSocio EcomJmic Planning, Unioersity ofTsukuba, Tsukuba, Ibaraki 305, Japan

Received 8 January 1996; revised manuscript received 23 September 1996

Abstract

This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smooth- ness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision e) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/e) and a condition number. If the mapping is linear then the results in this paper coincide with the ones in Jansen et al., SIAM Journal on Optimization 7 (1997) 126-140. © 1997 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.

Keywords: Complementarity problems; Interior point methods; Primal-dual affine scaling methods; Smooth- ness conditions; Polynomial-time convergence; Global convergence

1. In t roduc t ion

Nonlinear complementarity problems (NCP) form a fairly general class of mathemati-

cal programming problems with a large number of applications. For instance, any

* Corresponding author. Research supported in part by Grant-in-Aids for Encouragement of Young Scientists (06750066) from the Ministry of Education, Science and Culture, Japan.

i Research supported by Dutch Organization for Scientific Research (NWO), grant 611-304-028

0025-5610/97/5 t 7.00 © 1997 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. PII S0025-5610(96)00076-7

316 B. Jansen et al. / Mathematical Programming 78 (1997) 315-345

convex programming problem can be modelled as a monotone nonlinear complementar- ity problem (MNCP). Also, the problem has a close connection with variational inequalities, which play an important role in the study of equilibria in e.g. economics, transportation planning and game-theory. For a good introduction in CP and traditional solution methods we refer the reader to the book of Cottle et al. [2]. A survey on variational inequalities is provided by Harker and Pang [9].

The study of interior point methods for linear programming that has flourished since 1984, also led to the use of barrier methods for nonlinear convex problems, see e.g. [10-12,15,29,36,40,51]. An important aspect of these algorithms is that their conver- gence rate has been established for classes of problems which satisfy certain Lipschitz conditions. A general and unifying analysis was provided by Nesterov and Nemirovskii [40], who managed to give a framework for the study of central path-following methods for nonlinear programming problems satisfying the so-called self-concordance condition. Recently, Nesterov and Todd [41] analyzed primal-dual potential reduction algorithms in a similar framework by considering a symmetric primal-dual cone representation of convex programming problems. Jarre [15] and Den Hertog et al. [12] used the relative

Lipschitz condition, which was later shown (see e.g. [17]) to be essentially equivalent to self-concordance. Zhu et al. [29,44,45,48,51] used the scaled Lipschitz condition to analyze path-following methods.

The fundamental work of McLinden [31 ] brought us a lot of ideas to develop interior point algorithms for linear complementarity problems (LCP) and NCPs (see e.g. [6-8,13,18,20-24,26,27,33,38,42,47,49,50]). The global convergence of these algo- rithms has been shown by using the existence of the central path, which can be seen as a minimum requirement for interior point methods to be applicable. Similarly to the case of nonlinear convex problems, the study of the convergence rate has also become active for NCPs satisfying a smoothness condition on the mapping. In particular, Chapter 7 in [40] is completely devoted to the study of variational inequalities and their solution. Some drawbacks of the analysis in [40] are that it focuses on central path-following methods, that the type of search-direction to be used is prescribed (Newton's direction w.r.t, a self-concordant function), that the algorithm is in essence a pure primal algorithm, and that small neighborhoods of the central path are used. The last aspect has been handled by Nesterov in [39], but we are not aware of results concerning other search-directions. Potra and Ye [43] dealt with primal-dual interior point algorithms for solving MNCPs, and studied the global and local convergence rate and the complexity of the algorithms. Their smoothness condition (a scaled Lipschitz condition) can be regarded as a generalization of Zhu's condition [51].

In this paper we will focus on the complexity analysis for a family of primal-dual affine scaling algorithms for NCP. This serves as an extension to the analysis of the same family for linear CP by Jansen et al. [13]. The family contains the classical primal-dual affine scaling algorithm of Monteiro et al. [37] and the primal-dual Dikin affine scaling algorithm of Jansen et al. [14] as special cases. In the analysis we make use of wide neighborhoods of the central path. The definition of the neighborhood is equivalent to the so-called infinity-norm neighborhood as used for linear programming

B. Jansen et al./Mathematical Programming 78 (1997) 315-345 317

in [25], [30] and [1], among others. The introduction of such a type of neighborhood has

two important consequences. The first is that we may not hope for a complexity bound that is better than O(n In l / s ) , where n is the number of variables and e the required

accuracy. The second is that we need to consider separate components of the vector of

complementarity products instead of its norm. The latter aspect forces us to reconsider

the use of several smoothness conditions (as self-concordance) for this type of analysis. The analysis of our algorithms nor the algorithms themselves use any barrier functions.

This leads us to make assumptions on the smoothness of the mappings involved. A

version of the scaled Lipschitz condition appears to be a natural choice. We introduce a new condition, Condition 3.2, which is trivial in linear cases. The advantage of this

condition is that it does not require the monotonicity of the mapping while it seems to be indispensable for the scaled Lipschitz condition even if the mapping is linear. Therefore, we consider in this paper mappings for which the Jacobian matrix is a so-called

P , -matr ix (see Kojima et al. [20]). A different approach to the analysis of primal-dual large neighborhood algorithms was taken in Nesterov and Todd [41]. They reformulate a given convex programming problem in a primal-dual self-scaled conic reformulation and assume the existence of a logarithmically homogeneous barrier for the cones

involved. This assumption holds if the original convex program has a self-concordant barriers (Proposition 5.1.4 of [40]).

This paper is built up as follows. In Section 2 we introduce the mathematical fommlation of the CP and define some notation. In Section 3 we introduce a search mapping and derive some general results. Section 4 introduces the family of algorithms we consider and provides a complete complexity analysis. In Section 5 we consider several smoothness conditions, their relationships, and their usability for our purposes.

2. Problem statement

Let us consider the NCP:

(CP) Find ( x, s) ~ R 2" such that s = f ( x ) , ( x, s) >1 0 and xTs = O.

Here f is a C t mapping from R" to R". We denote the sets of feasible and interior-feasible points of (CP) as follows:

Y : = {(x, s) E N2,,: s = f ( x ) , ( x , s) >1 0},

y 0 := {(x, s) E 1~2,: s = f ( x ) , ( x , s) > 0}.

We assume that ~-o is not empty, or stated otherwise, that an interior point exists. In the literature the linear complementarity problem (LCP) has gained much atten-

tion, see e.g. [2]. In this special case the mapping f is given by

f ( x ) = M x + q ,

for M E N,,x,, and q E N". Special conditions on the matrix M have been developed to

guarantee the existence of a solution. These classes include PSD (M positive semi-deft-

318 B. Jansen et al. / Mathematical Programming 78 (1997)315-345

hire), P ( M has positive principal minors), P , (see below) and CS and RS (column- sufficient resp. row-sufficient). Some known implications are

P S D c P , c C S , P E P , , P , = C S A R S ,

see e.g. [2,20,46]. In this paper we will be interested in the class P , ,

Definition 2.1, Let K>~ 0. The matrix M e R "x" is in P , ( K ) if

(l+4K) ~, ~i (MsC) i+ iel+(~)

where

l + ( s ~ ) = { i : ~ i (M~) i>O} ,

i e l_(~)

I_(~:) = {i: ~i(MCs)i<O}. The matrix M is in P , if it is in P , ( K ) for some K.

Throughout this paper, we impose the following condition on the mapping f

Condi t ion 2.2. There exists a constant K >/ 0 such that the Jacobian Vf (x ) of the mapping f is a P , ( K ) matrix for all x > 0 .

For ease of notation we allow ourselves to perform componentwise operations (like multiplication and taking powers) on vectors. For instance, by d = 1/--f/s we mean the

vector d obtained from d; = ~ . Wherever this abuse of notation might be inconve- nient, we use capital syllables to denote the diagonal matrix obtained from a vector; for instance D = diag(d). Similarly, [ x t denotes the vector consisting of ] x i I s. We also define the mappings v, Vm~ n, vm,~ and oJ for every (x , s ) E 9 - ° , which are continuous with respect to (x , s) e , W ° :

x, s) := ( xs) '12

vmi,(x, s) := min{vi: i = 1 , 2 . . . . . n}, (1)

um.~(x, s) := max{v i" i= 1 ,2 . . . . . n},

oJ( x, s) := Vmi./Vmax <'< I.

We often use the symbols v, Vmi n and urea x to denote v(x, s), vmin(x, s) and vmax(x, s), respectively. Also if o., has only one argument, e.g., 69(p) then it denotes

min{pi: i= 1,2 . . . . . n}

o~(p) := max{ Pi: i = 1, 2 . . . . . n} <~ 1

for a positive vector p > 0. For a given p e (0, 1), we employ the following set as a neighborhood of the central path, which plays a key role in our analysis:

~"(P) = {(x, ') e ~ ° : . , ( x , s) >~0).

B. Jansen et al. / Mathematical Programming 78 (1997) 315-345 319

3. The search mapping and its properties

Suppose that we have an interior-feasible point (x, s ) ~ 9 -°, i.e., s = f ( x ) and (x, s ) > 0. Given displacements Ax and As, let us define

x(O) := x + O/ix, (2)

~is(O) := A s + g ( O ) / O ,

s(O) := s + O a s + g ( O ) = s + O a s ( O ) , (3)

g ( 0 ) := f ( x + O/ix) - f ( x ) - OVf(x) Ax. (4)

The mapping s(O) above was introduced in [27] as a modification of the one in [36] for the convex progamming problem. The mapping g(O) contains the second-order effect introduced by the displacement. Obviously, we have (x(0), s(0)) = (x, s). We require

our search-directions to satisfy the following equation:

- V f ( x ) Ax + As = 0. ( 5 )

Then we see that

s( O ) -- f ( x( O ) ) = ( s + OAs) + ( f ( x + 0 A x ) f ( x ) -- OVf( x) Ax)

- - f ( x + OAx)

= ( s - - f ( x ) ) + O( -- V f ( x ) ~ix + ~is)

- 0 (6)

for every 0 >/0, i.e., feasibility is preserved by construction. Consequently, if we find 0 such that (x(O), s(O)) > 0 then (x(O), s(O)) is also an interior-feasible point.

The term g(O) is continuous and higher order in 0, i.e., lim0_~ 0 {[ g(O)ll /O = 0; hence we have

ds(O) b

l =/is. dO [0=0

When the mapping f is linear, the term g(O) vanishes and we obtain

x ( O ) = x + OA x , s ( O ) = s + O/is ,

which means a usual line search mapping. Our analysis starts from representing the componentwise complementarity product

x(O)s(O) where x(O) and s(O) are given by (2) and (3):

~ ( 0 ) s ( 0 ) = ( x + 0 a ~ ) ( s + O/is(O)) = x y + O( sax + xA~( O)) + 02/ixas( O). (7)

We introduce the primal-dual scaling which is usual in the analysis of interior point

methods (see e.g. [5]):

d= (xs-~)~/-~. The important property of this scaling is that it maps d - t x = ds = v; this makes possible to express the scaled search-directions as orthogonal components of a vector. In

scaled space the search-directions are denoted by

320 B. Jansen et al. /Mathematical Programming 78 (1997) 315-345

px:=d-'Ax, ps:=dAs=DVf(x) Ax. (8)

Let us define the vector p,, as

P, := Px + Ps-

Part of the following lemma is similar to [13, Lemma 4.1].

Lemma 3.1. Let Px, Ps, and p~ be as defined above. Then, we have

II Po II 5, (i) --KII polI2 <~ AxWAs=p~p~<~ ~ ,

(ii) II zlx~s II ~ = II P,p., 11 ~. ~ ¼(] + 4,~) H p~ 112.

Proof. The vectors p~ and p~ satisfy

-Dgf(x)Op.,.+Ps=O, p.r+p.,.=po. Applying Lemma 3.4 in Kojima et al. [20] gives

Tp 2 p, ~ > / - K II Po II (9) Note that lemma applies since the P . (K) property is preserved by pre- and post-multi- plication with a positive diagonal matrix (cf. [20, Theorem 3.5]). Defining q,, = Px - P~, it holds

T =¼(11 II 5 2 , 2 P.¢Ps p, -Ilq,,ll )~<Tllp~ll

Using (9) we further obtain

IlqollZ~< (1 +4K)II p. II 2.

Finally, since Px P~ = ( P~ - q~)/4,

II p.~p~PI~ <~ ¼max{ IIp. Ilg, II qo Ilg}

kmax{ N poll 2, tl II } ~<¼(1 +4K)II poll 2

This completes the proof. []

For convenience in further discussions we define the mappings:

p.~( O) :=dAs( O) =p~ + d( ~ ),

p , ( O ) : = p , + p . ~ ( O ) = p ~ + d ( ~ ) , (10)

, , ( 0 ) := ( x ( 0 ) s ( 0 ) ) ' / - L

Using these definitions we may write

sax+xas(O) = ,,p~,(0), axas(O)=p.,.p.~(o). Hence Eq. (7) can be rewritten as

v (0) 2 = x ( 0 ) s ( 0 ) = v2 + Ovp~,( O) + 02p.rp~.( O). (1 I)

B. Jansen et aL / Mathematical Programming 78 (1997) 315-345 321

The reader may verify that (11) is very similar to the corresponding equation in [13] for the linear case. Our next task will be to prove results analogous to Lemma 3.1 for the vectors Ax and As(O). To be able to do this, we impose the following further condition

on the mapping f .

Condition 3.2. There exists a p ~ (0, 1), ~9 > 0 and a 7 t> 0 such that if (x, s) ~ J /"(p) :={(x, s) ~.~-°: to(x, s)~>p}, and if (Ax, As)~lt~ 2~ satisfies (5) and II x-~Ax+ s-tAs II -<< l, then

[I D( f ( x + OAx) - f ( x) -- OVf( x)Ax)11 -<< 70 II OVf( x)zlxll. (12)

for every 0 E (0, (9]. Here to(x, s), p~. and p~(O) are defined by (1) and (10).

The following observation is useful in the analysis and follows by definitions (8) and (10): The inequality (12) is equivalently written as

Ilps(O)-p~ll<7Ollp, II and p , ( O ) - p v = p s ( O ) - - p . ,.

Also, notice that the inequality (12) is equivalent to

d( f ( x + OAx) - f ( x) ) 0 - g f ( x ) A x <~7011d~7f(x) Axll , ( t3 )

or stated otherwise

d-~--~ - < 7o If gas II

for every 0 > 0. Note that the condition depends not only on the mapping f but also on the displacement (Ax, As) used in an algorithm. If the mapping f is linear, however, the above condition holds with ¢9= + ze, 7 = 0 and with every p ~ (0, 1), independent of the search-directions. In Section 5, we will show how Condition 3.2 is related to smoothness conditions on the mapping f and certain primal-dual interior point algo- rithms.

Using Condition 3.2 we derive the following lemma.

Lemma 3.3. Let p.~, p,(0), p,, and p~( O ) be as defined above and let Condition 3.2 hold. Then

(i) II p,,(0)II ~<(1 +yOV1 +2K)II pull,

(ii) Ip~(O)-p~I <70~+2~l lp~ l l e< 7 n(1 + 2K)

to(I p.I) 01p~l

for any ( x, s) ~ At (p) and 0 ~ (0, 6) ]. Here the function to is defined by ( 1 ).

Proof . The relation p,, = p , + p, and (i) of Lemma 3.1 imply

]1P,.[[ 2= [[ p~,ll 2 _ 2 p l p _ 11 p,.ll 2< (1 +2K)I [ p~II z (14)

322 B. Jansen et al. / Mathematical Programming 78 (1997) 315-345

Since Condition 3.2 holds, we have

II po (0 ) II ~< 11 po II + II p,( 0 ) - p, II ~< II p, II + ~,0 II p, II

~< (1 4- y0vtl +2K)II poll

which is the assertion of (i). Similarly,

l e o ( 0 ) - p o l = I p , ( 0 ) -p.,I ~<'/oll p,[I e

Finally. we see that

II p~,ll e = II poll P2XP,,~ II p, II I p.- ' I [ pol

<~ Lpol ]Pol. I polm~, OJ(I Pol)

This completes the proof of (ii). []

We may now prove the following lemma.

L e m m a 3.4. Let P x, P,., P~( 0 ), p,, and po( O ) be as defined above and let Condition 3.2 hold. Then, for every (x, s) ~.,4/ '(p) and 0 E (0, /9], we have

(i) - ( l+~,O)( l+2K) l lp , , l l2<~AxW/is (O) = p.~ps(O)

~¼(1 + 0 ~ 4 F T 5 - ~ ) 2 It poll 2.

(ii) Il ax / i s ( O ) lI~ = II p.,.p,( O ) ll~ <~ (¼(1 + 0 ~ ) 2

+(1 + 70) (1 +2 ,~ ) ) I I p , ll 2

Proof. (i) By (4), (5) and (10), we have

o ) Using also definition (8) we obtain

(/Ix)T/Is( 0 ) = p~ p,( 0 ) = I ( Ax)T ( f( X + O/IX) -- f(X)).

From (13) we derive

II d ( f ( x + OAx) - f ( x ) ) I I <-N (1 + y 0 ) 0 II dVf (x )Ax[[ .

Just as in (14) it holds

[I p.~ll 2 = II p, ll 2 - 2 p r p , - II p.,.ll 2 ~ (1 + 2 K ) I I poll 2. (15)

Consequently.

1 I A x h s ( O ) t = ~ l( A . r ) r ( f ( x + OAx) - - f ( x ) ) I

1 ~ II d- '.ax II II d(f ( x+ Oax) - f ( x ) ) I I

B. Jansen et al. / Mathematical Progrwnming 78 (1997) 315-345 323

1 < ~11 p.,. II(l + 3,0)0 II p, II

~< (1 + 3,0)(I +2K)II p~ll 2, (16)

where the last inequality follows from (14) and (15); this proves the left inequality in (i). For the right we can be better. We proceed as in the proof of Lemma 3.1. By (10) we have

Po(O) =P.,.+Ps(O). Letting %(0 ) = p.,. - Ps(O), we obtain the following bound:

p~p,(O) =¼(11 p,(0)]] 2 - ]] qt,(0)]] 2)~< 111 po(O)]] 2

~< ¼(1 + 03, 1 +l/'i--+2~K ) 2 [I Po II 2 (17)

where the last inequality follows from Lemma 3.3. For (ii) observe that combining (16) and (17) leads to the bound

~(1 + 70 1 +¢FT2-~K)z II p.[[ 2 + (1 +3 ,0 ) ( I +2,<)11 poll 2 ¼ II q.( 0 ) II 2 ~< ~

Hence, using

PxP,( O) = ¼( po( O )2 _ %(0)2)

we have

II p.~ps(O)I1~ < ¼max{ 11 p,(O)IIZ, II q~(O)II~}

-<< ¼max{ II Po(O) II 2 IP %(O) II 2}

{(I + 3,0 1 +¢/T~-K,¢ ) 2 II p. 112 + (! + 3,0)(1 + 2K) IIp. II 2

This completes the proof of the lemma. []

The above lemmas give us some tools to analyze primal-dual algorithms for NCP. In the linear case, Lemma 3.1 is important to provide the polynomiality of many primal-dual algorithms (see e.g. [18-20,24-26,32-35]). On the other hand, Lemma 3.4 suggests us that these analyses may be extended to nonlinear cases.

Before proceeding we mention that for the monotone NCP the bounds in Lemma 3.4(i) can be improved by using

1 zaxTas(O) = -~7(Oax)T(f( x + Oax) - f ( x ) ) / > O.

4. Primal-dual affine scaling algorithms

4.1. Development of the algorithm

Up to this point, our analysis was general, in the sense that we did not specify our search-directions. The conditions imposed till this point are feasibility (5) and Condition

324 B. Jansen et al. / Mathematical Programming 78 (1997) 315-345

3.2. We will now derive a family of affine scaling directions as in [13]. The directions are obtained by minimizing the complementarity (suitably scaled) over an ellipsoid, which is the idea of Dikin's primal affine scaling algorithm [3].

Consider the problem

minimize xTs subject to ( x, s) ~ 9-.

The NCP is equivalent to the above problem in the sense that (x, s) is a solution of the NCP if and only if it is a minimum solution of the above problem with objective value zero. According to the search mapping defined by (2) and (3), the complementarity x(O)Ts(O) is obtained by

x( o ) Ts( o ) = xT s + o( s h x + xTa4 0)) + 0 2axhs(0).

It follows from definition (4) of g that

x (O)Ts (O) - xVs d( x( O)Ts(0)) =-- lira = sVax + xTAs.

dO o-,o 0

The above relation gives us an idea for the determination of the search-direction ( ax , As). Let r be a fixed nonnegative constant (the order of scaling in the algorithm). Taking account of the equation - V J ( x ) A x + A s = O , we consider the following problem, which is essentially the same as the one given in [13]:

Minimize ((XS)~)T(x- ha" + S- aS)

subject to -- V f ( x ) a x + As = O,

II x - ' ~ x + s- 'asll <~ 1.

Obviously, the solution of the above system satisfies the assumption imposed on (Ax, As) in Condition 3.2. If we take r = 1 then the solution of this subproblem (Ax, As) minimizes the derivative

d( x( O)Ts( O) 0=0"

It is not difficult to find that the solution of the KKT (Karush-Kuhn-Tucker) optimality conditions for the above problem satisfies the following system:

-- V f ( x ) A x + As" = 0, (18)

u2r+2

s A x + x A s [Iv 2r II" (19)

The reader may observe that in case of linear or quadratic programming for r = 0 this system exactly determines the well-known (classical) affine scaling direction of Mon- teiro et al. [37]. From this moment on we let p,, have the following definition:

.u2r+ I

Po== [Iv 2 r l l (20)

B. Jansen et a l . / Mathematical Programmbzg 78 (1997) 315-345 325

Using the definitions in (8), the above optimality system can be rewritten as

- D V f ( x) Op~ + Ps=O" (21)

p.~ +ps=pv. ( 2 2 )

Since the Jacobian Vf(x) is a P , (•) matrix the system has a unique solution for every

x ~ ~" (cf. [20, Lemma 4.1]). We can now state our conceptual algorithm which is based on [13]:

Primal-dual affine scaling algorithm

Input (x °, sO): the initial pair of interior-feasible solutions;

r >/0: the degree of scaling; Parameters

e is the accuracy parameter; 0 is the step size;

begin X : = xO; S : = SO;

while xVs > e do Calculate Ax and As from (18) and (19); Compute the search mapping (x(O), s(O)) by (2) and (3); Find a suitable 0 such that (x(0) , s (0 ) )> 0 and x(-O)Ts(-O) < xVs (cL Corollary 4.7, Remark 4.8 and Theorem 4.10); x ,= x ( 0 ) ;

s := s ( ~ ) ;

end end.

4.2. Convergence results

We will analyze the behavior of the family of primal-dual affine scaling algorithms as follows. First, we will give some general results for the case r >/0. These regard the complementarity and the feasibility of the iterates after a step. Then, we will analyze the complexity of algorithms with r > 0; finally, we consider the classical primal-dual affine scaling algorithm (with r = 0) of Monteiro et al. [37]. Naturally, we will impose Condition 3.2 for those Ps and p.~(O) generated in the algorithm under consideration. Hence, in this section we will further assume that Pv is given by (20) for certain constant r~> 0, and that p.,, p,, F,(0), etc.. are obtained from solving (21)-(22).

From Lemma 3.3 and Lemma 3.4 we obtain the following result, which is a key for observing the behavior of v( 0 )2 = x( O )s( 0 ).

326 B. Jansen et al. / Mathematical Programming 78 (1997) 315-345

L e m m a 4.1. Suppose that Condition 3.2 holds. Then, for every (x, s )~ Jlr( p) and 0 ~ (0, @ ], we have

(i) ]l p.(O) /

(ii) - [1 +

(iii)

(i~)

Proo£ The vector p,,

II v 2~*' II I1 p~ I1 II l)2r I''"~ ~ II V II x ~ II v 11,

,o( t po t) = ,o(, ,) ~'+'

l p ~ l = - P o .

Thus the lemma follows from Lemmas 3.3 and 3.4.

I1 ~ (1 -4- 0~, 1 + x / T T ~ d ) II v l l~ ~ (1 + 0Tvtl -4- 2 K ) [ I v i i ,

oT(n( I+2K) ) v 2r+2 ~(~),.r+, , ~ . < v p . ( O )

( O Y~/n(I + 2K) ) v2"+2

~< - 1 - o ) (v )2 ,+ , Ilv 2r II'

p~p~(O) ~< ¼(1 + 0 ~ 1 + 2 K ) l l v l 1 2 ,

H p~p~CO)ll~<(¼(l + 0 v ~ r ; - ; - ~ ) ~ + C l + 0~ ) (~ + 2 ~ ) ) II vllg

is given by p~ = - v 2~+ '/11 v 2~ [1. Hence we have

[]

The following lemma is completely equivalent to Lemma 4.2 of [13]. It concems a

technical result used in estimating the new complementarity.

L e m m a 4.2 (Lemma 4.2 of [13]). Let vE ~ be an arbitrary vector.

eTv2 r+ 2 II v II 2 (i) I f O ~ r ~ l , then II vzr I ~ - - - T ~ - ---(-~-n

eVv2,+2 w(v) 2"-: (ii) If 1 <<. r, then II v 2' I~ ~< - ~ II v II 2

Let us introduce some notation:

~. '= yvq + 2K ,

~ . _ t ° (v ) 2r+'

2 / ' n '

• yv/n( 1 + 2 n )

o~(,~) ~-~+' 2 ~ " (23)

Notice that ~0 and ~ depend on v; however, in this section we are concerned with the

behavior in one iteration so v can be considered to be fixed. Later we will derive

uniform bounds for these quantities, We are now ready to show how the error in

B. Jansen et al. / Mathematical Programming 78 (1997) 315-345 327

complementarity can be reduced by taking a suitable step size 0. Combining Lemma 3.4, Lemma 4.1 and Lemma 4.2 with Eq. (11) we obtain the following lemma (cf. [13, Lemma 4.3]).

Lemma 4.3. Let ~/, 70, -~ be as defined in (23). Then (i) I f 0 ~< r <~ 1 and 0 <~ min{O, 1 / (2~) , 1/(~/-~(1 + 70)2)} then (o) x(O)Ts(O)=llv(O)ll2~ 1 - ~ - - ~ n Ilvll 2

(ii) I f 1<~ r a n d 0<~ min{O, 1 / (2~) , w(v)2r-2/(~--n(l + 70)2)} then

x(O)Ts(O)=IIv(o)II~ I 4v~-n 0 IIvII 2

Proof. Since 0~< 1/(2~') we have 1 + 0~< 1 + 70. The new complementarity is

2 T err(O) 2 eTv2 +OvTp~(O) +O pxp.~(O) ( b y ( l l ) )

e'rv 2 ~ ÷ 2 1 < Ilvll 2 - 0(1 - 0~-) llv2 r I-----/+ ° 2 4 (1 + 09)I lv l l 2 (by Lemma 4.1 (ii))

iivll2 lloll 2 1 1 - 0-~---~- n + 0v/~-(1 + 70)2 4 (1 + 70) 2 II vII2

= 1-- Ilvll z

The proof of (ii) is similar. []

The following lemma gives us a bound 0 such that (x(0) , s ( O ) ) ~ 3 -° for every 0 ~< 0, i.e., the new iterate is interior-feasible.

Lemma 4.4. Suppose that Condition 3.2 holds. Also let

~:=~(1+~)~+ 1 + ~ (I+2K).

I f ( x , s ) ~ . , ~ ( p ) and

0 ~ < 0 < m i n (9, 2~-' 3 ( 1 + r ) ' ~ 1 + - -

Let r >1 0 be a given constant and let Tr, 70, -~ be as defined in (23).

(24)

16n~ 2 4gn-~

(25) then (x(O), s(O)) e 9 -°.

328 B. Jansen el al . / Mathematical Programming 78 (1997) 315-345

Proof . The proof is essentially the same as a part of the proof of Theorem 5.1 in [!3]. From the fact that the search-direction satisfies (5) we have feasibility from (6). We still need to show that (x (0 ) , s (0)) is interior-feasible. The first upper bound 0 ~< O follows

from Condition 3.2. Due to the second bound on 0 it holds 1 + 0~, ~< 1 + O. From the second bound on 0 and (iv) of Lemma 4.1 we get

[[ p.,p,(O)]1~ ~<~2 I[ v lift.

Relation (11) and Lemma 4.1 imply then u2r+2

v(O)2<~ v 2 - 0(1 -- 0 " ~ ) ~ + 0 ~ 2 IIv II~e,

u 2 r t 2

- O~211vll~e, v(O)2>~v 2 O ( l + O ~ ' ) l l v 2 r l L

for every 0 E (0, O ]. Now let ot be a given positive number and consider the function

fr+ 1

4 , ( t ) = t - 0o~ I1 v zr I - - - - ~ " (26)

urea x ] if Then one easily verifies that 4, is monotonically increasing on the interval [0, 2

0~< II vzr I1/((1 2r + r)aw.ax) for every ~ > 0. Note that

II ~,2~ I1 II v 2r II 2v/~-n v~q, 2fn-n o) (v) 2~

0-rr) Urea,, ( l+r ) ( l+OTr)v~a ~ 3(1 + r)vm2~ 3(1 + r ) (1 + r ) ( l - -- 2~ ~> -- 2~ >~

hence if we enforce the third upperbound in (25) the largest coordinate v(O)ma,, of v(O) and the smallest coordinate v(O)mi , can be estimated as follows:

v(O)2m~<~vZax_O(l_O-ff) Vm2~+2 , ~ 2 2 (27) + u "q U~a ~ ,

2 2 _ 0(1 + 0 ~ ) v~i~n+2 ,,~,~ 2 (28) v( 0).,,~° >t v~,~ II v z' II " n vm.~.-

By the continuity of the mapping v, if v ( 0 ) ~ i . > 0 for every 0 ~ ( 0 , t?], then (x(O), s(O))> 0 and consequently (x(O), s ( O ) ) ~ r o for this interval. Hence it suf-

fices to show that v(0)2min > 0 if 0 satisfies (25). Dividing the relation (28) by 2 v~ , gives us the following condition to ensure that

v(O)~. > O: 2r ,02

Umin 0 2 _ ]> 0 1 - 0 ( 1 + 0~') l] v2r [i 0)(0)2

Since 2r 2r v.,., vff, i, 1

and 1 + 0~-~< 3 /2 the condition certainly holds if

3 "q~ 1 - 0 ~ n --02 >0.

B. Jansen et al, / Mathematical Programming 78 (1997) 315-345 329

It is easy to check that the last upper bound in the lemma ensures the inequality above.

This completes the proof of the lemma. []

By putting the above together, the following bound on 0 enjoys both the results in

Lemma 4.3 and Lemma 4.4:

1 l 2 '

0 ~ < 0 < m i n ~9, 2 ~ ' ~ / ~ ( 1 + ~ ) 2 ' 3 ( l + r ) '

1 + - - . ( 2 9 ) "~ 16n~ 2 47n-'~

If the mapping f is linear, we can take ~9= + ~ and y = 0 in Condition 3.2, and consequently y = 7r = 0. Furthermore, if f has the monotonicity property, Condition 2.2 holds with K= 0 . In this case, we obtain the bound 5/4~<~2 ~< 3, since 0 < 70~< 1. Thus, Lemma 4.3 and Lemma 4.4 above almost coincide with Lemma 4.3 and a part of Theorem 5.1 of [13] for linear monotone complementarity problems.

4.3. The polynomial convergence for r > 0

This section is devoted to analyzing the polynomiality of the class of (generalized) primal-dual affine scaling algorithms for r > 0. For a given parameter p E (0, 1), each algorithm in this class generates a sequence of iterates {(x k, sk): k = 1, 2 . . . . } satisfying o~(v k) ~ p for every k >i 1 where the function w is given by (1). This condition on the iterates is in fact equivalent to requiring that the iterates are in the wide neighborhood defined by infinity norms that has been used in the analysis of interior point methods in e.g. [25] and [1]; the wide neighborhood is given by

xTs / 3 ~ ( 0 , I) , p . : = - - , (1 - [3 ) i x<~x , s i<~( l + f l ) l x .

n

Suppose that the current point belongs to (x, s) E j l r (p) , i.e., (x, s ) ~ 9 -° and w(v) >1 p. Our algorithm determines the next point along the curve (x(O), s(O)) given by (2) and (3) by choosing a step size 0. The following theorem ensures the existence of 0 > 0 for which (x(0) , s(O))~.9 r° and o)(v(O))>i p for every 0 ~ (0, 0].

Theorem 4.5. Let r > 0 be a given constant and let ~-, -0, -~ be as in (23). Suppose that Condition 3.2 holds. If (x, s) ~ . / / (p) , 0 satisfies (29) and

l-p2" p2(l _o2,.) O~<O~<min 2~(I+p2¢)' 2~-~+p-~-~n ' (30)

then ( x( O ), s( O )) ~,A~( p).

330 B. Jansen et al. /Mathematical Programming 78 (1997) 315-345

Proof. The part (x (O) , s ( O ) ) ~ o is obvious from Lemma 4.4. Hence we need only

show that w(v(O)) >t p, i.e., p2v(O)2m~x <~ v(O)2mio. Utilizing the relation Vmi . = to (v )u~ x >1 p v ~ , , the same discussion for finding the

bounds (27) and (28) leads to the following relation: 2r 2 r + 2 - -2

). ".;a~ - - II v ~ II

Hence, from (27) and (31) we derive a sufficient condition for 0 as follows:

2 _ 0(1 - 0~-) v~r~2 ,,z'--z 2 vm.~ ~ + v "O Vma~

2r 2 r + 2 "~2 "~ 2 2

By rearranging this inequality, we have

0 ~ ( 1 +P~) vL~ ((1 - 0~ ) - (1 + 0~-)p 2~) p2 ~ ~

Since II ~2~ II ~< ~ V ~ x , we obtain the bound

p~(( l - 0~-) - (1 + 0 ~ ) p ~') 0~< ~(1 + p~)¢7

Using the first bound in (30) we find that 0 will certainly satisfy this inequality if

p2(l-p~) 0<~

2 ~ ( 1 + p~)Cg

Thus we obtain the theorem. []

(31)

We are now in the position to derive the complexity of our algorithms. The following theorem can be derived from Lemma 4.3 and the above theorem.

Theorem 4.6 (Theorem 5.2 of [13]). Let r > 0 be a given constant and let 0 < p < 1 be

given. Suppose that Condition 3.2 is satisfied. Let e > O, ( x °, s v) G ./U( p ) be given

and let 0 satisfy the conditions in (29) and (30). Then the primal-dual affine scaling

algorithm with order o f scaling r stops with a solution ( x" , s* ) for which ( x ~ )Ts* ~< and w(v :" ) >1 p holds, after at most

4v~- ( x° ) r s ° (i) - - I n iterations if 0 < I" <~ 1,

0 e

4¢~ ( x°) Ts ° (ii) p2r- ~------~ln c iterations if 1 < r.

To be more specific about the complexity, we have to check which of the various conditions on the step size 0 is strongest and how 0 depends on the input parameters.

B. Jansen et al. / Mathematical Programming 78 (1997) 315-345 331

This will be done below. It is easy to verify the following bounds

p2r+ I l 7//n(1 + 2 x ) 2----~n ~<~0~< 2---~n ' 7¢n(1 + 2 K ) ~<~'~< p2,+,

Using also n t> 2, we get from (24)

5 ~ 2 < 1+ + 1 + ( l + 2 K ) < 3 ( l + K ) .

We analyze the bounds in (29) consecutively. Notice, 1 p2,+ I

- - ~ . ~

1 1 1

• ~'n (1 + ~ ) 2 > ~- ' (I + 1/(2~/2-)) 2 >7 2----~-n '

2~ , , , ( , , ) ~' 2 ~ ~, > - -

3(1 + r ) 3(1 + r )

We have

(32)

3w(v) 3

4fn'n-~-~- ~< 4v~ ~ < 1,

so using the fact that the function 4,(t) = X/1 + t z - t is monotonically decreasing we obtain

t o ( v ) [ . / 9(o(v) 2 3w(v) o)(v) p l + - ( A - l )

"~ / V 16n~ 2 4~n~ >t - >~ "O 5 v q + K

For the bounds in (30) we obtain l - p 2r (1 - pZr)p 2~+' ( I - p Z r ) p 2r+'

2~'(1 +p2r ) >~ 2Ten(1 + 2 K ) (1 +p2,-) >~ 4yen(1 + K)(1 +p2,)' ,~(l _p2,) 0~(1-p~')

2~;(1 + p~-)~ >~ 6(1 +K)(1 +p2)Cfi • Thus we obtain the following result as a corollary of Theorem 4.6.

Corollary 4.7. Let us take the situation as in Theorem 4.6, besides that we speci fy p= l /v~.

(i) I f 0 < r <~ I a n d n >~ 2 w e m a y c h o o s e

]_2_, 1_2-, } 0=rain O, l>d , , ( l +K) ' 18&-~l +K) '

332 B. Jansen et al. / Mathematical Programming 78 (1997) 315-345

hence the complexity of the algorithm is

O ~nmax O ' 1 - 2 -~

(ii) If r = 1 and n >i 2 then we may choose

0 = r a i n O, 2 4 y C n ( l + K ) ' 3 6 ~ n ( l + K ) '

hence the complexity of the algorithm is

( { 1 ~ / n ( l + K ) m a x { 7 , l ~ - - ~ K } } l n - - O Cn-nmax ~ ,

(iii) If 1 < r and n st~ciently large we may choose

{ 1 1 }

0 = m i n O, 22~+2.,/¢n(1 + K ) ' 36v/-n(1 +1<)

in

hence the complexity of the algorithm is

o 2r max -d" +K)ma×{2r , In .

R e m a r k 4.8. Notice that the neighborhood ./Y(P) fulfills two roles in the above analysis. The first is as the admissible region in which the algorithm generates the sequence as in Section 5 of [13]. In this case, the initial point (x °, s °) prescribes the possible choices of p so that p ~< ~o(x °, sO). The second is as the region where the nonlinear mapping f can be approximated suitably as in Condition 3.2. If the mapping f is linear, however, Condition 3.2 holds with every p ~ (0, 1); hence we can start from an arbitrary initial point (x °, s °) ~.~-0. In addition, 69 = + ~ and 3, = 0 in those cases, and ~c = 0 if f is monotone; hence the above corollary corresponds to Corollary 5.1 of [13] if f is linear and monotone. Observe also, that for r>~ 1 the complexity bound gets worse as r increases, and similarly for r < 1 as r decreases to zero. A final remark to be made is that tlie actual value of K (which might be hard to compute) is not required in practice, although the theoretical step size and complexity depend on it. In an implementation we can just compute the maximal step

0o,0 := for every [0,

which is guaranteed to be as large as the theoretical step. However, we should take another strategy in the case r = 0 which wiIl be considered in the next section.

4.4. Polynomial complexity if r = 0

In this section we show that, with suitable step size, the classical primal-dual affine scaling algorithm of Monteiro et al. [37] can be applied to NCPs satisfying Conditions

B. Jansen et al,/Mathematical Programming 78 (1997) 315.345 333

2.2 and 3.2 with a polynomial complexity bound. We believe that this is the first proof

of polynomial convergence of the affine scaling algorithm for NCPs. So from now on we assume that r = 0. It is easily verified that Lemma 4.3 and

Lemma 4.4 still apply in the present case. Theorem 4.5, however, is not valid for r = 0.

In fact, by taking the limit in (30) as r tends to zero one obtains that the step size 0

becomes zero. Below we show that by making a positive step, (i.e., 0 > 0) w(v) may well decrease, but the decrease can be bounded from below. This is the contents of the

next lemma.

Lemma 4.9. Let ~-, ~ be as defined in (23) and ~ as in (24). I f (x, s) ~ .9-0 and

1 1 2 ~ 0 ~ < 0 < m i n O, 2~- ' ¢ ~ ( 1 + T 0 ) 2 ' - - - 3 - - - '

1 + (33) 16n~ 2 4 ~ ~

then ( x( O ), s( O ) ) ~ y o and

1 + ,o(v 2) 1 + ¢.o(v(0)2) >~ 02 (.~21/~-n + 2~. ) . (34)

1 + - 0(1 + 0 ~ )

Proof. It may be clear from Lemma 4.4 that the given bounds (33) on 0 guarantee the feasibility of the new iterate (x(O) , s(O)). So it remains to show that (34) holds. First observe that (27) and (28) also hold for r = 0. Hence, by using the notation ~o 2 = w(v 2)

= a / f l with ce and /3 such that a e <~ u z <.< fie, one has

(1 - o(1 + o ~ ) / & ) ~ - o ~ : ~ ~o(~,( o ):)

(l - o ( l - o ~ - ) / ¢ g ) / 3 + o ~ : ~

, o : ( ~ - o(1 + o ~ ) ) - o ~ : ~

( ~ - o(1 + o ~ ) ) + o ~ : v 7 + 2 o ~

w2(x/n-n- 0(1 + 0 ~ - ) ) - O~2~-n- -20z-~

>~ (¢7 - o(I + o~) ) + o ~ : ¢ g + 2 o ~

After adding one to both sides and rearranging the terms the inequality (34) follows. []

Now we are ready to prove the polynomial complexity. We will denote by (x (k), s Ikl)

the iterate after k iterations and for simplicity we use the notation ¢o k := w(x Ik), s(~0.

334 B. Jansen et al. / Mathematical Programming 78 (1997) 315-345

Theorem 4.10. Let an initial interior point ( x (°~, s (°)) e.9 r ° , with 7r2-p <~ to o <~ 1 and

0 < 6 < (X(O))Ts (°~ be given. We define parameters L and ~" as follows:

L : = l n (x(O))Ts (°~ 6 4 { y } 2 , ~ ' : = - - ( l + l ¢ ) + - - ~ / n ( l + K ) + - - ,

to~ w o nL

We assume that L >~ 1 and n >1 2. Let t be the smallest real number in the interval (~-, "r+ 1 / 2 n L 2] such that K : = 2tnL z is integral. I f 0 := 1 / ( t f n L ) < ~ (9, then after K=O(nL~-/(oJ~)) iterations the algorithm yields a solution ( x ' , s*) such that (x ~)Ts* ~<e and w ( x ~ s ' ) > l w ~ / 2 > ~ p .

Proof. For the moment we make the assumption that in each iteration of the algorithm the step size 0 = l / ( tv~nL) satisfies the conditions of Lemma 4.9. Later on we will justify this assumption. Taking logarithms in (34), and substituting the given value of 0,

l + w o 2 ( 82(-~2v/-~n + 2 ~ ' ) ) ln l+ to,2~kln 1+~--~+0~)

~<k - 0(1 + o~)

(t2nL ~) - ' ( ~ + 2 ~ ) ~ ~<k

- 2( ,vTL)- '

-~2 + 2~- - k

tL( tnL - 2) '

where we used ~" < ~ and 0~-< I in the third inequality. Hence we certainly have cok w~ /2 >10 as long as

7 2 + 2~ I + w o k tL( ntL - 2) ~< In 1 + to(I/2 " (35)

Since 05(o-) := In(( 1 + o- ) / ( 1 + ~r/2)) is a concave function, and q~(0) = 0, 05(1) >/ 1/4, it holds

1 + w~ oJ~ In > ~ - - . l + ,o~/2 4

As a consequence, inequality (35) is certainly satisfied if

w~tL( t n L - 2) k~< (36)

4(~ 2 + 2~)

We conclude that if the total number of iterations satisfies (36) then the inequality 2 w02/2 holds.

we obtain

B. Jansen et a l . / Mathematical Programraing 78 (1997) 315-345 335

Since Lemma 4.3 is valid and we employ a fixed step parameter O, the algorithm stops after at most k iterations, where

4fn- ( X(0))T S (°) k >i ~ In ~ - 4 tnff ,

and then we have (X(k))VS~k) ~< e. Note that the definition of t guarantees that 4 t n L 2 is integral. So, as far as the reduction in complementarity is concerned, the algorithm needs not more than 4 m L 2 iterations. This number of iterations will respect the bound (36) if

tnL - 2 4 t n L 2 <~ to~tL 4(~2 + 2 ~ ) "

Dividing by to~ tL and rearranging terms gives the condition

16(~ 2 + 2~) 2 t>~ w° 2 + n-L" (37)

which is satisfied by the value assigned to t in the theorem, since .q2 ~< 2(1 + K) and

~ ( 1 +2K) ~2,,(I +2K) 2 ~ ( 1 + ~ ) ~ = ~<

(,O k O) 0 t,O 0

It remains to show that in each iteration of the algorithm the specified step size 0 satisfies condition (33) of Lemma 4.9. First, observe that 0 < ~9 by assumption. From (37) we have

32~ t>7 w2 >132~-.

The condition 0 ~< 1 / ( 2 ~ ) is equivalent to

2~" t>_- V~n L '

which is guaranteed by the assumption on L. Since O ~< 1/(2~n) , a sufficient condition for the third bound in (33) is

1

0~ ~ ( 1 + 1 / (21 ; ) ) '

which is satisfied since t 7> 2 from (37) and L >/ 1. The fourth condition is trivially guaranteed, so it remains to deal with the condition that for each k

0 < -~- 1 + 16n~ ~ 4v~-n~

Using n I> 2 and ~ ~> V/5-/4 (see (32)), we have

3to k w~ 1

4~----~ ~ T < -~-

3 3 6 B. Jansen et al. / Mathemat ical Programming 78 (1997) 3 1 5 - 3 4 5

Therefore, since

i 3 1/1 + o -2 - or> ~ if0~< o ' < 7,

it is sufficient to show that 0 ~< oJk/(2~), for each k. As we have seen before, for the

given step size we have oJ k/> w0/v~- for each k. So is it sufficient that 0 satisfies

0~< w0/(2¢'2-~) or even 0~< Wo/(2v~-¢q-+K) . This amounts to Wot~nL>_. 2v~-~/1 + K. Due to the assumption that L/> l and n >_- 2, this certainly holds if t

satisfies t >t ',/i- + K / w 0. Using ~o 0 ~< 1, t >1 ~- and the definition of ~-, it is obvious that

t satisfies this inequality. Hence the proof of the theorem is complete. []

As the proof of the above theorem shows, we need the assumption w 0 > / 2 V ~ on the

initial point ( x °, s °) so that the generated sequence {(x k, s~)} lies in X ( p ) for which

Condition 3.2 is effective. However, as we have mentioned in Remark 4.8, if f is linear

then we can take any p ~ (0, 1), and the assumption does not affect any choice of the

initial point.

5. Smoothness conditions on the mapping f

In the literature on interior point methods for nonlinear programming problems a few

smoothness conditions on the functions involved have been proposed. These conditions serve the purpose of bounding the second order effects not taken into account by

Newton's method. Since these conditions are only applicable to monotone mappings, we confine ourselves in this section to mappings f satisfying the monotonicity property. In

this section we show how these conditions are generalized to the setting of NCP and

indicate their use for analyzing algorithms in wide neighborhoods. We first introduce some notation regarding a trilinear form N E ~ , x ,x , . We use the

notation

- N i N[x, y, Z] := E E Ex, y~, j, ER i j k

where N i, i = 1 . . . . . n, are matrices. Likewise, we denote for an n X n matrix M

Mix, y] := xTMy. According to this notation, we use the following symbols to denote the differentials of

f:

d I hVzf(x+tht) =h~7f(x)h, , f ' ( x )[h t ' h21 :=-d-tt ,=0

d t=0

5.1. Zhu's scaled Lipschitz condition

In this section we will show that the (modified) scaled Lipschitz condition as

introduced by Zhu [51] also guarantees Condition 3.2 to hold. The scaled Lipschitz

B. Jansen et aL / Mathematical Programming 78 (1997) 315-345 337

condition was also used by Potra and Ye [43] for the analysis of Newton's method for monotone NCP, by Kortanek et al. [28] for an analysis of a primal-dual method for entropy optimization problems, by Sun et al. [44] for min-max saddle point problems, and by Tseng [48] for an analysis of a primal-dual path following algorithm for monotone variational inequality problems which contain the monotone complementarity problems as special cases. Recently, Sun and Zhao [45] also used this condition for an analysis of a long step algorithm for mixed nonlinear complementarity problems.

Definition 5.1. Let G be a closed convex domain in R", with nonempty interior Q := int(G). A single-valued monotone operator f : Q ~ 1~" satisfies the scaled Lips- chitz condition if there is a nondecreasing function ~(ot) such that

1[ X ( f ( x + h) - f ( x) - Vf( x )h ) [1 ~< ~b( ot)haVf( x)h (38)

for all x > 0 and h satisfying II x - ' h II ~< o~.

We will show that any mapping satisfying the scaled Lipschitz condition also satisfies Condition 3.2.

Theorem 5.2. Let the mapping f satisfy the scaled Lipschitz condition with G = { x I~": x>~ 0}. Then, for every p ~ (0, 1), fsatisfies Condition 3.2 with

and 0 = ot p. Y= p2

Proof. Suppose that (x, s ) ~ 4 ~ ( p ) and (Ax, A s ) ~ ~2,, satisfy (5) and II x - 'Ax+ s- IAs [[ ~< 1. Since f is monotone, Lemma 3.1 holds with K = 0. Hence

II x-lAxl[ z= I Iv- l&l l z

~< IIv- ' 112(1I p,l[ 2 + 2p~p~ + 11 p, ll 2)

1 ~< v T I I P.,. +P.~ ll2

v~.~ 1[ x-~ax + s -~s l l 2 ~<

Vmin

1 ~ - ~ ,

p-

where the last inequality follows from the fact that o)(v) = Vmin/vma~ >i p and I[ x- tax +s- tAs[ I ~< I. Thus, if O~< ap then II Ox-'_ax[I <~ a for every 0E(0 , O], and we have

[[ P"( O) -P"II = d( f ( x + O/Ix) - f ( - Vf( x)Ax )

1 ~< ~ll o- ' II, I1 .r( f ( x + Oax) - f ( x) - OUf( x) /Ix) [I

338 B. Jansen et aL / Mathematical Programming 78 (1997) 315-345

1 ~ II v - ' II~.qJ(~)OZAxW~f(x)ax

0 I Iv- ' I1-~0(~)II d-~axll II d~Tf(x)axll

0 II v - ' I1 ~¢,(,~) II v[I = 11 x - ' a s II I[ dVf(x) Ax II 1

~< 0 - - ~ 4,(,~)II x-~xll II p~ll,

0,P(,~) ~< p2 II p~ll.

Consequently, we obtain ~ and "y as

, / , (~) =e~p and T - p2

This completes the proof. []

Definition 5.1 of the scaled Lipschitz condition implies that hvVf(x)h >~ 0 for every x > 0 and h with [1 x-~h l] ~< a , a priori. Hence, using this condition seems not to be suitable when we analyze the case where the mapping f has no monotonicity. In fact, even if f is linear, i.e. given by f ( x ) = Mx + q, the condition does not necessarily hold for the matrices M in P , , On the other hand, Condition 3.2 does not need the monotonicity and holds for any linear mapping, which may be an advantage of the condition.

5.2. Self-concordance and relative Lipschitz condition

The most important (and most general) condition is self-concordance introduced by Nesterov and Nemirovskii [40], later also used by, e.g.. Jarre [16], Den Hertog [10] and Den Hertog et al. [11], Freund and Todd [4], Nesterov and Todd [41], The crux of the condition is that it bounds the third-order derivative of a function in its second-order derivative. In [40, Chapter 7] it is shown how tile condition can be generalized to nonlinear mappings. A main difference with the Lipschitz conditions is that the self-concordance does not immediately apply to the mapping itself; what is needed is a so-called self, concordant barrier for the domain (in our case ~ , and we may use the function -Y."i=~ In xi) and a mapping that is /~-compatible with this barrier function. We have the following definition.

Def'mition 5.3, A C2-smooth monotone operator f : ~ ' ~ ~" is called ~-compatible with F ( x ) = -~"i=~ ]n x i i f fo ra l l x > 0 a n d h E~" . i = 1,2o3, the inequality

3

I / " ( x ) [ / , , , t, 2 , h3][ ~< 3~/2/~1- I { f ' ( x ) [ h , h,] "11 x 'l,,II'/3} i=1

holds.

B. Jansen et al. / Mathematical Programming 78 (1997) 315-345 339

If the mapping f is /3-compatible with the function -E~'= i In x; then the barrier function f t (x ) := (1 +/3)2{rf(x) + x -~} is strongly self-concordant for all t > 0 (see Prop. 7.3.2 of [40]). A similar property can be obtained if the mapping f is strongly self-concordant itself. For all o~ > 0 and /3 > 0, if the mappings q5 and ~0 are self-concordant then the mapping aq5 +/3qJ is also self-concordant with some parame- ter. Self-concordant mappings satisfy the following condition, which may be called the "relative Lipschitz condition" (see Prop. 7.2.1 of [40] and Section 2.1.4 of [17]).

Definition 5.4. Let G be a closed convex domain interior Q:= int(G). A single-valued monotone operator relative Lipschitz condition if for all x E Q,

:= { ( y - - x ) T V w ( x ) ( y - - x ) ~<1 the inequality

( ' ) - 1 h r V w ( x ) lT . I h T ( V w ( y ) V w ( x ) ) h ] <~ ( l - r ) 2

holds for all h ~ ~".

in N", with nonempty w : Q ---> [~" satisfies the

y E Q for which r

In the following lemma, we will show how ~-compatibility and the relative Lipschitz condition can be used to bound the inner product pi~p,(O), which plays an important role in the complexity analysis, see Lemma 3.3 and Lemma 3.4.

Lemma 5.5. Let the mappingf be fi-compatible with F(x) = - E In x i and p ~ (0, 1). If" (x, s) E J U ( p ) : = { ( x , s) E.gz°: oJ(x, s)>~ p}, and if (Ax, As) E ~ z" satisfy (5) and II x - '~x + s-~asll <~ l, then

1( 42/3v,~,, ) p ~ p ~ ( O ) < ~ l+0-~[ [p~ , l l IIp~.ll 2

for every 0 < 0<~ p/2 . Here w(x, s). p~ a~ul p,(0) are defined by (1) and (10).

Proof. Using Taylor's expansion of the function j x r g ( 0 ) , we have

J x v g ( 0 ) = A x v ( f ( x + 04 x) - f ( x ) - OF'./'( x ) .~ x )

I 2 t¢ = 5 0 f ( y ) [ A x , Ax, Ax],

where y = x + OAAx for some Z E N with A ~< 1. Then. for every t > 0, it holds

p Vp,(O) = A x T ( A s + g(O))o

= axWas+ ( ~ o J ' ( y ) [ ax . i x . J_,-])

<~ A x ~ s + 33/-~/2/30./'( y)[ a_v, A.t-] I] y- t-Ix [I

I <~ AxVAs + 33/2/2/30(rf,( y) + y_2)[j_~, Jx][I y - ~ x l l - .

t (39)

340 B. Jansen et a l . / Mathematical Programming 78 (I 997) 315-345

We will apply tile relative Lipschitz condition with y and x to the self-concordant mapping

h(~) = ( ¢ ( . , - ) - x - ' .

Since A x T V f ( x ) A x = A x T A s and y = x + OAAx, we obtain

r = (0/~Ax)T((ft(x) 't- X-2)(0/~Ax)

= 02A2( t+ AxTAs + II x-'Axll 2) <~ 02(,axTas + tl x-taxll ~')

and the relative Lipschitz condition gives

( ' ) ( t f ' (y)+r-2)[Z~x, a x ] ~ 1 + ( 1 _ , . ) 2 l ( a x w ( ~ ' ( x ) + X - 2 ) a x )

1 (1 _,.)2 ( tax-hs + I I - , - ~xll 2).

By the monotonicity of the mapping f and Lemma 3.1. we have

0<. Axr..4s <~ ¼ll p~,ll z

Also, the assumption guarantees

1 2 ( A X T j S -I- "Un2fin II x-'axll =)

"t)mi n

1 -- 2 (P~P.~+V2minllv-LP-' "112)

l)mi n

1 v <~ ~ ( PxP.~ + It P.,I] 2 )

I

9- ~31.11 ax T

Umi n

1 ~ .

It is easily seen

l[ y- 3 x l]

Substituting the

02

I[ p.~ + p~ll 2

- - I I x - ~ x + s-~as[I z

that

1 ~< II x- 'axl l < 2ll x-~axll

l - 0 II x - k4x II

latter results in (39) with t = 1/v~i,, ~.oNes

1 (by the assumption on 0),

2

Vrni. [I P.~ +psll =

2 II p~ II

Umin

B. Jansen et al. / Mathematical Programming 78 (1997) 315-345 341

1 - - < 2 , (1 - r ) 2

p.Tps ( 0 ) ~ ¼ II p~ I[ 2 + _ _ 3 3/2 2 2 II po II

2 ]30-~- v,~in v~i,,

42 ]3Vmi, ) " 1 + 0-)- 7 II p~ II [I p~, l[ 2

which completes the proof. [5

From the lemma we can derive the following corollay in the case of the primal-dual affine scaling algorithms considered in Section 4.

Corollary 5.6. I f dx and As are determined with a primal-dual affine scaling algoriNm as in Section 4, then

,( p.,Tp,(O)~-~ l + O 7 Ilpoll 2

for every 0 < 0 <~ p/2.

Proof. In the algorithm, the vector Pv is given by - - v 2 r + ' / l l v 2 r l T . The result immediately follows from the inequality

vm~. Vm~. [I v 2' H Vm~. II ,~2~ [I 1

II p,, II II v 2 " ÷ L II ~< < - " [ ] vm.~llv 2~lf p

The result of this corollary gives an alternative proof for a bound of the type as in Lemma 4.1(i). Unfortunately, since our analysis of the primal-dual affine scaling algorithms is based on large neighborhoods, we also need to compute a bound (cf. Lemma 4.1(ii)) for

II p,p,( O) II,. and for 1[ g(O)II ~. We have not been able to derive such a bound using self-concordance and relative Lipschitz. However, to have more insights into the relationship between the self-concordance and our condition, we will need to enforce more strict conditions on the mapping. Recall that Condition 3.2 requires the inequality

11 D( f ( x + OAx) -- f ( x) -- OVf( x) Ax) II ~ ~,0 II OVf( x) Ax [].

Using Taylor's expansion of the function

K(0) := II O ( f ( x + Oax) - f ( x ) - OVf(x)ax)[I

we obtain

~'(0) = 0~"(0A) < 0 II D( VI( x + OAAN) - V f ( x ) ) ,.3x II

for some A ~ 1~ satisfying A < 1, and

[I D( f ( x + h) - f ( x) - Vf( x) h) H ~ II D( V f ( y ) - Vf( x) )h II

342 B. Jansen et al. / Mathematical Programming 78 (1997) 315-345

where y = x + hh. For matrices A and B we mean by A ~ B that B - A is positive

semidefinite. It is not difficult to derive the following results from the above observa-

tions:

M= V/(x) and

for each x, y E Q. Then

(i) i f

1

The o rem 5,7. Let G be a closed convex domain in ~", with nonempty interior

Q ;= int(G). Let f be a single-valued monotone operator f : Q ~ ~", and let

N = Vf( y) - Vf( x)

( ) ( ' 1 M ~ N ~ 1 M (1 - , - ) ~ (1 - , - ) =

for all x E Q, y E Q such that ( y - x) 'rM(y - x) <~ r 2 then f satisfies the relative

Lipschitz condition, (ii) for a given p ~ (0,1), (fN TD2N ~ 0 Zy 2 M TDZMfor all x ~ Q, y := x + OAx ~ Q

such that 11 x - l ( y - x ) ] ] ~< O/p 2 for ever), O~ (0, 0] , and D = diag(d) with d > O,

then f satisfies" Condition 3.2 with p ~ (0,1) and O.

While the above theorem seems to be quite trivial, it may be somewhat practical

since the conditions in the theorem can be more easily checked than the original

conditions. In fact, we can find some examples to satisfy the relative Lipschitz condition

a n d / o r Condition 3.2 by using the above theorem.

R e m a r k 5.8. If the Jacobian V f ( x ) is symmetric for every x ~ Q, the assumption

NTD2N ~ "y=MTD2M in (ii) of the above theorem can be replaced by the condition that

there exist positive number y and a nonsingular matrix P such that

(i) N ~ ",/M and

(ii) P-ID(yt-yM - N ) D P and P - ~ D ( v ~ M + N ) D P are diagonal niatrices.

E x a m p l e 5.9 (Monotone LCP). Consider the linear complementari ty problem with

f ( x ) = Mx + q; here M is a symmetric positive semidefinite matrix and q E I~". Then

V f ( x ) = M , K = 0 and it is easy to see that all of the smoothness conditions are

satisfied. Specifically, Condition 3.2 is satisfied with O = + m and "y = 0.

Example 5.10 (Entropy function). Let . E ~ " and u > 0 and let 4 , (x) be an entropy

function of the form

(,) 4 ' ( x ) = x; Io~ - - . i= I lit

Let us define f ( x ) = Vd)(x). that is J ) (x) = log x i - log u i + 1 for all i = 1, 2 . . . . . n.

Then K = 0 and f satisfies all of tile smoothness conditions (cf. Theorem 4.1 of [51]).

Specifically, Condition 3.2 is satisfied with (9 = 1 and y = p / ( l - p).

B. Jansen et al. / Mathematical Programmbtg 78 (1997) 315-345 343

n -- " a f o r Example 5.11. Let a ~ l ~ and a > 0 and let f : ~ " + + ~ l ~ + + such that f i ( x ) - x i all i = 1, 2 . . . . . n. Then K = 0 and f satisfies all of the smoothness conditions (cf.

Theorem 4.1 of [51]). Specifically, Condition 3.2 is satisfied with 6 )= 1 and 3, = (1 + l / p ) ~- t .

It should be noted that the above three examples also satisfy the conditions in

Theorem 5.7.

Unfortunately, we have found no example of monotone CP which determines the

broadness of the class of problems covered by each smoothness condition. If we

abandon the monotonicity, however, there exist many examples which clarify the

difference between Condition 3.2 and other conditions.

Example 5.12 ( P . ( K ) LCPs). Consider the linear complementarity problem with

f ( x ) = Mx + q; here M is a P , ( K ) matrix with K > 0. Here we give the following two examples:

(i) M : = [ 2 12], q E " 2,

(ii) M:=[_I 2 --1],1 q~2 . Then Vf(x) = M in both cases and it is easy to see that

for the case (i) both of the scaled Lipschitz condition (Definition 5.1) and the relative

Lipschitz condition (Definition 5.4) do not hold with h = 0(1, - 1) v for some 0s, while K = 1 /16 and Condition 3.2 holds with (9= +co and y = 0, and

for the case (ii) both of the scaled Lipschitz condition (Definition 5.1) and the relative Lipschitz condition (Definition 5.4) do not hold with h = 0(3, 2) v for some 0s,

while K = 1 /8 and Condition 3.2 holds with 69 = + ze and 3' = 0.

Acknowledgements The authors are grateful to two anonymous referees for their many va]uable and

helpful comments.

References

[1] K.M. Anstreicher and R. Bosch, A new infinity-norm path following algorithm for linear programming, SIAM Journal on Optimization 5 (1995) 236-246.
[2] R.W. Cottle, J.S. Pang and R.E. Stone, The Linear Complementarity Problem, Computer Science and Scientific Computing (Academic Press, San Diego, 1992).
[3] I.I. Dikin, Iterative solution of problems of linear and quadratic programming, Doklady Akademii Nauk SSSR 174 (1967) 747-748. [Translated in: Soviet Mathematics Doklady 8 (1967) 674-675.]
[4] R.M. Freund and M.J. Todd, Barrier functions and interior-point algorithms for linear programming with zero-, one-, or two-sided bounds on the variables, Mathematics of Operations Research 20 (1995) 415-440.
[5] C.C. Gonzaga, Path following methods for linear programming, SIAM Review 34 (1992) 167-227.
[6] O. Güler, Path following and potential reduction algorithms for nonlinear monotone complementarity problems, Working Paper, Dept. of Management Science, University of Iowa, Iowa City, 1991.
[7] O. Güler, Existence of interior points and interior paths in nonlinear monotone complementarity problems, Mathematics of Operations Research 18 (1993) 128-147.
[8] O. Güler, Generalized linear complementarity problems, Mathematics of Operations Research 20 (1995) 441-448.
[9] P.T. Harker and J.S. Pang, Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications, Mathematical Programming 48 (1990) 161-220.
[10] D. den Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming: Algorithms and Complexity (Kluwer Academic Publishers, Dordrecht, The Netherlands, 1994).
[11] D. den Hertog, F. Jarre, C. Roos and T. Terlaky, A sufficient condition for self-concordance, with application to some classes of structured convex programming, Mathematical Programming 69 (1995) 75-88.
[12] D. den Hertog, C. Roos and T. Terlaky, On the classical logarithmic barrier function method for a class of smooth convex programming problems, Journal of Optimization Theory and Applications 73 (1992) 1-25.
[13] B. Jansen, C. Roos and T. Terlaky, A family of polynomial affine scaling algorithms for positive semi-definite linear complementarity problems, Technical Report 93-112, Faculty of Technical Mathematics and Computer Science, Delft University of Technology, Delft, The Netherlands, 1993; also: SIAM Journal on Optimization 7 (1997) 126-140.
[14] B. Jansen, C. Roos and T. Terlaky, A polynomial primal-dual Dikin-type algorithm for linear programming, Mathematics of Operations Research 21 (1996) 341-353.
[15] F. Jarre, The method of analytic centers for smooth convex programs, Ph.D. Thesis, Institut für Angewandte Mathematik und Statistik, Universität Würzburg, Würzburg, Germany, 1989.
[16] F. Jarre, Interior-point methods for convex programming, Applied Mathematics & Optimization 26 (1992) 287-311.
[17] F. Jarre, Interior-point methods via self-concordance or relative Lipschitz condition, Optimization Methods and Software 5 (1995) 75-104.
[18] M. Kojima, Y. Kurita and S. Mizuno, Large-step interior point algorithms for linear complementarity problems, SIAM Journal on Optimization 3 (1993) 398-412.
[19] M. Kojima, N. Megiddo and S. Mizuno, Theoretical convergence of large-step primal-dual interior point algorithms for linear programming, Mathematical Programming 59 (1993) 1-21.
[20] M. Kojima, N. Megiddo, T. Noma and A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Lecture Notes in Computer Science, Vol. 538 (Springer, Berlin, 1991).
[21] M. Kojima, N. Megiddo and Y. Ye, An interior point potential reduction algorithm for the linear complementarity problem, Mathematical Programming 54 (1992) 267-279.
[22] M. Kojima, S. Mizuno and T. Noma, A new continuation method for complementarity problems with uniform P-functions, Mathematical Programming 43 (1989) 107-113.
[23] M. Kojima, S. Mizuno and T. Noma, Limiting behavior of trajectories generated by a continuation method for monotone complementarity problems, Mathematics of Operations Research 15 (1990) 662-675.
[24] M. Kojima, S. Mizuno and A. Yoshise, A polynomial-time algorithm for a class of linear complementarity problems, Mathematical Programming 44 (1989) 1-26.
[25] M. Kojima, S. Mizuno and A. Yoshise, A primal-dual interior point algorithm for linear programming, in: N. Megiddo, ed., Progress in Mathematical Programming: Interior Point and Related Methods (Springer, New York, 1989) 29-47.
[26] M. Kojima, S. Mizuno and A. Yoshise, An O(√n L) iteration potential reduction algorithm for linear complementarity problems, Mathematical Programming 50 (1991) 331-342.
[27] M. Kojima, T. Noma and A. Yoshise, Global convergence in infeasible-interior-point algorithms, Mathematical Programming 65 (1994) 43-72.
[28] K.O. Kortanek, F. Potra and Y. Ye, On some efficient interior point methods for nonlinear convex programming, Linear Algebra and Its Applications 152 (1991) 169-189.
[29] K.O. Kortanek and J. Zhu, A polynomial barrier algorithm for linearly constrained convex programming problems, Mathematics of Operations Research 18 (1993) 116-127.
[30] P. Ling, A new proof of convergence for the new primal-dual affine scaling interior-point algorithm of Jansen, Roos and Terlaky, Technical Report SYS-C93-09, School of Information Systems, University of East Anglia, Norwich, 1993.
[31] L. McLinden, The analogue of Moreau's proximation theorem, with applications to the nonlinear complementarity problem, Pacific Journal of Mathematics 88 (1980) 101-161.
[32] S. Mizuno, O(n^ρ L) iteration O(n³L) potential reduction algorithms for linear programming, Linear Algebra and Its Applications 152 (1991) 155-168.
[33] S. Mizuno and M.J. Todd, An O(n³L) adaptive path following algorithm for a linear complementarity problem, Mathematical Programming 52 (1991) 587-595.
[34] R.D.C. Monteiro and I. Adler, Interior path following primal-dual algorithms. Part I: Linear programming, Mathematical Programming 44 (1989) 27-41.
[35] R.D.C. Monteiro and I. Adler, Interior path following primal-dual algorithms. Part II: Convex quadratic programming, Mathematical Programming 44 (1989) 43-66.
[36] R.D.C. Monteiro and I. Adler, An extension of Karmarkar-type algorithm to a class of convex separable programming problems with global linear rate of convergence, Mathematics of Operations Research 15 (1990) 408-422.
[37] R.D.C. Monteiro, I. Adler and M.G.C. Resende, A polynomial-time primal-dual affine scaling algorithm for linear and convex quadratic programming and its power series extension, Mathematics of Operations Research 15 (1990) 191-214.
[38] R.D.C. Monteiro, J.-S. Pang and T. Wang, A positive algorithm for the nonlinear complementarity problem, SIAM Journal on Optimization 5 (1995) 129-148.
[39] Y. Nesterov, Long-step strategies in interior point potential reduction methods, Technical Report, University of Geneva, Geneva, Switzerland, 1993.
[40] Y. Nesterov and A.S. Nemirovsky, Interior Point Polynomial Algorithms in Convex Programming, SIAM Studies in Applied Mathematics, Vol. 13 (SIAM, Philadelphia, PA, 1994).
[41] Y. Nesterov and M.J. Todd, Self-scaled barriers and interior-point methods for convex programming, Technical Report 1091, School of OR and IE, Cornell University, Ithaca, NY, 1994; also: Mathematics of Operations Research 22 (1997) 1-42.
[42] J.S. Pang and S.A. Gabriel, NE/SQP: A robust algorithm for the nonlinear complementarity problem, Mathematical Programming 60 (1993) 295-337.
[43] F.A. Potra and Y. Ye, Interior point methods for nonlinear complementarity problems, Reports on Computational Mathematics 15, Dept. of Mathematics, The University of Iowa, Iowa City, 1991; also: Journal of Optimization Theory and Applications, to appear.
[44] J. Sun, J. Zhu and G. Zhao, A predictor-corrector algorithm for a class of nonlinear saddle point problems, Technical Report, National University of Singapore, Singapore, 1994.
[45] J. Sun and G. Zhao, A quadratically convergent polynomial-step algorithm for a class of nonlinear monotone complementarity problems, Technical Report, Dept. of Decision Sciences, National University of Singapore, Singapore, 1995.
[46] H. Väliaho, P*-matrices are just sufficient, Research Report, Department of Mathematics, University of Helsinki, Helsinki, 1994; also: Linear Algebra and Its Applications 239 (1996) 103-108.
[47] S.J. Wright, A path-following infeasible-interior-point algorithm for linear complementarity problems, Optimization Methods and Software 2 (1993) 79-106.
[48] P. Tseng, Global linear convergence of a path following algorithm for some variational inequality problems, Journal of Optimization Theory and Applications 75 (1992) 265-279.
[49] Y. Ye and K.M. Anstreicher, On quadratic and O(√n L) convergence of predictor-corrector algorithms for LCP, Mathematical Programming 62 (1993) 537-551.
[50] Y. Ye and P.M. Pardalos, A class of linear complementarity problems solvable in polynomial time, Linear Algebra and Its Applications 152 (1991) 3-17.
[51] J. Zhu, A path following algorithm for a class of convex programming problems, Zeitschrift für Operations Research 36 (1992) 359-377.