
SIAM J. OPTIMIZATION
Vol. 4, No. 1, pp. 208-227, February 1994

© 1994 Society for Industrial and Applied Mathematics

ON THE CONVERGENCE OF A CLASS OF INFEASIBLE INTERIOR-POINT METHODS FOR THE HORIZONTAL LINEAR COMPLEMENTARITY PROBLEM*

YIN ZHANG†

Abstract. Interior-point methods require strictly feasible points as starting points. In theory, this requirement does not seem to be particularly restrictive, but it can be costly in computation. To overcome this deficiency, most existing practical algorithms allow positive but infeasible starting points and seek feasibility and optimality simultaneously. Algorithms of this type shall be called infeasible interior-point algorithms. Despite their superior performance, existing infeasible interior-point algorithms still lack a satisfactory demonstration of theoretical convergence and polynomial complexity. This paper studies a popular infeasible interior-point algorithmic framework that was implemented for linear programming in the highly successful interior-point code OB1 [I. J. Lustig, R. E. Marsten, and D. F. Shanno, Linear Algebra Appl., 152 (1991), pp. 191-222]. For generality, the analysis is carried out on a horizontal linear complementarity problem that includes linear and quadratic programming, as well as the standard linear complementarity problem. Under minimal assumptions, it is demonstrated that with properly controlled steps the algorithm converges at a global Q-linear rate. Moreover, with properly chosen starting points it is established that the algorithm can obtain $\epsilon$-feasibility and $\epsilon$-complementarity in at most $O(n^2 \ln(1/\epsilon))$ iterations.

Key words. infeasible interior-point methods, horizontal linear complementarity problem, global convergence, polynomiality

AMS subject classification. 90C05

1. Introduction. Interior-point methods have proven to be effective for solving many optimization problems. For linear and quadratic programming, an interior point is defined as one that satisfies all constraints and, in particular, strictly satisfies all inequality constraints. Frequently, an interior point is also called a strictly feasible point. (A strictly feasible point need not always exist even when the relative interior of the feasibility set is nonempty.) As the name suggests, interior-point methods operate in the interior of a feasibility set, starting and remaining in the interior. However, finding an initial interior point is usually a nontrivial task. In theory, this difficulty can be overcome by introducing artificial variables and embedding the original feasibility set into a new one in a higher dimensional space. This embedding approach requires the use of some large but unknown parameters, as in the "big M" approach for linear programming. The disadvantages of this approach are twofold. First, it is not known a priori how large this big parameter should be, and an excessively large value introduces instability into the solution process. Second, the reformulation may adversely alter the structure, such as the sparsity pattern, of the original problem due to added new columns and rows. Because of these deficiencies, embedding techniques have not been well received or widely used.

Considerable research effort has been devoted to the initialization problem of interior-point methods, especially for linear programming. A number of approaches have been proposed. Among them is the so-called combined Phase-I and Phase-II approach. In this approach, no big parameter is used, but still artificial variables

* Received by the editors June 16, 1992; accepted for publication (in revised form) December 28, 1992. This work was presented at the Fourth SIAM Conference on Optimization, Chicago, May 11-13, 1992. The research was supported in part by National Science Foundation grant DMS-9102761 and Department of Energy grant DE-FG05-91ER25100.

† Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, MD 21228-5398.



and constraints are introduced along with new columns and rows. Works in this direction include those by De Ghellinck and Vial [4], Anstreicher [1], [2], Todd [17], Todd and Wang [18], and Ye [20]. More recently, Kojima, Mizuno, and Yoshise [10], and Ishihara and Kojima [7] proposed methods to alleviate problems of the "big M" approach. These methods use adjustable instead of fixed big parameters. All the above works are aimed at deriving algorithms that do not require one to start from an interior point of the original problem. In these works, the primary concern was maintaining the desirable theoretical properties, such as polynomial complexity, of interior-point methods.

On the other hand, another line of research primarily aimed at practical performance has taken interior-point methods in a quite different direction. This line of research produced a series of practical algorithms along with numerical tests and comparisons; for example, see Lustig, Marsten, and Shanno [11], [12]; McShane, Monma, and Shanno [13]; and Mehrotra [15]. All these practical algorithms require neither problem reformulation nor feasible starting points. They can start from any positive but otherwise infeasible point and try to achieve feasibility and optimality simultaneously. Most of these algorithms fall into the framework of a centered and damped Newton method, as we will describe later in detail. Strictly speaking, these algorithms are no longer interior-point methods in the original sense, since the equality constraints are no longer satisfied; but most researchers continue to call them interior-point methods, partly because of the similarities in algorithmic form and partly because of the fact that the iterates generated by these algorithms are still interior points of the positive orthant, i.e., they strictly satisfy the inequality constraints $x \ge 0$.

In this paper, we will adopt the following terminology. For a given mathematical programming problem with both equality and inequality constraints, we call a point an infeasible interior point for this problem if it strictly satisfies all inequality constraints but does not satisfy all equality constraints. In this terminology, the term interior refers to the feasibility set defined by inequality constraints only. On the other hand, a point that not only strictly satisfies all inequality constraints, but also satisfies all equality constraints is called a feasible interior point, or simply an interior point. Moreover, by an infeasible interior-point algorithm we mean an algorithm that generates infeasible interior-point iterates. We note that in the literature an interior point is also frequently called a strictly feasible point.

There are other terminologies in the literature for what we would call infeasible interior-point algorithms in this paper. In their recent work, Kojima, Megiddo, and Mizuno [8] used the term exterior point algorithm. On the other hand, in the classic book by Fiacco and McCormick, the term mixed interior point-exterior point algorithm [5, p. 59] was used for algorithms that strictly satisfy inequality constraints but do not satisfy equality constraints; while the term exterior point algorithm [5, p. 53] was used for a different class of algorithms.

Despite their superior practical performance, existing infeasible interior-point algorithms still lack a satisfactory demonstration of theoretical convergence, let alone a demonstration of polynomiality. Recently, Kojima, Megiddo, and Mizuno [8] analyzed a primal-dual linear programming algorithm that they called an exterior point algorithm but can be classified, by our terminology, as an infeasible interior-point algorithm. To our knowledge, their work is the first that deals with convergence of an infeasible interior-point algorithm for linear programming in the framework of the centered and damped Newton method described in §3. Their main result is that in a finite number of iterations the algorithm will produce an iterate that either satisfies



a prescribed tolerance in optimality or exceeds a prescribed upper bound in magnitude. This result, however, does not imply that, assuming the existence of a solution, the algorithm is guaranteed to return an approximate solution. The authors managed to obtain some slightly stronger results by modifying the standard algorithm into a nonstandard, more complicated one.

A fundamental question in the convergence analysis of infeasible interior-point methods is the following. If an algorithm were allowed to run without stopping, would the iterates converge to the solution set, provided that it is nonempty? To our knowledge, this question has not yet been satisfactorily answered. Another important issue that is still open is the computational complexity of infeasible interior-point methods.

It is the objective of this paper to establish a convergence theory for an infeasible interior-point algorithmic framework that is close to the implemented practical algorithms. We consider it extremely important to analyze algorithms as close as possible to algorithms with empirically proven effectiveness. Based on this consideration, we adhere to the principle of not tailoring the algorithm just for the sake of obtaining better complexity bounds (see more discussion on this issue in the last section). As a result, we impose only mild restrictions on the choices of steplength and starting point. Our main results in this paper consist of a global Q-linear convergence result and a polynomial complexity bound.

For the sake of generality, we will study a rather general model problem, called the horizontal linear complementarity problem (horizontal LCP), that includes (but is not limited to) linear and quadratic programs, and the standard linear complementarity problem. This generality will cause little extra work in our analysis. We will describe the horizontal LCP in the next section.

We will use superscripts to denote the iteration count and subscripts as indices for vector components. The norm $\|\cdot\|$ will denote the $\ell_2$ norm unless otherwise specified. For $u, v \in R^n$, we will also use the notation
$$\min(v) = \min_{1 \le i \le n} v_i, \qquad \min(u, v) = \min_{1 \le i \le n} \min(u_i, v_i).$$

The paper is organized as follows. In §2, we describe our model problem, a horizontal linear complementarity problem that includes linear and quadratic programs, and the standard linear complementarity problem. In §3, we give the general algorithmic framework, which is a centered and damped Newton method as implemented in OB1 [11]. In §4, we describe our choice of starting point and introduce an auxiliary sequence. This auxiliary sequence will play an important role in our analysis. In §5, we specify our choice for the steplength and state our complete algorithm. Section 6 contains technical lemmas. Our global convergence results are presented in §7. Finally, further comments are given in the last section.

2. Horizontal LCP. By the horizontal linear complementarity problem (horizontal LCP), we mean the following nonlinear system with nonnegativity constraints:
$$(2.1)\qquad F(x, y) \equiv \begin{pmatrix} Mx + Ny - h \\ XYe \end{pmatrix} = 0, \qquad (x, y) \ge 0,$$
where $x, y, h, e \in R^n$, $M, N \in R^{n \times n}$, $X = \mathrm{diag}(x)$, $Y = \mathrm{diag}(y)$, and $e$ has all components equal to one. The name horizontal LCP comes from the recent book by Cottle, Pang, and Stone on linear complementarity problems [3]. The term horizontal



is used to characterize the geometric shape of the $n$-by-$2n$ matrix $[M\;\; N]$, as opposed to a vertical LCP in which the involved matrix has more rows than columns.
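To make the data layout concrete, here is a minimal sketch (in plain Python, with hypothetical helper names of our choosing, not from the paper) of evaluating the two blocks of the residual $F(x,y)$ in (2.1) for small dense data:

```python
def horizontal_lcp_residual(M, N, h, x, y):
    """Evaluate F(x, y) = (Mx + Ny - h, XYe) for the horizontal LCP (2.1).

    M, N are n-by-n matrices given as lists of rows; x, y, h are length-n
    lists.  Returns (linear_residual, complementarity_residual); both must
    vanish, with (x, y) >= 0, at a solution.
    """
    n = len(x)
    lin = [sum(M[i][j] * x[j] + N[i][j] * y[j] for j in range(n)) - h[i]
           for i in range(n)]
    comp = [x[i] * y[i] for i in range(n)]  # XYe: componentwise products
    return lin, comp
```

For instance, with $N = -I$ the problem is a standard LCP, and a complementary feasible pair makes both blocks zero.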

Problem (2.1) is sufficiently general to include the standard linear complementarity problem, as well as linear and quadratic programs. It is easy to see that when $N = -I$, Problem (2.1) is the standard linear complementarity problem. Moreover, consider the quadratic program

$$(2.2)\qquad \text{minimize } c^Tx + \tfrac{1}{2}x^TQx \quad \text{subject to } Ax = b,\; x \ge 0,$$

where $c, x \in R^n$, $b \in R^m$, $A \in R^{m \times n}$ has full row rank $m$ ($m < n$), and $Q \in R^{n \times n}$ is symmetric. It can be verified (see [25], for example) that the Karush-Kuhn-Tucker conditions for (2.2) can be written as (2.1) with

$$(2.3)\qquad M = \begin{pmatrix} A \\ -BQ \end{pmatrix}, \qquad N = \begin{pmatrix} 0 \\ B \end{pmatrix}, \qquad h = \begin{pmatrix} b \\ Bc \end{pmatrix},$$

where $B \in R^{(n-m) \times n}$ has full row rank $n - m$ and $BA^T = 0$, i.e., the rows of $B$ span the null space of $A$. When $Q = 0$, the quadratic program (2.2) reduces to the linear program in standard form

$$(2.4)\qquad \text{minimize } c^Tx \quad \text{subject to } Ax = b,\; x \ge 0.$$

We should point out that for linear or quadratic programming, the use of the $B$ matrix in our formulation is purely for the convenience of exposition. Our analysis can be equally applied to other standard formulations. It is also worth noting that our analysis is applicable to even more general problems than the horizontal LCP. For example, one may add free variables to the horizontal LCP along with an equal number of linear equations, provided that the columns corresponding to the free variables remain linearly independent.

For convenience of reference, we define the following sets:
$$\mathcal{A} = \{(x, y) : Mx + Ny = h\}, \qquad \mathcal{F} = \{(x, y) \in \mathcal{A} : (x, y) \ge 0\},$$
$$\mathcal{F}^+ = \{(x, y) \in \mathcal{A} : (x, y) > 0\}, \qquad \mathcal{S} = \{(x, y) \in \mathcal{F} : x^Ty = 0\}.$$
Clearly, $\mathcal{S}$ is the solution set for the horizontal LCP (2.1). We call $\mathcal{F}$ the feasibility set of (2.1). A pair $(x, y)$ is said to be feasible if it is in $\mathcal{F}$ and strictly feasible if it is in $\mathcal{F}^+$.

It is straightforward to show that
$$F'(x, y) = \begin{pmatrix} M & N \\ Y & X \end{pmatrix}.$$

3. Algorithmic framework. The algorithmic framework under study is a Newton-type method applied to the nonlinear system $F(x, y) = 0$ with positive starting points. At each iteration, a steplength is chosen to ensure that the next iterate remains positive, and a so-called centering step is added to the Newton step. Roughly speaking, the purpose of adding the centering step is to prevent iterates from getting too close to the boundary of the positive orthant too early. The centering step fades away as



the iterates approach a solution. As a matter of fact, the algorithmic framework is precisely the centered and damped Newton method described below.

ALGORITHM 1 (CENTERED AND DAMPED NEWTON). Given $(x^0, y^0) > 0$, for $k = 0, 1, 2, \dots$, do:

1. Choose $\sigma_k \in [0, 1)$ and let $\mu_k = x^{kT}y^k/n$. Solve the following linear system for $(\Delta x^k, \Delta y^k)$:
$$(3.1)\qquad F'(x^k, y^k) \begin{pmatrix} \Delta x^k \\ \Delta y^k \end{pmatrix} = -F(x^k, y^k) + \sigma_k \mu_k \begin{pmatrix} 0 \\ e \end{pmatrix}.$$

2. Choose a steplength $\alpha_k \in (0, 1]$ with
$$\alpha_k < \bar\alpha_k \equiv \frac{-1}{\min\big((X^k)^{-1}\Delta x^k,\; (Y^k)^{-1}\Delta y^k,\; -\tfrac{1}{2}\big)}.$$
Let $x^{k+1} = x^k + \alpha_k \Delta x^k$ and $y^{k+1} = y^k + \alpha_k \Delta y^k$.

As one can see, a term involving $\sigma_k \mu_k e$ is added to the right-hand side of the Newton equation (3.1). This term, resulting from perturbing the equation $XYe = 0$ into $XYe = \sigma_k \mu_k e$, is commonly called the centering term. Moreover, the (centered) Newton step is damped (i.e., multiplied by $\alpha_k < 1$) whenever necessary to keep the next iterate positive. Notice that the restriction $\alpha_k < \bar\alpha_k$ guarantees that the iterates remain positive.
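For illustration, here is a sketch of one iteration of this framework in the scalar case $n = 1$, where the $2 \times 2$ system (3.1) can be solved in closed form by Cramer's rule. The function name, the damping fraction $0.9$, and the test data are our own assumptions, not part of the paper:

```python
def newton_step_scalar(M, N, h, x, y, sigma):
    """One centered, damped Newton step in the spirit of Algorithm 1, n = 1.

    Solves the 2x2 system (3.1) by Cramer's rule:
        [ M  N ] [dx]   [ -(M*x + N*y - h) ]
        [ y  x ] [dy] = [  sigma*mu - x*y  ]
    with mu = x*y (for n = 1, mu_k = x^T y / n reduces to x*y).
    """
    r1 = -(M * x + N * y - h)      # negated linear-equation residual
    r2 = sigma * (x * y) - x * y   # centered complementarity residual
    det = M * x - N * y            # determinant of the 2x2 Jacobian
    dx = (r1 * x - N * r2) / det
    dy = (M * r2 - y * r1) / det
    # Damping: alpha_bar = -1 / min(dx/x, dy/y, -1/2) keeps the next
    # iterate positive; we take 90% of it, capped at 1.
    alpha_bar = -1.0 / min(dx / x, dy / y, -0.5)
    alpha = min(1.0, 0.9 * alpha_bar)
    return x + alpha * dx, y + alpha * dy
```

With $M = 1$, $N = -1$, $h = -1$ (a standard LCP whose solution is $(x, y) = (0, 1)$) and the infeasible start $(1, 1)$, one step with $\sigma = 0.2$ already restores feasibility.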

If $(x^0, y^0) \in \mathcal{F}^+$ is strictly feasible, then $\{(x^k, y^k)\} \subset \mathcal{F}^+$. In this case, Algorithm 1 is an interior-point algorithm and is called a primal-dual interior-point algorithm in the cases of linear and quadratic programming.

In order to guarantee that Algorithm 1 is well defined and to facilitate our analysis, we will make the following assumptions throughout this paper unless otherwise specified:

A1. $\mathcal{F} \neq \emptyset$, i.e., a feasible point exists.
A2. For any $(x, y) \in \mathcal{A}$ and $(\hat x, \hat y) \in \mathcal{A}$, $(x - \hat x)^T(y - \hat y) \ge 0$.

It is known that Assumption A1 implies the existence of a solution to Problem (2.1) (see [6, Thm. 3.1], for example). It is also well known that Assumption A2 is satisfied by linear programs, convex quadratic programs, and monotone linear complementarity problems. Therefore, Assumptions A1-A2 are essential but by no means restrictive.

The following proposition demonstrates that Assumption A2 guarantees that Algorithm 1 is well defined. A more general version of this result can be found in [6, Thm. 2.1].

PROPOSITION 3.1. Under Assumption A2, $F'(x, y)$ is nonsingular for $(x, y) > 0$.

Proof. Let $(x, y) > 0$. Consider
$$F'(x, y) \begin{pmatrix} u \\ v \end{pmatrix} = 0, \quad\text{i.e.,}\quad Mu + Nv = 0, \quad Yu + Xv = 0,$$
where $u, v \in R^n$. Since $Mu + Nv = 0$, by Assumption A2, $u^Tv \ge 0$. On the other hand, the equation $Yu + Xv = 0$ gives $u = -Y^{-1}Xv$ and $u^Tv = -v^TY^{-1}Xv \le 0$. Hence, we must have $v = 0$ and, consequently, $u = 0$. So $F'(x, y)$ is indeed nonsingular. □

For future reference, we list several well-known identities as a proposition. First define
$$(3.2)\qquad D^k = (Y^k)^{1/2}(X^k)^{-1/2}.$$



PROPOSITION 3.2. Let $\{(x^k, y^k)\}$ be generated by Algorithm 1. Then
(1) $Y^k\Delta x^k + X^k\Delta y^k = \sigma_k\mu_k e - X^kY^ke$;
(2) $x^{k+1T}y^{k+1} = (1 - \alpha_k(1 - \sigma_k))\,x^{kT}y^k + \alpha_k^2\,\Delta x^{kT}\Delta y^k$;
(3) $\|D^k\Delta x^k\|^2 + \|(D^k)^{-1}\Delta y^k\|^2 + 2\Delta x^{kT}\Delta y^k = \|(X^kY^k)^{-1/2}(\sigma_k\mu_k e - X^kY^ke)\|^2$.

The following proposition can also be readily verified.

PROPOSITION 3.3. Let $\{(x^k, y^k)\}$ be generated by Algorithm 1. Then for $k \ge 0$,
$$Mx^{k+1} + Ny^{k+1} - h = (1 - \alpha_k)(Mx^k + Ny^k - h) = \nu_{k+1}(Mx^0 + Ny^0 - h),$$
where $\nu_0 = 1$ and
$$(3.3)\qquad \nu_{k+1} = \prod_{j=0}^{k}(1 - \alpha_j) \ge 0.$$
Moreover, if $\alpha_p = 1$ at some iteration $p$, then $\nu_k = 0$ and $(x^k, y^k) \in \mathcal{F}^+$ for all $k > p$.

4. Starting points and an auxiliary sequence. In this section, we specify our choice of starting point for Algorithm 1. Our starting point, strictly positive as required by Algorithm 1, is inexpensive to construct. It is not completely arbitrary, but by no means restrictive.

Given a pair $(u^0, v^0) \in \mathcal{A}$, we choose
$$(4.1)\qquad (x^0, y^0) > 0 \quad\text{and}\quad (x^0, y^0) \ge (u^0, v^0).$$
For instance, one possibility is to choose $x^0 = \max(\zeta e, u^0)$ and $y^0 = \max(\zeta e, v^0)$ (componentwise) for some $\zeta > 0$. The task of finding a pair $(u^0, v^0) \in \mathcal{A}$ is straightforward and inexpensive for common applications such as linear and quadratic programming, and linear complementarity problems. Moreover, from the construction, we obviously have
$$(4.2)\qquad x^0 - u^0 \ge 0, \qquad y^0 - v^0 \ge 0.$$
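A sketch of this starting-point rule, assuming a pair $(u^0, v^0)$ satisfying the linear equations is already in hand; the function name and the default $\zeta = 1$ are our own choices:

```python
def starting_point(u0, v0, zeta=1.0):
    """Construct (x0, y0) in the spirit of (4.1): strictly positive and
    componentwise at least (u0, v0), for any (u0, v0) with Mu0 + Nv0 = h.
    zeta > 0 is an arbitrary positive constant (one choice among many)."""
    x0 = [max(zeta, ui) for ui in u0]
    y0 = [max(zeta, vi) for vi in v0]
    return x0, y0
```

By construction the gaps $x^0 - u^0$ and $y^0 - v^0$ are nonnegative, which is exactly property (4.2).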

We will later make heavy use of an auxiliary sequence $\{(u^k, v^k)\}$, which is constructed from Algorithm 1 as follows:
$$(4.3)\qquad u^{k+1} = u^k + \alpha_k(\Delta x^k + x^k - u^k), \qquad v^{k+1} = v^k + \alpha_k(\Delta y^k + y^k - v^k).$$
The introduction of this auxiliary sequence was partly motivated by the two-point barrier function method recently proposed by Xu [19] for linear programming, in which two sequences are generated, one positive and another satisfying the linear constraints. In our case, however, the auxiliary sequence is used only for the purpose of analysis and need not be actually computed in Algorithm 1.

The following lemma gives useful properties of the auxiliary sequence $\{(u^k, v^k)\}$.

LEMMA 4.1. Let $\{(x^k, y^k)\}$ be generated by Algorithm 1, $\{(u^k, v^k)\}$ be given by (4.3), and $\{\nu_k\}$ be given by (3.3). Then for $k \ge 0$,
(1) $(u^k, v^k) \in \mathcal{A}$;
(2) $x^{k+1} - u^{k+1} = (1 - \alpha_k)(x^k - u^k) \ge 0$ and $y^{k+1} - v^{k+1} = (1 - \alpha_k)(y^k - v^k) \ge 0$;
(3) $x^k - u^k = \nu_k(x^0 - u^0) \le x^0 - u^0$ and $y^k - v^k = \nu_k(y^0 - v^0) \le y^0 - v^0$;
(4) if $\alpha_p = 1$ for some $p \ge 0$, then $(x^k, y^k) = (u^k, v^k) \in \mathcal{F}^+$ for all $k > p$.

Proof. The proof follows from direct substitution and Proposition 3.3. □

The auxiliary sequence $\{(u^k, v^k)\}$ satisfies the first $n$ linear equations in $F(x, y) = 0$. As the algorithm progresses, the distance between $\{(u^k, v^k)\}$ and $\{(x^k, y^k)\}$ decreases monotonically. We hope that feasibility will be attained as the distance becomes zero. The fact that $x^k - u^k$ and $y^k - v^k$ are always nonnegative will prove particularly useful in §6.
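The update (4.3) and the contraction of the gaps by the factor $(1 - \alpha_k)$ can be sketched and checked numerically as follows (plain Python, hypothetical helper names of our own):

```python
def auxiliary_update(u, v, x, y, dx, dy, alpha):
    """Auxiliary-sequence update (4.3):
       u^{k+1} = u^k + alpha*(dx + x - u),
       v^{k+1} = v^k + alpha*(dy + y - v).
    Only needed for analysis; an implementation never has to form it."""
    n = len(u)
    u_new = [u[i] + alpha * (dx[i] + x[i] - u[i]) for i in range(n)]
    v_new = [v[i] + alpha * (dy[i] + y[i] - v[i]) for i in range(n)]
    return u_new, v_new

# The gap x - u contracts by exactly (1 - alpha):
u1, v1 = auxiliary_update([1.0], [0.5], [2.0], [1.5], [-0.5], [0.25], 0.4)
x1 = 2.0 + 0.4 * (-0.5)   # x^{k+1} = x^k + alpha*dx
gap = x1 - u1[0]          # equals (1 - 0.4)*(2.0 - 1.0) = 0.6
```

Here both gaps start at $1.0$ and shrink to $0.6 = (1 - 0.4) \cdot 1.0$, matching the contraction property stated above.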



5. Steplength and the complete algorithm. In this section, we will use the following notation:
$$x(\alpha) = x^k + \alpha\Delta x^k, \qquad y(\alpha) = y^k + \alpha\Delta y^k.$$
Moreover, when $\alpha = 0$, we have $x^k = x(0)$, $y^k = y(0)$, and so on; and when $\alpha = \alpha_k$, we have $x^{k+1} = x(\alpha_k)$, $y^{k+1} = y(\alpha_k)$, and so on.

There are two control parameters in Algorithm 1: the centering parameter $\sigma_k$ and the steplength $\alpha_k$. In practical implementations, $\sigma_k$ is often set to a small constant, and $\alpha_k$ is chosen to be a large fraction, usually 99% or more, of $\bar\alpha_k$, the length of the step to the boundary of the positive orthant. This choice for $\alpha_k$ may be adequate for practical purposes, but we need more elaborate choices for theoretically guaranteed convergence. In this section, we specify our choice for $\alpha_k$. Our choice is based on several factors, including decreasing a merit function that is described below.

We are seeking two objectives: complementarity and feasibility. The most commonly used quantity for measuring complementarity is $x^Ty$. For feasibility, it is natural to use the residual norm $\|Mx + Ny - h\|$ as a measure. Therefore, we define our merit function as
$$(5.1)\qquad \phi(x, y) = x^Ty + \|Mx + Ny - h\|.$$
The values of the merit function at the iterates will be denoted by $\phi_k = \phi(x^k, y^k)$. For notational convenience, we define
$$r(\alpha) = \|Mx(\alpha) + Ny(\alpha) - h\| \quad\text{and}\quad \phi(\alpha) = x(\alpha)^Ty(\alpha) + r(\alpha).$$
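As a sketch, the merit function (5.1) can be computed for small dense data as follows (plain Python; the function name is our own):

```python
def merit(M, N, h, x, y):
    """Merit function (5.1): phi(x, y) = x^T y + ||Mx + Ny - h||_2.

    Combines the complementarity measure x^T y and the Euclidean norm of
    the linear-equation residual; phi vanishes exactly at solutions."""
    n = len(x)
    comp = sum(x[i] * y[i] for i in range(n))
    res = [sum(M[i][j] * x[j] + N[i][j] * y[j] for j in range(n)) - h[i]
           for i in range(n)]
    return comp + sum(ri * ri for ri in res) ** 0.5
```

For the scalar example $M = 1$, $N = -1$, $h = -1$, the infeasible start $(1, 1)$ gives $\phi = 1 + 1 = 2$, while the solution $(0, 1)$ gives $\phi = 0$.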

By our notational convention, we have $r_k = r(0)$ and $r_{k+1} = r(\alpha_k)$. In addition, Proposition 3.3 and Lemma 4.1 imply
$$(5.2)\qquad r_{k+1} = (1 - \alpha_k)r_k = \nu_{k+1}r_0.$$

Moreover, letting $\nu(\alpha) = (1 - \alpha)\nu_k$ and using Proposition 3.2(2) and (5.2), we have
$$\phi(\alpha) = x(\alpha)^Ty(\alpha) + \nu(\alpha)r_0 = (1 - \delta(\alpha))\phi_k,$$
where
$$(5.3)\qquad \delta(\alpha) = \frac{\alpha\big[(1 - \sigma_k)x^{kT}y^k + \nu_k r_0 - \alpha\,\Delta x^{kT}\Delta y^k\big]}{x^{kT}y^k + \nu_k r_0}.$$
Observe from (5.3) that it is possible to choose $\alpha_k \in (0, 1]$ such that
$$0 < \delta_k \equiv \delta(\alpha_k) < 1.$$
Consequently, the sequence $\{\phi_k\}$ generated by Algorithm 1 satisfies
$$(5.4)\qquad \phi_{k+1} = (1 - \delta_k)\phi_k < \phi_k.$$
Furthermore, observe that $\{\phi_k\}$ converges to zero Q-linearly if $\delta_k \ge \delta > 0$.

Our choice of the steplength $\alpha_k$ is based on several considerations. Since our goal is to drive the merit function to zero, we would like to minimize $\phi(\alpha)$ in $(0, 1]$ at each



iteration. Meanwhile, to obtain global convergence, we need to enforce two conditions specified below. Let us first define a constant $\tau \in (0, 1)$ that satisfies
$$(5.5)\qquad \tau \le \frac{\min(X^0Y^0e)}{x^{0T}y^0/n}.$$
Without loss of generality, we can assume that $\tau$ is independent of $n$. The two conditions on the choice of $\alpha_k$ are that for all $\alpha \in (0, \alpha_k] \subset (0, 1]$,
$$(5.6)\qquad \min(X(\alpha)Y(\alpha)e) \ge \tau\,x(\alpha)^Ty(\alpha)/n,$$
and
$$(5.7)\qquad x(\alpha)^Ty(\alpha) \ge \nu(\alpha)\,x^{0T}y^0.$$
From the continuity of $(x(\alpha), y(\alpha))$, it should be clear that if $\alpha_k$ satisfies (5.6) and (5.7), then
$$x^{k+1T}y^{k+1} = x(\alpha_k)^Ty(\alpha_k) \ge 0 \quad\text{and}\quad (x^{k+1}, y^{k+1}) > 0.$$

Condition (5.6) is essential for interior-point methods. Its role is to prevent iterates from prematurely getting too close to the boundary of the positive orthant. Compared with many interior-point algorithms, our restriction (5.6) on $\tau$ is quite liberal. For example, $\tau$ can be chosen to be very small as long as it is independent of $n$. This means that our requirement of centrality for the iterates is very mild.

In addition, observe that condition (5.7) implies that
$$\frac{x^{kT}y^k}{x^{0T}y^0} \ge \nu_k = \frac{r_k}{r_0}.$$
So condition (5.7) gives a priority to feasibility over complementarity. Roughly speaking, we do not want to have complementarity without feasibility.

It is now widely accepted that in order for interior-point algorithms to obtain good numerical performance, large steplengths (eventually close to one or approaching one) should be taken. We believe that conditions (5.6) and (5.7) do not prevent an algorithm from eventually taking large steps, though this issue deserves further study.

Observe that $x^{pT}y^p = 0$ is possible only if $\nu_p = 0$. In this case of $x^{pT}y^p = 0$, $(x^p, y^p)$ will be a solution to (2.1) and Algorithm 1 can be terminated. However, it seems extremely unlikely for $x^{pT}y^p = 0$ to happen in practice. Without loss of generality, we will not consider this finite termination case but assume that $\{(x^k, y^k)\}$ is an infinite sequence.

For notational convenience, we rewrite conditions (5.6) and (5.7) as follows. For all $\alpha \in (0, \alpha_k] \subset (0, 1]$,
$$(5.8)\qquad g_1(\alpha) \equiv \min(X(\alpha)Y(\alpha)e) - \tau\,x(\alpha)^Ty(\alpha)/n \ge 0,$$
$$(5.9)\qquad g_2(\alpha) \equiv x(\alpha)^Ty(\alpha) - \nu(\alpha)\,x^{0T}y^0 \ge 0.$$

It is clear that $g_1(\alpha)$ is a piecewise quadratic and $g_2(\alpha)$ is a quadratic. At each iteration, our objective is to minimize the merit function subject to both (5.8) and (5.9). Hence, we choose $\alpha_k$ as
$$(5.10)\qquad \alpha_k = \arg\max_{\alpha \in (0, 1]}\{\delta(\alpha) : g_i(\alpha') \ge 0 \text{ for all } \alpha' \le \alpha,\; i = 1, 2\}.$$
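As a brute-force sketch of what conditions (5.8)-(5.9) admit, one can scan a grid on $(0, 1]$ and keep the trial steps at which both hold. The helper name and the grid resolution are our own assumptions; the scan merely stands in for exact root computations:

```python
def feasible_steps(x, y, dx, dy, nu_k, x0ty0, tau, grid=1000):
    """Return the grid points alpha in (0, 1] at which both
    g1(alpha) >= 0 (centrality, (5.8)) and g2(alpha) >= 0 ((5.9)) hold.

    x, y, dx, dy are length-n lists; nu_k and x0ty0 = x0^T y0 are scalars."""
    n = len(x)
    ok = []
    for i in range(1, grid + 1):
        a = i / grid
        xs = [x[j] + a * dx[j] for j in range(n)]
        ys = [y[j] + a * dy[j] for j in range(n)]
        xty = sum(xs[j] * ys[j] for j in range(n))
        g1 = min(xs[j] * ys[j] for j in range(n)) - tau * xty / n
        g2 = xty - (1.0 - a) * nu_k * x0ty0
        if g1 >= 0 and g2 >= 0:
            ok.append(a)
    return ok
```

For a well-centered one-dimensional trial step both conditions hold on the whole interval, so every grid point survives.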



Since $\delta(\alpha)$ is a quadratic, it is not difficult to determine the solution of (5.10). The lemma below gives the solution.

LEMMA 5.1. If conditions (5.6) and (5.7) are satisfied for some $\alpha > 0$, then problem (5.10) has the unique solution
$$(5.11)\qquad \alpha_k = \begin{cases} \min(1, \alpha_k^1, \alpha_k^2), & \text{if } \Delta x^{kT}\Delta y^k \le 0, \\ \min(1, \alpha_k^1, \alpha_k^3), & \text{if } \Delta x^{kT}\Delta y^k > 0, \end{cases}$$
where
$$(5.12)\qquad \alpha_k^1 = \min\{\alpha > 0 : g_1(\alpha) = 0\},$$
$$(5.13)\qquad \alpha_k^2 = \begin{cases} 1, & \text{if } \nu_k = 0 \text{ or } \Delta x^{kT}\Delta y^k \ge 0, \\ \min\{\alpha > 0 : g_2(\alpha) = 0\}, & \text{otherwise}, \end{cases}$$
and
$$(5.14)\qquad \alpha_k^3 = \frac{(1 - \sigma_k)x^{kT}y^k + \nu_k r_0}{2\,\Delta x^{kT}\Delta y^k}.$$

Proof. We first verify the existence of $\alpha_1^k$ and $\alpha_2^k$. The existence of $\alpha_1^k > 0$ is known; see [22] for a proof. If $\nu_k = 0$, then (5.6) implies (5.7), i.e., $g_2(\alpha) \ge 0$ holds for all $\alpha \in (0, \alpha_1^k]$. Moreover, when $\Delta x^{kT}\Delta y^k \ge 0$,

$g_2(\alpha) = (1-\alpha)(x^{kT}y^k - \nu_k x^{0T}y^0) + (\alpha\sigma_k x^{kT}y^k + \alpha^2\, \Delta x^{kT}\Delta y^k) \ge 0$

holds for all $\alpha \in (0,1]$. This means that in the case of $\nu_k = 0$ or $\Delta x^{kT}\Delta y^k \ge 0$, (5.9) becomes redundant, leading to the first relation in (5.13) without loss of generality. Otherwise, a positive and finite-valued $\alpha_2^k$ is defined by the second relation in (5.13).

When $\Delta x^{kT}\Delta y^k \le 0$, the minimum of the quadratic $\hat\phi(\alpha)$ is reached at the boundary of the feasible interval, confirming the first relation in (5.11). When $\Delta x^{kT}\Delta y^k > 0$, it is easy to see that $\alpha_3^k$ is the global minimizer of the quadratic $\hat\phi(\alpha)$. So $\hat\phi(\alpha)$ attains its constrained minimum at $\alpha_3^k$ whenever it is inside the feasible interval $(0, \min(1, \alpha_1^k)]$ (note that $\alpha_2^k = 1$ in this case); otherwise the constrained minimum will be attained at the boundary. This confirms the second relation in (5.11). It also proves the lemma. □

As mentioned earlier, $g_1(\alpha)$ is piecewise quadratic and $g_2(\alpha)$ is a quadratic. Finding the roots of $g_1(\alpha)$ requires at most $O(n)$ operations.
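Computing the centrality steplength bound amounts to finding, for each of the $n$ components, the smallest positive root of a quadratic. A minimal sketch of this computation (the helper name, tolerances, and use of `numpy.roots` are our own, not from the paper):

```python
import numpy as np

def alpha_1(x, y, dx, dy, gamma):
    """Largest step a in (0, 1] for which the centrality condition
    min_i x_i(a)*y_i(a) >= (gamma/n) * x(a)^T y(a) still holds, where
    x(a) = x + a*dx and y(a) = y + a*dy.  Each component contributes
    one quadratic in a, so the cost is O(n)."""
    n = len(x)
    best = 1.0
    s0, s1, s2 = x.dot(y), x.dot(dy) + y.dot(dx), dx.dot(dy)
    for i in range(n):
        # q_i(a) = x_i(a)*y_i(a) - (gamma/n)*x(a)^T y(a), a quadratic in a
        c0 = x[i]*y[i] - (gamma/n)*s0          # q_i(0) >= 0 by condition (5.6)
        c1 = x[i]*dy[i] + y[i]*dx[i] - (gamma/n)*s1
        c2 = dx[i]*dy[i] - (gamma/n)*s2
        if abs(c2) > 1e-14:
            roots = np.roots([c2, c1, c0])
        elif abs(c1) > 1e-14:
            roots = np.array([-c0/c1])
        else:
            roots = np.array([])
        pos = [r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 1e-12]
        if pos:
            best = min(best, min(pos))
    return best
```

Since $q_i(0) \ge 0$ for every component, the first positive root of any $q_i$ marks where centrality first fails, and the minimum over components is the bound.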

Now we are in a position to state a complete infeasible interior-point algorithm for solving the horizontal LCP.

ALGORITHM 2 (INFEASIBLE INTERIOR-POINT ALGORITHM). Let $(x^0, y^0)$ satisfy (4.1). For $k = 0, 1, 2, \dots$, do the following:

1. Choose $\sigma_k \in (0, 1)$ and let $\mu_k = \sigma_k x^{kT}y^k/n$. Solve the linear system (3.1) for $(\Delta x^k, \Delta y^k)$.

2. Set the steplength $\alpha_k$ by (5.11). Let $x^{k+1} = x^k + \alpha_k\Delta x^k$ and $y^{k+1} = y^k + \alpha_k\Delta y^k$.

Obviously, Algorithm 2 is a special case of Algorithm 1, so all the properties of Algorithm 1 previously stated also apply to Algorithm 2.
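The two steps above can be sketched as follows, assuming the Newton system (3.1) has the standard primal-dual form $M\Delta x + N\Delta y = -r^k$, $Y^k\Delta x + X^k\Delta y = \mu_k e - X^kY^ke$. For brevity, the sketch replaces the exact steplength rule (5.11) with a simple fraction-to-the-boundary cut, so it illustrates the framework rather than the paper's precise algorithm:

```python
import numpy as np

def algorithm2_sketch(M, N, h, x, y, sigma=0.25, tol=1e-8, maxit=200):
    """One possible reading of Algorithm 2 (a sketch): damped Newton
    steps toward Mx + Ny = h and XYe = mu*e, with centering parameter
    sigma and a fraction-to-the-boundary cut in place of (5.11)."""
    n = len(x)
    for _ in range(maxit):
        r = M @ x + N @ y - h
        if x.dot(y) + np.linalg.norm(r) <= tol:   # merit-function test
            break
        mu = sigma * x.dot(y) / n
        # Newton system (3.1): M dx + N dy = -r,  Y dx + X dy = mu*e - XYe
        A = np.block([[M, N], [np.diag(y), np.diag(x)]])
        d = np.linalg.solve(A, np.concatenate([-r, mu - x * y]))
        dx, dy = d[:n], d[n:]
        # keep (x, y) strictly positive (0.9995 fraction to the boundary)
        z, dz = np.concatenate([x, y]), np.concatenate([dx, dy])
        amax = min((-zi / dzi for zi, dzi in zip(z, dz) if dzi < 0),
                   default=np.inf)
        a = min(1.0, 0.9995 * amax)
        x, y = x + a * dx, y + a * dy
    return x, y
```

Because the step is damped by a single factor on both variables, the residual $r^k$ shrinks by the factor $(1-\alpha_k)$ at each iteration, which is the mechanism the convergence analysis exploits.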


6. Technical lemmas. In this section, we prove several more technical lemmas. These technical results will be used to establish our main convergence results.

LEMMA 6.1. Let $\{(x^k,y^k)\}$ be generated by Algorithm 2. For any $(x^*,y^*) \in \mathcal{S}$,

$\dfrac{(x^k - u^k)^T y^k + (y^k - v^k)^T x^k}{x^{kT}y^k} \le 1 + \dfrac{(x^0-u^0)^T y^* + (y^0-v^0)^T x^* + (x^0-u^0)^T(y^0-v^0)}{x^{0T}y^0}.$

Moreover, if $\mathcal{F}_+ \ne \emptyset$, then $\{(x^k,y^k)\}$ is bounded.

Proof. Let $(\hat x, \hat y) \in \mathcal{F}$. Consider the product $(\hat x - u^k)^T(\hat y - v^k)$. In view of Assumption A2 and Lemma 4.1(1),

$(\hat x - u^k)^T(\hat y - v^k) \ge 0.$

Hence,

(6.1) $\hat x^T y^k + \hat y^T x^k \le \hat x^T\hat y + x^{kT}y^k + (x^k - u^k)^T(\hat y - v^k) + (y^k - v^k)^T(\hat x - u^k) - (x^k-u^k)^T(y^k-v^k).$

Replacing $\hat x - u^k$ by $(\hat x - x^k) + (x^k - u^k)$ and $\hat y - v^k$ by $(\hat y - y^k) + (y^k - v^k)$ in (6.1), we can rewrite the last three terms in (6.1) as

$(x^k-u^k)^T(\hat y - y^k) + (y^k-v^k)^T(\hat x - x^k) + (x^k-u^k)^T(y^k-v^k).$

Substituting the above into (6.1) and rearranging, we obtain

(6.2) $\hat x^T y^k + \hat y^T x^k + (x^k - u^k)^T y^k + (y^k - v^k)^T x^k \le \hat x^T \hat y + x^{kT}y^k + (x^k-u^k)^T\hat y + (y^k-v^k)^T\hat x + (x^k-u^k)^T(y^k-v^k).$

Note that $x^{kT}y^k \le \phi_k \le \phi_0$, $(x^k, y^k) > 0$, and (see Lemma 4.1)

$0 \le x^k - u^k \le x^0 - u^0 \quad\text{and}\quad 0 \le y^k - v^k \le y^0 - v^0.$

It follows from (6.2) and (5.7) that for any $(x^*, y^*) \in \mathcal{S}$ the inequality of the lemma holds.


This proves the first part of the lemma. Moreover, if $\mathcal{F}_+ \ne \emptyset$, then it follows from (6.2) that for any fixed $(\hat x, \hat y) \in \mathcal{F}_+$

$\hat y^T x^k + \hat x^T y^k \le \hat x^T\hat y + \phi_0 + (x^0 - u^0)^T\hat y + (y^0 - v^0)^T\hat x + (x^0-u^0)^T(y^0-v^0).$

This proves the boundedness of $\{(x^k, y^k)\}$ and completes the proof. □

Before we proceed to the next lemma, let us define the following quantities:

(6.3) $\zeta_k = \sqrt{n/\gamma}\;\dfrac{(x^k - u^k)^T y^k + (y^k - v^k)^T x^k}{x^{kT}y^k},$

(6.4) $\eta_k = 1 - 2\sigma_k + (\sigma_k)^2/\gamma + 2(x^0-u^0)^T(y^0-v^0)/x^{0T}y^0,$

and

(6.5) $\omega_k = \left(\zeta_k + (\zeta_k^2 + \eta_k)^{1/2}\right)^2.$

LEMMA 6.2. Let $\{(x^k,y^k)\}$ and $\{(\Delta x^k, \Delta y^k)\}$ be generated by Algorithm 2, and $D^k$ be given by (3.2). Then

(6.6) $\|D^k\Delta x^k\|^2 + \|(D^k)^{-1}\Delta y^k\|^2 \le \omega_k\, x^{kT}y^k.$

Moreover, the sequence $\{\omega_k\}$ is bounded.

Proof. Let

(6.7) $t_k = \left(\|D^k\Delta x^k\|^2 + \|(D^k)^{-1}\Delta y^k\|^2\right)^{1/2}.$

Notice that

(6.8) $\|D^k\Delta x^k\| \le t_k \quad\text{and}\quad \|(D^k)^{-1}\Delta y^k\| \le t_k.$

Since $(u^k, v^k) \in \mathcal{A}$ by Lemma 4.1, it can be easily verified that

$M(\Delta x^k + x^k - u^k) + N(\Delta y^k + y^k - v^k) = 0.$

Hence, it follows from Assumption A2 that

$(\Delta x^k + x^k - u^k)^T(\Delta y^k + y^k - v^k) \ge 0.$

Using this fact and (6.8), we have the following estimate:

$\Delta x^{kT}\Delta y^k = [(\Delta x^k + x^k - u^k) - (x^k - u^k)]^T[(\Delta y^k + y^k - v^k) - (y^k - v^k)]$
$= (\Delta x^k + x^k - u^k)^T(\Delta y^k + y^k - v^k) - (\Delta x^k + x^k - u^k)^T(y^k - v^k) - (x^k - u^k)^T(\Delta y^k + y^k - v^k) + (x^k - u^k)^T(y^k - v^k)$
$\ge -(x^k - u^k)^T\Delta y^k - (y^k - v^k)^T\Delta x^k - (x^k - u^k)^T(y^k - v^k)$
$= -[D^k(x^k - u^k)]^T[(D^k)^{-1}\Delta y^k] - [(D^k)^{-1}(y^k - v^k)]^T[D^k\Delta x^k] - (x^k-u^k)^T(y^k-v^k)$
$\ge -e^TD^k(x^k - u^k)\, t_k - e^T(D^k)^{-1}(y^k - v^k)\, t_k - (x^k - u^k)^T(y^k - v^k).$


Therefore,

(6.9) $\Delta x^{kT}\Delta y^k \ge -\left[e^TD^k(x^k - u^k) + e^T(D^k)^{-1}(y^k - v^k)\right]t_k - (x^k - u^k)^T(y^k - v^k).$

Let us further examine the two terms involving $D^k$ in the above inequality. It follows from condition (5.6) that

$X^kY^ke \ge (\gamma/n)\,x^{kT}y^k\, e.$

This leads to

$e^TD^k(x^k - u^k) \le \left(\dfrac{n}{\gamma\, x^{kT}y^k}\right)^{1/2}(x^k - u^k)^T y^k,$

and similarly,

$e^T(D^k)^{-1}(y^k - v^k) \le \left(\dfrac{n}{\gamma\, x^{kT}y^k}\right)^{1/2}(y^k - v^k)^T x^k.$

Thus, from (6.9) and the above two inequalities we have

$\Delta x^{kT}\Delta y^k \ge -\left(\dfrac{n}{\gamma\,x^{kT}y^k}\right)^{1/2}\left[(x^k - u^k)^T y^k + (y^k - v^k)^T x^k\right]t_k - (x^k-u^k)^T(y^k-v^k),$

or equivalently,

(6.10) $\Delta x^{kT}\Delta y^k \ge -\zeta_k\, (x^{kT}y^k)^{1/2}\, t_k - (x^k-u^k)^T(y^k-v^k).$

Furthermore, from Lemma 4.1 and condition (5.7),

(6.11) $(x^k-u^k)^T(y^k-v^k) \le \nu_k\, (x^0-u^0)^T(y^0-v^0) \le (x^0-u^0)^T(y^0-v^0)\, x^{kT}y^k / x^{0T}y^0.$

In view of Proposition 3.2(3), (6.10), and (6.11),

$t_k^2 \le \left(1 - 2\sigma_k + (\sigma_k)^2/\gamma\right)x^{kT}y^k - 2\,\Delta x^{kT}\Delta y^k \le \eta_k\, x^{kT}y^k + 2\zeta_k\, (x^{kT}y^k)^{1/2}\, t_k.$

This leads to the following inequality:

$t_k^2 - 2\zeta_k\, (x^{kT}y^k)^{1/2}\, t_k - \eta_k\, x^{kT}y^k \le 0.$


The quadratic $t^2 - 2(x^{kT}y^k)^{1/2}\zeta_k\, t - x^{kT}y^k\,\eta_k$ is convex and has a unique positive root at

$\bar t = (x^{kT}y^k)^{1/2}\left(\zeta_k + (\zeta_k^2 + \eta_k)^{1/2}\right).$

This implies

(6.12) $t_k^2 \le \left(\zeta_k + (\zeta_k^2 + \eta_k)^{1/2}\right)^2 x^{kT}y^k = \omega_k\, x^{kT}y^k,$

and verifies (6.6).

Finally, it is evident that $\{\eta_k\}$ is bounded, and so is $\{\zeta_k\}$ by Lemma 6.1. Consequently, $\{\omega_k\}$ is bounded. This completes the proof. □

As a result of the boundedness of $\{\omega_k\}$, we have

(6.13) $\|D^k\Delta x^k\|^2 + \|(D^k)^{-1}\Delta y^k\|^2 \le \omega\, x^{kT}y^k,$

where

(6.14) $\omega = \limsup_{k\to\infty}\omega_k < +\infty.$

It is worth noting that when $(x^k, y^k)$ is feasible, we have $x^k - u^k = y^k - v^k = 0$ and $\zeta_k = 0$. This implies that

$\omega_k = \eta_k = 1 - 2\sigma_k + (\sigma_k)^2/\gamma,$

and in turn

$\|D^k\Delta x^k\|^2 + \|(D^k)^{-1}\Delta y^k\|^2 \le \left(1 - 2\sigma_k + (\sigma_k)^2/\gamma\right)x^{kT}y^k.$

The above is a well-known bound for interior-point methods directly following from Proposition 3.2(3). In view of this, it appears that (6.13) is a reasonable extension of the above inequality from the feasible interior-point framework to the infeasible interior-point framework.

The next lemma gives lower bounds for $\alpha_k$ and $\delta_k$. We will use the following inequality:

(6.15) $|\Delta x^{kT}\Delta y^k| \le \|D^k\Delta x^k\|\,\|(D^k)^{-1}\Delta y^k\| \le \dfrac{t_k^2}{2},$

where $t_k$ is defined by (6.7) and the second inequality follows from the geometric-arithmetic mean inequality.
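The two inequalities in (6.15) are the Cauchy-Schwarz inequality followed by the geometric-arithmetic mean inequality, with the scaling matrix taken as $D^k = (X^k)^{-1/2}(Y^k)^{1/2}$ (the standard primal-dual scaling, which we assume matches (3.2)). A quick numerical spot-check on random data (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(0.5, 2.0, size=5), rng.uniform(0.5, 2.0, size=5)
dx, dy = rng.normal(size=5), rng.normal(size=5)

d = np.sqrt(y / x)                 # diagonal of D = X^{-1/2} Y^{1/2}
t2 = np.sum((d*dx)**2) + np.sum((dy/d)**2)   # t^2 from (6.7)

# (6.15): |dx^T dy| <= ||D dx|| * ||D^{-1} dy|| <= t^2 / 2
cs = np.linalg.norm(d*dx) * np.linalg.norm(dy/d)
assert abs(dx.dot(dy)) <= cs + 1e-12         # Cauchy-Schwarz
assert cs <= t2/2 + 1e-12                    # AM-GM: ab <= (a^2 + b^2)/2
```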

LEMMA 6.3. Let $\alpha_k$ be given by Lemma 5.1 and $\delta_k$ by (5.3). Then

(1) $\alpha_k \ge \min\left\{1,\ (1 - \gamma)\hat\sigma_k\, (x^{kT}y^k/n)/t_k^2\right\}$, where $\hat\sigma_k = \min(\sigma_k, 1 - \sigma_k)$;

(2) $\delta_k \ge \alpha_k\left(1 - \sigma_k - \alpha_k|\Delta x^{kT}\Delta y^k|/x^{kT}y^k\right)$.

Proof. (1) It suffices to show that for $i = 1, 2, 3$,

(6.16) $\alpha_i^k \ge \min\left\{1,\ \dfrac{(1-\gamma)\hat\sigma_k\, x^{kT}y^k/n}{t_k^2}\right\}.$


For $i = 1$, it follows from a proof similar to that of Lemma 3.4 in [22] that $\alpha_1^k$ satisfies

$\alpha_1^k \ge \min\left\{1,\ \dfrac{(1-\gamma)\hat\sigma_k\, x^{kT}y^k/n}{\|\Delta X^k\Delta y^k\|_\infty}\right\},$

where $\Delta X^k = \mathrm{diag}(\Delta x^k)$. It implies (6.16) because $\|\Delta X^k\Delta y^k\|_\infty \le \|D^k\Delta x^k\|\,\|(D^k)^{-1}\Delta y^k\| \le t_k^2$.

For $i = 2$, we need only consider the case of $\nu_k > 0$ and $\Delta x^{kT}\Delta y^k < 0$. In this case, it can be verified by substitution that

$g_2(\alpha) = (1-\alpha)(x^{kT}y^k - \nu_k x^{0T}y^0) + \alpha(\sigma_k x^{kT}y^k + \alpha\,\Delta x^{kT}\Delta y^k).$

Since $x^{kT}y^k - \nu_k x^{0T}y^0 \ge 0$, it is clear that $g_2(\alpha) \ge 0$ for all $\alpha \le \min(1, -\sigma_k x^{kT}y^k/\Delta x^{kT}\Delta y^k)$. By (5.13) and (6.15),

$\alpha_2^k \ge \min\left(1,\ -\dfrac{\sigma_k x^{kT}y^k}{\Delta x^{kT}\Delta y^k}\right) \ge \min\left(1,\ \dfrac{2\sigma_k x^{kT}y^k}{t_k^2}\right).$

So $\alpha_2^k$ also satisfies (6.16).

For $i = 3$, from (5.14), $\alpha_3^k$ satisfies

$\alpha_3^k \ge \dfrac{(1-\sigma_k)x^{kT}y^k}{2\,\Delta x^{kT}\Delta y^k} \ge \dfrac{(1-\sigma_k)x^{kT}y^k}{t_k^2},$

and thus (6.16). This proves (1).

(2) From (5.3),

$\delta_k = \dfrac{\alpha_k\left[(1-\sigma_k)x^{kT}y^k + \nu_k\|r^0\| - \alpha_k\Delta x^{kT}\Delta y^k\right]}{x^{kT}y^k + \nu_k\|r^0\|} \ge \alpha_k\left[1 - \sigma_k - \dfrac{\alpha_k\Delta x^{kT}\Delta y^k}{x^{kT}y^k + \nu_k\|r^0\|}\right] \ge \alpha_k\left(1 - \sigma_k - \dfrac{\alpha_k|\Delta x^{kT}\Delta y^k|}{x^{kT}y^k}\right).$

This completes the proof. □

7. Global convergence. In this section, we present our main convergence results for Algorithm 2 under Assumptions A1-A2. We first give a range for the choice of the centering parameter $\sigma_k$:

(7.1) $0 < \underline\sigma \le \sigma_k \le \tfrac{1}{2}.$

We should point out that the upper bound $\tfrac12$ is chosen for simplicity. It could be any number in $(0, 1)$ without affecting our results. Our first result is a global Q-linear convergence result.


THEOREM 7.1. Let $\{\phi_k\}$ be generated by Algorithm 2 with $\sigma_k$ satisfying (7.1). Then $\{\phi_k\}$ converges to zero at a global Q-linear rate; i.e., there exists $\delta \in (0, 1)$ such that

$\phi_{k+1} \le (1 - \delta)\phi_k, \qquad k = 0, 1, 2, \dots.$

Proof. In view of (5.4), we need only show that $\delta_k \ge \delta > 0$ for some $\delta \in (0, 1)$.

By Lemma 6.3(2), it will suffice to show that $\alpha_k$ is bounded away from zero and $|\Delta x^{kT}\Delta y^k|/x^{kT}y^k$ is bounded above. Without loss of generality, assume $\omega \ge 1$. Substituting (6.13) into Lemma 6.3(1), and noting that (7.1) implies $\hat\sigma_k = \sigma_k$, we obtain

(7.2) $\alpha_k \ge \min\left\{1,\ \dfrac{(1-\gamma)\sigma_k}{n\omega}\right\}.$

In view of (6.15) and Lemma 6.2,

$\dfrac{|\Delta x^{kT}\Delta y^k|}{x^{kT}y^k} \le \dfrac{t_k^2}{2\,x^{kT}y^k} \le \dfrac{\omega}{2}.$

Recall that $\alpha_k$ is the minimizer of $\hat\phi(\alpha)$ over the feasible interval. Therefore, for any $\bar\alpha \in (0, \alpha_k]$,

$\delta_k \ge \bar\alpha\left(1 - \sigma_k - \dfrac{\bar\alpha\,\omega}{2}\right) \ge \bar\alpha\left(\dfrac{1}{2} - \dfrac{\bar\alpha\,\omega}{2}\right).$

Let $\bar\alpha$ be the right-hand side of (7.2); then

(7.3) $\delta_k \ge \dfrac{(1-\gamma)\sigma_k}{2n\omega}\left(1 - \dfrac{(1-\gamma)\sigma_k}{n}\right) \ge \dfrac{(1-\gamma)\underline\sigma}{4n\omega} \equiv \delta > 0.$

This completes the proof. □

We observe that the key quantity involved in the estimate of $\delta_k$ is $\omega$, and in turn the key quantity involved in $\omega$ is the magnitude of infeasibility. Therefore, from our analysis the initial convergence rate of Algorithm 2 depends on the initial infeasibility. Under further specifications on the starting point, we will establish a polynomial bound within which Algorithm 2 can produce an approximate solution with $\epsilon$-feasibility and $\epsilon$-complementarity.

Let us consider specific choices for $(u^0, v^0) \in \mathcal{A}$ and $(x^0, y^0) > 0$. Since the matrix $[M\ N]$ has full row rank (see Proposition 3.1), we can define

(7.4) $(u^0, v^0) = [M\ N]^T\left(MM^T + NN^T\right)^{-1}h.$

Since $(u^0, v^0)$ is the minimum-norm solution of the linear system $Mu + Nv = h$, we have

(7.5) $\|(u^0, v^0)\| = \min\{\|(x, y)\| : (x, y) \in \mathcal{A}\} \le \min\{\|(x^*, y^*)\| : (x^*, y^*) \in \mathcal{S}\} \equiv \rho^*.$

One possible choice for $(x^0, y^0)$ is as follows:

(7.6) $(x^0, y^0) = \rho\,(e, e), \qquad \rho \ge \|(u^0, v^0)\|.$

Clearly, $(x^0, y^0) > 0$ and $(x^0, y^0) \ge (u^0, v^0)$; i.e., $(x^0, y^0)$ satisfies (4.1). Moreover, we have the following lemma.


LEMMA 7.2. Let $(x^0, y^0)$ and $(u^0, v^0)$ be defined by (7.6) and (7.4), respectively, and let $\omega$ be given by (6.14). If

(7.7) $\rho \ge \rho^*/(\beta\sqrt{n})$

for some positive constant $\beta$ independent of $n$, then

$\omega = \limsup_{k\to\infty}\omega_k = O(n).$

Proof. We first show that there exist positive constants $\beta_1$ and $\beta_2$, both independent of $n$, such that for some $(x^*, y^*) \in \mathcal{S}$

(7.8) $\left[(x^0 - u^0)^T y^* + (y^0 - v^0)^T x^*\right]/x^{0T}y^0 \le \beta_1$

and

(7.9) $(x^0 - u^0)^T(y^0 - v^0)/x^{0T}y^0 \le \beta_2.$

From the definition of $\rho^*$ in (7.5), we can choose $(x^*, y^*) \in \mathcal{S}$ such that

(7.10) $\|x^*\| \le \rho^* \quad\text{and}\quad \|y^*\| \le \rho^*.$

It follows from the construction of $(u^0, v^0)$ and $(x^0, y^0)$ that

(7.11) $\|x^0 - u^0\| \le \|x^0\| + \|u^0\| \le \rho(\sqrt{n} + 1),$

and

(7.12) $\|y^0 - v^0\| \le \|y^0\| + \|v^0\| \le \rho(\sqrt{n} + 1).$

Moreover, (7.6) and (7.7) together imply

(7.13) $x^{0T}y^0 = \rho^2 n \ge \rho\rho^*\sqrt{n}/\beta.$

It follows from the Cauchy-Schwarz inequality, (7.10), and (7.11) that

$(x^0 - u^0)^T y^* \le \|x^0 - u^0\|\,\|y^*\| \le \rho\rho^*(\sqrt{n} + 1).$

Similarly, we can show that

$(y^0 - v^0)^T x^* \le \rho\rho^*(\sqrt{n} + 1),$

and also

$(x^0 - u^0)^T(y^0 - v^0) \le \rho^2(\sqrt{n} + 1)^2.$

Hence, in view of (7.13) we have that (7.8) holds for

$\beta_1 = 4\beta \ge 2\beta\,\dfrac{\sqrt{n} + 1}{\sqrt{n}},$

and that (7.9) holds for

$\beta_2 = 4 \ge \dfrac{(\sqrt{n} + 1)^2}{n}.$


Considering Lemma 6.1 and substituting (7.8) and (7.9) into (6.3) and (6.4), we see that

$\zeta_k \le \sqrt{n/\gamma}\,(1 + \beta_1 + \beta_2) = \sqrt{n/\gamma}\,(4\beta + 5),$

and

$\eta_k \le 1 - 2\sigma_k + (\sigma_k)^2/\gamma + 2\beta_2 \le 1/\gamma + 9.$

Finally, substituting the above bounds for $\zeta_k$ and $\eta_k$ into (6.5), the definition of $\omega_k$, and taking the upper limit, we have $\omega = O(n)$. This completes the proof. □

Using Theorem 7.1 and Lemma 7.2, we obtain the following complexity result for Algorithm 2.

THEOREM 7.3. Let $(x^0, y^0)$ be given by (7.6) with $\rho$ satisfying (7.7), and let $\{\phi_k\}$ be generated by Algorithm 2 with $\sigma_k$ satisfying (7.1). Assume that for given $\epsilon > 0$

(7.14) $\phi_0 = x^{0T}y^0 + \|r^0\| \le 1/\epsilon^{\tau},$

where $\tau > 0$ is independent of $n$. Then there exists a positive constant

(7.15) $K_\epsilon = O(n^2\ln(1/\epsilon))$

such that $\phi_k \le \epsilon$ for $k \ge K_\epsilon$.

Proof. It follows from Theorem 7.1 and a standard argument that $\phi_k \le \epsilon$ for

$k \ge K_\epsilon \equiv \dfrac{\ln(\phi_0/\epsilon)}{\delta} = O\!\left(\dfrac{\ln(1/\epsilon)}{\delta}\right),$

where we may take $\delta = \liminf \delta_k$. From (7.3),

$\dfrac{1}{\delta} \le \dfrac{4n\omega}{(1-\gamma)\underline\sigma} = O(n\omega).$

Hence,

$K_\epsilon = O(n\omega\ln(1/\epsilon)).$

By Lemma 7.2, $\omega = O(n)$, so we obtain (7.15), and this completes the proof. □

8. Stopping criteria. We note that Algorithm 2 is a generic algorithm that does not contain stopping criteria. Theorem 7.1 states that as long as there exists a feasible point, Algorithm 2 always converges, in the sense that the merit function converges to zero along the iterates, which in turn implies that the iterates approach the solution set.

To implement an infeasible interior-point algorithm in the framework of Algorithm 2, we need certain stopping criteria. One natural criterion is

(8.1) $\phi_k \le \epsilon,$


where $\epsilon$ is a prescribed tolerance for optimality. However, when a horizontal linear complementarity problem (2.1) does not have a solution, the merit function will not converge to zero but to a positive value (recall that it is monotonically decreasing). In this case, we use another stopping criterion, which we describe below.

At each iteration, we calculate the quantity

$\theta_k = \dfrac{(x^k - u^k)^T y^k + (y^k - v^k)^T x^k}{x^{kT}y^k}$

and terminate Algorithm 2 if

(8.2) $\theta_k > 4\beta + 5,$

where $\beta$ is any positive constant independent of the problem size.

In [8], Kojima, Megiddo, and Mizuno considered the problem of giving certain infeasibility information when an algorithm stops without obtaining an approximate solution. Here we consider the same problem for Algorithm 2 equipped with the stopping criteria (8.1) and (8.2).

THEOREM 8.1. Let Algorithm 2 be equipped with the stopping criteria (8.1) and (8.2), and let Assumption A2 hold (Assumption A1 may or may not hold). Given $\epsilon > 0$, let $(x^0, y^0) = \rho(e, e)$ satisfy (7.6) and (7.14). In at most $O(n^2\ln(1/\epsilon))$ iterations Algorithm 2 terminates at either criterion (8.1) or (8.2). Moreover, if it terminates at (8.2), then there is no solution $(x^*, y^*)$ of Problem (2.1) such that $\|(x^*, y^*)\| \le \rho\beta\sqrt{n}$.

Proof. It is easy to see that the quantity $\theta_k$ is the left-hand side of the inequality in Lemma 6.1.

At each iteration, as long as the algorithm does not stop, $\theta_k$ is bounded above by $4\beta + 5$, which implies $\omega_k = O(n)$ (see (6.3)-(6.5)). From Theorem 7.3, the algorithm has to stop in at most $O(n^2\ln(1/\epsilon))$ iterations.

If Problem (2.1) has a solution $(x^*, y^*)$ such that $\|(x^*, y^*)\| \le \rho\beta\sqrt{n}$, then (7.7) holds. From the proof of Lemma 7.2, we have $\theta_k \le 4\beta + 5$. Therefore, the algorithm cannot stop at (8.2). This completes the proof. □

In normal cases, it should be reasonable to choose $\epsilon$ sufficiently small and $\beta$ sufficiently large.
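The two tests can be sketched as follows, taking the merit function to be $x^Ty + \|Mx + Ny - h\|$ as in (7.14); the function name and return labels are our own:

```python
import numpy as np

def check_stop(M, N, h, x, y, u, v, eps, beta):
    """Sketch of the stopping tests of Section 8: (8.1) stops with an
    approximate solution when the merit function phi falls below eps;
    (8.2) reports likely infeasibility when the Lemma 6.1 quantity
    theta exceeds 4*beta + 5."""
    phi = x.dot(y) + np.linalg.norm(M @ x + N @ y - h)
    if phi <= eps:
        return "approximate solution"
    theta = ((x - u).dot(y) + (y - v).dot(x)) / x.dot(y)
    if theta > 4*beta + 5:
        return "no solution with norm <= rho*beta*sqrt(n)"
    return "continue"
```

Here $(u, v) = (u^k, v^k)$ is the shifted feasible pair of Lemma 4.1, so $\theta_k$ is available at essentially no extra cost per iteration.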

9. Further comments. In this section, we briefly summarize the results of thecurrent work, and then offer some relevant comments on the important issues ofcomplexity, finite termination, and local convergence.

9.1. Summary. Infeasible interior-point algorithms in the framework of the centered and damped Newton method have empirically proved to be among the most efficient interior-point algorithms for linear programming. But until now, their theoretical convergence properties were not well understood. In this paper, we resolved the convergence issue for an infeasible interior-point algorithm from this framework with a particular steplength control procedure, and also established an $O(n^2\ln(1/\epsilon))$ iteration complexity bound for the algorithm to attain feasibility and complementarity within the $\epsilon$ accuracy. No significant restriction was imposed upon the algorithmic framework, except for a quite liberal control on the steplength selection.

9.2. Complexity. The study of worst-case complexity has been one of the focalpoints of research on interior-point methods. In the meantime, however, the inad-equacy of worst-case complexity as a measure of computational efficiency has been


long and increasingly acknowledged. Currently, the best measure of efficiency is stillnumerical performance. It is unfortunate that worst-case complexity bounds obtainedfor noncombinatorial algorithms, including both feasible interior-point and infeasibleinterior-point ones, are often too conservative or too far from average cases. As aresult, it seems only appropriate to use worst-case complexity as a qualitative, ratherthan quantitative, measure of reliability. An algorithm with a polynomial complexitybound is likely more reliable than one without; but for two polynomial algorithms,a small difference in their presently available worst-case complexity bounds provides,in our opinion, little or no information as to which one of the two is likely to be morereliable or more efficient. Based on this belief, we do not consider it of practical valueto pursue a slightly better complexity bound at the price of imposing more stringentrestrictions on an otherwise practically efficient algorithm. This was precisely thereason behind our effort to keep the algorithm analyzed as close to the algorithmimplemented as we can.

9.3. Finite termination. It is well known that in theory interior-point methods for linear programming can be terminated in a finite number of iterations thanks to the combinatorial nature of linear programming. Recently, Ye [21], and Mehrotra and Ye [16] proposed and studied effective finite termination procedures for primal-dual interior-point linear programming algorithms. Following a similar line of analysis, one can show that for a given linear program, an infeasible interior-point approximate solution $(x^k, y^k)$ satisfying $\epsilon$-feasibility and $\epsilon$-complementarity, as well as condition (5.6), will eventually provide the correct zero-nonzero partition when $\epsilon$ is small enough. Therefore, effective finite termination procedures can also be applied to infeasible interior-point algorithms.

9.4. Local convergence. An important aspect of interior-point methods is their local behavior. The analysis of the local convergence rate for primal-dual interior-point methods has recently attracted considerable research interest. At this point, it is natural to ask whether or not infeasible interior-point methods can also achieve superlinear and quadratic convergence as their feasible interior-point counterparts. So far, ample empirical evidence suggests that the answer should be in the affirmative. In [25], Zhang, Tapia, and Potra already gave sufficient conditions for infeasible interior-point methods in the framework of Algorithm 1 to attain superlinear convergence. These conditions are extensions of those for feasible interior-point methods (see [24] and [23]); basically, they ensure $\sigma_k \to 0$ and $\alpha_k \to 1$. Although it does not seem particularly difficult to establish a fast convergence rate for infeasible interior-point algorithms under the assumption of nondegeneracy or convergence of the iteration sequence, it is of more interest to do so without these assumptions. This is currently a subject of further investigation.

Acknowledgment. I am indebted to Richard Tapia for his persistence in encour-aging me to carry through this investigation, especially at times when the outcome didnot appear particularly attractive. I am also grateful to M. Seetharama Gowda for hisobservation that led to Proposition 3.1, and to an anonymous referee for constructivecomments and suggestions.

REFERENCES

[1] K. M. ANSTREICHER, A combined phase I-phase II projective algorithm for linear programming, Math. Programming, 43 (1989), pp. 209-223.


[2] K. M. ANSTREICHER, A combined phase I-phase II scaled potential algorithm for linear programming, Math. Programming, 52 (1991), pp. 429-440.

[3] R.W. COTTLE, J. S. PANG, AND R. E. STONE, The linear complementarity problem, AcademicPress, Boston, 1992.

[4] G. DE GHELLINCK AND J.-PH. VIAL, A polynomial Newton method for linear programming,Algorithmica, 1 (1986), pp. 425-453.

[5] A. V. FIACCO AND G. P. MCCORMICK, Nonlinear Programming: Sequential UnconstrainedMinimization Techniques, John Wiley, New York 1968 and SIAM, Philadelphia, 1990.

[6] O. GÜLER, Generalized Linear Complementarity Problems and Interior Point Algorithms for Their Solutions, Tech. Report, Faculty of Technical Mathematics and Computer Science, Delft University of Technology, the Netherlands, April 1992.

[7] T. ISHIHARA AND M. KOJIMA, On the Big M in the Affine Scaling Algorithm, Res. Report on Information Sciences, B-255, Dept. of Information Sciences, Tokyo Institute of Technology, Tokyo, Japan, 1992.

[8] M. KOJIMA, N. MEGIDDO, AND S. MIZUNO, A Primal-Dual Exterior Point Algorithm forLinear Programming, Res. Report RJ 8500, IBM Almaden Research Center, San Jose,CA, 1991.

[9] M. KOJIMA, N. MEGIDDO, AND W. NOMA, Homotopy continuation methods for complemen-tarity problems, Math. Oper. Res., 16 (1991), pp. 754-774.

[10] M. KOJIMA, S. MIZUNO, AND A. YOSHISE, A Little Theorem of the Big M in Interior Point Algorithms, Res. Report on Information Sciences, B-239, Dept. of Information Sciences, Tokyo Institute of Technology, Tokyo, Japan, 1991.

[11] I. J. LUSTIG, R. E. MARSTEN, AND D. F. SHANNO, Computational experience with a primal-dual interior point method for linear programming, Linear Algebra Appl., 152 (1991),pp. 191-222.

[12] , On implementing Mehrotra’s predictor-corrector interior point method for linear pro-gramming, SIAM J. Optimization, 2 (1992), pp. 435-449.

[13] K. A. MCSHANE, C.L. MONMA, AND D.F. SHANNO, An implementation of a primal-dualinterior point method for linear programming, ORSA J. Computing, 1 (1989), pp. 70-83.

[14] N. MEGIDDO, Pathways to the optimal set in linear programming, in Progress in Mathematical Programming, Interior-Point and Related Methods, N. Megiddo, ed., Springer-Verlag, New York, 1989, pp. 131-158.

[15] S. MEHROTRA, On the implementation of a primal-dual interior point method, SIAM J. Opti-mization, 2 (1992), pp. 575-601.

[16] S. MEHROTRA AND Y. YE, On finding the optimal facet of linear programs, Tech. Report,Department of IE and MS, Northwestern University, Evanston, IL, 1991.

[17] M. J. TODD, On Anstreicher’s Combined Phase I-Phase II Projective Algorithm for LinearProgramming, Tech. Report 776, School of Operations Research and Industrial Engineering,Cornell University, Ithaca, NY, 1989; Math. Programming, to appear.

[18] M. J. TODD AND Y. WANG, On Combined Phase I-Phase II Projective Methods for Linear Programming, Tech. Report 877, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY, December 1989; Algorithmica, to appear.

[19] Q. Xu, On the New Linear Programming Algorithms--New Sequential Penalty FunctionMethod and Two Point Barrier Function Method, Ph.D. thesis, Institute of Nuclear Tech-nology, Tsinghua University, Beijing, China, 1991. (In Chinese.)

[20] Y. YE, A Low Complexity Combined Phase I-Phase II Potential Reduction Algorithm forLinear Programming, Working Paper Series No. 91-1, College of Business Administration,University of Iowa, Iowa City, IA, 1991.

[21] , On the Finite Convergence of Interior-Point Algorithms for Linear Programming,Working Paper 91-5, the College of Business Administration, University of Iowa, IowaCity, IA, 1991.

[22] Y. ZHANG AND R. A. TAPIA, A superlinearly convergent polynomial primal-dual interior-pointalgorithm for linear programming, SIAM J. Optimization, 3 (1993), pp. 118-133.

[23] , Superlinear and quadratic convergence of primal-dual interior point algorithms forlinear programming revisited, J. Optim. Theory Appl., 73 (1992), pp. 229-242.

[24] Y. ZHANG, R. A. TAPIA, AND J. E. DENNIS, JR., On the superlinear and quadratic convergenceof primal-dual interior point linear programming algorithms, SIAM J. Optimization, 2(1992), pp. 304-324.

[25] Y. ZHANG, R. A. TAPIA, AND F. POTRA, On the superlinear convergence of interior point algorithms for a general class of problems, SIAM J. Optimization, 3 (1993), pp. 413-422.