
* Fax: +39-06-4976-6765. E-mail address: gf.corradi@easpur.it (G. Corradi).

Computers & Operations Research 28 (2001) 209–222

Higher-order derivatives in linear and quadratic programming

Gianfranco Corradi*

Faculty of Economics, Department of Mathematics, University of Rome "La Sapienza", Via del Castro Laurenziano 9, 00161 Roma, Italy

Received 1 February 1998; received in revised form 1 July 1998

Abstract

An interior method for linear and quadratic programming which makes use of higher derivatives of the logarithmic barrier function is presented. A convergence analysis is given as well. Our computational experience shows that the method performs quite well and seems to be more reliable and robust than the standard method.

Scope and purpose

Linear programming problems were, for many years, the exclusive domain of the simplex algorithm developed by G.B. Dantzig in 1947. With the introduction of a new algorithm, developed by N.K. Karmarkar in 1984, an alternative computational approach became available for solving such problems. This algorithm established a new class of algorithms: interior point methods for linear programming. In this paper we introduce a barrier method for solving linear and quadratic programming problems [9–15] which makes use of higher-order derivatives. We note that a different approach to constructing higher-order interior point methods is presented in [1–4]. We believe that by making use of a higher-order approximation we may obtain faster convergence and a more robust algorithm than a method obtained using a second-order approximation. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Higher-order derivatives; Linear and quadratic programming; Interior methods

1. Introduction

In this paper we consider the following problems.

0305-0548/00/$ - see front matter © 2000 Elsevier Science Ltd. All rights reserved. PII: S0305-0548(99)00099-4

Page 2: Higher-order derivatives in linear and quadratic programming

1. The linear programming (LP) problem in the so-called standard form is given as

$$\min_x \; \langle c, x \rangle \quad \text{s.t. } Ax = b, \; x \ge 0, \tag{1}$$

where $A \in \mathbb{R}^{m \times n}$ and $\langle \cdot, \cdot \rangle$ denotes the Euclidean scalar product.

2. The convex quadratic programming (QP) problem in standard form is given as

$$\min_x \; \tfrac{1}{2}\langle x, Qx \rangle + \langle c, x \rangle \quad \text{s.t. } Ax = b, \; x \ge 0, \tag{2}$$

where $Q \succeq 0$ (positive semidefinite) and $A \in \mathbb{R}^{m \times n}$.

Next, we denote by $\|\cdot\|$ the Euclidean norm in $\mathbb{R}^n$, while $f^{(r)}(\cdot)$ denotes the $r$th derivative of $f(\cdot)$, $f: \mathbb{R}^n \to \mathbb{R}$. LP (1) is called the primal problem. Its dual may be written in the inequality form as

$$\max_y \; \langle b, y \rangle \quad \text{s.t. } A^T y \le c, \tag{3}$$

or in standard form

$$\max_{y,z} \; \langle b, y \rangle \quad \text{s.t. } A^T y + z = c, \; z \ge 0. \tag{4}$$

The vector $z$ in (4) is called the dual slack. The termination criteria in many interior LP methods are based on an important relationship between the primal and dual objective functions. Let $x$ be any primal-feasible point (satisfying $Ax = b$, $x \ge 0$) and $y$ any dual-feasible point (satisfying $A^T y \le c$), with $z$ the dual slack vector $c - A^T y$. It is straightforward to show that

$$\langle c, x \rangle - \langle b, y \rangle = \langle x, z \rangle \ge 0. \tag{5}$$

The necessarily nonnegative quantity $\langle c, x \rangle - \langle b, y \rangle$ is called the duality gap, and is zero if and only if $x$ and $y$ are optimal for the primal and dual.
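The duality-gap test behind identity (5) can be sketched in a few lines of NumPy; the tiny instance below is invented purely for illustration and is not one of the paper's test problems:

```python
import numpy as np

def duality_gap(c, b, x, y):
    """Duality gap <c,x> - <b,y> for primal-feasible x and dual-feasible y."""
    return float(c @ x - b @ y)

# Tiny instance: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])

x = np.array([0.5, 0.5])      # primal feasible: Ax = b, x >= 0
y = np.array([1.0])           # dual feasible: A^T y <= c
z = c - A.T @ y               # dual slack

# Identity (5): <c,x> - <b,y> = <x,z> >= 0
assert np.isclose(duality_gap(c, b, x, y), x @ z)
```

Here both points are optimal, so the gap is zero, which is exactly the termination criterion discussed above.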

Remark 1.1. Consider the problem

$$\min_x \; f(x) \quad \text{s.t. } Ax = b, \tag{6}$$

where $A \in \mathbb{R}^{m \times n}$ and $f: \mathbb{R}^n \to \mathbb{R}$. We denote by $g(\cdot)$ the gradient of $f(\cdot)$ and by $H(\cdot)$ the Hessian matrix of $f(\cdot)$. Then the conditions

$$A\bar{x} = b, \quad \text{(7a)} \qquad g(\bar{x}) = A^T \bar{y}, \quad \text{(7b)} \qquad N^T H(\bar{x})\, N \succeq 0, \quad \text{(7c)} \tag{7}$$


where $N \in \mathbb{R}^{n \times (n-m)}$ denotes any matrix whose columns form a basis for the null space of $A$, i.e. for the subspace of vectors $p$ such that $Ap = 0$, are necessary conditions for the point $\bar{x}$ to be an isolated solution of (6). The sufficient conditions for $\bar{x}$ to be an isolated solution of (6) are that (7a) and (7b) hold and that $N^T H(\bar{x})\, N$ is positive definite.
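The reduced-Hessian condition (7c) can be checked numerically by forming a null-space basis from the SVD. A minimal sketch (the helper names and the 3-variable example are ours, not the paper's):

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Columns form an orthonormal basis of {p : Ap = 0}, via the SVD of A."""
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:].T

def reduced_hessian_psd(A, H, tol=1e-10):
    """Check the second-order condition N^T H N >= 0 on the null space of A."""
    N = null_space_basis(A)
    eigvals = np.linalg.eigvalsh(N.T @ H @ N)
    return bool(eigvals.min() >= -tol)

A = np.array([[1.0, 1.0, 0.0]])        # one equality constraint in R^3
H = np.diag([1.0, 2.0, 3.0])           # positive definite Hessian
assert reduced_hessian_psd(A, H)
```

Note that $H$ itself need not be positive semidefinite for (7c) to hold; only its restriction to the null space of $A$ matters.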

2. Higher-order derivatives in linear programming

We assume that

1. The set of $x$ satisfying $Ax = b$, $x > 0$ is nonempty.
2. The set of $(y, z)$ satisfying $A^T y + z = c$, $z > 0$ is nonempty.
3. $\operatorname{rank}(A) = m$.

Because the inequality constraints in a standard-form LP problem are exclusively simple bounds,the corresponding logarithmic barrier function is

$$B(x, \mu) = \langle c, x \rangle - \mu \sum_{i=1}^n \log x_i, \tag{8}$$

so we solve the following barrier subproblem:

$$\min_x \; \langle c, x \rangle - \mu \sum_{i=1}^n \log x_i \quad \text{s.t. } Ax = b. \tag{9}$$

Subproblem (9) has a unique minimizer if Assumption 2 is satisfied. The gradient and Hessian of $B(x, \mu)$ in this case have particularly simple forms:

$$B'(x, \mu) = \nabla B(x, \mu) = c - \mu X^{-1} e, \qquad B''(x, \mu) = \nabla^2 B(x, \mu) = \mu X^{-2}, \tag{10}$$

where $X = \operatorname{diag}(x)$ and $e \in \mathbb{R}^n$ is the vector whose components are all equal to one; moreover,

$$\langle B'''(x, \mu)\, p p, p \rangle = -2\mu \sum_{i=1}^n \frac{p_i^3}{x_i^3}. \tag{11}$$
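Formulas (8), (10) and (11) are straightforward to code; the sketch below evaluates them and verifies the gradient against a finite difference (the point, cost vector and tolerances are invented for the example):

```python
import numpy as np

def barrier(x, c, mu):
    """Logarithmic barrier (8): B(x, mu) = <c,x> - mu * sum(log x_i), x > 0."""
    return float(c @ x - mu * np.sum(np.log(x)))

def barrier_grad(x, c, mu):
    """Gradient from (10): c - mu * X^{-1} e."""
    return c - mu / x

def barrier_hess(x, mu):
    """Hessian from (10): mu * X^{-2}, a diagonal matrix."""
    return mu * np.diag(1.0 / x**2)

def barrier_third_term(x, p, mu):
    """Third-order term (11): <B'''(x,mu) p p, p> = -2 mu * sum(p_i^3 / x_i^3)."""
    return float(-2.0 * mu * np.sum(p**3 / x**3))

# Finite-difference check of the gradient at a strictly positive point.
x = np.array([0.5, 2.0]); c = np.array([1.0, -1.0]); mu = 0.1
h = 1e-6
fd = np.array([(barrier(x + h * np.eye(2)[i], c, mu) - barrier(x, c, mu)) / h
               for i in range(2)])
assert np.allclose(fd, barrier_grad(x, c, mu), atol=1e-4)
```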

Assume that we have a point $x$ satisfying $Ax = b$ and $x > 0$, and that we wish to apply a barrier method to solve the standard-form LP (1). Using (10) and (11), the higher-order subproblem for (9) is

$$\min_p \; \langle B'(x, \mu), p \rangle + \tfrac{1}{2}\langle B''(x, \mu)\, p, p \rangle + \tfrac{1}{6}\langle B'''(x, \mu)\, p p, p \rangle \quad \text{s.t. } Ap = 0, \tag{12}$$

from which, using the formulas above, we have

$$\min_p \; \langle c - \mu X^{-1} e, p \rangle + \frac{\mu}{2}\langle X^{-2} p, p \rangle - \frac{\mu}{3} \sum_{i=1}^n \frac{p_i^3}{x_i^3} \quad \text{s.t. } Ap = 0. \tag{13}$$


Remark 2.1. Consider the Lagrangian function for (13):

$$L = \langle c - \mu X^{-1} e, p \rangle + \frac{\mu}{2}\langle X^{-2} p, p \rangle - \frac{\mu}{3} \sum_{i=1}^n \frac{p_i^3}{x_i^3} - \langle Ap, y \rangle. \tag{14}$$

If $p$ is a solution of problem (13) then there exists $y \in \mathbb{R}^m$ such that

$$L_p = c - \mu X^{-1} e + \mu X^{-2} p - \mu X^{-3} (\operatorname{diag} p)\, p - A^T y = 0, \tag{15}$$

$$L_y = Ap = 0. \tag{16}$$

From (15) and (16) we have

$$\begin{pmatrix} \mu X^{-2}(I - X^{-1}\operatorname{diag} p) & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} p \\ -y \end{pmatrix} = -\begin{pmatrix} \nabla B(x, \mu) \\ 0 \end{pmatrix},$$

from which

$$\begin{pmatrix} p \\ -y \end{pmatrix} = \begin{pmatrix} \mu X^{-2}(I - X^{-1}\operatorname{diag} p) & A^T \\ A & 0 \end{pmatrix}^{-1} \begin{pmatrix} -\nabla B(x, \mu) \\ 0 \end{pmatrix}. \tag{17}$$

If we set

$$\begin{pmatrix} \mu X^{-2}(I - X^{-1}\operatorname{diag} p) & A^T \\ A & 0 \end{pmatrix}^{-1} = \begin{pmatrix} H & T^T \\ T & U \end{pmatrix},$$

then

$$U = -(A W^{-1} A^T)^{-1}, \qquad T^T = W^{-1} A^T (-U), \qquad H = W^{-1} - W^{-1} A^T (-U)\, A W^{-1}, \tag{18}$$

where $W = W(x, \mu, p) = \mu X^{-2}(I - X^{-1}\operatorname{diag} p)$.

Remark 2.2. Note that from (17) and (18) the solution $p, y$ of system (15)–(16) satisfies

$$p = -H(p)\, \nabla B(x, \mu), \tag{19}$$

where

$$H(p) = W(x, \mu, p)^{-1} - W(x, \mu, p)^{-1} A^T \big(A W(x, \mu, p)^{-1} A^T\big)^{-1} A W(x, \mu, p)^{-1} \tag{20}$$

and

$$y = T\, \nabla B(x, \mu) = \big(A W(x, \mu, p)^{-1} A^T\big)^{-1} A W(x, \mu, p)^{-1} \nabla B(x, \mu). \tag{21}$$
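System (15)–(16) is nonlinear in $p$, because $W$ itself depends on $p$. One simple way to illustrate its solution is a fixed-point loop: freeze $W$ at the current $p$, solve the resulting linear KKT system, and repeat. This is only an illustrative sketch, not the paper's actual procedure, which uses a global method for systems of nonlinear equations:

```python
import numpy as np

def solve_direction(x, c, A, mu, iters=50, tol=1e-10):
    """Fixed-point sketch for system (15)-(16): freeze W(x, mu, p) at the
    current p, solve the linear KKT system (17), and repeat."""
    n = x.size
    m = A.shape[0]
    grad = c - mu / x                      # nabla B(x, mu), from (10)
    p = np.zeros(n)
    for _ in range(iters):
        # W(x, mu, p) = mu X^{-2} (I - X^{-1} diag p)
        W = mu * np.diag(1.0 / x**2) @ (np.eye(n) - np.diag(p / x))
        K = np.block([[W, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-grad, np.zeros(m)])
        sol = np.linalg.solve(K, rhs)      # sol[n:] corresponds to -y in (17)
        p_new = sol[:n]
        if np.linalg.norm(p_new - p) < tol:
            p = p_new
            break
        p = p_new
    return p

A = np.array([[1.0, 1.0]])
x = np.array([0.6, 0.4]); c = np.array([1.0, 2.0]); mu = 0.5
p = solve_direction(x, c, A, mu)
assert np.abs(A @ p).max() < 1e-8          # direction lies in the null space of A
```

The data above are invented; on this small instance the loop contracts quickly and the computed $p$ is a descent direction for $B(\cdot, \mu)$.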

We now present an algorithm, based on the discussion above, for solving problem (9). We note that a different approach to constructing higher-order interior point methods is presented in [1–4] and elsewhere.


Algorithm 2.1. Let $s > 0$, $0 < \beta < 1$, $0 < \sigma < \tfrac{1}{2}$, and let $x_1 \in \mathbb{R}^n$ be such that $A x_1 = b$, $x_1 > 0$ (proper values for $\beta$ may be 0.1, 0.5, and for $\sigma$, 0.1, $10^{-4}$).

Step 1: Set $k = 1$.
Step 2: If the optimality conditions for problem (9) are satisfied, then stop; else go to Step 3.
Step 3: Compute a solution $p_k$ of problem (13) at $x_k$.
Step 4: Compute $\lambda_k = \beta^{m_k} \cdot s$, where $m_k$ is the first nonnegative integer for which
$$B(x_k, \mu) - B(x_k + \lambda_k p_k, \mu) \ge -\sigma \lambda_k \langle p_k, \nabla B(x_k, \mu) \rangle.$$
Step 5: Set $x_{k+1} = x_k + \lambda_k p_k > 0$.
Step 6: Set $k = k + 1$ and go to Step 2.
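Step 4 is an Armijo backtracking rule, which can be sketched generically as below. For the log barrier the trial step must additionally be capped so that $x + \lambda p$ stays strictly positive (see Remark 2.6); the quadratic test function in the demo is our own and serves only to keep the example self-contained:

```python
import numpy as np

def armijo_step(B, grad_B, x, p, s=1.0, beta=0.5, sigma=1e-4, max_m=60):
    """Step 4 of Algorithm 2.1: lambda_k = beta^{m_k} * s, with m_k the first
    nonnegative integer such that
        B(x) - B(x + lam * p) >= -sigma * lam * <p, grad B(x)>."""
    slope = p @ grad_B(x)
    lam = s
    for _ in range(max_m):
        if B(x) - B(x + lam * p) >= -sigma * lam * slope:
            return lam
        lam *= beta                        # reduce the step by the factor beta
    return lam

# Demo on B(v) = 0.5 ||v||^2 with the steepest-descent direction p = -x0:
x0 = np.array([1.0, 0.0])
lam = armijo_step(lambda v: 0.5 * float(v @ v), lambda v: v, x0, -x0)
assert lam == 1.0                          # the full step already satisfies the test
```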

Lemma 2.1. Consider Algorithm 2.1. Then

$$\langle p_k, \nabla B(x_k, \mu) \rangle = -\mu \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle.$$

Proof. Since $p_k$ is a solution of problem (13), there exists $y_k$ such that $p_k, y_k$ solve system (15)–(16), for which $p_k$ satisfies the relation

$$p_k = -H(p_k)\, \nabla B(x_k, \mu), \tag{22}$$

where $H(p_k)$ is obtained from (20). Let $W_k = W(x_k, \mu, p_k)$; by (22) we have

$$W_k p_k = -W_k H(p_k)\, \nabla B(x_k, \mu) = -\nabla B(x_k, \mu) + A^T (A W_k^{-1} A^T)^{-1} A W_k^{-1} \nabla B(x_k, \mu).$$

It follows that

$$\langle p_k, W_k p_k \rangle = -\langle p_k, \nabla B(x_k, \mu) \rangle + \langle A p_k, (A W_k^{-1} A^T)^{-1} A W_k^{-1} \nabla B(x_k, \mu) \rangle.$$

On the other hand, $A p_k = 0$, hence

$$\langle p_k, W_k p_k \rangle = -\langle p_k, \nabla B(x_k, \mu) \rangle.$$

Finally,

$$\langle p_k, \nabla B(x_k, \mu) \rangle = -\mu \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle. \qquad \square \tag{23}$$

Remark 2.3. Note that from (23) it follows that $p_k$ is a descent direction for $B(\cdot, \mu)$ at $x_k$ if $\langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle > 0$.

Theorem 2.1. Consider Algorithm 2.1. If the sequence $\{x_k\}$ constructed by Algorithm 2.1 is finite, then the last point satisfies the first-order optimality conditions for problem (9). If the sequence $\{x_k\}$ is infinite and $x_k \to \bar{x} > 0$, $X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k) \to D > 0$, then $\bar{x}$ satisfies the first-order optimality conditions for problem (9).


Proof. The first part of the theorem is obvious. We now assume that $\{x_k\}$ is an infinite sequence. From Algorithm 2.1 and Lemma 2.1 it follows that

$$B(x_k, \mu) - B(x_k + \lambda_k p_k, \mu) \ge -\sigma \lambda_k \langle p_k, \nabla B(x_k, \mu) \rangle = \sigma \lambda_k \mu \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle. \tag{24}$$

Since $X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k) \to D > 0$, for $k$ sufficiently large we have $B(x_{k+1}, \mu) < B(x_k, \mu)$, so if we assume that $B(\cdot, \mu)$ is bounded from below it follows that $\{B(x_k, \mu) - B(x_k + \lambda_k p_k, \mu)\} \to 0$. From (24),

$$\lambda_k \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle \to 0. \tag{25}$$

From (25) and from the hypothesis it follows that either $\lim p_k = 0$, or $\lim \lambda_k = 0$ and $\lim p_k = p \ne 0$.

Assume that $\lim p_k = 0$. Since $p_k$ is a solution of problem (13), there exists $y_k$ such that

$$c - \mu X_k^{-1} e + \mu X_k^{-2} p_k - \mu X_k^{-3} (\operatorname{diag} p_k)\, p_k - A^T y_k = 0, \tag{26}$$

$$A p_k = 0, \tag{27}$$

for which (see Remark 2.2)

$$p_k = -H(p_k)\, \nabla B(x_k, \mu),$$

where

$$H(p_k) = \frac{1}{\mu} X_k^2 (I - X_k^{-1}\operatorname{diag} p_k)^{-1} - \frac{1}{\mu} X_k^2 (I - X_k^{-1}\operatorname{diag} p_k)^{-1} A^T \big[A X_k^2 (I - X_k^{-1}\operatorname{diag} p_k)^{-1} A^T\big]^{-1} A X_k^2 (I - X_k^{-1}\operatorname{diag} p_k)^{-1}$$

and

$$y_k = \big[A X_k^2 (I - X_k^{-1}\operatorname{diag} p_k)^{-1} A^T\big]^{-1} A X_k^2 (I - X_k^{-1}\operatorname{diag} p_k)^{-1} \nabla B(x_k, \mu).$$

Hence, if $k \to +\infty$ and $p_k \to 0$, then

$$y_k \to \bar{y} = (A \bar{X}^2 A^T)^{-1} A \bar{X}^2 \nabla B(\bar{x}, \mu)$$

and by (26)

$$c - \mu \bar{X}^{-1} e = A^T \bar{y}. \tag{28}$$

On the other hand, if $x_k$ is such that $A x_k = b$, then $A x_{k+1} = A(x_k + \lambda_k p_k) = A x_k + \lambda_k A p_k = A x_k = b$. Since $x_1$ is such that $A x_1 = b$, it follows that $A x_k = b$ for every $k$, hence

$$A \bar{x} = b. \tag{29}$$

Finally, from (10),

$$N^T (\mu \bar{X}^{-2})\, N \succeq 0. \tag{30}$$

From Remark 1.1 and from (28)–(30) it follows that $\bar{x}$ satisfies the first-order conditions for problem (9). Assume now that

$$\lim \lambda_k = 0 \quad \text{and} \quad \lim p_k = p \ne 0.$$


It follows, in view of the Armijo rule, that the initial stepsize $s$ will be reduced at least once for $k$ sufficiently large, so that for every $k \ge k_1$:

$$B(x_k, \mu) - B(x_k + \bar{\alpha}_k p_k, \mu) < \sigma \bar{\alpha}_k \mu \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle, \tag{31}$$

where $\bar{\alpha}_k = \lambda_k / \beta$. On the other hand,

$$B(x_k + \bar{\alpha}_k p_k, \mu) = B(x_k, \mu) + \bar{\alpha}_k \langle p_k, \nabla B(x_k, \mu) \rangle + o(\bar{\alpha}_k)$$

with $\lim o(\bar{\alpha}_k)/\bar{\alpha}_k = 0$. From Lemma 2.1 it follows that

$$B(x_k + \bar{\alpha}_k p_k, \mu) = B(x_k, \mu) - \bar{\alpha}_k \mu \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle + o(\bar{\alpha}_k),$$

from which

$$B(x_k, \mu) - B(x_k + \bar{\alpha}_k p_k, \mu) = \bar{\alpha}_k \mu \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle - o(\bar{\alpha}_k). \tag{32}$$

From (31) and (32) we have

$$0 < \bar{\alpha}_k \mu \langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle (\sigma - 1) + o(\bar{\alpha}_k),$$

hence

$$\langle p_k, X_k^{-2}(I - X_k^{-1}\operatorname{diag} p_k)\, p_k \rangle (1 - \sigma) - o(\bar{\alpha}_k)/(\bar{\alpha}_k \mu) < 0.$$

Taking the limit as $k \to +\infty$ gives $(1 - \sigma)\langle p, D p \rangle \le 0$, and this is a contradiction. It then follows that $\lim p_k = 0$. $\square$

Remark 2.4. We note that, in a similar way to the proof of Theorem 2.1, it is straightforward to show that every accumulation point of $\{x_k\}$ satisfies the first-order optimality conditions for problem (9).

Remark 2.5. Note that (see the proof of Theorem 2.1) Algorithm 2.1 constructs a sequence $\{x_k\}$ such that $A x_k = b$ for every $k$.

Remark 2.6. Since $B(x, \mu)$ is defined only for $x > 0$, Algorithm 2.1 requires $x_k > 0$ for every $k$. Note (see Ref. [5]) that if $x_k > 0$ and we want $x_{k+1} > 0$, then strict feasibility is retained if the step taken along $p_k$ is less than $\hat{\lambda}_k = \min\{-x_k^i / p_k^i\}$ over all indices $i$ such that $p_k^i < 0$. An adequate choice is $\lambda_k = 0.9995\, \hat{\lambda}_k$. When this step is inadequate, a standard line-search method may be used.
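The step bound of Remark 2.6 is straightforward to implement; the data in the sketch below are invented for illustration:

```python
import numpy as np

def max_feasible_step(x, p, fraction=0.9995):
    """Damped step-to-boundary of Remark 2.6:
    lambda_hat = min{ -x_i / p_i : p_i < 0 }, returned as fraction * lambda_hat."""
    neg = p < 0
    if not np.any(neg):
        return np.inf                      # no component decreases: any step is safe
    lam_hat = np.min(-x[neg] / p[neg])
    return fraction * lam_hat

x = np.array([1.0, 0.5, 2.0])
p = np.array([0.3, -1.0, -4.0])
lam = max_feasible_step(x, p)
assert np.all(x + lam * p > 0)             # strict feasibility is retained
```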

Remark 2.7. From Remarks 2.5 and 2.6 it follows that if $x_1$ is such that $A x_1 = b$ and $x_1 > 0$, then $A x_k = b$ and $x_k > 0$ for every $k$.

Based on the discussion above, we now present an algorithm for solving problem (1).

Algorithm 2.2. Let $x_1 \in \mathbb{R}^n$ be such that $A x_1 = b$ and $x_1 > 0$. Select $\mu_1 > 0$ and $\sigma \in (0, \tfrac{1}{2})$.

Step 1: Set $i = 1$.
Step 2: If the $\varepsilon$-optimality conditions for problem (1) are satisfied at $x_1$, then stop; else go to Step 3.


Step 3: Set $k = 1$.
Step 4: Compute $p_k$, a solution of problem (13) at $x_k$.
Step 5: If $\langle p_k, \nabla B(x_k, \mu_i) \rangle \ge 0$, then compute $p_k$ as a solution of the following problem:
$$\min_p \; \langle c - \mu_i X_k^{-1} e, p \rangle + \frac{\mu_i}{2} \langle X_k^{-2} p, p \rangle \quad \text{s.t. } Ap = 0,$$
and go to Step 6; else go to Step 6.
Step 6: If $\lambda_k = 0.9995\, \hat{\lambda}_k$ is such that
$$B(x_k + \lambda_k p_k, \mu_i) - B(x_k, \mu_i) \le \sigma \lambda_k \langle p_k, \nabla B(x_k, \mu_i) \rangle,$$
then set $x_{k+1} = x_k + \lambda_k p_k$ and go to Step 7; else compute $\lambda_k$ using a standard line-search method and set $x_{k+1} = x_k + \lambda_k p_k$.
Step 7: If the $\varepsilon$-optimality conditions for problem (9) are satisfied at $x_{k+1}$, then set $x_1 = x_{k+1}$ and go to Step 9; else go to Step 8.
Step 8: Set $k = k + 1$ and go to Step 4.
Step 9: Compute $\mu_{i+1} < \mu_i$ (proper sequences $\{\mu_i\}$ are obtained by setting $\mu_{i+1} = \rho \cdot \mu_i$, where $\rho \in (0, 1)$; proper values for $\rho$ may be 0.1, 0.3).
Step 10: Set $i = i + 1$ and go to Step 2.
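The outer structure of Algorithm 2.2 (Steps 2, 9 and 10) can be sketched as follows. The inner loop (Steps 3–8) is abstracted into a caller-supplied routine; the stand-in below fakes it by reporting a gap of $n\mu$, which is the exact duality gap on the central path of a standard-form LP with $n = 2$. Everything here is illustrative, not the paper's Fortran implementation:

```python
import numpy as np

def outer_loop(solve_subproblem, x1, mu1=1.0, rho=0.1, gap_tol=1e-3, max_outer=30):
    """Skeleton of the outer iteration of Algorithm 2.2: approximately minimize
    B(., mu_i), test for termination, then shrink mu_{i+1} = rho * mu_i (Step 9).
    `solve_subproblem` stands in for Steps 3-8 and returns (x, gap_estimate)."""
    x, mu = x1, mu1
    for _ in range(max_outer):
        x, gap = solve_subproblem(x, mu)
        if gap <= gap_tol:                 # Step 2's epsilon-optimality test
            return x, mu
        mu *= rho                          # Step 9: reduce the barrier parameter
    return x, mu

def fake_subproblem(x, mu):
    # Stand-in: pretend the inner loop returns a central-path point, whose
    # duality gap for a standard-form LP is n * mu (here n = 2).
    return x, 2 * mu

x, mu = outer_loop(fake_subproblem, np.array([0.5, 0.5]))
assert 2 * mu <= 1e-3                      # terminated with a small gap
```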

3. Higher-order derivatives in quadratic programming

We now apply the analysis of Section 2 to the solution of a convex quadratic problem in standard form. So we consider the following problem:

$$\min_x \; \tfrac{1}{2}\langle x, Qx \rangle + \langle c, x \rangle \quad \text{s.t. } Ax = b, \; x \ge 0, \tag{33}$$

where $Q \succeq 0$. For problem (33) the corresponding barrier function is

$$B(x, \mu) = \tfrac{1}{2}\langle x, Qx \rangle + \langle c, x \rangle - \mu \sum_{i=1}^n \log x_i, \tag{34}$$

so we solve the following subproblem:

$$\min_x \; \tfrac{1}{2}\langle x, Qx \rangle + \langle c, x \rangle - \mu \sum_{i=1}^n \log x_i \quad \text{s.t. } Ax = b. \tag{35}$$


From (34), following Section 2, we have

$$B'(x, \mu) = Qx + c - \mu X^{-1} e, \tag{36}$$

$$B''(x, \mu) = Q + \mu X^{-2}, \tag{37}$$

$$\langle B'''(x, \mu)\, p p, p \rangle = -2\mu \sum_{i=1}^n \frac{p_i^3}{x_i^3}. \tag{38}$$

Therefore, if $x$ is such that $Ax = b$ and $x > 0$, we obtain a solution to problem (33) by solving a sequence of problems of the following form:

$$\min_p \; \langle Qx + c - \mu X^{-1} e, p \rangle + \tfrac{1}{2}\langle p, (Q + \mu X^{-2})\, p \rangle - \frac{\mu}{3} \sum_{i=1}^n \frac{p_i^3}{x_i^3} \quad \text{s.t. } Ap = 0. \tag{39}$$

Following Section 2 we have

$$p = -H(p)\, \nabla B(x, \mu),$$

where

$$H(p) = W(x, \mu, p)^{-1} - W(x, \mu, p)^{-1} A^T \big(A W(x, \mu, p)^{-1} A^T\big)^{-1} A W(x, \mu, p)^{-1}$$

and

$$W(x, \mu, p) = Q + \mu X^{-2} - \mu X^{-3}\operatorname{diag} p.$$

We also obtain

$$y = \big(A W(x, \mu, p)^{-1} A^T\big)^{-1} A W(x, \mu, p)^{-1} \nabla B(x, \mu).$$

We now present an algorithm for solving problem (35).

Algorithm 3.1. Let $s > 0$, $0 < \beta < 1$, $0 < \sigma < \tfrac{1}{2}$, and let $x_1 \in \mathbb{R}^n$ be such that $A x_1 = b$, $x_1 > 0$ (proper values for $\beta$ may be 0.1, 0.5, and for $\sigma$, 0.1, $10^{-4}$).

Step 1: Set $k = 1$.
Step 2: If the optimality conditions for problem (35) are satisfied, then stop; else go to Step 3.
Step 3: Compute a solution $p_k$ of problem (39) at $x_k$.
Step 4: Compute $\lambda_k = \beta^{m_k} \cdot s$, where $m_k$ is the first nonnegative integer for which
$$B(x_k, \mu) - B(x_k + \lambda_k p_k, \mu) \ge -\sigma \lambda_k \langle p_k, \nabla B(x_k, \mu) \rangle.$$
Step 5: Set $x_{k+1} = x_k + \lambda_k p_k > 0$.
Step 6: Set $k = k + 1$ and go to Step 2.

Following Section 2 it is straightforward to prove:

Lemma 3.1. Consider Algorithm 3.1. Then

$$\langle p_k, \nabla B(x_k, \mu) \rangle = -\langle p_k, (Q + \mu X_k^{-2} - \mu X_k^{-3}\operatorname{diag} p_k)\, p_k \rangle.$$


Theorem 3.1. Consider Algorithm 3.1. If the sequence $\{x_k\}$ constructed by Algorithm 3.1 is finite, then the last point satisfies the first-order optimality conditions for problem (35). If the sequence $\{x_k\}$ is infinite and $x_k \to \bar{x} > 0$, $Q + \mu X_k^{-2} - \mu X_k^{-3}\operatorname{diag} p_k \to D > 0$, then $\bar{x}$ satisfies the first-order optimality conditions for problem (35).

Remark 3.1. We note that for solving problem (33) we can now construct an algorithm based on the results above and analogous to Algorithm 2.2.

Algorithm 3.2. Let $x_1 \in \mathbb{R}^n$ be such that $A x_1 = b$ and $x_1 > 0$. Select $\mu_1 > 0$ and $\sigma \in (0, \tfrac{1}{2})$.

Step 1: Set $i = 1$.
Step 2: If the $\varepsilon$-optimality conditions for problem (33) are satisfied at $x_1$, then stop; else go to Step 3.
Step 3: Set $k = 1$.
Step 4: Compute $p_k$, a solution of problem (39) at $x_k$.
Step 5: If $\langle p_k, \nabla B(x_k, \mu_i) \rangle \ge 0$, then compute $p_k$ as a solution of the following problem:
$$\min_p \; \langle Q x_k + c - \mu_i X_k^{-1} e, p \rangle + \tfrac{1}{2}\langle p, (Q + \mu_i X_k^{-2})\, p \rangle \quad \text{s.t. } Ap = 0,$$
and go to Step 6; else go to Step 6.
Step 6: If $\lambda_k = 0.9995\, \hat{\lambda}_k$ is such that
$$B(x_k + \lambda_k p_k, \mu_i) - B(x_k, \mu_i) \le \sigma \lambda_k \langle p_k, \nabla B(x_k, \mu_i) \rangle,$$
then set $x_{k+1} = x_k + \lambda_k p_k$ and go to Step 7; else compute $\lambda_k$ using a standard line-search method and set $x_{k+1} = x_k + \lambda_k p_k$.
Step 7: If the $\varepsilon$-optimality conditions for problem (35) are satisfied at $x_{k+1}$, then set $x_1 = x_{k+1}$ and go to Step 9; else go to Step 8.
Step 8: Set $k = k + 1$ and go to Step 4.
Step 9: Compute $\mu_{i+1} < \mu_i$ (proper sequences $\{\mu_i\}$ are obtained by setting $\mu_{i+1} = \rho \cdot \mu_i$, where $\rho \in (0, 1)$; proper values for $\rho$ may be 0.1, 0.3).
Step 10: Set $i = i + 1$ and go to Step 2.

4. Some numerical results for linear programming

We have obtained our numerical results for linear programming problems by making use of Algorithm 2.2.

Remark 4.1. Note that we have obtained a solution to problem (13) by solving system (15)–(16), making use of a global method for systems of nonlinear equations (see Section 6.5 of Ref. [6]).

Remark 4.2. Note that the step length in Step 6 of Algorithm 2.2 is obtained using a method introduced by Powell [7]. In this method two parameters are considered, $\delta \in (0, 1)$ and $\sigma$ sufficiently small.


Remark 4.3. Note that in our program, written in Fortran, we reduce the penalty parameter and continue with the outer iteration if, after a fixed number of iterations, either the $\varepsilon$-optimality conditions for problem (9) are not satisfied, or the Powell method fails to verify the Armijo condition.

For our numerical results we have considered the following problems:

Problem P1 (Amaya [8]).

$$\min \; \langle c, x \rangle + c_{n-1} x_{n-1}$$
$$\text{s.t. } Ax + (b - A x_0)\, x_{n-1} = b,$$
$$\langle A^T y_0 + s_0 - c, x \rangle + x_n = b_m,$$
$$x, x_{n-1}, x_n \ge 0.$$

In problem P1 all generated numbers are uniformly distributed random numbers. The generation procedure is the following: for given integers $n, m, p, d$ such that

$$m - 1 < n - 2, \qquad 0 \le p \le m - 1, \qquad 0 \le d \le (n - 2) - (m - 1),$$

we generate all the entries of the matrix $A \in \mathbb{R}^{(m-1) \times (n-2)}$. Then we generate the first $m - 1 - p$ components of a nonnegative vector $\bar{x} \in \mathbb{R}^{n-2}$; the last $(n - 2) - (m - 1 - p)$ components are fixed at zero. Similarly, we generate $\bar{s} \in \mathbb{R}^{n-2}$ having zeros in the first $m - 1 + d$ components and positive numbers in the last $n - 2 - (m - 1 + d)$. Naturally $\langle \bar{x}, \bar{s} \rangle = 0$. Finally, we generate a vector $\bar{y} \in \mathbb{R}^{m-1}$. Then we define $b = A\bar{x}$ and $c = A^T \bar{y} + \bar{s}$; furthermore $x_0 > 0$, $s_0 > 0$, and $y_0$ are arbitrary points in $\mathbb{R}^{n-2}$ and $\mathbb{R}^{m-1}$, respectively. We also choose $c_{n-1}$ and $b_m$ such that

$$c_{n-1} > \langle b - A x_0, y_0 \rangle, \qquad b_m > \langle A^T y_0 + s_0 - c, x_0 \rangle, \tag{40}$$

so $(x_0, x_0^{n-1}, x_0^n)$ is a feasible interior solution of problem P1, where

$$x_0^{n-1} = 1, \qquad x_0^n = b_m - \langle A^T y_0 + s_0 - c, x_0 \rangle.$$

The point $(x_0, x_0^{n-1}, x_0^n)$ can be used as a starting point. In our numerical experiments we use $y_0 = 0$ and $x_0 = s_0 = e$, where $e$ is a vector of all ones. In addition, we assume that

$$c_{n-1} > \langle b - A x_0, \bar{y} \rangle, \qquad b_m > \langle A^T y_0 + s_0 - c, \bar{x} \rangle. \tag{41}$$

From a theorem of Kojima et al. [8] it follows that under conditions (40) and (41) a feasible solution $(\hat{x}, \hat{x}_{n-1}, \hat{x}_n)$ of problem P1 is optimal if and only if $\hat{x}$ is optimal for problem (1) and $\hat{x}_{n-1} = 0$. Finally, we set

$$c_{n-1} = \max\{\langle b - A x_0, y_0 \rangle, \langle b - A x_0, \bar{y} \rangle\} + 10,$$
$$b_m = \max\{\langle A^T y_0 + s_0 - c, x_0 \rangle, \langle A^T y_0 + s_0 - c, \bar{x} \rangle\} + 10,$$

and apply our Algorithm 2.2 to problem P1. For further details see [8]. For our computational results the pair $(p, d)$ takes the values $(0,0), (0,8), (0,24), (12,0), (12,8), (12,24), (36,0), (36,8), (36,24)$.


We set $m = 60$, $n = 100$. Furthermore $a_{ij} \in (-1, 1]$, $\bar{x}_i \in (0, 1]$, $\bar{s}_i \in (0, 1]$, $\bar{y}_i \in (-1, 1]$, so we obtain the first nine problems. We have considered a modification of these problems, obtaining problems 10–18 by generating $a_{ij} \in (-2, 2]$, $\bar{x}_i \in (0, 1]$, $\bar{s}_i \in (0, 1]$, $\bar{y}_i \in (-2, 2]$. We have also constructed 18 additional problems. In these problems the pair $(p, d)$ takes the values $(0,0), (0,16), (0,48), (4,0), (4,16), (4,48), (12,0), (12,16), (12,48)$, and we set $m = 20$, $n = 100$. In problems 19–27, $a_{ij} \in (-1, 1]$, $\bar{x}_i \in (0, 1]$, $\bar{s}_i \in (0, 1]$, $\bar{y}_i \in (-1, 1]$, while in problems 28–36, $a_{ij} \in (-3, 3]$, $\bar{x}_i \in (0, 1]$, $\bar{s}_i \in (0, 1]$, $\bar{y}_i \in (-3, 3]$.
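The P1 data generation can be sketched as follows; the function name and the choice of sampling the positive entries uniformly on $(10^{-6}, 1]$ are our illustrative assumptions within the paper's scheme:

```python
import numpy as np

def generate_pair(m, n, p=0, d=0, seed=0):
    """Degenerate-LP data in the style of Problem P1 (Amaya):
    A is (m-1) x (n-2); x_bar has its first m-1-p components positive,
    s_bar its last (n-2)-(m-1+d) components positive, so <x_bar, s_bar> = 0.
    Then b = A x_bar and c = A^T y_bar + s_bar."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, size=(m - 1, n - 2))
    x_bar = np.zeros(n - 2)
    x_bar[: m - 1 - p] = rng.uniform(1e-6, 1.0, m - 1 - p)
    s_bar = np.zeros(n - 2)
    k = m - 1 + d
    s_bar[k:] = rng.uniform(1e-6, 1.0, (n - 2) - k)
    y_bar = rng.uniform(-1.0, 1.0, m - 1)
    b = A @ x_bar
    c = A.T @ y_bar + s_bar
    return A, b, c, x_bar, s_bar, y_bar

A, b, c, x_bar, s_bar, y_bar = generate_pair(m=6, n=12, p=2, d=1)
assert np.isclose(x_bar @ s_bar, 0.0)      # complementarity by construction
assert np.allclose(A @ x_bar, b)
```

By construction $\langle c, \bar{x} \rangle = \langle b, \bar{y} \rangle$, i.e. the generated primal-dual pair has zero duality gap, so the optimal value is known in advance.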

Problem P2.

$$\min \; \langle c, x \rangle \quad \text{s.t. } Ax \le b, \; x \ge 0,$$

where $x \in \mathbb{R}^{2m}$, $c \in \mathbb{R}^{2m}$, $A \in \mathbb{R}^{m \times 2m}$, $b \in \mathbb{R}^m$. Moreover, $c_i = -1$ for every $i$ and $b_i = 10^4$ for every $i$. The elements $a_{ij}$ of $A$ are uniformly distributed random numbers with $a_{ij} \in (1, 10^3]$, while $m = 30$. We have also considered a modification of problem P2. In this modified problem the elements $a_{ij}$ are random numbers with $a_{ij} \in (1, 10^3]$, the elements $b_i$ of $b$ are random numbers with $b_i \in (1, 10^4]$ for every $i$, and finally $c_i = -3 \cdot i$. In this problem we have used as starting point $x_0$ such that $x_0^i = 10^{-10}$, $i = 1, \ldots, 2m$, and $x_0^i = b_{i-2m}$, $i = 2m + 1, \ldots, 3m$, so that $\|b - A x_0\| \le 10^{-4}$. So for our numerical results we have considered a set of 38 test problems.

Remark 4.4. We note that our program terminates if the duality gap is less than or equal to $10^{-3}$.

For our numerical results all the computations were done using double-precision arithmetic. For each problem we have considered two sequences $\{\mu_i\} \to 0$ such that $\mu_{i+1} = \rho \mu_i$. Furthermore, we have considered different choices of the parameters $\delta, \sigma$ (see Remark 4.2), so, for each problem, we have executed eight experiments, for a total of 304 experiments. In particular, we have set $\mu_0 = 1$, $\rho = 0.1$ and $\rho = 0.3$; $\delta$ and $\sigma$ assume the values 0.1, 0.5 and 0.1, $10^{-4}$, respectively. Our method (see Algorithm 2.2) has been compared with the standard method obtained from Algorithm 2.2 where Step 4 is dropped and Step 5 is replaced by

Step 5': Compute $p_k$ as a solution of the following problem:

$$\min_p \; \langle c - \mu_i X_k^{-1} e, p \rangle + \frac{\mu_i}{2} \langle X_k^{-2} p, p \rangle \quad \text{s.t. } Ap = 0.$$

Remark 4.5. Note that if our global method for solving system (15)–(16) fails, then we use as direction $p_k$ the solution of Step 5'.

Remark 4.6. Note that our computational experience shows that the method considered in this paper performs quite well and seems to be much more reliable and robust than the standard method. Naturally, the standard method, when it converges, is preferable to our method; this is due to the fact that it is more difficult to obtain the solution of problem (13) than the solution of the problem of Step 5'.

Table 1

                  Standard method          Our method (Algorithm 2.2)
Problems          n     m     a-b          n     m     a-b
1-9               29    43    6-11         5     67    6-12
10-18             62    10    6-12         14    58    6-11
19-27             48    24    6-12         0     72    6-12
28-36             71    1     6            14    58    6-12
37                0     8     6-11         0     8     6-11
38                8     0     -            3     5     6-13*

*From Table 1 it follows that for the standard method total n = 218 and total m = 86; for our method total n = 36 and total m = 268.

We now present our numerical results. In Table 1, the value of $n$ indicates the number of times the method fails to terminate in a reasonable number of iterations for the indicated problems, the value of $m$ indicates the number of times the method converges for the indicated problems, and $a$-$b$ indicates that the number of iterations of the method, for the indicated problems, is between $a$ and $b$.

Remark 4.7. We note that when the methods (our method and the standard method) fail, numerical difficulties are encountered. The Powell method (see Remark 4.2), used to obtain the step length in Step 6 of Algorithm 2.2, fails repeatedly (see Remark 4.3), and the algorithm is unable to reach the solution with the desired accuracy. This is usually due to the fact that the minimization of $B(\cdot, \mu_i)$ becomes increasingly ill-conditioned as $\mu_i \to 0$.

5. Conclusions

In this paper we have presented an algorithm for linear and quadratic programming which uses higher derivatives of the logarithmic barrier function. We want to point out that by making use of a higher-order approximation (beyond the usual second order) we may obtain faster convergence and a more robust algorithm than a method obtained using a second-order approximation. To this end we have presented a set of 38 test problems, and our numerical experience shows that the method introduced performs quite well and seems to be more reliable and robust than the standard second-order method. Thus, to improve the overall behaviour of an optimization method, it may be useful to consider a higher-order approximation.

References

[1] Carpenter TJ, Lustig IJ, Mulvey JM, Shanno DF. Higher-order predictor-corrector interior point methods with application to quadratic objectives. SIAM Journal on Optimization 1993;3:696–725.

[2] Gondzio J. Multiple centrality corrections in a primal-dual method for linear programming. Computational Optimization and Applications 1996;6:137–56.

[3] Lustig IJ, Marsten RE, Shanno DF. Computational experience with a primal-dual interior point method for linear programming. Linear Algebra and its Applications 1991;152:191–222.

[4] Mehrotra S. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization 1992;2:575–601.

[5] Wright MH. Interior methods for constrained optimization. In: Acta Numerica. New York: Cambridge University Press, 1992. p. 341–407.

[6] Dennis JE, Schnabel RB. Numerical methods for unconstrained optimization and nonlinear equations. Englewood Cliffs, NJ: Prentice-Hall, 1983.

[7] Powell MJD. A fast algorithm for nonlinearly constrained optimization calculations. In: Watson GA, editor. Lecture Notes in Mathematics, Vol. 630. Berlin: Springer-Verlag, 1978. p. 144–57.

[8] Amaya J. Numerical experiments with the symmetric affine scaling algorithm on degenerate linear programming problems. Optimization 1993;27:51–62.

[9] Den Hertog D. Interior point approach to linear, quadratic and convex programming. London: Kluwer Academic Publishers, 1994.

[10] Lagarias JC, Todd MJ, editors. Mathematical developments arising from linear programming. Providence, RI: American Mathematical Society, 1988.

[11] Monteiro RDC, Adler I. Interior path following primal-dual algorithms. Part I: Linear programming. Mathematical Programming 1989;44:27–41.

[12] Monteiro RDC, Adler I. Interior path following primal-dual algorithms. Part II: Convex quadratic programming. Mathematical Programming 1989;44:43–66.

[13] Resende MGC, Pardalos PM. Interior point algorithms for network optimization problems. In: Beasley J, editor. Advances in linear and integer programming. Oxford: Oxford University Press, 1996. p. 147–87.

[14] Resende MGC, Pardalos PM. Interior point methods for global optimization problems. In: Terlaky T, editor. Interior point methods of mathematical programming. Dordrecht: Kluwer Academic Publishers, 1996. p. 467–500.

[15] Roos C, Vial JPh, editors. Interior point methods for linear programming: theory and practice. Mathematical Programming 1991;52.

G. Corradi is full professor at the University of Rome "La Sapienza". He obtained his degree in mathematics from the University of Rome "La Sapienza". His research interests are in unconstrained and constrained optimization and its applications in economics and finance.
