


Numerische Mathematik 1, 253--268 (1959)

Newton's Method for Convex Programming and Tchebycheff Approximation

By

E. W. CHENEY and A. A. GOLDSTEIN

§ 1. Introduction. The rationale of Newton's method is exploited here in order to develop effective algorithms for solving the following general problem: given a convex continuous function F defined on a closed convex subset K of E_n, obtain (if such exists) a point x of K such that F(x) ≤ F(y) for all y in K. The manifestation of Newton's method occurs when, in the course of computation, convex hypersurfaces are replaced by their support planes.

The problems of infinite systems of linear inequalities and of infinite linear programming are subsumed by the above problem, as are certain Tchebycheff approximation problems for continuous functions on a metric compactum. In regard to the latter, special attention is devoted in §§ 27--30 to the feasibility of replacing a continuum by a finite subset in such a way that a discrete approximation becomes an accurate substitute for the continuous approximation.

It is to be pointed out that the basic idea of the algorithms below is not new, having been first used by REMEZ [1, 2, 3, 4]. Other authors who have put it to use in one form or another are NOVODVORSKII-PINSKER [5], BEALE [6, 7], BRATTON [8], STIEFEL [9, 10], WOLFE [11], STONE [12], and KELLEY [13].

The general problem described above is put aside until § 21 in order that the main ideas may be developed in a simpler setting. Consider, then, a matrix having a finite number, n + 1, of columns and at least n + 1 rows (the number of rows may be non-denumerable).

A point x = (x_1, ..., x_n) ∈ E_n is sought which will minimize the function

F(x) = sup_i { Σ_{j=1}^n A^i_j x_j − b_i }.

The linear Tchebycheff approximation problem is already included by this problem. Note that any continuous convex function F defined throughout E_n may be represented as above via its support planes. These planes assume the form

R^i(x) = Σ_{j=1}^n A^i_j x_j − b_i.
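As a concrete finite illustration (not from the paper; the matrix and data below are invented), the objective F is simply the upper envelope of the residual functions and can be evaluated directly:

```python
import numpy as np

def F(A, b, x):
    """F(x) = max_i ( sum_j A[i, j] x[j] - b[i] ), the upper envelope
    of the residual functions R^i(x)."""
    return float(np.max(A @ x - b))

# three residual planes in E_2
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.0, 0.0])

print(F(A, b, np.array([0.0, 0.0])))   # 0.0
```

Here the origin is in the convex hull of the rows, so F is bounded below, in keeping with § 5 later on.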



§ 2. Nomenclature. Consider three matrices: A = (A^i_j), the coefficient matrix of § 1; B, the matrix obtained from A by adjoining a column of 1's; and C, the matrix obtained from B by adjoining the column of the b_i.

The Haar condition on A is the requirement that every n × n submatrix of A be non-singular. The solvency condition on A is the requirement that every (n+1) × (n+1) submatrix of B be non-singular. The normality condition on C is the requirement that every (n+2) × (n+2) submatrix of C be non-singular. The functions R^i defined above are known as residual functions, and the hyperplanes in E_{n+1} whose equations are

z = R^i(x)

are residual planes. An edge is the intersection of any n residual planes. When the Haar condition is fulfilled, each edge is a 1-dimensional manifold in E_{n+1} along which z is not constant. When the solvency condition is met, each set of n + 1 residual planes has in common a single point called a vertex. Assuming the normality condition, no more than n + 1 residual planes can pass through a vertex.

The solvency condition is equivalent to the following: given a point x ∈ E_n and a set of rows {A^{i_0}, ..., A^{i_n}} from A, there corresponds a unique vector u = (u_0, ..., u_n) satisfying Σ u_j = 1 and x = Σ u_j A^{i_j}. Stated otherwise, each set of n + 1 rows from A has an n-simplex for its convex hull.

The notation o(I) denotes the number of elements in the set I. The notation H{A^i : i ∈ I} denotes the convex hull of the set of points {A^i : i ∈ I}; i.e., the set of all linear combinations Σ u_i A^i where u_i ≥ 0, o{i : u_i > 0} < ∞, and Σ u_i = 1. The notation [u, v] will be used for Σ u_i v_i. The notation C{A^i : i ∈ I} denotes the conical hull of the set of points {A^i : i ∈ I}; i.e., the set of all linear combinations Σ u_i A^i where u_i ≥ 0 and o{i : u_i > 0} < ∞. A half-space in E_n is a set of the form {x : [a, x] ≤ k}. A polytope in E_n is the intersection of a finite number of half-spaces.
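For a finite matrix the Haar condition can be checked by brute force over all row subsets; a minimal sketch (numpy assumed, the tolerance chosen arbitrarily):

```python
import numpy as np
from itertools import combinations

def satisfies_haar(A, tol=1e-10):
    """True if every n x n submatrix of the m x n matrix A is
    non-singular (the Haar condition of section 2)."""
    m, n = A.shape
    return all(abs(np.linalg.det(A[list(rows), :])) > tol
               for rows in combinations(range(m), n))

# rows (1, t_i) with distinct t_i form a Haar system
print(satisfies_haar(np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])))  # True
```

The cost is combinatorial in m, so this is only a didactic check, not a practical test for large systems.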

§ 3. Lemmas. A. Let K be a closed convex set in Hilbert space and u a point not in K. There exists a unique v ∈ K closest to u. Furthermore, for x ∈ K,

[x, u − v] ≤ [v, u − v] < [u, u − v].

This theorem is well-known. See for example [14].

B. Let Q denote a subset of E_n and x a point of H(Q). There exists Q' ⊂ Q such that o(Q') ≤ n + 1 and x ∈ H(Q'). If Q is connected then there exists Q' ⊂ Q such that o(Q') ≤ n and x ∈ H(Q'). See [15, p. 9] and the references given there.

C. The distance between two polytopes in E_n is attained. Thus, if bounded below, the function F(x) = max_{1≤i≤m} Σ_{j=1}^n A^i_j x_j − b_i attains its infimum in E_n. See [14].

D. Let a matrix B result from an n × n non-singular matrix A by replacement of its r-th row by a vector b. Let the columns of A^{-1} be designated by C_1, ..., C_n. If λ ≡ [b, C_r] ≠ 0, then B is non-singular and the columns D_1, ..., D_n of its inverse are given by D_r = λ^{-1} C_r and D_i = C_i − λ^{-1} [b, C_i] C_r (i ≠ r).
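Lemma D is the computational heart of the exchange steps later on: when one row of the current basis is swapped, the inverse can be updated in O(n²) instead of being recomputed. A sketch (numpy assumed; the zero test on λ is an arbitrary choice):

```python
import numpy as np

def replace_row_inverse(A_inv, b, r):
    """Inverse of the matrix obtained from A by replacing row r with b,
    computed from A_inv alone (section 3D): with lam = [b, C_r],
    D_r = C_r / lam and D_i = C_i - lam^{-1} [b, C_i] C_r for i != r."""
    C_r = A_inv[:, r].copy()
    lam = b @ C_r
    if abs(lam) < 1e-12:
        raise ValueError("[b, C_r] vanishes: the replacement is singular")
    D = A_inv - np.outer(C_r, (b @ A_inv) / lam)
    D[:, r] = C_r / lam
    return D

A = np.array([[2.0, 1.0], [0.0, 4.0]])
b = np.array([1.0, 1.0])
B = A.copy()
B[0] = b                                   # replace row 0 of A by b
D = replace_row_inverse(np.linalg.inv(A), b, 0)
print(np.allclose(D @ B, np.eye(2)))       # True
```

The update is the row-replacement special case of the Sherman-Morrison rank-one formula.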


E. The convex hull of a compactum in E_n is itself compact.

Proof. Let Ω be a compactum in E_n, K its convex hull. By § 3B, to each x ∈ K there corresponds a representation x = Σ_{i=0}^n t_i(x) A^i(x) with t_i(x) ≥ 0, Σ t_i(x) = 1, and A^i(x) ∈ Ω. If {x_j} is a sequence in K then there exists, by the compactness of Ω and of Q = {(t_0, ..., t_n) : t_i ≥ 0, Σ t_i = 1}, a subsequence {x_{j_k}} such that lim_{k→∞} A^i(x_{j_k}) = A^i exists in Ω for each i and lim_{k→∞} (t_0(x_{j_k}), ..., t_n(x_{j_k})) = (t_0, ..., t_n) exists in Q. Clearly then lim_{k→∞} x_{j_k} = Σ t_i A^i ∈ K, proving the compactness of K.

In a general Banach space, the convex hull of a compactum is totally bounded, by a theorem of MAZUR, but not necessarily closed.

§ 4. Lemma. Let E denote an arbitrary linear space, Ω a set of linear functionals on E, and K the convex hull of Ω. The system of linear inequalities

(S) l(x) < 0 (l ∈ Ω)

possesses a finite inconsistent subsystem if and only if 0 ∈ K.

Proof. (i) Assume 0 ∈ K. Then an equation of the form 0 = Σ_{i=1}^m c_i l_i holds where l_i ∈ Ω, c_i > 0 and Σ c_i = 1. Thus for any x ∈ E, 0 = Σ c_i l_i(x), showing that the system

(S') l_i(x) < 0 (1 ≤ i ≤ m)

is inconsistent.

(ii) Assume that system (S) has system (S') as an inconsistent subsystem. Denote by N the set of solutions of

l_i(z) = 0 (1 ≤ i ≤ m)

and select x_1, ..., x_n ∈ E (n ≤ m) so that N ⊕ x_1 ⊕ ... ⊕ x_n = E. Since every x ∈ E has a representation x = x_0 + Σ_{j=1}^n c_j x_j with x_0 ∈ N, l_i(x) = Σ_j c_j l_i(x_j). The system (S') may therefore be written

(S'') Σ_{j=1}^n A^i_j c_j < 0 (1 ≤ i ≤ m)

where A^i_j = l_i(x_j), and this system too is inconsistent, in E_n. We shall show that 0 ∈ H{A^1, ..., A^m} ≡ K', where A^i = (A^i_1, ..., A^i_n). Indeed, if this is not the case, then by § 3A there exists a half-space {x : [x, c] < 0} in E_n containing K'. This would make c a solution of (S''). Thus 0 ∈ K', and an equation of the form 0 = Σ_{i=1}^m e_i A^i must obtain with e_i ≥ 0 and Σ e_i = 1. From this we obtain easily Σ e_i l_i = 0, which completes the proof. For infinite systems this lemma generalizes corollary 5 of [16].

§ 5. Lemma. Let Ω denote a compact subset of E_n and b a continuous real-valued function on Ω. For x ∈ E_n define f(x) = sup_{A∈Ω} [A, x] and F(x) = sup_{A∈Ω} {[A, x] − b(A)}. Consider also two systems of linear inequalities

(s) [A, z] < 0 (A ∈ Ω), (S) [A, z] < b(A) + M (A ∈ Ω).


The following statements are equivalent:

(i) F is bounded below.
(ii) f is bounded below.
(iii) System (s) is inconsistent.
(iv) 0 ∈ H(Ω).
(v) System (s) has an inconsistent subsystem comprising at most n + 1 inequalities.
(vi) For some M, system (S) has an inconsistent subsystem comprising at most n + 1 inequalities.
(vii) For some M, system (S) is inconsistent.

Proof. (i)→(ii).

F(x) = sup_A {[A, x] − b(A)} ≤ sup_A [A, x] − inf_A b(A) = f(x) − inf_A b(A).

(ii)→(iii). If (s) is consistent and is satisfied by z^0, then q ≡ sup_A [A, z^0] < 0, for the supremum is necessarily attained, Ω being closed and bounded. But now f(t z^0) = t q → −∞ as t → +∞.

(iii)→(iv). H(Ω) is compact by § 3E. If 0 ∉ H(Ω), then by § 3A there is a half-space {x : [x, z] < 0} containing H(Ω). Thus z satisfies system (s).

(iv)→(v). If 0 ∈ H(Ω), then by § 3B there exist A^0, ..., A^n such that 0 ∈ H{A^0, ..., A^n}. By § 4, then, the system

(s') [A^i, z] < 0 (0 ≤ i ≤ n)

is inconsistent.

(v)→(vi). Assume system (s') inconsistent. Put M = −max_{0≤i≤n} b(A^i); then for all z, max_{0≤i≤n} {[A^i, z] − b(A^i)} ≥ max_{0≤i≤n} [A^i, z] − max_{0≤i≤n} b(A^i) ≥ M. Thus the system

(S') [A^i, z] < b(A^i) + M (0 ≤ i ≤ n)

is inconsistent.

(vi)-->(vii). Trivial.

(vii)→(i). If (S) is inconsistent, then clearly F(x) ≥ M for all x.

§ 6. Lemma. Let S_1 ⊃ S_2 ⊃ ... be a nested sequence of compact sets in E_n. Then H(∩ S_i) = ∩ H(S_i).

Proof. Clearly ∩ S_i ⊂ ∩ H(S_i); hence, by the convexity of ∩ H(S_i), H(∩ S_i) ⊂ ∩ H(S_i). For the converse, let x = (x_1, ..., x_n) be a point of ∩ H(S_i). Then for each i there is, by § 3B, a representation

x = Σ_{j=0}^n t_j^{(i)} y_j^{(i)}

where the (n+1)-tuple t^{(i)} = (t_0^{(i)}, ..., t_n^{(i)}) lies in the set

Q = {(t_0, ..., t_n) : t_j ≥ 0, Σ t_j = 1}

and where y_j^{(i)} ∈ S_i. By the compactness of Q and of S_1 there exists a sequence of integers i_1, i_2, ... such that lim_{k→∞} t^{(i_k)} = t = (t_0, ..., t_n) exists in Q and lim_{k→∞} y_j^{(i_k)} = y_j exists in S_1. For each k, all but a finite number of y_j^{(i_1)}, y_j^{(i_2)}, ... lie in S_k because the S_k's are nested. Since each S_k is closed, y_j ∈ S_k; thus y_j ∈ ∩ S_i. Hence

x = Σ_{j=0}^n t_j y_j,

showing x ∈ H(∩ S_i). Q.E.D.

§ 7. Theorem. Let Ω be a compact subset of E_n and b a continuous real-valued function on Ω. Define R(A, x) = [A, x] − b(A) and F(x) = sup_{A∈Ω} R(A, x). If there exists an x^0 such that F(x^0) ≤ F(x) for all x, then there exists a set {A^0, ..., A^k} ⊂ Ω with k ≤ n such that

inf_x max_{0≤i≤k} R(A^i, x) = F(x^0) ≡ M = R(A^i, x^0) (0 ≤ i ≤ k).

Proof. Define for each i = 1, 2, ... the set Ω_i = {A ∈ Ω : R(A, x^0) ≥ F(x^0) − 1/i}. Clearly the sets Ω_i are compact and nested. We shall prove that for each i the following system is inconsistent:

(1) [A, z] < 0 (A ∈ Ω_i).

Indeed, if z^0 satisfies (1), then define q = sup_{A∈Ω_i} [A, z^0]. Since Ω_i is compact and [A, z^0] is a continuous function of A, there is an A^0 ∈ Ω_i for which [A^0, z^0] = q. Thus q < 0. Let c = sup{||A|| : A ∈ Ω} · ||z^0||. Since R(A, x^0 + z^0/(2ic)) = R(A, x^0) + [A, z^0]/(2ic), it is clear that for A ∈ Ω_i we have R(A, x^0 + z^0/(2ic)) ≤ F(x^0) + q/(2ic) < F(x^0), while for A ∉ Ω_i,

R(A, x^0 + z^0/(2ic)) < F(x^0) − 1/i + (1/(2ic)) ||A|| · ||z^0|| ≤ F(x^0) − 1/(2i).

Thus x^0 + z^0/(2ic) would improve upon the minimum, a contradiction; system (1) is inconsistent. By § 5, 0 ∈ H(Ω_i). By § 6, 0 ∈ H(∩ Ω_i). Furthermore ∩ Ω_i = {A ∈ Ω : R(A, x^0) = F(x^0)}. Now by § 5 there exist A^0, ..., A^k in ∩ Ω_i, with k ≤ n, such that the system

[A^i, z] < 0 (0 ≤ i ≤ k)

is inconsistent, whence the theorem. A related result may be found in [17].

§ 8. Remark. Let Ω denote a closed, bounded, connected subset of E_n and

b a continuous function on Ω. If inf_x sup_{A∈Ω} Σ_{j=1}^n A_j x_j − b(A) > −∞, then Ω contains vanishing n × n determinants; in other words, the Haar condition is violated.

Proof. By § 5, 0 ∈ H(Ω). Since Ω is connected, by § 3B there exist points A^1, ..., A^n in Ω and non-negative coefficients c_1, ..., c_n fulfilling Σ c_i = 1 and Σ c_i A^i = 0. This latter equation exhibits a linear dependence among A^1, ..., A^n.

§ 9. Lemma. If A is an (n+1) × n matrix of rank n and if the function f(x) = max_i [A^i, x] is bounded below, then the solvency condition follows.

Proof. If the solvency condition fails, there exists a vector (u_0, ..., u_n) ≠ 0 such that

u_0 + [A^i, u] = 0 (0 ≤ i ≤ n),

thus [A^i, u] = −u_0, where u = (u_1, ..., u_n). If u_0 = 0, the rank hypothesis is violated. If u_0 ≠ 0, then either +u or −u is a solution z of the inequalities [A^i, z] < 0. Thus f(z) < 0, whence lim_{t→+∞} f(tz) = −∞.

§ 10. Lemma. Assume that the set {A^i : 0 ≤ i ≤ n+1} satisfies the Haar condition and that 0 ∈ H{A^i : 0 ≤ i ≤ n}. Then there exist unique indices i_0 and i_1 among {0, ..., n} such that 0 ∈ H{A^i : 0 ≤ i ≤ n+1, i ≠ i_0} and −A^{n+1} ∈ C{A^i : 0 ≤ i ≤ n, i ≠ i_1}. Furthermore, i_0 = i_1. See Satz 5, p. 4, of [10].

Proof. Since 0 ∈ H{A^i : 0 ≤ i ≤ n}, and since {A^i : 0 ≤ i ≤ n} satisfies the solvency condition, there exist unique coefficients u_0, ..., u_n such that u_i ≥ 0, Σ u_i = 1, and Σ u_i A^i = 0. Because of the Haar condition, u_i > 0. By § 9 there exist unique coefficients v_0, ..., v_n such that Σ v_i = 1 and Σ v_i A^i = A^{n+1}. For any j ≤ n we have

0 = A^{n+1} − Σ v_i A^i = A^{n+1} − v_j A^j − Σ' v_i A^i = A^{n+1} + Σ' ((v_j u_i / u_j) − v_i) A^i,

throughout which Σ' abbreviates Σ_{i=0, i≠j}^n. If v_j u_i / u_j ≥ v_i for all i, then this equation will furnish (after normalization) a barycentric representation of 0 in terms of {A^i : 0 ≤ i ≤ n+1, i ≠ j}. This will indeed be the case if j is chosen to fulfill v_j / u_j ≥ v_i / u_i for all i. If this value of j is denoted by i_0, we have in fact v_{i_0} / u_{i_0} > v_i / u_i for all i ≠ i_0, due again to the Haar condition. The uniqueness of i_0 may be seen at once from the fact that any other choice will lead to a negative coefficient in the above representation of zero; for each j this representation is unique up to scalar multiplication, due to the Haar condition. Similar arguments apply to i_1.

§ 11. Remark. Assume that the set {A^1, ..., A^{n+2}} ⊂ E_n satisfies the Haar condition. The system of inequalities

(1) [A^i, z] < 0 (1 ≤ i ≤ n+2)

is inconsistent if and only if it possesses precisely two inconsistent proper subsystems.

Proof. The "if" part being trivial, we proceed at once to the "only if" part. If the system (1) is inconsistent, then by § 4, 0 ∈ H{A^i : 1 ≤ i ≤ n+2}, and thus, by § 3B, there exists an index i_0 such that 0 ∈ H{A^i : 1 ≤ i ≤ n+2, i ≠ i_0}. By §§ 5 and 9, the solvency condition holds for the set {A^i : 1 ≤ i ≤ n+2, i ≠ i_0}. By § 10, there exists a unique index i_1 ≠ i_0 such that

0 ∈ H{A^i : 1 ≤ i ≤ n+2, i ≠ i_1}.

By § 4, the inconsistent subsystems are obtained.

§ 12. Remark. Consider two related matrices: A = (A^i_j), with columns indexed 1, ..., n, and A*, obtained from A by adjoining a column (A^1_0, A^2_0, ...)^T on the left, in which the number of rows is finite or denumerable. If every (n+1) × n submatrix of A is of full rank, then the column (A^1_0, A^2_0, ...)^T may be chosen so that in A* every (n+1) × (n+1) submatrix is of full rank.

Proof. Set A^i_0 = t, and expand a typical (n+1) × (n+1) determinant of A* by the elements of its first column, obtaining thereby a polynomial in t whose coefficients are n × n determinants from A, not all zero by hypothesis. The set S of all the roots of all the polynomials obtained in this way is an at most denumerable set. One may therefore select any t ∉ S to obtain the desired conclusion.

§ 13. Theorem. Assume that the function f(x) = max_i [A^i, x] is bounded below, that m ≥ n+2, and that the Haar condition prevails. Then there exist at least m − n sets I ⊂ {1, ..., m} such that o(I) = n + 1 and 0 ∈ H{A^i : i ∈ I}. This bound is best possible.

Proof. By § 5 and the boundedness of f there exists a set I_0 ⊂ {1, ..., m} such that o(I_0) = n + 1 and 0 ∈ H{A^i : i ∈ I_0}. By § 5, the function max_{i∈I_0} [A^i, x] is bounded below. By the Haar condition, the set {A^i : i ∈ I_0} has rank n. By § 9, then, the solvency condition is satisfied by this set. By § 10, to each index j ∉ I_0 there corresponds uniquely an index i_j ∈ I_0 in such a way that 0 ∈ H{A^i : i ∈ I_0 + j − i_j}. Since j may be selected in m − n − 1 ways, these sets together with I_0 number m − n, and the bound is established. That this bound is best possible is shown in the next paragraph. Observe that it has been shown that to each j there corresponds an I ⊂ {1, ..., m} such that j ∈ I, o(I) = n + 1, and 0 ∈ H{A^i : i ∈ I}.

§ 14. Example. Let positive numbers t_1 < t_2 < ... < t_{m−n} be selected. Define

        ( −1        0         ...   0         )
        (  0       −1         ...   0         )
        (  .        .               .         )
   A =  (  0        0         ...  −1         )
        ( t_1      t_1^2      ...  t_1^n      )
        (  .        .               .         )
        ( t_{m−n}  t_{m−n}^2  ...  t_{m−n}^n )

It turns out that (i) A satisfies the Haar condition; (ii) inf_x max_i [A^i, x] > −∞; and that (iii) there are precisely m − n sets I ⊂ {1, ..., m} such that o(I) = n + 1 and 0 ∈ H{A^i : i ∈ I}.

Proof. (i) Suppose that A contains a singular n × n submatrix B. Then a dependence Σ c_j B_j = 0 exists among the columns of B. If exactly k of the first n rows of A are present in B, then this equation implies the vanishing of k of the coefficients c_j as well as the vanishing of the polynomial Σ c_j t^j at exactly n − k positive points. Since such a polynomial has at most n − k − 1 changes of sign, it can have by DESCARTES' rule at most n − k − 1 positive roots. Thus c = 0.

(ii) This follows at once, using § 5, from the observation that 0 ∈ H{A^1, ..., A^{n+1}}.

(iii) If 0 ∈ H{A^i : i ∈ I} and o(I) = n + 1, then one may write 0 = Σ k_i A^i where k_i ≥ 0 and Σ k_i = 1. It may be seen at once from these conditions that all the first n rows of A must be among {A^i : i ∈ I}. The number of ways of obtaining a set of n + 1 rows from A including the first n is m − n.

§ 15. Theorem. Define F(x) = max_{1≤i≤m} R^i(x) and f(x) = min_{1≤i≤m} R^i(x). If either of the numbers F_0 = inf_x F(x), f_0 = sup_x f(x) is finite, then the other is also, and

Page 8: Newton’s Method for Convex Programming and Tchebycheff Aproximation

260 E . W . CHENEY and A. A. GOLDSTEIN:

these values are achieved at appropriate points; furthermore f_0 ≤ F_0, i.e.,

max_x min_i R^i(x) ≤ min_x max_i R^i(x).

Equality occurs here if and only if there exists a point y and a number M satisfying R^i(y) = M (1 ≤ i ≤ m).

Proof. By § 3C, if F_0 > −∞, an x^0 exists for which F_0 = F(x^0). Define then I = {i : R^i(x^0) = F_0} and observe that the system of inequalities [A^i, z] < 0 (i ∈ I) is inconsistent. This implies that x^0 maximizes the concave function min_{i∈I} R^i(x). Thus F_0 = sup_x min_{i∈I} R^i(x) ≥ sup_x min_{1≤i≤m} R^i(x) = f_0. Here we obtain strict inequality unless R^i(x^0) = F_0 for all i. The arguments are the same if one begins with the assumption f_0 < ∞.

§ 16. Algorithm I. We are given a bounded subset Ω of E_n and a bounded real-valued function on Ω, whose value at A ∈ Ω we write A_0. For each A ∈ Ω define R(A, x) = [A, x] − A_0. Also define F(x) = sup_{A∈Ω} R(A, x). It is desired to obtain an x ∈ E_n for which F(x) ≤ F(y) for all y, if such an x exists. Assume that in getting started a subset {A^0, ..., A^l} ⊂ Ω is known which spans E_n and satisfies 0 ∈ H{A^0, ..., A^l}. In this connection, see § 18. At step k (k ≥ l) in the algorithm there is given {A^0, ..., A^k} ⊂ Ω. Select x^k to minimize the function F_k(x) ≡ max{R(A^i, x) : 0 ≤ i ≤ k}. (This may be accomplished by the algorithm of § 17, by the methods of [18], by linear programming, etc.) Select A = A^{k+1} ∈ Ω to maximize R(A, x^k), or to come within 1/k of this maximum. Repeat this cycle, obtaining thereby a sequence x^l, x^{l+1}, .... The validity of this algorithm is established in § 22.
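For a finite Ω the cycle above can be sketched directly. In place of the linear programming the text mentions, the inner minimization below inspects every vertex where n+1 residual planes meet (valid under the Haar condition when F is bounded below); all data are invented for illustration:

```python
import numpy as np
from itertools import combinations

def minimax_finite(A, b):
    """Minimize max_i (A[i] . x - b[i]) by inspecting every vertex at
    which n+1 residual planes agree (assumes Haar, boundedness below)."""
    m, n = A.shape
    best_x, best_val = None, np.inf
    for rows in combinations(range(m), n + 1):
        M = np.c_[A[list(rows)], -np.ones(n + 1)]     # unknowns (x, z)
        try:
            sol = np.linalg.solve(M, b[list(rows)])   # R^i(x) = z on rows
        except np.linalg.LinAlgError:
            continue
        val = np.max(A @ sol[:n] - b)
        if val < best_val:
            best_x, best_val = sol[:n], val
    return best_x, best_val

def algorithm_I(A, b, start, tol=1e-9, max_iter=50):
    """Section 16 cycle: minimize over the rows found so far, then
    adjoin the row with the largest residual at the current iterate."""
    S = list(start)
    for _ in range(max_iter):
        x, Fk = minimax_finite(A[S], b[S])
        j = int(np.argmax(A @ x - b))
        if A[j] @ x - b[j] <= Fk + tol:    # no remaining cut improves
            return x, Fk
        S.append(j)
    return x, Fk

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 0.0, -0.5])
x, val = algorithm_I(A, b, start=[0, 1, 2])
print(round(float(val), 6))   # 0.25
```

The brute-force inner solver is exponential in n and stands in only for the practical methods (§ 17, linear programming) named in the text.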

§ 17. Algorithm II. We are given a subset {A^1, ..., A^m} of E_n satisfying the Haar condition and an m-tuple (b_1, ..., b_m). It is desired to obtain a minimum for the function F(x) = max_{1≤i≤m} R^i(x) where R^i(x) = Σ_{j=1}^n A^i_j x_j − b_i. It is necessary to assume that inf F(x) > −∞, or equivalently, 0 ∈ H{A^1, ..., A^m}. At each stage there is given a set I ⊂ {1, ..., m} of n + 1 elements and a point y ∈ E_n. Select j ∈ {1, ..., m} to maximize R^j(y). Select y' to minimize max_{i∈I+j} R^i(y'). Select h ∈ I to minimize R^h(y'). Define I' = I + j − h, and start anew with I' and y'. A starting procedure is given in § 18 and a formulary in § 19. This algorithm is now in use on the IBM 704 computer, having been programmed by Mr. NORMAN LEVINE.
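One cycle of the exchange can be sketched as follows on a small invented example; the vertex computation solves R^i(x) = z on an index set, and the inner minimization is done by brute force over (n+1)-subsets rather than by the update formulary of § 19 (termination tests are omitted):

```python
import numpy as np
from itertools import combinations

def vertex(A, b, idx):
    """Point x and level z with R^i(x) = z for every i in idx
    (|idx| = n + 1; solvency assumed)."""
    n = A.shape[1]
    M = np.c_[A[list(idx)], -np.ones(n + 1)]
    sol = np.linalg.solve(M, b[list(idx)])
    return sol[:n], sol[n]

def exchange_cycle(A, b, I, y):
    """One cycle of section 17: adjoin the index j worst at y, minimize
    the max residual over I + j (brute force over (n+1)-subsets), then
    drop the index h of I with the smallest residual at the new point."""
    m, n = A.shape
    R = lambda i, x: A[i] @ x - b[i]
    j = max(range(m), key=lambda i: R(i, y))
    pool = sorted(set(I) | {j})
    cand = []
    for idx in combinations(pool, n + 1):
        try:
            x, z = vertex(A, b, idx)
        except np.linalg.LinAlgError:
            continue
        cand.append((max(R(i, x) for i in pool), x))
    _, y2 = min(cand, key=lambda t: t[0])
    h = min(I, key=lambda i: R(i, y2))
    return sorted((set(I) | {j}) - {h}), y2

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 0.0, -0.5])
y0, _ = vertex(A, b, [0, 1, 2])              # vertex of the starting set
I1, y1 = exchange_cycle(A, b, [0, 1, 2], y0)
print(I1)   # [0, 2, 3]
```

In the actual algorithm each cycle swaps a single row, so the inverse of the vertex system can be maintained by the update of § 3D instead of being re-solved.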

The following remarks will assist in interpreting § 19. In each cycle there is a set of indices I = {i_0, ..., i_n}, a point x = (x_1, ..., x_n), and a number x_0 such that R^i(x) = −x_0 for i ∈ I. This equation may be written x* = D b*, where x* = (x_0, x_1, ..., x_n), b* = (b_{i_0}, ..., b_{i_n}), and where D is the inverse of

( 1  A^{i_0}_1  ...  A^{i_0}_n )
( .      .             .       )
( 1  A^{i_n}_1  ...  A^{i_n}_n )

In the formulary, l = 0 signifies the problem of minimizing F(x) = max R^i(x); l = 1 signifies the problem of obtaining a solution of F(x) ≤ 0; and l = 2 signifies the problem of minimizing F*(x) ≡ max |R^i(x)|.


§ 18. Starting Procedure for Algorithms. Assume the Haar condition and that the function F(x) = max_{1≤i≤m} R^i(x) possesses a greatest lower bound M. By § 3C there exists an x^0 such that F(x^0) = M. Algorithm II requires, for starting, a set of rows {A^{i_0}, ..., A^{i_n}} from the matrix having the property that the system of inequalities

[A^{i_j}, z] < 0 (0 ≤ j ≤ n)

be inconsistent. The existence of such sets (indeed, m − n of them) is guaranteed by § 13. To obviate the search for such a set, a new row A^0 is adjoined to the matrix, and a number b_0 is selected in such a way that

(1) R^0(x^0) ≡ [A^0, x^0] − b_0 < M;

that is, x^0 remains a solution of the augmented problem. Toward this end, define A^0 = −Σ_{i=1}^n A^i, and suppose b_0 is large enough to validate equation (1) above.

It is to be verified that the set {A^0, ..., A^n} satisfies the Haar condition and the condition that the system

(2) [A^i, z] < 0 (0 ≤ i ≤ n)

be inconsistent. Indeed, supposing a non-trivial linear equation 0 = Σ_{i=0, i≠i_0}^n u_i A^i, the Haar condition on {A^1, ..., A^n} implies that i_0 ≠ 0 and that u_0 ≠ 0. The replacement of A^0 in this equation by −Σ_{i=1}^n A^i will then yield an equation which exhibits linear dependence among A^1, ..., A^n.

Finally, we obviously have 0 ∈ H{A^0, ..., A^n}, showing (§ 4) that system (2) is inconsistent.

If the presence of A^0 vitiates the Haar or solvency condition at a subsequent juncture in the algorithm, the above technique may be repeated. Specifically, suppose that at a certain juncture the set I contains 0, and that the set {A^i : i ∈ I} fails either of the two desired conditions. Then A^0 could be replaced by −Σ_{i∈I, i≠0} A^i, and the computations may proceed.

Since there is no a priori knowledge of the number [A^0, x^0] − M, b_0 is chosen in practice by trial, and the number

M* = inf_x max_{0≤i≤m} R^i(x)

is calculated by means of the algorithm. If, in the last cycle of the algorithm, 0 ∈ I, then condition (1) above is not satisfied by b_0, and b_0 must be increased. As b_0 increases, M* decreases; but if M* < min_{1≤i≤m} (−b_i), then by § 15 the (original) function F is not bounded below.

Assuming the Haar and normality conditions, the Tchebycheff problem of minimizing the function F(x) = max_{1≤i≤m} |R^i(x)| has an alternate starting procedure. Let J be any set of n + 1 indices, and let x* be a point which minimizes max_{i∈J} |R^i(x)|. Such a point may be determined by methods of [18], for example. Define R^{m+i}(x) ≡ −R^i(x). The set I ≡ {i : 1 ≤ i ≤ 2m, R^i(x*) = F(x*)} is a satisfactory starting set.
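The doubling device R^{m+i} = −R^i reduces the two-sided Tchebycheff problem to the one-sided form treated throughout; a minimal sketch (numpy assumed, data invented):

```python
import numpy as np

def double_rows(A, b):
    """Rewrite min max |A x - b| as the one-sided min max (A' x - b')
    by adjoining the negated residuals R^{m+i} = -R^i (section 18)."""
    return np.vstack([A, -A]), np.concatenate([b, -b])

A = np.array([[1.0], [2.0]])
b = np.array([1.0, 0.0])
A2, b2 = double_rows(A, b)
x = np.array([0.5])
# the one-sided maximum over the doubled system equals max |A x - b|
print(np.max(A2 @ x - b2) == np.max(np.abs(A @ x - b)))   # True
```

Any algorithm of this paper applied to the doubled system therefore solves the discrete Tchebycheff approximation problem.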


§ 19. Formulary for Algorithm II

[The formulary occupies a full page in the original: a computational flow chart for Algorithm II, comprising a read-in block for the matrix (A^i_j) and the vector (b_1, ..., b_m); a mode switch l (minimax, inequality, polytope); the starting procedure of § 18 adjoining the row A^0 = −Σ A^i; a matrix-inverse subroutine; the exchange tests of § 17; an error stop; and a re-entry replacing b_0 by 10 b_0 when 0 ∈ I at termination. The chart does not reproduce legibly here.]


§ 20. Algorithm III. Consider the problem of obtaining the minimizing point of the function F(x) = sup_{A∈Ω} [A, x] − b(A), Ω being a countable compact set in E_n and b a continuous function on Ω. Let the elements of Ω be enumerated: A^1, A^2, .... For each m let x^{(m)} be chosen to minimize the auxiliary function

F^{(m)}(x) = max_{1≤i≤m} R(A^i, x).

By § 7, there is a subset {A^{i_0}, ..., A^{i_n}} of Ω having the property that the minimum of

F*(x) = max_{0≤j≤n} R(A^{i_j}, x)

equals the minimum of F. Thus, in the presence of the Haar condition, the sequence x^{(1)}, x^{(2)}, ... is eventually stationary and gives the minimum point of F. The algorithm therefore converges in a finite number of steps.

§ 21. Algorithm IV. Let Ω and Λ denote bounded subsets of E_n, and b and c bounded real-valued functions on Ω and Λ respectively. Define the closed convex set K = {x ∈ E_n : [B, x] ≤ c(B), all B ∈ Λ}. Define the continuous convex function F(x) = sup{R(A, x) : A ∈ Ω}, where R(A, x) = [A, x] − b(A). It is desired to obtain, if such exists, a point y of K inducing a minimum value of F. Assume, in getting started, that finite sets Ω^0 ⊂ Ω and Λ^0 ⊂ Λ are available for which 0 ∈ int H(Ω^0 ∪ Λ^0). In this connection, see § 18. At the m-th step there are given two finite sets Ω^m ⊂ Ω and Λ^m ⊂ Λ. Define F^m(x) = sup{R(A, x) : A ∈ Ω^m} and K^m = {x : [B, x] ≤ c(B), all B ∈ Λ^m}. Select x^m to minimize F^m on K^m. In this connection, see § 23. Select A' ∈ Ω to maximize R(A, x^m) within a tolerance of 1/m. Select B' ∈ Λ to maximize [B, x^m] − c(B) within a tolerance of 1/m. Begin anew with Ω^{m+1} = Ω^m ∪ {A'} and Λ^{m+1} = Λ^m ∪ {B'}.

§ 22. Theorem. If K is non-empty then Algorithm IV is effective in the sense that

(i) F^m(x^m) ↗ p ≡ inf{F(x) : x ∈ K};

(ii) the sequence {x^m : m = 0, 1, 2, ...} possesses cluster points, each of which lies in K and minimizes F thereon.

Proof. We have finite sets Ω^0 and Λ^0 fulfilling 0 ∈ int H(W), where W = Ω^0 ∪ Λ^0.

Define θ = inf_{||x||=1} max_{w∈W} [w, x]. If θ ≤ 0, then there exists an x^0 with ||x^0|| = 1 such that [w, x^0] ≤ 0 for all w ∈ W; hence [w, x^0] ≤ 0 for all w ∈ H(W). Since this is incompatible with the fact that 0 ∈ int H(W), θ > 0.

Now assume K ≠ ∅. Select v ∈ K and define M = θ^{-1} max[sup_{B∈Λ} c(B), F(v) + sup_{A∈Ω} b(A)]. Let x be an arbitrary point satisfying ||x|| > M. Select w^0 ∈ W such that [w^0, x] = max_{w∈W} [w, x] ≥ θ||x||. If w^0 ∈ Ω^0, then F^0(x) ≥ [w^0, x] − b(w^0) ≥ ||x||θ − sup_{A∈Ω} b(A) > F(v) ≥ F^0(v). On the other hand, if w^0 ∈ Λ^0, then [w^0, x] − c(w^0) ≥ ||x||θ − sup_{B∈Λ} c(B) > 0, so that x ∉ K^0. This argument establishes that inf_{x∈K^0} F^0(x) = inf_{x∈K^0, ||x||≤M} F^0(x), so that, due to the continuity of F^0, the infimum on the left is attained. Thus x^0 exists. By the same argument, inf_{x∈K} F(x) = inf_{x∈K, ||x||≤M} F(x) ≡ p < ∞.

The following inequality is obvious: F^m(x^m) = inf_{x∈K^m} F^m(x) ≤ inf_{x∈K^{m+1}} F^{m+1}(x) = F^{m+1}(x^{m+1}) ≤ p. Since F^0(x^m) ≤ F^m(x^m) ≤ p ≤ F(v), and since x^m ∈ K^0, we know by virtue of the preceding paragraph that ||x^m|| ≤ M. Let y be any cluster point of


the sequence x^0, x^1, x^2, .... If for some m, y ∉ K^m, then select δ > 0 and B^m ∈ Λ^m such that [B^m, y] − c(B^m) > δ. Select i ≥ m so that ||x^i − y|| < δ/r, where r = sup_{B∈Λ} ||B||. Then

[B^m, x^i] − c(B^m) = [B^m, y] − c(B^m) + [B^m, x^i − y] > δ − ||B^m|| · ||x^i − y|| ≥ 0,

contradicting x^i ∈ K^i ⊂ K^m. Hence y ∈ ∩_{m=0}^∞ K^m. Now define G(x) = sup_{B∈Λ} [B, x] − c(B). If y ∉ K, then put d = ⅓ G(y). Take m ≥ 1/d so that ||x^m − y|| < d/r. Then

G(y) ≤ G(x^m) + d ≤ [B', x^m] − c(B') + 1/m + d < [B', y] − c(B') + 3d ≤ 3d = G(y),

a contradiction. Note that we use here the fact that y ∈ K^{m+1}. Thus y ∈ K.

It was shown earlier that F^m(x^m) is non-decreasing and bounded by p. Thus F^m(x^m) ↗ p − 3ε for some ε ≥ 0. If ε > 0, select m ≥ 1/ε so that ||y − x^m|| < ε/2q, where q = sup_{A∈Ω} ||A||. Then F(x^m) ≥ F(y) − ε ≥ p − ε. Take i > m so that ||y − x^i|| < ε/2q. Using R^{m+1}(x) = [A', x] − b(A'), we have

R^{m+1}(x^m) − R^{m+1}(x^i) ≥ F(x^m) − 1/m − F^i(x^i) ≥ F(x^m) − ε − p + 3ε ≥ ε.

On the other hand, R^{m+1}(x^m) − R^{m+1}(x^i) ≤ ||A'|| · ||x^m − x^i|| < ε, a contradiction. Hence ε = 0, establishing (i). As for (ii), observe that every cluster point y of {x^m} lies in K. Then as above, with δ > 0 arbitrary and m, i suitably chosen, p ≤ F(y) ≤ F(x^m) + δ ≤ R^{m+1}(x^m) + 2δ ≤ R^{m+1}(x^i) + 3δ ≤ F^i(x^i) + 3δ ≤ p + 3δ, Q.E.D.

§ 23. Algorithm V. Define F(x) = max_{1≤i≤k} R^i(x) and G(x) = max_{k<i≤m} R^i(x), where R^i(x) = [A^i, x] − b_i and 1 ≤ k ≤ m. It is desired to obtain the minimum of F on the domain K = {x ∈ E_n : G(x) ≤ 0}, if such exists. It is assumed that {A^1, …, A^m} satisfies the Haar condition.

In each cycle of the algorithm, a set I ⊂ {1, …, m} is given such that

(i) o(I) = n + 1,
(ii) 0 ∈ H{A^i : i ∈ I},
(iii) I ∩ {1, …, k} is non-empty. (See § 18 for the starting procedure.)

Obtain a point x and a number μ such that

(iv) i ∈ I ∩ {1, …, k} ⇒ R^i(x) = μ,
(v) i ∈ I ∩ {k + 1, …, m} ⇒ R^i(x) = 0.

If G(x) ≤ 0 and F(x) = μ, then x is a solution. If G(x) > 0, select p ∈ {k + 1, …, m} so that R^p(x) > 0. If G(x) ≤ 0 and F(x) > μ, select p ∈ {1, …, k} so that R^p(x) > μ. In both of the latter cases, select q ∈ I so that 0 ∈ H{A^i : i ∈ I'}, where I' = I ∪ {p} − {q}. See § 10 in this connection. Begin the next cycle with I' in place of I.
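Conditions (iv)–(v) amount to a square linear system in the n + 1 unknowns (x, μ). The following is a minimal sketch of one cycle in modern notation — our own illustration, not the authors' code; the function names and 0-based indexing are assumptions, and the exchange step for choosing q is omitted.

```python
def gauss(M, rhs):
    """Solve the square system M y = rhs by Gaussian elimination
    with partial pivoting (M given as a list of row lists)."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][c] * y[c] for c in range(r + 1, n))) / M[r][r]
    return y

def cycle(A, b, k, I):
    """One cycle of Algorithm V.  A is an m-list of n-vectors with
    R^i(x) = <A[i], x> - b[i]; F is the max of the first k residuals,
    G the max of the rest; I holds n + 1 row indices (0-based)."""
    n = len(A[0])
    # condition (iv) gives rows (A^i, -1) = b_i; condition (v) gives (A^i, 0) = b_i
    rows = [list(A[i]) + ([-1.0] if i < k else [0.0]) for i in I]
    y = gauss(rows, [b[i] for i in I])
    x, mu = y[:n], y[n]
    R = [sum(a * t for a, t in zip(A[i], x)) - b[i] for i in range(len(A))]
    F, G = max(R[:k]), max(R[k:])
    if G <= 1e-12 and F <= mu + 1e-12:
        return x, mu, None          # x is a solution
    if G > 0:                       # infeasible: a violated constraint enters I
        p = max(range(k, len(A)), key=lambda i: R[i])
    else:                           # feasible but F(x) > mu
        p = max(range(k), key=lambda i: R[i])
    return x, mu, p                 # p enters; q must be chosen as in the text
```

For instance, minimizing F(x) = max(x, −x) = |x| subject to x − 2 ≤ 0, starting from I = {0, 1}, terminates in one cycle with x = 0 and μ = 0.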

§ 24. Effectiveness of Algorithm V. Consider a set I satisfying (i), (ii), and (iii). Put I = {i_0, …, i_n}, with {i_0, …, i_j} ⊂ {1, …, k} and {i_{j+1}, …, i_n} ⊂ {k + 1, …, m}. We show first that the following matrix is non-singular.

    B = ( A^{i_0}       1 )
        (   ⋮           ⋮ )
        ( A^{i_j}       1 )
        ( A^{i_{j+1}}   0 )
        (   ⋮           ⋮ )
        ( A^{i_n}       0 )

Here each row consists of the n components of A^{i_p}, followed by a final entry 1 for p ≤ j and 0 for p > j (this last column corresponding to the unknown μ in (iv) and (v)).


Suppose, on the contrary, that there exists a non-zero vector u = (u_1, …, u_{n+1}) for which Bu = 0. Then [A^{i_p}, u*] = −u_{n+1} for 0 ≤ p ≤ j and [A^{i_p}, u*] = 0 for j + 1 ≤ p ≤ n, where u* = (u_1, …, u_n). If u_{n+1} = 0, then the Haar condition is violated. If u_{n+1} ≠ 0, then write, in accordance with (ii): 0 = Σ λ_p A^{i_p} with Σ λ_p = 1 and λ_p ≥ 0. By the Haar condition, λ_p > 0. Thus 0 = [0, u*] = Σ λ_p [A^{i_p}, u*] = −u_{n+1} Σ_{p≤j} λ_p ≠ 0, a contradiction. Thus there is no difficulty in obtaining x and μ satisfying (iv) and (v).

We show now that if G(x) ≤ 0 and F(x) = μ, then x is a solution. If not, then there exists y such that F(y) < μ and G(y) ≤ 0. Then the vector z = y − x satisfies the inequalities

[A^i, z] < 0,   i ∈ I ∩ {1, …, k},
[A^i, z] ≤ 0,   i ∈ I ∩ {k + 1, …, m},

and one obtains a contradiction as above. Hence x is a solution.

If G(x) > 0 or F(x) > μ, then the algorithm specifies how to obtain a set I' satisfying conditions (i) and (ii). We now show that I' satisfies (iii) as well. If not, then o(I ∩ {1, …, k}) = 1 and k < p ≤ m. Select y ∈ K. Then [A^i, y] ≤ b_i for k < i ≤ m. Thus [A^i, y − x] ≤ 0 for i ∈ I ∩ {k + 1, …, m} and [A^p, y − x] < 0. This contradicts property (ii) for I'. Thus there is no difficulty in obtaining an x' and μ' satisfying

[A^i, x'] − b_i = μ',   i ∈ I' ∩ {1, …, k},
[A^i, x'] − b_i = 0,    i ∈ I' ∩ {k + 1, …, m}.

By subtracting, we find

[A^i, x' − x] = μ' − μ,   i ∈ I' ∩ {1, …, k},
[A^i, x' − x] = 0,        i ∈ I' ∩ {k + 1, …, m},
[A^p, x' − x] < μ' − μ    when p ∈ {1, …, k},
[A^p, x' − x] < 0         when p ∈ {k + 1, …, m}.

Since 0 ∈ H{A^i : i ∈ I'}, μ' − μ > 0. Thus in proceeding from one cycle of the algorithm to the next, the value of μ increases. Since there are but a finite number of sets I satisfying (i), the algorithm will terminate with a point x satisfying F(x) = μ and G(x) ≤ 0. This completes the proof. It may be observed that if K is empty, the algorithm will indicate this fact by the impossibility of computing x and μ in some cycle.

§ 25. Algorithm VI. It is desired to obtain x* ∈ E_n minimizing the function F(x) = max_{1≤i≤m} R^i(x). This algorithm has the feature that at the k-th step, an upper bound r_k and a lower bound s_k are provided for the unknown number p ≡ inf_x F(x); furthermore, r_k ↘ p and s_k ↗ p. The desirability of such a feature was pointed out in [9].

Assume now that the Haar and normality conditions are fulfilled and that F is bounded below. For any I ⊂ {1, …, m} define F_I(x) = max_{i∈I} R^i(x). In each computing cycle there will be given a point x ∈ E_n and a set I ⊂ {1, …, m}, the latter satisfying o(I) = n + 1 and 0 ∈ H{A^i : i ∈ I}. Define then J = {j : R^j(x) = F(x)}.


Select z to minimize F_{I∪J}. If z = x, then z is a solution. Otherwise proceed to select an x' which will minimize F on the ray {x + t(z − x) : t ≥ 0}. Define I' = {i ∈ I ∪ J : R^i(z) = F_{I∪J}(z)}. Begin anew with x' and I'. See § 18 for the starting procedure.
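The search along the ray {x + t(z − x)} is the minimization of a convex polygonal function of the single variable t. A minimal sketch of such a one-dimensional search — our own illustration, not the authors' procedure; it scans the pairwise crossings by brute force, which is adequate for small m:

```python
def ray_min(lines):
    """Minimize F(t) = max_i (c*t + d) over t >= 0 for affine pieces
    lines = [(c, d), ...], assuming F is bounded below on the ray.
    The minimum of a convex polygonal function occurs at t = 0 or at
    a crossing point of two of its pieces."""
    def F(t):
        return max(c * t + d for c, d in lines)
    cands = [0.0]
    for i, (ci, di) in enumerate(lines):
        for cj, dj in lines[i + 1:]:
            if ci != cj:
                t = (dj - di) / (ci - cj)   # where the two pieces cross
                if t > 0:
                    cands.append(t)
    t_best = min(cands, key=F)
    return t_best, F(t_best)
```

For F(t) = max(t, 2 − t), this returns t = 1 with minimum value 1.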

§ 26. Effectiveness of Algorithm VI. The proof will be given in six parts.

(1) The point x' is well-defined because on the indicated ray, F is a polygonal function which is bounded below and thus attains its minimum (§ 3 C).

(2) If x ≠ z, then F(x') < F(x). To prove this, observe that for each j ∈ J, R^j(z) ≤ F_{I∪J}(z) < F_{I∪J}(x) ≤ F(x), the strict inequality being due to the uniqueness of z (a consequence of the Haar condition). Thus for small t > 0, F(x + t(z − x)) < F(x). Note that F(x) then has the properties claimed above for r_k.

(3) I' satisfies the conditions laid down for I. To prove this, observe that the system of inequalities [A^i, d] < 0 (i ∈ I') is inconsistent. Thus 0 ∈ H{A^i : i ∈ I'}. By the Haar condition, then, o(I') > n. By the normality condition, o(I') = n + 1.

(4) If x = z, then z is a solution. Indeed, if x = z, then x minimizes F_{I∪J}. By the uniqueness of z, F_{I∪J} is increasing in a neighborhood of z. Now F(x) = F_{I∪J}(x) and F_{I∪J} ≤ F always. Thus F is increasing in a neighborhood of x, and x must be a solution.

(5) If x' ≠ z', then F_{I'∪J'}(z') > F_{I∪J}(z) (primes denoting the objects of the following cycle). To establish this, note first that min F_{I∪J} = min F_{I'} ≤ min F_{I'∪J'}. If equality occurs here, then z = z', due to the uniqueness of z. In this event, x' = x'' because x, x', z are collinear, as are x', x'', z', so that the minimum of F on the ray xz coincides with the minimum of F on the ray x'z'. Hence x' = z' as well. Note that F_{I∪J}(z) therefore has the properties claimed above for s_k.

(6) There are but a finite number of sets I in {1, …, m}, and in each cycle of the algorithm a new I occurs, because of (5). Thus the algorithm terminates at some cycle in which x = z, such a point being a solution by (4). This concludes the proof.

§ 27. Approximation in C(T). Let T denote a compact metric space and C(T) the linear space of all continuous real-valued functions defined on T. For any subset S of T define |S| = sup_{t∈T} inf_{s∈S} d(s, t), and define a semi-norm in C(T) by writing ||f||_S = sup_{s∈S} |f(s)|. Let M denote a given subset of C(T) and g a fixed element of C(T). The problem of approximating g by elements of M is to be investigated. Specifically, given ε > 0, it is desired to obtain by a simple algorithm an f_0 ∈ M fulfilling the condition

||f_0 − g||_T − ε ≤ p ≡ inf_{f∈M} ||f − g||_T.

Since in practice it is easier to compute the semi-norms || · ||_S instead of || · ||_T, it is advantageous that the following principle be valid.

(P) To each ε > 0 there corresponds a δ > 0 such that p − ε ≤ ||f_0 − g||_S ≤ ||f_0 − g||_T ≤ p + ε whenever |S| < δ and ||f_0 − g||_S ≤ δ + inf_{f∈M} ||f − g||_S.
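For a concrete reading of (P): on T = [0, 1] with the usual metric, a uniform mesh of n midpoints has mesh density |S| = h/2 with h = 1/n, and for Lipschitz functions the discrete semi-norm brackets the true norm to within L|S|. A minimal sketch — our own, assuming f is L-Lipschitz; the function name is an assumption:

```python
def seminorm_bounds(f, L, n):
    """Bracket ||f||_T on T = [0, 1] from the finite midpoint mesh S:
    ||f||_S <= ||f||_T <= ||f||_S + L*|S|, where |S| = h/2 = 1/(2n),
    valid whenever |f(s) - f(t)| <= L*|s - t|."""
    h = 1.0 / n
    S = [(i + 0.5) * h for i in range(n)]      # midpoints: |S| = h/2
    lo = max(abs(f(s)) for s in S)             # the semi-norm ||f||_S
    hi = lo + L * h / 2                        # Lipschitz bound on ||f||_T
    return lo, hi
```

With f(t) = t, L = 1, and n = 10, this brackets the true norm ||f||_T = 1 between 0.95 and 1.0.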

§ 28. Lemma. Let M designate a finite-dimensional subspace of C(T). There exist two constants q > 0 and Q > 0 such that ||f||_T ≤ Q whenever f ∈ M, |S| < q, and ||f − g||_S ≤ p + 1.


Proof. Let {f_1, …, f_n} be a basis for M, and define for each t ∈ T an n-tuple A(t) = (f_1(t), …, f_n(t)). Due to the independence of the f_i's, the set {A(t) : t ∈ T} has a zero orthogonal complement in E_n and thus contains a basis {A(t_1), …, A(t_n)}. Hence the n × n determinant whose entries are A_{ij} = f_i(t_j) is non-zero, and by the continuity of the determinant function, there exist positive numbers δ and r such that |det B_{ij}| ≥ r whenever max_{i,j} |B_{ij} − A_{ij}| < δ. Select q > 0 so that |f_i(s) − f_i(t)| ≤ δ whenever d(s, t) ≤ q. Assume |S| < q, ||f − g||_S ≤ p + 1, and f = Σ x_i f_i. For each j select s_j ∈ S satisfying d(s_j, t_j) ≤ q. Clearly |f(s_j) − g(s_j)| ≤ p + 1. Thus |Σ x_i f_i(s_j)| ≤ p + 1 + ||g||_T. Since d(s_j, t_j) ≤ q, |f_i(s_j) − f_i(t_j)| ≤ δ, and |det f_i(s_j)| ≥ r. By Cramer's Rule, each |x_i| has an upper bound d̄ = c · n! · r^{−1}(δ + max_{i,j} |A_{ij}|)^{n−1}, where c = p + 1 + ||g||_T; whence ||f||_T ≤ Σ |x_i| · ||f_i||_T ≤ d̄ · Σ ||f_i||_T ≡ Q.

§ 29. Theorem. Principle (P) is valid under either of the two following conditions:

(i) M is an equicontinuous subset of C(T);

(ii) M is a subset of a finite-dimensional subspace of C(T).

Proofs. (i) Given ε > 0, take δ < ε/3 such that |f(s) − f(t)| < ε/3 and |g(s) − g(t)| < ε/3 whenever f ∈ M and d(s, t) < δ. Suppose |S| < δ and ||f_0 − g||_S < δ + inf_{f∈M} ||f − g||_S. Select t ∈ T so that |f_0(t) − g(t)| = ||f_0 − g||_T. Select s ∈ S so that d(s, t) < δ. Then

p ≤ ||f_0 − g||_T = |f_0(t) − g(t)| ≤ |f_0(t) − f_0(s)| + |f_0(s) − g(s)| + |g(s) − g(t)| < ε/3 + ||f_0 − g||_S + ε/3 < ε + inf_{f∈M} ||f − g||_S ≤ ε + p.

Thus p − ε < p − 2ε/3 ≤ ||f_0 − g||_S ≤ ||f_0 − g||_T ≤ ε + p.

(ii) By the Lemma, if δ < min(1, q), |S| < δ, and ||f_0 − g||_S < δ + inf_{f∈M} ||f − g||_S, then ||f_0||_T ≤ Q. Thus the approximating functions are taken from a bounded subset of a finite-dimensional subspace of C(T), which is therefore equicontinuous. Hence (ii) reduces to (i).

§ 30. Examples. A. Let T = [0, 1] and let M consist of all functions on T having a first derivative bounded in modulus by a constant k. Then M is equicontinuous, since |f(s) − f(t)| ≤ k|s − t|. This M is infinite-dimensional, containing the independent set {e^{at} : 0 ≤ a ≤ ½ log k}.

B. Now let U denote an arbitrary (non-topological) set and B(U) the Banach space of all bounded real-valued functions on U, normed by ||φ|| = sup_{u∈U} |φ(u)|. Let N designate a finite-dimensional subspace of B(U) spanned by {φ_1, …, φ_n}. Let Φ denote any fixed element of B(U). We seek φ* ∈ N such that ||φ* − Φ|| ≤ ||φ − Φ|| for all φ ∈ N. This problem may be treated by Algorithm I after recasting the problem as follows. For each u ∈ U, define an n-tuple A^u = (φ_1(u), …, φ_n(u)). We then seek x* ∈ E_n which minimizes the function F(x) = sup_u |[A^u, x] − Φ(u)|.

It is also possible to recast the problem into the form of § 27. Define B^u = (φ_1(u), …, φ_n(u), Φ(u)) and denote by T the closure of the set {B^u : u ∈ U} in E_{n+1}. On E_{n+1} define the functions f_i(y) = i-th component of y, 1 ≤ i ≤ n + 1. Clearly T is compact and f_i ∈ C(T). It turns out that for each x ∈ E_n, ||Σ x_i φ_i − Φ|| = ||Σ x_i f_i − f_{n+1}||_T. Thus by § 29, Principle (P) is valid.
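As the simplest instance of the recast problem (our own example, not in the paper): take N to be the one-dimensional space of constant functions, φ_1 ≡ 1. Then minimizing F(x) = sup_u |x − Φ(u)| yields the midrange of Φ:

```python
def best_constant(values):
    """Best uniform approximation of bounded data by a constant x:
    F(x) = max |x - v| over the data is minimized by the midrange,
    and the minimal deviation is half the spread of the data."""
    lo, hi = min(values), max(values)
    return (lo + hi) / 2.0, (hi - lo) / 2.0
```

For the data {0, 1, 4}, the best constant is 2 with deviation 2.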


References

[1] REMEZ, E.: Sur un Procédé Convergent d'Approximations Successives pour Déterminer les Polynômes d'Approximation. C. R. Acad. Sci. Paris 198, 2063--2065 (1934).

[2] REMEZ, E.: Sur le Calcul Effectif des Polynômes d'Approximation de Tschebyscheff. C. R. Acad. Sci. Paris 199, 337--340 (1934).

[3] REMEZ, E. YA.: On the Method of Best, in the Sense of Tchebycheff, Approximate Representation of Functions (Ukrainian). Kiev 1935. See also Reference 4 below.

[4] REMEZ, E. YA.: General Computation Methods for Chebyshev Approximation. Problems with Real Parameters Entering Linearly (Russian). Izdat. Akad. Nauk Ukrainsk. SSR, Kiev 1957. 454 pp. See also MR 19-580.

[5] NOVODVORSKII, E. N., and I. SH. PINSKER: On a Process of Equalization of Maxima. Uspehi Matem. Nauk, N.S. 6, 174--181 (1951). See also SHENITZER, A.: Chebyshev Approximations. J. Assoc. Comput. Mach. 4, 30--35 (1957), MR 13-728.

[6] BEALE, E. M. L.: An Alternative Method for Linear Programming. Proc. Cambridge Philos. Soc. 50, 512--523 (1954). MR 16-155.

[7] BEALE, E. M. L.: On Minimizing a Convex Function Subject to Linear Inequalities. J. Roy. Stat. Soc., Ser. B 17, 173--177 (1955).

[8] BRATTON, DONALD: New Results in the Theory and Techniques of Chebyshev Fitting. Abstract 546-34, Notices Am. Math. Soc. 5, 248 (1958).

[9] STIEFEL, EDUARD L.: Numerical Methods of Tchebycheff Approximation. Pp. 217--232 in R. E. LANGER (ed.), On Numerical Approximation. Madison 1959. 480 pp.

[10] STIEFEL, E.: Über diskrete und lineare Tschebyscheff-Approximationen. Numerische Mathematik 1, 1--28 (1959).

[11] WOLFE, PHILIP: Programming with Nonlinear Constraints. Preliminary Report, Abstract 548-102, Notices Am. Math. Soc. 5, 508 (1958).

[12] STONE, JEREMY J.: The Cross Section Method. Presented orally at the Symposium on Mathematical Programming, RAND Corporation, March 19, 1959.

[13] KELLEY, JAMES E.: A General Technique for Convex Programming. Presented orally at the Symposium on Mathematical Programming, RAND Corporation, March 19, 1959.

[14] CHENEY, E. W., and A. A. GOLDSTEIN: Proximity Maps for Convex Sets. Proc. Amer. Math. Soc. 10, 448--450 (1959).

[15] BONNESEN, T., u. W. FENCHEL: Theorie der konvexen Körper. Berlin 1934.

[16] FAN, K.: On Systems of Linear Inequalities. In: Linear Inequalities and Related Systems, ed. by H. W. KUHN and A. W. TUCKER, pp. 99--156. Princeton 1956.

[17] BRAM, JOSEPH: Chebychev Approximation in Locally Compact Spaces. Proc. Am. Math. Soc. 9, 133--136 (1958).

[18] GOLDSTEIN, A. A., and E. W. CHENEY: A Finite Algorithm for the Solution of Consistent Linear Equations and Inequalities and for the Tchebycheff Approximation of Inconsistent Linear Equations. Pac. J. Math. 8, 415--427 (1958).

Convair Astronautics, Department 591.10

San Diego 12, California

(Received July 3, 1959)