A feasible directions method for nonsmooth convex optimization


Struct Multidisc Optim, DOI 10.1007/s00158-011-0634-y

    RESEARCH PAPER

    A feasible directions method for nonsmooth convex optimization

José Herskovits · Wilhelm P. Freire · Mario Tanaka Fo · Alfredo Canelas

Received: 26 July 2010 / Revised: 14 December 2010 / Accepted: 20 January 2011
© Springer-Verlag 2011

Abstract We propose a new technique for minimization of convex functions not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximated linear programs obtained with cutting planes. At each iteration a search direction and a step length are computed. If the step length is considered non serious, a cutting plane is added and a new search direction is computed. This procedure is repeated until a serious step is obtained. When this happens, the search direction is a feasible descent direction of the constrained equivalent problem. To compute the search directions we employ the same formulation as in FDIPA, the Feasible Directions Interior Point Algorithm for constrained optimization. We prove global convergence of the present method. A set of numerical tests is described. The present technique was also successfully applied to the topology optimization of robust

J. Herskovits · M. Tanaka Fo (B)
Mechanical Engineering Program, COPPE, Federal University of Rio de Janeiro, PO Box 68503, CEP 21945-970, CT, Cidade Universitária, Ilha do Fundão, Rio de Janeiro, Brazil
e-mail: [email protected]
J. Herskovits
e-mail: [email protected]

W. P. Freire
UFJF, Federal University of Juiz de Fora, Juiz de Fora, Brazil
e-mail: [email protected]

A. Canelas
IET, Facultad de Ingeniería, UDELAR, Montevideo, Uruguay
e-mail: [email protected]

Present Address:
M. Tanaka Fo
Faculdade de Matemática, UFOPA, Universidade Federal do Oeste do Pará, Santarém, PA, Brazil

trusses. Our results are comparable to those obtained with other well-known established methods.

Keywords Unconstrained convex optimization · Nonsmooth optimization · Cutting planes method · Feasible direction interior point methods

    1 Introduction

In this paper, we propose a new algorithm for solving the unconstrained optimization problem:

    min_{x ∈ R^n} f(x)                                    (P)

where f : R^n → R is a convex function, not necessarily smooth.

Nonsmooth optimization problems arise in advanced structural analysis and optimization. This is the case for several types of unilateral problems, such as contact and impact in solids, or delamination analysis in composite materials. Dynamic analysis and stability studies involve eigenvalues and nonsmooth functions. Most of these applications involve nonconvex problems with constraints. The authors consider the present study as the first step toward a new technique for numerical algorithms for nonconvex optimization with smooth and nonsmooth constraints.

It is assumed that f has bounded level sets. Thus, there is an a ∈ R such that Ω_a ≡ {x ∈ R^n | f(x) ≤ a} is compact. Let ∂f(x) be the subdifferential (Clarke 1983) of f at x. In what follows, it is assumed that one arbitrary subgradient s ∈ ∂f(x) can be computed at any point x ∈ R^n.
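For instance, for a max-affine function f(x) = max_i (a_i^T x + b_i), a valid subgradient at x is the slope of any affine piece attaining the maximum. A minimal oracle sketch (illustrative names, not from the paper's code):

```python
# Subgradient oracle for the max-affine convex function
# f(x) = max_i (a_i . x + b_i): a valid subgradient s in the
# subdifferential of f at x is the slope a_i of any piece that
# attains the maximum.

def f_and_subgradient(x, pieces):
    """pieces: list of (a, b) with a a list of floats, b a float.
    Returns (f(x), s) where s is one arbitrary subgradient."""
    vals = [(sum(ai * xi for ai, xi in zip(a, x)) + b, a) for a, b in pieces]
    fx, s = max(vals, key=lambda t: t[0])
    return fx, list(s)

# f(x) = |x| = max(x, -x) in one dimension
pieces = [([1.0], 0.0), ([-1.0], 0.0)]
fx, s = f_and_subgradient([2.0], pieces)   # f = 2, s = [1.0]
```

At a kink (here x = 0) both pieces attain the maximum and the oracle returns one of the two valid subgradients, which is all the algorithm requires.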

A special feature of nonsmooth optimization is the fact that ∇f(x) can change discontinuously and is not necessarily small in the neighborhood of a local extreme of the objective function; see (Hiriart-Urruty and Lemaréchal 1993; Bonnans et al. 2003). For this reason, the usual smooth gradient-based optimization methods cannot be employed.

Several methods have been proposed for solving Problem P, see (Auslender 1987; Bonnans et al. 2003; Kiwiel 1985; Mäkelä and Neittaanmäki 1992). Cutting plane methods approximate the function with a set of tangent planes. At each iteration the approximated function is minimized and a new tangent plane is added. The classical reference is (Kelley 1960), where a trial point is computed by solving a linear programming problem. We mention (Nesterov and Vial 1999; Goffin and Vial 2002) for analytic center cutting plane methods and (Schramm and Zowe 1992; Mitchell 2003) for logarithmic potential and volumetric barrier cutting plane methods. Bundle methods based on the stabilized cutting plane idea are numerically and theoretically well understood (Bonnans et al. 2003; Kiwiel 1985; Mäkelä and Neittaanmäki 1992; Schramm and Zowe 1992).

Here we show that the search direction of FDIPA (Herskovits 1982, 1986, 1998), the Feasible Direction Interior Point Algorithm for constrained smooth optimization, can be successfully employed in the frame of cutting plane methods. In this way, we obtain an algorithm for nonsmooth optimization, with a simple structure, without the need of solving quadratic programming subproblems. The absence of quadratic subproblems makes our approach very easy to implement and enables the development of codes for very large problems.

The nonsmooth unconstrained Problem P is reformulated as the equivalent constrained Problem EP with a linear objective function and one nonsmooth inequality constraint,

    min_{(x,z) ∈ R^{n+1}} z
    s.t. f(x) ≤ z,                                        (EP)

where z ∈ R is an auxiliary variable. With the present approach, a decreasing sequence of feasible points {(x^k, z^k)} converging to a minimum of f(x) is obtained. That is, we have that z^{k+1} < z^k and z^k > f(x^k) for all k. At each iteration, an auxiliary linear program is defined using cutting planes. A feasible descent direction of the linear program is obtained employing FDIPA, and a step length is computed. Then, a new iterate (x^{k+1}, z^{k+1}) is defined according to suitable rules. To determine a new iterate, the algorithm produces auxiliary points (y^i, w^i), and when an auxiliary point is in the interior of epi(f), that is, w^i > f(y^i), we say that the step is serious and we take it as the new iterate. Otherwise, the iterate is not changed and we say that the step is null. A new cutting plane is then added and the procedure is repeated until a serious step is obtained. It will be proved that, when a serious step is obtained, the search direction given by FDIPA is also a feasible descent direction for Problem EP.
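The equivalence of P and EP can be sanity-checked by brute force on a toy convex function (an illustration of ours, not part of the method):

```python
# Brute-force check (illustrative only) that Problem P and the epigraph
# reformulation EP share the same optimal value, for the toy nonsmooth
# convex function f(x) = |x - 1| + x^2 / 2 (minimum 0.5 at x = 1).

def f(x):
    return abs(x - 1.0) + 0.5 * x * x

xs = [i / 100.0 for i in range(-300, 301)]      # grid for x on [-3, 3]
p_star = min(f(x) for x in xs)                  # Problem P

zs = [i / 100.0 for i in range(0, 501)]         # grid for z on [0, 5]
ep_star = min(z for z in zs if any(f(x) <= z for x in xs))  # Problem EP
```

Both grid searches return the value 0.5, as the reformulation predicts; the algorithm of the paper, of course, never solves EP by enumeration but only uses its feasible descent directions.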

This paper is organized in six sections. In the next one we describe the FDIPA. In Section 3 the main features of the new method are presented. Global convergence of the algorithm is shown in Section 4. In the subsequent section, numerical results are presented. Compared with four bundle algorithms, our results show that the present approach is robust and efficient. An application to robust structural optimization is described in Section 6. Finally, our conclusions and suggestions for future research are presented.

    2 The feasible direction interior point method

The present approach for nonsmooth optimization employs procedures and concepts involved in the Feasible Direction Interior Point Algorithm. Even though FDIPA (Herskovits 1998) is a technique for smooth optimization, to improve the clarity and completeness of the present paper, we describe some of the basic ideas involved.

We consider now the inequality constrained optimization problem

    min_{x ∈ R^n} f(x)
    s.t. g(x) ≤ 0,                                        (1)

where f : R^n → R and g : R^n → R^m are continuously differentiable. The FDIPA requires the following assumptions about Problem 1:

Assumption 1 Let Ω ≡ {x ∈ R^n | g(x) ≤ 0} be the feasible set. There exists a real number a such that the set Ω_a ≡ {x ∈ Ω | f(x) ≤ a} is compact and has an interior int(Ω_a).

Assumption 2 Each x ∈ int(Ω_a) satisfies g(x) < 0.

Assumption 3 The functions f and g are continuously differentiable in Ω_a and their derivatives satisfy a Lipschitz condition.

Assumption 4 (Regularity Condition) A point x ∈ Ω_a is regular if the gradient vectors ∇g_i(x), for i such that g_i(x) = 0, are linearly independent. FDIPA requires the regularity assumption at a local solution of Problem 1.

Let us remind the reader of some well-known concepts (Luenberger 1984), widely employed in this paper.

Definition 1 d ∈ R^n is a descent direction for a smooth function φ : R^n → R at x if d^T ∇φ(x) < 0.

Definition 2 d ∈ R^n is a feasible direction for Problem 1 at x ∈ Ω if, for some θ > 0, we have x + td ∈ Ω for all t ∈ [0, θ].

  • A feasible directions method for nonsmooth convex optimization

Definition 3 A vector field d(x) defined on Ω is said to be a uniformly feasible directions field of Problem 1 if there exists a step length τ > 0 such that x + td(x) ∈ Ω for all t ∈ [0, τ] and for all x ∈ Ω.

It can be shown that d is a feasible direction if d^T ∇g_i(x) < 0 for any i such that g_i(x) = 0 (Herskovits 1998). Definition 3 introduces a condition on the vector field d(x) which is stronger than the simple feasibility of any element of d(x). When d(x) constitutes a uniformly feasible directions field, it supports a feasible segment [x, x + θ(x) d(x)], such that θ(x) is bounded below in Ω by τ > 0.
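The feasibility criterion just stated is easy to check mechanically; a small hypothetical helper (names are ours, not the paper's):

```python
# A direction d is feasible at x when d . grad g_i(x) < 0 for every
# constraint active at x (g_i(x) = 0); inactive constraints impose no
# condition.  Hypothetical helper, not from the paper's code.

def is_feasible_direction(d, g_vals, g_grads, tol=1e-12):
    for gi, grad in zip(g_vals, g_grads):
        if abs(gi) <= tol:  # constraint active at x
            if sum(di * gj for di, gj in zip(d, grad)) >= 0:
                return False
    return True

# Feasible set {x : x1 >= 0}, i.e. g(x) = -x1 <= 0.  At x = (0, 1) the
# constraint is active and d = (1, 0) points into the feasible set.
inward = is_feasible_direction([1.0, 0.0], [0.0], [[-1.0, 0.0]])
```

Reversing the direction to d = (-1, 0) makes the test fail, since the active gradient then has a nonnegative inner product with d.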

Let x* be a regular point of Problem 1. The Karush-Kuhn-Tucker (KKT) first order necessary optimality conditions are expressed as follows: If x* is a local minimum of Problem 1 then there exists λ* ∈ R^m such that

    ∇f(x*) + ∇g(x*) λ* = 0                                (2)
    G(x*) λ* = 0                                          (3)
    λ* ≥ 0                                                (4)
    g(x*) ≤ 0,                                            (5)

where G(x) is a diagonal matrix with G_ii(x) ≡ g_i(x). We say that x such that g(x) ≤ 0 is a Primal Feasible Point, and λ ≥ 0 a Dual Feasible Point. Given an initial feasible pair (x^0, λ^0), FDIPA finds KKT points by solving iteratively the nonlinear system of Eqs. 2 and 3 in (x, λ), in such a way that all the iterates are primal and dual feasible. Therefore, convergence to feasible points is obtained.
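Conditions (2)-(5) can be verified numerically; the check below uses a small example problem of ours (not taken from the paper):

```python
# Numerical check of the KKT conditions (2)-(5) on an example of ours:
# min f(x) = (x1^2 + x2^2)/2  s.t.  g(x) = 1 - x1 <= 0, whose solution
# is x* = (1, 0) with multiplier lambda* = 1.

x_star = [1.0, 0.0]
lam = [1.0]
grad_f = [x_star[0], x_star[1]]   # gradient of f at x*
grad_g = [[-1.0, 0.0]]            # gradient of g at x*
g = [1.0 - x_star[0]]             # g(x*) = 0 (active constraint)

# Eq. 2: grad f(x*) + grad g(x*) * lambda* = 0
residual = [grad_f[j] + sum(lam[i] * grad_g[i][j] for i in range(len(lam)))
            for j in range(2)]

ok = (all(abs(v) < 1e-12 for v in residual)                    # Eq. 2
      and all(abs(gi * li) < 1e-12 for gi, li in zip(g, lam))  # Eq. 3
      and all(li >= 0 for li in lam)                           # Eq. 4
      and all(gi <= 0 for gi in g))                            # Eq. 5
```

The same point and multiplier are reused below when we illustrate the FDIPA direction computation.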

A Newton-like iteration to solve the nonlinear system of Eqs. 2 and 3 in (x, λ) can be stated as

    [ S^k              ∇g(x^k) ] [ x^{k+1} − x^k ]     [ ∇f(x^k) + ∇g(x^k) λ^k ]
    [ Λ^k ∇g(x^k)^T    G(x^k)  ] [ λ^{k+1} − λ^k ] = − [ G(x^k) λ^k            ]   (6)

where (x^k, λ^k) is the starting point of the iteration, (x^{k+1}, λ^{k+1}) is a new estimate, and Λ is a diagonal matrix with Λ_ii ≡ λ_i.

In the case when S^k ≡ ∇²f(x^k) + Σ_{i=1}^m λ_i^k ∇²g_i(x^k), Eq. 6 is a Newton iteration. However, S^k can be a quasi-Newton approximation or even the identity matrix. FDIPA requires S^k symmetric and positive definite. Calling d_α^k = x^{k+1} − x^k, we obtain the following linear system in (d_α^k, λ_α^{k+1}):

    S^k d_α^k + ∇g(x^k) λ_α^{k+1} = −∇f(x^k)              (7)
    Λ^k ∇g(x^k)^T d_α^k + G(x^k) λ_α^{k+1} = 0.           (8)

It is easy to prove that d_α^k is a descent direction of the objective function (Herskovits 1998). However, d_α^k cannot be employed as a search direction, since it is not necessarily a feasible direction. In effect, in the case when g_l(x^k) = 0, it follows from Eq. 8 that ∇g_l(x^k)^T d_α^k = 0.

To obtain a feasible direction, the following perturbed linear system with unknowns d^k and λ^{k+1} is defined by adding the negative vector −ρ^k λ^k to the right side of Eq. 8, with ρ^k > 0:

    S^k d^k + ∇g(x^k) λ^{k+1} = −∇f(x^k)
    Λ^k ∇g(x^k)^T d^k + G(x^k) λ^{k+1} = −ρ^k λ^k.

The addition of a negative vector in the right hand side of Eq. 8 produces the effect of deflecting d_α^k into the feasible region, where the deflection is proportional to ρ^k. As the deflection of d_α^k grows with ρ^k, it is necessary to bound ρ^k in a way to ensure that d^k remains a descent direction. Since (d_α^k)^T ∇f(x^k) < 0, we can get these bounds by imposing

    (d^k)^T ∇f(x^k) ≤ α (d_α^k)^T ∇f(x^k),                (9)

with α ∈ (0, 1), which implies (d^k)^T ∇f(x^k) < 0. Thus, d^k is a feasible descent direction. To obtain the upper bound on ρ^k, the following auxiliary linear system in (d_β^k, λ_β^{k+1}) is solved:

    S^k d_β^k + ∇g(x^k) λ_β^{k+1} = 0
    Λ^k ∇g(x^k)^T d_β^k + G(x^k) λ_β^{k+1} = −λ^k.

Now defining d^k = d_α^k + ρ^k d_β^k and substituting in Eq. 9, it follows that d^k is a descent direction for any ρ^k > 0 in the case when (d_β^k)^T ∇f(x^k) ≤ 0. Otherwise, the following condition is required:

    ρ^k ≤ (α − 1) (d_α^k)^T ∇f(x^k) / (d_β^k)^T ∇f(x^k).

In (Herskovits 1998), ρ^k is defined as follows, with φ > 0 a given parameter: If (d_β^k)^T ∇f(x^k) ≤ 0, then ρ^k = φ ‖d_α^k‖². Otherwise,

    ρ^k = min[ φ ‖d_α^k‖², (α − 1) (d_α^k)^T ∇f(x^k) / (d_β^k)^T ∇f(x^k) ].
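The whole construction can be exercised numerically on a toy instance. The sketch below uses data of ours (S = I, a single constraint, and assumed parameter values α = 0.7 and φ = 1): it solves the two systems and assembles the deflected direction d = d_α + ρ d_β.

```python
# Numerical sketch of the FDIPA search direction (Eqs. 7-8 and the
# deflected systems) on illustrative data; not the authors' code.

def gauss_solve(A, rhs):
    """Small dense solver with partial pivoting (stdlib only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            M[r] = [a - fac * b for a, b in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

# min f(x) = (x1^2 + x2^2)/2  s.t.  g(x) = 1 - x1 <= 0, at x = (2, 1):
grad_f = [2.0, 1.0]          # gradient of f at x
grad_g = [[-1.0], [0.0]]     # gradient of g as a column
g_val, lam = -1.0, 1.0       # g(x) and current multiplier; S = I

def fdipa_system(rhs):
    # coefficient matrix [S  grad_g; lam*grad_g^T  G]
    A = [[1.0, 0.0, grad_g[0][0]],
         [0.0, 1.0, grad_g[1][0]],
         [lam * grad_g[0][0], lam * grad_g[1][0], g_val]]
    sol = gauss_solve(A, rhs)
    return sol[:2], sol[2]

d_a, mu_a = fdipa_system([-grad_f[0], -grad_f[1], 0.0])  # Eqs. 7-8
d_b, mu_b = fdipa_system([0.0, 0.0, -lam])               # auxiliary system

alpha, phi = 0.7, 1.0                                    # assumed values
slope_a = sum(di * gi for di, gi in zip(d_a, grad_f))
slope_b = sum(di * gi for di, gi in zip(d_b, grad_f))
rho = phi * sum(di * di for di in d_a)
if slope_b > 0:
    rho = min(rho, (alpha - 1.0) * slope_a / slope_b)
d = [da + rho * db for da, db in zip(d_a, d_b)]
```

Here d_α = (-1, -1) is a descent direction, d_β = (0.5, 0) deflects it away from the constraint, and the bound on ρ keeps d^T ∇f ≤ α d_α^T ∇f, so d remains a descent direction.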

  • J. Herskovits et al.

A new feasible primal point with a lower objective value is obtained through an inexact line search along d^k. FDIPA has global convergence in the primal space for any S^{k+1} positive definite and λ^{k+1} > 0.

3 Description of the present technique for nonsmooth optimization

We employ ideas of the cutting planes method (Kelley 1960) to build piecewise linear approximations of the constraint of Problem EP. Let g_i^k(x, z) be the current set of cutting planes such that

    g_i^k(x, z) = f(y_i^k) + (s_i^k)^T (x − y_i^k) − z,  i = 0, 1, ..., ℓ,

where y_i^k ∈ R^n are auxiliary points, s_i^k ∈ ∂f(y_i^k) are subgradients at those points and ℓ represents the number of current cutting planes. We call

    g^k(x, z) ≡ [g_0^k(x, z), ..., g_ℓ^k(x, z)]^T,  g^k : R^n × R → R^{ℓ+1},

and the current auxiliary problem

    min_{(x,z) ∈ R^{n+1}} φ(x, z) = z
    s.t. g^k(x, z) ≤ 0.                                   (AP)

Instead of solving this problem, the present algorithm only computes with FDIPA a search direction d^k of Problem AP. We note that d^k can be computed even if Problem AP does not have a finite minimum.

The largest feasible step is

    t̄ = max{ t | g^k((x^k, z^k) + t d^k) ≤ 0 }.

Since t̄ is not always finite, it is taken

    t^k := min{ t_max/ν, t̄ },

where ν ∈ (0, 1). Then,

    (x^k_{ℓ+1}, z^k_{ℓ+1}) = (x^k, z^k) + t^k d^k          (10)

is feasible with respect to Problem AP. Next we compute the following auxiliary point:

    (y^k_{ℓ+1}, w^k_{ℓ+1}) = (x^k, z^k) + ν t^k d^k.       (11)

If (y^k_{ℓ+1}, w^k_{ℓ+1}) is strictly feasible with respect to Problem EP, that is, if w^k_{ℓ+1} > f(y^k_{ℓ+1}), we consider that the current set of cutting planes is a good local approximation of f(x) in a neighborhood of x^k. Then, we say that the step is serious and set the new iterate (x^{k+1}, z^{k+1}) = (y^k_{ℓ+1}, w^k_{ℓ+1}). Otherwise, a new cutting plane g^k_{ℓ+1}(x, z) is added to the approximated problem and the procedure is repeated until a serious step is obtained. We are now in position to state our algorithm.
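Because the cutting-plane constraints are affine, the largest feasible step t̄ has a closed form: each constraint whose slope along d is positive contributes the bound −g_i/(∇g_i^T d). A small sketch (helper names are ours):

```python
# Largest feasible step for affine constraints:
# g_i((x,z) + t*d) = g_i(x,z) + t * (grad_g_i . d) <= 0, so every
# constraint with positive slope along d bounds t by -g_i / slope.

def step_length(g_vals, g_grads, d, t_max, nu):
    t_bar = float("inf")
    for gi, grad in zip(g_vals, g_grads):
        slope = sum(di * gj for di, gj in zip(d, grad))
        if slope > 0:
            t_bar = min(t_bar, -gi / slope)
    return min(t_max / nu, t_bar)      # as in Eq. 16

# One plane x - z <= 0 at (x, z) = (2, 3), so g = -1, grad g = (1, -1):
t = step_length([-1.0], [[1.0, -1.0]], [0.2, -0.5], t_max=1.0, nu=0.75)
```

The point actually tested for a serious step is then (x, z) + ν t d, which stays strictly inside the cutting-plane set whenever t < t̄.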

    3.1 Nonsmooth Feasible Direction Algorithm - NFDA

Parameters. ν, ξ ∈ (0, 1), φ > 0, t_max > 0.
Data. x^0 ∈ R^n, a > z^0 > f(x^0), λ^0_0 ∈ R_+, B^0 ∈ R^{(n+1)×(n+1)} symmetric and positive definite. Set y^0_0 = x^0, k = 0 and ℓ = 0.

Step 1) Compute s^k_ℓ ∈ ∂f(y^k_ℓ). A new cutting plane at the current iterate (x^k, z^k) is defined by

    g^k_ℓ(x, z) = f(y^k_ℓ) + (s^k_ℓ)^T (x − y^k_ℓ) − z.

Consider now

    ∇g^k_ℓ(x, z) = [ s^k_ℓ ; −1 ] ∈ R^{n+1},

define

    g^k(x, z) = [g^k_0(x, z), ..., g^k_ℓ(x, z)]^T ∈ R^{ℓ+1},

and

    ∇g^k(x, z) = [∇g^k_0(x, z), ..., ∇g^k_ℓ(x, z)] ∈ R^{(n+1)×(ℓ+1)}.

Step 2) Feasible descent direction d^k_ℓ for Problem AP

i) Compute d^k_α and λ^k_α, solving

    B^k d^k_α + ∇g^k(x^k, z^k) λ^k_α = −∇φ(x, z)          (12)
    Λ^k [∇g^k(x^k, z^k)]^T d^k_α + G^k(x^k, z^k) λ^k_α = 0.    (13)

Compute d^k_β and λ^k_β, solving

    B^k d^k_β + ∇g^k(x^k, z^k) λ^k_β = 0                  (14)
    Λ^k [∇g^k(x^k, z^k)]^T d^k_β + G^k(x^k, z^k) λ^k_β = −λ^k,    (15)

where

    λ^k := (λ^k_0, ..., λ^k_ℓ),  λ^k_α := (λ^k_{α,0}, ..., λ^k_{α,ℓ}),  λ^k_β := (λ^k_{β,0}, ..., λ^k_{β,ℓ}),  Λ^k := diag(λ^k_0, ..., λ^k_ℓ)

and G^k(x, z) := diag(g^k_0(x, z), ..., g^k_ℓ(x, z)).


ii) If (d^k_β)^T ∇φ(x, z) > 0, set ρ = φ ‖d^k_α‖². Otherwise, set

    ρ = min{ φ ‖d^k_α‖², (ξ − 1) (d^k_α)^T ∇φ(x, z) / (d^k_β)^T ∇φ(x, z) }.

iii) Compute the feasible descent direction

    d^k_ℓ = d^k_α + ρ d^k_β.

Step 3) Compute the step length

    t^k_ℓ = min{ t_max/ν, max{ t | g^k((x^k, z^k) + t d^k_ℓ) ≤ 0 } }.    (16)

Step 4) Compute a new point

i) Set (y^k_{ℓ+1}, w^k_{ℓ+1}) = (x^k, z^k) + ν t^k_ℓ d^k_ℓ.

ii) If w^k_{ℓ+1} ≤ f(y^k_{ℓ+1}), we have a null step. Then, define λ^k_{ℓ+1} > 0 and set ℓ := ℓ + 1. Otherwise, we have a serious step. Then, call d^k = d^k_ℓ, d^k_α = d^k_{α,ℓ}, d^k_β = d^k_{β,ℓ}, λ^k_α = λ^k_{α,ℓ}, λ^k_β = λ^k_{β,ℓ} and ℓ_k = ℓ. Take (x^{k+1}, z^{k+1}) = (y^k_{ℓ+1}, w^k_{ℓ+1}), define λ^{k+1}_0 > 0, B^{k+1} symmetric and positive definite, and set k = k + 1, ℓ = 0, y^k_0 = x^k.

iii) Go to Step 1).
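As an illustration of the whole procedure, the sketch below runs the algorithm on the toy problem f(x) = |x| in one dimension (so (x, z) lives in R²), with B = I, every multiplier kept at 1 (a simplification consistent with Assumption 6), no recycling of old planes, and assumed parameter values ν = 0.75, ξ = 0.1, φ = 0.7, t_max = 1. It is a minimal reading of the algorithm, not the authors' MATLAB code.

```python
# Self-contained numerical sketch of the NFDA on f(x) = |x|.

def gauss_solve(A, rhs):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            M[r] = [a - fac * b for a, b in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def f(v):
    return abs(v)

def subgrad(v):
    return 1.0 if v >= 0 else -1.0

nu, xi, phi, t_max = 0.75, 0.1, 0.7, 1.0
x, z = 2.0, 3.0                       # z0 > f(x0): start inside epi(f)
history = [z]

for k in range(12):                   # outer (serious-step) iterations
    planes = []                       # plane i: g_i(x,z) = s_i*x + b_i - z
    y = x                             # y_0 = x^k
    for _ in range(20):               # subiterations until a serious step
        s = subgrad(y)
        planes.append((s, f(y) - s * y))
        m = len(planes)
        g = [si * x + bi - z for si, bi in planes]
        grads = [(si, -1.0) for si, bi in planes]
        lam = [1.0] * m

        def direction(top, bottom):
            # assemble [B grad_g; Lam*grad_g^T G] and solve (Eqs. 12-15)
            A = [[0.0] * (2 + m) for _ in range(2 + m)]
            A[0][0] = A[1][1] = 1.0
            for i, (gx, gz) in enumerate(grads):
                A[0][2 + i], A[1][2 + i] = gx, gz
                A[2 + i][0], A[2 + i][1] = lam[i] * gx, lam[i] * gz
                A[2 + i][2 + i] = g[i]
            return gauss_solve(A, top + bottom)[:2]

        d_a = direction([0.0, -1.0], [0.0] * m)        # rhs = -grad(phi)
        d_b = direction([0.0, 0.0], [-li for li in lam])
        rho = phi * (d_a[0] ** 2 + d_a[1] ** 2)
        if d_b[1] > 0:                 # d_beta^T grad(phi) > 0: bound rho
            rho = min(rho, (xi - 1.0) * d_a[1] / d_b[1])
        d = (d_a[0] + rho * d_b[0], d_a[1] + rho * d_b[1])

        t = t_max / nu                 # step length, Eq. 16
        for gi, (gx, gz) in zip(g, grads):
            slope = gx * d[0] + gz * d[1]
            if slope > 0:
                t = min(t, -gi / slope)
        yw = (x + nu * t * d[0], z + nu * t * d[1])
        if yw[1] > f(yw[0]):           # serious step: accept the point
            x, z = yw
            break
        y = yw[0]                      # null step: cut at the new y
    history.append(z)
```

On this instance every outer iteration needs at most a few subiterations, z decreases strictly, and the iterates approach the minimizer x = 0 while staying inside the epigraph, exactly as the convergence analysis of Section 4 predicts.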

The new values of λ and B must satisfy the following assumptions:

Assumption 5 There exist positive numbers σ_1 and σ_2 such that σ_1 ‖v‖² ≤ v^T B v ≤ σ_2 ‖v‖² for any v ∈ R^{n+1}.

Assumption 6 There exist positive numbers λ^I and λ^S such that λ^I ≤ λ_i ≤ λ^S, for i = 0, 1, ..., ℓ.

Global convergence of the present algorithm will be proved for any updating rule for λ and B satisfying Assumptions 5 and 6. In the numerical section we shall give a practical choice for the updating rules.

    4 Convergence analysis

In this section, we prove global convergence of the present algorithm. We first show that the search direction d^k is a descent direction for φ. Then, we prove that the number of null steps at each iteration is finite. That is, since (x^k, z^k) ∈ int(epi f), after a finite number of subiterations we obtain (x^{k+1}, z^{k+1}) ∈ int(epi f). In consequence, the sequence {(x^k, z^k)}_{k∈N} is bounded and belongs to the interior of the epigraph of f. In what follows, it is proved that d^k converges to zero when k → ∞. This fact is employed to establish a stopping criterion for the present algorithm. Finally, we show that the optimality condition 0 ∈ ∂f(x*) is satisfied for the accumulation points (x*, z*) of the sequence {(x^k, z^k)}_{k∈N}. Thus, any accumulation point of the sequence {(x^k, z^k)}_{k∈N} is a solution of Problem P. In some cases, indices will be omitted to simplify the notation.

We remark that the solutions d_α, λ_α, d_β, and λ_β of the linear systems 12, 13, 14, and 15 are unique. This fact is a consequence of Assumptions 5 and 6 and a lemma proved in (Panier et al. 1988; Tits et al. 2003), stated as follows:

Lemma 1 Let the vectors in the set {∇g_i(x^k, z^k) | g_i(x^k, z^k) = 0} be linearly independent. Then, for any vector (x, z) ∈ int(epi f) and any positive definite matrix B ∈ R^{(n+1)×(n+1)}, the matrix

    [ B                 ∇g(x, z) ]
    [ Λ [∇g(x, z)]^T    G(x, z)  ]

is nonsingular.

It follows that d_α, d_β, λ_α and λ_β are bounded in Ω_a. Since ρ is bounded above, we also have that λ = λ_α + ρ λ_β is bounded.

Lemma 2 The vector d_α satisfies

    d_α^T ∇φ(x, z) ≤ −d_α^T B d_α.

Proof It follows from Eq. 12 that

    d_α^T B d_α + d_α^T ∇g(x, z) λ_α = −d_α^T ∇φ(x, z),   (17)

and from Eq. 13 that

    d_α^T ∇g(x, z) λ_α = −λ_α^T Λ^{−1} G(x, z) λ_α.       (18)

Replacing Eq. 18 in Eq. 17 we have

    d_α^T ∇φ(x, z) = −d_α^T B d_α + λ_α^T Λ^{−1} G(x, z) λ_α.

Since Λ^{−1} is positive definite, in consequence of Assumption 6, and G(x, z) is taken negative definite in the algorithm, the result of the lemma is obtained. □

As a consequence, we also have that the search direction d is descent for the objective function of Problem AP.

Proposition 1 The direction d = d_α + ρ d_β is a descent direction for the objective function φ of Problem AP at the point (x, z).

Proof Since d = d_α + ρ d_β, we have

    d^T ∇φ(x, z) = d_α^T ∇φ(x, z) + ρ d_β^T ∇φ(x, z).

In the case when d_β^T ∇φ(x, z) > 0, we have

    ρ ≤ (ξ − 1) d_α^T ∇φ(x, z) / d_β^T ∇φ(x, z).

Therefore,

    d^T ∇φ(x, z) ≤ d_α^T ∇φ(x, z) + (ξ − 1) d_α^T ∇φ(x, z) = ξ d_α^T ∇φ(x, z) < 0.

On the other hand, when d_β^T ∇φ(x, z) ≤ 0, it follows from Lemma 2 that d^T ∇φ(x, z) ≤ d_α^T ∇φ(x, z) < 0 for any ρ > 0. □

As a consequence of the previous proposition, since z^{k+1} = z^k + ν t^k d^k_z, we have that z^{k+1} < z^k for all k. Thus, the sequence {(x^k, z^k)}_{k∈N} generated by the present algorithm belongs to the bounded set int(epi f) ∩ {(x, z) ∈ R^{n+1} | z < z^0}.

Lemma 3 Let X ⊂ R^n be a convex set. Consider x^0 ∈ int(X) and x̄ ∈ X. Let {x̄_k}_{k∈N} ⊂ R^n be a sequence such that x̄_k → x̄. Let {x_k}_{k∈N} ⊂ R^n be a sequence defined by x_k = x^0 + ν (x̄_k − x^0) with ν ∈ (0, 1). Then there exists k_0 ∈ N such that x_k ∈ int(X), ∀k > k_0.

Proof We have

    x_k = x^0 + ν (x̄_k − x^0) → x^0 + ν (x̄ − x^0) = x̂.

Since the segment [x^0, x̄] ⊂ X and ν ∈ (0, 1), we have that x̂ ∈ int(X) and, in consequence, there exists ε > 0 such that B(x̂, ε) ⊂ int(X). Since x_k → x̂, there exists k_0 ∈ N such that x_k ∈ B(x̂, ε) ⊂ int(X), ∀k > k_0. □

Remark 1 The sequence {(x^k_ℓ, z^k_ℓ)}_{ℓ∈N} is in a compact set. In fact, there exists r > 0 such that Ω_a is included in the ball centered at the origin with radius r. Then, the sequence is in the ball centered at the origin with radius r + t_max. For the same reasons, the sequence {(y^k_ℓ, w^k_ℓ)}_{ℓ∈N} is in the same compact set.

Proposition 2 Let (x̄^k, z̄^k) be an accumulation point of the sequence {(x^k_ℓ, z^k_ℓ)}_{ℓ∈N} defined in Eq. 10 for k fixed. Then z̄^k = f(x̄^k).

Proof By definition of the sequence {(x^k_ℓ, z^k_ℓ)}_{ℓ∈N}, we always have z^k_ℓ ≤ f(x^k_ℓ). Suppose now that z̄^k < f(x̄^k) and consider a convergent subsequence {(x^k_ℓ, z^k_ℓ)}_{ℓ∈N'} → (x̄^k, z̄^k) such that {s^k_ℓ}_{ℓ∈N'} → s̄^k, where N' ⊂ N. This subsequence exists because {(x^k_ℓ, z^k_ℓ)}_{ℓ∈N} as well as {s^k_ℓ}_{ℓ∈N} are in compact sets. The corresponding cutting plane is represented by f(x^k_ℓ) + s_ℓ^T (x − x^k_ℓ) − z = 0. Then, z_ℓ(x̄^k) = f(x^k_ℓ) + s_ℓ^T (x̄^k − x^k_ℓ) is the vertical projection of (x̄^k, z̄^k) on the cutting plane. Taking the limit for ℓ → ∞, we get z̄(x̄^k) = f(x̄^k). Then, for ℓ ∈ N' large enough, (x̄^k, z̄^k) is under the ℓ-th cutting plane. Then, we arrive at a contradiction, and z̄^k = f(x̄^k). □

Proposition 3 Let (x^k, z^k) ∈ int(epi f). The next iterate (x^{k+1}, z^{k+1}) ∈ int(epi f) is obtained after a finite number of subiterations.

Proof Our proof starts with the observation that in Step 4) of the algorithm we have (x^{k+1}, z^{k+1}) = (y^k_{ℓ+1}, w^k_{ℓ+1}) only if w^k_{ℓ+1} > f(y^k_{ℓ+1}) (i.e., if we have a serious step); consequently, we have that (x^{k+1}, z^{k+1}) ∈ int(epi f).

The sequence {(x^k_ℓ, z^k_ℓ)}_{ℓ∈N} is bounded by construction and, by Proposition 2, it has an accumulation point (x̄^k, z̄^k) such that z̄^k = f(x̄^k). Considering now the sequence defined by Eq. 11,

    (y^k_ℓ, w^k_ℓ) = (x^k, z^k) + ν [(x^k_ℓ, z^k_ℓ) − (x^k, z^k)],  ν ∈ (0, 1),

it follows from Lemma 3 that there exists ℓ_0 ∈ N such that (y^k_ℓ, w^k_ℓ) ∈ int(epi f) for ℓ > ℓ_0. But this is the condition for a serious step, and the proof is complete. □

Lemma 4 There exists τ > 0 such that g((x, z) + t d) ≤ 0, ∀t ∈ [0, τ], for any (x, z) ∈ int(epi f) and any direction d given by the algorithm.

Proof Let us denote by b the vector such that b_i = s_i^T y_i − f(y_i) for all i = 0, 1, ..., ℓ.

Then, g((x, z) + t d) = [∇g(x, z)]^T ((x, z) + t d) − b, since

    g_i(x, z) = f(y_i) + s_i^T (x − y_i) − z
              = (s_i^T, −1)(x, z) − b_i
              = [∇g_i(x, z)]^T (x, z) − b_i

for all i = 0, 1, ..., ℓ. The step length t is defined in Eq. 16 in Step 3) of the algorithm. Since the constraints of Problem AP are linear, to satisfy the line search condition the following inequalities must be true:

    g_i((x, z) + t_i d) = [∇g_i(x, z)]^T ((x, z) + t_i d) − b_i
                        = g_i(x, z) + t_i [∇g_i(x, z)]^T d ≤ 0,        (19)

for all i = 0, 1, ..., ℓ. If [∇g_i(x, z)]^T d ≤ 0, the inequality is satisfied for all t_i > 0. Otherwise, it follows from iii) in Step 2) that

    [∇g_i(x, z)]^T d = [∇g_i(x, z)]^T (d_α + ρ d_β).

It follows from Eqs. 13 and 15 that

    [∇g_i(x, z)]^T d_α = −g_i(x, z) λ_{α,i}/λ_i

and

    [∇g_i(x, z)]^T d_β = −1 − g_i(x, z) λ_{β,i}/λ_i.

Then, Eq. 19 is equivalent to

    g_i(x, z)(1 − t_i μ_i/λ_i) − t_i ρ ≤ 0,

where μ = λ_α + ρ λ_β. Since t_i > 0, the last inequality is true when t_i ≤ λ_i/μ_i.

By Assumption 6, λ_i ≥ λ^I > 0 for all i, and we also have that μ = λ_α + ρ λ_β is bounded from above. Thus, there exists 0 < τ ≤ λ_i/μ_i for all i, and the proof is complete. □

Proposition 4 Let (x̄, z̄) be an accumulation point of the sequence {(x^k, z^k)}_{k∈N} and N' ⊂ N such that {(x^k, z^k)}_{k∈N'} → (x̄, z̄). Then d^k_α → 0 when k → ∞, k ∈ N'.

Proof For the serious steps we have

    (x^{k+1}, z^{k+1}) = (x^k, z^k) + ν t^k d^k,          (20)

with t^k bounded below by a positive constant. When k → ∞, k ∈ N', in Eq. 20 we have z̄ = z̄ + ν t̄ d̄_z, thus d̄_z = 0. From Proposition 1 it follows that

    0 = d̄_z ≤ ξ (d̄_α)^T ∇φ(x̄, z̄) = ξ d̄_{α,z} ≤ 0,

thus d̄_{α,z} = 0. Further, by Lemma 2, we have

    0 = d̄_{α,z} = (d̄_α)^T ∇φ(x̄, z̄) ≤ −(d̄_α)^T B d̄_α ≤ 0,

thus d̄_α = 0, since B is positive definite. □

It follows from the previous result that

    d^k = d^k_α + ρ^k d^k_β → 0 when k → ∞, k ∈ N',

since ρ^k → 0 if d^k_α → 0. In consequence, the iterations can be stopped when ‖d^k_α‖ is small enough.

Proposition 5 For any accumulation point (x*, z*) of the sequence {(x^k, z^k)}_{k∈N}, we have 0 ∈ ∂f(x*).

Proof Consider

    Y := {y^0_0, y^0_1, ..., y^0_{ℓ_0}, y^1_0, y^1_1, ..., y^1_{ℓ_1}, ..., y^k_0, y^k_1, ..., y^k_{ℓ_k}, ...},

the sequence of all the points obtained by the subiterations of the algorithm. As this sequence is bounded, we can find convergent subsequences. In particular, since x^{k+1} = y^k_{ℓ_k}, we can assert that there is Ȳ ⊂ Y that converges to x*. We call Y_k ≡ Ȳ ∩ {y^k_0, y^k_1, ..., y^k_{ℓ_k}}. It follows from Eq. 12 in Step 2) of the algorithm that

    lim_{k→∞} Σ_{i=0}^{ℓ_k} λ^k_i s^k_i = 0  and  lim_{k→∞} Σ_{i=0}^{ℓ_k} λ^k_i = 1.    (21)

We define I_k = {i | y^k_i ∈ Y_k}. In consequence, λ^k_i > 0 for i ∈ I_k and k large enough. Then,

    lim_{k→∞} Σ_{i∈I_k} λ^k_i s^k_i = 0  and  lim_{k→∞} Σ_{i∈I_k} λ^k_i = 1.    (22)

Consider the auxiliary point y^k_i and the subgradient s^k_i ∈ ∂f(y^k_i) such that i ∈ I_k. By definition of the subdifferential (Clarke 1983), we can write

    f(x) ≥ f(y^k_i) + (s^k_i)^T (x − y^k_i) = f(x^k) + (s^k_i)^T (x − x^k) − e_{ik},

where e_{ik} = f(x^k) − f(y^k_i) − (s^k_i)^T (x^k − y^k_i). Since x^k → x* and y^k_i → x* for i ∈ I_k, we have that lim_{k→∞} e_{ik} = 0. Thus, s^k_i ∈ ∂_{e_{ik}} f(x^k), where ∂_ε f(x) represents the ε-subdifferential of f (Clarke 1983). Then, we can write

    Σ_{i∈I_k} λ^k_i f(x) ≥ Σ_{i∈I_k} λ^k_i f(x^k) + ( Σ_{i∈I_k} λ^k_i s^k_i )^T (x − x^k) − Σ_{i∈I_k} λ^k_i e_{ik},

and

    f(x) ≥ f(x^k) + ( Σ_{i∈I_k} λ^k_i s^k_i / Σ_{i∈I_k} λ^k_i )^T (x − x^k) − ε_k,

where ε_k = Σ_{i∈I_k} λ^k_i e_{ik} / Σ_{i∈I_k} λ^k_i. Since

    lim_{k→∞} e_{ik} = 0  and  lim_{k→∞} Σ_{i∈I_k} λ^k_i = 1,

we have that lim_{k→∞} ε_k = 0. Considering

    s̄^k = Σ_{i∈I_k} λ^k_i s^k_i / Σ_{i∈I_k} λ^k_i,

it follows from (22) that s̄^k ∈ ∂_{ε_k} f(x^k) and 0 ∈ ∂f(x*). With this result the proof of convergence is complete. □


    5 Numerical results

In this section we present the numerical results obtained with the NFDA algorithm, employing a set of fixed default parameters. The algorithm was implemented in MATLAB and the numerical experiments were performed on a microcomputer with 2 GB of RAM.

Section 5.1 reports the collection of test problems and solvers employed to study the performance of NFDA. In Section 5.2 we show a visual representation of the results by using performance profiles (Dolan and Moré 2002).

    5.1 Test problems and solvers

We employed the following set of test problems: CB2, CB3, DEM, QL, LQ, Mifflin1, Rosen-Suzuki, Shor, Maxquad, Maxq, Maxl, Goffin and TR48, described in detail in Lukšan and Vlček (2000) or in Mäkelä and Neittaanmäki (1992).

The default parameters for the present method are:

    B = I,  ν = 0.75,  ξ = 0.1,  φ = 0.7,  and  t_max = 1.

The iterations stop when ‖d^k‖ ≤ 10^{−4}.

The Lagrange multipliers vector is updated according to the following formulation: set, for i = 0, 1, ..., ℓ,

    λ_i := max[ λ_i ; ε ‖d‖² ],  ε > 0.                   (23)

With the purpose of improving the numerical results, at each iteration k we add to the set of cutting planes defined by the algorithm the last 5n cutting planes computed in the previous iterations. That is, for a subiteration ℓ the algorithm works with up to ℓ + 5n cutting planes.

We shall compare our numerical results with the following classic solvers:

    M1FC1: ε-steepest descent,
    BTC: Bundle Trust,
    PB: Proximal Bundle,
    SBM: Standard Bundle.

These methods can be found in (Lemaréchal and Bancora Imbert 1985), (Schramm and Zowe 1992), (Mäkelä and Neittaanmäki 1992) and (Lukšan and Vlček 2000). The results are summarized in Tables 1, 2 and 3, where NI represents the number of iterations for each method and NF the number of function and subgradient evaluations. The optimal function value and the computed value are f* and f̃, respectively. Our results are described in Tables 1, 2 and 3.

Table 1 Numerical results (NI: number of iterations)

    Problem    n    SBM   M1FC1   BTC    PB   NFDA
    CB2        2     31      11    13    15     13
    CB3        2     14      12    13    15     24
    DEM        2     17      10     9     7     22
    QL         2     13      12    12    17     27
    LQ         2     11      16    10    14     15
    Mifflin1   2     66     143    49    22     21
    Rosen      4     43      22    22    40     49
    Shor       5     27      21    29    26     51
    Maxquad   10     74      29    45    41    115
    Maxq      20    150     144   125   158    272
    Maxl      20     39     138    74    34     70
    TR48      48    245     163   165   152    317
    Goffin    50     52      72    51    51     79

    5.2 Performance profiles

In order to present a visual comparison of the solvers' behavior, we give the performance profiles as described in (Dolan and Moré 2002). These compare the performance of n_s solvers of a set S with respect to the solution of n_p problems of a set P, using some performance measure, like number of iterations, number of function computations or CPU time. Let t_{p,s} denote the performance measure required to solve problem p by solver s. For each problem p and solver s, the performance ratio is

    r_{p,s} = t_{p,s} / min{ t_{p,s} : s ∈ S },

Table 2 Numerical results (NF: number of function and subgradient evaluations)

    Problem    n    SBM   M1FC1   BTC    PB   NFDA
    CB2        2     33      31    16    16     14
    CB3        2     16      44    21    16     25
    DEM        2     19      33    13     8     22
    QL         2     15      30    17    18     28
    LQ         2     12      52    11    15     16
    Mifflin1   2     68     281    74    23     22
    Rosen      4     45      61    32    41     50
    Shor       5     29      71    30    27     52
    Maxquad   10     75      69    56    42    116
    Maxq      20    151     207   128   159    273
    Maxl      20     40     213    84    35     71
    TR48      48    251     284   179   153    318
    Goffin    50     53      94    53    52     80


Table 3 Numerical results (f̃: computed value, f*: optimal value)

    Problem    n   SBM            M1FC1          BTC            PB             NFDA           f*
    CB2        2   1.95222e+0     1.95225e+0     1.95222e+0     1.95222e+0     1.95220e+0     1.95222e+0
    CB3        2   2.00000e+0     2.00141e+0     2.00000e+0     2.00000e+0     2.00010e+0     2.00000e+0
    DEM        2   -3.00000e+0    -3.00000e+0    -3.00000e+0    -3.00000e+0    -2.98630e+0    -3.00000e+0
    QL         2   7.20000e+0     7.20001e+0     7.20000e+0     7.20001e+0     7.20000e+0     7.20000e+0
    LQ         2   -1.41421e+0    -1.41421e+0    -1.41421e+0    -1.41421e+0    -1.41390e+0    -1.41421e+0
    Mifflin1   2   -9.99990e-1    -9.99960e-1    -1.00000e+0    -1.00000e+0    -9.99800e-1    -1.00000e+0
    Rosen      4   -4.399999e+1   -4.399998e+1   -4.399998e+1   -4.399994e+1   -4.399990e+1   -4.400000e+1
    Shor       5   2.260016e+1    2.260018e+1    2.260016e+1    2.260016e+1    2.260020e+1    2.260016e+1
    Maxquad   10   -8.4140e-1     -8.4135e-1     -8.4140e-1     -8.4140e-1     -8.4140e-1     -8.4140e-1
    Maxq      20   0.16712e-6     0.00000e+0     0.00000e+0     0.00000e+0     3.17800e-7     0.00000e+0
    Maxl      20   0.12440e-12    0.00000e+0     0.00000e+0     0.00000e+0     2.83150e-4     0.00000e+0
    TR48      48   -6.3853048e+5  -6.3362550e+5  -6.3856500e+5  -6.3856000e+5  -6.3856499e+5  -6.3856500e+5
    Goffin    50   0.11665e-11    0.00000e+0     0.00000e+0     0.00000e+0     1.33720e-4     0.00000e+0

if a solution of the problem p was obtained by solver s. Otherwise, r_{p,s} = r_M, where r_M is a large parameter.

The distribution function ρ_s : R → [0, 1] for the performance ratio r_{p,s} represents the total performance of the solver s on the set of test problems P. This function is defined by

    ρ_s(τ) = (1/n_p) |{ p : r_{p,s} ≤ τ }|,

where |{·}| is the number of elements in the set {·}. The value of ρ_s(1) indicates the probability of the method s being the best method (among all those belonging to the set S), using t_{p,s} as the performance measure.
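The definitions above can be sketched directly in code. The excerpt below uses three rows of the iteration counts (NI) of Table 1; the actual profile of Fig. 1 is computed over all 13 problems:

```python
# Performance-profile computation on a three-problem excerpt of the
# iteration counts (NI) of Table 1.

solvers = ["SBM", "M1FC1", "BTC", "PB", "NFDA"]
ni = {
    "CB2":      [31, 11, 13, 15, 13],
    "Mifflin1": [66, 143, 49, 22, 21],
    "Maxq":     [150, 144, 125, 158, 272],
}

# r_{p,s} = t_{p,s} / min_s t_{p,s}
ratios = {p: [t / min(row) for t in row] for p, row in ni.items()}

def rho(s, tau):
    """Fraction of problems that solver index s solves within ratio tau."""
    return sum(ratios[p][s] <= tau for p in ni) / len(ni)

# rho_s(1): probability that solver s is the best one on a problem
best_at_1 = [rho(s, 1.0) for s in range(len(solvers))]
```

On this excerpt M1FC1, BTC and NFDA are each best on one of the three problems, so each gets ρ_s(1) = 1/3, while every solver reaches ρ_s(τ) = 1 for τ large enough.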

Fig. 1 Performance profile: iteration counts (solvers M1FC1, BTC, PB, SBM, NFDA)

The computation of functions and derivatives is generally very expensive in terms of processing time in engineering applications. For this reason we consider the number of function evaluations as a measure of performance. Figures 1 and 2 show the performance profiles for these solvers using the measures mentioned.

6 Truss topology robust design via nonsmooth optimization

This section presents an application of the proposed algorithm to the topology design of robust trusses. A largely employed model for truss topology optimization considers structures submitted to a set of nodal loadings, that we call primary loadings, and looks for the volumes of the bars that minimize the structural compliance, see (Bendsøe 1995). The structural topology changes if the volume of some of the bars is zero at the solution.

Fig. 2 Performance profile: number of function evaluations (solvers M1FC1, BTC, PB, SBM, NFDA)

A truss can be considered robust if it is reasonably rigid when any set of small possible uncertain loads acts on it. The model considered here, in addition to the primary loadings, includes a set of secondary loadings that are uncertain in size and direction and can act on the structure. The compliance to be minimized is the worst possible one among those produced by a loading in either of the two sets. This model was proposed by Ben-Tal and Nemirovski (1997), as well as a formulation based on semidefinite programming. In this section we derive an alternative formulation that leads to a nonsmooth convex optimization problem that we solve with the present algorithm.

    Let us consider a two or a three-dimensional ground elas-tic truss with n nodes and m degrees of freedom, submittedto a finite set of loading conditions P {p1, p2, . . . , ps}such that pi Rm for i = 1, 2, . . . , s, and let b be the num-ber of initial bars. The design variables of the problem arethe volumes of the bars, denoted x j , j = 1, 2, ..., b. Thereduced stiffness matrix is

K(x) = Σ_{j=1}^b x_j K_j,   (24)

where K_j ∈ R^{m×m}, j = 1, 2, ..., b, are the reduced stiffness matrices corresponding to bars of unitary volume. To obtain a well-posed problem, the matrix Σ_{j=1}^b K_j must be positive definite (Ben-Tal and Nemirovski 1997). The compliance related to the loading condition p_i ∈ P can be defined as (Bendsøe 1995):

Φ(x, p_i) = sup_u { 2u^T p_i − u^T K(x)u : u ∈ R^m }.   (25)
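Since K(x) is positive definite at an interior point, the supremum in Eq. 25 is attained at the equilibrium displacement u = K(x)^{-1} p_i, so Φ(x, p_i) = p_i^T K(x)^{-1} p_i. A minimal sketch in NumPy; the two 2×2 "bar" matrices below are toy data, not one of the paper's trusses:

```python
import numpy as np

def compliance(x, K_bars, p):
    """phi(x, p) = sup_u {2 u^T p - u^T K(x) u} for positive definite K(x).

    The supremum is attained at the equilibrium displacement
    u = K(x)^{-1} p, giving phi = p^T K(x)^{-1} p.
    """
    K = sum(xj * Kj for xj, Kj in zip(x, K_bars))  # K(x) = sum_j x_j K_j (Eq. 24)
    u = np.linalg.solve(K, p)                      # equilibrium displacements
    return p @ u

# toy data: two "bars" acting on a 2-dof model
K_bars = [np.array([[2.0, 0.0], [0.0, 1.0]]),
          np.array([[1.0, 0.5], [0.5, 1.0]])]
x = np.array([0.5, 0.5])
p = np.array([1.0, 0.0])
```

For this data K(x) = [[1.5, 0.25], [0.25, 1.0]], and any trial displacement u satisfies 2u^T p − u^T K(x)u ≤ compliance(x, K_bars, p).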

Let Φ̂(x) = sup { Φ(x, p_i) : p_i ∈ P } be the worst possible compliance for the set P. An energy model for topology optimization with several loading conditions can be stated as follows:

min_{x ∈ R^b}  Φ̂(x)
s.t.  Σ_{j=1}^b x_j ≤ V,
      x_j ≥ 0, j = 1, ..., b.   (26)

The value V > 0 is the maximum quantity of material to distribute in the truss.

Instead of maximizing over the finite domain P, we consider a model proposed by Ben-Tal and Nemirovski (1997) that maximizes over the ellipsoid M of loading conditions defined as follows:

M = { Qe | e ∈ R^q, e^T e ≤ 1 },   (27)

    where

[Q] = [p_1, ..., p_s, r f_1, ..., r f_{q−s}].   (28)

The vectors p_1, ..., p_s must be linearly independent, and r f_i represents the i-th secondary loading. The value r is the magnitude of the secondary loadings, and the set {f_1, ..., f_{q−s}} must be chosen as an orthonormal basis of a linear subspace orthogonal to the linear span of P. The procedure to choose a convenient basis {f_1, ..., f_{q−s}} is explained later.

    For this model we have

Φ̂(x) = sup_p { Φ(x, p) : p ∈ M }.

A robust design is then obtained by solving Eq. 26 with Φ̂(x) as previously defined.

Other convex formulations of the truss optimization problem are described in (Makrodimopoulos et al. 2010). An interesting discussion of similar problems of topology optimization of structures is presented in (Stolpe 2010). Other similar formulations for obtaining robust designs are studied in (Cherkaev and Cherkaev 2008).

In (Ben-Tal and Nemirovski 1997), the equivalence of the following two expressions was proved:

Φ̂(x) ≤ λ   (29)

and

A(λ, x) = [ λI_q  Q^T
            Q     K(x) ] ⪰ 0,   (30)

where λ ∈ R and A ⪰ 0 means that A is positive semidefinite. Since the epigraph of Φ̂ coincides with {(λ, x) | A(λ, x) ⪰ 0}, and this last set is convex (Vandenberghe and Boyd 1996), Φ̂ is a convex function. Then, for robust optimization we can employ the model proposed by Ben-Tal and Nemirovski (1997) and solve the problem of Eq. 26 using the present nonsmooth optimization algorithm. Note that this problem has linear inequality constraints. To solve the optimization problem we must include all these constraints in the initial set of linear inequality constraints of the auxiliary Problem AP. In addition, the initial point x0 must be interior to the feasible region defined by the linear inequality constraints. It remains to show how to compute the function Φ̂ at an interior point x.
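The epigraph description can be checked numerically: for a fixed design, the block matrix A(λ, x) of Eq. 30 is positive semidefinite exactly for λ at or above the largest generalized eigenvalue of (QQ^T, K(x)). A small sketch with random matrices standing in for Q and K(x) (hypothetical data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
m, q = 5, 3
Q = rng.normal(size=(m, q))            # stand-in for the loading matrix Q
M = rng.normal(size=(m, m))
K = M @ M.T + np.eye(m)                # a positive definite "stiffness" K(x)

def lmi_psd(lam, Q, K, tol=1e-9):
    """Check whether A(lam, x) = [[lam*I_q, Q^T], [Q, K(x)]] is psd."""
    q = Q.shape[1]
    A = np.block([[lam * np.eye(q), Q.T], [Q, K]])
    return np.linalg.eigvalsh(A).min() >= -tol

# largest generalized eigenvalue of (Q Q^T, K(x))
lam_star = np.linalg.eigvals(np.linalg.solve(K, Q @ Q.T)).real.max()
```

By the Schur-complement argument, `lmi_psd` returns True precisely for lam above `lam_star` (up to numerical tolerance), which is the epigraph set described above.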

  • A feasible directions method for nonsmooth convex optimization

It follows that Eq. 30 is equivalent to (Ben-Tal and Nemirovski 1997):

λK(x) − QQ^T ⪰ 0.   (31)

Then, the equivalence of Eqs. 29 and 31 shows that Φ̂(x) is equal to the λ that solves the problem:

min λ
s.t. λK(x) − QQ^T ⪰ 0.   (32)

Since in our case K(x) ≻ 0, we have a so-called Generalized Eigenvalue Problem. In consequence, Φ̂(x) is the highest generalized eigenvalue of the system (QQ^T, K(x)) (Vandenberghe and Boyd 1996). There are efficient routines available for the computation of generalized eigenvalues (Bai et al. 2000).
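For instance, SciPy's `scipy.linalg.eigh` solves the generalized symmetric-definite problem A v = λ B v, so Φ̂(x) can be read off as the largest eigenvalue returned for the pair (QQ^T, K(x)). A sketch with placeholder matrices, not one of the paper's examples:

```python
import numpy as np
from scipy.linalg import eigh

def worst_compliance(Q, K):
    """hat(Phi)(x) as the largest generalized eigenvalue of (Q Q^T, K(x)).

    eigh(a, b) solves a v = lam b v for symmetric a and symmetric
    positive definite b, returning eigenvalues in ascending order,
    so the last entry is the one we need.
    """
    return eigh(Q @ Q.T, K, eigvals_only=True)[-1]

# placeholder data standing in for a truss at some interior design x
rng = np.random.default_rng(2)
m, s = 6, 3
Q = rng.normal(size=(m, s))
M = rng.normal(size=(m, m))
K = M @ M.T + np.eye(m)
```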

If the highest eigenvalue is simple, the function Φ̂ is differentiable. If it is multiple, the function is generally nondifferentiable. In both cases it is possible to compute the required subgradients, see (Seyranian et al. 1994; Rodrigues et al. 1995; Choi and Kim 2004).

We consider four test problems. In all of them, Young's modulus of the material is E = 1.0 and the maximum volume is V = 1.0.

    6.1 Example 1

The first example considers the ground structure of Fig. 3. The length of each of the horizontal and vertical bars is equal to 1.0 and the magnitude of the loads is 2.0. The secondary loadings have a magnitude r = 0.3 and define a basis of the orthogonal complement of the linear span of P, L(P), in the linear space F of the degrees of freedom of nodes 2 and 4. According to the numeration of the degrees of freedom of Fig. 3, the primary loading and the matrix A = [e_1, e_2, e_3, e_4] of the vectors of a basis of F are:

Fig. 3 Truss of Example 1

p_1 = (0, 0, 0, 2, 0, 0, 0, 0)^T,   A =
[0 0 0 0]
[0 0 0 0]
[1 0 0 0]
[0 1 0 0]
[0 0 0 0]
[0 0 0 0]
[0 0 1 0]
[0 0 0 1]

We have to find the orthonormal basis {f_1, f_2, f_3} of the orthogonal complement of L(P) in F. As each of the vectors f_i is in F, they satisfy f_i = A v_i for some v_i ∈ R^4. As they are normal to p_1, the vectors v_i satisfy (p_1)^T A v_i = 0. Then, we can find {v_1, v_2, v_3} as an orthonormal basis of the kernel of (p_1)^T A. This basis can be found using the singular value decomposition of (p_1)^T A (Golub and Van Loan 1996). The result obtained for the v_i is
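This SVD-based construction can be sketched as follows; the right singular vectors associated with (numerically) zero singular values span the kernel. The snippet reproduces the Example 1 data (taking the printed entry 2 in p_1 at face value) and assembles the corresponding matrix Q:

```python
import numpy as np

def kernel_basis(B, tol=1e-10):
    """Orthonormal basis of ker(B), from the right singular vectors of B."""
    _, s, vt = np.linalg.svd(B)
    rank = int(np.sum(s > tol))
    return vt[rank:].T        # columns span the kernel and are orthonormal

# Example 1 data: p1 loads dof 4; the columns of A span F (dofs of nodes 2 and 4)
p1 = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0])
A = np.zeros((8, 4))
A[2, 0] = A[3, 1] = A[6, 2] = A[7, 3] = 1.0

V = kernel_basis((p1 @ A)[None, :])   # {v1, v2, v3}: kernel of (p1)^T A
F = A @ V                             # secondary directions f_i = A v_i
Q = np.column_stack([p1, 0.3 * F])    # Q = [p1, r f1, r f2, r f3], r = 0.3
```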

[v_1, v_2, v_3] =
[1 0 0]
[0 0 0]
[0 1 0]
[0 0 1]

The final result is:

Q = [p_1, r f_1, r f_2, r f_3] =
[0 0   0   0  ]
[0 0   0   0  ]
[0 0.3 0   0  ]
[2 0   0   0  ]
[0 0   0   0  ]
[0 0   0   0  ]
[0 0   0.3 0  ]
[0 0   0   0.3]

    Fig. 4 Result of Example 1


Fig. 5 Example 1: evolution of the four highest eigenvalues

Figure 4 shows the optimal truss obtained using the present algorithm and Fig. 5 shows the evolution of the four highest eigenvalues of the system (QQ^T, K(x)).

    6.2 Example 2

This example considers the same ground structure as Example 1, with the loading condition shown in Fig. 6. The magnitude of the primary loads is 2.0. The secondary loadings have a magnitude r = 0.4 and define a basis of the orthogonal complement of L(P) in the linear space F of all the degrees of freedom of the structure. Figure 7 shows the optimal truss obtained and Fig. 8 shows the evolution of the four highest eigenvalues of the system (QQ^T, K(x)).

    6.3 Example 3

Fig. 6 Truss of Example 2

Fig. 7 Result of Example 2

This example consists of a three-dimensional truss with fixed nodes on the horizontal plane z = 0 and free nodes on the horizontal plane z = 2. The structure has 8 nodes with coordinates

x = cos(2πi/N), y = sin(2πi/N), z = 0, i ∈ {1, ..., N},

x = (1/2) cos(2πi/N), y = (1/2) sin(2πi/N), z = 2, i ∈ {N + 1, ..., 2N},   (33)

with N = 4. All the possible bars between free-free or free-fixed nodes are considered. The loading condition consists of four forces acting simultaneously, applied at the nodes on the plane z = 2. The force at node i is

p_i = (1/√(N(1 + α²))) [sin(2πi/N), −cos(2πi/N), −α]^T, i ∈ {N + 1, ..., 2N},   (34)
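As a sketch, the ground-structure data of Example 3 can be generated as below. The symbol α for the small vertical load component, the restored factors of π, and the signs of the cos and z terms are our reading of Eq. 34, not confirmed by the source:

```python
import numpy as np

def ground_structure(N=4, alpha=0.001):
    """Node coordinates of Eq. 33 and nodal forces of Eq. 34 (signs assumed)."""
    i = np.arange(1, N + 1)
    ang = 2.0 * np.pi * i / N
    # fixed nodes on the unit circle at z = 0
    fixed = np.column_stack([np.cos(ang), np.sin(ang), np.zeros(N)])
    # free nodes on a circle of half radius at z = 2
    free = np.column_stack([0.5 * np.cos(ang), 0.5 * np.sin(ang),
                            2.0 * np.ones(N)])
    # tangential force with a small vertical component, normalized
    # by the factor 1 / sqrt(N (1 + alpha^2))
    scale = 1.0 / np.sqrt(N * (1.0 + alpha ** 2))
    forces = scale * np.column_stack(
        [np.sin(ang), -np.cos(ang), -alpha * np.ones(N)])
    return fixed, free, forces
```

With this normalization each nodal force has magnitude 1/√N, so the four forces together carry unit total magnitude.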

Fig. 8 Example 2: evolution of the four highest eigenvalues


    Fig. 9 Result of Example 3

Fig. 10 Example 3: evolution of the six highest eigenvalues

Table 4 Results of the truss optimization examples

          NB  NI  NF   F
Example 1 10  42  72   258.24
Example 2 10  37  161  278.40
Example 3 22  25  35   110.56
Example 4 35  35  112  135.27

NB: number of bars, NI: number of iterations, NF: number of function evaluations, F: optimal value of the objective function

Table 5 Bar volumes of the optimal structure

Example 1        Example 2        Example 3        Example 4
bar   volume     bar   volume     bar   volume     bar   volume
5-3   2.478e-1   5-3   2.448e-1   1-6   1.247e-1   1-7   1.000e-1
6-4   1.276e-1   3-1   1.195e-1   1-8   1.246e-1   1-10  9.926e-2
4-2   1.251e-1   6-4   2.448e-1   2-5   1.246e-1   2-6   9.947e-2
5-4   3.715e-3   4-2   1.195e-1   2-7   1.247e-1   2-8   1.003e-1
6-3   2.478e-1   5-4   1.265e-2   3-6   1.246e-1   3-7   9.922e-2
3-2   2.478e-1   6-3   1.265e-2   3-8   1.247e-1   3-9   1.002e-1
                 3-2   2.368e-1   4-5   1.247e-1   4-8   9.943e-2
                 4-1   9.195e-3   4-7   1.246e-1   4-10  1.001e-1
                                  5-6   4.842e-4   5-6   1.003e-1
                                  5-7   4.343e-4   5-9   9.935e-2
                                  5-8   4.847e-4   6-7   2.573e-4
                                  6-7   4.847e-4   6-8   2.169e-4
                                  6-8   4.343e-4   6-9   2.366e-4
                                  7-8   4.842e-4   6-10  2.530e-4
                                                   7-8   2.548e-4
                                                   7-9   2.345e-4
                                                   7-10  2.162e-4
                                                   8-9   2.728e-4
                                                   8-10  2.137e-4
                                                   9-10  2.669e-4

with α = 0.001. The secondary loadings have a magnitude r = 0.3 and define a basis of the orthogonal complement of L(P) in the linear space F of all the degrees of freedom of the structure. Figure 9 shows the optimal truss obtained for this example and Fig. 10 shows the evolution of the six highest eigenvalues of the system (QQ^T, K(x)).

    Fig. 11 Result of Example 4


Fig. 12 Example 4: evolution of the six highest eigenvalues

    6.4 Example 4

This example is similar to the previous one. The nodal coordinates and nodal forces are given by Eqs. 33 and 34, respectively, but with N = 5 and α = 0.01. The secondary loadings have a magnitude r = 0.3 and define a basis of the orthogonal complement of L(P) in the linear space F of all the degrees of freedom of the structure. The optimal truss of this example and the evolution of the six highest eigenvalues of the system (QQ^T, K(x)) are shown in Figs. 11 and 12.

Tables 4 and 5 describe the numerical results obtained using the present algorithm.

    7 Conclusions and future work

In this paper, a new approach for unconstrained nonsmooth convex optimization was introduced and its global convergence was proved. The present algorithm is very simple and does not require the solution of quadratic programming subproblems, but only of two linear systems with the same matrix. All test problems were solved with the same values of the parameters, indicating that our approach is robust and that the corresponding code can be employed in practical applications by engineers and other professionals who are not experts in mathematical programming.

When compared with well-established nonsmooth optimization techniques, our algorithm obtained the same results, requiring a comparable number of iterations and function evaluations and exhibiting similar performance profiles.

The main advantage of our approach, as in FDIPA, comes from the fact that we solve linear systems instead of quadratic programs. The state of the art of numerical techniques for linear systems is vast, including direct and iterative methods for simple or high-performance computers. These techniques take advantage of the structure of the system. This suggests the possibility of solving very large problems with the present approach.

Even if the present algorithm is formally restricted to unconstrained convex problems, we can include linear constraints by adding them to Problem AP. This is a significant advantage with respect to existing nonsmooth optimization techniques.

Our algorithm was successfully applied to the topology optimization of trusses, employing a model proposed by Ben-Tal and Nemirovski (1997) based on semidefinite programming. We reformulated this model in such a way as to obtain a convex optimization problem with linear constraints.

The present paper brings a new methodology for nonsmooth optimization, and the authors believe that our algorithm can be significantly improved in future research. We are also working on the generalization of this approach to nonconvex problems and on the treatment of nonsmooth inequality constraints. Since this technique works with internal linear systems similar to those of FDIPA, it seems possible to elaborate an efficient code for problems involving simultaneously smooth and nonsmooth functions.

Acknowledgements The authors wish to thank Claudia Sagastizábal (CEPEL, Brazil) and Napsu Karmitsa (University of Turku, Finland) for their careful reading of this manuscript and their helpful comments. This research was partially supported by CNPq, FAPERJ (Brazil) and ANII (Uruguay).

    References

Auslender A (1987) Numerical methods for nondifferentiable convex optimization. Math Program Stud 30:102-126

Bai Z, Demmel J, Dongarra J, Ruhe A, van der Vorst H (eds) (2000) Templates for the solution of algebraic eigenvalue problems: a practical guide. Software, Environments, and Tools, vol 11. Society for Industrial and Applied Mathematics (SIAM), Philadelphia

Ben-Tal A, Nemirovski A (1997) Robust truss topology design via semidefinite programming. SIAM J Optim 7(4):991-1016

Bendsøe MP (1995) Optimization of structural topology, shape, and material. Springer-Verlag, Berlin

Bonnans JF, Gilbert JC, Lemaréchal C, Sagastizábal C (2003) Numerical optimization: theoretical and practical aspects. Springer-Verlag

Cherkaev E, Cherkaev A (2008) Minimax optimization problem of structural design. Comput Struct 86(13-14):1426-1435

Choi KK, Kim NH (2004) Structural sensitivity analysis and optimization 1: linear systems. Springer, Berlin

Clarke FH (1983) Optimization and nonsmooth analysis. John Wiley and Sons

Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91:201-213

Goffin J, Vial J (2002) Convex nondifferentiable optimization: a survey focused on the analytic center cutting plane method. Optim Methods Softw 17:808-867

Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore

Herskovits J (1982) Développement d'une méthode numérique pour l'optimisation non linéaire. PhD thesis, Paris IX University

Herskovits J (1986) A two-stage feasible directions algorithm for nonlinear constrained optimization. Math Program 36:19-38

Herskovits J (1998) Feasible direction interior-point technique for nonlinear optimization. J Optim Theory Appl 99:121-146

Hiriart-Urruty JB, Lemaréchal C (1993) Convex analysis and minimization algorithms I: fundamentals. Springer-Verlag

Kelley JE Jr (1960) The cutting-plane method for solving convex programs. J Soc Ind Appl Math 8:703-712

Kiwiel KC (1985) Methods of descent for nondifferentiable optimization. Springer-Verlag

Lemaréchal C, Bancora Imbert M (1985) Le module M1FC1. Tech rep, Institut de Recherche d'Automatique

Luenberger DG (1984) Linear and nonlinear programming, 2nd edn. Addison-Wesley

Lukšan L, Vlček J (2000) Test problems for nonsmooth unconstrained and linearly constrained optimization. Tech rep N-798, Institute of Computer Science, Academy of Sciences of the Czech Republic

Mäkelä M, Neittaanmäki P (1992) Nonsmooth optimization: analysis and algorithms with applications to optimal control. World Scientific Publishing

Makrodimopoulos A, Bhaskar A, Keane AJ (2010) Second-order cone programming formulations for a class of problems in structural optimization. Struct Multidiscipl Optim 40(1-6):365-380

Mitchell JE (2003) Polynomial interior point cutting plane methods. Optim Methods Softw 18:507-534

Nesterov Y, Péton O, Vial J (1999) Homogeneous analytic center cutting plane methods with approximate centers. Optim Methods Softw 11:243-273

Panier E, Tits A, Herskovits J (1988) A QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization. SIAM J Control Optim 26:788-811

Rodrigues HC, Guedes JM, Bendsøe MP (1995) Necessary conditions for optimal design of structures with a nonsmooth eigenvalue based criterion. Struct Optim 9(1):52-56

Schramm H, Zowe J (1992) A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results. SIAM J Optim 2:121-152

Seyranian AP, Lund E, Olhoff N (1994) Multiple eigenvalues in structural optimization problems. Struct Optim 8(4):207-227

Stolpe M (2010) On some fundamental properties of structural topology optimization problems. Struct Multidiscipl Optim 41(5):661-670

Tits A, Wächter A, Bakhtiari S, Urban T, Lawrence C (2003) A primal-dual interior-point method for nonlinear programming with strong global and local convergence properties. SIAM J Optim 14:173-199

Vandenberghe L, Boyd S (1996) Semidefinite programming. SIAM Rev 38(1):49-95

