Neurodynamic Optimization: New Models and kWTA Applications - Jun Wang


  • Neurodynamic Optimization: New Models and kWTA Applications

    Jun Wang
    jwang@mae.cuhk.edu.hk

    Department of Mechanical & Automation Engineering

    The Chinese University of Hong Kong

    Shatin, New Territories, Hong Kong

    http://www.mae.cuhk.edu.hk/~jwang

    Computational Intelligence Laboratory, CUHK p. 1/69


  • Introduction

    Optimization is ubiquitous in nature and society.

    Optimization arises in a wide variety of scientific problems.

    Optimization is an important tool for the design, planning, control, operation, and management of engineering systems.


  • Problem Formulation

    Consider a general optimization problem:

    OP1 : Minimize f(x)

    subject to c(x) ≤ 0,

    d(x) = 0,

    where x ∈ ℝ^n is the vector of decision variables, f(x) is an objective function, c(x) = [c1(x), …, cm(x)]^T is a vector-valued function, and d(x) = [d1(x), …, dp(x)]^T is a vector-valued function.

    If f(x) and c(x) are convex and d(x) is affine, then OP1 is a convex programming problem (CP); otherwise, it is a nonconvex program.
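    The convexity condition above is easy to check in the quadratic case: f(x) = (1/2) x^T Q x + q^T x is convex exactly when the symmetric part of Q is positive semidefinite. A minimal NumPy sketch (the matrices below are made-up illustrations, not taken from the slides):

    ```python
    import numpy as np

    def is_convex_quadratic(Q, tol=1e-10):
        """f(x) = 0.5 x^T Q x + q^T x is convex iff sym(Q) is PSD."""
        S = (Q + Q.T) / 2.0                        # symmetric part of Q
        return bool(np.linalg.eigvalsh(S).min() >= -tol)

    print(is_convex_quadratic(np.array([[2.0, 0.0], [0.0, 2.0]])))   # True:  PSD
    print(is_convex_quadratic(np.array([[1.0, 0.0], [0.0, -1.0]])))  # False: indefinite
    ```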


  • Quadratic and Linear Programs

    QP1 : minimize (1/2) x^T Q x + q^T x

    subject to Ax = b,

    l ≤ Cx ≤ h,

    where Q ∈ ℝ^{n×n}, q ∈ ℝ^n, A ∈ ℝ^{m×n}, b ∈ ℝ^m, C ∈ ℝ^{n×n}, l ∈ ℝ^n, h ∈ ℝ^n.

    When Q = 0 and C = I, QP1 becomes a linear program with equality and bound constraints:

    LP1 : minimize q^T x

    subject to Ax = b,

    l ≤ x ≤ h.
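    To make QP1 concrete, the sketch below simulates a simple projection-type neurodynamic model for a bound-constrained special case of QP1 (C = I, no equality constraints), of the general kind discussed in this talk: the state obeys dx/dt = −x + P(x − α(Qx + q)), where P is the projection onto the feasible box, integrated by forward Euler. The particular Q, q, bounds, and step sizes are illustrative values chosen for this sketch, not from the slides:

    ```python
    import numpy as np

    # Illustrative convex QP: minimize 0.5 x^T Q x + q^T x  s.t.  l <= x <= h
    Q = np.array([[2.0, 0.0],
                  [0.0, 2.0]])
    q = np.array([-2.0, -4.0])
    l = np.array([0.0, 0.0])
    h = np.array([1.5, 1.5])

    def project(x):
        """Projection onto the box {x : l <= x <= h}."""
        return np.clip(x, l, h)

    # Forward-Euler simulation of dx/dt = -x + P(x - alpha * (Q x + q))
    alpha, tau = 0.3, 0.5
    x = np.zeros(2)                                   # initial network state
    for _ in range(200):
        x = x + tau * (-x + project(x - alpha * (Q @ x + q)))

    print(x)   # -> approx [1.0, 1.5], the constrained minimizer
    ```

    The unconstrained minimizer here is (1, 2); the upper bound is active in the second coordinate, and the network state settles at the constrained solution (1, 1.5).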


  • Dynamic Optimization

    In many applications (e.g., online pattern recognition and onboard signal processing), real-time solutions to optimization problems are necessary or desirable.

    For such applications, classical optimization techniques may not be competent due to problem dimensionality and stringent requirements on computational time.

    It is computationally challenging when optimization procedures have to be performed in real time to optimize the performance of dynamical systems.

    One very promising approach to dynamic optimization is to apply artificial neural networks.


  • Neurodynamic Optimization

    Because of the inherently parallel and distributed information processing in neural networks, the convergence rate of the solution process does not decrease as the size of the problem increases.

    Neural networks can be implemented physically in designated hardware such as ASICs, where optimization is carried out in a truly parallel and distributed manner.

    This feature is particularly desirable for dynamic optimization in decentralized decision-making situations.


  • Existing Approaches

    In their seminal work, Tank and Hopfield (1985, 1986) applied Hopfield networks to solve a linear program and the traveling salesman problem.

    Kennedy and Chua (1988) developed a neural network for nonlinear programming; it contains finite penalty parameters, so its equilibrium points correspond only to approximate optimal solutions.

    The two-phase optimization networks by Maa and Shanblatt (1992).

    The Lagrangian networks for quadratic programming by Zhang and Constantinides (1992) and Zhang et al. (1992).


  • Existing Approaches (cont'd)

    A recurrent neural network for quadratic optimization with bounded variables only, by Bouzerdoum and Pattison (1993).

    The deterministic annealing network for linear and convex programming by Wang (1993, 1994).

    The primal-dual networks for linear and quadratic programming by Xia (1996, 1997).

    The projection networks for solving projection equations, constrained optimization, etc., by Xia and Wang (1998, 2002, 2004) and Liang and Wang (2000).


  • Existing Approaches (cont'd)

    The dual networks for quadratic programming by Xia and Wang (2001) and Zhang and Wang (2002).

    A two-layer network for convex programming subject to nonlinear inequality constraints by Xia and Wang (2004).

    A simplified dual network for quadratic programming by Liu and Wang (2006).