OPTIMAL CONTROL AND FLIGHT TRAJECTORY OPTIMIZATION APPLIED TO EVASION ANALYSIS
Licentiate Thesis
Jukka Ranta
March 2004
Systems Analysis Laboratory
Department of Engineering Physics and Mathematics
Helsinki University of Technology
FI-02150 Espoo
HELSINKI UNIVERSITY OF TECHNOLOGY
ABSTRACT OF LICENTIATE’S THESIS
Department of Engineering Physics and Mathematics
Author: Jukka Ranta
Department: Department of Engineering Physics and Mathematics
Major subject: Systems and Operations Research
Minor subject: Systems Engineering
English title: Optimal control and flight trajectory optimization applied to evasion analysis
Finnish title: Optimisäätö ja lentoratojen optimointi sovellettuna väistöanalyysiin
Number of pages: 102
Chair: Mat-2 Applied Mathematics
Supervisor: Professor Raimo P. Hämäläinen
Instructor: Dr. Tech. Tuomas Raivio
Abstract:
This thesis deals with the numerical solution of optimal control problems arising from aircraft trajectory optimization and evasion analysis. The Introduction considers optimal control problems, solution methods for them, dynamic modeling of flight vehicles, and flight trajectory optimization in general. Encounter scenarios between an aircraft and a missile are considered. The missile pursues the aircraft, which in turn tries to avoid being hit. The methodology is based on dynamic modeling of the vehicles and optimal control theory. The missile is assumed to use a deterministic and known feedback guidance law, while the controls and the resulting trajectory of the aircraft are subject to optimization. For the solution of the optimal control problem, a direct control parameterization method is applied. The first scenario considered is an endgame evasion where the objective of the optimal control problem is to maximize the minimum distance to the missile, i.e., the miss distance. For the aircraft, a point mass model extended to account for limited rotation rates is used, and a solution method is tailored to handle the kinematic rotation rate constraints and to deal with the different time constants of the dynamic models of the aircraft and the missile. The second scenario studies the maximal distance a missile can be launched from so that it still captures an optimally evading target. The problem is modeled as a bilevel dynamic optimization problem which is, by using optimality conditions, reformulated into a single level problem. A similar solution approach is applied as in the first study. For both scenarios numerical examples are computed and analyzed.
Keywords: Optimal control, dynamic optimization, nonlinear programming, transcription methods, trajectory optimization.
Preface
Optimal control, also referred to as dynamic optimization, is a widely used tool and
framework for various types of analyses. Direct transcription methods have become
common approaches for numerically solving optimal control problems. What they
lack in accuracy, they make up for in a larger region of convergence and a less
problem-dependent solution procedure. Aerospace applications and related trajectory
optimization problems are an important field in which optimal control is used.
This work has been carried out at the Systems Analysis Laboratory of Helsinki
University of Technology in conjunction with the Dynamics and Strategy of Flight
project. I wish to thank Professor Raimo P. Hämäläinen, head of the laboratory,
for the support and the opportunity to participate in the project. In particular,
thanks are due to Dr. Tuomas Raivio, my instructor and co-author. I believe I have
learned much from working with you. I will remember Professor Harri Ehtamo as
an inspiring teacher of optimization theory, and I wish to thank Mr. Kai Virtanen
for the shared knowledge and humor in many conversations. To the whole personnel
of the Systems Analysis Laboratory, I express my thanks for the working
environment and the atmosphere.
The cooperation with the Finnish Air Force and the Laboratory of Aerodynamics is
gratefully acknowledged. I thank the Finnish Air Force for shared insight into
aviation and for funding the project. I wish to thank Professor Jaakko Hoffren and
Mr. Timo Sailaranta of the Laboratory of Aerodynamics for the cooperation and
discussions on modeling aspects.
A great number of friends and relatives have contributed indirectly to this thesis.
The members of the Finnish taido community have, through our common pastime,
greatly enriched my life, as have all my friends, and the time spent with them is
remembered with warmth. I wish to thank my parents Marjatta and Reijo for their
support and love at all times.
Espoo, March 2004
Jukka Ranta
Contents
Introduction
1 Dynamic optimization and optimal control
1.1 Dynamic and differential games
2 Numerical solution methods
2.1 Direct — Indirect
2.2 Direct methods
2.3 Bilevel dynamic optimization
3 Numerical optimization and sparsity
4 Genetic algorithms
5 Flight vehicle models
5.1 Aerodynamic and performance parameters
6 Homing guidance
7 Aerospace trajectory optimization
8 Evasion analyses of this thesis
8.1 Endgame evasion
8.2 No-escape envelope
9 Concluding remarks
Bibliography
[I] Miss Distance Maximization in the Endgame
Raivio, T. and Ranta, J., Manuscript,
Submitted for publication
[II] No-Escape Envelope Computation of a Guided Missile
Ranta, J. and Raivio, T., Manuscript
Introduction
This thesis considers the analysis of pursuit-evasion situations by an optimal control
approach. The focus lies on aircraft-missile encounters, where the missile pursues
the aircraft, and on the optimal evasive maneuvers of the aircraft. The interest
is in the possibilities of controlling the combined aircraft-missile system using the
controls of the aircraft so that the outcome is favourable for the aircraft. The theory
and methodology are based on optimal control, and the formulated problems are
solved by direct numerical methods.
1 Dynamic optimization and optimal control
The theory and framework of optimal control, often referred to also as dynamic
optimization, allows analysis of problems, in which a dynamic system is to be con-
trolled in an optimal manner according to some performance index. The dynamics
describes the evolution of the system's state and how the controls affect it. The
performance index is a functional of the state and the control and gives the cost
to be minimized or utility to be maximized. The history of optimal control reaches
back to the brachistochrone problem, posed by Johann Bernoulli in the 17th century,
and to the calculus of variations, from which optimal control theory developed.
Calculus of variations leads to the Euler-Lagrange equations, which are first order
necessary conditions of optimality for a function to minimize or maximize a func-
tional. A comprehensive introduction to calculus of variations and optimal control
can be found, e.g., in [30, 72, 74].
Of discrete time problems, i.e., problems for which the dynamics evolves in discrete
steps, the name dynamic programming problem is often used. The term also refers
to the solution method developed by Bellman [11]. The method is based on the
principle of optimality, which states that ‘On an optimal path each control is optimal
for the state at which it is executed, regardless of how that state was arrived at,’
and is valid for continuous as well as for discrete time problems. The dynamic
programming method, when extended to continuous time problems, leads to the
Hamilton-Jacobi-Bellman equation, which is a partial differential equation defining
the optimal cost to go function, i.e., performance index value from current time to
the end, on the optimal trajectory.
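The principle of optimality can be illustrated by a minimal backward-induction sketch on a made-up discrete time problem (the states, controls, and costs below are invented for illustration only): the cost-to-go table V is filled from the final stage backwards, and each control is chosen optimally for the state at which it is applied.

```python
# Backward induction on a tiny discrete-time problem: 3 stages, states {0,1,2},
# controls {-1,0,+1}; stage cost = |state| + |control|, terminal cost = |state|.
# All problem data is made up for illustration.

N_STAGES = 3
STATES = [0, 1, 2]
CONTROLS = [-1, 0, 1]

def step(x, u):
    return min(max(x + u, 0), 2)  # dynamics, clipped to the state range

# V[k][x] is the optimal cost to go from state x at stage k
V = [{x: 0.0 for x in STATES} for _ in range(N_STAGES + 1)]
policy = [{x: 0 for x in STATES} for _ in range(N_STAGES)]
for x in STATES:
    V[N_STAGES][x] = abs(x)       # terminal cost

for k in range(N_STAGES - 1, -1, -1):      # sweep backwards in time
    for x in STATES:
        best_u, best_c = None, float("inf")
        for u in CONTROLS:
            c = abs(x) + abs(u) + V[k + 1][step(x, u)]
            if c < best_c:
                best_u, best_c = u, c
        V[k][x] = best_c
        policy[k][x] = best_u
```

The table V is exactly the (discrete) counterpart of the optimal cost to go function defined by the Hamilton-Jacobi-Bellman equation in the continuous case.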
For continuous time optimal control problems the necessary conditions of optimality
are provided by the Pontryagin maximum principle [98], which relates the optimality
of the control to minimizing or maximizing the Hamiltonian function of the problem
at each time instant by the control value subject to control constraints. These
necessary conditions define a two point boundary value problem for the dynamics
of the system and the adjoint states also known as co-states. Analytical as well as
indirect numerical methods of solving optimal control problems are based on the
maximum principle.
The solution of an optimal control problem can be found either by solving the
boundary value problem formulated by the optimality conditions of the maximum
principle, see, e.g., [33], by applying dynamic programming, or by direct optimization
of the objective functional. For example, if the dynamics of a discrete time
problem with finite number of time steps is given in state space form, the problem
can easily be written as a parameter optimization problem with the state transition
equations as constraints, or the dynamics can be inserted into the objective function,
so that only controls are to be chosen. For a continuous time problem the solution
can be found approximately by representing the state and control functions by a
finite number of parameters, and thus transcribing the problem into a parameter
optimization problem, e.g., [61].
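As a minimal sketch of the discrete time case, the dynamics can be inserted into the objective function so that only the controls are chosen. The scalar system and cost below are toy data invented for illustration, and scipy is assumed available.

```python
import numpy as np
from scipy.optimize import minimize

# Discrete-time problem x_{k+1} = x_k + u_k, x_0 = 1, N = 3 steps, rewritten as a
# parameter optimization problem over the controls alone: the state transition
# equations are propagated inside the objective, so no constraints remain.
# Problem data is made up for illustration.

x0, N = 1.0, 3

def objective(u):
    x = x0
    cost = 0.0
    for uk in u:            # insert the dynamics into the objective
        cost += uk**2
        x = x + uk
    return cost + x**2      # terminal state penalty

res = minimize(objective, np.zeros(N))
```

Alternatively, the states could be kept as decision variables with the transition equations as equality constraints; this trades a smaller problem for a sparser, more loosely coupled one.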
Finding a solution in closed form for an optimal control problem, by any method, is
usually possible only for very simple problems. Also, the solution is often available
only in the form of open loop control, i.e., the solution applies only to a given initial
state instead of being in feedback form in which the control is related to the current
state. A special situation is if the dynamics is linear, the objective functional is
quadratic, and there are no constraints, except for fixed initial state. Then, a closed
loop formulation of the control is available, it is a linear function of the state, and the
problem and solution are referred to as linear quadratic (LQ) problem and control
[3, 31, 78].
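As an illustration of the LQ case, the linear state feedback can be computed from the algebraic Riccati equation. The double-integrator data below is a common textbook choice, not taken from this thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQ control: dynamics xdot = A x + B u, cost = integral of x'Qx + u'Ru.
# The optimal control is the linear state feedback u = -K x, with
# K = R^{-1} B' P, where P solves the continuous-time algebraic Riccati equation.
# A, B, Q, R below are illustrative textbook values (double integrator).

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # state weighting
R = np.array([[1.0]])                    # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal feedback gain
```

Because the feedback gain K depends only on the problem data, not on the initial state, this is a genuine closed loop solution.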
Dynamic programming provides the solution in feedback form but is in practice
limited by the size of the problem. The term curse of dimensionality [12] refers
to the fact that the computational and memory requirements of using dynamic
programming increase exponentially with the size of the problem. An approximative
method to generate a feedback formulation is by receding horizon approach, where
the problem is re-solved at each (discrete) control instant and only the part of
the solution before the next control instant is used [82, 86]. The role of dynamic
programming is particularly important in stochastic optimal control [15, 16, 56].
The principle of optimality still holds in the sense of expected or average objective
value, and the method of dynamic programming leads to the solution.
The fields in which optimal control is applied range from industrial engineering and
military applications to economics, medicine, and biology. An industrial example
of optimal control is the control of a distillation process [41, 42]. Military
applications are discussed further in context of aircraft trajectory optimization. In
economics, optimal control is applied, e.g., in real option pricing [43] and manage-
ment of resources [55]. Design of effective medical treatments is formulated as an
optimal control problem in, e.g., [70, 79]. Many biological and ecological systems
are dynamic and, e.g., [63] studies airflow in breathing while [73] considers flight
paths of flying fish.
1.1 Dynamic and differential games
In optimal control there is one system, one control, and one objective. In a dynamic
game setting [8, 62, 67, 80, 97] there is a single common system affected by multiple
controls, each chosen based on a different objective. Hence, multiple players exercise
their control on the common system to optimize their individual objectives. The
structure of the game can differ greatly. It can describe a conflict or a co-operative
situation, and the knowledge the players have of each other or of the system may
differ. For example, in zero sum games the gain of one player is the loss of the other,
so there can be no co-operation. In Stackelberg games one player is the ‘leader’, whose
action is known to the other players and who knows how the other players will react to
his control action, so he can choose his control accordingly. Usually, there is no unique
solution concept.
In pursuit-evasion games [67, 97] the performance index is of zero sum type and the
final time is free. A qualitative solution tells whether the pursuer can capture the
evader or not. Quantitative performance indices can be, e.g., the time of capture
minimized by the pursuer and maximized by the evader or the maximal distance
from the evader the pursuer can start from and still capture the evader. The solution
of a pursuit-evasion game is a saddle point solution from which neither player wishes
to deviate because it would allow the other player to improve his outcome. Hence,
the solution is a Nash equilibrium.
2 Numerical solution methods
The method of dynamic programming is applicable only to problems of moderate
size, and treatment of continuous time problems or problems with a continuous
state space requires discretization or solving the Hamilton-Jacobi-Bellman partial
differential equation.
2.1 Direct — Indirect
In practice the methods of solving deterministic optimal control problems are di-
vided into two categories: direct and indirect methods. Indirect methods proceed by
formulating the optimality conditions according to the Pontryagin maximum prin-
ciple and then numerically solving the resulting two point boundary value problem.
Direct methods discretize the original problem in time and solve the resulting pa-
rameter optimization problem and thus generate an approximate solution of the
original problem. Both methods solve the necessary conditions of optimality and
both discretize the problem; the order in which these two steps are taken is opposite.
The differences between the methods are considered in, e.g., [105].
Solving the two point boundary value problem requires finding the initial values of
the co-states, and possibly a number of Lagrange multipliers, if there are constraints.
Hence, the number of unknowns equals the number of states in the dynamics plus the
Lagrange multipliers [96, 104]. The parameter optimization problem, on the other
hand, usually has a large number of decision variables and constraints corresponding
to the parameters of the discretized state and control and the constraints based on
approximating the dynamics.
Indirect methods are considered to produce more accurate results, whereas direct
methods tend to have better convergence properties. A problem of indirect methods
is their need for a rather good initial guess of the initial values of the co-states in
order to converge. The co-states do not have a physical meaning, and thus, it can be
difficult to find a reasonable initial guess for them. Another issue is the derivation
of the necessary conditions, which must be considered for each problem and, in case
of complex dynamics and constraints structure, may be laborious. Furthermore,
the switching structure of the solution needs to be known in advance. Hence, if
there are state trajectory inequality constraints, the sequence of constrained and
unconstrained subarcs needs to be known. Also, the singular arcs of the problem
must be known in advance. Hence, the problem with indirect methods is in getting
started [33]. The necessary conditions are problem structure specific, but finding a
good initial value of the co-state and the switching structure is case specific, i.e.,
they may change if, e.g., the value of the initial state changes. Hence, even an
automatic solver designed for a particular problem may need an experienced user
for getting started in difficult cases.
To alleviate the problem of finding convergent starting values for the initial values
of the co-states, and to find the switching structure, homotopy techniques, see e.g.,
section 7.2 of [96], have been developed. The approach is to formulate a series of
problems that start with easy ones and gradually approach the original problem.
Also, combining direct and indirect methods is possible.
For both direct and indirect methods, an important aspect is the solution of the dif-
ferential equations. Consequently, both methods use some rather similar approaches.
The following considerations of the direct methods apply also to the solution of the
two point boundary value problem. The difference is that there are no free parame-
ters to be optimized and the dynamics includes also the co-states and terminal value
constraints for them.
2.2 Direct methods
Direct methods that parameterize the infinite dimensional problem have been inten-
sively studied in the 1990s and are considered in, e.g., [18, 19, 22, 23, 38, 45, 57, 61,
106, 125]. The continuous time functions (state and/or control) are represented by
a finite number of parameters by discretization or by using suitable basis functions.
The methods use three basic approaches. First, if only the control is parameter-
ized, the resulting state trajectory and performance index and constraint values can
be solved by numerical integration. The method is known as the direct shooting
method or control parameterization. Second, if only the state is parameterized and
constrained so as to be achievable by some allowed control, the differential inclusion
method is arrived at. Third, both state and control can be parameterized, and
methods such as direct collocation and direct multiple shooting are of this type.
The size of the resulting optimization problem depends on the parameterization
method. Usually, parameterizing only the control results in the smallest problem
and parameterizing both control and state results in the largest. The computational
effort to solve the problem does not depend only on the size of the problem but also
on other aspects. For example, if control parameterization is used, controls early in
the trajectory usually have a large nonlinear effect on the state at later parts of the
trajectory. This may hamper the convergence and even cause numerical problems.
On the other hand, if a very small discretization interval (small integration step
size for the state equations) is required, parameterizing both state and control may
result in a huge optimization problem.
Shooting methods rely on an initial value problem solver, e.g., numerical integration,
to generate the state trajectory, from which the performance index can be evaluated.
The name ‘shooting’ refers to the repeated process of integrating the state trajectory,
observing the outcome, and then adjusting the control. Multiple shooting methods,
e.g. [25], divide the time interval into multiple segments on which the shooting is
performed, and the values of the state variables at the junctions of these segments
are taken among the variables of the parameterization. The effect of the controls is
thus limited to the corresponding segments, and the nonlinear effects of early controls
on the later parts of the trajectory are reduced.
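A minimal single shooting sketch: only the control is parameterized (piecewise constant on N intervals), the state trajectory is generated by explicit Euler integration, and the performance index is evaluated from it. The double-integrator problem and all numbers below are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Direct shooting: decision variables are the N piecewise-constant control values.
# The integrator 'shoots' the trajectory forward from the fixed initial state;
# the optimizer then adjusts the controls. Problem data (double integrator driven
# from x = 1, v = 0 toward the origin over T = 2) is invented for illustration.

N, T = 20, 2.0
dt = T / N

def simulate(u):
    x, v = 1.0, 0.0
    for uk in u:                        # explicit Euler integration of the state
        x, v = x + dt * v, v + dt * uk
    return x, v

def cost(u):
    x, v = simulate(u)
    return 10.0 * (x**2 + v**2) + dt * np.sum(u**2)  # terminal miss + control effort

res = minimize(cost, np.zeros(N))
```

Note how a control value early in u influences the terminal state through the whole remaining integration, which is exactly the strong nonlocal coupling that multiple shooting is designed to reduce.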
Whereas shooting methods rely on a separate integrator to solve the state equa-
tions, several methods that parameterize both state and control or only the state
include the integration in the optimization problem as constraints. These methods
use a numerical integration formula that relates consecutive state and control val-
ues as a constraint to approximately satisfy the state equations. This allows a
fairly efficient use of implicit integration as the set of equations of the integration is
solved as a part of the optimization problem. The use of implicit integration with
a shooting method would require the same set of constraint equations to be solved
at each iteration to evaluate the trajectory. Implicit integration is considered more
robust and accurate but computationally more expensive in comparison to explicit
integration methods [6, 119].
The simplest example of a constraint used for satisfying the state equations is based
on the explicit Euler integration formula: x2 = x1 + ∆tf(x1, u1), where x1 and
x2 are two consecutive discretized state values, u1 is the discretized control value
at the time of x1, f defines the dynamics, i.e., state derivatives, and ∆t is the
discretization interval. The constraint is then x1 +∆tf(x1, u1)−x2 = 0. An implicit
integration formula is already in the form of an equality constraint. The implicit
Euler integration formula is x1 + ∆tf(x2, u2)− x2 = 0. As constraints there is little
difference between the two, but when used for numerical integration of an initial
value problem, with x1, u1, and u2 given and x2 unknown, the required computational
effort is larger for the latter, especially if the function f is nonlinear.
If only the state is parameterized, a constraint of the form x1 + ∆t fmin ≤ x2 ≤ x1 + ∆t fmax
could be used. Here fmin and fmax are the minimum and maximum
state derivatives achievable using feasible controls.
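The two Euler formulas above can be written out as constraint residuals. The scalar dynamics f below is invented only for illustration; note that as constraints both residuals are plain algebraic equations, while using the implicit formula as an integrator requires a nonlinear solve for x2.

```python
import numpy as np
from scipy.optimize import fsolve

# Explicit and implicit Euler formulas as equality-constraint residuals for a
# made-up scalar dynamics f(x, u) = -x**2 + u.

def f(x, u):
    return -x**2 + u

def explicit_residual(x1, u1, x2, dt):
    return x1 + dt * f(x1, u1) - x2     # x2 = x1 + dt f(x1, u1)

def implicit_residual(x1, x2, u2, dt):
    return x1 + dt * f(x2, u2) - x2     # x2 appears inside f: implicit

x1, u, dt = 1.0, 0.5, 0.1
# Used as integrators: the explicit step is a direct evaluation,
# the implicit step needs an equation solve for x2.
x2_explicit = x1 + dt * f(x1, u)
x2_implicit = fsolve(lambda x2: implicit_residual(x1, x2, u, dt), x1)[0]
```

Inside a transcription method both residuals simply become rows of the constraint vector, and the nonlinear solve is absorbed into the NLP iteration.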
2.3 Bilevel dynamic optimization
Some dynamic games, such as certain zero sum games and Stackelberg games, can
be formulated as bilevel dynamic optimization problems. The manuscript II of this
thesis deals with such a problem. The added difficulty is in the objective function of
the problem being the result of another optimization problem. A straightforward
approach is to solve the optimization problem when evaluating the objective func-
tion and constraints, and then use an interpolation of these results to approximate
the objective function. By suitable formulations, the Lagrange multipliers of the
optimization results can be used as gradient information, see e.g., papers III to VI
of [99] and [44]. Another standard approach, found in, e.g., [8, 94], is to formulate
the necessary conditions of optimality of either the minimization or maximization
problem, and then include them as constraints in the other problem, in which all
the decision variables are then chosen. This is the approach used in manuscript II.
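The reformulation can be sketched on a static toy problem (both quadratic objectives below are invented for illustration): the inner problem min over y of (y - x)^2 + y^2 is replaced by its first-order optimality condition, which is imposed as an equality constraint on the outer problem, in which both x and y are then chosen.

```python
import numpy as np
from scipy.optimize import minimize

# Bilevel-to-single-level sketch. Outer problem: min over x of (x - 1)^2 + y^2,
# where y is the inner minimizer of (y - x)^2 + y^2. The inner problem is
# replaced by its stationarity condition d/dy [(y - x)^2 + y^2] = 0, added as an
# equality constraint. All objectives are made up for illustration.

def outer(z):
    x, y = z
    return (x - 1.0)**2 + y**2

def inner_stationarity(z):
    x, y = z
    return 2.0 * (y - x) + 2.0 * y        # first-order condition of the inner problem

res = minimize(outer, [0.0, 0.0], method="SLSQP",
               constraints=[{"type": "eq", "fun": inner_stationarity}])
```

In the dynamic case of manuscript II the same idea applies, with the inner problem's optimality conditions contributing adjoint dynamics and boundary conditions instead of a single algebraic equation.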
3 Numerical optimization and sparsity
A key element in using a direct solution method is the solution of the parameterized
problem, which usually is a nonlinear programming problem (NLP) [10, 17, 46, 91].
Basically, any suitable general optimization implementation can be applied. A
widely used method is sequential quadratic programming (SQP), e.g., [20, 47, 48].
Recently much research interest has been devoted to interior point methods, which
show great promise in efficiency [21, 123, 124, 129].
Transcription methods that discretize the state have a large number of decision
variables and constraints. The optimization problem formulated by embedding the
integration of state equations as constraints into the problem can, however, be effi-
ciently solved if the parameterization is such that the constraints involve only a few
parameter values, i.e., the parameters have a local effect on the state values. Then,
each constraint depends on a small number of decision variables. This means that
the Jacobian of the problem is sparse, i.e., a large portion of the Jacobian elements
are always zero. Several optimization methods proceed by generating a search di-
rection based on a solution of a local approximation of the problem. The solution is
found by solving the Karush-Kuhn-Tucker (KKT) first order necessary conditions
of optimality, which, for a common approximation of a quadratic objective and linear
constraint functions, are a set of linear equations. Solving this set of equations can
be done very efficiently even for large problems if the Jacobian of the problem is
sparse. In such a case, the matrix involved in the KKT conditions is also sparse and
efficient computational methods designed for sparse matrix algebra can be utilized.
Most optimization algorithms use local gradient information. Hence, they require
the gradient of the objective function and constraints at the current iteration point to
be known. Some methods also require second order, i.e., Hessian information. Often,
obtaining analytical gradients is difficult and numerical differences are employed.
The sparsity of the Jacobian can be exploited also here. In principle, evaluating the
effect of one decision variable on the constraints, i.e., one column of the Jacobian,
requires knowledge of the nominal value of the constraints at the current iteration
point and evaluating the constraints with a perturbed value of the decision variable
corresponding to the column. Thus, producing numerical difference approximations
for a full Jacobian requires evaluation of the constraints as many times as there
are decision variables, plus one evaluation for the nominal value. If the Jacobian
is sparse, groups of decision variables can be generated, so that each constraint is
affected by at most one variable of the group. Thus, it is possible to perturb several
decision variables at the same time and fewer constraint evaluations are needed.
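The grouping idea can be sketched for a constraint function with a tridiagonal sparsity pattern (the function below is made up for illustration): each constraint depends on at most three neighbouring variables, so the columns {0, 3, 6, ...}, {1, 4, 7, ...}, and {2, 5, 8, ...} can be perturbed simultaneously, needing only 3 perturbed evaluations instead of n.

```python
import numpy as np

# Grouped finite differences for a sparse Jacobian. Constraint i below depends
# only on x[i-1], x[i], x[i+1] (tridiagonal pattern), so variables with indices
# equal modulo 3 never share a constraint and can be perturbed together.
# The constraint function is invented for illustration.

n = 9

def constraints(x):
    c = np.empty(n)
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        c[i] = x[lo] + x[i]**2 + x[hi]
    return c

def grouped_jacobian(x, h=1e-6):
    c0 = constraints(x)                    # nominal value
    J = np.zeros((n, n))
    for g in range(3):                     # one perturbed evaluation per group
        cols = list(range(g, n, 3))
        xp = x.copy()
        xp[cols] += h                      # perturb the whole group at once
        dc = (constraints(xp) - c0) / h
        for j in cols:                     # each row is hit by one group member
            for i in range(max(j - 1, 0), min(j + 1, n - 1) + 1):
                J[i, j] = dc[i]
    return J
```

Four constraint evaluations (one nominal plus three perturbed) thus replace the ten that a column-by-column difference scheme would need here; the saving grows with the problem size.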
4 Genetic algorithms
The use of simulated natural evolution as a search or optimization method [5, 51, 59,
85] has produced a group of so called evolutionary algorithms. Of these, genetic algo-
rithms are the best known. These methods belong to a larger group called heuristic
methods. The term soft computing is also used for methods based strongly on
intuition instead of hard mathematical derivations. Whereas gradient based methods
proceed by deterministically improving an iteration point, genetic algorithms use a
random population of solution candidates. The features of the best candidates are
used for generating new populations with the intent of producing new and better
candidates. The basic approach uses three operators on the population: selection,
crossing over and mutation. Selection chooses, possibly in a random fashion, the
best candidates for the new population. Crossing over combines features of the se-
lected members of the population, and mutation introduces random variability into
the members. The convergence of the repeated selection – crossing over – mutation
procedure to the optimal solution is based on the schema theorem, see, e.g., [5]. A
schema is a similarity template fixing some values of the elements of the decision
variable vector and leaving others free. The schema theorem states that the number
of candidates in the population matching a schema increases if those candidates have
above-average fitness.
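The three operators can be sketched on the classic 'onemax' toy problem, maximizing the number of ones in a bit string. The operator choices below (tournament selection, single-point crossing over, bit-flip mutation) are one common textbook variant, not the only possibility.

```python
import random

# Minimal genetic algorithm: selection, crossing over, mutation, applied to the
# onemax toy problem (maximize the number of ones in a 30-bit string).
# All parameter values are illustrative.

random.seed(1)
N_BITS, POP, GENS, P_MUT = 30, 40, 60, 0.02

def fitness(ind):
    return sum(ind)

def tournament(pop):          # selection: better of two random members
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):        # single-point crossing over
    cut = random.randrange(1, N_BITS)
    return p1[:cut] + p2[cut:]

def mutate(ind):              # random bit flips
    return [1 - g if random.random() < P_MUT else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
```

Note that no gradient information is used anywhere; the search is driven entirely by fitness comparisons on the population.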
Genetic algorithms are a global search strategy that generally starts with the initial
population randomly dispersed into the search domain. They have a fairly good
probability of locating the global optimum from among local optima, in contrast to
gradient based methods, which converge to the local optimum closest to the given
starting point. However, the convergence of genetic algorithms is slow, compared to
gradient based methods, when the problem is sufficiently smooth for gradient based
methods to be applicable. This has led to the idea of combining the methods, see
e.g., [103]. The genetic algorithm can be used for generating a starting point for
the gradient based search. Alternatively, the genetic search can be enhanced by
performing local gradient based searches on the members of the population.
Parameterized optimal control problems are usually such that gradient based meth-
ods can be employed. However, complex problems may have local optima, which
makes the use of genetic algorithms for starting point generation interesting. The
idea is to generate a starting point, from which the gradient based method con-
verges to the global optimum. The nonlinearity of the problem dynamics may lead
to local optima or the objective function may be nonconvex. The endgame evasion
analysis in the manuscript I of this thesis involves a convex objective function to be
maximized and highly nonlinear dynamics.
Genetic algorithms are poor in handling constraints that in practice must usually
be included by penalty functions or by completely ignoring infeasible solution candi-
dates [83]. For some problems crossing over and mutation operators can be designed
to produce only feasible candidates, but this is very problem specific. Thus, applying
a genetic algorithm to optimal control problems is more naturally implemented
by parameterizing only the controls and using a shooting approach. In [40], the dy-
namic model is such that a state parameterization using splines can be used without
equality constraints.
5 Flight vehicle models
Usually, aircraft and missile models used in a trajectory optimization context are based
on the point mass assumptions: [A1] The earth is assumed flat. [A2] The velocity
vector, reference line of the vehicle and the thrust and drag forces are all parallel.
[A3] The lift force is orthogonal to the velocity vector. [A4] The time constants
of rotation dynamics are assumed negligibly small compared to the dynamics of
translational motion. Also wind is often ignored in the models, and the velocity
vector of the vehicle is considered to be equal in magnitude but opposite in direction
to the airflow around the vehicle. These assumptions are often associated with Miele
[84]. Aircraft modeling is considered extensively in [118].
The position of the vehicle in three dimensional space is usually defined by three
state variables: the inertial coordinates x and y and the altitude h. The
orientation of the vehicle, i.e., the direction of the velocity vector, is denoted by the
Euler angles: heading angle χ, flight path angle γ, and bank angle µ. Of these, the
heading and flight path angles are state variables, sufficient to determine the
direction of the velocity, whereas the bank angle is a control variable that determines
the direction of the lift force. The heading angle is the angle between the projection
of the velocity vector onto the xy plane and the x axis, and the flight path angle is
the angle between the velocity vector and this projection. The bank angle is then
the rotation around the velocity vector. The Euler angles are illustrated in
figure 1. The sixth state variable is the magnitude of the velocity. In some cases the
mass of the vehicle and its change due to fuel consumption is also of interest and
can be included as a state variable. In addition to the bank angle, the commonly
used control variables are the load factor and the throttle setting. The load factor
is the magnitude of the lift force relative to the gravitational force; oriented by the
bank angle, it controls the rotation of the velocity vector. The throttle setting is
usually defined as the magnitude of active thrust relative to maximum thrust, i.e.,
it is a real number in the range of
[0,1]. This definition of throttle setting simplifies the formulation of optimal control
problems because the control has fixed upper and lower bounds, whereas maximum
thrust depends on flight conditions.
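Written out, the point mass equations of motion under assumptions [A1]–[A4] take the following form; the sketch treats thrust, drag, and mass as given constants (the values in the check below are hypothetical), whereas in the actual models they depend on the flight condition.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def point_mass_rhs(state, controls, thrust_max, drag, mass):
    """Flat-earth point mass dynamics under assumptions [A1]-[A4].

    state    = (x, y, h, v, gamma, chi): position, speed, flight path, heading
    controls = (n, mu, eta): load factor, bank angle, throttle in [0, 1]
    """
    x, y, h, v, gamma, chi = state
    n, mu, eta = controls
    return np.array([
        v * np.cos(gamma) * np.cos(chi),                       # x-dot
        v * np.cos(gamma) * np.sin(chi),                       # y-dot
        v * np.sin(gamma),                                     # h-dot
        (eta * thrust_max - drag) / mass - g * np.sin(gamma),  # v-dot
        g / v * (n * np.cos(mu) - np.cos(gamma)),              # gamma-dot
        g * n * np.sin(mu) / (v * np.cos(gamma)),              # chi-dot
    ])

# Sanity check with hypothetical values: in level flight with n = 1, wings
# level, and thrust balancing drag, all rates except x-dot vanish.
rhs = point_mass_rhs((0.0, 0.0, 3000.0, 200.0, 0.0, 0.0), (1.0, 0.0, 0.5),
                     thrust_max=40e3, drag=20e3, mass=10e3)
```

Note how the fixed bounds of the throttle eta ∈ [0, 1] enter the model directly, while the flight-condition dependence is pushed into `thrust_max` and `drag`.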
Figure 1: Euler angles defining the attitude of the velocity vector v and the direction
of the lift force.
The difficulty of solving an optimal control problem depends to some extent on
the dimension of the dynamics. Hence, it is desirable to reduce the number of
state variables if possible. In some trajectory optimization problems certain states
are of no interest, the solution may be known to keep some state at a constant
value, or a state may actually be constrained. For example, a minimum time ascent
to a given altitude can be considered in the xh plane, with the state variables y
and heading angle and the control variable bank angle removed from the model.
The point mass model gives reasonably accurate results if the trajectory does not
contain aggressive maneuvers in which the magnitude or direction of the lift force
changes rapidly or large values of the lift force are applied. A flight vehicle is
controlled by aerodynamic control surfaces that produce forces that alter the
attitude of the vehicle. This in turn alters the balance of the forces affecting the
direction of the velocity vector. A large lift force requires a large deviation between
the velocity vector and the reference line of the vehicle. This implies that the thrust
force, still parallel with the reference line of the vehicle, is no longer parallel with
the velocity vector, and assumption [A2] is contradicted. For the direction of the
lift force to change rapidly the bank angle must change rapidly, which contradicts
assumption [A4] of negligible rotation dynamics.
The missile evasion trajectories considered in this thesis contain maneuvers with
rapid lift force changes both in magnitude and direction. Therefore, the following
extensions to the point mass model of the aircraft are made. The model includes
angle of attack, which is defined as the angle between the velocity vector and the
reference line of the vehicle in the plane spanned by the velocity and lift force vectors.
It is taken as a control variable and replaces the load factor of the point mass model.
The thrust force is then divided into components parallel with the lift and drag force
vectors. Drag force remains parallel to the velocity vector and perpendicular to the
lift force. The rotation dynamics are still ignored but kinematics are considered in
the form of angular rate constraints. The rate of change of the bank angle and the
pitch angle are considered and formulated as control rate constraints. Manuscript I
considers the aircraft model extensions in more detail.
For a missile the consideration of the angle of attack is particularly important
because it can attain very large values. Thus, the thrust force is likely to deviate
greatly from the direction of the velocity vector. Manuscript II considers a problem
in which the missile’s rocket motor is active and a modeling of the angle of attack
is presented.
In the future, the increasing maneuverability and technical solutions, such as thrust
vectoring, will require model modifications. The modeling, optimal use, and effects
of thrust vectoring have been considered in [24], and the optimization of a very high
angle of attack cobra maneuver has been studied in [75]. Also, in these studies, the
angle of attack reaches values above the stall limit, which requires a more extensive
consideration of the aerodynamic characteristics of the vehicles.
5.1 Aerodynamic and performance parameters
The properties of the atmosphere are of importance as many parameters of the
models are affected by air density and Mach number. The Mach number is often
used as an independent variable for stating aerodynamic measurement data and is
defined as the ratio of the velocity of the vehicle and the speed of sound at current
altitude. The air density and speed of sound are available as functions of altitude
from standard atmosphere models, e.g., [4, 39].
The drag force is usually characterized by the drag coefficient, which relates the force
to the dynamic pressure and the reference wing area of the vehicle. Often
a quadratic dependence on the load factor or lift coefficient is assumed, and the
drag coefficient is then composed of the zero-lift drag coefficient and the induced
drag coefficient, which results from generating lift. Drag coefficient values can be
obtained through wind
tunnel and flight tests and are usually declared as tabular data. Thus, it is also
common that the drag coefficient is given at, e.g., certain lift coefficient, altitude,
and Mach number triplets.
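As a sketch of this nondimensionalization, with a quadratic polar and hypothetical coefficient values:

```python
def drag_force(v, rho, S, CD0, k, n, mass, g=9.81):
    """Drag from a quadratic polar CD = CD0 + k * CL**2; the lift
    coefficient follows from the load factor n via n*m*g = q*S*CL."""
    q = 0.5 * rho * v**2          # dynamic pressure
    CL = n * mass * g / (q * S)   # lift coefficient realizing load factor n
    return q * S * (CD0 + k * CL**2)

# Hypothetical fighter-sized values: doubling the load factor quadruples
# the induced part of the drag while the zero-lift part is unchanged.
D1 = drag_force(v=250.0, rho=0.9, S=30.0, CD0=0.02, k=0.1, n=1.0, mass=15e3)
D2 = drag_force(v=250.0, rho=0.9, S=30.0, CD0=0.02, k=0.1, n=2.0, mass=15e3)
```

In practice CD0 and k would themselves be tabulated against Mach number rather than constants, which is where the interpolation issues discussed below enter.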
The thrust force of a missile is usually a fixed function of time defined by the
characteristics of the rocket motor. For an aircraft the thrust force depends on,
e.g., the airflow and fuel injection into the engine. When the fuel flow is considered
through the throttle setting, the maximal achievable thrust is of interest. Maximum
thrust is often declared as tabular data for different flight conditions.
In addition to drag and thrust, a number of other parameters affect characteristics
of the models. Among those of most importance are the vehicle mass and reference
wing area. Mass can be considered fixed or it may depend on the fuel consumption,
which for a missile is typically a fixed function of time, whereas for an aircraft its
change depends on the throttle setting.
There are also constraints that need to be taken into account when designing or
optimizing flight trajectories. The load factor should remain within values that do
not induce stall or cause structural damage to the vehicle. Too large a dynamic
pressure can in itself be harmful to the vehicle. The aforementioned bank angle and
pitch angle rate limits are also vehicle dependent limits on performance.
For realistic modeling of an air vehicle the aerodynamic and performance
characteristics are important. A great variety of different vehicles can be described by
the same basic model by using appropriate characteristic data. The characteristics
are also usually only available by testing and measurement. Hence, no analytical
function is available, but a finite data set, which gives the parameter values only at
discrete operating points, must be used. Some form of interpolation or approxima-
tion is therefore required in order to use the model under varying flight conditions.
In order to facilitate effective numerical solution of the parameterized problem or
the two point boundary value problem by gradient utilizing methods, a sufficiently
smooth representation of the data between measurement points is needed.
A smooth representation of pointwise data can be achieved by using interpolation
by, e.g., splines of suitable dimensions, or by fitting an approximating function to
the data by, e.g., the least squares method. Both approaches have their advantages
and drawbacks. An interpolation fits the data perfectly but may fluctuate between
the points. This can hamper convergence or even create local optima. Fitting a
function to the data requires a modeling effort in order to determine a suitable
form for the function. A fitted function usually does not coincide with the data
points, but if the data is regarded as measurements with accompanying measurement
errors, this is not a great problem. Representing different data sets may require
different approximating functions, whereas interpolation can be considered as a more
automated, data set independent approach.
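The trade-off can be demonstrated on a small hypothetical drag coefficient table; an interpolating polynomial stands in for a spline here, and `numpy.polyfit` provides the least squares fit.

```python
import numpy as np

# Hypothetical zero-lift drag coefficient tabulated at a few Mach numbers.
mach = np.array([0.4, 0.6, 0.8, 0.9, 1.0, 1.2, 1.6])
cd0 = np.array([0.016, 0.016, 0.018, 0.024, 0.030, 0.027, 0.024])

# Interpolation: a degree-6 polynomial reproduces every data point exactly
# but may fluctuate between them.
p_interp = np.polyfit(mach, cd0, deg=len(mach) - 1)
# Approximation: a least squares quartic is smoother but leaves residuals.
p_fit = np.polyfit(mach, cd0, deg=4)

err_interp = np.polyval(p_interp, mach) - cd0   # ~0 at the data points
err_fit = np.polyval(p_fit, mach) - cd0         # small nonzero residuals
```

Both representations are smooth between the points and hence usable with gradient based solvers; the choice is between exactness at the data points and the risk of oscillation between them.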
6 Homing guidance
Anti-aircraft missiles require an effective homing system in order to reach and
detonate within a lethal radius of their target. Typically, modern missiles employ
a radar or an infrared sensor to provide measurements of the target location. The
guidance logic of the homing system translates the measurements into guidance
commands, which the guidance system then translates into commands for the control
surface actuators. Some missiles rely on a supporting aircraft or a ground station
to generate the radar signal reflecting from the target; others also receive guidance
commands from such a source. In a guidance method known as the beam rider the missile does not
observe the target but follows a beam projected from the launch site to the target.
One of the most widely used guidance laws is proportional navigation (PN) [1, 89,
131], in which the lateral acceleration of the missile, perpendicular to the velocity of
the missile (pure PN) or perpendicular to the line of sight (true PN), is proportional
to the observed line of sight rate (LOSR), i.e., the angular velocity of the line
connecting the missile and the target. Hence, the change in the heading of the
missile is also proportional to the LOSR. In essence, PN is simply a proportional
controller that regulates the LOSR to zero. The idea is that if the LOSR is zero, the
target and the missile are on a collision course, i.e., if they continue at their current
velocities and headings they will collide. For certain models proportional navigation
constitutes an optimal minimum control effort guidance [77]. Different variants of
PN exist and new ones are still being developed, see e.g. [13].
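As a concrete illustration, a planar pure PN pursuit against a constant velocity target can be simulated in a few lines; the engagement geometry, speeds, and navigation constant below are hypothetical, and the LOS rate is obtained by finite differencing as a simple stand-in for a seeker measurement.

```python
import numpy as np

def simulate_pn(N=4.0, dt=0.01, t_max=30.0):
    """Planar pure proportional navigation: the missile heading rate is
    N times the observed line of sight rate. Returns the miss distance,
    i.e., the minimum missile-target separation along the trajectories."""
    p_m, v_m, chi = np.array([0.0, 0.0]), 600.0, 0.3           # missile
    p_t, v_t = np.array([8000.0, 2000.0]), np.array([-250.0, 0.0])  # target
    r = p_t - p_m
    lam_prev = np.arctan2(r[1], r[0])          # initial line of sight angle
    miss = np.hypot(r[0], r[1])
    for _ in range(int(t_max / dt)):
        r = p_t - p_m
        miss = min(miss, np.hypot(r[0], r[1]))
        lam = np.arctan2(r[1], r[0])
        lam_dot = (lam - lam_prev) / dt        # observed LOS rate
        lam_prev = lam
        chi += N * lam_dot * dt                # PN: heading change ~ LOSR
        p_m = p_m + v_m * dt * np.array([np.cos(chi), np.sin(chi)])
        p_t = p_t + v_t * dt
    return miss
```

Against the non-maneuvering target the LOSR is driven to zero and the miss distance is small; textbook values of the navigation constant N lie between 3 and 5.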
Recently, missile guidance laws based on dynamic game formulations have received
research interest [52, 108, 110, 111, 115, 122]. Proportional navigation has a draw-
back in assuming linear motion of the target. The idea of game optimal control laws
is to generate as close a hit as possible against the worst possible evasive maneuver
of the target. The development of unmanned (uninhabited) aerial vehicles (UAVs),
potentially capable of more aggressive evasive maneuvers than a pilot could
withstand, and the concept of anti-missile missiles have fueled the interest in new
and more effective guidance laws, because in both cases the target is harder to hit
than current fighter aircraft.
7 Aerospace trajectory optimization
Aviation has been one of the important fields of optimal control. During World War
II optimal ascent trajectories were studied in Germany because an energy advantage
often decided the winner in a duel [71]. This is an example of minimum flight time
trajectory optimization for reaching a given target state set from a given initial state
[32, 107, 113].
A classic optimal control problem is the Goddard rocket problem [50] of reaching
maximal altitude. Space mission planning and the problem of designing launch
and re-entry trajectories and placing satellites into orbit have been an important
incentive for the development of optimal control. Current research considers, e.g.,
computation of such trajectories on-board a space vehicle instead of using a pre-
planned trajectory computed at a ground station [2, 26, 36, 76, 132] and optimal
interplanetary and orbital transfer trajectories [7, 9, 58, 90, 102, 121].
Civil aviation interests cover such problems as minimum fuel flight, which is a
problem of reaching a given target set from a given initial state, and maximum range
flight. Also of interest are issues related to safety, such as avoiding a crash as a
result of windshear [34, 35, 88] or engine failure [37] and coordinating air traffic
[60]. Military aviation shares many interests with civil aviation but also has a number
of special interests, e.g., the design of optimal guidance laws for missiles [14] and
avoiding detection by radar [92, 93].
Many problems of military aviation involve conflict. This often leads to game
formulations. Missile evasion can be formulated as a pursuit-evasion game, where the
missile has the role of the pursuer and the aircraft the evader [27, 28, 49, 52, 100].
Pursuit-evasion between two aircraft can be a game with fixed roles or a two-target
game, where the roles are not fixed [54, 68, 101, 112]. In [127, 128] generation
of control signals, i.e., pilot action, is studied by using decision analytic methods.
Missile defence systems have also incited interest in pursuit-evasion games between
two missiles [29, 81, 114].
8 Evasion analyses of this thesis
The following manuscripts both present an analysis of missile evasion using aircraft
trajectory optimization methods. The first one considers optimal endgame evasion,
i.e., last-ditch maneuvers. The second one considers optimal missile outrunning
maneuvers and the resulting feasible, maximal launch distance of the missile.
8.1 Endgame evasion
Endgame evasion has been studied by both simulation and optimization
approaches. Typically, the miss distance, i.e., the minimum distance between the
aircraft and the missile along the trajectories, has been used as the performance
index. Both optimal maneuvers and simulation results of typical maneuvers are
studied in [65, 66]. A point is made that executing an optimal maneuver is difficult
while, e.g., a sustained maximum g turn is easier from the point of view of the
pilot. Ref. [64] considers a large set of simulations. Optimal evasion is studied in
[95, 117]. Ref. [130] studies the effect of a periodic maneuver on the miss distance
of a proportional navigation missile. Usually, the investigations are performed using
nonlinear two or three dimensional point mass models. With suitable simplifications
analytical optimal maneuvers versus proportional navigation can be obtained [116].
The main contributions of manuscript I are the extension of the aircraft model
to include the angle of attack and the rotation rate kinematics, and a tailored
solution method that takes into account the fast dynamics of the missile and
includes the rotation rate constraints by differential inclusion. The problem has
numerous local optima, and a genetic algorithm for generation of the starting point
of the gradient based optimization is implemented. The resulting trajectories and
effects of the limited rotation rates of the aircraft on the evasion trajectories are
analyzed.
8.2 No-escape envelope
The use of missiles by two aggressive aircraft is studied, e.g., in [68, 87]. In addition
to flight trajectory optimization, missile use is considered. A subproblem of this
is the pursuit-evasion game between the missile and the aircraft. The set of initial
states from which a missile is able to reach within a prescribed capture radius from
the target aircraft, irrespective of the target's actions, is known as the capture set. The
boundary of the capture set, the no-escape envelope, is studied in [100, 109]. Ref. [53]
studies optimal launch conditions of a feedback guided missile against a suboptimally
evading target.
Manuscript II considers the no-escape envelope of a feedback guided missile versus
an optimally evading target aircraft. The problem of evaluating the maximal launch
distance, minimized by the evasion trajectory of the aircraft, is formulated as a bilevel dynamic
optimization problem. To facilitate effective computational solution, the original
problem is transformed into a minimization problem using optimality conditions of
the maximization problem. The approach is demonstrated with realistic aircraft
and missile models in three dimensions. Aspects of the problem formulation, vehicle
modeling, and results are discussed.
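Schematically, with simplified notation (the symbols below are illustrative, not the exact formulation of manuscript II), the bilevel structure reads:

```latex
\[
  \min_{u(\cdot)} \; d^{*}(u), \qquad
  d^{*}(u) \;=\; \max_{d \ge 0} \; d
  \quad \text{s.t.} \quad
  \dot{x} = f\bigl(x, u, \gamma(x)\bigr), \quad
  x(0) = x_{0}(d), \quad
  \min_{t}\, \lVert x_{A}(t) - x_{M}(t) \rVert \le r_{c},
\]
```

where $u(\cdot)$ is the evader control, $\gamma$ the feedback guidance law of the missile, $x_{0}(d)$ the initial state corresponding to launch distance $d$, and $r_{c}$ the capture radius. Replacing the inner maximization by its first-order optimality conditions turns the bilevel problem into a single level minimization over the controls, the launch distance, and the associated multipliers.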
9 Concluding remarks
This thesis consists of this introductory section and two manuscripts that consider
pursuit-evasion situations between a missile and an aircraft. Endgame evasion and
computation of the no-escape envelope are considered. Dynamic models of the
vehicles and a feedback guidance law of the missile are introduced and used for
describing the encounter dynamics. The control and resulting trajectory of the
aircraft are optimized.
The solution method used in the studies is a direct transcription method. The original
infinite dimensional problem is approximated by a finite dimensional parameter
optimization problem. A direct multiple shooting approach is used. Control rate
constraints are included by a differential inclusion approach, and a multirate inte-
gration scheme is used for handling the different time scales of the dynamics of the
aircraft and the missile. To promote convergence and to avoid local optima a genetic
algorithm is used for generating the starting point of the gradient based search.
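The multirate idea can be sketched with a toy stiff system: within each integration step of the slow (aircraft-like) state, the fast (missile-like) state is advanced with several substeps. The dynamics, time constants, and step sizes below are hypothetical.

```python
def rk4_step(f, y, t, h):
    """One classical fourth order Runge-Kutta step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def multirate_step(f_slow, f_fast, y_slow, y_fast, t, h, m):
    """Advance the slow state once with step h and the fast state with
    m substeps of h/m, freezing the slow state seen by the fast one."""
    y_slow_new = rk4_step(lambda s, y: f_slow(s, y, y_fast), y_slow, t, h)
    for i in range(m):
        y_fast = rk4_step(lambda s, y: f_fast(s, y, y_slow),
                          y_fast, t + i * h / m, h / m)
    return y_slow_new, y_fast

# Toy system with clearly separated time constants (hypothetical values):
f_slow = lambda t, y, z: -0.5 * y            # slow decay
f_fast = lambda t, y, z: -50.0 * (y - z)     # fast state tracks the slow one
ys, yf = 1.0, 0.0
for k in range(100):
    ys, yf = multirate_step(f_slow, f_fast, ys, yf, 0.01 * k, 0.01, m=10)
```

Integrating both subsystems at the fast rate would be accurate but wasteful; the multirate scheme keeps the large step where the dynamics allow it.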
Direct solution methods of optimal control problems avoid the problem of formulat-
ing the optimality conditions and have better convergence properties than indirect
methods. Especially when implementing a new problem, direct methods only re-
quire the dynamics and constraints to be defined; an initial guess of the co-states
or knowledge of the switching structure is not needed.
The benefit of trajectory optimization results to aviation professionals can be
enhanced if they can solve problems on their own and observe the effects of, e.g.,
the initial state on the results. Hence, a system that does not require knowledge of
optimization is needed. An interface for the endgame evasion problem of this thesis,
with the possibility of adding other problems later, was implemented in [69]. Another
similar software package is presented in [126].
Both studies of the thesis consider only the dynamics of the encounter. In a real
encounter, the aircraft is likely to have countermeasures, e.g., infrared flares or chaff,
at its disposal. The modeling of the effects of the countermeasures and optimizing
the use of such measures are possible directions of further research. Also, both the
aircraft and the missile are assumed to have perfect knowledge of each other’s state.
In a real situation, the position of the missile, let alone its velocity or heading, is
unlikely to be known to the aircraft to any degree of accuracy. On the other hand,
there are likely to be measurement errors in the missile’s measurements of the
aircraft’s state. The
unknown state of the missile could be considered by a worst case analysis, e.g., the
best evasive trajectory against the worst possible missile state in a given set. A
stochastic approach could be used for analyzing the effects of both the unknown
missile state and measurement errors.
Bibliography
[1] Adler, F.P., Missile guidance by three-dimensional proportional navigation,
Journal of Applied Physics, Vol. 27, No. 5, pp. 500-507, 1956.
[2] Anand, J.K., Bhat, M.S., and Ghose, D., Performance of Parallel Shoot-
ing Method for Closed Loop Guidance of an Optimal Launch Vehicle Trajec-
tory, Optimization and Engineering, Vol. 1, No. 4, pp. 399-435, 2000.
[3] Anderson, B.D.O, and Moore, J.B., Optimal Control: Linear Quadratic
Methods, Prentice Hall, Englewood Cliffs, 1990.
[4] Anon., ISO Standard Atmosphere, International Organization for Standard-
ization, ISO 2533:1975.
[5] Ansari, N., and Hou, E.S.H., Computational Intelligence for Optimization,
Kluwer Academic Publishers, 1997.
[6] Ascher, U.M., and Petzold, L.R., Computer Methods for Ordinary Dif-
ferential Equations and Differential-Algebraic Equations, SIAM, 1998.
[7] Axelrod, A., Guelman, M., and Mishne, D., Optimal Control of Inter-
planetary Trajectories Using Electrical Propulsion with Discrete Thrust Levels,
Journal of Guidance, Control, and Dynamics, Vol. 25, No. 5, pp. 932-939, 2002.
[8] Basar, T., and Olsder, G., Dynamic Noncooperative Game Theory, 2nd
ed., Academic Press, London, 1995.
[9] Baumann, H., and Oberle, H.J., Numerical Computation of Optimal Tra-
jectories for Coplanar, Aeroassisted Orbital Transfer, Journal of Optimization
Theory and Applications, Vol. 107, No. 3, pp. 457-479, 2000.
[10] Bazaraa, M., Sherali, H., and Shetty, C., Nonlinear Programming:
Theory and Algorithms, 2nd ed., Wiley, New York, 1993.
[11] Bellman, R.E., Dynamic Programming, Princeton University Press, Prince-
ton, 1957.
[12] Bellman, R.E., Adaptive Control Processes: A Guided Tour, Princeton Uni-
versity Press, Princeton, 1961.
[13] Ben-Asher, J.Z., and Levinson, S., Proportional Navigation Guidance
Law for Ground-to-Air Systems, Journal of Guidance, Control, and Dynamics,
Vol. 26, No. 5, pp. 822-825, 2003.
[14] Ben-Asher, J.Z., and Yaesh, I., Advances in Missile Guidance Theory,
Progress in Astronautics and Aeronautics, Vol. 180, AIAA, Reston, VA, 1998.
[15] Bertsekas, D., Dynamic Programming and Stochastic Control, Academic
Press, New York, 1976.
[16] Bertsekas, D.P., Dynamic Programming and Optimal Control, Vol I,
Athena Scientific, Belmont, MA, 1995.
[17] Bertsekas, D.P., Nonlinear Programming, Athena Scientific, Belmont, MA,
1995.
[18] Betts, J., Survey of Numerical Methods for Trajectory Optimization, Journal
of Guidance, Control and Dynamics, Vol. 21, No. 2, pp. 193-207, 1998.
[19] Betts, J.T., Practical Methods for Optimal Control Using Nonlinear Pro-
gramming, SIAM: Advances in Design and Control, 2001.
[20] Betts, J.T., and Frank, P.D., A sparse nonlinear optimization algorithm,
Journal of Optimization Theory and Applications, Vol. 82, No. 3, pp. 519-541,
1994.
[21] Betts, J.T., and Gablonsky, J.M., A Comparison of Interior Point and
SQP Methods on Optimal Control Problems, Mathematics and Computing
Technology Reports M&CT-Tech-02-004, The Boeing Company, 2002.
[22] Betts, J., and Huffman, W., Path-Constrained Trajectory Optimization
Using Sparse Sequential Quadratic Programming, Journal of Guidance, Con-
trol, and Dynamics, Vol. 16, No. 1, pp. 59-68, 1993.
[23] Betts, J., and Huffman, W., Mesh Refinement in Direct Transcription
Methods for Optimal Control, Optimal Control Applications & Methods, Vol.
19, pp. 1-21, 1998.
[24] Bocarov, S., Lutze, F.H., and Cliff, E.M., Time-Optimal Reorienta-
tion Maneuvers for a Combat Aircraft, Journal of Guidance, Control, and
Dynamics, Vol. 16, No. 2, pp. 232-240, 1993.
[25] Bock, H.G. and Plitt, K.J., A Multiple Shooting Algorithm for Direct
Solution of Optimal Control Problems, Proceedings of the IFAC 9th World
Congress, Budapest, Hungary, pp. 242-247, 1984.
[26] Breitner, M.H., Robust Optimal Onboard Reentry Guidance of a Space
Shuttle: Dynamic Game Approach and Guidance Synthesis via Neural Net-
works, Journal of Optimization Theory and Applications, Vol. 107, No. 3, pp.
481-503, 2000.
[27] Breitner, M., Pesch, H., and Grimm, W., Complex Differential Games
of Pursuit-Evasion Type with State Constraints, Part 1: Necessary Condi-
tions for Optimal Open-Loop Strategies, Journal of Optimization Theory and
Applications, Vol. 78, No. 3, pp. 419-441, 1993.
[28] Breitner, M., Pesch, H., and Grimm, W., Complex Differential Games
of Pursuit-Evasion Type with State Constraints, Part 2: Multiple Shooting and
Homotopy, Journal of Optimization Theory and Applications, Vol. 78, No. 3,
pp. 442-464, 1993.
[29] Breitner, M., Rettig, U., and von Stryk, O., Robust Optimal Missile
Guidance Upgrade with a Dynamic Stackelberg Game Linearization, Proceed-
ings of the 8th International Symposium on Dynamic Games and Applications,
Maastricht, the Netherlands, pp. 100-105, 1998.
[30] Bryson, A.E., Dynamic Optimization, Prentice Hall, 1998.
[31] Bryson, A.E., Applied Linear Optimal Control; Examples and Algorithms,
Cambridge University Press, 2002.
[32] Bryson, A.E., Desai, M.N., and Hoffman, W.C., Energy-State Ap-
proximation in Performance Optimization of Supersonic Aircraft, Journal of
Aircraft, Vol. 6, pp. 481-488, 1969.
[33] Bryson, A., and Ho, Y., Applied Optimal Control, Revised Printing, Hemi-
sphere, New York, 1975.
[34] Bulirsch, R., Montrone, F., and Pesch, H.J., Abort Landing in the
Presence of Windshear as a Minimax Optimal Control Problem, Part 1: Nec-
essary Conditions, Journal of Optimization Theory and Applications, Vol. 70,
No. 1, pp. 1-23, 1991.
[35] Bulirsch, R., Montrone, F., and Pesch, H.J., Abort Landing in the
Presence of Windshear as a Minimax Optimal Control Problem, Part 2: Mul-
tiple Shooting and Homotopy, Journal of Optimization Theory and Applica-
tions, Vol. 70, No. 2, pp. 223-254, 1991.
[36] Calise, A.J., and Brandt, N., Generation of Launch Vehicle Abort Trajec-
tories Using a Hybrid Optimization Method, Proceedings of the AIAA Guid-
ance, Navigation, and Control Conference, Monterey, CA, 5-8 August 2002,
AIAA Paper No. 2002-4560.
[37] Carlson, E.B., and Zhao, Y.J., Optimal Short Takeoff of Tiltrotor Aircraft
in One Engine Failure, Journal of Aircraft, Vol. 39, No. 2, 2002.
[38] Conway, B.A., and Larson, K.M., Collocation Versus Differential Inclu-
sion in Direct Optimization, Journal of Guidance, Control, and Dynamics,
Vol. 21, No.5, pp. 780-785, 1998.
[39] Derr, V.E., Atmospheric Handbook: Atmospheric Data Tables Available on
Computer Tape, World Data Center A for Solar-Terrestrial Physics, Report
UAG-89, Boulder, Colorado, 1984.
[40] Desideri, J-A., Peigin, S., and Timchenko, S., Application of Genetic
Algorithms to the solution of the Space Vehicle Reentry Trajectory Optimiza-
tion Problem, INRIA Sophia Antipolis, Research Report No. 3843, 1999.
[41] Diehl, M., Uslu, I., Findeisen, R., Schwarzkopf, S., Allgower,
F., Bock, H.G., Burner, T., Gilles, E.D., Kienle, A., Schloder,
J.P., and Stein, E., Real-time optimization of large scale process models:
Nonlinear model predictive control of a high purity distillation column, in
Groetschel, M., Krumke, S.O., and Rambau, J., editors, Online Opti-
mization of Large Scale Systems: State of the Art, Springer, 2001.
[42] Diehl, M.M., Real-Time Optimization for Large Scale Nonlinear Processes,
PhD thesis, Ruprecht-Karls-Universitat, Heidelberg, 2001.
[43] Dixit, A.K. and Pindyck, R.S. Investment under Uncertainty, Princeton
University Press, Princeton, NJ, 1994.
[44] Ehtamo, H., and Raivio, T., On Applied Nonlinear and Bilevel Program-
ming for Pursuit-Evasion Games, Journal of Optimization Theory and Appli-
cations, Vol. 108, No. 1, pp. 65-96, 2001.
[45] Fahroo, F., and Ross, I.M., Direct Trajectory Optimization by a Chebyshev
Pseudospectral Method, Journal of Guidance, Control, and Dynamics, Vol. 25,
No. 1, pp. 160-166, 2002.
[46] Fletcher, R., Practical Methods of Optimization, 2nd ed., Wiley, 2000.
[47] Gill, P.E., Murray, W., and Saunders, M.A., User’s Guide for SNOPT
5.3: A Fortran Package for Large-Scale Nonlinear Programming, Technical
report 97-5, Department of Mathematics, University of California, San Diego,
La Jolla, CA, 1997.
[48] Gill, P.E., Murray, W., and Saunders, M.A., SNOPT: An SQP Algo-
rithm for Large-Scale Constrained Optimization, SIAM Journal on Optimiza-
tion, Vol. 12, No. 4, pp. 979-1006, 2002.
[49] Glizer, V.Y., and Shinar, J., Optimal Evasion from a Pursuer with De-
layed Information, Journal of Optimization Theory and Applications, Vol. 111,
No. 1, pp. 7-38, 2001.
[50] Goddard, R.H., A Method of Reaching Extreme Altitudes, Smithsonian Int.
Misc. Collections 71, 1919. Also American Rocket Society 1946.
[51] Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine
Learning, Addison-Wesley, 1989.
[52] Green, A., Shinar, J., and Guelman, M., Game Optimal Guidance Law
Synthesis for Short Range Missiles, Journal of Guidance, Control, and Dynam-
ics, Vol. 15, No. 1, pp. 191-197, 1992.
[53] Grimm, W., and Schaeffer, J.M., Optimal Launch Conditions on the
“No-Escape Envelope”, Computers and Mathematics with Applications, Vol.
18, No. 1-3, pp. 45-59, 1989.
[54] Grimm, W., and Well, K., Modeling Air Combat as Differential Game
- Recent Approaches and Future Requirements, in Hamalainen, R.P., and
Ehtamo, H., editors, Differential Games - Developments in Modeling and
Computation, Springer, Berlin, pp. 1-13, 1991.
[55] Hanson, F.B., and Ryan, D., Optimal harvesting with both population and
price dynamics, Mathematical Biosciences, Vol. 148, No. 2, pp. 129-146, 1998.
[56] Hanson, F.B., Techniques in Computational Stochastic Dynamic Program-
ming, in Leondes, C.T., editor, Control and Dynamic Systems Volume 76:
Stochastic Digital Control System Techniques, Academic Press, New York, pp.
103-162, 1996.
[57] Hargraves, C.R., and Paris, S.W., Direct Trajectory Optimization Using
Nonlinear Programming and Collocation, Journal of Guidance, Vol. 10, No. 4,
pp. 338-342, 1987.
[58] Herman, A.L., and Spencer, D.B., Optimal, Low-Thrust Earth-Orbit
Transfers Using Higher-Order Collocation Methods, Journal of Guidance, Con-
trol, and Dynamics, Vol. 25, No. 1, pp. 40-47, 2002.
[59] Holland, J., Adaptation in Natural and Artificial Systems, University of
Michigan Press, Ann Arbor, 1975.
[60] Hu, J., Prandini, M., and Sastry, S., Optimal Coordinated Maneuvers for
Three-Dimensional Aircraft Conflict Resolution, Journal of Guidance, Con-
trol, and Dynamics, Vol. 25, No. 5, pp. 888-900, 2002.
[61] Hull, D.G., Conversion of Optimal Control Problems into Parameter Opti-
mization Problems, Journal of Guidance, Control, and Dynamics, Vol. 20, No.
1, pp. 57-60, 1997.
[62] Hamalainen, R.P. and Ehtamo, H., editors, Differential Games - Devel-
opments in Modeling and Computation, Springer, Berlin, 1991.
[63] Hamalainen, R.P. and Sipila, A., Optimal control of inspiratory airflow
in breathing, Optimal Control Applications and Methods, Vol. 5, pp. 177-191,
1984.
[64] Imado, F., Some Aspects of a Realistic Three-Dimensional Pursuit-Evasion
Game, Journal of Guidance, Control, and Dynamics, Vol. 16, No. 2, pp. 289-
293, 1993.
[65] Imado, F., and Miwa, S., Fighter Evasive Maneuvers Against Proportional
Navigation Missile, Journal of Aircraft, Vol. 23, No. 11, pp. 825-830, 1986.
[66] Imado, F., and Miwa, S., Fighter Evasive Boundaries Against Missiles,
Computers and Mathematics with Applications, Vol. 18, No. 1-3, pp. 1-14,
1989.
[67] Isaacs, R., Differential Games, Krieger Publishing Co., New York, 1975.
[68] Jarmark, B., A Missile Duel Between Two Aircraft, Journal of Guidance,
Control, and Dynamics, Vol. 8, No. 4, pp. 508-513, 1985.
[69] Jarvenpaa, T., Optimization Software for Evasive Aircraft Maneuvers,
M.Sc. thesis, Systems Analysis Laboratory, Helsinki University of Technology,
2003.
[70] Joshi, H.R., Optimal control of an HIV immunology model, Optimal Control
Applications and Methods, Vol. 23, No. 4, pp. 199-213, 2002.
[71] Kaiser, F., Der Steigflug mit Strahlflugzeugen - Teil I, Bahngeschwindigkeit
besten Steigens, Versuchsbericht 262-02-L44, Messerschmitt A.G., Augsburg,
1944.
[72] Kamien, M.L. and Schwartz, N.L., Dynamic Optimization - The calculus
of variations and optimal control in economics and management, 2nd edition,
North Holland, 1991.
[73] Kawachi, K., Inada, Y., and Azuma, A., Optimal Flight Path of Flying
Fish, Journal of Theoretical Biology, Vol. 163, No. 2, pp. 145-159, 1993.
[74] Kirk, D.E., Optimal Control Theory, Prentice Hall, 1970.
[75] Komduur, H.J., and Visser, H.G., Optimization of Vertical Plane Cobra-
like Pitch Reversal Maneuvers, Journal of Guidance, Control, and Dynamics,
Vol. 25, No. 4, 2002.
[76] Kreim, H., Kugelmann, B., Pesch, H.J., and Breitner, M.H., Minimizing the Maximum Heating of a Re-Entering Space Shuttle: An Optimal Control Problem with Multiple Control Constraints, Optimal Control Applications and Methods, Vol. 17, No. 1, pp. 45-69, 1996.
[77] Kreindler, E., Optimality of proportional navigation, AIAA Journal, Vol.
11, June 1973, pp. 878-880.
[78] Kwakernaak, H., and Sivan, R., Linear Optimal Control Systems, Wiley, New York, 1972.
[79] Ledzewicz, U., and Schattler, H., Optimal Bang-Bang Controls for a
Two-Compartment Model in Cancer Chemotherapy, Journal of Optimization
Theory and Applications, Vol. 114, No. 3, pp. 609-637, 2002.
[80] Lewin, J., Differential Games, Springer, 1994.
[81] Lipman, Y., and Shinar, J., A Linear Pursuit-Evasion Game with a State
Constraint for a Highly Maneuverable Evader, in Olsder, G., editor, New
Trends in Dynamic Games and Applications, Birkhauser, New York, pp. 142-
164, 1995.
[82] Maciejowski, J.M., Predictive Control with Constraints, Pearson Education, 2002.
[83] Michalewicz, Z., and Schoenauer, M., Evolutionary Algorithms for Con-
strained Optimization Problems, Evolutionary Computation, Vol. 4, No. 1, pp.
1-32, 1996.
[84] Miele, A., Flight Mechanics, Addison-Wesley, Reading, MA, 1962.
[85] Miettinen, K., Makela, M.M., Neittaanmaki, P., and Periaux, J.,
editors, Evolutionary Algorithms in Engineering and Computer Science, Wiley,
1999.
[86] Morari, M., and Lee, J.H., Model predictive control: past, present and future, Computers and Chemical Engineering, Vol. 23, pp. 667-682, 1999.
[87] Moritz, K., Polis, R., and Well, K.H., Pursuit-evasion in medium-range air combat scenarios, Computers and Mathematics with Applications, Vol. 13, No. 1-3, pp. 167-180, 1987.
[88] Mulgund, S.S., and Stengel, R.F., Optimal Recovery from Microburst Wind Shear, Journal of Guidance, Control, and Dynamics, Vol. 16, No. 6, pp. 1010-1017, 1993.
[89] Murtaugh, S.A., and Criel, H.E., Fundamentals of proportional navigation, IEEE Spectrum, Vol. 3, December 1966, pp. 75-85.
[90] Nah, R.S., Vadali, S.R., and Braden, E., Fuel-Optimal, Low-Thrust,
Three-Dimensional Earth-Mars Trajectories, Journal of Guidance, Control,
and Dynamics, Vol. 24, No. 6, pp. 1100-1107, 2001.
[91] Nocedal, J., and Wright, S.J., Numerical Optimization, Springer, 1999.
[92] Norsell, M., Flight Testing Radar Detection of the Saab 105 in Level Flight,
Journal of Aircraft, Vol. 39, No. 5, pp. 894-897, 2002.
[93] Norsell, M., Radar Cross Section Constraints in Flight-Path Optimization,
Journal of Aircraft, Vol. 40, No. 2, pp. 412-415, 2003.
[94] Oberle, H.J., Numerical Treatment of Minimax Optimal Control Problems with Application to the Reentry Flight Path Problem, Journal of the Astronautical Sciences, Vol. 36, No. 1/2, pp. 159-178, 1988.
[95] Ong, S.Y., and Pierson, B.L., Optimal Planar Evasive Aircraft Maneuvers
Against Proportional Navigation Missiles, Journal of Guidance, Control, and
Dynamics, Vol. 19, No. 6, pp. 1210-1215, 1996.
[96] Pesch, H.J., A practical guide to the solution of real-life optimal control
problems, Control and Cybernetics, Vol. 23, No. 1/2, pp. 7-60, 1994.
[97] Petrosjan, L.A., Differential Games of Pursuit, World Scientific, Singa-
pore, 1993.
[98] Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., and Mishchenko, E.F., The Mathematical Theory of Optimal Processes, Wiley, New York, 1962.
[99] Raivio, T., Computational methods for dynamic optimization and pursuit-evasion games, Doctoral dissertation, Helsinki University of Technology, Systems Analysis Laboratory, Research Report A80, March 2000.
[100] Raivio, T., Capture Set Computation of an Optimally Guided Missile, Jour-
nal of Guidance, Control, and Dynamics, Vol. 24, No. 6, pp. 1167-1175, 2001.
[101] Raivio, T., and Ehtamo, H., Visual Aircraft Identification as a Pursuit
Evasion Game, Journal of Guidance, Control, and Dynamics, Vol. 23, No. 4,
pp. 701-708, 2000.
[102] Rao, A.V., Tang, S., and Hallman, W., Numerical optimization study of
multiple-pass aeroassisted orbital transfer, Optimal Control Applications and
Methods, Vol. 23, No. 4, pp. 215-238, 2002.
[103] Renders, J-M., and Flasse, S.P., Hybrid Methods Using Genetic Algorithms for Global Optimization, IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, Vol. 26, No. 2, pp. 243-258, 1996.
[104] Roberts, S.M., and Shipman, J.S., Two-Point Boundary Value Problems:
Shooting Methods, Elsevier, New York, 1972.
[105] Ross, I.M., and Fahroo, F., A Perspective on Methods for Trajectory Op-
timization, Proceedings of the AIAA/AAS Astrodynamics Specialist Confer-
ence, Monterey, CA, 5-8 August 2002, AIAA Paper No. 2002-4727.
[106] Seywald, H., Trajectory Optimization Based on Differential Inclusion, Journal of Guidance, Control, and Dynamics, Vol. 17, No. 3, pp. 480-487, 1994.
[107] Seywald, H., Cliff, E.M., and Well, K.H., Range Optimal Trajectories
for an Aircraft Flying in the Vertical Plane, Journal of Guidance, Control,
and Dynamics, Vol. 17, No. 2, pp. 389-398, 1994.
[108] Shima, T., Shinar, J., and Weiss, H., New Interceptor Guidance Law
Integrating Time-Varying and Estimation-Delay Models, Journal of Guidance,
Control, and Dynamics, Vol. 26, No. 2, pp. 295-303, 2003.
[109] Shinar, J., and Gazit, R., Optimal "No-Escape" Firing Envelopes of Guided Missiles, AIAA Paper 85-1960, 1985.
[110] Shinar, J., and Glizer, V.J., Application of Receding Horizon Control
Strategy to Pursuit-Evasion Problems, Optimal Control Applications and
Methods, Vol. 16, No. 2, pp. 127-142, 1995.
[111] Shinar, J., Guelman, M., and Green, A., An Optimal Guidance Law for
a Planar Pursuit-Evasion Game of Kind, Computers and Mathematics with
Applications, Vol. 18, No. 1-3, pp. 35-44, 1989.
[112] Shinar, J., and Gutman, S., Three-Dimensional Optimal Pursuit and Eva-
sion with Bounded Controls, IEEE Transactions on Automatic Control, Vol.
25, No. 3, pp. 492-496, 1980.
[113] Shinar, J., Merari, A., Blank, D., and Medinah, E.M., Analysis of
Optimal Turning Maneuvers in the Vertical Plane, Journal of Guidance and
Control, Vol. 3, No. 1, pp. 69-77, 1980.
[114] Shinar, J., and Shima, T., A Game Theoretical Interceptor Guidance Law
for Ballistic Missile Defence, Proceedings of the 35th IEEE Conference on
Decision and Control, Kobe, Japan, The Institute of Electrical and Electronic
Engineers Inc., New York, pp. 2780-2785, 1996.
[115] Shinar, J., and Shima, T., Nonorthodox Guidance Law Development Ap-
proach for Intercepting Maneuvering Targets, Journal of Guidance, Control,
and Dynamics, Vol. 25, No. 4, pp. 658-666, 2002.
[116] Shinar, J., and Steinberg, D., Analysis of Optimal Evasive Maneuvers
Based On a Linearized Two-Dimensional Model, Journal of Aircraft, Vol. 14,
pp. 795-802, August 1977.
[117] Shinar, J., and Tabak, R., New Results in Optimal Missile Avoidance
Analysis, Journal of Guidance, Control, and Dynamics, Vol. 17, No. 5, pp.
897-902, 1994.
[118] Stevens, B.L., and Lewis, F.L., Aircraft Control and Simulation, Wiley,
1992.
[119] Stoer, J., and Bulirsch, R., Introduction to Numerical Analysis, Springer-Verlag, New York, 1980.
[120] Strizzi, J., Ross, I.M., and Fahroo, F., Towards Real-Time Computation
of Optimal Controls for Nonlinear Systems, Proceedings of the AIAA Guid-
ance, Navigation, and Control Conference, Monterey, CA, 5-8 August 2002,
AIAA Paper No. 2002-4945.
[121] Tang, S., and Conway, A., Optimization of Low-Thrust Interplanetary Tra-
jectories Using Collocation and Nonlinear Programming, Journal of Guidance,
Control, and Dynamics, Vol. 18, No. 3, pp. 599-604, 1995.
[122] Turetsky, V., and Shinar, J., Missile guidance laws based on pursuit
evasion game formulations, Automatica, Vol. 39, No. 4, pp. 607-618, 2003.
[123] Vanderbei, R.J., LOQO: An interior point code for quadratic programming, Technical Report, Statistics and Operations Research, Princeton University, SOR-94-15, 1994.
[124] Vanderbei, R.J., LOQO User’s Manual - Version 4.05, Technical Report,
Operations Research and Financial Engineering, Princeton University, ORFE-
99-??, 2000. http://www.princeton.edu/~rvdb/tex/loqo/loqo405.pdf (cited
12.1.2003).
[125] Vanderbei, R.J., Case Studies in Trajectory Optimization: Trains, Planes,
and Other Pastimes, Optimization and Engineering, Vol. 2, No. 2, pp. 215-243,
2001.
[126] Virtanen, K., Ehtamo, H., Raivio, T., and Hamalainen, R.P., VIATO – Visual Interactive Aircraft Trajectory Optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part C, Vol. 29, No. 3, pp. 409-421, 1999.
[127] Virtanen, K., Raivio, T., and Hamalainen, R.P., Decision Theoretical
Approach to Pilot Simulation, Journal of Aircraft, Vol. 36, No. 4, pp. 632-641,
1999.
[128] Virtanen, K., Raivio, T., and Hamalainen, R.P., An Influence Diagram Approach to One-on-One Air Combat, Proceedings of the 10th International Symposium on Differential Games and Applications, St. Petersburg, Russia, July 8-11, 2002, Vol. 2, pp. 859-864.
[129] Waltz, R.A., and Nocedal, J., Knitro User’s Manual, version 3.1,
Technical Report, Optimization Technology Center, Northwestern University,
OTC-2003/5, 2003.
[130] Zarchan, P., Proportional Navigation and Weaving Targets, Journal of Guidance, Control, and Dynamics, Vol. 18, No. 5, pp. 969-974, 1995.
[131] Zarchan, P., Tactical and Strategic Missile Guidance, Third Edition,
Progress in Astronautics and Aeronautics, Vol. 176, AIAA, 1997.
[132] Zimmerman, C., Dukeman, G., and Hanson, J., An Automated Method to
Compute Orbital Re-entry Trajectories with Heating Constraints, Proceedings
of the AIAA Guidance, Navigation, and Control Conference, Monterey, CA,
5-8 August 2002, AIAA Paper No. 2002-4454.