MAE 552 – Heuristic Optimization
Lecture 2
January 25, 2002
The optimization problem is then:
Find values of the variables that minimize or maximize the objective function while satisfying the constraints.
The standard form of the constrained optimization problem can be written as:
Minimize:   F(x)                                objective function
Subject to: gj(x) ≤ 0,  j = 1, …, m             inequality constraints
            hk(x) = 0,  k = 1, …, l             equality constraints
            xi_lower ≤ xi ≤ xi_upper, i = 1, …, n   side constraints
where x = (x1, x2, x3, …, xn) are the design variables
Conditions for Optimality – Unconstrained Problems
1. ∇F(x) = 0: the gradient of F(x) must vanish at the optimum.
2. The Hessian matrix must be positive definite (i.e., all eigenvalues positive at the optimum point).
H = | ∂²F(x)/∂x1²     ∂²F(x)/∂x1∂x2   …   ∂²F(x)/∂x1∂xn |
    | ∂²F(x)/∂x2∂x1   ∂²F(x)/∂x2²     …   ∂²F(x)/∂x2∂xn |
    |      ⋮               ⋮                    ⋮        |
    | ∂²F(x)/∂xn∂x1   ∂²F(x)/∂xn∂x2   …   ∂²F(x)/∂xn²   |
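These two conditions can be checked numerically. Below is a minimal sketch using central finite differences on a hypothetical quadratic test function (the function and minimizer are illustrative, not from the lecture):

```python
import numpy as np

# Hypothetical test function (not from the lecture): F(x) = (x1-1)^2 + 2(x2+3)^2
def F(x):
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 3.0)**2

def gradient(F, x, h=1e-6):
    """Central-difference approximation of the gradient of F at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (F(x + e) - F(x - e)) / (2 * h)
    return g

def hessian(F, x, h=1e-4):
    """Central-difference approximation of the Hessian of F at x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (F(x + ei + ej) - F(x + ei - ej)
                       - F(x - ei + ej) + F(x - ei - ej)) / (4 * h**2)
    return H

x_star = np.array([1.0, -3.0])   # the known minimizer of this test function
g = gradient(F, x_star)
eigvals = np.linalg.eigvalsh(hessian(F, x_star))
print(np.allclose(g, 0, atol=1e-5))  # condition 1: gradient vanishes
print(np.all(eigvals > 0))           # condition 2: Hessian positive definite
```

For this quadratic the Hessian is diag(2, 4) everywhere, so the local minimum found is in fact global (a convex design space).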
Conditions for Optimality – Unconstrained Problems
• A positive definite Hessian at the minimum ensures only that a local minimum has been found
• The minimum is the global minimum only if it can be shown that the Hessian is positive definite for all possible values of x. This would imply a convex design space.
• Very hard to prove in practice!!!!
Conditions for Optimality – Constrained Problems
Kuhn–Tucker Conditions
1. x* is feasible
2. λj gj(x*) = 0,  j = 1, …, m
3. ∇F(x*) + Σ(j=1..m) λj ∇gj(x*) + Σ(k=1..l) λ(m+k) ∇hk(x*) = 0
   with λj ≥ 0 and λ(m+k) unrestricted in sign
These conditions only guarantee that x* is a local optimum.
Conditions for Optimality – Constrained Problems
• In addition to the Kuhn–Tucker conditions, two other conditions must be satisfied to guarantee a global optimum.
1. Hessian must be positive definite for all x.
2. Constraints must be convex.
A constraint is convex if a line connecting any two points in the feasible space always lies entirely in the feasible region of the design space.
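The Kuhn–Tucker conditions can be verified numerically at a candidate point. Below is a sketch on a hypothetical one-constraint problem (the objective, constraint, optimum, and multiplier are illustrative, not from the lecture):

```python
import numpy as np

# Hypothetical example (not from the lecture):
# minimize F(x) = x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0.
# The optimum is x* = (0.5, 0.5) with multiplier lambda = 1.

def grad_F(x):   # analytic gradient of the objective
    return np.array([2 * x[0], 2 * x[1]])

def g(x):        # inequality constraint, feasible when g(x) <= 0
    return 1.0 - x[0] - x[1]

def grad_g(x):   # gradient of the constraint
    return np.array([-1.0, -1.0])

x_star, lam = np.array([0.5, 0.5]), 1.0

feasible        = g(x_star) <= 1e-9                     # condition 1
complementarity = abs(lam * g(x_star)) <= 1e-9          # condition 2
stationarity    = np.allclose(grad_F(x_star) + lam * grad_g(x_star), 0)  # condition 3
nonneg          = lam >= 0

print(feasible and complementarity and stationarity and nonneg)  # True
```

Here both the objective and the constraint are convex and the Hessian (2I) is positive definite for all x, so the Kuhn–Tucker point is in fact a global optimum.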
Determining the Complexity of Problems
Why are some problems difficult to Solve?
1. The number of possible solutions in the search space is so large as to forbid an exhaustive search for the best answer.
2. The problem is so complex that just to facilitate any answer at all requires that we simplify the model such that any result is essentially useless.
3. The evaluation function (objective function) that describes the quality of the proposed solution is noisy or time-varying, thereby requiring a series of solutions to be found.
4. The person solving the problem is inadequately prepared or imagines some psychological barrier that prevents them from discovering a solution.
1. The Size of the Search Space
Example 1: Traveling Salesman Problem (TSP)
• A salesman must visit every city in a territory exactly once and then return home covering the shortest distance
• Given the cost of traveling between each pair of cities, how would the salesman plan his trip to minimize the distance traveled?
[Figure: map of five cities – NY, LA, Orlando, Dallas, Seattle]
1. The Size of the Search Space
What is the size of the search space for the TSP?
Each tour can be described as a permutation of the cities.
Seattle-NY-Orlando-Dallas-LA
LA-NY-Seattle-Dallas-Orlando
etc.
Certain Tours are identical
Seattle-NY-Orlando-Dallas-LA
NY-Orlando-Dallas-LA-Seattle
Orlando-Dallas-LA-Seattle-NY
Every tour can be represented in 2n different ways, where n is the number of cities: n choices of starting city times 2 directions of travel.
1. The Size of the Search Space
Number of permutations of n cites = n!
2n ways to represent each tour.
Total number of unique tours =n!/(2n) = (n-1)!/2
Size of search space S = (n-1)!/2
So for our example n=5 and S=12 which could easily be solved by hand.
S grows factorially, however:
n=10  S = 9!/2 = 181,440 possible solutions
n=20  S = 19!/2 ≈ 6×10^16 possible solutions
1. The Size of the Search Space
n=50  S = 49!/2 ≈ 3×10^62 possible solutions
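The tour counts above can be reproduced directly from the formula S = (n-1)!/2:

```python
from math import factorial

def tsp_search_space(n):
    """Number of distinct TSP tours of n cities: n!/(2n) = (n-1)!/2."""
    return factorial(n - 1) // 2

print(tsp_search_space(5))    # 12 tours, small enough to solve by hand
print(tsp_search_space(10))   # 181440
print(tsp_search_space(20))   # about 6.1e16
```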
1. The Size of the Search Space
Example 2: Nonlinear Programming Problem (NLP)

Minimize: F(x) = | Σ(i=1..n) cos⁴(xi) − 2 Π(i=1..n) cos²(xi) | / √( Σ(i=1..n) i·xi² )

Subject to: g1(x): 0.75 − Π(i=1..n) xi ≤ 0
            g2(x): Σ(i=1..n) xi − 15n/2 ≤ 0

Bounds: 0 ≤ xi ≤ 10,  i = 1, …, n
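Assuming the objective and constraints above correspond to the well-known "bump" test problem of Michalewicz and Fogel (which this material appears to follow), they can be evaluated as:

```python
import math

# Sketch, assuming the NLP is the Michalewicz & Fogel "bump" test problem.
def F(x):
    """|sum cos^4(xi) - 2 prod cos^2(xi)| / sqrt(sum i*xi^2), i = 1..n."""
    n = len(x)
    num = abs(sum(math.cos(xi)**4 for xi in x)
              - 2.0 * math.prod(math.cos(xi)**2 for xi in x))
    den = math.sqrt(sum((i + 1) * x[i]**2 for i in range(n)))
    return num / den

def g1(x):  # product constraint: feasible when g1(x) <= 0
    return 0.75 - math.prod(x)

def g2(x):  # sum constraint: feasible when g2(x) <= 0
    return sum(x) - 15.0 * len(x) / 2.0

x = [1.0, 2.0, 3.0]                       # an arbitrary feasible point
print(F(x) >= 0, g1(x) <= 0, g2(x) <= 0)  # True True True
```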
1. The Size of the Search Space
What is the size of the search space for the NLP?
If we assume a machine precision of 6 decimal places, then each variable can take on 10^7 possible values.
The total number of possible solutions = 10^(7n)
For n=2 there are 10^14 possible solutions
For n=50 there are 10^350 possible solutions!!!!
Impossible to enumerate all these solutions even with the most powerful computers.
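The count follows directly from the discretization: 10^7 values per variable raised to the number of variables. A quick sketch:

```python
# Size of the discretized search space for the NLP: with 6-decimal
# precision on [0, 10], each variable takes 10^7 values, so n variables
# give (10^7)^n = 10^(7n) combinations.
def nlp_search_space(n, values_per_variable=10**7):
    return values_per_variable ** n

print(nlp_search_space(2) == 10**14)    # True
print(nlp_search_space(50) == 10**350)  # True
```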
2. Modeling the Problem
Whenever we solve a problem we are actually only finding a solution to a MODEL of the problem
Example: Finite Element Model, CFD Model etc.
2 Steps to Problem Solving
1. Creating a Model for the Problem
2. Using that Model to Generate a Solution
Problem => Model => Solution
The ‘solution’ is only a ‘solution’ in terms of the model used.
2. Modeling the Problem
Example: Methods of Performing Nonlinear Aerodynamic Drag Predictions
Solution Method                         Time for 1 solution
Linear Theory Solution                  2 seconds
Wing-Fuselage Euler Solution            16 minutes
Wing-Fuselage Navier-Stokes Solution    2 hours
There is a tradeoff between the fidelity of the model and the ability to solve it.
2. Modeling the Problem
When faced with a complex problem we have two choices:
1. Simplify the model and try for a more exact solution with a traditional optimizer.
2. Keep the model as is and accept a possibly approximate solution, or use a nontraditional optimization method to try to find an exact solution.
This can be written (subscripts: a = approximate, p = precise):
1. Problem => Model_a => Solution_p(Model_a)
2. Problem => Model_p => Solution_a(Model_p)
• It is generally better to use strategy 2, because with strategy 1 there is no guarantee that a precise solution to an approximate model will be useful.
3. System Changes over Time
Real world systems change over time
Example: In the TSP the time to travel between cities could change as a result of many factors.
• Road conditions
• Traffic Patterns
• Accidents
Suppose there are two possibilities that are equally likely for a trip from New York to Buffalo
1. Everything goes fine and it takes 6 hours
2. You get delayed by one of the factors above and it takes 7 hours
3. System Changes over Time
How can this information be put into the model?
1. Trip takes 6 hours 50% of the time
2. Trip takes 7 hours 50% of the time
[Figure: route from NY to Buffalo, travel time uncertain]
3. System Changes over Time
How can this information be put into the model?
1. Simple approach: use the expected value, 6.5 hours, for the trip.
• Problem: It never takes exactly 6.5 hours to make the trip, so the solution found will be for the WRONG problem.
2. Repeatedly simulate the system with each case occurring 50% of the time.
• Problem: It could be expensive to run repeated simulations.
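The second, simulation-based approach can be sketched as follows (the seed and sample count are arbitrary illustrative choices):

```python
import random

# Sketch of the second approach: repeatedly simulate the trip, drawing
# 6 h or 7 h with equal probability, instead of fixing the time at 6.5 h.
def simulate_trip(rng):
    return 6.0 if rng.random() < 0.5 else 7.0

rng = random.Random(42)          # fixed seed so the sketch is repeatable
times = [simulate_trip(rng) for _ in range(100_000)]
mean = sum(times) / len(times)

print(abs(mean - 6.5) < 0.05)    # the long-run average approaches 6.5 h
print(set(times) == {6.0, 7.0})  # but no single trip ever takes 6.5 h
```

This illustrates both points: the average converges to 6.5 hours, yet every individual simulated trip takes 6 or 7 hours, and the repeated evaluations are what makes the approach expensive.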
4. Constraints
• Real world problems do not allow you to choose from the entire search space.
Minimize: F(x) = | Σ(i=1..n) cos⁴(xi) − 2 Π(i=1..n) cos²(xi) | / √( Σ(i=1..n) i·xi² )

Subject to: g1(x): 0.75 − Π(i=1..n) xi ≤ 0
            g2(x): Σ(i=1..n) xi − 15n/2 ≤ 0

Bounds: 0 ≤ xi ≤ 10,  i = 1, …, n
4. Constraints
• Effect of Constraints
1. Good Effect: Remove part of the search space from consideration.
Inequality Constraints: Cut the design space into feasible and infeasible regions.
[Figure: x1–x2 design space split into feasible and infeasible regions by the constraint g: x1 + x2 ≤ 3]
4. Constraints
• Effect of Constraints
1. Good Effect: Remove part of the search space from consideration.
Equality Constraints: Reduce the design space to a line or a plane.
[Figure: x1–x2 design space reduced to the line g: x1 + x2 = 3, with infeasible regions on both sides]
4. Constraints
• Effect of Constraints
1. Good Effect: Remove part of the search space from consideration.
• If an algorithm can be designed so that it only considers the feasible part of the design space, the number of potential solutions can be reduced.
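A sketch of such a feasibility filter, using the inequality constraint g: x1 + x2 ≤ 3 from the figure above (the candidate points are illustrative):

```python
# Restrict the search to the feasible region of g: x1 + x2 <= 3 by
# discarding any candidate that violates the constraint.
def is_feasible(x):
    return x[0] + x[1] <= 3.0

candidates = [(0.0, 0.0), (1.0, 1.5), (2.0, 2.0), (3.0, 0.5)]
feasible = [x for x in candidates if is_feasible(x)]
print(feasible)  # [(0.0, 0.0), (1.0, 1.5)]
```

Only feasible candidates reach the objective-function evaluation, so the number of potential solutions the algorithm must consider is reduced.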
4. Constraints
2. Bad Effect: Need an algorithm that finds new solutions that are an improvement over previous solutions AND maintains feasibility.
• Often the optimal solution lies directly along one or more constraints. It is difficult to move along a constraint without corrupting the solution.
• One option is to design an algorithm that locates a feasible design and then never corrupts it while searching for a design that better satisfies the objective function.
• Very difficult to do!!!!
Complexity Theory
•The complexity of decision and optimization problems is classified according to the relationship between solution time and input size.
•The simplest way to measure the running time of a program is to determine the overall number of instructions executed by the algorithm before halting.
•For a problem with input size n determine the cost (time) of applying the algorithm on the worst case instance of the problem.
•This provides an upper bound on the execution time.
Complexity Theory
•The goal is to express the execution time for the algorithm in terms of the input variables.
Time=F (Num of Inputs)
•The standard O notation is used to describe the execution cost of the algorithm with input size n:
Execution time grows no faster than O(f(n)) as n increases asymptotically.
Asymptotic Analysis
• Let t(x) be the running time of algorithm A on input x. The worst case running time of A is given by
t(n) = max{ t(x) : x such that |x| ≤ n }
• Upper bound: A has complexity O(f(n)) if t(n) is O(f(n)) (that is, we ignore constants)
• Lower bound: A has complexity Ω(f(n)) if t(n) is Ω(f(n))
Complexity Theory
Examples:
Execution Time           Bounded By
                         O(n)
bn²                      O(n²)
bn² + log(n) + n         O(n²)
2ⁿ + bn² + log(n) + n    O(2ⁿ)
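The table keeps only the dominant term. A quick numerical check that the lower-order terms become negligible as n grows, assuming an illustrative constant b = 5:

```python
import math

# For large n, bn^2 + log(n) + n is dominated by its bn^2 term,
# which is why it is bounded by O(n^2).
b = 5
n = 10**6
full = b * n**2 + math.log(n) + n
dominant = b * n**2

print(full / dominant < 1.001)  # True: the ratio is essentially 1
```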
Input size
• Size of input: the number of bits needed to represent the specific input
• There exists an encoding scheme that can be used to describe any problem instance
– For any pair of natural encoding schemes and for any instance x, the lengths of the resulting strings are polynomially related
Complexity Classes
• For any function f(n), TIME(f(n)) is the set of decision problems which can be solved with a time complexity O(f(n))
– P = the union of TIME(n^k) for all k
– EXPTIME = the union of TIME(2^(n^k)) for all k
• P is contained in EXPTIME
• It is possible to prove (by diagonalization) that EXPTIME is not contained in P
Examples
• SATISFYING TRUTH ASSIGNMENT: given a logical formula F and a truth assignment f, does f satisfy F?
– SATISFYING TRUTH ASSIGNMENT is in P
• SATISFIABILITY (simply, SAT): given a logical formula F, is F satisfiable?
– SAT is in EXPTIME.
• Open problem: is SAT in P?
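The contrast between the two problems can be sketched directly: checking one given assignment is cheap, while deciding satisfiability by enumeration inspects up to 2^n assignments (the CNF formula below is illustrative):

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable, polarity).
# Illustrative formula: (x0 or not x1) and (x1 or x2)
formula = [[(0, True), (1, False)], [(1, True), (2, True)]]

def satisfies(formula, assignment):
    """Checking a given assignment is polynomial in the formula size."""
    return all(any(assignment[v] == pol for v, pol in clause)
               for clause in formula)

def brute_force_sat(formula, n_vars):
    """Deciding satisfiability by enumeration examines 2^n assignments."""
    return any(satisfies(formula, a)
               for a in product([False, True], repeat=n_vars))

print(satisfies(formula, (True, True, False)))  # True
print(brute_force_sat(formula, 3))              # True
```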
Complexity Classes: Class NPO
• Optimization problems such that
– An instance of the problem is recognizable in polynomial time
– Feasible solutions are recognizable in polynomial time
– The objective function is computable in polynomial time
Class PO
• NPO problems solvable in polynomial time
Examples: Linear, Quadratic Programming.
Class NP-hard problems
• An optimization problem P is NP-hard if it is at least as hard to solve as any problem in NP
• No polynomial-time algorithms are known; all known exact algorithms require exponential time
Examples: TRAVELING SALESMAN, MAXIMUM QUADRATIC PROGRAMMING, MAXIMUM KNAPSACK, and many more