Introduction to Operations Research
Peng Zhang ∗
June 7, 2015
∗School of Computer Science and Technology, Shandong University, Jinan, 250101,
China. Email: [email protected].
1 Introduction
2 Overview of the Operations Research Modeling Approach
3 Introduction to Linear Programming
4 Solving Linear Programming Problems: The Simplex Method
5 The Theory of the Simplex Method
6 Duality Theory and Sensitivity Analysis
7 Other Algorithms for Linear Programming
8 The Transportation and Assignment Problems
9 Network Optimization Models
10 Dynamic Programming
11 Integer Programming
Integer programming (IP): every variable is restricted to be an
integer.
Binary integer programming (BIP): every variable is restricted
to have values 0 or 1.
Mixed integer programming (MIP): only some of the variables
are restricted to have integer values.
11.1 PROTOTYPE EXAMPLE
The CALIFORNIA MANUFACTURING COMPANY problem.
The company has 10 million dollars. It wants to build factories and
warehouses in Los Angeles and/or San Francisco. At most two factories
may be built, and at most one warehouse. A warehouse can be built in a
city only if a factory is built in that city.
The values and capital costs of the candidate facilities appear as the
objective and constraint coefficients of the formulation below (in
millions of dollars): the two factories have values 9 and 5 and costs
6 and 3, and the two warehouses have values 6 and 4 and costs 5 and 2.
The question is where to build factories and warehouses so that the
total value of the built facilities is maximized.
This problem can be formulated as the following binary integer
programming (BIP).
max 9x1 + 5x2 + 6x3 + 4x4
s.t. 6x1 + 3x2 + 5x3 + 2x4 ≤ 10
x3 + x4 ≤ 1
x3 ≤ x1
x4 ≤ x2
xj ∈ {0, 1}, ∀j
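With only four binary variables, the formulation can be sanity-checked by brute force over all 2^4 = 16 assignments; a minimal sketch (mine, not part of the original notes):

```python
from itertools import product

# Brute-force check of the California Manufacturing BIP
best_val, best_x = -1, None
for x1, x2, x3, x4 in product([0, 1], repeat=4):
    if (6*x1 + 3*x2 + 5*x3 + 2*x4 <= 10   # capital budget
            and x3 + x4 <= 1              # at most one warehouse
            and x3 <= x1                  # warehouse needs the first factory
            and x4 <= x2):                # warehouse needs the second factory
        val = 9*x1 + 5*x2 + 6*x3 + 4*x4
        if val > best_val:
            best_val, best_x = val, (x1, x2, x3, x4)

print(best_x, best_val)  # (1, 1, 0, 0) 14
```

The optimum (1, 1, 0, 0) with value 14 matches the result obtained by branch-and-bound in Section 11.6.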
11.2 SOME BIP APPLICATIONS
11.3 INNOVATIVE USES OF BINARY VARIABLES IN MODEL FORMULATION
11.4 SOME FORMULATION EXAMPLES
The Knapsack problem
Instance: There is a knapsack with capacity W , and n items each of
which has a weight wi > 0 and a value vi > 0.
Task: Put some items into the knapsack such that their total weight
is at most W and their total value is maximized.
Solution.
We use the decision variable xi to indicate whether item i is put into
the knapsack.
max ∑i vixi
s.t. ∑i wixi ≤ W
xi ∈ {0, 1}, ∀i
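Although this BIP is how the knapsack problem enters integer programming, the problem itself is usually solved by dynamic programming when W is small. A minimal sketch (mine), reusing the coefficients of the prototype example as a hypothetical instance (without its linking constraints):

```python
def knapsack(weights, values, W):
    """0/1 knapsack by dynamic programming over capacities.

    dp[c] holds the best total value achievable within capacity c.
    Iterating capacities downward ensures each item is used at most once.
    """
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]

# hypothetical instance: the budget coefficients as weights,
# the objective coefficients as values
print(knapsack([6, 3, 5, 2], [9, 5, 6, 4], 10))  # 15 (items 2, 3, 4)
```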
The Minimum Spanning Tree problem
Instance: We are given an undirected graph G = (V,E) with nonnegative
costs {ce} defined on the edges.
Task: The goal is to find a minimum cost tree spanning all vertices
in G.
Solution.
For each edge e ∈ E, there is a decision variable xe. If xe = 1, e is
included in the solution. If xe = 0, it is not.
min ∑e cexe
s.t. ∑e∈δ(S) xe ≥ 1, ∀ ∅ ≠ S ⊂ V
xe ∈ {0, 1}, ∀e ∈ E
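This IP has exponentially many cut constraints, one per nonempty proper subset S; in practice the MST is computed directly by a greedy method instead. A minimal sketch of Kruskal's algorithm with union-find (mine, with made-up edge data):

```python
def kruskal(n, edges):
    """Total cost of a minimum spanning tree (Kruskal + union-find).

    n: number of vertices, labeled 0..n-1
    edges: list of (cost, u, v) triples
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0
    for cost, u, v in sorted(edges):   # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: keep it
            parent[ru] = rv
            total += cost
    return total

# 4-cycle with costs 1, 2, 3, 4: the MST drops the cost-4 edge
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0)]))  # 6
```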
The Traveling Salesman Problem
Instance: We are given n cities. For every two cities i ̸= j, there is a
length (e.g., distance) cij ≥ 0.
Task: Find a shortest tour that visits every city exactly once and
returns to the original city.
Solution 1.
For each ordered pair (i, j), we define a decision variable xij. If
xij = 1, then the edge (i, j) is used in the tour to be found. If xij = 0,
then the edge (i, j) is not used.
min ∑i≠j cijxij
s.t. ∑j≠i xij = 1, ∀i
∑j≠i xji = 1, ∀i
∑i∈S,j∈S̄ xij ≥ 1, ∀ ∅ ≠ S ⊂ [n]
xij ∈ {0, 1}, ∀i ≠ j
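Since the formulation has exponentially many subtour constraints, tiny instances are often easier to solve by enumerating all (n − 1)! tours directly. A brute-force sketch (mine, with a made-up length matrix):

```python
from itertools import permutations

def tsp_brute_force(c):
    """Exact TSP by enumerating all tours that start at city 0.

    c: matrix of nonnegative lengths c[i][j].
    Returns (best_length, best_tour).
    """
    n = len(c)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(c[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# made-up asymmetric instance: the cheap edges form the cycle 0-1-2-3-0
c = [[0, 1, 9, 9],
     [9, 0, 1, 9],
     [9, 9, 0, 1],
     [1, 9, 9, 0]]
print(tsp_brute_force(c))  # (4, (0, 1, 2, 3))
```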
Solution 2.
The decision variables {xij} are defined in the same way as above.
For each city i ∈ {1, · · · , n}, there is a variable ui whose value can
be any real number.
Thus we get an MIP formulation of the TSP. This formulation is due
to Miller, Tucker, and Zemlin [JACM'60].
min ∑i≠j cijxij (1)
s.t. ∑j≠i xij = 1, ∀i (2)
∑j≠i xji = 1, ∀i (3)
ui − uj + nxij ≤ n − 1, ∀ 2 ≤ i ≠ j ≤ n (4)
xij ∈ {0, 1}, ∀i ≠ j
ui free (unrestricted in sign), ∀i
Theorem 11.1. The MIP (1)–(4) is a correct formulation of the TSP.
Proof.
• We first prove that any feasible solution to the MIP is a TSP tour (a
circuit visiting every city exactly once).
• Constraint (2) says that every city has exactly one outgoing edge in the
solution. Constraint (3) says that every city has exactly one ingoing edge
in the solution. So, these two constraints guarantee that the solution
must consist of circuits. In the following we will prove that there is only
one circuit in the solution, by constraint (4).
• Actually, we will prove every circuit in the solution passes through city
1. Since city 1 can only be visited exactly once (by constraints (2) and
(3)), this means there is only one circuit in the solution.
• Suppose for contradiction that the solution contains a circuit
(i1, i2, · · · , ik, i1) consisting of k edges which does not visit city 1.
Since for every edge (i, j) in the circuit we have xij = 1, constraint (4)
gives the following k inequalities for this circuit:
ui1 − ui2 + n ≤ n− 1,
ui2 − ui3 + n ≤ n− 1,
· · · ,
uik − ui1 + n ≤ n− 1.
• Adding these k inequalities, we get kn ≤ k(n− 1), which is absurd.
• Next we prove that any TSP tour defines a feasible solution to the MIP.
• Given a TSP tour, we define xij = 1 if edge (i, j) is used in the tour,
and xij = 0 otherwise. So, constraints (2) and (3) are satisfied.
• The tour must visit every city. Let city 1 be the first city on the tour,
and define u1 = 1. In general, if city i is the k-th city (2 ≤ k ≤ n) on
the tour, define ui = k.
• Now we consider constraint (4). Let (i, j) be any pair such that i ̸= j
and i, j ∈ {2, · · · , n}. Then we have two cases to consider: xij = 0 and
xij = 1.
• If xij = 0, then constraint (4) reduces to
ui − uj ≤ n− 1,
which obviously holds since ui ≤ n and uj ≥ 2.
• If xij = 1, then constraint (4) becomes
ui − uj + n ≤ n− 1,
which also holds since city j is the successor of city i on the tour and
hence ui − uj = −1.
• Note that since city 1 is not included in the condition of constraint (4),
we need not consider constraint (4) on the “last” edge of the tour, the
one going into city 1. (Suppose city i′ is the predecessor of city 1 on the
tour. Then xi′1 = 1, ui′ = n, and u1 = 1.)
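The construction in the second half of the proof can be checked numerically: setting ui to the position of city i on a tour starting at city 1 satisfies constraint (4) for every such tour. A small sketch of this check (mine, not from the notes):

```python
from itertools import permutations

def mtz_holds(tour):
    """Check u_i - u_j + n*x_ij <= n - 1 for all 2 <= i != j <= n,
    where u_i is the position of city i on the tour (so u_1 = 1)."""
    n = len(tour)
    pos = {city: k + 1 for k, city in enumerate(tour)}     # the u-values
    succ = {tour[k]: tour[(k + 1) % n] for k in range(n)}  # tour edges
    return all(pos[i] - pos[j] + n * (succ[i] == j) <= n - 1
               for i in range(2, n + 1)
               for j in range(2, n + 1) if i != j)

# constraint (4) holds for every tour on 5 cities starting at city 1
print(all(mtz_holds((1,) + p) for p in permutations(range(2, 6))))  # True
```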
11.5 SOME PERSPECTIVES ON SOLVING INTEGER PROGRAMMING PROBLEMS
IP may be easier than LP. Wrong!
Since LP can be solved efficiently, and an IP has far fewer feasible
solutions than its LP-relaxation, it may seem that IP should be relatively
easy to solve. However, the opposite is true: IP is much harder to solve
than LP.
There are two fallacies in the above reasoning.
(i) One fallacy is that having a finite number of feasible solutions
ensures that the problem is readily solvable. In fact, finite
numbers can be astronomically large. For example, if there are
100 0-1 variables in an IP, then there are 2^100 candidate solutions
to be considered.
(ii) The second fallacy is that removing some non-integer solutions
from an LP will make it easier to solve. On the contrary, it is
precisely because all those feasible solutions are there that an LP
is guaranteed to have a basic feasible solution (bfs) that is also an
optimal solution. No similarly nice property can be proved for IP.
One may think of rounding the LP optimum to the closest integers.
This doesn't work!
Example. Rounding may lead to infeasible solutions.
max x2
s.t. −x1 + x2 ≤ 1/2
x1 + x2 ≤ 7/2
x1, x2 ∈ Z+
The optimal solution to the corresponding LP is (3/2, 2). Whether we
round (3/2, 2) to (1, 2) or to (2, 2), neither is a feasible solution.
Example. Even when rounding leads to a feasible solution, the rounded
solution may be far from the optimal solution.
max x1 + 5x2
s.t. x1 + 10x2 ≤ 20
x1 ≤ 2
x1, x2 ∈ Z+
The optimal solution to the corresponding LP is (2, 9/5), which is
rounded to (2, 1), with value 7. However, the optimal solution to the
IP is (0, 2), with value 10.
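For an instance this small, the claim can be verified by enumerating all integer feasible points directly; a quick sketch (mine):

```python
# max x1 + 5*x2  s.t.  x1 + 10*x2 <= 20,  x1 <= 2,  x1, x2 in Z+
# (x1 <= 2 and 10*x2 <= 20 bound the search to 0..2 in each variable)
feasible = [(x1, x2) for x1 in range(3) for x2 in range(3)
            if x1 + 10*x2 <= 20]
best = max(feasible, key=lambda p: p[0] + 5*p[1])
print(best, best[0] + 5*best[1])  # (0, 2) 10
print((2, 1), 2 + 5*1)            # rounded LP optimum: value 7 only
```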
11.6 THE BRANCH-AND-BOUND TECHNIQUE AND ITS APPLICATION TO BINARY INTEGER PROGRAMMING
To illustrate the branch-and-bound method, we begin with an example.
(The California Manufacturing Company problem)
max 9x1 + 5x2 + 6x3 + 4x4 (IP0)
s.t. 6x1 + 3x2 + 5x3 + 2x4 ≤ 10
x3 + x4 ≤ 1
x3 ≤ x1
x4 ≤ x2
xj ∈ {0, 1}, ∀j
Branching
Since each variable is a 0-1 variable, a natural idea is to branch into
two subproblems on some variable xi. For example, if we branch on x1,
then we get two subproblems.
Subproblem 1 (x1 = 0):
max 5x2 + 6x3 + 4x4 (IP1)
s.t. 3x2 + 5x3 + 2x4 ≤ 10
x3 + x4 ≤ 1
x3 ≤ 0
x4 ≤ x2
x2, x3, x4 ∈ {0, 1}
Subproblem 2 (x1 = 1):
max 9 + 5x2 + 6x3 + 4x4 (IP2)
s.t. 3x2 + 5x3 + 2x4 ≤ 4
x3 + x4 ≤ 1
x3 ≤ 1
x4 ≤ x2
x2, x3, x4 ∈ {0, 1}
Bounding
For each of these subproblems, we can get an upper bound on its
optimal value by solving its LP-relaxation.
For example, let (LP0) be the LP-relaxation of (IP0). The optimal
solution to (LP0) is (5/6, 1, 0, 1), with value Z∗_LP0 = 16 1/2. Since
Z∗_IP0 ≤ Z∗_LP0 and Z∗_IP0 is an integer, we conclude that Z∗_IP0 ≤ 16.
Similarly, the optimal solution to (LP1) is (0, 1, 0, 1), with value
Z∗_LP1 = 9, so Z∗_IP1 ≤ 9. The optimal solution to (LP2) is
(1, 4/5, 0, 4/5), with value Z∗_LP2 = 16 1/5, so Z∗_IP2 ≤ 16.
During the branch-and-bound procedure, we keep track of the current
best feasible solution (called the incumbent) and its value Z∗.
So far, the current best feasible solution is (0, 1, 0, 1) with Z∗ = 9.
Fathoming
A subproblem can be conquered (fathomed), and thereby dismissed
from further consideration, in the three ways described below.
(i) The subproblem’s upper bound is ≤ Z∗.
(Since the upper bound is ≤ Z∗, the optimal value of the subprob-
lem is obviously ≤ Z∗, too. Since there is already the incumbent
whose value is Z∗, this subproblem can be dismissed.)
(ii) The subproblem’s LP-relaxation has no feasible solution.
(iii) The subproblem's LP-relaxation has an integer (0-1) optimal solution.
If this solution is better than the incumbent, it becomes the new
incumbent, and the fathoming rule 1 is reapplied to all unfathomed
subproblems with the new larger Z∗.
(The original meaning of fathom is to understand what something means
after thinking about it carefully. Here it means to conquer, i.e., to
dismiss from further consideration.)
The Branch-and-Bound algorithm for BIP
1 The set of unfathomed subproblems LIST ← ∅. The incumbent is
initially empty. The current best value Z∗ ← −∞.
2 Compute the upper bound of the whole problem. Apply the three
fathoming rules to it. If it is not fathomed, insert it into LIST.
3 while LIST ̸= ∅ do
4 Take the subproblem that was created most recently. (Break
ties according to which has the larger bound.) Remove it from
LIST .
5 (Branching) Create two new subproblems by fixing the next
variable (the branching variable) at either 0 or 1.
6 (Bounding) For each of the two new subproblems, compute its
upper bound.
7 (Fathoming) For each of the two new subproblems, apply the
three fathoming rules on it. (If a problem in LIST is fathomed,
then remove it. Recall the fathoming rule 3.)
8 Insert the unfathomed new subproblems into LIST.
9 endwhile
10 return the incumbent.
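The algorithm above can be sketched in plain Python. This is my own sketch, not code from the notes: to stay self-contained, it replaces the LP-relaxation bound with a much cruder upper bound (the value of the fixed variables plus every remaining positive objective coefficient), and it detects infeasibility only on constraints whose free coefficients are all nonnegative. Fathoming rules 1 and 2 appear as the two pruning tests.

```python
def bb_bip(values, rows, rhs):
    """Branch-and-bound sketch for a maximization BIP.

    values: objective coefficients; rows/rhs: the <= constraints.
    Returns (optimal assignment, optimal value).
    """
    n = len(values)
    best = {"val": float("-inf"), "x": None}

    def infeasible(x, k):
        # fathoming rule 2 analog: a partial assignment is hopeless if a
        # constraint whose free coefficients are all nonnegative is already
        # violated (at k == n this is a full feasibility check)
        return any(sum(row[i] * x[i] for i in range(k)) > b
                   for row, b in zip(rows, rhs)
                   if all(c >= 0 for c in row[k:]))

    def dfs(x, val):
        k = len(x)
        if infeasible(x, k):
            return
        # fathoming rule 1: prune when the upper bound <= incumbent value
        if val + sum(v for v in values[k:] if v > 0) <= best["val"]:
            return
        if k == n:
            best["val"], best["x"] = val, tuple(x)
            return
        for bit in (1, 0):  # fix the next variable at 1, then at 0
            dfs(x + [bit], val + values[k] * bit)

    dfs([], 0)
    return best["x"], best["val"]

# California Manufacturing data; x3 <= x1 becomes -x1 + x3 <= 0, etc.
rows = [[6, 3, 5, 2], [0, 0, 1, 1], [-1, 0, 1, 0], [0, -1, 0, 1]]
print(bb_bip([9, 5, 6, 4], rows, [10, 1, 0, 0]))  # ((1, 1, 0, 0), 14)
```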
Completing the Example
Branching on (IP2) with either x2 = 0 or x2 = 1, we get two subproblems
(IP3) and (IP4).
Subproblem 3 (x1 = 1, x2 = 0):
max 9 + 6x3 + 4x4 (IP3)
s.t. 5x3 + 2x4 ≤ 4
x3 + x4 ≤ 1
x3 ≤ 1
x4 ≤ 0
x3, x4 ∈ {0, 1}
Subproblem 4 (x1 = 1, x2 = 1):
max 14 + 6x3 + 4x4 (IP4)
s.t. 5x3 + 2x4 ≤ 1
x3 + x4 ≤ 1
x3 ≤ 1
x4 ≤ 1
x3, x4 ∈ {0, 1}
Neither (IP3) nor (IP4) can be fathomed. (Recall the three fathoming
rules.)
Branching on (IP4) with either x3 = 0 or x3 = 1, we get two subproblems
(IP5) and (IP6).
Subproblem 5 (x1 = 1, x2 = 1, x3 = 0):
max 14 + 4x4 (IP5)
s.t. 2x4 ≤ 1
x4 ≤ 1
x4 ∈ {0, 1}
Subproblem 6 (x1 = 1, x2 = 1, x3 = 1):
max 20 + 4x4 (IP6)
s.t. 2x4 ≤ −4
x4 ≤ 0
x4 ≤ 1
x4 ∈ {0, 1}
Since (IP6)'s relaxation (LP6) has no feasible solution, (IP6) is
fathomed.
We branch on (IP5) with either x4 = 0 or x4 = 1, getting subproblems
7 and 8.
Subproblem 7 (x1 = 1, x2 = 1, x3 = 0, x4 = 0):
max 14 (IP7)
Subproblem 8 (x1 = 1, x2 = 1, x3 = 0, x4 = 1):
max 18 (IP8)
s.t. 2 ≤ 1
Since (LP8) is not feasible, (IP8) is fathomed.
(IP7) is fathomed too, since it has the integer optimal solution (1, 1, 0, 0)
with Z∗_IP7 = 14.
Since Z∗_IP7 is better than the value Z∗ of the incumbent, (1, 1, 0, 0)
becomes the new incumbent.
Since Z∗ = 14 now, (IP3) is fathomed by the fathoming rule 1.
Now LIST is empty (there is no unfathomed problem). The algo-
rithm terminates, and the optimal solution to the original problem is
(1, 1, 0, 0), with optimal value 14.
11.7 A BRANCH-AND-BOUND ALGORITHM FOR MIXED INTEGER PROGRAMMING
Compared to the Branch-and-Bound algorithm for BIP, there are four
changes for the algorithm for MIP.
(i) Only the integer variables that have a non-integer value in the
optimal solution to the LP-relaxation of the current subproblem
are considered as branching variables.
(ii) When branching, just create two new subproblems by specifying
two ranges of values for the variable.
For example, if xj is the branching variable, then we add the fol-
lowing constraints to the two new subproblems,
xj ≤ ⌊x∗_j⌋ and xj ≥ ⌈x∗_j⌉,
respectively.
(iii) In BIP, the optimal value Z∗_LP is rounded down to serve as an
upper bound for the current subproblem. In MIP, since the optimal
value need not be an integer, the upper bound is Z∗_LP without
rounding down.
(iv) The fathoming rule 3 is now as follows.
If the optimal solution to the LP-relaxation of the current subproblem
has integer values for all integer-restricted variables, then this
subproblem can be fathomed.
And, if this solution is better than the incumbent, it becomes the
new incumbent and the fathoming rule 1 is reapplied to all the
unfathomed subproblems with the new larger Z∗.
Remarks. During the execution of the Branch-and-Bound algorithm for
MIP, a variable may be chosen as the branching variable more than once.
See the example shown below.
The Branch-and-Bound algorithm for MIP
1 The set of unfathomed subproblems LIST ← ∅. The incumbent is
initially empty. The current best value Z∗ ← −∞.
2 Compute the upper bound of the whole problem. Apply the three
fathoming rules to it. If it is not fathomed, insert it into LIST.
3 while LIST ̸= ∅ do
4 Take the subproblem that was created most recently. (Break
ties according to which has the larger bound.) Remove it from
LIST .
5 (Branching) Let xj be the branching variable and x∗_j be its value
in the optimal solution to the LP-relaxation of the subproblem.
Create two new subproblems by adding the respective constraints
xj ≤ ⌊x∗_j⌋ and xj ≥ ⌈x∗_j⌉.
6 (Bounding) For each of the two new subproblems, compute its
upper bound.
7 (Fathoming) For each of the two new subproblems, apply the
three fathoming rules on it. (If a problem in LIST is fathomed,
then remove it. Recall the fathoming rule 3.)
8 Insert the unfathomed new subproblems into LIST.
9 endwhile
10 return the incumbent.
An MIP Example. Solve the following MIP by Branch-and-Bound.
max 4x1 − 2x2 + 7x3 − x4 (MP0)
s.t. x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
− x1 + 2x3 − 2x4 ≤ 3
x1, x2, x3 ∈ Z+
x4 ≥ 0
Solution.
Solving (LP0), we get the optimal solution (5/4, 3/2, 7/4, 0), with value
14 1/4. Since x∗_1 = 5/4 is not an integer, x1 is chosen as the branching
variable.
Branch into two subproblems (MP1) and (MP2) on variable x1.
max 4x1 − 2x2 + 7x3 − x4 (MP1)
s.t. x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
− x1 + 2x3 − 2x4 ≤ 3
x1 ≤ 1 (*)
x1, x2, x3 ∈ Z+
x4 ≥ 0
max 4x1 − 2x2 + 7x3 − x4 (MP2)
s.t. x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
− x1 + 2x3 − 2x4 ≤ 3
x1 ≥ 2 (*)
x1, x2, x3 ∈ Z+
x4 ≥ 0
(MP2) is fathomed since (LP2) has no feasible solution.
Branch into two subproblems (MP3) and (MP4) on variable x2.
max 4x1 − 2x2 + 7x3 − x4 (MP3)
s.t. x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
− x1 + 2x3 − 2x4 ≤ 3
x1 ≤ 1 (*)
x2 ≤ 1 (*)
x1, x2, x3 ∈ Z+
x4 ≥ 0
max 4x1 − 2x2 + 7x3 − x4 (MP4)
s.t. x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
− x1 + 2x3 − 2x4 ≤ 3
x1 ≤ 1 (*)
x2 ≥ 2 (*)
x1, x2, x3 ∈ Z+
x4 ≥ 0
Branch into two subproblems (MP5) and (MP6) on variable x1 (recurring).
max 4x1 − 2x2 + 7x3 − x4 (MP5)
s.t. x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
− x1 + 2x3 − 2x4 ≤ 3
x1 ≤ 1 (*)
x2 ≤ 1 (*)
x1 ≤ 0 (*)
x1, x2, x3 ∈ Z+
x4 ≥ 0
max 4x1 − 2x2 + 7x3 − x4 (MP6)
s.t. x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
− x1 + 2x3 − 2x4 ≤ 3
x1 ≤ 1 (*)
x2 ≤ 1 (*)
x1 ≥ 1 (*)
x1, x2, x3 ∈ Z+
x4 ≥ 0
(MP5) is fathomed by the fathoming rule 3: the optimal solution to (LP5)
is (0, 0, 2, 1/2), which is integer in x1, x2, x3, so it becomes the new
incumbent, with Z∗ = 13 1/2.
(MP4) is fathomed since its upper bound is ≤ Z∗.
(MP6) is fathomed since (LP6) has no feasible solution.
LIST is empty now. The optimal solution to (MP0) is (0, 0, 2, 1/2),
with the optimal value Z∗ = 13 1/2.
11.8 THE BRANCH-AND-CUT APPROACH TO
SOLVING BIP PROBLEMS
A cutting plane for an IP problem is a new functional constraint that
reduces the feasible region of the LP-relaxation without eliminating
any feasible solutions of the IP problem.
Example.
max 3x1 + 2x2
s.t. 2x1 + 3x2 ≤ 4
x1, x2 ∈ {0, 1}
The feasible region of the LP-relaxation of the above IP is
{(x1, x2) : 2x1 + 3x2 ≤ 4, 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1}.
If we add the constraint x1 + x2 ≤ 1 to the LP-relaxation, the feasible
region shrinks, yet no feasible solution of the original IP problem is
cut off.
In this case, the constraint x1 + x2 ≤ 1 is a cutting plane for the
original IP problem.
Generating Cutting Planes for BIP
There are many ways to generate cutting planes for BIP. Here we just
give one such method.
1. Consider any functional constraint in ≤ form with only nonnegative
coefficients.
For example, 2x1 + 3x2 ≤ 4.
2. Find a group of variables (called a minimal cover of the constraint)
such that
(a) The constraint is violated if every variable in the group equals 1
and all other variables equal 0.
(b) But the constraint becomes satisfied if the value of any one of
these variables is changed from 1 to 0.
In our example, the group is {x1, x2}.
3. By letting N denote the number of variables in the group, the
resulting cutting plane is
Sum of variables in group ≤ N − 1.
So, the cutting plane for our example is x1 + x2 ≤ 1.
Remarks. For a given functional constraint, its cutting plane may
not be unique.
Example.
Consider the constraint
6x1 + 3x2 + 5x3 + 2x4 ≤ 10.
Then
x1 + x2 + x4 ≤ 2
is a cutting plane, since x1, x2, and x4 cannot all be 1 simultaneously,
but any two of them can be 1 at the same time.
It is interesting to note that this constraint admits another cutting
plane,
x1 + x3 ≤ 1.
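Both cutting planes of this example arise from minimal covers, and for a constraint with few variables all minimal covers can be enumerated exhaustively. A sketch (mine; indices are 0-based, so the cover (0, 2) stands for {x1, x3} and (0, 1, 3) for {x1, x2, x4}):

```python
from itertools import combinations

def minimal_covers(coeffs, b):
    """All minimal covers of sum(coeffs[i] * x_i) <= b with coeffs >= 0.

    A cover S violates the constraint when exactly the variables in S
    are 1; it is minimal if removing any one index restores feasibility.
    Each minimal cover S yields the cutting plane
    sum_{i in S} x_i <= |S| - 1.
    """
    covers = []
    for r in range(1, len(coeffs) + 1):
        for S in combinations(range(len(coeffs)), r):
            total = sum(coeffs[i] for i in S)
            if total > b and all(total - coeffs[j] <= b for j in S):
                covers.append(S)
    return covers

print(minimal_covers([6, 3, 5, 2], 10))  # [(0, 2), (0, 1, 3)]
```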