
Page 1: Linear Programming

Linear Programming
Sebastian Sager
Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences
July 20-24, 2009, Heidelberg

Page 2: Overview talk

• Tutorial example, geometric interpretation, polyhedra

• Simplex algorithm

• Extensions

• Duality

• Other algorithms

• References, Software

Page 3: Example: oil blending

• Blend raw oil to Light, Middle or Heavy oil

• Two possibilities:

• A) 1 unit raw oil to 1 L, 2 M, 2 H

• B) 1 unit raw oil to 4 L, 2 M, 1 H

• Costs: A) 3 money units, B) 5 money units

• Delivery obligations: 4 L, 5 M, 3 H

• ⇒ How can we minimize our costs and still deliver what we are expected to?

Page 4: Modeling

• Introduce variables x1 and x2: they represent the units of raw oil blended with procedures A) and B), respectively

• Objective: minimize costs (3x1 + 5x2)

• Constraint: nonnegativity (x1, x2 ≥ 0)

• Constraint: deliver at least 4 L (1x1 + 4x2 ≥ 4)

• Constraint: deliver at least 5 M (2x1 + 2x2 ≥ 5)

• Constraint: deliver at least 3 H (2x1 + 1x2 ≥ 3)

min  3x1 + 5x2
s.t. 1x1 + 4x2 ≥ 4
     2x1 + 2x2 ≥ 5
     2x1 + 1x2 ≥ 3
     x1, x2 ≥ 0
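As a quick numerical check of this model (a sketch of my own, assuming scipy is available; not part of the original slides), the LP can be handed to scipy.optimize.linprog. linprog expects ≤ constraints, so the ≥ rows are negated:

```python
import numpy as np
from scipy.optimize import linprog

# Oil blending LP: min 3*x1 + 5*x2 subject to the three delivery constraints.
c = np.array([3.0, 5.0])
# linprog solves A_ub @ x <= b_ub, so multiply the >= rows by -1.
A_ub = -np.array([[1.0, 4.0],
                  [2.0, 2.0],
                  [2.0, 1.0]])
b_ub = -np.array([4.0, 5.0, 3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)  # expected: x = (2, 0.5), cost 8.5
```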

Page 5: What is a Linear Program?

Let for the rest of the talk c, x ∈ R^n, b ∈ R^m, A ∈ R^{m×n}. Here b, c and A are given, while x is what we are looking for. A Linear Program (LP) consists of a linear objective function c^T x to be optimized and constraints the solution x has to fulfill:

min_x c^T x = Σ_{i=1}^n c_i x_i = c1 x1 + c2 x2 + · · · + cn xn

s.t.  a_11 x1 + a_12 x2 + · · · + a_1n xn ≤ b_1
      . . .
      a_{m1,1} x1 + a_{m1,2} x2 + · · · + a_{m1,n} xn ≤ b_{m1}
      . . .
      a_{m2,1} x1 + a_{m2,2} x2 + · · · + a_{m2,n} xn ≥ b_{m2}
      . . .
      a_{m3,1} x1 + a_{m3,2} x2 + · · · + a_{m3,n} xn = b_{m3}

(the first m1 rows are ≤ constraints, rows m1+1 to m2 are ≥ constraints, and rows m2+1 to m3 = m are equalities)

Page 6: Transformations

• Maximizing f(x) is the same as minimizing −f(x).
• Multiply ≥ inequalities by −1 to get ≤ inequalities.
• An equality can be split up into two inequalities that both have to be fulfilled:

  a_i^T x = b_i  ⟺  a_i^T x ≥ b_i and a_i^T x ≤ b_i

• Inequalities can be transformed into equalities by introducing additional (slack) variables:

  a_i^T x ≤ b_i  ⟺  a_i^T x + s_i = b_i, s_i ≥ 0
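The slack-variable transformation is purely mechanical and easy to automate. A minimal sketch of my own (the helper name to_standard_form is hypothetical, not from the slides):

```python
import numpy as np

def to_standard_form(A, b, c):
    """Rewrite min c@x s.t. A@x <= b, x >= 0 as
    min c_std@z s.t. A_std@z = b, z >= 0 with z = (x, s)."""
    m, n = A.shape
    A_std = np.hstack([A, np.eye(m)])          # A@x + s = b
    c_std = np.concatenate([c, np.zeros(m)])   # slack variables cost nothing
    return A_std, b, c_std
```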

Page 7: Standard form

• We can transform the problem into a mathematically equivalent program. Advantage: easier to handle in theory.
• In practice: exploit structure!

Standard form

min_x c^T x
s.t. Ax = b
     x ≥ 0

Geometric interpretation

min_x c^T x
s.t. Ax ≤ b

Page 8: Oil example: different formulations

Geometric interpretation (min_x c^T x s.t. Ax ≤ b):

A = ( −1 −4 )
    ( −2 −2 )
    ( −2 −1 )
    ( −1  0 )
    (  0 −1 ),   b = (−4, −5, −3, 0, 0)^T,   c = (3, 5)^T,   x1, x2 free

Standard form (min_x c^T x s.t. Ax = b, x ≥ 0):

A = ( 1  4  −1   0   0 )
    ( 2  2   0  −1   0 )
    ( 2  1   0   0  −1 ),   b = (4, 5, 3)^T,   c = (3, 5, 0, 0, 0)^T,   x = (x1, x2, x3, x4, x5)^T ≥ 0
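To confirm that the two formulations describe the same problem, both can be handed to a solver and compared (my own sketch, scipy assumed as before):

```python
import numpy as np
from scipy.optimize import linprog

# Geometric form: min c@x s.t. A@x <= b, x free (x >= 0 is encoded in A, b).
A1 = np.array([[-1, -4], [-2, -2], [-2, -1], [-1, 0], [0, -1]], float)
b1 = np.array([-4, -5, -3, 0, 0], float)
r1 = linprog([3, 5], A_ub=A1, b_ub=b1, bounds=[(None, None)] * 2)

# Standard form: min c@x s.t. A@x = b, x >= 0 (x3, x4, x5 are surplus variables).
A2 = np.array([[1, 4, -1, 0, 0], [2, 2, 0, -1, 0], [2, 1, 0, 0, -1]], float)
b2 = np.array([4, 5, 3], float)
r2 = linprog([3, 5, 0, 0, 0], A_eq=A2, b_eq=b2)

print(r1.fun, r2.fun)  # both should report the optimal cost 8.5
```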

Page 9: Oil example: Geometric view

• Two-dimensional plane, xi ≥ 0
• L (1x1 + 4x2 ≥ 4)
• H (2x1 + 1x2 ≥ 3)
• M (2x1 + 2x2 ≥ 5)
• Objective function vector c
• "Push" the level lines ⇒ optimal solution x1 = 2, x2 = 1/2

[Figure: feasible region in the (x1, x2) plane with constraint lines L, M, H and the direction −c]

Page 10: Geometric example in 3d

• Feasible region in three dimensions
• Hyperplanes orthogonal to the vector c show solutions with the same objective value
• "Pushing" this hyperplane out of the feasible region ⇒ optimal solution
• Is this also true in higher dimensions?!?

[Figure: 3d polyhedron with a hyperplane orthogonal to c touching it in a vertex]

Page 11: Fundamental Theorem of Linear Programming

Theorem. Let P = {x ∈ R^n : Ax = b, x ≥ 0} ≠ ∅ be a polyhedron. Then either the objective function c^T x has no minimum in P, or at least one vertex takes the minimal objective value.

Proof. Distinguish two cases:

• There is a direction d ∈ R^n in P with x + λd ∈ P ∀ λ ≥ 0, x ∈ P, and c^T d < 0:
  Then c^T (x + λd) = c^T x + λ c^T d → −∞ for λ → ∞, so no minimum exists.

• There is no such direction d:
  Every point x ∈ P can be written as x = d + Σ_{i∈I} λi vi with a direction d of P (for which c^T d ≥ 0 by assumption), λi ≥ 0, Σ_{i∈I} λi = 1, and {vi : i ∈ I} being the set of all vertices of the polyhedron. But now it holds that

  c^T x ≥ c^T ( Σ_{i∈I} λi vi ) = Σ_{i∈I} λi c^T vi ≥ min{c^T vi : i ∈ I},

  so a vertex attains the minimum.

Page 12: The basis

• How can we describe the vertices in a mathematical way?
• Assume from now on that we have the problem in standard form with dimensions n > m:

  min_x c^T x  s.t. Ax = b, x ≥ 0

• Idea: fix n − m variables xi to zero and put their indices into the set N, all other indices into B. Then calculate x_B = A_B^{-1} b.
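In code, this idea is a single linear solve. A small sketch of my own (numpy indices are 0-based while the slides count from 1), for the oil example with B = {1, 2, 3}:

```python
import numpy as np

A = np.array([[1, 4, -1, 0, 0],
              [2, 2, 0, -1, 0],
              [2, 1, 0, 0, -1]], float)
b = np.array([4, 5, 3], float)

B = [0, 1, 2]                      # basis indices (x1, x2, x3 in slide notation)
xB = np.linalg.solve(A[:, B], b)   # x_B = A_B^{-1} b
print(xB)                          # (0.5, 2, 4.5): a feasible basic solution
```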

Page 13: The basis - definitions

• An ordered pair S = (B, N) of sets B, N ⊆ {1, . . . , n} is called a column basis of a given LP with constraint matrix A ∈ R^{m×n} if:

1. B ∪ N = {1, . . . , n}
2. B ∩ N = ∅
3. |B| = m
4. A_B = (a_{B1}, a_{B2}, . . . , a_{Bm}) ∈ R^{m×m} is regular

• xi is called a basis variable ⇔ i ∈ B
• x = (x_B; x_N) = (A_B^{-1} b; 0) is called the basis solution vector.

Page 14: Connection: basis solution vector – vertex

• Theorem. A point v ∈ P = {x ∈ R^n : Ax = b, x ≥ 0} is a vertex of P if and only if the columns of A that have the same index as the positive components of v are linearly independent.
• In other words: x is a vertex if and only if x is a basis solution vector.
• We could simply calculate ALL basis solutions and compare them, but there are exponentially many of them: (n choose m).

Page 15: Basis solutions for oil example

• (n choose m) = (5 choose 3) = 10 vertices / basis solutions
• Only four of them are feasible (x ≥ 0)
• Description with a basis, e.g. B = {1, 2, 3}, N = {4, 5}; B = {1, 2, 5}, N = {3, 4}; B = {2, 3, 4}, N = {1, 5}

[Figure: intersection points of the constraint lines L, H, M in the (x1, x2) plane, labeled with their bases]
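The count of feasible basic solutions is easy to verify by brute force. The sketch below (my own check, 0-based indices) enumerates all 10 bases of the standard-form oil example:

```python
import itertools
import numpy as np

A = np.array([[1, 4, -1, 0, 0],
              [2, 2, 0, -1, 0],
              [2, 1, 0, 0, -1]], float)
b = np.array([4, 5, 3], float)

feasible = 0
for B in itertools.combinations(range(5), 3):
    try:
        xB = np.linalg.solve(A[:, list(B)], b)
    except np.linalg.LinAlgError:
        continue                     # singular A_B: no basic solution
    if np.all(xB >= 0):
        feasible += 1
        print("feasible basis", B, "x_B =", xB)
print(feasible, "of 10 basic solutions are feasible")  # expected: 4
```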

Page 16: Summary

• Different formulations for LPs. Most important:

  min_x c^T x  s.t. Ax = b, x ≥ 0

• The optimal solution (if existent) is always attained at a vertex
• A vertex can be described by a basis (B, N)
• B is the set of indices corresponding to variables xi that are "active" (→ referred to as active set strategy in NLP!)

Page 17: Simplex algorithm - main idea

• Idea of the simplex method: start from one vertex and jump to a neighbor with a better value until we reach the optimum
• How can we "go" from one vertex to another? Just replace one index in B!
• Two things are important: choose feasible points and improve the objective value in each step

[Figure: jumping from vertex to neighboring vertex along the feasible region in the (x1, x2) plane]

Page 18: How to find a neighbor . . .

A nonbasis variable with index N_ent will enter the basis with a value x_{N_ent} = Φ ≥ 0. After this we still want A x_new = b with x_new = x + Δx̃. Hence

( A_B  A_N ) (x + Δx̃) = (    b    )
(  0    I  )            ( Φ e_ent )

⇒  ( A_B  A_N ) Δx̃ = (    0    )
   (  0    I  )       ( Φ e_ent )

⇒  Δx̃ = ( A_B^{-1}   −A_B^{-1} A_N ) (    0    )  =  Φ ( −A_B^{-1} a_{N_ent} )
         (    0             I      ) ( Φ e_ent )        (        e_ent        )

⇒  x_new = x + Δx̃ = x + Φ Δx   with   Δx = ( −A_B^{-1} a_{N_ent} )
                                             (        e_ent        )

Page 19: Updating

[Figure: step from x to x_new = x + Φ Δx in the (x1, x2) plane]

Page 20: . . . that is also feasible . . .

• Update x_new = x + Φ Δx
• x_new is constructed to satisfy Ax = b. What about x ≥ 0?
• Choose Φ so that no x_new,i becomes negative and at least one becomes exactly zero. This index goes to N.
• Result: new feasible basis solution!

[Figure: three sketches of the step Φ Δx: "Φ too small!" stops short of the next vertex, "Φ too big!" leaves the feasible region, "Φ correct!" lands exactly on the neighboring vertex]

Page 21: . . . and better!

• How does the objective value change? With x_new = x + Φ Δx:

  c^T (x + Φ Δx) = (c_B^T, c_N^T) [ (x_B; x_N) + Φ (−A_B^{-1} a_{N_ent}; e_ent) ]
                 = c_B^T x_B − Φ (p_{N_ent} − c_{N_ent}),   where p_{N_ent} := c_B^T A_B^{-1} a_{N_ent}

• Choose an index N_ent with p_{N_ent} − c_{N_ent} > 0 to reduce the objective value. If there is no such index, we have found the optimal solution!

Page 22: The primal simplex algorithm

1. Start with a feasible basis (B, N)
2. Calculate x = (A_B^{-1} b; 0) and the pricing vector p = A^T A_B^{-T} c_B
3. Pricing: if p ≤ c the solution is optimal, else choose an index N_ent
4. Calculate the update vector Δx. If Δx ≥ 0 ⇒ the LP is unbounded
5. Ratio test: calculate Φ = min{ −x_i / Δx_i : Δx_i < 0 } and the index B_leave
6. Update sets and matrices, go to 2.

(→ dual algorithm)
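The six steps translate almost line by line into code. Below is a minimal sketch of this primal simplex iteration (my own illustration: dense linear algebra, 0-based indices, Dantzig pricing, no degeneracy or stability safeguards; a didactic toy, not the author's implementation):

```python
import numpy as np

def primal_simplex(A, b, c, B, N, tol=1e-9, max_iter=100):
    """Minimal primal simplex for min c@x s.t. A@x = b, x >= 0,
    started from a feasible basis B. Sketch only."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    B, N = list(B), list(N)
    for _ in range(max_iter):
        AB = A[:, B]
        xB = np.linalg.solve(AB, b)            # x_B = A_B^{-1} b
        y = np.linalg.solve(AB.T, c[B])        # y = A_B^{-T} c_B
        p = A.T @ y                            # pricing vector
        reduced = p[N] - c[N]                  # p_i - c_i for i in N
        if np.all(reduced <= tol):             # pricing: p <= c  =>  optimal
            x = np.zeros(len(c))
            x[B] = xB
            return x, B
        ent = N[int(np.argmax(reduced))]       # entering index N_ent
        dxB = -np.linalg.solve(AB, A[:, ent])  # basic part of the update dx
        if np.all(dxB >= -tol):
            raise ValueError("LP is unbounded")
        ratios = np.full(len(B), np.inf)       # ratio test
        neg = dxB < -tol
        ratios[neg] = -xB[neg] / dxB[neg]
        pos = int(np.argmin(ratios))           # position of B_leave in B
        N[N.index(ent)] = B[pos]               # leaving index joins N
        B[pos] = ent                           # entering index joins B
    raise RuntimeError("iteration limit reached")

# Oil example in standard form; the slides' basis B = {1, 2, 3} is [0, 1, 2] here.
A = np.array([[1, 4, -1, 0, 0], [2, 2, 0, -1, 0], [2, 1, 0, 0, -1]], float)
b = np.array([4, 5, 3], float)
c = np.array([3, 5, 0, 0, 0], float)
x, B = primal_simplex(A, b, c, B=[0, 1, 2], N=[3, 4])
print(x)  # expected (2, 0.5, 0, 0, 1.5), cost c @ x = 8.5
```

Started from this basis it should reproduce the iterations shown on the "Simplex for oil example" slides below.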

Page 23: Summary

• Optimal solution always on a vertex ∼ basis solution
• Two sets B and N of indices; the non-basis variables are fixed to zero
• Simplex: exchange two indices in each step, which corresponds to moving to a neighboring vertex
• Calculate the pricing vector p and compare it with the objective vector c to see if, and in which direction, the value gets better
• Primal algorithm: always stay feasible and work towards optimality

Page 24: Overview talk

• Tutorial example, geometric interpretation, polyhedra

• Simplex algorithm

• Extensions

• Duality

• Other algorithms

• References, Software

Page 25: Simplex: Open questions

• Prove that the algorithm terminates
• How to treat degeneracy (cycling)?
• How to get a feasible basis (B, N)?
• Which index i with p_i > c_i to choose? ⇒ pricing strategies
• How can we efficiently treat bounds, slack variables, sparsity, matrix decompositions, updates?
• What about stability? How to avoid basis matrices with a bad condition (close to singularity)?
• Idea: κ(A_B) ∼ 1 / |Δx_{B_leave}| ⇒ allow a small violation and choose an index with larger |Δx_{B_leave}|

Page 26: Simplex: finding a feasible start basis

• Two phases. In phase one we solve the problem

  min_{x,s} Σ_i s_i
  s.t. Ax + s = b
       x, s ≥ 0

  starting with the feasible basis s = b ≥ 0, x = 0 (scale rows by −1 where needed so that b ≥ 0).

• If the optimal solution has s ≠ 0, the original problem is infeasible; else x is feasible for it (phase two).
• Problem: needs many iterations, the whole basis must be exchanged at least once!
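As a concrete illustration of phase one (my own sketch, assuming the hypothetical primal_simplex helper from the algorithm slide is in scope), the auxiliary problem for the oil example can be built and solved like this:

```python
import numpy as np

# Phase one for the oil example: min sum(s) s.t. A@x + s = b, x, s >= 0.
A = np.array([[1, 4, -1, 0, 0], [2, 2, 0, -1, 0], [2, 1, 0, 0, -1]], float)
b = np.array([4, 5, 3], float)  # already >= 0, no row scaling needed

A1 = np.hstack([A, np.eye(3)])                   # columns 5..7 carry s
c1 = np.concatenate([np.zeros(5), np.ones(3)])   # minimize the sum of s

x1, B1 = primal_simplex(A1, b, c1, B=[5, 6, 7], N=[0, 1, 2, 3, 4])
assert np.allclose(x1[5:], 0)   # s = 0: the original problem is feasible
print("feasible start point:", x1[:5])
```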

Page 27: Simplex for oil example

1. Start with the basis B = {1, 2, 3}, N = {4, 5}

2. A_B = ( 1  4  −1 )
         ( 2  2   0 )
         ( 2  1   0 ),   x_B = A_B^{-1} b = (0.5, 2, 4.5)^T,
   p = A^T A_B^{-T} c_B = (3, 5, 0, −3.5, 2)^T

3. Choose N_ent = 5 with p_{N_ent} − c_{N_ent} = 2 > 0

4. Δx = ( −A_B^{-1} a_{N_ent}; e_ent ) = (1, −1, −3, 0, 1)^T

5. Ratio test: Φ = min{ −x_i / Δx_i : Δx_i < 0 } = min{ −2/(−1) [x2], −4.5/(−3) [x3] } = 1.5 and B_leave = 3

[Figure: start vertex with basis B = {1, 2, 3}, N = {4, 5} in the (x1, x2) plane with constraint lines L, H, M]

Page 28: Simplex for oil example

6. New basis: B = {1, 2, 5}, N = {3, 4}

2. A_B = ( 1  4   0 )
         ( 2  2   0 )
         ( 2  1  −1 ),   x_B = A_B^{-1} b = (2, 0.5, 1.5)^T,
   p = A^T A_B^{-T} c_B = (3, 5, −2/3, −7/6, 0)^T

3. p ≤ c ⇒ optimal solution!

[Figure: optimal vertex with basis B = {1, 2, 5}, N = {3, 4} in the (x1, x2) plane with constraint lines L, H, M]

Page 29: Duality

• Idea: get a lower bound on the minimum
• Let x be a feasible solution of the problem. Then in each row A_{i·} x = b_i
• A linear combination gives Σ_i y_i A_{i·} x = Σ_i y_i b_i
• Try to choose y so that the new coefficients are smaller, A^T y ≤ c:

  c^T x ≥ y^T A x = y^T b

• Choose the best bound from all possible ones:

  max_y b^T y  s.t. A^T y ≤ c

Page 30: Duality theorem (Gale, Kuhn, Tucker 1951)

Primal problem (PP)

min_x c^T x  s.t. Ax = b, x ≥ 0

Dual problem (DP)

max_y b^T y  s.t. A^T y ≤ c

Theorem. PP has a feasible, optimal solution x* if and only if DP has a feasible, optimal solution y*. Then c^T x* = b^T y*.

Proof. Assume x* is a feasible, optimal solution of PP. Define y* := A_B^{-T} c_B.

• Feasibility of y*: x* is optimal ⇒ p = A^T (A_B^{-T} c_B) = A^T y* ≤ c
• Optimality of y*: b^T y* = b^T A_B^{-T} c_B = c_B^T A_B^{-1} b = c_B^T x*_B = c^T x* ≥ b^T y ∀ feasible y
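The theorem can be checked numerically on the oil example (my own verification sketch with numpy): with the optimal basis B = {1, 2, 5}, primal and dual objective values coincide.

```python
import numpy as np

A = np.array([[1, 4, -1, 0, 0], [2, 2, 0, -1, 0], [2, 1, 0, 0, -1]], float)
b = np.array([4, 5, 3], float)
c = np.array([3, 5, 0, 0, 0], float)

B = [0, 1, 4]                            # optimal basis {1, 2, 5}, 0-based
xB = np.linalg.solve(A[:, B], b)         # (2, 0.5, 1.5)
y = np.linalg.solve(A[:, B].T, c[B])     # y* = A_B^{-T} c_B = (2/3, 7/6, 0)
print(c[B] @ xB, b @ y)                  # both 8.5: c^T x* = b^T y*
print(np.all(A.T @ y <= c + 1e-12))      # dual feasibility A^T y* <= c
```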

Page 31: The dual LP

• min −→ max −→ min

• equation −→ free variable −→ equation

• inequality −→ signed variable −→ inequality

• objective function −→ right hand side −→ objective function

• The dual of the dual is the primal program again

• Feasible solutions of PP and DP bound one another

• Idea: algorithm that does to PP the same as the primalalgorithm does to DP

Page 32: The dual simplex algorithm

1. Start with an optimal basis (B, N) (that is: p ≤ c)
2. Calculate x, y and p
3. Pricing: if x ≥ 0 the solution is feasible, else choose an index B_leave
4. Calculate the update vectors Δy = A_B^{-T} e_{B_leave} and Δp = A^T Δy. If Δp ≥ 0 ⇒ the LP is infeasible
5. Ratio test: calculate Φ = max{ (c_i − p_i) / Δp_i : Δp_i < 0 } and the index N_ent
6. Update sets and matrices, go to 2.

(→ primal algorithm)
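Mirroring the primal sketch earlier, a minimal dual simplex iteration might look as follows (again my own illustration with dense solves, 0-based indices, and no safeguards; not the author's code):

```python
import numpy as np

def dual_simplex(A, b, c, B, N, tol=1e-9, max_iter=100):
    """Minimal dual simplex for min c@x s.t. A@x = b, x >= 0, started
    from a dual-feasible basis B (p <= c). Sketch only."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    B, N = list(B), list(N)
    m = len(B)
    for _ in range(max_iter):
        AB = A[:, B]
        xB = np.linalg.solve(AB, b)                 # x_B = A_B^{-1} b
        if np.all(xB >= -tol):                      # primal feasible => done
            x = np.zeros(len(c))
            x[B] = xB
            return x, B
        pos = int(np.argmin(xB))                    # position of B_leave
        dy = np.linalg.solve(AB.T, np.eye(m)[pos])  # dy = A_B^{-T} e_leave
        dp = A.T @ dy                               # dp = A^T dy
        y = np.linalg.solve(AB.T, c[B])
        p = A.T @ y                                 # current pricing vector
        cand = [i for i in N if dp[i] < -tol]       # ratio test over nonbasis
        if not cand:
            raise ValueError("LP is infeasible")
        ent = max(cand, key=lambda i: (c[i] - p[i]) / dp[i])  # argmax = N_ent
        N[N.index(ent)] = B[pos]
        B[pos] = ent
    raise RuntimeError("iteration limit reached")

# Dual simplex run from the slides: start at B = {1, 3, 5} (here [0, 2, 4]).
A = np.array([[1, 4, -1, 0, 0], [2, 2, 0, -1, 0], [2, 1, 0, 0, -1]], float)
b = np.array([4, 5, 3], float)
c = np.array([3, 5, 0, 0, 0], float)
x, B = dual_simplex(A, b, c, B=[0, 2, 4], N=[1, 3])
print(x)  # expected (2, 0.5, 0, 0, 1.5), as in the primal run
```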

Page 33: What to know about dual algorithm

• Primal optimality = dual feasibility: p ≤ c
• Primal feasibility = dual optimality: x ≥ 0
• Primal: first choose the entering index, then decide which index has to leave the basis
• Dual: first choose the leaving index, then decide which index has to enter the basis

Page 34: Interpretation of p and y

• p − c are called reduced costs: is it profitable to increase a variable?
• The dual variables y are called shadow prices:
  • The solution depends continuously differentiably on the value of b. Therefore, for a small change we have the same optimal basis:

    x_B + Δx_B = A_B^{-1} (b + Δb)

  • Calculate the objective value for the modified right hand side b + Δb:

    c^T (x + Δx) = c^T x + c_B^T A_B^{-1} Δb = c^T x + y^T Δb

  • Shadow prices show how much a change in b is worth: how much more profit if we invest in a resource?
• Remark: for NLPs the dual variables are Lagrange multipliers
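This interpretation is easy to test numerically (my own sketch, again assuming scipy is available): raising the light-oil obligation b1 from 4 to 5 should raise the optimal cost by about the shadow price y1 = 2/3.

```python
import numpy as np
from scipy.optimize import linprog

A_ub = -np.array([[1, 4], [2, 2], [2, 1]], float)  # >= rows negated for linprog
c = [3, 5]

base = linprog(c, A_ub=A_ub, b_ub=-np.array([4, 5, 3], float))
bump = linprog(c, A_ub=A_ub, b_ub=-np.array([5, 5, 3], float))
print(bump.fun - base.fun)  # ~0.6667, the shadow price y1 = 2/3
```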

Page 35: Shadow prices for oil example

• Optimal basis: B = {1, 2, 5}, N = {3, 4}

• A_B = ( 1  4   0 )
        ( 2  2   0 )
        ( 2  1  −1 ),   x_B = A_B^{-1} b = (2, 0.5, 1.5)^T

• y* = A_B^{-T} c_B = (2/3, 7/6, 0)^T

• Interpretation:
  • Additional costs (2/3) Δb1 for delivering more light oil
  • Additional costs (7/6) Δb2 for delivering more middle oil
  • No additional costs if we have to deliver (a little bit) more heavy oil

[Figure: feasible region in the (x1, x2) plane with objective direction c]

Page 36: Dual simplex for oil example

1. Start with the basis B = {1, 3, 5}, N = {2, 4}

2. A_B = ( 1  −1   0 )
         ( 2   0   0 )
         ( 2   0  −1 ),   x_B = A_B^{-1} b = (2.5, −1.5, 2)^T,
   y = A_B^{-T} c_B = (0, 1.5, 0)^T,
   p = A^T y = (3, 3, 0, −1.5, 0)^T ≤ (3, 5, 0, 0, 0)^T = c

3. Choose B_leave = 2 (the second basis position, variable x3) with x_{B_leave} = −1.5 < 0

4. Δy = A_B^{-T} e_{B_leave} = (−1, 0.5, 0)^T,
   Δp = A^T Δy = (0, −3, 1, −0.5, 0)^T

[Figure: infeasible basis B = {1, 3, 5}, N = {2, 4} in the (x1, x2) plane with constraint lines L, H, M]

Page 37: Dual simplex for oil example

5. Ratio test: Φ = max{ (c_i − p_i) / Δp_i : Δp_i < 0 }:

   Φ = max{ (5 − 3)/(−3) [x2], (0 + 1.5)/(−0.5) [x4] } = −2/3 and N_ent = 1 (the first nonbasis position, variable x2)

6. New basis: B = {1, 2, 5}, N = {3, 4}

2. A_B = ( 1  4   0 )
         ( 2  2   0 )
         ( 2  1  −1 ),   x_B = A_B^{-1} b = (2, 0.5, 1.5)^T,
   p = A^T A_B^{-T} c_B = (3, 5, −2/3, −7/6, 0)^T

3. x_B ≥ 0 ⇒ feasible solution!

[Figure: optimal vertex with basis B = {1, 2, 5}, N = {3, 4} in the (x1, x2) plane with constraint lines L, H, M]

Page 38: Differences

• The dimensions of the variables differ: m and n. But: the basis matrices have the same dimension!
• Can solve the problem with either one; they can show completely different behaviour (# of iterations)
• Adding a variable in PP: put it in N and keep feasibility.
• Adding a variable in DP: lose feasibility!
• Adding a constraint in DP: put it in N and keep feasibility.
• Adding a constraint in PP: lose feasibility!
⇒ use PP for Branch and Price, use DP for Branch and Bound

Page 39: Primal-dual method

• Use the primal and the dual algorithm for phase one and two!

• Take an arbitrary basis (B, N) and calculate x. Phase one problem:

  min_x c^T x
  s.t. Ax = b
       x_B ≥ l
       x_N ≥ 0

• Choose l_i = 0 if x_{B_i} ≥ 0, and l_i = x_{B_i} if x_{B_i} < 0 ⇒ the basis is feasible, apply the algorithm
• Solve the problem. In the optimal solution it holds that p ≤ c.
• This is dual feasibility, also for the original problem! A feasible start basis for the dual algorithm is found (one that is often very close to the optimum)
• A dual phase one problem can be constructed analogously

Page 40: Summary

• A Linear Program is related to its dual: they have the same optimal value unless they are unbounded or infeasible
• While solving, you always know both sets of variables: one is feasible, the other one is optimal
• If the primal and the dual variables are both feasible, the optimal solution is found
• Dual variables y_i are shadow prices: they indicate whether one needs more of the "resource" b_i
• The primal and the dual algorithm differ in the way changes in the model can be incorporated
• Modern implementations use both algorithms, for phase one and phase two

Page 41: History of Linear Programming

• George Dantzig invented the simplex algorithm in 1947
• The simplex algorithm theoretically has an exponential (in the number of variables) runtime behaviour: it is possible to construct examples where all vertices are visited (Klee and Minty)
• In 1979 Leonid Khachiyan proposed an extension of the nonlinear "ellipsoid method" of Shor and Nemirovski, the first method with a polynomial runtime behaviour
• The method performs badly in practice, therefore it is not used any more, but it is important for theory and it boosted Interior Point Methods

Page 42: Interior Point algorithms

Karmarkar, 1984. Idea:

• Walk through the interior of the feasible domain
• Iterate on the KKT conditions with Newton's method
• This gives a linear system in each step
• Interior Point, Primal and Dual Simplex each perform best on approximately one third of the problem instances

[Figure: iterates walking through the interior of the feasible region in the (x1, x2) plane]
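Interior point solvers are available off the shelf. As a small illustration (my own sketch, assuming a recent SciPy), linprog can be pointed at the interior-point variant of its HiGHS backend and compared with the dual simplex variant:

```python
import numpy as np
from scipy.optimize import linprog

A_ub = -np.array([[1, 4], [2, 2], [2, 1]], float)
b_ub = -np.array([4, 5, 3], float)

ipm = linprog([3, 5], A_ub=A_ub, b_ub=b_ub, method="highs-ipm")
spx = linprog([3, 5], A_ub=A_ub, b_ub=b_ub, method="highs-ds")
print(ipm.fun, spx.fun)  # same optimal cost 8.5 from both methods
```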

Page 43: References

• http://www-unix.mcs.anl.gov/otc/Guide/faq/linear-programming-faq.html

• Vašek Chvátal, "Linear Programming", Freeman and Company, New York, 1983
• M. Padberg, "Linear Optimization and Extensions", Springer, Heidelberg, 1999
• Roland Wunderling, "Paralleler und objektorientierter Simplex-Algorithmus", Technical Report TR 96-9, Konrad-Zuse-Zentrum für Informationstechnik Berlin, 1996

Page 44: Software

• Commercial codes:
  • CPLEX by Robert Bixby (now distributed by ILOG, France)
  • Xpress-MP (Dash, UK)
• Noncommercial open source codes:
  • CLP, the COIN-OR solver
  • SoPlex by Roland Wunderling (ZIB Berlin)
  • GLPK, the GNU Linear Programming Kit; also solves MILPs
  • lp_solve by Michel Berkelaar; also solves MILPs
  • PCx by Steve Wright; an Interior Point solver
• netlib and miplib provide test problems in MPS format

Page 45: Direction in polyhedra

• d ∈ R^n is a direction in the polyhedron P if for x ∈ P and ∀ λ ≥ 0 also x + λd ∈ P

(Back)

[Figure: feasible region in the (x1, x2) plane illustrating a direction d]

Page 46: Directions and the cost function

• c^T (x + d) = c^T x + c^T d
• Improvement or not depends on the sign of c^T d!

(Back)

[Figure: feasible region in the (x1, x2) plane with the directions c and −c separating the half-spaces c^T d > 0 and c^T d < 0]

Page 47: Convex hull

• The convex hull is the set of all x that can be written as x = Σ_{i∈I} λi vi with λi ≥ 0, Σ_{i∈I} λi = 1, and {vi : i ∈ I} being the set of all vertices of the polyhedron

(Back)

[Figure: convex hull of the vertices of the feasible region in the (x1, x2) plane]

Page 48: Degeneracy

• If the number of hyperplanes meeting in one point is larger than the variable dimension n, steps of length Φ = 0 are possible. Cycling (getting stuck in one point) is possible.
• Remedy: random perturbations if cycling is detected

(Back)

[Figure: degenerate vertex x in the (x1, x2) plane where more than n constraint lines meet]