APPROXIMATION ALGORITHMS AND THE SET COVER PROBLEM
BY DALIA COHN ALPEROVICH
LECTURE SECTIONS
1. Introduction
2. The Integer Program (IP) and the Linear Program (LP)
3. Rounding Approximation Algorithms
4. Introduction to LP-Duality
5. Greedy Approximation Algorithm
1. INTRODUCTION
• The set cover problem
• Example of usage
THE SET COVER PROBLEM
Given:
A ground set of n elements: E = {e_1, e_2, …, e_n}
m subsets of those elements: S_1, S_2, …, S_m, where each S_j ⊆ E
A non-negative weight for each subset: w_j ≥ 0, j = 1, 2, …, m
Goal:
Find an I ⊆ {1, …, m} that minimizes Σ_{j∈I} w_j subject to ∪_{j∈I} S_j = E
The set cover problem is NP-hard
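To make the definitions concrete, here is a minimal Python sketch of a set cover instance and a feasibility check (all names and the instance itself are illustrative, not from the slides):

```python
def is_set_cover(ground_set, subsets, chosen):
    """Check that the union of the chosen subsets equals the ground set."""
    covered = set()
    for j in chosen:
        covered |= subsets[j]
    return covered == ground_set

# Illustrative instance: 3 elements, 3 subsets, unit weights
E = {"e1", "e2", "e3"}
S = [{"e1", "e2"}, {"e2", "e3"}, {"e3"}]
w = [1.0, 1.0, 1.0]

print(is_set_cover(E, S, [0, 1]))  # True: S_1 ∪ S_2 = E
print(is_set_cover(E, S, [2]))     # False: e1 and e2 are uncovered
```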
MOTIVATION
Cellular network optimization
A problem we all suffer from…
HOW DOES A MOBILE PHONE WORK?
EXAMPLE
(a sequence of figure-only slides building up the cellular coverage example)
EXAMPLE - FORMALLY
E = {A_1, A_2, A_3, A_4}
S_1 = {A_1, A_2}, S_2 = {A_2}, S_3 = {A_1, A_3, A_4}
w_1 = w_2 = w_3 = 1
2. THE INTEGER PROGRAM (IP) AND THE LINEAR
PROGRAM (LP)
• The set cover integer program (IP)
• The set cover linear program (LP) relaxation
EXAMPLE – INTEGER PROGRAM
E = {A_1, A_2, A_3, A_4}
S_1 = {A_1, A_2}, S_2 = {A_2}, S_3 = {A_1, A_3, A_4}
Minimize x_1 + x_2 + x_3
Subject to:
A_1: x_1 + x_3 ≥ 1
A_2: x_1 + x_2 ≥ 1
A_3: x_3 ≥ 1
A_4: x_3 ≥ 1
x_1, x_2, x_3 ∈ {0, 1}
INTEGER PROGRAM (IP)
Minimize Σ_{j=1}^{m} w_j x_j
Subject to:
Σ_{j: e_i∈S_j} x_j ≥ 1, i = 1, …, n
x_j ∈ {0, 1}, j = 1, …, m
As the integer program exactly models the set cover problem, solving it in polynomial time for any input would prove that P = NP…
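Because the IP models the problem exactly, a tiny instance can be solved by enumerating all 2^m choices of I; the sketch below (illustrative only, exponential time) verifies the running example:

```python
from itertools import combinations

def solve_set_cover_exact(ground_set, subsets, weights):
    """Brute-force the IP: try every subset I of {0..m-1} (only viable for tiny m)."""
    m = len(subsets)
    best_cost, best_I = float("inf"), None
    for r in range(m + 1):
        for I in combinations(range(m), r):
            covered = set().union(*(subsets[j] for j in I)) if I else set()
            cost = sum(weights[j] for j in I)
            if covered == ground_set and cost < best_cost:
                best_cost, best_I = cost, I
    return best_I, best_cost

# The slides' instance: S_1={A1,A2}, S_2={A2}, S_3={A1,A3,A4}, unit weights
E = {"A1", "A2", "A3", "A4"}
S = [{"A1", "A2"}, {"A2"}, {"A1", "A3", "A4"}]
I, cost = solve_set_cover_exact(E, S, [1, 1, 1])
print(I, cost)  # (0, 2) 2 – pick S_1 and S_3
```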
LINEAR PROGRAM (LP)
Minimize Σ_{j=1}^{m} w_j x_j
Subject to:
Σ_{j: e_i∈S_j} x_j ≥ 1, i = 1, …, n
x_j ≥ 0, j = 1, …, m
Standard form: all constraints are of the kind “≥” and all variables are constrained to be non-negative
EXAMPLE – LINEAR PROGRAM
Minimize x_1 + x_2 + x_3
Subject to:
x_1 + x_3 ≥ 1
x_1 + x_2 ≥ 1
x_3 ≥ 1
x_1 ≥ 0
x_2 ≥ 0
EXAMPLE – LINEAR PROGRAM – OPTIMAL SOLUTION
x_1 = 1/2, x_2 = 1/2, x_3 = 1
Minimize x_1 + x_2 + x_3
Subject to:
x_1 + x_3 ≥ 1
x_1 + x_2 ≥ 1
x_3 ≥ 1
x_1 ≥ 0
x_2 ≥ 0
HOW DOES THE LP HELP US?
The linear program can be solved in polynomial time
The linear program is a relaxation of the integer program
Every feasible solution for the integer program is feasible for the linear program
Any feasible solution of the integer program has the same objective value in the linear program
We can obtain a lower bound on the value of the optimal solution in polynomial time
The linear programming relaxation can thus be used to derive approximation algorithms for the set cover problem
3. ROUNDING APPROXIMATION ALGORITHMS
• Deterministic rounding algorithm
• Randomized rounding algorithm
A DETERMINISTIC ROUNDING ALGORITHM
f_i = |{j : e_i ∈ S_j}|, i = 1, …, n
f = max_{i=1,…,n} f_i, the maximal number of sets in which any element appears
We round the fractional optimal solution x* to an integer solution x:
x_j = 1 if x*_j ≥ 1/f
x_j = 0 otherwise
I = {j : x_j = 1}
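A minimal sketch of the rounding rule, assuming the fractional optimum x* has already been computed by an LP solver (here it is hard-coded to the slides' example):

```python
def deterministic_round(x_star, subsets, ground_set):
    """Round an LP solution: keep S_j exactly when x*_j >= 1/f."""
    # f_i = number of sets containing element e_i; f is the maximum over elements
    f = max(sum(1 for Sj in subsets if e in Sj) for e in ground_set)
    return [j for j, xj in enumerate(x_star) if xj >= 1.0 / f]

E = {"A1", "A2", "A3", "A4"}
S = [{"A1", "A2"}, {"A2"}, {"A1", "A3", "A4"}]
x_star = [0.5, 0.5, 1.0]  # the LP optimum from the slides
chosen = deterministic_round(x_star, S, E)
print(chosen)  # [0, 1, 2]: f = 2, and every x*_j >= 1/2
```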
EXAMPLE – DETERMINISTIC ROUNDING
Optimal solution for the LP:
x_1 = 1/2, x_2 = 1/2, x_3 = 1
f_1 = f_2 = 2, f_3 = f_4 = 1 ⇒ f = 2
After rounding:
x_1 = 1, x_2 = 1, x_3 = 1
A set cover, but not an optimal one
A DETERMINISTIC ROUNDING ALGORITHM
Lemma: The collection of subsets {S_j : j ∈ I} is a set cover.
Proof:
Consider an arbitrary element e_i ∈ E.
Σ_{j: e_i∈S_j} x*_j ≥ 1, as x* is a feasible solution to the linear program.
There are f_i terms in this sum, and by definition f_i ≤ f.
Hence for some j with e_i ∈ S_j: x*_j ≥ 1/f_i ≥ 1/f.
Therefore j ∈ I, and element e_i is covered.
A DETERMINISTIC ROUNDING ALGORITHM
Theorem: The rounding algorithm is an f-approximation algorithm for the set cover problem.
Proof:
It is clear that the algorithm runs in polynomial time.
By our construction, 1 ≤ f · x*_j for each j ∈ I, and f · w_j · x*_j ≥ 0 for every j = 1, …, m, so:
Σ_{j∈I} w_j ≤ Σ_{j∈I} w_j · f · x*_j ≤ Σ_{j=1}^{m} w_j · f · x*_j = f · Σ_{j=1}^{m} w_j x*_j = f · Z*_LP
The value of the optimal solution of the LP, Z*_LP, is a lower bound on the value of the optimal solution for the set cover problem, so:
f · Z*_LP ≤ f · OPT
A RANDOMIZED ROUNDING ALGORITHM - FIRST TRIAL
We interpret the fractional value x*_j as the probability that x_j should be set to 1
Each set S_j is included in our solution with probability x*_j, where those m events are independent random events
Let X_j be a random variable that is 1 if subset S_j is included in the solution, and 0 otherwise.
The expected value of the solution:
E[Σ_{j=1}^{m} w_j X_j] = Σ_{j=1}^{m} w_j Pr[X_j = 1] = Σ_{j=1}^{m} w_j x*_j = Z*_LP ≤ OPT
Where is the catch?
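The catch can be seen empirically. This sketch simulates the first-trial rounding on the slides' example (instance and x* hard-coded; the seed is arbitrary):

```python
import random

def randomized_round_once(x_star, subsets, ground_set, rng):
    """Include each S_j independently with probability x*_j; report feasibility."""
    chosen = [j for j, xj in enumerate(x_star) if rng.random() < xj]
    covered = set().union(*(subsets[j] for j in chosen)) if chosen else set()
    return chosen, covered == ground_set

E = {"A1", "A2", "A3", "A4"}
S = [{"A1", "A2"}, {"A2"}, {"A1", "A3", "A4"}]
x_star = [0.5, 0.5, 1.0]

rng = random.Random(0)
trials = 10000
failures = sum(1 for _ in range(trials)
               if not randomized_round_once(x_star, S, E, rng)[1])
# Dropping both S_1 and S_2 leaves A2 uncovered; this happens with probability 1/4
print(failures / trials)
```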
EXAMPLE– RANDOMIZED ROUNDING – FIRST TRIAL
Optimal solution for the LP:
x_1 = 1/2, x_2 = 1/2, x_3 = 1
Possible outcome of the rounding:
x_1 = 0, x_2 = 0, x_3 = 1
Not a set cover!
With probability (1/2)·(1/2) = 1/4, the outcome is not a set cover at all…
Other possibilities:
x_1 = 1, x_2 = 1, x_3 = 1 – a set cover, but not optimal
x_1 = 0, x_2 = 1, x_3 = 1 – an optimal solution
x_1 = 1, x_2 = 0, x_3 = 1 – an optimal solution
A RANDOMIZED ROUNDING ALGORITHM - FIRST TRIAL
Pr[e_i not covered] = Π_{j: e_i∈S_j} (1 − x*_j)
Since 1 − x ≤ e^{−x} for any x:
Π_{j: e_i∈S_j} (1 − x*_j) ≤ Π_{j: e_i∈S_j} e^{−x*_j} = e^{−Σ_{j: e_i∈S_j} x*_j}
Since Σ_{j: e_i∈S_j} x*_j ≥ 1 by the LP constraint, this is at most e^{−1}.
It is quite likely that this procedure does not produce a set cover!
We say that we have an algorithm (or a family of algorithms) that works with high probability if, for any constant c, we can devise a polynomial-time algorithm whose chance of failure is at most an inverse polynomial n^{−c}.
A RANDOMIZED ROUNDING ALGORITHM – SECOND TRIAL
For each set S_j:
We imagine a coin that comes up heads with probability x*_j
We flip the coin c·ln n times
If it comes up heads in any of the c·ln n trials, we include S_j in our solution; otherwise we do not.
Pr[e_i not covered] = Π_{j: e_i∈S_j} (1 − x*_j)^{c·ln n} ≤ Π_{j: e_i∈S_j} e^{−x*_j·c·ln n} = e^{−(c·ln n)·Σ_{j: e_i∈S_j} x*_j} ≤ 1/n^c
By the union bound:
Pr[there exists an uncovered element] ≤ Σ_{i=1}^{n} Pr[e_i not covered] ≤ Σ_{i=1}^{n} 1/n^c = 1/n^{c−1}
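A sketch of the second trial on the same example (the constant c and the instance are illustrative):

```python
import math
import random

def randomized_round_whp(x_star, subsets, ground_set, c, rng):
    """Flip each set's coin ceil(c*ln n) times; include S_j on any heads."""
    n = len(ground_set)
    flips = max(1, math.ceil(c * math.log(n)))
    chosen = [j for j, xj in enumerate(x_star)
              if any(rng.random() < xj for _ in range(flips))]
    covered = set().union(*(subsets[j] for j in chosen)) if chosen else set()
    return chosen, covered == ground_set

E = {"A1", "A2", "A3", "A4"}
S = [{"A1", "A2"}, {"A2"}, {"A1", "A3", "A4"}]
x_star = [0.5, 0.5, 1.0]

rng = random.Random(0)
trials = 10000
failures = sum(1 for _ in range(trials)
               if not randomized_round_whp(x_star, S, E, c=2, rng=rng)[1])
print(failures / trials)  # far below the 1/4 failure rate of the first trial
```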
A RANDOMIZED ROUNDING ALGORITHM – SECOND TRIAL
Theorem: The algorithm is a randomized 𝑂(ln 𝑛)-approximation algorithm that produces a set cover with high probability.
Proof:
The probability that S_j is included in the solution: p_j(x*_j) = 1 − (1 − x*_j)^{c·ln n}
Its derivative: p′_j(x*_j) = (c·ln n)(1 − x*_j)^{c·ln n − 1}
In the interval x*_j ∈ [0, 1], if c·ln n ≥ 1, this derivative is at most c·ln n.
Since p_j(0) = 0 and the derivative is bounded as above: p_j(x*_j) ≤ (c·ln n)·x*_j
Let X_j be a random variable that is 1 if subset S_j is included in the solution, and 0 otherwise.
The expected value of the solution:
E[Σ_{j=1}^{m} w_j X_j] = Σ_{j=1}^{m} w_j Pr[X_j = 1] ≤ Σ_{j=1}^{m} w_j (c·ln n) x*_j = (c·ln n) Σ_{j=1}^{m} w_j x*_j = (c·ln n) Z*_LP
We can also show that:
E[Σ_{j=1}^{m} w_j X_j | the solution is a feasible set cover] ≤ 2(c·ln n) Z*_LP
4. INTRODUCTION TO LP-DUALITY
• Introduction to LP-duality
• Min-cut max-flow example
LP-DUALITY
Given a minimization IP or LP, we look for a good lower bound on the optimal value.
In the standard form all the variables are non-negative, so we can combine the constraints to obtain such a bound.
EXAMPLE – LINEAR PROGRAM
Minimize x_1 + x_2 + x_3
Subject to:
x_1 + x_3 ≥ 1
x_1 + x_2 ≥ 1
x_3 ≥ 1
x_1 ≥ 0
x_2 ≥ 0
Lower bound:
x_1 + x_2 + x_3 ≥ 1·(x_1 + x_2) + 1·x_3 ≥ 1 + 1 = 2
EXAMPLE – LINEAR PROGRAM
Minimize x_1 + x_2 + x_3
Subject to:
x_1 + x_3 ≥ 1
x_1 + x_2 ≥ 1
x_3 ≥ 1
x_1 ≥ 0
x_2 ≥ 0
Lower bound, for y_1, y_2 ≥ 0:
x_1 + x_2 + x_3 ≥ y_1·(x_1 + x_2) + y_2·x_3 ≥ y_1·1 + y_2·1
EXAMPLE – LINEAR PROGRAM
Lower bound, for y_1, y_2 ≥ 0:
x_1 + x_2 + x_3 ≥ y_1·(x_1 + x_2) + y_2·x_3 ≥ y_1·1 + y_2·1
(the first inequality holds as long as y_1 ≤ 1 and y_2 ≤ 1, so that the coefficient of each x_j does not exceed its objective coefficient)
The dual program:
Maximize y_1 + y_2
Subject to:
y_1 ≤ 1
y_2 ≤ 1
y_1 ≥ 0
y_2 ≥ 0
LP-DUALITY - SET COVER
Primal
Primal
Minimize Σ_{j=1}^{m} w_j x_j
Subject to:
Σ_{j: e_i∈S_j} x_j ≥ 1, i = 1, …, n
x_j ≥ 0, j = 1, …, m
Dual
Maximize Σ_{i=1}^{n} (1 · y_i)
Subject to:
Σ_{i: e_i∈S_j} y_i ≤ w_j, j = 1, …, m
y_i ≥ 0, i = 1, …, n
LP-DUALITY – DEFINITION – STANDARD FORM
Primal
Primal
Minimize Σ_{j=1}^{n} c_j x_j
Subject to:
Σ_{j=1}^{n} a_{ij} x_j ≥ b_i, i = 1, …, m
x_j ≥ 0, j = 1, …, n
where a_{ij}, b_i, c_j are given rational numbers
Dual
Maximize Σ_{i=1}^{m} b_i y_i
Subject to:
Σ_{i=1}^{m} a_{ij} y_i ≤ c_j, j = 1, …, n
y_i ≥ 0, i = 1, …, m
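Mechanically, forming the dual of a standard-form program swaps b with c and transposes A; a small sketch with list-of-lists matrices (names are illustrative):

```python
def dual_of_standard_form(c, A, b):
    """Primal: min c^T x  s.t.  A x >= b, x >= 0  (A is m-by-n).
    Dual:   max b^T y  s.t.  A^T y <= c, y >= 0."""
    n = len(c)
    m = len(A)
    A_T = [[A[i][j] for i in range(m)] for j in range(n)]  # transpose of A
    return b, A_T, c  # dual objective, dual constraint matrix, dual bounds

# The set cover example: rows are elements A1..A4, columns are sets S_1..S_3
c = [1, 1, 1]        # weights w_j
A = [[1, 0, 1],      # A1 lies in S_1 and S_3
     [1, 1, 0],      # A2 lies in S_1 and S_2
     [0, 0, 1],      # A3 lies in S_3
     [0, 0, 1]]      # A4 lies in S_3
b = [1, 1, 1, 1]
obj, A_T, bounds = dual_of_standard_form(c, A, b)
print(A_T)  # [[1, 1, 0, 0], [0, 1, 0, 0], [1, 0, 1, 1]] – one row per set
```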
THE LP-DUALITY THEOREM
The LP-Duality Theorem: The primal program has a finite optimum if and only if its dual program has a finite optimum. Moreover, if x* = (x*_1, …, x*_n) and y* = (y*_1, …, y*_m) are optimal solutions for the primal and dual programs respectively, then Σ_{j=1}^{n} c_j x*_j = Σ_{i=1}^{m} b_i y*_i
The Weak LP-Duality Theorem: If x = (x_1, …, x_n) and y = (y_1, …, y_m) are feasible solutions for the primal and dual programs respectively, then Σ_{j=1}^{n} c_j x_j ≥ Σ_{i=1}^{m} b_i y_i
Proof:
Since y is dual feasible and the x_j's are non-negative: Σ_{j=1}^{n} c_j x_j ≥ Σ_{j=1}^{n} (Σ_{i=1}^{m} a_{ij} y_i) x_j
Since x is primal feasible and the y_i's are non-negative: Σ_{i=1}^{m} (Σ_{j=1}^{n} a_{ij} x_j) y_i ≥ Σ_{i=1}^{m} b_i y_i
And: Σ_{j=1}^{n} (Σ_{i=1}^{m} a_{ij} y_i) x_j = Σ_{i=1}^{m} (Σ_{j=1}^{n} a_{ij} x_j) y_i
x and y are both optimal solutions if and only if both “≥” hold with equality
COMPLEMENTARY SLACKNESS CONDITIONS
Primal
Minimize Σ_{j=1}^{n} c_j x_j
Subject to:
Σ_{j=1}^{n} a_{ij} x_j ≥ b_i, i = 1, …, m
x_j ≥ 0, j = 1, …, n
where a_{ij}, b_i, c_j are given rational numbers
Dual
Maximize Σ_{i=1}^{m} b_i y_i
Subject to:
Σ_{i=1}^{m} a_{ij} y_i ≤ c_j, j = 1, …, n
y_i ≥ 0, i = 1, …, m
Complementary Slackness Conditions Theorem:
Let x = (x_1, …, x_n) and y = (y_1, …, y_m) be feasible solutions for the primal and dual programs respectively. Then x and y are both optimal if and only if all of the following conditions are satisfied:
• Primal complementary slackness conditions: for each 1 ≤ j ≤ n, either x_j = 0 or Σ_{i=1}^{m} a_{ij} y_i = c_j
• Dual complementary slackness conditions: for each 1 ≤ i ≤ m, either y_i = 0 or Σ_{j=1}^{n} a_{ij} x_j = b_i
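The conditions are easy to check mechanically. A sketch on the slides' small LP, with a dual optimum picked by hand (y = (0, 1, 1) is one valid choice; this particular y is an assumption of the example):

```python
def complementary_slackness_holds(A, b, c, x, y, eps=1e-9):
    """Check both slackness conditions for primal-feasible x and dual-feasible y."""
    m, n = len(A), len(A[0])
    # Primal conditions: x_j = 0 or sum_i a_ij y_i = c_j
    primal = all(abs(x[j]) < eps or
                 abs(sum(A[i][j] * y[i] for i in range(m)) - c[j]) < eps
                 for j in range(n))
    # Dual conditions: y_i = 0 or sum_j a_ij x_j = b_i
    dual = all(abs(y[i]) < eps or
               abs(sum(A[i][j] * x[j] for j in range(n)) - b[i]) < eps
               for i in range(m))
    return primal and dual

# min x1+x2+x3  s.t.  x1+x3 >= 1, x1+x2 >= 1, x3 >= 1, x >= 0
A = [[1, 0, 1], [1, 1, 0], [0, 0, 1]]
b = [1, 1, 1]
c = [1, 1, 1]
x_opt = [0.5, 0.5, 1.0]  # primal optimum from the slides
y_opt = [0.0, 1.0, 1.0]  # a dual optimum, objective 1 + 1 = 2
print(complementary_slackness_holds(A, b, c, x_opt, y_opt))  # True
```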
LP-DUALITY – EXAMPLE – MIN CUT - MAX FLOW
Directed graph G=(V,E)
𝑠, 𝑡 ∈ 𝑉
Positive capacities: 𝑐: 𝐸 → 𝑅+
Find the maximum amount of flow that can be sent from s to t subject to:
Capacity constraints: for each arc e, the flow sent through e is bounded by its capacity
Flow conservation: at each node v, other than s and t, the total flow into v should be equal to the total flow out of v
LP-DUALITY – EXAMPLE – MIN CUT - MAX FLOW
s-t cut: a partition of the nodes into two sets (X, X̄) such that s ∈ X and t ∈ X̄; the cut consists of the set of arcs going from X to X̄.
The capacity of the cut, c(X, X̄), is the sum of the capacities of those arcs.
The min-cut max-flow theorem: if the capacity of a given s-t cut (X, X̄) is equal to the value of a feasible flow f, then (X, X̄) must be a minimum s-t cut and f must be a maximum flow.
LP-DUALITY – EXAMPLE – MIN CUT - MAX FLOW
Primal
Maximize f_ts
Subject to:
Capacity: f_ij ≤ c_ij, (i, j) ∈ E
Conservation: Σ_{j: (j,i)∈E} f_ji − Σ_{j: (i,j)∈E} f_ij ≤ 0, i ∈ V
f_ij ≥ 0, (i, j) ∈ E
(A fictitious arc (t, s) of infinite capacity is added, so maximizing f_ts maximizes the total s-t flow.)
LP-DUALITY – EXAMPLE – MIN CUT - MAX FLOW
Primal
Maximize f_ts
Subject to:
Capacity: f_ij ≤ c_ij, (i, j) ∈ E
Conservation: Σ_{j: (j,i)∈E} f_ji − Σ_{j: (i,j)∈E} f_ij ≤ 0, i ∈ V
f_ij ≥ 0, (i, j) ∈ E
Dual
Minimize Σ_{(i,j)∈E} c_ij d_ij
Subject to:
d_ij − p_i + p_j ≥ 0, (i, j) ∈ E
p_s − p_t ≥ 1
d_ij ≥ 0, (i, j) ∈ E
p_i ≥ 0, i ∈ V
LP-DUALITY – EXAMPLE – MIN CUT - MAX FLOW
The dual is an LP relaxation of the following IP:
Minimize Σ_{(i,j)∈E} c_ij d_ij
Subject to:
d_ij − p_i + p_j ≥ 0, (i, j) ∈ E
p_s − p_t ≥ 1
d_ij ∈ {0, 1}, (i, j) ∈ E
p_i ∈ {0, 1}, i ∈ V
Let (d*, p*) be an optimal solution.
The only way to satisfy p*_s − p*_t ≥ 1 is p*_s = 1, p*_t = 0.
The solution therefore defines an s-t cut: for each i ∈ V, if p*_i = 1 then i ∈ X, otherwise (p*_i = 0) i ∈ X̄.
If i ∈ X and j ∈ X̄ with (i, j) ∈ E:
d*_ij − p*_i + p*_j = d*_ij − 1 + 0 ≥ 0 ⇒ d*_ij = 1
The solution defines a min cut!
LP-DUALITY – EXAMPLE – MIN CUT - MAX FLOW
It is possible to show that the dual program always has an integral optimal solution
By the LP-duality theorem, the maximum flow in G must be equal to the value of the dual's optimal solution
Since the latter equals the capacity of a minimum s-t cut, we get the min-cut max-flow theorem!
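The theorem can be checked numerically. Below is a standard Edmonds-Karp sketch (not from the slides) on a small hypothetical graph; the cut side X is read off the final residual graph:

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp: augment along shortest residual s-t paths until none remain."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        # Bottleneck along the path, then update residual capacities
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck
    return flow, set(parent)  # nodes reachable in the residual graph form X

# Hypothetical capacities, s = 0, t = 3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
flow, X = max_flow(4, [row[:] for row in cap], 0, 3)
cut_capacity = sum(cap[u][v] for u in X for v in range(4) if v not in X)
print(flow, cut_capacity)  # 4 4 – equal, as the min-cut max-flow theorem guarantees
```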
LP-DUALITY – HOW DOES IT HELP US?
We can devise an algorithm that finds an integral solution to the primal and simultaneously a feasible solution to
the dual.
The approximation guarantee is established by comparing the cost of these two solutions.
We can choose the two solutions wisely; we don't have to work with an arbitrary optimal solution to the LP.
The algorithm can be made more efficient since it doesn’t have to first solve the LP optimally.
5. GREEDY APPROXIMATION ALGORITHM
• Greedy approximation algorithm
• Analysis using LP-duality
GREEDY ALGORITHM
1. C ← ∅, I ← ∅
2. While C ≠ E do:
   1. Find the most cost-effective set S_j in the current iteration: α = min_{j: S_j \ C ≠ ∅} w_j / |S_j \ C|
   2. Pick S_j; for each e ∈ S_j \ C:
      1. Set price(e) ← α
      2. Add e to C
   3. Add j to I
3. Output I
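A direct Python sketch of the greedy algorithm above (0-based set indices; ties are broken by taking the first minimizer):

```python
def greedy_set_cover(ground_set, subsets, weights):
    """Repeatedly pick the most cost-effective set: minimal w_j / |S_j \\ C|."""
    C, I, price = set(), [], {}
    while C != ground_set:
        # Only sets that still cover new elements are candidates
        j = min((k for k in range(len(subsets)) if subsets[k] - C),
                key=lambda k: weights[k] / len(subsets[k] - C))
        alpha = weights[j] / len(subsets[j] - C)
        for e in subsets[j] - C:
            price[e] = alpha
        C |= subsets[j]
        I.append(j)
    return I, price

E = {"A1", "A2", "A3", "A4"}
S = [{"A1", "A2"}, {"A2"}, {"A1", "A3", "A4"}]
I, price = greedy_set_cover(E, S, [1, 1, 1])
print(I)  # [2, 0]: S_3 first (cost-effectiveness 1/3), then S_1 covers A2 at price 1
```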
EXAMPLE
E = {A_1, A_2, A_3, A_4}
S_1 = {A_1, A_2}, S_2 = {A_2}, S_3 = {A_1, A_3, A_4}
w_1 = w_2 = w_3 = 1
Iteration 1:
α = 1/3, attained for j = 3
price(A_1) = price(A_3) = price(A_4) = 1/3
C ← {A_1, A_3, A_4}, I ← {3}
Iteration 2:
α = 1, attained for j = 1 (and for j = 2)
price(A_2) = 1
C ← {A_1, A_3, A_4} ∪ {A_2}, I ← {3} ∪ {1}
LP-DUALITY - SET COVER
Primal
Primal
Minimize Σ_{j=1}^{m} w_j x_j
Subject to:
Σ_{j: e_i∈S_j} x_j ≥ 1, i = 1, …, n
x_j ≥ 0, j = 1, …, m
Dual
Maximize Σ_{i=1}^{n} (1 · y_i)
Subject to:
Σ_{i: e_i∈S_j} y_i ≤ w_j, j = 1, …, m
y_i ≥ 0, i = 1, …, n
LP-DUALITY
The prices of the elements give us a setting for the dual variables: y_i ← price(e_i)/H_n, i = 1, …, n, where H_n = 1 + 1/2 + ⋯ + 1/n.
Lemma: The vector y defined by y_i ← price(e_i)/H_n, i = 1, …, n, is a feasible solution to the dual.
Proof:
Consider a set S_j consisting of k elements, and number its elements in the order in which they were covered by the algorithm.
Consider the iteration in which the i-th of these elements, say e_{i′}, gets covered. Just before that iteration, S_j still contains at least k − i + 1 uncovered elements, so S_j itself could cover e_{i′} at cost-effectiveness at most w_j/(k − i + 1); the set the algorithm actually picks is at least as cost-effective.
Hence y_{i′} = price(e_{i′})/H_n ≤ (w_j/(k − i + 1)) · (1/H_n)
Summing over all k elements of S_j:
Σ_{e_i∈S_j} y_i ≤ (w_j/H_n) · (1/k + 1/(k − 1) + ⋯ + 1/1) = (H_k/H_n) · w_j ≤ w_j
EXAMPLE
price(A_1) = price(A_3) = price(A_4) = 1/3
price(A_2) = 1
H_4 = 1 + 1/2 + 1/3 + 1/4 = 25/12
y_1 = y_3 = y_4 = (1/3)/H_4 = 4/25 = 0.16
y_2 = 1/H_4 = 12/25 = 0.48
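The arithmetic can be verified with exact fractions; the prices are those produced by the greedy run on this example (1/3 for the elements covered by S_3, 1 for A_2):

```python
from fractions import Fraction

price = {"A1": Fraction(1, 3), "A3": Fraction(1, 3), "A4": Fraction(1, 3),
         "A2": Fraction(1)}
n = 4
H_n = sum(Fraction(1, k) for k in range(1, n + 1))
print(H_n)  # 25/12

y = {e: p / H_n for e, p in price.items()}
print(y["A1"], y["A2"])  # 4/25 12/25, i.e. 0.16 and 0.48

# Dual feasibility: the y_i of each set sum to at most its weight w_j = 1
S = [{"A1", "A2"}, {"A2"}, {"A1", "A3", "A4"}]
print(all(sum(y[e] for e in Sj) <= 1 for Sj in S))  # True
```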
LP-DUALITY
Theorem: The approximation guarantee of the greedy algorithm is 𝐻𝑛.
Proof:
The weight of each picked set is distributed over the elements it newly covers, so the cost of the set cover picked is:
Σ_{j∈I} w_j = Σ_{i=1}^{n} price(e_i) = H_n · Σ_{i=1}^{n} y_i
Any feasible solution of the dual gives a lower bound on the optimal value of the primal.
Any lower bound on the LP (the primal) is a lower bound on the IP as well.
Since y is a feasible solution of the dual: H_n · Σ_{i=1}^{n} y_i ≤ H_n · OPT
By the properties of the harmonic numbers: ln(n + 1) < H_n ≤ 1 + ln n
There are strong complexity-theoretic reasons to believe that no polynomial-time approximation algorithm can achieve a significantly better approximation ratio!
WHAT CELL IS YOUR MOBILE CONNECTED TO?
Taub
G-NetTrack application for iPhone or Android
QUESTIONS?