
MST-Kruskal( G, w )



MST-Kruskal(G, w)

1. A ← ∅                                   // initially A is empty
2. for each vertex v ∈ V[G]                // lines 2-3 take O(V) time
3.     do Create-Set(v)                    // create a set for each vertex
4. sort the edges of E by nondecreasing weight w
5. for each edge (u,v) ∈ E, in order by nondecreasing weight
6.     do if Find-Set(u) ≠ Find-Set(v)     // u and v are in different trees
7.         then A ← A ∪ {(u,v)}
8.              Union(u,v)
9. return A

Total running time is O(E lg E).
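
Below is a minimal Python sketch of the same procedure (not from the slides). It assumes the graph is given as a list of (weight, u, v) tuples with vertices numbered 0 .. n-1, and it uses a simple union-find with union by rank and path compression in place of Create-Set / Find-Set / Union; all names are illustrative.

def mst_kruskal(num_vertices, edges):
    # edges: list of (weight, u, v), vertices numbered 0 .. num_vertices-1
    parent = list(range(num_vertices))   # Create-Set for each vertex
    rank = [0] * num_vertices

    def find_set(x):
        while parent[x] != x:            # path compression (halving)
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find_set(x), find_set(y)
        if rank[rx] < rank[ry]:          # union by rank
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1

    a = []                               # A: edges of the MST
    for weight, u, v in sorted(edges):   # nondecreasing weight
        if find_set(u) != find_set(v):   # u and v are in different trees
            a.append((u, v, weight))
            union(u, v)
    return a

# e.g. mst_kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
# returns [(0, 1, 1), (1, 2, 2), (2, 3, 4)]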


Analysis of Kruskal

Lines 1-3 (initialization): O(V)

Line 4 (sorting): O(E lg E)

Lines 6-8 (set operations): O(E lg E)

Total: O(E lg E), which is also O(E lg V) since lg E = O(lg V)


Correctness of Kruskal

Idea: Show that every edge added is a safe edge for A

Assume (u, v) is the next edge to be added to A.
– It will not create a cycle, since u and v lie in different trees of the forest A.

Let A' denote the tree of the forest A that contains vertex u. Consider the cut (A', V − A').
– This cut respects A (why?)
– and (u, v) is the light edge across the cut (why?)

Thus, by the MST Lemma, (u,v) is safe.


Intuition behind Prim’s Algorithm

Consider the set of vertices S currently part of the tree, and its complement (V-S). We have a cut of the graph and the current set of tree edges A is respected by this cut.

Which edge should we add next? Light edge!


Basics of Prim's Algorithm

It works by adding leaves one at a time to the current tree.
– Start with the root vertex r (it can be any vertex). At any time, the subset of edges A forms a single tree. S = vertices of A.
– At each step, a light edge connecting a vertex in S to a vertex in V − S is added to the tree.
– The tree grows until it spans all the vertices in V.

Implementation issues:
– How to update the cut efficiently?
– How to determine the light edge quickly?


Implementation: Priority Queue

A priority queue implemented with a heap supports the following operations in O(lg n) time:
– Insert(Q, u, key): insert u with key value key into Q
– u = Extract_Min(Q): extract the item with the minimum key value in Q
– Decrease_Key(Q, u, new_key): decrease u's key value to new_key

All the vertices that are not in S (S being the vertices of the edges in A) reside in a priority queue Q based on a key field. When the algorithm terminates, Q is empty and A = {(v, π[v]) : v ∈ V − {r}}.
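
A sketch of such a queue in Python (illustrative, not from the slides): heapq has no built-in Decrease_Key, so this version uses the common "lazy deletion" workaround, pushing a fresh entry on every decrease and skipping stale entries on extraction; each operation is still O(lg n) in the heap size.

import heapq

class MinPQ:
    def __init__(self):
        self.heap = []    # entries are (key, vertex) pairs
        self.key = {}     # current best key of each vertex still in the queue

    def insert(self, u, key):
        self.key[u] = key
        heapq.heappush(self.heap, (key, u))

    def decrease_key(self, u, new_key):
        if new_key < self.key[u]:
            self.key[u] = new_key
            heapq.heappush(self.heap, (new_key, u))   # old entry becomes stale

    def extract_min(self):
        while self.heap:
            key, u = heapq.heappop(self.heap)
            if self.key.get(u) == key:    # skip stale entries
                del self.key[u]           # u leaves the queue
                return u
        return None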


Example: Prim’s Algorithm


MST-Prim(G, w, r)

1. Q ← V[G]
2. for each vertex u ∈ Q               // initialization: O(V) time
3.     do key[u] ← ∞
4. key[r] ← 0                          // start at the root
5. π[r] ← NIL                          // set parent of r to be NIL
6. while Q ≠ ∅                         // until all vertices are in the MST
7.     do u ← Extract-Min(Q)           // vertex with lightest edge
8.        for each v ∈ Adj[u]
9.            do if v ∈ Q and w(u,v) < key[v]
10.               then π[v] ← u
11.                    key[v] ← w(u,v) // new lighter edge out of v
12.                    Decrease_Key(Q, v, key[v])
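
A Python sketch of MST-Prim (illustrative, not from the slides): it assumes the graph is an adjacency-list dict and uses heapq with the lazy-deletion trick in place of Decrease_Key.

import heapq
import math

def mst_prim(adj, r):
    # adj: dict mapping each vertex to a list of (neighbor, weight) pairs
    key = {u: math.inf for u in adj}     # key[u]: lightest edge seen into u
    parent = {u: None for u in adj}      # pi[u]
    key[r] = 0
    in_tree = set()                      # S: vertices already spanned by A
    heap = [(0, r)]                      # (key, vertex); stale entries allowed

    while heap:
        _, u = heapq.heappop(heap)
        if u in in_tree:                 # stale entry, skip it
            continue
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w               # new lighter edge out of v
                parent[v] = u
                heapq.heappush(heap, (w, v))   # stands in for Decrease_Key

    return [(parent[v], v, key[v]) for v in adj if parent[v] is not None]

# e.g. adj = {0: [(1, 1), (2, 3)], 1: [(0, 1), (2, 2)], 2: [(0, 3), (1, 2)]}
# mst_prim(adj, 0) returns [(0, 1, 1), (1, 2, 2)]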


Analysis of Prim

Extracting the vertex from the queue: O(lg n)

For each incident edge, decreasing the key of the neighboring vertex: O(lg n) where n = |V|

The other steps are constant time.

The overall running time, where e = |E|, is
T(n) = Σ_{u∈V} (lg n + deg(u)·lg n) = Σ_{u∈V} (1 + deg(u))·lg n
     = lg n · (n + 2e) = O((n + e) lg n)

Essentially the same as Kruskal's: O((n + e) lg n) time


Correctness of Prim

Again, show that every edge added is a safe edge for A

Assume (u, v) is the next edge to be added to A. Consider the cut (S, V − S), where S is the set of vertices already in the tree.
– This cut respects A (why?)
– and (u, v) is the light edge across the cut (why?)

Thus, by the MST Lemma, (u,v) is safe.


Optimization Problems

Problems in which a set of choices must be made in order to arrive at an optimal (min/max) solution, subject to some constraints. (There may be several solutions that achieve the optimal value.)

Two common techniques:
– Dynamic Programming (global)
– Greedy Algorithms (local)


Dynamic Programming

Similar to divide-and-conquer, it breaks problems down into smaller problems that are solved recursively.

In contrast to D&C, DP is applicable when the sub-problems are not independent, i.e. when sub-problems share sub-subproblems. It solves every sub-subproblem just once and saves the results in a table to avoid duplicated computation.
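
As a toy illustration of that reuse (not from the slides): the naive recursion below recomputes shared sub-subproblems, while the memoized version saves each answer in a table and computes it only once.

from functools import lru_cache

def fib_naive(n):
    # exponential time: fib_naive(n-2) is recomputed inside fib_naive(n-1)
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # each sub-subproblem is solved once and cached: O(n) distinct calls
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)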


Elements of DP Algorithms

Substructure: decompose the problem into smaller sub-problems. Express the solution of the original problem in terms of solutions for smaller problems.

Table-structure: store the answers to the sub-problems in a table, because sub-problem solutions may be used many times.

Bottom-up computation: combine solutions on smaller sub-problems to solve larger sub-problems, and eventually arrive at a solution to the complete problem.
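
A minimal bottom-up sketch of the same toy problem (illustrative only), filling a table from the smallest sub-problems up to the complete problem:

def fib_bottom_up(n):
    if n < 2:
        return n
    table = [0] * (n + 1)      # table[i] holds the answer to sub-problem i
    table[1] = 1
    for i in range(2, n + 1):  # combine smaller entries into larger ones
        table[i] = table[i - 1] + table[i - 2]
    return table[n]            # answer to the complete problem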


Applicability to Optimization Problems

Optimal sub-structure (principle of optimality): for the global problem to be solved optimally, each sub-problem should be solved optimally. This is often violated due to sub-problem overlaps: by being “less optimal” on one sub-problem, we may achieve a big saving on another.

Small number of sub-problems: many NP-hard problems can be formulated as DP problems, but these formulations are not efficient, because the number of sub-problems is exponentially large. Ideally, the number of sub-problems should be at most polynomial.