UMass Lowell Computer Science 91.503 Analysis of Algorithms, Prof. Karen Daniels, Fall 2006. Lecture 1 (Part 3): Design Patterns for Optimization Problems (Dynamic Programming & Greedy Algorithms)


Page 1: UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Fall, 2006


Lecture 1 (Part 3)

Design Patterns for Optimization Problems: Dynamic Programming & Greedy Algorithms

Page 2:

Algorithmic Paradigm Context

Both Divide & Conquer and Dynamic Programming view a problem as a collection of subproblems and have a "recursive" nature.

Divide & Conquer:
- independent subproblems
- number of subproblems depends on partitioning factors; typically small
- characteristic running time: typically a log function of n

Dynamic Programming:
- overlapping subproblems
- preprocessing
- running time depends on the number and difficulty of subproblems
- primarily for optimization problems
- optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems

Page 3:

Dynamic Programming Approach to Optimization Problems

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in bottom-up fashion.
4. Construct an optimal solution from computed information.

source: 91.503 textbook Cormen, et al.

Page 4:

Dynamic Programming

Matrix Parenthesization

Page 5:

Example: Matrix Parenthesization Definitions

Given a "chain" of n matrices: <A1, A2, …, An>
Compute the product A1A2…An efficiently
Minimize "cost" = number of scalar multiplications
Multiplication order matters!

source: 91.503 textbook Cormen, et al.

Page 6:

Example: Matrix Parenthesization Step 1: Characterizing an Optimal Solution

Observation: Any parenthesization of AiAi+1…Aj must split it between Ak and Ak+1 for some k.

THM: Optimal Matrix Parenthesization: If an optimal parenthesization of AiAi+1…Aj splits at k, then the parenthesization of the prefix AiAi+1…Ak must be an optimal parenthesization.

Why? If there existed a less costly way to parenthesize the prefix, then substituting that parenthesization would yield a less costly way to parenthesize AiAi+1…Aj, contradicting the optimality of that parenthesization.

common DP proof technique: "cut-and-paste" proof by contradiction

source: 91.503 textbook Cormen, et al.

Page 7:

Example: Matrix Parenthesization Step 2: A Recursive Solution

Recursive definition of minimum parenthesization cost:

m[i,j] = 0                                                        if i = j
m[i,j] = min over i <= k < j of { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }   if i < j

where each matrix Ai has dimensions p_{i-1} x p_i.

How many distinct subproblems?

source: 91.503 textbook Cormen, et al.
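The recurrence above can be filled in bottom-up by increasing chain length. A minimal Python sketch (the function name and table layout are illustrative, not from the slides):

```python
def matrix_chain_order(p):
    """Bottom-up DP for the matrix-chain recurrence.

    p: dimension list; matrix A_i is p[i-1] x p[i], so the chain has
    n = len(p) - 1 matrices. Returns the 1-indexed table m, where
    m[i][j] is the minimum number of scalar multiplications needed
    to compute A_i ... A_j.
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 (base case)
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # split between A_k and A_(k+1)
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
    return m

# tiny example: A1 is 10x100, A2 is 100x5, A3 is 5x50;
# (A1 A2) A3 costs 10*100*5 + 10*5*50 = 7500, the optimum
print(matrix_chain_order([10, 100, 5, 50])[1][3])  # 7500
```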

Page 8:

Example: Matrix Parenthesization Step 3: Computing Optimal Costs

[figure: the m and s tables for the textbook example; sample m entries include 0; 1,000; 2,500; 2,625]

s: value of k that achieves optimal cost in computing m[i,j]

source: 91.503 textbook Cormen, et al.

Page 9:

Example: Matrix Parenthesization Step 4: Constructing an Optimal Solution

PRINT-OPTIMAL-PARENS(s, i, j)
  if i = j
    then print "A"i
    else print "("
         PRINT-OPTIMAL-PARENS(s, i, s[i,j])
         PRINT-OPTIMAL-PARENS(s, s[i,j]+1, j)
         print ")"

source: 91.503 textbook Cormen, et al.
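In Python, the same reconstruction can be sketched by also recording the split table s during the bottom-up pass, then recursing on it exactly as PRINT-OPTIMAL-PARENS does (names are illustrative):

```python
def matrix_chain_parens(p):
    """Compute the m and s tables for the matrix-chain recurrence,
    then build the optimal parenthesization of A_1 ... A_n as a string."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split k
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k

    def parens(i, j):                           # PRINT-OPTIMAL-PARENS, returning a string
        if i == j:
            return f"A{i}"
        k = s[i][j]
        return "(" + parens(i, k) + parens(k + 1, j) + ")"

    return parens(1, n)

print(matrix_chain_parens([10, 100, 5, 50]))  # ((A1A2)A3)
```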

Page 10:

Example: Matrix Parenthesization Memoization

Provides Dynamic Programming efficiency, but with a top-down strategy:
use recursion; fill in the table "on demand".

Example: RECURSIVE-MATRIX-CHAIN:

MEMOIZED-MATRIX-CHAIN(p)
1  n ← length[p] - 1
2  for i ← 1 to n
3    do for j ← i to n
4      do m[i,j] ← ∞
5  return LOOKUP-CHAIN(p,1,n)

LOOKUP-CHAIN(p,i,j)
1  if m[i,j] < ∞
2    then return m[i,j]
3  if i = j
4    then m[i,j] ← 0
5    else for k ← i to j-1
6      do q ← LOOKUP-CHAIN(p,i,k) + LOOKUP-CHAIN(p,k+1,j) + p_{i-1} p_k p_j
7        if q < m[i,j]
8          then m[i,j] ← q
9  return m[i,j]

source: 91.503 textbook Cormen, et al.
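A direct Python transcription of LOOKUP-CHAIN (a sketch; a dictionary memo stands in for the m table initialized to ∞):

```python
def memoized_matrix_chain(p):
    """Top-down matrix-chain cost, filling the table on demand
    (mirrors MEMOIZED-MATRIX-CHAIN / LOOKUP-CHAIN)."""
    n = len(p) - 1
    m = {}                                  # memo: (i, j) -> best cost

    def lookup_chain(i, j):
        if (i, j) in m:                     # lines 1-2: already computed
            return m[(i, j)]
        if i == j:
            m[(i, j)] = 0                   # base case
        else:
            m[(i, j)] = min(
                lookup_chain(i, k) + lookup_chain(k + 1, j)
                + p[i - 1] * p[k] * p[j]
                for k in range(i, j)        # split between A_k and A_(k+1)
            )
        return m[(i, j)]

    return lookup_chain(1, n)

print(memoized_matrix_chain([10, 100, 5, 50]))  # 7500, matching the bottom-up version
```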

Page 11:

Dynamic Programming

Longest Common Subsequence

Page 12:

Example: Longest Common Subsequence (LCS): Motivation

Strand of DNA: string over finite set {A,C,G,T}; each element of the set is a base: adenine, guanine, cytosine, or thymine.

Compare DNA similarities:
S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA
S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA

One measure of similarity: find the longest string S3 containing bases that also appear (not necessarily consecutively) in S1 and S2:
S3 = GTCGTCGGAAGCCGGCCGAA

source: 91.503 textbook Cormen, et al.

Page 13:

Example: LCS Definitions

Sequence Z = <z1, z2, …, zk> is a subsequence of X = <x1, x2, …, xm> if there exists a strictly increasing sequence <i1, i2, …, ik> of indices of X such that x_{i_j} = z_j for all j.
example: <B, C, D, B> is a subsequence of <A, B, C, B, D, A, B> with index sequence <2, 3, 5, 7>.

Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.
example: for X = <A, B, C, B, D, A, B> and Y = <B, D, C, A, B, A>, <B, C, A> is a common subsequence but not a longest common subsequence. Longest?

Longest Common Subsequence Problem: Given 2 sequences X, Y, find a maximum-length common subsequence Z.

source: 91.503 textbook Cormen, et al.
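The subsequence definition above can be checked with a single left-to-right scan over X (an illustrative helper, not from the slides):

```python
def is_subsequence(z, x):
    """True iff z is a subsequence of x: scan x once, matching the
    symbols of z in order at strictly increasing indices."""
    it = iter(x)
    # `symbol in it` advances the iterator past the match, so the
    # next symbol must appear strictly later in x
    return all(symbol in it for symbol in z)

print(is_subsequence("BCDB", "ABCBDAB"))  # True: index sequence 2, 3, 5, 7
print(is_subsequence("AXC", "ABCBDAB"))   # False
```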

Page 14:

Example: LCS Step 1: Characterize an LCS

THM 15.1: Optimal LCS Substructure
Given sequences X = <x1, …, xm> and Y = <y1, …, yn>, for any LCS Z = <z1, …, zk> of X and Y:

1. if xm = yn, then zk = xm = yn and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}
2. if xm != yn and zk != xm, then Z is an LCS of X_{m-1} and Y
3. if xm != yn and zk != yn, then Z is an LCS of X and Y_{n-1}

PROOF: based on producing contradictions

1 a) Suppose zk != xm. Appending xm = yn to Z contradicts the longest nature of Z.
  b) To establish the longest nature of Z_{k-1}, suppose a common subsequence W of X_{m-1} and Y_{n-1} has length > k-1. Appending xm = yn to W yields a common subsequence of length > k, a contradiction.
2 A common subsequence W of X_{m-1} and Y of length > k would also be a common subsequence of X_m, Y, contradicting the longest nature of Z.
3 Similar to proof of (2)

source: 91.503 textbook Cormen, et al.

Page 15:

Example: LCS Step 2: A Recursive Solution

Implications of Thm 15.1:

Is xm = yn?
  yes: find LCS(X_{m-1}, Y_{n-1}); then LCS1(X, Y) = LCS(X_{m-1}, Y_{n-1}) + xm
  no:  find LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}); then LCS2(X, Y) = max(LCS(X_{m-1}, Y), LCS(X, Y_{n-1}))

Page 16:

Example: LCS Step 2: A Recursive Solution (continued)

Overlapping subproblem structure: computing LCS(X, Y) requires LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}), both of which share the subproblem LCS(X_{m-1}, Y_{n-1}).

Conditions of the problem can exclude some subproblems!

Recurrence for the length of an optimal solution:

c[i,j] = 0                         if i = 0 or j = 0
c[i,j] = c[i-1,j-1] + 1            if i, j > 0 and xi = yj
c[i,j] = max(c[i,j-1], c[i-1,j])   if i, j > 0 and xi != yj

Θ(mn) distinct subproblems

source: 91.503 textbook Cormen, et al.
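The c[i,j] recurrence translates directly into a bottom-up table fill (a sketch; names are not from the slides):

```python
def lcs_length(x, y):
    """Bottom-up fill of the c table: c[i][j] is the LCS length of
    the prefixes x[:i] and y[:j] (row/column 0 is the base case)."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:            # xi = yj: extend the diagonal
                c[i][j] = c[i - 1][j - 1] + 1
            else:                               # xi != yj: drop one symbol
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c[m][n]

# classic textbook pair: an LCS of ABCBDAB and BDCABA is BCBA, length 4
print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```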

Page 17:

Example: LCS Step 3: Compute Length of an LCS

[figure: the c table for the example, with arrows representing the b table; rows and columns indexed 0-4 and labeled with the sequence symbols B, C, B, A]

What is the asymptotic worst-case time complexity?

source: 91.503 textbook Cormen, et al.

Page 18:

Example: LCS Step 4: Construct an LCS

source: 91.503 textbook Cormen, et al.

Page 19:

Example: LCS Improve the Code

Can eliminate the b table: c[i,j] depends on only 3 other c table entries:
c[i-1,j-1], c[i-1,j], c[i,j-1]
- given the value of c[i,j], can pick the one in O(1) time
- reconstruct the LCS in O(m+n) time, similar to PRINT-LCS
- same Θ(mn) space, but Θ(mn) was needed anyway...

Asymptotic space reduction: leverage the fact that only 2 rows of the c table are needed at a time:
- the row being computed
- the previous row

Can also do it with ~ space for 1 row of the c table, but that does not preserve the LCS reconstruction data.

source: 91.503 textbook Cormen, et al.
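The two-row space reduction can be sketched like this (length only; as noted above, the reconstruction data is not preserved):

```python
def lcs_length_two_rows(x, y):
    """LCS length using only two rows of the c table:
    `prev` holds row i-1, `curr` is the row being computed."""
    n = len(y)
    prev = [0] * (n + 1)                    # row 0: base case
    for i in range(1, len(x) + 1):
        curr = [0] * (n + 1)                # column 0 stays 0
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev = curr                         # current row becomes previous
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))  # 4, same as the full table
```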

Page 20:

Dynamic Programming

Activity Selection

Page 21:

Activity Selection Optimization Problem

Problem Instance:
- Set S = {1, 2, ..., n} of n activities
- Each activity i has:
  start time: si
  finish time: fi, with si <= fi
- Activities i, j are compatible iff non-overlapping: the intervals [si, fi) and [sj, fj) do not intersect

Objective: select a maximum-sized set of mutually compatible activities

source: 91.503 textbook Cormen, et al.

Page 22:

source: 91.503 textbook Cormen, et al.

Page 23:

Algorithmic Progression

"Brute-Force" (board work)

Dynamic Programming #1: exponential number of subproblems (board work)

Dynamic Programming #2: quadratic number of subproblems (board work)

Greedy Algorithm (board work: next week)
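As a preview of the greedy step in this progression, the standard earliest-finish-time strategy from the same textbook chapter can be sketched as follows (a sketch, not the board-work development):

```python
def greedy_activity_select(activities):
    """Earliest-finish-time greedy: sort by finish time, then repeatedly
    take the first activity compatible with the last one selected.

    activities: list of (start, finish) pairs, treated as half-open
    intervals [start, finish), so finish = next start is compatible."""
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:            # compatible with everything chosen
            selected.append((start, finish))
            last_finish = finish
    return selected

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
print(len(greedy_activity_select(acts)))  # 4 mutually compatible activities
```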