Analysis and Design of Algorithms. Deepak John, Department of Computer Applications, SJCET-Pala




Analyzing Algorithms and problems. Classifying functions by their asymptotic growth rate. Recursive procedures. Recurrence equations - Substitution Method, Changing variables, Recursion Tree, Master Theorem. Design Techniques- Divide and Conquer, Dynamic Programming, Greedy, Backtracking


Page 1: Anlysis and design of algorithms part 1

Analysis and Design of Algorithms

Deepak John, Department of Computer Applications, SJCET-Pala

Page 2: Anlysis and design of algorithms part 1

What is an Algorithm
An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
It is thus a sequence of computational steps that transform the input into the output.
It is a tool for solving a well-specified computational problem.

Page 3: Anlysis and design of algorithms part 1

What is a Program
A program is the expression of an algorithm in a programming language: a set of instructions which the computer will follow to solve a problem.

Page 4: Anlysis and design of algorithms part 1

Importance of Analyzing Algorithms
Need to recognize limitations of various algorithms for solving a problem
Need to understand the relationship between problem size and running time
Need to learn how to analyze an algorithm's running time without coding it
Need to learn techniques for writing more efficient code
Need to recognize bottlenecks in code as well as which parts of code are easiest to optimize

Page 5: Anlysis and design of algorithms part 1

Floor and Ceiling
⌊x⌋, the floor of x, is the greatest integer less than or equal to x; ⌈x⌉, the ceiling of x, is the least integer greater than or equal to x.

Page 6: Anlysis and design of algorithms part 1

Three useful formulas

Page 7: Anlysis and design of algorithms part 1

Analyzing Algorithms and Problems
An algorithm is a method or process to solve a problem. We analyze it with respect to the following properties:
Correctness
Amount of work done (average and worst case analysis)
Amount of space used
Simplicity and clarity
Optimality
Implementation and programming
Lower bounds and the complexity of problems

Page 8: Anlysis and design of algorithms part 1

Correctness
Preconditions (characteristics of the input for which the algorithm is expected to work) and postconditions (the result it is to produce for each input)
Solution method
Implementation
Amount of Work Done
Highly dependent on the programming language used and the programmer's style
Also referred to as the complexity of the algorithm

Average and Worst Case Analysis
Let Dn be the set of inputs of size n and I be an element of Dn. Let t(I) be the number of basic operations performed by the algorithm on input I.
Worst-case complexity W(n) is the maximum number of basic operations performed by the algorithm on any input of size n:
W(n) = max { t(I) | I ∈ Dn }
Average-case complexity A(n): let Pr(I) be the probability that input I occurs. Then
A(n) = Σ over I ∈ Dn of Pr(I) · t(I)

Page 9: Anlysis and design of algorithms part 1

Amount of space used
storage space (instructions, constants, etc.) and extra space for input
Simplicity, clarity and optimality
an algorithm is optimal if there is no algorithm in the class that performs fewer basic operations
Lower bounds and the complexity of problems

Page 10: Anlysis and design of algorithms part 1

Algorithm Complexity

Worst-case complexity: provides an upper bound on running time; the function defined by the maximum number of steps taken on any instance of size n.
Best-case complexity: the function defined by the minimum number of steps taken on any instance of size n.
Average-case complexity: provides the expected running time; the function defined by the average number of steps taken on any instance of size n.

Page 11: Anlysis and design of algorithms part 1

Best, Worst, and Average Case Complexity

Page 12: Anlysis and design of algorithms part 1

Sequential Search, Unordered
Input: E, n, K, where E is an array with n entries (indexed 0, ..., n-1), and K is the item sought. For simplicity, we assume that K and the entries of E are integers, as is n.
Output: Returns ans, the location of K in E (-1 if K is not found).
Algorithm:

int seqSearch(int[] E, int n, int K) {
    int ans = -1;                    // Assume failure.
    for (int index = 0; index < n; index++) {
        if (K == E[index]) {
            ans = index;             // Success!
            break;                   // Done!
        }
    }
    return ans;
}
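For example (a hypothetical call, not from the slides): seqSearch(new int[]{7, 3, 9, 3}, 4, 9) returns 2, while searching the same array for 5 returns -1.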

Page 13: Anlysis and design of algorithms part 1

Analysis of the Algorithm
Basic operation: comparison of K with an array entry.
Worst-case analysis: Let W(n) be a function. W(n) is the maximum number of basic operations performed by the algorithm on any input of size n. For our example, clearly W(n) = n.
The worst cases occur when K appears only in the last position in the array and when K is not in the array at all.

Page 14: Anlysis and design of algorithms part 1

Average-case Analysis
Success case: let Ii represent the event that K appears in position i of the array (i = 0, ..., n-1), and let t(Ii) be the number of comparisons done for input Ii, so t(Ii) = i + 1. Assuming each position is equally likely,
Asucc(n) = Σ from i=0 to n-1 of Pr(Ii | succ) · t(Ii) = Σ from i=0 to n-1 of (1/n)(i + 1) = (1/n)(n(n+1)/2) = (n+1)/2
Fail case: all n entries are compared, so
Afail(n) = n
Combine both cases: let q be the probability that K is in the array. Then
A(n) = Pr(succ) · Asucc(n) + Pr(fail) · Afail(n)
     = q((n+1)/2) + (1 - q)n
     = n(1 - (q/2)) + (q/2)

Page 15: Anlysis and design of algorithms part 1

Classifying Functions by Their Asymptotic Growth Rate
The running time of an algorithm as the input size grows is called the asymptotic running time.
The notations (Θ, O, Ω, o, ω) describe different rate-of-growth relations between the defining function and the defined set of functions:
O(g(n)), Big-Oh of g of n, the asymptotic upper bound;
Θ(g(n)), Theta of g of n, the asymptotic tight bound; and
Ω(g(n)), Omega of g of n, the asymptotic lower bound.

Page 16: Anlysis and design of algorithms part 1

Big-O
We use O-notation to give an upper bound on a function to within a constant factor.
Used to bound the worst-case running time of the algorithm on every input.
Let f(n) and g(n) be two functions. We write f(n) = O(g(n)) or f = O(g) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
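For example, 3n² + 10n = O(n²): taking c = 4 and n0 = 10, we have 0 ≤ 3n² + 10n ≤ 4n² for all n ≥ 10.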

Page 17: Anlysis and design of algorithms part 1

Big Omega (Ω) Notation: A Lower Bound
Used to bound the best-case running time of an algorithm.
f(n) = Ω(g(n)) if there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0.
(Figure: f(n) stays above c·g(n) for all n ≥ n0.)

Page 18: Anlysis and design of algorithms part 1

Θ-notation provides a tight bound.
f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
f(n) = Θ(g(n)) iff f(n) = O(g(n)) AND f(n) = Ω(g(n)).
(Figure: f(n) lies between c1·g(n) and c2·g(n) for all n ≥ n0.)

Page 19: Anlysis and design of algorithms part 1

Relations Between Θ, Ω, O
Theorem: For any two functions g(n) and f(n),
f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.

Page 20: Anlysis and design of algorithms part 1

Ω(g(n)): functions that grow at least as fast as g(n)   (≥)
Θ(g(n)): functions that grow at the same rate as g(n)   (=)
O(g(n)): functions that grow no faster than g(n)        (≤)

Page 21: Anlysis and design of algorithms part 1

o-notation
For a given function g(n), the set little-o:
f(n) = o(g(n)) if for any positive constant c > 0 there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0.
f(n) becomes insignificant relative to g(n) as n approaches infinity:
lim as n → ∞ of [f(n) / g(n)] = 0
g(n) is an upper bound for f(n) that is not asymptotically tight.

Page 22: Anlysis and design of algorithms part 1

ω-notation
For a given function g(n), the set little-omega:
f(n) = ω(g(n)) if for any positive constant c > 0 there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0.
f(n) becomes arbitrarily large relative to g(n) as n approaches infinity:
lim as n → ∞ of [f(n) / g(n)] = ∞
g(n) is a lower bound for f(n) that is not asymptotically tight.

Page 23: Anlysis and design of algorithms part 1

Theoretical Analysis of Time Efficiency
Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size.
Basic operation: the operation that contributes the most towards the running time of the algorithm.
T(n) ≈ cop · C(n)
where T(n) is the running time, n is the input size, cop is the execution time (cost) of the basic operation, and C(n) is the number of times the basic operation is executed.
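For example, for the sequential search above the basic operation is the comparison of K with an array entry; in the worst case C(n) = n, so T(n) ≈ cop · n, and the running time grows linearly with the input size.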

Page 24: Anlysis and design of algorithms part 1

Recursive Procedure

A procedure that is defined in terms of itself.
In a computer language, a function that calls itself.
A recursive algorithm is a problem solution that has been expressed in terms of two or more easier-to-solve subproblems.

Page 25: Anlysis and design of algorithms part 1

Content of a Recursive Method
Base case(s):
Values of the input variables for which we perform no recursive calls are called base cases (there should be at least one base case).
Every possible chain of recursive calls must eventually reach a base case.
Recursive calls:
Calls to the current method.
Each recursive call should be defined so that it makes progress towards a base case.

Page 26: Anlysis and design of algorithms part 1

Recurrence Equation
Merge Sort:
T(n) = Θ(1) if n = 1
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n) if n > 1
Ignoring details, T(n) = 2T(n/2) + Θ(n).
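Solving this recurrence (by case 2 of the master theorem, covered below, or by the substitution method on the next slides) gives T(n) = Θ(n lg n) for merge sort.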

Recurrence equation solving techniques:
Substitution method
Master method
Recursion tree method

Page 27: Anlysis and design of algorithms part 1

The Substitution Method
Used to establish either an upper or a lower bound on a recurrence.
Two steps:
1. Guess the form of the solution.
2. Use mathematical induction to find the constants and show that the solution works.
Example: T(n) = 2T(n/2) + n
Guess (#1): T(n) = O(n).
Need T(n) ≤ cn for some constant c > 0.
Assume T(n/2) ≤ cn/2 (inductive hypothesis).
Thus T(n) ≤ 2(cn/2) + n = (c+1)n.
Our guess was wrong!!

Page 28: Anlysis and design of algorithms part 1

T(n) = 2T(n/2) + n
Guess (#2): T(n) = O(n²).
Need T(n) ≤ cn² for some constant c > 0.
Assume T(n/2) ≤ cn²/4 (inductive hypothesis).
Thus T(n) ≤ 2cn²/4 + n = cn²/2 + n.
Works for all n ≥ 1 as long as c ≥ 2!!
But there is a lot of "slack".

Page 29: Anlysis and design of algorithms part 1

Solve T(n) = 2T(n/2) + n.
Guess the solution: T(n) = O(n lg n), i.e., T(n) ≤ cn lg n for some c.
Prove the solution by induction: suppose this bound holds for n/2, i.e.,
T(n/2) ≤ c(n/2) lg(n/2). Then
T(n) ≤ 2(c(n/2) lg(n/2)) + n
     = cn lg(n/2) + n
     = cn lg n - cn lg 2 + n
     = cn lg n - cn + n
     ≤ cn lg n   (as long as c ≥ 1)
Works for all n as long as c ≥ 1!! This is the correct guess.

Page 30: Anlysis and design of algorithms part 1

1. Making a good guess
A good guess is vital when applying this method. If the initial guess is wrong, it needs to be adjusted later. (Experience)
2. Subtleties
Sometimes the guess is correct, but the induction proof does not work. The problem is usually that the inductive assumption is not strong enough.
Solution: revise the guess by subtracting a lower-order term.
Example: T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1. Guess T(n) = O(n), i.e., T(n) ≤ cn for some c.
However, T(n) ≤ c⌊n/2⌋ + c⌈n/2⌉ + 1 = cn + 1, which does not imply T(n) ≤ cn for any c.
Attempting T(n) = O(n²) will work, but is overkill. The new guess T(n) ≤ cn - b will work as long as b ≥ 1.

Page 31: Anlysis and design of algorithms part 1

3. Avoiding Pitfalls
It is easy to guess T(n) = O(n) (i.e., T(n) ≤ cn) for T(n) = 2T(n/2) + n, and wrongly "prove":
T(n) ≤ 2(c(n/2)) + n = cn + n = O(n).   Wrong!!
The problem is that this does not prove the exact form of the inductive hypothesis, T(n) ≤ cn.

Page 32: Anlysis and design of algorithms part 1

4. Changing Variables
Suppose T(n) = 2T(√n) + lg n.
Rename m = lg n, so T(2^m) = 2T(2^(m/2)) + m.
Domain transformation: S(m) = T(2^m), so S(m) = 2S(m/2) + m,
which is similar to T(n) = 2T(n/2) + n.
So the solution is S(m) = O(m lg m).
Changing back from S(m) to T(n), the solution is
T(n) = T(2^m) = S(m) = O(m lg m) = O(lg n lg lg n).

Page 33: Anlysis and design of algorithms part 1

The Recursion-tree Method
Idea:
Each node represents the cost of a single subproblem.
Sum the costs within each level to get the level cost.
Sum all the level costs to get the total cost.

Page 34: Anlysis and design of algorithms part 1

Example of a Recursion Tree
Solve T(n) = T(n/4) + T(n/2) + n²:
Expanding the recurrence level by level: the root costs n²; its children cost (n/4)² and (n/2)²; their children cost (n/16)², (n/8)², (n/8)² and (n/4)²; and so on, down to Θ(1) leaves.
Level sums:
level 0: n²
level 1: (n/4)² + (n/2)² = (5/16)n²
level 2: (n/16)² + 2(n/8)² + (n/4)² = (25/256)n² = (5/16)²n²
...
Total = n²(1 + 5/16 + (5/16)² + (5/16)³ + ...)
      = n² · 1/(1 - 5/16)
      = (16/11)n²
      = Θ(n²)   (geometric series)

Page 43: Anlysis and design of algorithms part 1

Recursion Tree for T(n) = 3T(n/4) + Θ(n²)
(Figure: the tree is expanded in stages (a)-(d).) The root costs cn² and has three children, each costing c(n/4)²; each of those has three children costing c(n/16)²; and so on, for log_4 n + 1 levels, ending in 3^(log_4 n) = n^(log_4 3) leaves of cost T(1).
Level sums: cn², (3/16)cn², (3/16)²cn², ..., plus Θ(n^(log_4 3)) at the leaf level.
Total = O(n²)

Page 44: Anlysis and design of algorithms part 1

The tree has log_4 n + 1 levels (0, 1, ..., log_4 n), i.e., the subproblem size for a node at depth i is n/4^i. The subproblem hits n = 1 when (n/4^i) = 1, i.e., when i = log_4 n.
Each level has 3 times more nodes than the level above, so the number of nodes at depth i is 3^i.
Each node at depth i has a cost of c(n/4^i)², so the total cost over all nodes at depth i is 3^i · c(n/4^i)² = (3/16)^i cn².
The last level, depth log_4 n, has 3^(log_4 n) = n^(log_4 3) nodes, each with cost T(1), for a total cost of Θ(n^(log_4 3)).
Total: T(n) = Σ from i=0 to log_4 n - 1 of (3/16)^i cn² + Θ(n^(log_4 3)) = O(n²).

Page 45: Anlysis and design of algorithms part 1

Master Method/Theorem

The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), where
a ≥ 1 (the number of subproblems),
b > 1 (n/b is the size of each subproblem),
f(n) is a given function and is asymptotically positive.
(In T(n) = aT(n/b) + f(n), n/b may be ⌊n/b⌋ or ⌈n/b⌉.)
1. If f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and if af(n/b) ≤ cf(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
In each of the three cases, we are comparing the function f(n) with the function n^(log_b a).

Page 46: Anlysis and design of algorithms part 1

Application of Master Theorem
T(n) = 9T(n/3) + n:
a = 9, b = 3, f(n) = n
n^(log_b a) = n^(log_3 9) = Θ(n²)
f(n) = O(n^(log_3 9 - ε)) for ε = 1
By case 1, T(n) = Θ(n²).

T(n) = T(2n/3) + 1:
a = 1, b = 3/2, f(n) = 1
n^(log_b a) = n^(log_{3/2} 1) = n^0 = Θ(1)
f(n) = Θ(n^(log_b a))
By case 2, T(n) = Θ(lg n).

Page 47: Anlysis and design of algorithms part 1

Application of Master Theorem
T(n) = 3T(n/4) + n lg n:
a = 3, b = 4, f(n) = n lg n
n^(log_b a) = n^(log_4 3) = Θ(n^0.793)
f(n) = Ω(n^(log_4 3 + ε)) for ε ≈ 0.2
Moreover, for large n, the regularity condition holds with c = 3/4:
af(n/b) = 3(n/4) lg(n/4) ≤ (3/4)n lg n = cf(n)
By case 3, T(n) = Θ(f(n)) = Θ(n lg n).

Page 48: Anlysis and design of algorithms part 1

Algorithm Design

The strategies which may be used in the design of algorithms include:
Divide-and-conquer algorithms
Dynamic programming
Greedy algorithms
Backtracking algorithms

Page 49: Anlysis and design of algorithms part 1

Divide and Conquer
The most well-known algorithm design strategy:
1. Divide an instance of the problem into two or more smaller instances.
2. Solve the smaller instances recursively.
3. Obtain a solution to the original (larger) instance by combining these solutions.

Page 50: Anlysis and design of algorithms part 1
Page 51: Anlysis and design of algorithms part 1

Time Complexity of the General Algorithm
Time complexity:
T(n) = 2T(n/2) + S(n) + M(n),  n > 1
T(n) = b,                      n ≤ 1
where S(n) is the time for splitting, M(n) is the time for merging, and b is a constant.
Examples: binary search, quicksort, merge sort.

Page 52: Anlysis and design of algorithms part 1

Merge Sort: Divide and Conquer
Divide: divide the n-element sequence into two subproblems of n/2 elements each.
Conquer: sort the two subsequences recursively using merge sort. If the length of a sequence is 1, do nothing, since it is already in order.
Combine: merge the two sorted subsequences to produce the sorted answer.

Page 53: Anlysis and design of algorithms part 1

Dynamic Programming
One disadvantage of using divide-and-conquer is that the process of recursively solving separate sub-instances can result in the same computations being performed repeatedly, since identical sub-instances may arise.
The idea behind dynamic programming is to avoid this pathology.
The method usually accomplishes this by maintaining a table of sub-instance results.

Page 54: Anlysis and design of algorithms part 1

Dynamic programming is an algorithm design technique for optimization problems. In such problems there can be many solutions; each solution has a value, and we wish to find a solution with the optimal value.
Like divide and conquer, DP solves problems by combining solutions to subproblems.
DP reduces computation by:
Solving subproblems in a bottom-up fashion.
Storing the solution to a subproblem the first time it is solved.
Looking up the solution when the subproblem is encountered again.
Example: Fibonacci numbers computed by iteration (a sketch follows).
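A minimal Java sketch of the Fibonacci example, computing F(n) bottom-up so that each subproblem is solved exactly once:

static long fib(int n) {
    if (n < 2) return n;             // base cases: F(0) = 0, F(1) = 1
    long[] F = new long[n + 1];      // table of subproblem results
    F[0] = 0; F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];  // each value computed once from stored results
    return F[n];
}

Naive recursion recomputes the same F(i) exponentially many times; the table makes the computation O(n).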

Page 55: Anlysis and design of algorithms part 1

Basic Outline of Dynamic Programming
To solve a problem, we need a collection of sub-problems that satisfy a few properties:
1. There are a polynomial number of sub-problems.
2. The solution to the problem can be computed easily from the solutions to the sub-problems.
3. There is a natural ordering of the sub-problems from "smallest" to "largest".
4. There is an easy-to-compute recurrence that allows us to compute the solution to a sub-problem from the solutions to some smaller sub-problems.

Page 56: Anlysis and design of algorithms part 1

Elements of Dynamic Programming (DP)

DP is used to solve problems with the following characteristics:

Simple subproblems: we should be able to break the original problem into smaller subproblems that have the same structure.
Optimal substructure: the optimal solution to the problem contains within it optimal solutions to its subproblems.
Overlapping sub-problems: there exist some places where we solve the same subproblem more than once.

Page 57: Anlysis and design of algorithms part 1

Steps in Dynamic Programming

1. Characterize the structure of an optimal solution.
2. Define the value of an optimal solution recursively.
3. Compute the value of an optimal solution.
4. Construct an optimal solution from computed values.

Page 58: Anlysis and design of algorithms part 1

Knapsack Problem by DP
Given some items, pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that we can carry is no more than some fixed number W.
So we must consider the weights of items as well as their value.
Item #   Weight   Value
1        1        8
2        3        6
3        5        5

Page 59: Anlysis and design of algorithms part 1

Given a knapsack with maximum capacity W, and a set S consisting of n items.
Each item i has some weight wi and benefit value bi (all wi, bi and W are integer values).
Problem: How to pack the knapsack to achieve the maximum total value of packed items? (A DP sketch follows.)

Page 60: Anlysis and design of algorithms part 1

Optimal Binary Search Trees

Problem: Given a sequence K = k1 < k2 < ... < kn of n sorted keys, with a search probability pi for each key ki, we want to build a binary search tree (BST) with minimum expected search cost.
Actual cost = number of items examined. For key ki, cost = depthT(ki) + 1, where depthT(ki) = depth of ki in BST T.
Expected search cost:
E[search cost in T] = Σ from i=1 to n of (depthT(ki) + 1) · pi
                    = 1 + Σ from i=1 to n of depthT(ki) · pi

Page 61: Anlysis and design of algorithms part 1

Example: Consider 5 keys with search probabilities
p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.
Tree: k2 at the root, with children k1 and k4; k4 has children k3 and k5.
i   depthT(ki)   depthT(ki)·pi
1   1            0.25
2   0            0
3   2            0.1
4   1            0.2
5   2            0.6
                 1.15
Therefore, E[search cost] = 1 + 1.15 = 2.15.

Page 62: Anlysis and design of algorithms part 1

Example: p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.
Tree: k2 at the root, with children k1 and k5; k4 is the left child of k5, and k3 is the left child of k4.
i   depthT(ki)   depthT(ki)·pi
1   1            0.25
2   0            0
3   3            0.15
4   2            0.4
5   1            0.3
                 1.10
Therefore, E[search cost] = 1 + 1.10 = 2.10.
This tree turns out to be optimal for this set of keys.

Page 63: Anlysis and design of algorithms part 1

Step 1: Optimal Substructure
Any subtree of a BST contains keys in a contiguous range ki, ..., kj for some 1 ≤ i ≤ j ≤ n.
If T is an optimal BST and T contains a subtree T' with keys ki, ..., kj, then T' must be an optimal BST for the keys ki, ..., kj.

Page 64: Anlysis and design of algorithms part 1

Optimal Substructure
One of the keys in ki, ..., kj, say kr, where i ≤ r ≤ j, must be the root of an optimal subtree for these keys.
The left subtree of kr contains ki, ..., kr-1.
The right subtree of kr contains kr+1, ..., kj.
To find an optimal BST:
Examine all candidate roots kr, for i ≤ r ≤ j.
Determine all optimal BSTs containing ki, ..., kr-1 and containing kr+1, ..., kj.

Page 65: Anlysis and design of algorithms part 1

Step 2: Recursive Solution
Find an optimal BST for ki, ..., kj, where i ≥ 1, j ≤ n, j ≥ i - 1. When j = i - 1, the tree is empty.
Define e[i, j] = expected search cost of an optimal BST for ki, ..., kj.
If j = i - 1, then e[i, j] = 0.
If j ≥ i: select a root kr, for some i ≤ r ≤ j, and recursively make optimal BSTs for ki, ..., kr-1 as the left subtree and for kr+1, ..., kj as the right subtree.

Page 66: Anlysis and design of algorithms part 1

Recursive Solution
When the optimal subtree becomes a subtree of a node:
The depth of every node in the optimal subtree goes up by 1.
The expected search cost increases by w(i, j), the sum of all the probabilities in the subtree.
If kr is the root of an optimal BST for ki, ..., kj:
e[i, j] = e[i, r-1] + e[r+1, j] + w(i, j).
But we don't know kr. Hence, for j ≥ i,
e[i, j] = min over i ≤ r ≤ j of { e[i, r-1] + e[r+1, j] + w(i, j) }.
Page 67: Anlysis and design of algorithms part 1

Step 3: Computing the Expected Search Cost
For each subproblem (i, j), store:
the expected search cost in a table; use only entries e[i, j] where j ≥ i - 1;
root[i, j] = the root of the subtree with keys ki, ..., kj, for 1 ≤ i ≤ j ≤ n;
w[1..n+1, 0..n] = sums of probabilities:
w[i, i-1] = 0 for 1 ≤ i ≤ n,
w[i, j] = w[i, j-1] + pj for 1 ≤ i ≤ j ≤ n.
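A minimal Java sketch of this table computation (no dummy keys; p[1..n] holds the key probabilities, and the array names follow the slides):

static double optimalBST(double[] p, int n) {
    double[][] e = new double[n + 2][n + 1];   // e[i][j]; empty ranges stay 0.0
    double[][] w = new double[n + 2][n + 1];   // w[i][j] = sum of p[i..j]
    int[][] root = new int[n + 1][n + 1];
    for (int len = 1; len <= n; len++) {       // subproblem size
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            w[i][j] = w[i][j - 1] + p[j];      // w[i, j] = w[i, j-1] + pj
            e[i][j] = Double.MAX_VALUE;
            for (int r = i; r <= j; r++) {     // try each candidate root kr
                double cost = e[i][r - 1] + e[r + 1][j] + w[i][j];
                if (cost < e[i][j]) { e[i][j] = cost; root[i][j] = r; }
            }
        }
    }
    return e[1][n];                            // expected search cost of the optimal BST
}

For the five keys above (p = {0, 0.25, 0.2, 0.05, 0.2, 0.3}, n = 5), this should return 2.10, matching the optimal tree on the earlier slide.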

Page 68: Anlysis and design of algorithms part 1

Greedy Algorithms

Like dynamic programming, greedy algorithms are used to solve optimization problems.
Greedy algorithms do not always yield optimal solutions, but for many problems they do.
A greedy algorithm always makes the choice that looks best at the moment.
Such problems exhibit optimal substructure and the greedy-choice property.
A greedy algorithm works in phases. At each phase:
You take the best you can get right now, without regard for future consequences.
You hope that by choosing a local optimum at each step, you will end up at a global optimum (see the sketch below).
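A minimal Java illustration: greedy coin change, which takes as many of the largest coin as fit at each phase. With these US-style denominations the local choices happen to give a global optimum; for arbitrary denominations they need not.

static int greedyChange(int amount) {
    int[] coins = {25, 10, 5, 1};    // denominations, sorted in decreasing order
    int count = 0;
    for (int c : coins) {
        count += amount / c;         // best choice right now: largest coin, as many as fit
        amount %= c;                 // what remains for the next phase
    }
    return count;                    // number of coins used
}

For example, greedyChange(63) returns 6 (25 + 25 + 10 + 1 + 1 + 1).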

Page 69: Anlysis and design of algorithms part 1

Greedy algorithms don't:
consider all possible paths
consider future choices
reconsider previous choices
always find an optimal solution
A greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing each given problem instance to a smaller one.

Page 70: Anlysis and design of algorithms part 1

Types of Solutions Produced by Greedy Algorithms
Optimal solutions: the best possible answer that any algorithm could find to the problem.
Good solutions: a solution that is near-optimal and could be good enough for some problems.
Bad solutions: a solution that is not acceptable.
Worst possible solution: the solution that is farthest from the goal.

Page 71: Anlysis and design of algorithms part 1

Elements of Greedy Algorithms
1. Determine the optimal substructure of the problem.
2. Greedy-choice property.
Page 72: Anlysis and design of algorithms part 1

Traveling Salesman
A salesman must visit every city (starting from city A), and wants to cover the least possible distance.
He can revisit a city (and reuse a road) if necessary.
He does this by using a greedy algorithm: he goes to the next nearest city from wherever he is.
(Figure: a weighted graph on the cities A, B, C, D, E; the road lengths consistent with the costs below are AB = 2, AD = 3, BD = 3, BC = 4, CE = 4.)
From A he goes to B; from B he goes to D.
This is not going to result in a shortest path!
The best result he can get now will be A B D B C E, at a cost of 16.
An actual least-cost path from A is A D B C E, at a cost of 14.

Page 73: Anlysis and design of algorithms part 1

Greedy vs. Dynamic Programming
Dynamic programming:
Make a choice at each step.
The choice depends on knowing optimal solutions to subproblems, so solve the subproblems first.
Solve bottom-up.
Greedy:
Make a choice at each step.
Make the choice before solving the subproblems.
Solve top-down.

Page 74: Anlysis and design of algorithms part 1

Backtracking
Backtracking is a systematic way to go through all the possible configurations of a search space.
It is a methodical way of trying out various sequences of decisions, until you find one that "works".
Recursion can be used for an elegant and easy implementation of backtracking.
Backtracking ensures correctness by enumerating all possibilities.
We can represent the solution space for the problem using a state space tree:
The root of the tree represents 0 choices,
Nodes at depth 1 represent the first choice,
Nodes at depth 2 represent the second choice, etc.

Page 75: Anlysis and design of algorithms part 1

Backtracking Algorithm
Backtracking is a modified depth-first search of a tree.
Approach:
1. Test whether a solution has been found.
2. If a solution is found, return it.
3. Else, for each choice that can be made:
   a) Make that choice.
   b) Recur.
   c) If the recursion returns a solution, return it.
4. If no choices remain, return failure.
The tree is sometimes called a "search tree".

Page 76: Anlysis and design of algorithms part 1

Coloring a Map
You wish to color a map with not more than four colors: red, yellow, green, blue.
Adjacent countries must be in different colors.
You don't have enough information to choose colors.
Each choice leads to another set of choices.
One or more sequences of choices may (or may not) lead to a solution.
Many coloring problems can be solved with backtracking (a sketch follows).
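A minimal Java sketch of the backtracking approach for map coloring; the adjacency-matrix representation and method names are illustrative, not from the slides:

// colors[i] in {0..3} once assigned; -1 means uncolored.
static boolean colorMap(boolean[][] adj, int[] colors, int country) {
    if (country == colors.length) return true;      // solution found: every country colored
    for (int c = 0; c < 4; c++) {                   // each of the four colors is a choice
        if (okColor(adj, colors, country, c)) {
            colors[country] = c;                    // make that choice
            if (colorMap(adj, colors, country + 1)) // recur on the next country
                return true;                        // recursion returned a solution
            colors[country] = -1;                   // undo the choice and backtrack
        }
    }
    return false;                                   // no choices remain: failure
}

static boolean okColor(boolean[][] adj, int[] colors, int country, int c) {
    for (int other = 0; other < colors.length; other++)
        if (adj[country][other] && colors[other] == c) // adjacent countries must differ
            return false;
    return true;
}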

Page 77: Anlysis and design of algorithms part 1

Example Problems

Page 78: Anlysis and design of algorithms part 1
Page 79: Anlysis and design of algorithms part 1
Page 80: Anlysis and design of algorithms part 1
Page 81: Anlysis and design of algorithms part 1

Solve: T(n) = 2T(√n) + 1
Solution by changing variables:
Let n = 2^m, i.e., m = lg n. Then T(2^m) = 2T(2^(m/2)) + 1.
Assume S(m) = T(2^m). Then S(m) = 2S(m/2) + 1.
Guess S(m) = O(m), i.e., S(m) ≤ cm - b for some c > 0 and b ≥ 1 (subtracting a lower-order term, as in the subtleties above).
Then S(m/2) ≤ c(m/2) - b, so
S(m) ≤ 2(c(m/2) - b) + 1 = cm - 2b + 1 ≤ cm - b   (as long as b ≥ 1).
So S(m) = O(m), and
T(n) = T(2^m) = S(m) = O(m) = O(lg n).