
Page 1: UNIT-II

ANALYSIS AND DESIGN OF ALGORITHMS

UNIT-II, CHAPTER 4: DIVIDE-AND-CONQUER

Page 2: UNIT-II

OUTLINE: Divide-and-Conquer Technique

• Merge sort
• Quick sort
• Binary Search
• Multiplication of Large Integers and Strassen's Matrix Multiplication

Page 3: UNIT-II

Divide-and-Conquer: General Plan

Divide-and-conquer algorithms work according to the following general plan:

1. A problem’s instance is divided into several smaller instances of the same problem, ideally of about the same size.

2. The smaller instances are solved (typically recursively).

3. The solutions obtained for the smaller instances are combined to get a solution to the original problem.


Page 4: UNIT-II

Figure: Divide-and-Conquer Technique.

Page 5: UNIT-II

Not every divide-and-conquer algorithm is necessarily more efficient than even a brute-force solution.

The divide-and-conquer approach yields some of the most important and efficient algorithms in computer science.

The divide-and-conquer technique is ideally suited to parallel computation, in which each subproblem is solved simultaneously by its own processor and the solutions are then merged into a solution to the original problem. The execution speed of a program based on this technique can therefore improve significantly.

Page 6: UNIT-II

In the most typical case of divide-and-conquer, a problem's instance of size n is divided into two instances of size n/2.

More generally, an instance of size n can be divided into b instances of size n/b, with a of them needing to be solved (here, a and b are constants; a ≥ 1 and b > 1).

Assuming that size n is a power of b, we get the following recurrence for the running time T(n):

    T(n) = aT(n/b) + f(n)    . . . (1)    (General Divide-and-Conquer Recurrence)

where f(n) is a function that accounts for the time spent on dividing the problem into smaller ones and on combining their solutions.

Page 7: UNIT-II

MASTER THEOREM:

If f(n) ∈ Θ(n^d) with d ≥ 0 in recurrence equation (1), then

    T(n) ∈  Θ(n^d)           if a < b^d
            Θ(n^d log n)     if a = b^d
            Θ(n^(log_b a))   if a > b^d

Page 8: UNIT-II

Divide-and-Conquer Example 1: Consider the problem of computing the sum of n numbers stored in an array.

ALGORITHM Sum(A[0…n-1], low, high)
//Determines the sum of n numbers stored in an array A
//using the divide-and-conquer technique recursively.
//Input: An array A[0…n-1]; low and high initialized to 0 and n-1, respectively.
//Output: Sum of all the elements in the array.
if low = high
    return A[low]
mid ← (low + high)/2
return Sum(A, low, mid) + Sum(A, mid+1, high)
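The pseudocode above translates directly into C. The following sketch (the function name sum_dc is mine, not from the slides) follows the same structure as the slide's other programs:

```c
/* Divide-and-conquer sum of A[low..high], mirroring ALGORITHM Sum above. */
int sum_dc(int A[], int low, int high) {
    if (low == high)                 /* one element: it is its own sum */
        return A[low];
    int mid = (low + high) / 2;      /* split the instance into two halves */
    return sum_dc(A, low, mid) + sum_dc(A, mid + 1, high);
}
```

For the array {4, 3, 2, 1} used on the next slide, sum_dc(A, 0, 3) returns 10.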

Page 9: UNIT-II

Example: A = {4, 3, 2, 1}

Recursion tree (each call Sum(A, low, high) shown with the value it returns):

                     Sum(A, 0, 3) = 10
                    /                 \
          Sum(A, 0, 1) = 7       Sum(A, 2, 3) = 3
           /          \            /          \
    Sum(A,0,0)=4  Sum(A,1,1)=3  Sum(A,2,2)=2  Sum(A,3,3)=1

Page 10: UNIT-II

Analysis of the algorithm to find the sum of all the elements of the array using the divide-and-conquer technique

Since the problem instance is divided into two parts, the recurrence relation for this algorithm is:

    A(n) = 0                        if n = 1
    A(n) = A(n/2) + A(n/2) + 1      otherwise

where the two A(n/2) terms count the additions on the left and right halves of the array, and the final 1 counts the addition that combines the two partial sums.

Page 11: UNIT-II

Solve the recurrence by backward substitution:

    A(1) = 0    (initial condition)
    A(n) = 2A(n/2) + 1                                   . . . (1)
         = 2[2A(n/4) + 1] + 1                            (replacing n by n/2 in (1))
         = 2^2 A(n/2^2) + 2 + 1
         = 2^2 [2A(n/2^3) + 1] + 2 + 1                   (replacing n by n/2^2 in (1))
         = 2^3 A(n/2^3) + 2^2 + 2 + 1
         . . .
         = 2^i A(n/2^i) + 2^(i-1) + 2^(i-2) + . . . + 2 + 1
         = 2^i A(n/2^i) + 2^i - 1                        (geometric series: 1·(2^i - 1)/(2 - 1))

Substituting 2^i = n in the last step yields

    A(n) = n·A(1) + n - 1 = n·0 + n - 1, so A(n) ∈ Θ(n).

Page 12: UNIT-II

The recurrence for the number of additions A(n) made by the divide-and-conquer summation algorithm on inputs of size n = 2^k is

    A(n) = 2A(n/2) + 1.

Applying the Master Theorem to this equation with a = 2, b = 2, and d = 0: since a > b^d,

    A(n) ∈ Θ(n^(log_b a)) = Θ(n^(log_2 2)) = Θ(n).

Page 13: UNIT-II

Divide-and-Conquer Example 2: Consider the problem of finding the largest element in an array.

ALGORITHM Large(A[0…n-1], low, high)
//Determines the largest element in an array A using the
//divide-and-conquer technique recursively.
//Input: An array A[0…n-1]; low and high initialized to 0 and n-1, respectively.
//Output: Largest element in the array.
if low = high
    return A[low]
mid ← (low + high)/2
n1 ← Large(A, low, mid)
n2 ← Large(A, mid+1, high)
if n1 > n2
    return n1
else
    return n2
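As with the summation example, this pseudocode maps directly onto C. A sketch (the name large_dc is mine, not from the slides):

```c
/* Divide-and-conquer maximum of A[low..high], mirroring ALGORITHM Large above. */
int large_dc(int A[], int low, int high) {
    if (low == high)                       /* one element is trivially the largest */
        return A[low];
    int mid = (low + high) / 2;
    int n1 = large_dc(A, low, mid);        /* largest in the left half  */
    int n2 = large_dc(A, mid + 1, high);   /* largest in the right half */
    return n1 > n2 ? n1 : n2;              /* combine: take the bigger  */
}
```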

Page 14: UNIT-II

Mergesort

The strategy behind merge sort is to change the problem of sorting into the problem of merging two sorted sub-lists into one. If the two halves of the array were sorted, then merging them carefully completes the sort of the entire list.

    4, 3, 2, 1
    4, 3   |   2, 1
    4 | 3  |  2 | 1
    3, 4   |   1, 2
    1, 2, 3, 4

Page 15: UNIT-II

Merge sort is a "recursive" algorithm because it accomplishes its task by calling itself on a smaller version of the problem (half of the list).

For example, if the array had 2 entries, merge sort would begin by calling itself on item 1. Since there is only one element, that sub-list is sorted, and it can go on to call itself on item 2. Since that also has only one item, it is sorted, and merge sort can now merge those two sub-lists into one sorted list of size two.

Page 16: UNIT-II

The real problem is how to merge the two sub-lists. While it can be done in the original array, the algorithm is much simpler if it uses a separate array to hold the portion that has been merged and then copies the merged data back into the original array.

The basic philosophy of the merge is to determine which sub-list starts with the smaller data, copy that item into the merged list, and move on to the next item in that sub-list.

Page 17: UNIT-II

Mergesort

ALGORITHM Mergesort(A[0…n-1])
//Sorts array A[0…n-1] by recursive mergesort
//Input: An array A[0…n-1] of orderable elements
//Output: Array A[0…n-1] sorted in nondecreasing order
if n > 1
    copy A[0 … ⌊n/2⌋ - 1] to B[0 … ⌊n/2⌋ - 1]
    copy A[⌊n/2⌋ … n - 1] to C[0 … ⌈n/2⌉ - 1]
    Mergesort(B[0 … ⌊n/2⌋ - 1])
    Mergesort(C[0 … ⌈n/2⌉ - 1])
    Merge(B, C, A)

Page 18: UNIT-II

ALGORITHM Merge(B[0 … p-1], C[0 … q-1], A[0 … p+q-1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0 … p-1] and C[0 … q-1], both sorted
//Output: Sorted array A[0 … p+q-1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j … q-1] to A[k … p+q-1]
else
    copy B[i … p-1] to A[k … p+q-1]

Page 19: UNIT-II

Mergesort program (C):

#include <stdio.h>
#include <conio.h>

void mergesort(int *, int, int);
void merge(int *, int, int, int);

void main()
{
    int i, a[10], n;
    clrscr();
    printf("enter n value\n");
    scanf("%d", &n);
    printf("enter values\n");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    mergesort(a, 0, n - 1);
    printf("\nSorted array\n");
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    getch();
}

void mergesort(int *a, int low, int high)
{
    int mid;
    if (low < high) {
        mid = (low + high) / 2;
        mergesort(a, low, mid);
        mergesort(a, mid + 1, high);
        merge(a, low, mid + 1, high);
    }
}

void merge(int *a, int low, int mid, int high)
{
    int i, j, k, m, temp[20];
    i = low, j = mid, k = -1;
    while (i <= mid - 1 && j <= high) {
        if (a[i] < a[j])
            temp[++k] = a[i++];
        else
            temp[++k] = a[j++];
    }
    /* copy remaining elements into temp array */
    for (m = i; m <= mid - 1; m++)
        temp[++k] = a[m];
    for (m = j; m <= high; m++)
        temp[++k] = a[m];
    /* copy elements of temp array back into array a */
    for (m = 0; m <= k; m++)
        a[low + m] = temp[m];
}

Page 23: UNIT-II

Execution Example (mergesort of 7 2 9 4 3 8 6 1; the original slides animate this split and merge step by step):

Partition (top-down, recursive calls down to the base case):
    7 2 9 4 3 8 6 1
    7 2 9 4   |   3 8 6 1
    7 2 | 9 4 | 3 8 | 6 1
    7 | 2 | 9 | 4 | 3 | 8 | 6 | 1      (base cases: single elements)

Merge (bottom-up, combining sorted sub-lists):
    2 7 | 4 9 | 3 8 | 1 6
    2 4 7 9   |   1 3 6 8
    1 2 3 4 6 7 8 9

Page 33: UNIT-II

Mergesort Example: A = {4, 3, 2, 1}

Recursion tree (calls Mergesort(A, low, high) splitting, then Merge(A, low, mid, high) combining):

                      A, 0, 3
                    /         \
              A, 0, 1         A, 2, 3
              /     \         /     \
         A, 0, 0  A, 1, 1  A, 2, 2  A, 3, 3

    Splits:  4, 3, 2, 1 → 4, 3 | 2, 1 → 4 | 3 | 2 | 1
    Merges:  3, 4 | 1, 2 → 1, 2, 3, 4
             (calls Merge(A,0,1,1), Merge(A,2,3,3), then Merge(A,0,2,3))

Page 34: UNIT-II

Analysis of Mergesort

Assuming that n is a power of 2, the recurrence relation for the number of key comparisons C(n) is

    C(n) = 2C(n/2) + Cmerge(n)  for n > 1,    C(1) = 0,

where Cmerge(n) is the number of key comparisons performed during the merging stage.

In the worst case, Cmerge(n) = n - 1, and we have the recurrence

    Cworst(n) = 2Cworst(n/2) + n - 1  for n > 1,    Cworst(1) = 0.

Hence, by the Master Theorem (a = 2, b = 2, d = 1; since a = b^d, Cworst(n) ∈ Θ(n^d log n)),

    Cworst(n) ∈ Θ(n log n).

Page 35: UNIT-II

Quick Sort

Quicksort divides its input according to element values to achieve its partition: a situation where all the elements before some position s are smaller than or equal to A[s] and all the elements after position s are greater than or equal to A[s]:

    A[0] . . . A[s-1]    A[s]    A[s+1] . . . A[n-1]
      all are ≤ A[s]              all are ≥ A[s]

After a partition has been achieved, A[s] will be in its final position in the sorted array, and we can continue sorting the two subarrays of the elements preceding and following A[s] independently by the same method.

Page 36: UNIT-II

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: A subarray A[l..r] of A[0..n-1], defined by its left
//       and right indices l and r
//Output: The subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r])    //s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])

Page 37: UNIT-II

The hard part of quicksort is the partitioning.

The algorithm looks at the first element of the array (called the "pivot"). It puts all of the elements which are less than the pivot in the lower portion of the array and the elements higher than the pivot in the upper portion. When that is complete, it puts the pivot between those sections, and quicksort can then sort the two sections separately.

Page 38: UNIT-II

Procedure to achieve partition: First, we select a pivot element p with respect to whose value we are going to divide the subarray (usually, the first element in the subarray is taken as the pivot).

The elements in the subarray are rearranged to achieve a partition by using an efficient method based on a left-to-right scan and a right-to-left scan, each comparing the subarray's elements with the pivot.

The left-to-right scan, denoted by index i, starts with the second element. This scan skips over elements that are smaller than the pivot and stops on encountering the first element greater than or equal to the pivot.

The right-to-left scan, denoted by index j, starts with the last element. This scan skips over elements that are larger than the pivot and stops on encountering the first element smaller than or equal to the pivot.

Page 39: UNIT-II

Procedure to achieve partition: After both scans stop, three situations may arise, depending on whether or not the scanning indices have crossed:

1. If the scanning indices i and j have not crossed (i < j), we simply exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j.

2. If the scanning indices have crossed over (i > j), we will have partitioned the array after exchanging the pivot with A[j].

Page 40: UNIT-II

3. Finally, if the scanning indices stop while pointing to the same element (i = j), the value they are pointing to must equal p. Thus we have the array partitioned, with the split position s = i = j.

We can combine this last case with the case of crossed-over indices (i > j) by exchanging the pivot with A[j] whenever i ≥ j.

Page 41: UNIT-II

Partition Procedure

ALGORITHM Partition(A[l..r])
//Partitions a subarray by using its first element as a pivot
//Input: Subarray A[l..r] of A[0..n-1], defined by its left and
//       right indices l and r (l < r)
//Output: A partition of A[l..r], with the split position returned as
//        this function's value
p ← A[l]
i ← l;  j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j - 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j])    //undo last swap when i ≥ j
swap(A[l], A[j])
return j

Page 42: UNIT-II

Partition trace (array 40 20 10 80 60 50 7 30 100, pivot A[0] = 40; the original slides animate each step):

The scan rules applied at each step:
    1. While A[i] <= A[pivot], ++i
    2. While A[j] > A[pivot], --j
    3. If i < j, swap A[i] and A[j]
    4. While j > i, go to step 1
    5. Swap A[j] and A[pivot]

    40 20 10 80 60 50  7 30 100     i stops at 80 (index 3), j stops at 30 (index 7)
    i < j → swap A[3] and A[7]:
    40 20 10 30 60 50  7 80 100     i stops at 60 (index 4), j stops at 7 (index 6)
    i < j → swap A[4] and A[6]:
    40 20 10 30  7 50 60 80 100     i stops at 50 (index 5), j stops at 7 (index 4)
    indices have crossed (i > j) → swap pivot A[0] with A[j]:
     7 20 10 30 40 50 60 80 100     split position s = 4

Partition Result:

     7 20 10 30  |  40  |  50 60 80 100
      <= A[pivot]          > A[pivot]

Recursion: quicksort is then called on the two subarrays A[0..3] and A[5..8].

Page 68: UNIT-II

Quick sort program (C):

#include <stdio.h>
#include <conio.h>

void Quicksort(int *, int, int);
int partition(int *, int, int);

void main()
{
    int a[100], n, i;
    clrscr();
    printf("\nEnter the size of the array\n");
    scanf("%d", &n);
    printf("\nEnter %d elements\n", n);
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    Quicksort(a, 0, n - 1);
    printf("\nSorted array:\n");
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    getch();
}

Page 69: UNIT-II

void Quicksort(int *a, int low, int high)
{
    int mid;
    if (low < high) {
        mid = partition(a, low, high);
        Quicksort(a, low, mid - 1);
        Quicksort(a, mid + 1, high);
    }
}

Page 70: UNIT-II

int partition(int *a, int low, int high)
{
    int i, j, temp, pivot;
    pivot = a[low];
    i = low;
    j = high + 1;
    while (i <= j) {
        do i++; while (pivot >= a[i]);
        do j--; while (pivot < a[j]);
        if (i < j)
            temp = a[i], a[i] = a[j], a[j] = temp;
    }
    temp = a[low];
    a[low] = a[j];
    a[j] = temp;
    return j;
}

Page 71: UNIT-II

Quicksort Example: Recursion Tree for the array 5 3 1 9 8 2 4 7 (a[0] … a[7]):

    low=0, high=7, mid=4
        low=0, high=3, mid=1
            low=0, high=0
            low=2, high=3, mid=2
                low=2, high=1
                low=3, high=3
        low=5, high=7, mid=6
            low=5, high=5
            low=7, high=7

Page 72: UNIT-II

Quick Sort: Best-Case Efficiency

The number of key comparisons made before a partition is achieved is n + 1 if the scanning indices cross over, and n if the scanning indices coincide.

In the best case all splits happen in the middle, so the number of key comparisons satisfies the recurrence:

    Cbest(n) = 2Cbest(n/2) + n  for n > 1,    Cbest(1) = 0.

According to the Master Theorem, Cbest(n) ∈ Θ(n log2 n).

Page 73: UNIT-II

Quick Sort: Worst Case

Assume the first element is chosen as the pivot, and assume the array is already in order:

    2 4 10 12 13 50 57 63 100      pivot = A[0] = 2
   [0][1][2] [3] [4] [5] [6] [7] [8]

The left-to-right scan (while A[i] <= A[pivot], ++i) stops immediately at A[1] = 4. The right-to-left scan (while A[j] > A[pivot], --j) skips over every larger element and goes all the way down to A[0] = 2. The indices have crossed (i = 1 > j = 0), so the pivot is exchanged with A[j] = A[0], i.e., with itself:

    2  |  4 10 12 13 50 57 63 100
           everything > A[pivot]

One subarray is empty, and the other still holds all n - 1 remaining (sorted) elements, so the same extremely skewed split repeats at every level of the recursion.

Page 81: UNIT-II

Quick Sort: Worst-Case Analysis

In the worst case, all the splits will be skewed to the extreme: one of the two subarrays will be empty, while the size of the other will be just one less than the size of the subarray being partitioned. This situation arises, in particular, for increasing (already sorted) arrays.

If A[0…n-1] is a strictly increasing array and we use A[0] as the pivot, the left-to-right scan will stop on A[1] while the right-to-left scan will go all the way down to A[0], indicating the split at position 0.

So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0] with itself, the algorithm finds itself with the strictly increasing array A[1…n-1] to sort. This continues until the last subarray A[n-2…n-1] has been processed. The total number of key comparisons made will be

    Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 - 3 ∈ Θ(n²).

Page 82: UNIT-II

Quick Sort: Average-Case Analysis

Let Cavg(n) be the average number of key comparisons made by quicksort on a randomly ordered array of size n; Cavg(0) = 0, Cavg(1) = 0.

After the split, quicksort calls itself to sort two subarrays: the average number of comparisons to sort A[0…s-1] is Cavg(s), and the average number to sort A[s+1…n-1] is Cavg(n-1-s).

Assuming that the partition split can happen in each position s (0 ≤ s ≤ n-1) with the same probability 1/n, we get the following recurrence relation:

    Cavg(n) = (1/n) Σ (s=0 to n-1) [ (n + 1) + Cavg(s) + Cavg(n-1-s) ]    for n > 1.

Solving this recurrence yields

    Cavg(n) ≈ 2n ln n ≈ 1.38 n log2 n.

Page 83: UNIT-II

8383

Thus, on average, quicksort makes only about 38% more comparisons than in the best case. Quicksort's innermost loop is so efficient that it runs faster than mergesort on randomly ordered arrays, justifying the name given to the algorithm by its inventor, the prominent British computer scientist C.A.R. Hoare.

Page 84: UNIT-II

Summary of Sorting Algorithms

    Algorithm        Time                  Notes
    Bubble sort      O(n²)                 slow (good for small inputs)
    Selection sort   O(n²)                 slow (good for small inputs)
    Merge sort       O(n log n)            fast (good for huge inputs)
    Quick sort       O(n log n) expected   fastest (good for large inputs)

Page 85: UNIT-II

Binary Search

Binary search is an incredibly powerful technique for searching an ordered list. The basic algorithm is to find the middle element of the list, compare it against the key, decide which half of the list must contain the key, and repeat with that half.

Page 86: UNIT-II

It works by comparing a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise, the same operation is repeated recursively for the first half of the array if K < A[m], and for the second half if K > A[m].

Page 87: UNIT-II

BINARY SEARCH

Example: maintain an array of items stored in sorted order, and use binary search to find the item with key K = 33.

    Index:  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14
    Value:  6  13  14  25  33  43  51  53  64  72  84  93  95  96  97

If key K is in the array, its index lies between low and high. At each step, compute the mid position and check whether the matching key is there; if not, halve the search interval:

    low = 0, high = n-1 = 15-1 = 14
    mid = (low + high)/2 = (0 + 14)/2 = 7    A[7] = 53;  33 < 53 → high = mid - 1 = 6
    mid = (0 + 6)/2 = 3                      A[3] = 25;  33 > 25 → low  = mid + 1 = 4
    mid = (4 + 6)/2 = 5                      A[5] = 43;  33 < 43 → high = mid - 1 = 4
    mid = (4 + 4)/2 = 4                      A[4] = 33;  matching key found → return index 4

Page 98: UNIT-II

Binary Search Algorithm

ALGORITHM BinarySearch(A[0…n-1], K)
//Implements nonrecursive binary search
//Input: An array A[0…n-1] sorted in ascending order
//       and a search key K
//Output: An index of the array's element that is equal
//        to K, or -1 if there is no such element
l ← 0;  r ← n - 1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m]
        return m
    else if K < A[m]
        r ← m - 1
    else
        l ← m + 1
return -1
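Unlike mergesort and quicksort, the slides give no C listing for binary search; a sketch in the same style (the name binary_search is mine):

```c
/* Nonrecursive binary search, mirroring ALGORITHM BinarySearch above.
   Returns the index of K in the sorted array A[0..n-1], or -1. */
int binary_search(const int A[], int n, int K) {
    int l = 0, r = n - 1;
    while (l <= r) {
        int m = (l + r) / 2;         /* middle element */
        if (K == A[m])
            return m;
        else if (K < A[m])
            r = m - 1;               /* key must be in the first half  */
        else
            l = m + 1;               /* key must be in the second half */
    }
    return -1;                       /* no such element */
}
```

On the 15-element example array above, searching for K = 33 returns index 4.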

Page 99: UNIT-II

Binary Search Algorithm Analysis

Worst-Case Analysis: The worst-case inputs include all arrays that do not contain a given search key. Since after one comparison the algorithm faces the same situation but for an array half the size, we get the following recurrence relation for Cworst(n):

    Cworst(n) = Cworst(⌊n/2⌋) + 1  for n > 1,    Cworst(1) = 1.

Assuming that n = 2^k, the solution to this recurrence is

    Cworst(2^k) = k + 1 = log2 n + 1.

Average-Case Analysis: The average number of key comparisons made by binary search is only slightly smaller than in the worst case: Cavg(n) ≈ log2 n.

Page 100: UNIT-II

Multiplication of Large Integers

Some applications, notably modern cryptology, require manipulation of very long integers.

By applying the divide-and-conquer technique to multiply two long numbers, the total number of multiplications performed can be reduced at the expense of a slight increase in the number of additions.

If we use the classic pen-and-pencil algorithm for multiplying two n-digit integers, each of the n digits of the first number is multiplied by each of the n digits of the second number, for a total of n² digit multiplications.

Page 101: UNIT-II

Consider the case of two-digit integers, say 23 and 14. These numbers can be represented as follows:

    23 = 2·10^1 + 3·10^0    and    14 = 1·10^1 + 4·10^0

Now let us multiply them:

    23 * 14 = (2·10^1 + 3·10^0) * (1·10^1 + 4·10^0)
            = (2*1)·10^2 + (2*4 + 3*1)·10^1 + (3*4)·10^0

The last formula yields the correct answer of 322, but it uses the same four digit multiplications as the pen-and-pencil algorithm.

However, we can compute the middle term with just one additional digit multiplication by taking advantage of the products 2*1 and 3*4, which need to be computed anyway:

    2*4 + 3*1 = (2 + 3) * (1 + 4) - 2*1 - 3*4

Page 102: UNIT-II

102102

Multiplication of large integers

For any pair of two-digit integers a = a1a0 and b = b1b0, their product c can be computed by the formula:

c = a * b = c2·10^2 + c1·10^1 + c0, where

c2 = a1 * b1 is the product of their first digits,
c0 = a0 * b0 is the product of their second digits,
c1 = (a1 + a0) * (b1 + b0) - (c2 + c0) is the product of the sum of the a's digits and the sum of the b's digits, minus the sum of c2 and c0.

Now we apply this trick to multiply two n-digit integers a and b, where n is a positive even number. Let us divide both numbers in the middle. We denote the first half of a's digits by a1 and the second half by a0; for b, the notations are b1 and b0, respectively. In these notations, a = a1a0 implies that a = a1·10^(n/2) + a0, and b = b1b0 implies that b = b1·10^(n/2) + b0.


Therefore, taking advantage of the same trick we used for two-digit numbers, we get

c = a * b = (a1·10^(n/2) + a0) * (b1·10^(n/2) + b0)
          = (a1 * b1)·10^n + (a1 * b0 + a0 * b1)·10^(n/2) + (a0 * b0)
          = c2·10^n + c1·10^(n/2) + c0, where

c2 = a1 * b1 is the product of their first halves,
c0 = a0 * b0 is the product of their second halves,
c1 = (a1 + a0) * (b1 + b0) - (c2 + c0) is the product of the sum of the a's halves and the sum of the b's halves, minus the sum of c2 and c0.

If n/2 is even, we can apply the same method for computing the products c2, c0, and c1.

Thus, if n is a power of 2, we have a recursive algorithm for computing the product of two n-digit integers. Recursion is stopped when n becomes 1.
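The recursive algorithm just described can be sketched as follows (a minimal illustrative sketch; the function name `karatsuba` and the use of `divmod` to split the digits are our own choices):

```python
def karatsuba(a, b, n):
    """Multiply two non-negative integers of at most n digits, n a power of 2."""
    if n == 1:
        return a * b                      # recursion stops at one-digit numbers
    half = 10 ** (n // 2)
    a1, a0 = divmod(a, half)              # a = a1 * 10^(n/2) + a0
    b1, b0 = divmod(b, half)              # b = b1 * 10^(n/2) + b0
    c2 = karatsuba(a1, b1, n // 2)        # product of the first halves
    c0 = karatsuba(a0, b0, n // 2)        # product of the second halves
    c1 = karatsuba(a1 + a0, b1 + b0, n // 2) - (c2 + c0)
    return c2 * 10 ** n + c1 * half + c0

print(karatsuba(23, 14, 2))      # -> 322
print(karatsuba(2345, 6789, 4))  # -> 15920205
```

Only three recursive products (c2, c0, and the one inside c1) are computed per level, which is the source of the M(n) = 3M(n/2) recurrence analyzed below.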

Analysis:

Since multiplication of n-digit numbers requires three multiplications of n/2-digit numbers, the recurrence for the number of multiplications M(n) will be

M(n) = 3M(n/2) for n > 1, M(1) = 1.

Solving it by backward substitutions for n = 2^k yields

M(2^k) = 3M(2^(k-1)) = 3[3M(2^(k-2))] = 3^2·M(2^(k-2)) = ... = 3^i·M(2^(k-i)) = ... = 3^k·M(2^(k-k)) = 3^k·M(1) = 3^k.

Since k = log2 n, we get

M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585.

(Note: a^(log_b c) = c^(log_b a).)


Strassen’s Matrix Multiplication

In 1969, V. Strassen published an algorithm showing that the product C of two 2-by-2 matrices A and B can be found with just seven multiplications, as opposed to the eight required by the brute-force algorithm.

This is accomplished by using the following formulas:

[c00 c01]   [a00 a01]   [b00 b01]
[c10 c11] = [a10 a11] * [b10 b11]

            [m1 + m4 - m5 + m7    m3 + m5          ]
          = [m2 + m4              m1 + m3 - m2 + m6]

where,
m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 - b11)
m4 = a11 * (b10 - b00)
m5 = (a00 + a01) * b11
m6 = (a10 - a00) * (b00 + b01)
m7 = (a01 - a11) * (b10 + b11)
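The seven formulas can be transcribed directly into code (an illustrative sketch; the function name is our own):

```python
def strassen_2x2(A, B):
    """Multiply 2-by-2 matrices (nested lists) using Strassen's seven products."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```

Note that seven multiplications appear in the body, against eight in the definition-based product.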


Thus, to multiply two 2-by-2 matrices, Strassen’s algorithm makes 7 multiplications and 18 additions/subtractions, whereas the brute-force algorithm requires 8 multiplications and 4 additions. The importance of this saving becomes apparent as the matrix order n grows toward infinity.

Let A and B be two n-by-n matrices where n is a power of two. We can divide A, B, and their product C into four n/2-by-n/2 submatrices each as follows:

[C00 C01]   [A00 A01]   [B00 B01]
[C10 C11] = [A10 A11] * [B10 B11]

Example: C00 can be computed either as A00 * B00 + A01 * B10 or as M1 + M4 - M5 + M7, where M1, M4, M5, and M7 are found by Strassen’s formulas with the numbers replaced by the corresponding submatrices.

If the seven products of n/2-by-n/2 matrices are computed recursively by the same method, we have Strassen’s algorithm for matrix multiplication.
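Under the assumption that n is a power of 2 and matrices are plain nested lists, the recursive scheme might be sketched as follows (the helper names `madd`, `msub`, and `split` are our own):

```python
def madd(X, Y):
    """Entrywise sum of two equal-size matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def msub(X, Y):
    """Entrywise difference of two equal-size matrices."""
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(M):
    """Split M into four n/2-by-n/2 quadrants M00, M01, M10, M11."""
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def strassen(A, B):
    """Multiply n-by-n matrices, n a power of 2, with seven recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]          # recursion stops at 1-by-1
    A00, A01, A10, A11 = split(A)
    B00, B01, B10, B11 = split(B)
    M1 = strassen(madd(A00, A11), madd(B00, B11))
    M2 = strassen(madd(A10, A11), B00)
    M3 = strassen(A00, msub(B01, B11))
    M4 = strassen(A11, msub(B10, B00))
    M5 = strassen(madd(A00, A01), B11)
    M6 = strassen(msub(A10, A00), madd(B00, B01))
    M7 = strassen(msub(A01, A11), madd(B10, B11))
    C00 = madd(msub(madd(M1, M4), M5), M7)    # M1 + M4 - M5 + M7
    C01 = madd(M3, M5)                        # M3 + M5
    C10 = madd(M2, M4)                        # M2 + M4
    C11 = madd(msub(madd(M1, M3), M2), M6)    # M1 + M3 - M2 + M6
    top = [r0 + r1 for r0, r1 in zip(C00, C01)]
    bot = [r0 + r1 for r0, r1 in zip(C10, C11)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```

Each call makes seven recursive products of half-size matrices, which yields the M(n) = 7M(n/2) recurrence analyzed next.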


Analysis:

If M(n) is the number of multiplications made by Strassen’s algorithm in multiplying two n-by-n matrices (where n is a power of 2), we get the following recurrence relation for it:

M(n) = 7M(n/2) for n > 1, M(1) = 1.

Solving it by backward substitutions for n = 2^k yields

M(2^k) = 7M(2^(k-1)) = 7[7M(2^(k-2))] = 7^2·M(2^(k-2)) = ... = 7^i·M(2^(k-i)) = ... = 7^k·M(2^(k-k)) = 7^k·M(1) = 7^k.

Since k = log2 n, we get

M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807,

which is smaller than the n^3 multiplications required by the brute-force algorithm. (Note: a^(log_b c) = c^(log_b a).)


End of Chapter 4