Page 1

M.Rawski Introduction to Algorithmics

Complexity of algorithms

Memory complexity

- Number of variables, and the number and sizes of data structures used in executing the algorithm

Time complexity

- Number of elementary operations carried out by the processor in such an execution

Page 2

Complexity of algorithms –

More precisely

• TIME COMPLEXITY OF THE ALGORITHM – the relation between the number of basic operations carried out in execution of the algorithm and the size of the input data (given as a function of the size of this data)

• e.g.

Bubblesort: F(N) = ?, where N is the size of the list

Hanoi Towers problem: F(N) = ?, where N is the number of rings

Minimal spanning tree: F(N,M) = ?, where N is the number of vertices and M is the number of edges in the graph (network of connections)

Page 3

What does it mean in practice?

• In practice, the time complexity decides the usefulness of an algorithm

• Since:

• the claim that computers are so efficient that execution time is never a problem is a myth (prime factorization of large numbers – 300 digits – would last millions of years)

• there is a need to solve more and more complex problems:

- computer-aided decision making

- weather prediction

• real-time computer systems must often be used

- automatic control in complex systems

Page 4

Time execution improvements

• Normalization of values in a one-dimensional table (vector) to the maximal value:

• Input data: V(1), V(2), ..., V(N)

• Algorithm 1

• 1. find the maximal value and store it in the MAX variable;

• 2. for I from 1 to N do:

2.1. V(I) ← V(I) ∗ 100 / MAX

• Algorithm 2

• 1. find the maximal value and store it in the MAX variable;

• 2. TMP ← 100 / MAX;

• 3. for I from 1 to N do:

3.1. V(I) ← V(I) ∗ TMP
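The two algorithms differ only in where the division happens; a minimal Python sketch of both (function names are ours):

```python
def normalize_v1(v):
    # Algorithm 1: divides by MAX inside the loop - N divisions
    mx = max(v)                       # step 1: find maximal value
    return [x * 100 / mx for x in v]  # step 2: V(I) <- V(I) * 100 / MAX

def normalize_v2(v):
    # Algorithm 2: the loop-invariant quotient 100 / MAX is computed
    # once before the loop - a single division
    mx = max(v)                  # step 1: find maximal value
    tmp = 100 / mx               # step 2: TMP <- 100 / MAX
    return [x * tmp for x in v]  # step 3: V(I) <- V(I) * TMP
```

Both return the same result; algorithm 2 merely replaces N divisions with one division and N multiplications.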

Page 5

Time execution improvements

• Linear search in a list of size N:

• Algorithm 1

• 1. select the first element of the list;

• 2. do the following:

2.1. check if the current element is what you search for;

2.2. check if the end of the list has been reached;

2.3. select the next element from the list

Page 6

Time execution improvements

• Linear search in a list of size N:

• Algorithm 2

• 1. add an element equal to what you are looking for at the end of the list;

• 2. select the first element of the list;

• 3. do the following:

3.1. check if the current element is what you search for;

3.2. select the next element from the list

• 4. check if the end of the list has been reached;
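Algorithm 2's trick is a sentinel: appending the sought value guarantees the inner loop terminates, so the end-of-list test is performed once after the loop instead of once per iteration. A Python sketch of both algorithms (function names are ours):

```python
def linear_search(lst, target):
    # Algorithm 1: two checks per iteration (end of list? match?)
    i = 0
    while i < len(lst):       # 2.2: end-of-list check, every iteration
        if lst[i] == target:  # 2.1: match check
            return i
        i += 1                # 2.3: select next element
    return -1

def sentinel_search(lst, target):
    # Algorithm 2: append the target as a sentinel (step 1), so the loop
    # needs only the match check; the end-of-list test runs once (step 4).
    lst = lst + [target]      # sentinel guarantees the loop terminates
    i = 0
    while lst[i] != target:   # 3.1: match check only
        i += 1                # 3.2: select next element
    return i if i < len(lst) - 1 else -1  # 4: was it the sentinel?
```

Both are O(N), but algorithm 2 performs roughly N comparisons instead of 2 ∗ N.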

Page 7

Time execution improvements

• We compare two algorithms solving the same problem using one bounded iteration (the number of iterations, N, depends on the size of the input data)

• the algorithms' execution times as functions of the data size are:

• F1(N) = K1 + L1 ∗ N

• F2(N) = K2 + L2 ∗ N

• For their comparison we can use the quotient:

• then s(N) = 1 means the same execution time

• (it is possible only if K1 = K2 and L1 = L2)

• s(N) < 1 means that algorithm 1 is faster

• (but for some N it might be that s(N) > 1)

s(N) = F1(N) / F2(N)

Page 8

Time execution improvements

• If we assume that we are interested in solving problems of growing size (N → ∞), we should use

lim (N→∞) s(N) = L1 / L2

[Figure: execution time vs. data size – the two lines F1 and F2 with intercepts K1, K2 and slopes L1, L2, crossing at N0; comparison coefficient s(N) vs. data size, crossing 1 at N0 and tending to L1/L2]

Page 9

Big-O notation

• Two algorithms with execution times F1(N) and F2(N) have complexity of the same order of magnitude, if

lim (N→∞) F1(N) / F2(N) = C, where 0 < C < ∞

• if C = 0, algorithm 1 has complexity an order of magnitude better

• if C = ∞, algorithm 1 has complexity an order of magnitude worse

Page 10

Big-O notation

• An algorithm with execution time F(N) has linear complexity, if

lim (N→∞) F(N) / N = C (0 < C < ∞)

• We denote it F(N) = O(N)

• An algorithm with execution time F(N) has quadratic complexity, if

lim (N→∞) F(N) / N² = C (0 < C < ∞)

• We denote it F(N) = O(N²)

• Only finding an algorithm with order-of-magnitude better complexity makes an essential improvement in solving the given algorithmic problem!
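The limit definitions can be illustrated numerically for a hypothetical cost function F(N) = 3 ∗ N + 7 (our example, not from the slides):

```python
def F(n):
    # a hypothetical cost function: F(N) = 3*N + 7
    return 3 * n + 7

# F(N)/N approaches C = 3 as N grows, so F(N) = O(N):
ratios = [F(n) / n for n in (10, 1_000, 100_000)]
# the constant term 7 becomes negligible for large N
```

The same check with F(N)/N² would give a sequence tending to 0, showing that F(N) is *not* of quadratic order.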

Page 11

Properties of the Big-O notation

• F(N) = O(F(N)),

• O(O(g(N))) = O(g(N)),

• c ⋅ O(g(N)) = O(g(N)) (for 0 < c < ∞),

• O(g(N)) + O(h(N)) = O(g(N) + h(N)),

• O(g(N)) ⋅ O(h(N)) = O(g(N) ⋅ h(N)) = g(N) ⋅ O(h(N)) = h(N) ⋅ O(g(N)),

• if lim (N→∞) g(N) / h(N) = 0,

then O(g(N)) + O(h(N)) = O(g(N) + h(N)) = O(h(N))

Page 12

Order-of-magnitude improvements

• If an algorithm performs a different number of operations for different input data sets, we can analyze the execution time in the worst case

◊ Execution time of Alg. 1 in linear search is F1(N) ≈ 2 ∗ N
◊ Execution time of Alg. 2 in linear search is F2(N) ≈ N
◊ Both have worst-case time complexity O(N) – it means there can be input data for which the whole list of size N has to be traversed

Page 13

Order-of-magnitude improvements

• Is it possible to do it much more efficiently?

• Binary search in a sorted list:

• Y1, Y2, ..., YN (for any i < j we have Yi ≤ Yj)

• Algorithm 3

1. take the whole list L as the current list;
2. if the current list is empty, print „not found” and stop;
3. if the middle element is what we are looking for, print „found” and stop;
4. if the middle element is greater than what we are looking for, take the first „half” of the current list, otherwise take the second „half”;
5. go back to step 2.
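The steps above can be written as an index-based loop; a minimal Python sketch (function name ours):

```python
def binary_search(sorted_list, target):
    # Algorithm 3: halve the current sublist each iteration,
    # so at most 1 + log2(N) iterations are needed.
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:                      # is the current list non-empty?
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid                   # "found"
        if sorted_list[mid] > target:
            hi = mid - 1                 # take the first "half"
        else:
            lo = mid + 1                 # take the second "half"
    return -1                            # "not found"
```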

Page 14

Order-of-magnitude improvements

• How many times (in the worst case) is the iteration repeated in this algorithm?

• Answer: 1 + log2 N

• The worst-case time complexity of this algorithm is O(log2 N)

• Since lim (N→∞) (log2 N) / N = 0,

the binary search algorithm has order-of-magnitude lower complexity than the simple search algorithm, but it can be used only for sorted lists.

Page 15

How much better is it?

Improvement factor:

N                              1 + log2 N
10                             4
100                            7
1 000                          10
10 000                         14
1 000 000                      20
1 000 000 000                  30
1 000 000 000 000              40
1 000 000 000 000 000          50
1 000 000 000 000 000 000      60

Page 16

How to assess?

• Total execution time cost in the worst case:

• K1 + max(K3, K4) + K2 ∗ log2 N

• A constant number of operations executed in an iteration is of no importance – it is enough to count one operation that is executed in each iteration.

[Flowchart of the binary search: start, (1) operations before the loop with cost K1, (2) the loop test executed in every iteration with cost K2, (3) the two branch bodies with costs K3 and K4, stop]

Page 17

How to assess?

• Total execution time cost in the worst case:

K1 + max(K3, K4) + K2 ∗ log2 N

• A constant number of operations executed in an iteration is of no importance – it is enough to count one operation that is executed in each iteration.

• The number of iterations executed decides the complexity of the algorithm.

Page 18

Examples ...

• Bubblesort – first version (nested loops):

1. do the following N − 1 times:
   1.1. ... ;
   1.2. do the following N − 1 times:
      1.2.1. ... ;

• Total execution time cost in the worst case is:

• (N − 1) ∗ (N − 1) = N² − 2N + 1;

• N² is the dominant term, thus the time complexity is O(N²)

• Enhanced bubblesort

• Total execution time cost in the worst case is:

• (N − 1) + (N − 2) + (N − 3) + ... + 2 + 1 = 0.5 ⋅ N² − 0.5 ⋅ N,

• It is less than in the previous algorithm, but the time complexity still is O(N²)
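A Python sketch of both versions (function names ours); the inner-loop bounds correspond to the two operation counts:

```python
def bubblesort_v1(a):
    # first version: N-1 full passes over N-1 adjacent pairs,
    # i.e. (N-1)*(N-1) comparisons in every case
    a = list(a)
    n = len(a)
    for _ in range(n - 1):
        for j in range(n - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def bubblesort_v2(a):
    # enhanced version: pass i scans only the still-unsorted prefix,
    # giving (N-1) + (N-2) + ... + 1 = 0.5*N^2 - 0.5*N comparisons
    a = list(a)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```

Halving the count does not change the order of magnitude: both are O(N²).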

Page 19

Examples...

• Summing up salaries of employees – time complexity O(N)

• Summing up salaries of employees earning more than their bosses – time complexity O(N²)

• Finding the maximal diagonal in a polygon (naive way) – time complexity O(N²)

• Finding the maximal diagonal in a polygon (sophisticated way) – time complexity O(N)

Page 20

Examples...

• Hanoi Towers – recursive algorithm:

• subroutine move N from X to Y using Z:

• (1) if N = 1 then output “X → Y”;

• (2) otherwise (i.e. if N > 1) do the following:

(2.1) call move N − 1 from X to Z using Y;

(2.2) output “X → Y”;

(2.3) call move N − 1 from Z to Y using X;

• (3) return;

• Let T(N) denote the time cost for N rings; we can write down the following equations – called recurrence relations:

– T(1) = 1
– T(N) = 2 ∗ T(N − 1) + 1

• We would like to find T(N) satisfying these constraints. In this case the solution is T(N) = 2^N − 1, so the time complexity is O(2^N)
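The subroutine translates directly to Python; here the moves are collected in a list so they can be counted against T(N) = 2^N − 1 (parameter names follow the slide):

```python
def move(n, x, y, z, output):
    # subroutine "move N from X to Y using Z"
    if n == 1:
        output.append(f"{x} -> {y}")       # (1)
    else:
        move(n - 1, x, z, y, output)       # (2.1)
        output.append(f"{x} -> {y}")       # (2.2)
        move(n - 1, z, y, x, output)       # (2.3)

moves = []
move(3, "A", "B", "C", moves)
# T(3) = 2^3 - 1 = 7 moves
```

Each extra ring doubles the work plus one move, which is exactly the recurrence T(N) = 2 ∗ T(N − 1) + 1.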

Page 21

Examples...

• Treesort (without self-adjustment): worst-case time complexity is O(N²).

• Although left-first traversal is a linear-time procedure, O(N), constructing the tree from the input sequence has quadratic worst-case complexity

• Treesort (with self-adjustment): worst-case time complexity is O(N ∗ log N), which is an order-of-magnitude improvement

[Figure: binary search tree built from the input sequence 7 15 12 11 8 10]

Improvement factor:

N              N²                           N ∗ log N
10             100                          33
100            10 000                       664
1 000          1 000 000                    9 965
1 000 000      1 000 000 000 000            19 931 568
1 000 000 000  1 000 000 000 000 000 000    29 897 352 853

Page 22

Examples...

• Mergesort (recursive):

subroutine sort (L);

1. if L contains only one element, it is sorted ;

2. else do the following:

2.1. divide L into L1 and L2;

2.2. call sort (L1);

2.3. call sort (L2);

2.4. merge L1 and L2 into sorted list;

3. return.

• Let T(N) denote the time cost for N elements; the recurrence relations are:

– T(1) = 0

– T(N) = 2 ∗ T(0.5 ∗ N) + N

• The solution is T(N) = N ∗ log N, so the time complexity is O(N ∗ log N)

• Mergesort is one of the most efficient sorting algorithms (but requires memory growing like O(N))
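A direct Python rendering of the subroutine (names ours); the merge step in 2.4 is the source of the O(N) extra memory:

```python
def mergesort(lst):
    # 1. a list with at most one element is already sorted
    if len(lst) <= 1:
        return lst
    # 2.1. divide L into L1 and L2
    mid = len(lst) // 2
    left = mergesort(lst[:mid])    # 2.2. call sort(L1)
    right = mergesort(lst[mid:])   # 2.3. call sort(L2)
    # 2.4. merge L1 and L2 into one sorted list (O(N) work and memory)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # 3. return
```

The two recursive calls on half-size lists plus the linear merge give exactly T(N) = 2 ∗ T(0.5 ∗ N) + N.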

Page 23

Average-case complexity

• Analysis of the average case takes into consideration the entire set of inputs and their probability of occurring

• The average-case time complexity of the Quicksort algorithm is 1.4 ∗ N log2 N

Algorithm                                                          Average-case    Worst-case
Summing up salaries of employees                                   O(N)            O(N)
Summing up salaries of employees earning more than their bosses    O(N²)           O(N²)
Bubblesort                                                         O(N²)           O(N²)
Treesort (without self-adjustment)                                 O(N ∗ log N)    O(N²)
Mergesort                                                          O(N ∗ log N)    O(N ∗ log N)
Quicksort                                                          O(N ∗ log N)    O(N²)
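The slides do not give Quicksort's pseudocode; the sketch below is a common textbook variant (ours, with the first element as pivot), whose worst case – an already-sorted input – illustrates the O(N²) entry in the table:

```python
def quicksort(lst):
    # average case O(N log N); with this first-element pivot choice,
    # an already-sorted input puts all elements on one side of every
    # partition, degrading the running time to O(N^2)
    if len(lst) <= 1:
        return lst
    pivot = lst[0]
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

Practical implementations pick the pivot randomly or by median-of-three, which makes the O(N²) case very unlikely.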

Page 24

Is it possible to do it better?

� The discovery of an algorithm is said to place upper bound on an algorithmic problem.

� If we can prove rigorously that algorithmic problem P cannot be solved by an algorithm that requires less time we place lower bound on an algorithmic problem.

[Diagram: upper bounds on P – an O(N³) algorithm for P, an O(N²) algorithm for P – approach the algorithmic problem P from above; lower bounds – a proof that P requires O(N), a proof that P requires O(N ∗ log N) – approach from below; P's inherent time complexity lies in between]

Page 25

Closed problems and algorithmic gaps

• + f(N) – a function that grows incredibly slowly:

• for N = 16, f(N) = 3

• for N = 64000, f(N) = 4

• for N far more than the total number of particles in the known universe, f(N) = 5

Problem                        Lower bound     Upper bound
Searching an unordered list    O(N)            O(N)
Searching an ordered list      O(log N)        O(log N)
Sorting                        O(N ∗ log N)    O(N ∗ log N)
Minimal spanning tree          O(N)            O(f(N) ∗ N) +

Page 26

Barricading sleeping tigers

• The naive algorithm has O(N³) complexity

• A much more efficient algorithm:

• 1. find the „lowest” point P1;

• 2. sort the remaining points by the magnitude of the angle they form with the horizontal axis when connected with P1 – let the resulting list be P2, ..., PN;

• 3. start out with P1 and P2 in the current hull;

• 4. for J from 3 to N do the following:

4.1. add PJ tentatively to the current hull;

4.2. work backwards through the current hull, eliminating a point PK if the two points P1 and PJ are on different sides of the line between PK and PK−1, and terminating this backwards scan when a PK that does not need to be eliminated is encountered;
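This procedure is the Graham scan. A Python sketch (ours); the slide's „different sides of the line” elimination test is expressed here as the standard, equivalent cross-product turn test:

```python
import math

def convex_hull(points):
    # Sketch of the slide's algorithm, assuming `points` is a list of
    # distinct (x, y) tuples with at least three non-collinear points.
    # 1. find the "lowest" point P1
    p1 = min(points, key=lambda p: (p[1], p[0]))
    # 2. sort the remaining points by the angle they form with the
    #    horizontal axis when connected with P1
    rest = sorted((p for p in points if p != p1),
                  key=lambda p: math.atan2(p[1] - p1[1], p[0] - p1[0]))

    def ccw(a, b, c):
        # > 0 when a->b->c makes a counterclockwise (left) turn
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    # 3. start out with P1 and P2 in the current hull
    hull = [p1, rest[0]]
    # 4. add each PJ, working backwards to eliminate points that
    #    no longer lie on the hull boundary
    for pj in rest[1:]:
        while len(hull) > 1 and ccw(hull[-2], hull[-1], pj) <= 0:
            hull.pop()          # 4.2: eliminate PK
        hull.append(pj)         # 4.1: add PJ
    return hull
```

Step 4 is O(N) overall because every point is appended once and popped at most once; the sort in step 2 dominates.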

Page 27

Example

• Step 2. Iterations of step 4.

[Figure: 15 points; step 2 orders them by angle around the lowest point, and successive iterations of step 4 build up the convex hull]

Page 28

Total time of the algorithm

Step 1. O(N)
Step 2. O(N ∗ log N)
Step 3. O(1)
Step 4. O(N)

Total: O(N ∗ log N)

[Figure: the 15 points with the resulting convex hull]